paper_id           : string (lengths 19–21)
paper_title        : string (lengths 8–170)
paper_abstract     : string (lengths 8–5.01k)
paper_acceptance   : string (18 classes)
meta_review        : string (lengths 29–10k)
label              : string (3 classes)
review_ids         : sequence
review_writers     : sequence
review_contents    : sequence
review_ratings     : sequence
review_confidences : sequence
review_reply_tos   : sequence
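The review_* columns are parallel sequences: entry i of each column describes the same post in a paper's discussion thread, with ratings and confidences set to -1 for rebuttals and comments. A hedged example of iterating over records with the Hugging Face `datasets` library follows; the dataset path is a placeholder, not a real hub ID.

```python
# Hedged example of reading records with this schema via the `datasets`
# library; "path/to/this-dataset" is a placeholder, not a real hub ID.
from datasets import load_dataset

ds = load_dataset("path/to/this-dataset", split="train")  # placeholder path
rec = ds[0]
print(rec["paper_id"], rec["paper_acceptance"], rec["label"])
# The review_* columns are aligned: entry i of each describes the same post;
# ratings/confidences are -1 for author rebuttals and discussion comments.
for rid, writer, rating, reply_to in zip(
        rec["review_ids"], rec["review_writers"],
        rec["review_ratings"], rec["review_reply_tos"]):
    print(rid, writer, rating, reply_to)
```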
nips_2022_NI7moUOKtc
Debiased Self-Training for Semi-Supervised Learning
Deep neural networks achieve remarkable performances on a wide range of tasks with the aid of large-scale labeled datasets. Yet these datasets are time-consuming and labor-exhaustive to obtain on realistic tasks. To mitigate the requirement for labeled data, self-training is widely used in semi-supervised learning by iteratively assigning pseudo labels to unlabeled samples. Despite its popularity, self-training is well-believed to be unreliable and often leads to training instability. Our experimental studies further reveal that the bias in semi-supervised learning arises from both the problem itself and the inappropriate training with potentially incorrect pseudo labels, which accumulates the error in the iterative self-training process. To reduce the above bias, we propose Debiased Self-Training (DST). First, the generation and utilization of pseudo labels are decoupled by two parameter-independent classifier heads to avoid direct error accumulation. Second, we estimate the worst case of self-training bias, where the pseudo labeling function is accurate on labeled samples, yet makes as many mistakes as possible on unlabeled samples. We then adversarially optimize the representations to improve the quality of pseudo labels by avoiding the worst case. Extensive experiments justify that DST achieves an average improvement of 6.3% against state-of-the-art methods on standard semi-supervised learning benchmark datasets and 18.9% against FixMatch on 13 diverse tasks. Furthermore, DST can be seamlessly adapted to other self-training methods and help stabilize their training and balance performance across classes in both cases of training from scratch and finetuning from pre-trained models.
Accept
This paper proposes a novel Debiased Self-Training (DST) approach to reduce both data bias and self-training bias during SSL. The proposed method is simple and empirically seems quite effective. Reviewers are generally positive about the novelty of the method and the significance of the results. While the authors have tried to address and improve some of the review issues, the related work section and the empirical comparison with recent SOTA SSL methods could still be improved. For example, some recent related SSL works are still missing, including but not limited to DASO (Distribution-Aware Semantics-Oriented Pseudo-label for Imbalanced Semi-Supervised Learning, CVPR 2022) and CoMatch (Semi-supervised Learning with Contrastive Graph Regularization, ICCV 2021). Overall, the paper presents a novel framework for SSL and its empirical results are quite positive, so the paper can be accepted, but the authors are recommended to further improve the discussion/comparison of recent related work.
train
[ "MNldj4rUHSB", "h-glLyeY432", "hlP_XBCxNEU", "UL4gThotfQd", "p4II0lBrVaI", "XR0Lw7dYrRk", "YYLORyS8Jl0", "kf6TYThrO3P", "o7HgBwCoOL", "MJYPOh5P-7_", "jKq7_yJC5ov", "IV_xf4jVPC", "BnMIK-Elbt8" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We'd like to thank Reviewer G3sn again for providing an impressively insightful pre-rebuttal review, which has enabled us to make an effective response. We'd also thank you for carefully judging our feedback and acknowledging our work in the final review.", " Thanks for the enthusiastic reply from the authors. It was a great journey with the authors; because all my concerns were properly and clearly resolved by additional experimental supports, I would like to higher my review score to accept.", " **Q4**: The paper is relying on an assumption for mistakes of $h'$ that does not always hold. What if $h'$ is perfect while $h$ is imperfect on some data points? \n\nIt should be clarified that this paper does not rely on the assumption that $h'$ is worse than $h$ on each data point.\n\n- As mentioned in $\\underline{\\text{Section 3}}$, *bias* is defined as an expectation error on a distribution, rather than the accuracy of a specific data point.\n- Our approach is able to reduce *data bias* as long as $h'$ has a larger bias than $h$.\n\nThe hypothesis $h'$ has a larger bias than $h$ has been confirmed experimentally and is theoretically reasonable.\n- As shown in $\\underline{\\text{Figure 8 (Appendix B.5)}}$, the bias of $h'$ (in red) is much larger than that of $h$ (in green) throughout the training process.\n- Strict proof is quite difficult for multi-classification problems. For simplicity, we can consider the binary classification problem, just as many learning theories did. Then we have the conclusion that as long as $h$ performs better than random guesses, $\\underline{\\text{Equation 6}}$ will encourage $h'$ to have a larger bias than $h$. In other words, if the accuracy of $h$ is greater than 50\\%, then $\\underline{\\text{Equation 6}}$ will encourage the predictions of $h'$ opposite from $h$, resulting that the accuracy of $h'$ to be lower than 50\\%. Note that SSL usually assumes that $h$ performs better than random guesses on the unlabeled data. Otherwise, this will be an ill-defined problem, and no SSL method can solve it. \n", " **Q3**: Two concerns: (i) When the pseudo-labeling is incorrect, the feature generator is still contaminated by the given signal. (ii) For the correct pseudo labels, the pseudo-labeling head cannot gain their benefit to improve pseudo-labeling quality.\n\n- As you mentioned, training with noisy pseudo-labeled data has both benefits and risks. But the backbone and head might have **different tolerances for noisy pseudo labels**. To verify this, we perform the following experiments. \n\n - We randomly select two subsets $\\mathcal{S}_\\text{clean}$ and $\\mathcal{S}_\\text{noisy}$ of size $1000$ and $5000$ on *CIFAR-100*, where all labels in $\\mathcal{S}_\\text{clean}$ are accurate, while labels in $\\mathcal{S}_\\text{noisy}$ are noisy. The noise ratio of labels in $\\mathcal{S}_\\text{noisy}$ is denoted by $\\gamma$. We always train both the head and the feature extractor with clean data in $\\mathcal{S}_\\text{clean}$ and focus on how to exploit noisy data. 
Specifically, we compare four methods:\n\n - (1) Clean Only: do not use data in $\\mathcal{S}_\\text{noisy}$;\n - (2) Noisy Head: $\\mathcal{S}_\\text{noisy}$ is only used to train the head;\n - (3) Noisy Backbone: $\\mathcal{S}_\\text{noisy}$ is only used to train the backbone (our suggested method);\n - (4) Noisy All: $\\mathcal{S}_\\text{noisy}$ is used to train both the head and the backbone (FixMatch).\n \n The table below reports the accuracy of these methods when $\\gamma$ equals $5\\%$, $10\\%$, and $20\\%$ (The accuracy is $75.7$ when $\\gamma$ is $0$). Results suggest that the backbone has a better tolerance for noisy pseudo labels compared with the head.\n\n | | Train head with clean data | Train backbone with clean data | Train head with noisy data | Train backbone with noisy data | $\\gamma=5\\%$ | $\\gamma=10\\%$ | $\\gamma=20\\%$ |\n | -------------- | :-------------------: | :--------------------------------: | :-------------------: | :-------------------------------: | :---------: | :----------: | :----------: |\n | Clean Only | $\\checkmark$ | $\\checkmark$ | | | 61.5 | 61.5 | 61.5 |\n | Noisy Head | $\\checkmark$ | $\\checkmark$ | $\\checkmark$ | | 67.1 | 66.2 | 63.2 |\n | Noisy ALL | $\\checkmark$ | $\\checkmark$ | $\\checkmark$ | $\\checkmark$ | 73.7 | 72.3 | 68.0 |\n | Noisy Backbone | $\\checkmark$ | $\\checkmark$ | | $\\checkmark$ | **74.3** | **72.9** | **71.7** |\n\n- Since the backbone and head have different tolerances for noisy pseudo labels, **the benefits of using pseudo-labeled data on the backbone outweigh the risks, while the risks of using pseudo-labeled data on the head outweigh the benefits**. Therefore, avoiding training the head with pseudo-labeled data will bring performance gains on the original FixMatch as shown in the ablation study ($\\underline{\\text{Table 3}}$). Corresponding results are also shown below.\n\n| | Train head with clean data | Train backbone with clean data | Train head with noisy data | Train backbone with noisy data | Supervised Pre-training | Unsupervised Pre-training |\n| ------------------------------------- | :-------------------: | :--------------------------------: | :-------------------: | :-------------------------------: | :---------------------: | :-----------------------: |\n| Baseline | $\\checkmark$ | $\\checkmark$ | | | 48.2 | 46.5 |\n| FixMatch | $\\checkmark$ | $\\checkmark$ | $\\checkmark$ | $\\checkmark$ | 53.1 | 51.4 |\n| DST w/o worst (nonlinear pseudo head) | $\\checkmark$ | $\\checkmark$ | | $\\checkmark$ | **60.6** | **60.9** |\n\n", " We would like to sincerely thank Reviewer G3sn for providing insightful reviews and valuable comments. We have clarified the questions in the following response.\n\n**Q1**: Is the worst head $h'$ different from $h$ or $h_{pseudo}$?\n\nSorry for the misunderstanding. \n- The worst-case head, main head, and pseudo head do not share parameters. To avoid misunderstandings, we add a description of the overall loss function in $\\underline{\\text{Section 4}}$.\n\n- As mentioned in $\\underline{\\text{Appendix A.1}}$, the worst-case head and non-linear pseudo head have the same architecture, but with a different architecture from the linear main head. \n\n**Q2**: How to draw the 'worst possible head' during training in a hands-on manner? Does it require additional heavy computations?\n\n* As mentioned in $\\underline{\\text{Appendix B.5}}$, we optimize $\\psi$ and $h'$ with stochastic gradient descent alternatively. 
In other words, during each iteration, the minimization or maximization is performed only once. For more details, you can check the code provided in the supplementary material.\n* The alternative minimax game will introduce little computation cost during training and no additional computation cost during inference. As mentioned in $\\underline{\\text{Appendix B.5}}$, when training $1000k$ iterations on *CIFAR-100 using 4 2080 Ti GPUs*, FixMatch takes $104$ hours while DST takes $111$ hours, only a $7\\%$ increase in time.\n", " **Q5:** There are a few representative SSL methods missing in the comparison such as UDA.\n\n- We have included UDA in both $\\underline{\\text{Table 1}}$ and $\\underline{\\text{Table 2}}$ in the original paper. Below is the average performance of UDA and DST under different settings.\n\n |Method|Train from Scratch|Supervised Pre-training|Unsupervised Pre-training|\n |:----|:---:|:---:|:---:|\n |UDA|55.4|59.6|58.7|\n |DST (FixMatch)|**78.3**|**71.1**|**68.7**|\n\n- To further address your concern, we add more results in $\\underline{\\text{Table 1}}$ and $\\underline{\\text{Table 2}}$ of the revised draft. Below is the average performance of these methods.\n\n |Method|Train from Scratch|Supervised Pre-training|Unsupervised Pre-training|\n |:----|:---:|:---:|:---:|\n |VAT [1]|23.0|55.4|51.3|\n |ALI [2]|22.7|51.9|49.1|\n |RAT [3]|34.2|58.3|56.1|\n |MixMatch|46.9|58.7|56.0|\n |DST (FixMatch)|**78.3**|**71.1**|**68.7**|\n\n [1] Virtual adversarial training: a regularization method for supervised and semi-supervised learning. PAMI 2018\n\n [2] Adversarially learned inference. ICLR 2017\n\n [3] Adversarial transformations for semi-supervised learning. AAAI 2020\n\n**Q6:** Ablation study on the requirement of label size is missing.\n\n- We evaluate the performance of DST with $1000$ labels ($10$ labels per class) on *CIFAR-100* in $\\underline{\\text{Appendix B.3}}$.\n\n |Method|Supervised Pre-training|Unsupervised Pre-training|\n |:----|:---:|:---:|\n |Baseline|61.5|56.2|\n |FixMatch|67.8|64.2|\n |FlexMatch|71.2|71.1|\n |DebiasMatch|73.5|73.9|\n |DST (FixMatch)|**75.6**|**76.8**|\n\n- Further, we conduct more fine-grained experiments on *CIFAR-100* with supervised pre-trained models. \n - The table below compares the accuracy of Baseline, FixMatch, and DST when the number of labeled data per class **varies from $1$ to $25$**. \n\n |Labels per Class|1|2|4|6|8|10|16|25|\n |:----|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|\n |Baseline|23.6|34.5|48.2|54.5|59.2|61.5|66.6|70.8|\n |FixMatch|2.1|23.5|53.1|63.2|66.1|67.8|73.4|76.1|\n |DST|**36.4**|**56.7**|**70.4**|**72.7**|**74.3**|**75.6**|**79.2**|**80.3**|\n\n - The corresponding visualization is in $\\underline{\\text{Figure 5 (Appendix B.3)}}$.\n \n - Results suggest that DST is less sensitive to the amount of labeled data than FixMatch. \n", " We would like to sincerely thank Reviewer KT8G for providing insightful reviews and valuable comments. We have clarified the questions in the following response. All the updated parts in the revised draft are highlighted in blue font.\n\n**Q1:** What are the parameter size and the computation cost of the proposed method compared to other existing methods?\n\n- $\\underline{\\text{Appendix A.1 and A.2}}$ have already mentioned the architecture and projection dimension of the introduced heads in general. \n\n- Further, we list the parameter size of models used in our experiments below. 
\n\n |Method|WRN-28-2|WRN-28-8|WRN-37-2|ResNet50|\n |:----|:---:|:---:|:---:|:---:|\n |FixMatch (UDA, FlexMatch)|1.60M|23.4M|6.66M|27.6M|\n |DST|1.67M (+4.4%)|24.6M (+5.3%)|6.73M (+1.0%)|33.2M (+16.7%)|\n\n- $\\underline{\\text{Appendix B.5}}$ has discussed the computation cost. When training $1000k$ iterations on *CIFAR-100 using 4 2080 Ti GPUs*, FixMatch takes $104$ hours while DST takes $111$ hours, only a $7\\%$ increase in time.\n\n- Note that the introduced heads will be discarded during inference. **Thus, DST will not introduce any additional parameters and costs during inference.**\n\n**Q2:** While the method is similar to \"mutual learning\", except sharing the backbone of the two models, what could be the additional benefits upon \"mutual learning\"?\n\n* As mentioned in $\\underline{\\text{Related Work (Line 88-90)}}$, the key difference between mutual learning and DST is whether they generate and utilize pseudo labels independently.\n * In mutual learning, each classifier head is still trained with potentially incorrect pseudo labels generated by other heads. \n * In DST, the classifier head that generates pseudo labels is never trained with pseudo labels, leading to better tolerance to inaccurate pseudo labels.\n * If mutual learning is called *bidirectional learning*, then DST can be called *unidirectional learning*.\n* As shown in $\\underline{\\text{Table 3}}$, the benefits of mutual learning are limited since both heads in mutual learning will be trained with unreliable pseudo labels from the other head. In contrast, our design mentioned in $\\underline{\\text{Section 4.1}}$ brings larger improvements. \n\n**Q3**: What is the difference of implementation between the proposed method and FixMatch? Does the proposed method require certain unique training setups?\n\n- There is no difference between their data augmentations and training schedules.\n\n - As shown in $\\underline{\\text{Section 5}}$ and $\\underline{\\text{Appendix A.2}}$, we adopt the **same hyperparameters, network architectures, data augmentation and training schedule as FixMatch**. The choice of data augmentation, training schedule, and most important hyperparameters when trained from scratch is listed as follows. \n\n |Weak Augmentation $\\alpha$|Strong Augmentation $\\mathcal{A}$|Training Schedule|\n |:---:|:---:|:---:|\n |random-horizontal-flip|RandAugment|cosine learning rate schedule|\n\n |Total Iterations|learning Rate|Mini-batch Size|Weight Decay|confidence threshold|\n |:---:|:---:|:---:|:---:|:---:|\n |1000k|0.03|512|{0.0005, 0.001}|0.95|\n\n- DST does not require certain unique training setups that differ from other SSL methods.\n\n - DST is evaluated both when trained from scratch and finetuned from some pre-trained models. \n\n - As stated above, the implementation of DST is the same as FixMatch.\n\n - $\\underline{\\text{Table 4}}$ also suggests that DST is not tailored for FixMatch and can improve the performance of $5$ different SSL methods (Mean Teacher, noisy student, DivideMatch, FixMatch, and FlexMatch).\n\n**Q4:** The proposed method is not the best on all the datasets. What could be the possible reasons that lead to its weaker performance on certain datasets?\n\n- We conjecture this is because the performance of DST is related to that of the base method. By default, we use FixMatch because it is simple and representative. 
However, its performance may be weaker than that of subsequent methods, such as FlexMatch.\n\n- To verify this, we add the results of DST (FlexMatch) in $\\underline{\\text{Table 2}}$. DST (FlexMatch) surpasses existing SSL methods on all the datasets. Results with supervised pre-trained models are also shown below.\n\n | |Caltech|CF-10|CF-100|SUN|DTD|Aircraft|\n |:----|:---:|:---:|:---:|:---:|:---:|:---:|\n |SOTA|88.6|91.0|65.7|48.3|52.5|37.5|\n |DST (FlexMatch)|**90.6**|**95.9**|**71.2**|**49.8**|**56.2**|**44.5**|\n\n | |CUB|Flowers|Pets|Cars|Food|\n |:----|:---:|:---:|:---:|:---:|:---:|\n |SOTA|58.9|95.6|88.3|60.5|53.5|\n |DST (FlexMatch)|**70.5**|**95.8**|**90.4**|**72.7**|**57.1**|\n", " **Q3:** DivideMix and MixMatch should be cited/discussed relative to the current work. Wouldn't \"DivideDST\" be even more effective?\n\n- MixMatch is mentioned in our original version. \n - In the $\\underline{\\text{Related Work}}$ part, we have discussed MixMatch as a method for generating higher-quality pseudo labels. \n - In the $\\underline{\\text{Experiments}}$ part, we have provided the results of MixMatch ($\\underline{\\text{Table 1}}$).\n\n- DivideMix is included in our revised draft.\n - DivideMix proposes to simultaneously train two networks and performs label co-refinement and label co-guessing with both networks to alleviate confirmation bias. Due to the limited space, we classify DivideMix as a method that **adopts and improves** **mutual learning** to improve tolerance for inaccurate pseudo labels in the $\\underline{\\text{Related Work}}$ part as follows. \n\n \"Co-training, MMT, **DivideMix** and Multi-head Tri-training introduce multiple models or classifier heads and learn in an online mutual-teaching manner.\"\n\n - We also add comparison results between DivideMix and DivideDST (DivideMix utilizing DST instead of MixMatch) in $\\underline{\\text{Table 4}}$. The results below show that DivideDST also yields consistent improvements against DivideMix. \n\n |Pre-training|Supervised| Supervised |Unsupervised| Unsupervised |\n |:----|:---:|:---:|:---:|:---:|\n |Label Amount|400|1000|400|1000|\n |DivideMix|55.8|67.5|53.6|64.9|\n |DivideDST|**69.1**|**75.1**|**65.0**|**74.2**|\n\n - Implementation details. DivideMix is originally developed for learning with noisy labels. To apply DivideMix in the context of SSL, we first pre-train a model with the labeled dataset $\\mathcal{L}$. Then we obtain noisy labels on the unlabeled dataset $\\mathcal{U}$, which will be further divided into a clean subset $\\mathcal{U_\\text{clean}}$ and a noisy subset $\\mathcal{U_\\text{noisy}}$. The labeled and unlabeled datasets in DivideMix are $\\mathcal{L}\\cup \\mathcal{U_\\text{clean}}$ and $\\mathcal{U_\\text{noisy}}$, respectively.\n\n**Q4:** The overall loss function and the weight of the adversarial loss term.\n\nThanks for your suggestions!\n- We add the overall loss function in $\\underline{\\text{Section 4 (Equation 8)}}$. \n- The trade-off hyperparameter of the adversarial loss is fixed to $1.0$ in all the experiments. For simplicity, it is omitted in $\\underline{\\text{Equation 8}}$.", " We would like to sincerely thank Reviewer ej4w for providing insightful reviews and valuable comments. We have clarified the questions in the following response. 
All the updated parts in the revised draft are highlighted in blue font.\n\n**Q1:** The related work section lacks a summary of relevant adversarial techniques in general and as applied or potentially applied to SSL.\n\nTo address your concern, we provide a discussion of relevant adversarial techniques of SSL in the $\\underline{\\text{Related Work}}$ part.\n\n\"Inspired by Generative Adversarial Networks (GANs) [1], some works introduce adversarial training into semi-supervised learning. A line of works [2, 3, 4, 5] exploit fake samples from the generator by labeling them with a new “generated” class and forcing the discriminator to output class labels. Another line of works use adversarial training to construct adversarial samples [6], e.g., VAT [7] injects additive noise into input, VAdD [8] introduces adversarial Dropout layers and RAT [9] expands the noise in VAT into a set of input transformations. These methods aim to impose a local smoothness on the model and do not involve training with pseudo labels. In contrast, in our method, the goal of the adversarial training process is to estimate the worst case of pseudo labeling and avoid such cases.\"\n\n[1] Generative adversarial nets. NeurIPS 2014\n\n[2] Semi-supervised learning with generative adversarial networks. arXiv 2016\n\n[3] Improved techniques for training gans. NeurIPS 2016 \n\n[4] Good semi-supervised learning that requires a bad gan. NeurIPS 2017\n\n[5] Adversarially learned inference. ICLR 2017\n\n[6] Explaining and harnessing adversarial examples. ICLR 2015\n\n[7] Virtual adversarial training: a regularization method for supervised and semi-supervised learning. PAMI 2018\n\n[8] Adversarial dropout for supervised and semi-supervised learning. AAAI 2018\n\n[9] Adversarial transformations for semi-supervised learning. AAAI 2020\n\n**Q2:** Comparison with adversarial SSL methods.\n\nWe add comparisons with adversarial SSL methods including VAT, ALI, and RAT in $\\underline{\\text{Table 1}}$ (training from scratch) and $\\underline{\\text{Table 2}}$ (finetuning from pre-trained models). The table below reports the average performance of these methods under different settings. Results show that DST significantly outperforms existing adversarial SSL methods.\n\n|Method|Train from Scratch|Supervised Pre-training|Unsupervised Pre-training|\n|:----|:---:|:---:|:---:|\n|VAT|23.0|55.4|51.3|\n|ALI|22.7|51.9|49.1|\n|RAT|34.2|58.3|56.1|\n|FixMatch|75.4|59.3|55.6|\n|DST (FixMatch)|**78.3**|**71.1**|**68.7**|\n\n", " We appreciate all three reviewers for their insightful and constructive comments. We have uploaded a revised draft to address all reviewers' comments. The updated parts are highlighted in blue font. Below is a summary of the main changes:\n\n- We add more baselines, including some adversarial methods, as suggested by reviewers R1 & R2.\n- We add a summary of relevant adversarial techniques applied in SSL, as suggested by reviewer R1.\n- We add the overall loss function to avoid misunderstanding. \n\nWe hope our responses and revisions will address all reviewers' concerns!", " The authors propose debiased self-training (DST) as an approach to reduce both data (sampling) bias and self-training bias during SSL. 
The DST recipe consists of two main elements: \n\n1) Pseudo-labels are treated as a related task, in that they are used to train a dedicated pseudo-classifier and shared learned features, while the main task classifier is trained only on ground-truth data, and, \n\n2) The learned features are additionally adversarially optimized to minimize the performance of an additional \"worst-case\" classifer, which is trained to miminmize loss on supervised data, whilst maximizing the loss of predictions from the pseudo-classifier. \n\nResults with both cold-start and pre-trained features on several datasets demonstrate solid gains over SOTA SSL approaches. Originality: 7\nQuality: 7 (6 before rebuttal response)\nClarity: 7\nSignificance: 7\n- While multi-classifier co-training approaches are ubiquitous in SSL approaches, multi-task learning is a standard way to improve feature representations for related tasks, and adversarial feature optimization has been widely investigated, optimizing the task features for a pseudo-classifer and adversarially against a worst-case classifier while training the task classifier is a nice, highly effective, and to my knowledge novel idea.\n- With that said, the related work section lacks a summary of relevant adversarial techniques in general and as applied or potentially applied to SSL (e.g. VAT, VAaD, ALI, etc.....). This is the most serious limitation of the paper currently, and I urge the authors to address this and fully and completely situate their work in the rebuttal (otherwise, I will not be able to reccommend that the paper be accepted).\n- In addition, adversarial SSL techniques ideally should be included in some of the results comparisions, although the methods that I am aware of are outperformed by many of the techniques already being compared.\n\nPost-rebuttal:\n- Thank you to the authors who have addressed my remaining questions and concerns by expanding the related work section, and including additional results comparing to and combining with DivideMix. I have increased my score from weak accept to accept. Well done! See limitations section. - See S&W section.\n- DivideMix and MixMatch should be cited/discussed relative to the current work. Wouldn't \"DivideDST\" (DivideMix utilizing DST instead of MixMatch) be even more effective?\n- The overall loss function and the weight given the adversarial term (7) are not stated in the paper (looking quickly at the code, it seems to be 1.0). Please confirm this, and update the paper accordingly.\n", " The paper analyzes the different types of bias in self-training, including data bias and training bias, and proposes to generate and utilize pseudo labels decoupled by two parameter-independent classifier heads to avoid error accumulation. Further, the worst case of self-training bias is estimated and used to guide the generation of pseudo labels. The proposed method is compared to several recent semi-supervised learning methods on a variety of datasets, and shown to better on most of the datasets. + The experimental results are competitive compared to the other existing semi-supervised learning methods.\n\n+ The proposed model is simple yet effective. The proposed idea of reducing the training and data bias is interesting.\n\n- In terms of model performance, the proposed method is not the best on all the datasets. \n\n- While the comparison is extensive, there are a few representative SSL methods missing in the comparison such as UDA [a] that uses the different unsupervised learning for semi-supervised learning. 
\n\n- Additional ablation study on requirement of label size is missing. For instance, when varying the label size, how does the model performs? Is it sensitive to the label size as compared to the other existing approach? \n\n[a] Unsupervised Data Augmentation for Consistency Training 1. What is the parameter size and the computation cost of the proposed method compared to the other existing methods?\n\n2. While the method is similar to the \"mutual learning\", except sharing the backbone of the two models, what could be the additional benefits upon \"mutual learning\"? Please consider to add some discussion. \n\n3. What are the difference of implementation between the proposed method and the FixMatch, such as data augmentation and training schedules? Does the proposed method require certain unique training setups that differs from other SSL methods? \n\n4. In terms of model performance, the proposed method is not the best on all the datasets. What could be the possible reasons that lead to its weaker performance on certain datasets? - In terms of model performance, the proposed method is not the best on all the datasets. \n\n- While the comparison is extensive, there are a few representative SSL methods missing in the comparison such as UDA [a] that uses the different unsupervised learning for semi-supervised learning. \n\n- Additional ablation study on requirement of label size is missing. For instance, when varying the label size, how does the model performs? Is it sensitive to the label size as compared to the other existing approach? \n\n[a] Unsupervised Data Augmentation for Consistency Training", " This paper suggests a new self-training approach called DST for semi-supervised learning. Usual self-training approaches generate incorrect pseudo-labelings and thus suffer from training instability due to the accumulated error. Specifically, based on data analysis, DST points out that there are two kinds of biases, namely training bias accumulated due to self-training strategy and data bias as a result of randomly sampled scarce samples. To address them, DST proposes two-phase solutions. First, to reduce training bias, DST adds a classification head to disentangle generation and utilization of pseudo labels. A default head is used for classification and pseudo-labeling, while another head is used to receive the pseudo-labels and perform training on unlabeled samples. Then, to reduce data bias, DST adversarially optimizes the worst training bias which is an indirect proxy of training bias. ### Strengths\n\n1. The motivation and the design of DST are well-connected to each other.\n2. Figures help the understanding of the paper.\n3. Empirical results support the effectiveness of the paper.\n\n### Weaknesses\n\n1. Some part of the paper is misleading or not clearly explained, is the 'worst head' $h^{'}$ different from $h$ or $h_{pseudo}$? And how to draw the 'worst possible head' $h^{'}$ during training in a hands-on manner? Does it require additional heavy computations?\n2. Two aspects make me concerned about the suggested method: (i) when the pseudo-labeling is incorrect, the feature generator $\\psi$ is still contaminated by the given signal. (ii) On the other hand, for the correct pseudo-labels, the pseudo-labeling head cannot gain their benefit to improve pseudo-labeling quality.\n3. The paper is relying on an assumption for mistakes of $h^{'}$ that does not always hold. 
The discrepancy with the current pseudo-labeling function also happens when the updated pseudo-labeling head $h^{'}$ is perfect while the current pseudo-labeling $h$ is imperfect. In this case, the effect of model learning would result in an undesirable way to induce the model to learn a worse hyperplane. See the weakness above. There is no severe limitation found in the paper so far." ]
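The thread above pins down DST's moving parts: a main head $h$ that generates pseudo labels but is never trained on them, a non-linear pseudo head trained on them through the shared backbone $\psi$, a worst-case head $h'$ updated by one maximization step per iteration (Appendix B.5), and an adversarial weight fixed to 1.0 (Q4). A minimal PyTorch-style sketch of one such training step follows; the stand-in backbone, dimensions, and FixMatch-style thresholding are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of one DST training step, assuming a toy backbone and
# CIFAR-like inputs; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, num_classes = 128, 100
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim))  # psi (stand-in)
head = nn.Linear(feat_dim, num_classes)  # h: makes pseudo labels, trained on labeled data only
pseudo_head = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                            nn.Linear(feat_dim, num_classes))  # non-linear pseudo head
worst_head = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                           nn.Linear(feat_dim, num_classes))   # h': worst-case head

opt_main = torch.optim.SGD(list(backbone.parameters()) + list(head.parameters())
                           + list(pseudo_head.parameters()), lr=0.03)
opt_worst = torch.optim.SGD(worst_head.parameters(), lr=0.03)

def dst_step(x_l, y_l, x_u_weak, x_u_strong, threshold=0.95):
    # (1) pseudo labels from the main head h on weak views; h itself is
    # never trained on them (decoupled generation and utilization).
    with torch.no_grad():
        probs = F.softmax(head(backbone(x_u_weak)), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()

    # (2) max step: h' fits labeled data while contradicting the pseudo
    # labels on unlabeled data (one SGD step per iteration, per the rebuttal).
    f_l, f_u = backbone(x_l).detach(), backbone(x_u_strong).detach()
    loss_worst = F.cross_entropy(worst_head(f_l), y_l) \
        - (F.cross_entropy(worst_head(f_u), pseudo, reduction="none") * mask).mean()
    opt_worst.zero_grad(); loss_worst.backward(); opt_worst.step()

    # (3) min step: supervised loss on h, pseudo-label loss routed through
    # the pseudo head, plus a term moving features away from the worst case;
    # the adversarial weight is 1.0 per the authors' reply to Q4.
    loss_sup = F.cross_entropy(head(backbone(x_l)), y_l)
    logits_p = pseudo_head(backbone(x_u_strong))
    loss_pl = (F.cross_entropy(logits_p, pseudo, reduction="none") * mask).mean()
    logits_w = worst_head(backbone(x_u_strong))  # grads also land on h',
    # but opt_main never updates h' and opt_worst.zero_grad() clears them.
    loss_adv = (F.cross_entropy(logits_w, pseudo, reduction="none") * mask).mean()
    loss = loss_sup + loss_pl + 1.0 * loss_adv
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return loss.item()
```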
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "h-glLyeY432", "BnMIK-Elbt8", "UL4gThotfQd", "p4II0lBrVaI", "BnMIK-Elbt8", "YYLORyS8Jl0", "IV_xf4jVPC", "o7HgBwCoOL", "jKq7_yJC5ov", "nips_2022_NI7moUOKtc", "nips_2022_NI7moUOKtc", "nips_2022_NI7moUOKtc", "nips_2022_NI7moUOKtc" ]
nips_2022_pd6ipu3jDw
Transformer-based Working Memory for Multiagent Reinforcement Learning with Action Parsing
Learning in real-world multiagent tasks is challenging due to the usual partial observability of each agent. Previous efforts alleviate the partial observability by historical hidden states with Recurrent Neural Networks, however, they do not consider the multiagent characters that either the multiagent observation consists of a number of object entities or the action space shows clear entity interactions. To tackle these issues, we propose the Agent Transformer Memory (ATM) network with a transformer-based memory. First, ATM utilizes the transformer to enable the unified processing of the factored environmental entities and memory. Inspired by the human’s working memory process where a limited capacity of information temporarily held in mind can effectively guide the decision-making, ATM updates its fixed-capacity memory with the working memory updating schema. Second, as agents' each action has its particular interaction entities in the environment, ATM parses the action space to introduce this action’s semantic inductive bias by binding each action with its specified involving entity to predict the state-action value or logit. Extensive experiments on the challenging SMAC and Level-Based Foraging environments validate that ATM could boost existing multiagent RL algorithms with impressive learning acceleration and performance improvement.
Accept
All reviewers agree that this paper makes a good contribution in developing a novel transformer-based memory structure for MARL. The developed approach is evaluated through comprehensive and solid experiments. The authors have also clearly addressed the questions/concerns raised by the reviewers.
train
[ "glyeHm1r0uL", "p6a5mNg6q6t", "YKbITWdaknG", "gnsWFiQzFyF", "TnxWaDvOZhD", "d5qIbUcfnsQ", "GvfBGPSd7C", "oo_rfeSHwXP", "pTm1O8s_k4h", "P4Tr0D--pFK", "pi8bo1VM453" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I have no remaining concerns and still suggest to accept the paper.", " Hi authors,\n\nthank you for trying to cover my questions.\n\n[wall-clock time] yes, the comparison in wall-clock time is dependent on which hardware is used, but as you mentioned, if same hardware was used, then the comparison should be good enough.\n\n[apply appending style memory or TrXL style memory] yes, Transformer looks suitable architecture to encode the observations from multiagent environments. However, what I want to see was if ATM uses more powerful memory module like appending style (e.g., TrXL or Vanilla Transformer) then the memory capacity will increase which could make better results. If so, it is also interesting, but it can be another project. \n\nThanks!", " Thank you for the insightful rebuttal. All my concerns/questions have been addressed. I still recommend this paper for acceptance.", " Dear Reviewer mbJQ,\n\nThank you sincerely for the insightful discussion and below is our response to your further questions and concerns.\n\n[wall-clock time] Thanks very much for your suggestion, we rerun the 4m_vs_5m in smac and 15x15_3p_5f in lbf, and plot the learning curves in terms of the wall-clock time. We have uploaded the figures (pdf format) in the attachment, which shows that our proposed ATM solve the tasks better than GRU-based models in wall-clock time. Notes that the wall-clock time performance depends heavily on the hardware environment. For example, if we run multiple ATM or GRU instances on a single machine simultaneously and the CPU becomes the bottleneck resource, the wall-clock time would be much longer than running on a single instance. Under this situation, the CPU bottleneck will reduce the ATM/GRU wall-clock time ratio. Thus, we rerun the 4m_vs_5m in smac and 15x15_3p_5f in lbf one instance per time on a single machine respectively for ATM and GRU’s wall-clock time comparison.\n\n[apply appending style memory or TrXL style memory] ATM’s multi-slot memory mechanism (e.g., 3 slots in smac and lbf or more if necessary) provides the shortcut recurrence to focus on longer sequential information, replacing the GRU’s single path of information flow with a network of shorter self-attention paths. Moreover, as shown in our ablation study in Figure 3(a), ATM indeed benefits from considering the multiagent characters of the factored multiagent observation space and the action space of meaningful entity interactions, and Transformer is the ideal structure to incorporate the multiagent observation/action space characters. Therefore, ATM is a powerful memory structure in the partially observable multiagent environments.", " Hi authors,\n\nthank you for addressing the points that I mentioned.\n\n[wall-clock time] \nWhat I want to see was to compare ATM and GRU-based models in wall-clock time comparison (similar graph with Fig. 2 and 6, but x-axis is relative wall-clock time) to see how much ATM is efficient in wall-clock time. Could you upload that also?\n\nAs see your tables for wall-clock time comparison and graphs in the paper, ATM looks can solve the tasks better than GRU-based models while it is not much more efficient than GRU in wall-clock time.\n\n[apply appending style memory or TrXL style memory] \nIt is interesting. What you mentioned is the far past knowledge over 3 steps is not usually useful for the agent. Then why Transformer-based model could work better than GRU-based models that can encode near past knowledge well? 
Because it can infer the interactions between entities and allies explicitly?\n", " We sincerely appreciate the valuable comments from the reviewer. We provide clarification to your questions as below.\n\n[wall-clock time] To make a fair comparison of the wall-clock time. We test ATM-QMIX and GRU-QMIX on smac while ATM-MAA2C and GRU-MAA2C on lbf with the condition that the CPU/GPU/Memory utilization rate keeps less than the maximum load.\n\nWe run 200k steps for each method in each map of smac. The results are shown in Table 1 below.\n\nTable 1. Wall-clock time in smac.\n| Map | ATM-QMIX time | GRU-QMIX time | ATM/GRU ratio |\n|----------------|:----------:|:----------------------------------:|:----------------------------------:|\n| 4m_vs_5m | 46min | 22min | 209%\n| 5m_vs_6m | 45min | 24min | 188%\n| 6h_vs_8z | 44min | 28min | 157%\n| corridor | 57min | 37min | 154%\n\nWe run 2000k steps for each method in each scenario of lbf. The results are shown in Table 2 below.\n\nTable 2. Wall-clock time in lbf.\n| Scenario | ATM-MAA2C time | GRU-MAA2C time | ATM/GRU ratio |\n|----------------|:----------:|:----------------------------------:|:----------------------------------:|\n| 3p_vs_5f | 55min | 39min | 141%\n| 4p_vs_5f | 60min | 41min | 146%\n| 4p_vs_6f | 60min | 40min | 150%\n\nWe could see that ATM is slower than GRU as ATM needs more computations but their wall-clock time on these tasks is at the same level.\n\n[apply appending style memory or TrXL style memory] If we set the memory slot number to be equal to the past timestep number, ATM could attend the past hidden states explicitly. On the smac and lbf tasks, we found that using the most recent memory (e.g., set memory slot number at 3) could achieve superior performance while increasing the memory slot number does not improve the performance consistently. As we focus on developing a simple yet efficient memory mechanism that considers the multiagent characters, we do not apply the computationally expensive appending style or XL style memory. We will add this discussion in the revised version.\n", " We sincerely appreciate the constructive comments from the reviewer. We provide clarification to your questions and concerns as below.\n\n[Lack of coordination] We follow the setting that all agents predict their next action at the same time, which is common in the MARL literature [1, 2].\n\n[partially observable setting] We use the field of view to introduce partial observability. In detail, if one entity is out of the agent’s view, the information about this entity (including relative spatial positions between this unseeable entity and the agent itself) will not appear in the agent’s local observation. With the help of the transformer, ATM can receive the input of a dynamic agent number or entity number (by setting a maximum entity number and padding zeros to the unseeable entity if this entity is out of view).\n\n[playing against learning agents] Enabling the proposed model to play against other learning agents is a challenging open problem where the environment dynamics are changing with the learning of the opponent agents. It is possible to equip ATM with the opponent modeling methods to explicitly model the opponent learning agents to tackle this problem. This could be an interesting research direction for future work.\n\n[gray squares in Figure 1] The grey squares represent relative positional embeddings with spatial or sequential order information of entities. For memory or duplicated self entities, it is the one-hot ID. 
For allies or other entities (such as enemies), it is the relative distance between the central self entity and the allies or other entities. We mentioned the grey squares in Figure 1’s caption and explain them in Line 121-128. We will elaborate on our description to make it clearer.\n\n[Question 1] Yes, we will add the indexing as suggested.\n\n[Question 2] It is because that Agent 5 is dead after around time step 20, then the attention mechanism computes Agent 5’s attention weights at near 0.\n\n[Question 3] It is because, after the first few steps of walking, agents are attacking the enemies and walking to avoid being killed. Then the agents are also concentrating on the current allies and enemies to make decisions and give attention weights to these related entities. Appendix C gives a deeper analysis and shows that memory still plays an important role when the agent makes decisions.\n\n[Question 4] Although it is an interesting comparison, the current SMAC platform does not support replacing the built-in AI with trained agents and we follow the setting of previous works such as QMIX and QPLEX.\n\n[Minor] We will revise the minors as suggested.\n\n**Reference**\n\n[1] Lowe, R., WU, Y., Tamar, A., Harb, J., Pieter Abbeel, O., & Mordatch, I. (2017). Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments. Proceedings of the 31st Advances in Neural Information Processing Systems, 6379–6390.\n[2] Rashid, T., Samvelyan, M., Witt, C. S. de, Farquhar, G., Foerster, J. N., & Whiteson, S. (2018). QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning. Proceedings of the 35th International Conference on Machine Learning, 4292–4301.", " We sincerely appreciate the insightful comments from the reviewer. We provide clarification to your questions and concerns as below.\n\n[transformer-based working memory] While previous works focus on the single-agent memory mechanisms, we propose a unique transformer-based multiagent memory mechanism by considering the multiagent characters in both the observation space and action space. In particular, for the first time, we develop a novel memory structure with the help of the working memory updating mechanism by explicitly considering the allied agents for the factorized multiagent observation space. We will elaborate on our description of this contribution.\n\n[observation] The $o_{ally}$ is the agent’s observation of an ally such as the ally’s attribution features (including the ally’s health and relative coordinates between the ally and the agent). Agents cannot share information such as their first-person observations among themselves as we assume there is no communication. At the same time, if one entity is in both agent $i$ and agent $j$’s sight ranges, then agents $i$ and agent $j$’s own local observations include the same seen entity. In Line 186-192, we provide the details of each entity type (vector features instead of images) in the smac tasks as an example. In smac, the exact observation of agent $i$ is a set of entities $e$ where $e$ ∈ {$o_{self}^{i}$,**o**$^{i}\\_{ally}$,**o**$^{i}\\_{enemy}$} and $d(e,i)<d_{sight}^{i}$ ($d(e,i)$ is the distance between the entity $e$ and agent $i$, and $d_{sight}^{i}$ is agent $i$’s sight range). We are glad to elaborate on our description of the observation as suggested.\n\n[alternatives to the entity-bound action layer] As stated in Eq. 
(7) and Line 148-150, one common alternative to the EBA layer is using linear layers to map the self entity embedding to all action nodes’ values. Here we use this setting to ablate EBA in the experiments as the self entity embedding also contains the information from all entities after the transformer. We will elaborate on our description of the ablation setting.\n\n[interpretation of the memory] In Figure 5(a), the memory attention heatmap shows how the newly generated memory focuses on each kind of entity during the whole episode. In Figure 5(a), the enemy and ally entities receive the most attention weights after the first few couples of steps, which means the enemy and ally information is absorbed into the new generated memory slot during these timesteps. This helps interpret what the memory consists of.\n\n[Limitation] It is interesting and possible to generalize the entity-bound action layer into the lower-level action space. However, the low-level actions such as rotation or firing in a certain direction may be related to different entities at different timesteps, which is difficult to configure manually or by rule. For this purpose, it needs to design a mechanism to automatically map entities to actions, which is not trivial. We list it as our future work as shown in the conclusion section.", " This paper presents an approach for modeling partially-observable multi-agent MDPs. The paper proposes two modifications on top of a general transformer for modeling the relationship between observation and action: first a memory module that maintains information from previous observations associated with all agents, and second a prediction layer that filters information for predicting actions by the arguments of that action (e.g., predicting to attack enemy i is conditioned on the observation information about enemy i). They evaluate on two MARL environments and show improvements over existing methodologies for modeling memory over time, including RNN-based methods. Strengths:\n\nThis is an interesting problem and an interesting approach to the problem. The paper is relatively clear and experiments on both environments are mostly comprehensive and performance over compared methods is impressive.\n\n--- \n\nWeaknesses:\n\nThe way that transformer-based working memory is introduced in 42-48 makes it seem as though the main contribution is applying an existing method to an existing problem. I'm not familiar enough with related work to make this judgment, however.\n\nI was confused about what an observation includes: is o_ally the observation *of* an ally, or the ally's observation? If the latter, does this also hold for enemies, and if so, it seems as though an agent shouldn't have access to the enemy's observation. If the former, is there information sharing between agents (i.e., sharing their first-person observations, which may not overlap with another ally's)? How would this affect performance? If would help to have a formal definition of the task, including what exactly observations look like (are they images?). * What would alternatives to the entity-bound action layer be? Using only memory of o_self to predict actions, or using some combination (e.g., concatenation) of all observations (which has a drawback of many parameters, some of which may be unnecessary)? I'm a bit confused what it means to ablate EBA in the experiments, and whether you experimented with all three settings (EBA, self-only, and all observations).\n* Did you perform any interpretation of the memory? 
E.g., if memory/observation can be interpreted as an image. The entity-bound action layer seems specific to the action spaces available in the two environments. What if the action space is much lower-level, such that there aren't semantic relationships between a particular action and a particular entity in the environment (e.g., instead of \"attack enemy i\", an agent may have to move to a strategic position and rotation with multiple navigation actions, then \"attack\" by firing some weapon in the direction it is pointed)? Would the entity-bound action layer still generalize to this?", " This paper is about designing a new architecture, Agent Transformer Memory (ATM), for multiagent reinforcement learning where each agent is equipped with its own working memory and action space. Specifically, the authors propose to use a transformer-based architecture to process both the moving entities in the environment and the recent memory slots. The output of the transformer consists of an updated memory slot (to be pushed to the limited-capacity working memory) and the encoded representations for each entity. Then, each agent selects an action (from their respective action space) based on their most recent memory slot and the representations of their surrounding entities. Experiments were conducted on the well-established StarCraft Multi-Agent Challenge (SMAC) and Level-Based Foraging (LBF) environments. The authors test the proposed agent transformer memory with QMIX and QPLEX algorithms commonly used on SMAC, and with MAPPO and MAA2C for experiments on LBF. Empirically, it was shown that agents with memory and personalized action space perform better. The authors also investigate different memory updating schemes and show that using ATM works best. **What I like about this paper**\n- Each agent having their own point-of-view of the environment, own working memory, and own action space, while using the same model (same weights, different input $T_{in}$).\n- The ablation study of the proposed architecture ATM. It does seem like having a working memory is important (even if only one slot).\n\n**Potential weaknesses**\n - Lack of coordination between the agents. If I understood correctly, all agents predict their next action at the same time. That said, it seems to be a common approach in the MARL literature.\n - Even though this work takes place in a partially observable setting, the proposed technique requires to know in advance how many entities (and their attributes) there will be in the game. Since the proposed architecture requires the relative spatial positions between the self entity and all other entities (even if they are out of the field of view?), it is not clear to me what remains \"partially observable\".\n - It is not clear how the proposed model performs when playing against other learning agents.\n\n**Originality, quality, clarity, and significance**\n\nAs pointed out in the related work, the concept of having a transformer-based working memory is not new. The paper would benefit from stating clearly how the proposed technique differs from each related work. The main thing I could see is the application of a transformer encoder in the multi-agent RL setting with agent-centric observations and working memory. It wasn't clear to me how the Entity-Bound Action Layer related to any previous work (if any). 
I found the paper technically sound and the ablation study is backing up the proposed agent transformer memory component.\n\nI found the paper well-written and well-organized for the most part. In Figure 1, it is not clear what the gray squares represent, is it the spatial/temporal embeddings, or the entity/memory IDs? While it is unclear how the proposed model compares to the actual state-of-the-art on tested environments, the empirical results suggest the ATM architecture would be useful to others in the community.\n\nOverall, I tend to recommend this paper for acceptance.\n\n**Minor**\n- p.2 (l.83): should $d_o$ be $d_e$, if not what is $d_o$?\n- p.5 (l.164): \"for four times\" -> \"four times\"\n- p.7 (l.243): \"...agent need remember them.\" -> \"..agent needs to ...\"\n- p.7 (l.244): \"...sight field\" -> \"..sight range\"\n- p.7 (l.246): \"...focusing fire\" -> \"..focused fire\" - Based on Eq. 4 and 5, should $T_{in}$ and $T_{out}$ be indexed by $i$, i.e., $T_{in}^i$ and $T_{out}^i$?\n- In Figure 5, why is Agent 5 heat map incomplete? Was it taken out of action around time step 20?\n- From Figure 5, it looks like the memory is not very useful to attend to after the first few couples of steps. Why is that?\n- Did the authors try making the trained agents with different architectures play against each other (instead of against the built-in AI)? The authors discussed the limitations and social impacts in the Appendix. Notably, how the learning process requires some exploration that could lead to unsafe situations for both the agents and humans, and also how the Entity-Bound Action Layer requires expert knowledge to manually configure it.\n", " This paper proposes a new Multiagent Reinforcement Learning (MARL) method using Agent Transformer Memory (ATM). ATM consists of 4 parts; memory, self, allies and entities. Especially, for self, it duplicates the token to use it for Entity-Bound Action (EBA) layer which affects the performance clearly. For memory, ATM uses working memory concept, and other memory types are also evaluated as an ablation study. ATM-based MARL agent outperforms other methods for StarCraft Multi-Agent Challenge (SMAC) and its architecture (EBA and working memory) shows better performance than alternatives (e.g., relational memory or without EBA).\n\n=================================\n\nWhat happening if they apply appending style memory to their model is interesting and they didn't show that, but except that, they handled lots of concerns through author-reviewer discussion phase. I sustain my score. Strengths:\n- It proposes a new memory module, ATM which can infer the interaction between the agent, allies, entities and memory.\n- It evaluates for SMAC, ATM-based agent outperforms GRU-based one which is usually used.\n- It reports the ablation studies for EBA and memory types and the number of memory slots.\n\nWeaknesses:\n- As shown in [1], the agent with Transformer requires more computations, even though it can benefit for sample efficiency [1,2] or interactions between entities like this paper. For example, in Figure 2 (a) and (b), ATM-based models outperform GRU-based models in steps, but when comparing in wall-clock time, ATM-based model could be much slower than recurrent module-based models.\n- The memory following the working memory mechanism is interesting, but it cannot attend the past explicitly. 
Other transformer-based agents [2,3] get the advantages from attending the past directly, while it cannot do that through ATM architecture.\n\n[1] Parisotto, Emilio, and Ruslan Salakhutdinov. \"Efficient transformers in reinforcement learning using actor-learner distillation.\" arXiv preprint arXiv:2104.01655 (2021).\n\n[2] Parisotto, Emilio, et al. \"Stabilizing transformers for reinforcement learning.\" International conference on machine learning. PMLR, 2020.\n\n[3] Chen, Chang, et al. \"Transdreamer: Reinforcement learning with transformer world models.\" arXiv preprint arXiv:2202.09481 (2022). - Could you share the plots in Figure 2 and 6 by comparing in wall-clock time? I think ATM should be slower than GRU, but want to know how much slower. If ATM is not slower than GRU, then it is interesting also.\n- Did you try to apply appending style memory or TrXL style memory? Those one can be computationally expensive, but useful from the perspective that it can support to attend the past explicitly. I think that one of the limitations of ATM is computationally more expensive than recurrent module. I don't want to say \"so we don't use ATM\". I want to discuss the comparison in the aspect of computation or time to get the action from the agents. Even though ATM is more expensive, sometimes we must pick ATM to solve some tasks requiring to infer huge interactions. " ]
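The discussion above fixes several concrete details of ATM: a transformer over factored entity tokens plus a small number of memory slots (3 on SMAC/LBF), a FIFO working-memory update that pushes one newly generated slot per step, and an entity-bound action (EBA) layer that scores "attack enemy i" from enemy i's output embedding. A minimal PyTorch sketch follows; tensor sizes, the six move actions, and reading the new slot from the self position are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of an ATM-style forward pass: transformer over
# [memory slots; self; allies; enemies], FIFO working-memory update,
# and an entity-bound action (EBA) head. Shapes are illustrative.
import torch
import torch.nn as nn

d, n_slots = 64, 3  # 3 memory slots, as used on SMAC/LBF in the rebuttal
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
    num_layers=2)
mem_writer = nn.Linear(d, d)   # produces the slot pushed into working memory
move_head = nn.Linear(d, 6)    # assumed no-op/stop/4 moves, from the self embedding
attack_head = nn.Linear(d, 1)  # per-enemy value from that enemy's embedding

def atm_step(memory, self_tok, allies, enemies):
    """memory: (B, n_slots, d); self_tok: (B, 1, d); allies: (B, Na, d);
    enemies: (B, Ne, d). Returns Q-values/logits and the updated memory."""
    tokens = torch.cat([memory, self_tok, allies, enemies], dim=1)
    out = encoder(tokens)
    self_emb = out[:, n_slots]                    # output at the self position
    new_slot = mem_writer(self_emb).unsqueeze(1)  # assumed source of the new slot
    memory = torch.cat([memory[:, 1:], new_slot], dim=1)  # FIFO update
    enemy_emb = out[:, -enemies.size(1):]
    q_move = move_head(self_emb)                    # (B, 6)
    q_attack = attack_head(enemy_emb).squeeze(-1)   # (B, Ne): EBA binding
    return torch.cat([q_move, q_attack], dim=1), memory

# Usage with dummy tensors:
B, Na, Ne = 2, 3, 5
q, mem = atm_step(torch.zeros(B, n_slots, d), torch.zeros(B, 1, d),
                  torch.zeros(B, Na, d), torch.zeros(B, Ne, d))
```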
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "oo_rfeSHwXP", "gnsWFiQzFyF", "GvfBGPSd7C", "TnxWaDvOZhD", "d5qIbUcfnsQ", "pi8bo1VM453", "P4Tr0D--pFK", "pTm1O8s_k4h", "nips_2022_pd6ipu3jDw", "nips_2022_pd6ipu3jDw", "nips_2022_pd6ipu3jDw" ]
nips_2022_kyY4w4IgtM8
Sharing Knowledge for Meta-learning with Feature Descriptions
Language is an important tool for humans to share knowledge. We propose a meta-learning method that shares knowledge across supervised learning tasks using feature descriptions written in natural language, which have not been used in the existing meta-learning methods. The proposed method improves the predictive performance on unseen tasks with a limited number of labeled data by meta-learning from various tasks. With the feature descriptions, we can find relationships across tasks even when their feature spaces are different. The feature descriptions are encoded using a language model pretrained with a large corpus, which enables us to incorporate human knowledge stored in the corpus into meta-learning. In our experiments, we demonstrate that the proposed method achieves better predictive performance than the existing meta-learning methods using a wide variety of real-world datasets provided by the statistical office of the EU and Japan.
Accept
This paper presents a novel meta-learning approach based on learning a sentence encoder which maps feature descriptions to embeddings. The sentence encoder is shown to generalize to new tasks during the test phase, hence allowing few-shot learning. The main concern raised by the reviewers was about the use of only two datasets, which are non-standard for evaluating meta-learning. However, as the authors note, the proposed approach requires datasets where feature descriptions are available, and hence the choice of datasets seems reasonable. The authors are encouraged to revise the paper to discuss how the approach might be generalized to other setups in meta-learning.
train
[ "jSdJE1-nBa1", "zwe2qC4O-Qi", "Z2iNMkdAXuD", "mh7ojlizpSk", "NHc9-6Ss-zW", "nraSoK-9UW", "xxFXC9XmR5B", "LXGKjsbz1xV" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your constructive comments.\n\n> It looks like the only difference between the proposed method and a baseline (MDK + B) is the usage of the feature encoder (Fig 1), which is a 3 layer neural network. It looks like the authors agree with that as well (line 220). So the technical novelty (although guided by good intuition) seems to boil down to the addition of a single layer on top of a baseline.\n\nMDK+B is not an existing method. We compared with MDK+B to demonstrate the effectiveness of the feature encoder in the proposed method. The original meta deep kernel learning (MDK) [13] cannot use feature descriptions, cannot handle tasks with different number of features, and cannot be used for our problem. Therefore, the technical novelty is not just the addition of a single layer on top of an existing method.\n\n> I think it is important to experiment with different types of sentence encoders. Given that the descriptions they consider are very short, it's possible that simple word2vec vectors (bag-of-words averaging) could do the trick.\n\nMDK+W can be think as the proposed method with a different type of (very simple) sentence encoders, where principal component analysis is used for the sentence encoder. We did not conduct experiments with other types of sentence encoders since any sentence encoders can be used in our framework. We will conduct experiments with different sentence encoders (e.g., word2vec) to check the sensitivity of sentence encoders.\n\n> Another baseline could have been added, which fine-tunes the NN model on the downstream task.\n\nWe did not include fine-tunes the NN model on the downstream task for comparing methods, because such an approach does not work well when the number of labeled instances is very small, which is our focus, as shown in existing meta-learning papers (e.g., [3]).\n\n> Is Ours + T the same as Ours + F in Table 4?\n\nYes. We will fix it.\n\n> How do you think the model would fare if you directly used the instance embeddings (z) to make the prediction through a function approximator which is fine-tuned on the support set rather using a GP. In other words, what if you did few-shot fine-tuning on the support set? I believe this would be equivalent to NN with few-shot fine-tuning.\n\nWe think that such an approach can be used for our problem (we understand that it uses model-agnostic meta-learning (MAML) [3] instead of meta deep kernel learning (MDK) [13] in our framework). Our model can be think that the last layer of NN based on GP is fine-tuned using the support set. We used GPs since MDK works better than MAML in [13]. \n\n", " \nThank you for your constructive comments.\n\n> the empirical evaluation is done only on two relatively small datasets, which makes it difficult to judge how general the proposed method is in practice. It would be much more informative to try this method on large datasets.\n\nWe chose these two datasets since they contain feature descriptions, and many tasks that are necessary for meta-learning. The total number of instances in the meta-training data is 175,910 in e-Stat and 315,210. They are not small compared with commonly used meta-learning datasets (e.g., miniImageNet contains 50,000 training images, and Omniglot contains 32,460 handwritten characters). 
We did not use the datasets commonly used in the existing meta-learning literature, e.g., Omniglot, Mini-imagenet, and the datasets in Meta-Dataset [20], because they do not contain feature descriptions, as described in the last sentence of Sec 4.1.\n\nWe believe that the better performance of the proposed method compared with the comparing methods in Table 4 demonstrates the effectiveness of each component in the proposed method. The decrease of the error as the number of meta-training datasets increases (Figure 3) demonstrates that the meta-learning of the proposed method works well.\n\n> For categorical features that do not come with descriptions, is there any systematic way to generate the feature description?\n\nThe feature descriptions might be generated by supervised learning to predict the description given categorical features, or other features. Although we are not aware of existing work on generating feature descriptions, if such work exists, we can use it before applying the proposed method.\n\n> Since the feature description is manually written, will the model quality be sensitive to the choice of the description?\n\nIf the pretrained BERT gives appropriate representations for descriptions (e.g., even if descriptions are not the same but have similar meanings, their representations are similar), or if the sentence encoder is trained well on the meta-training data, the choice of the description would not be so sensitive.\n", " Thank you for your constructive comments.\n\n> Despite the plausible setting, there are still a few points that the authors fail to explain elaborately. For example, whether the feature description sets or feature type sets intersect between meta-training datasets and meta-test datasets. The question is out of my curiosity about whether the meta-trained model using feature descriptions could really generalize to those unseen tasks which contain entirely different feature description sets or even feature types from the meta-training tasks.\n\nThere are intersections of feature descriptions between meta-training and meta-test datasets. For example, country names and years appear in most tasks in Eurostat. In Eurostat, there were 8059 unique feature descriptions. 3881 out of the 8059 appeared only in one task. The average number of tasks in which a unique feature description appears is 13. We will add explanations on the datasets.\n\nSince some of the feature descriptions (e.g., country names and years) appear in many tasks, we cannot show the performance on meta-test tasks with entirely different feature description sets. Instead, the following table shows the test error on tasks where more than half of the feature descriptions do not appear in the meta-training tasks. The proposed method also achieved lower errors than the other methods on tasks that contain feature descriptions that do not appear in the meta-training data. We will add this analysis.\n\nTable: Average test mean squared errors on tasks where more than half of the feature descriptions do not appear in the meta-training tasks (#support=5).\n||Ridge|GP|HML|NN|MDK+C|MDK+W|MDK+B|Ours-M|Ours|Ours+F|\n|:----|----:|----:|----:|----:|----:|----:|----:|----:|----:|---:|\n|e-Stat |0.968|0.892|0.836|0.613|0.665|0.747|0.685|0.830|0.607|0.568|\n|Eurostat|1.019|0.942|0.993|1.013|0.987|0.993|1.023|0.951|0.866| |\n\n> First, the method seems to depend on the number of meta-training datasets. 
Second, in this work, it is to be doubted whether the method could extend to datasets where instances contain numerical features or the corresponding labels are discrete. It could be better to find a dataset conforming to the characteristics described above to evaluate the effectiveness of the method.\n\nWe admit that the method depends on the number of meta-training datasets, as described in the limitation section. Many existing meta-learning methods also have this limitation. Although we described how to handle numerical features and discrete labels in Section 3.4, we have not evaluated how they work in our experiments since we cannot find datasets with many tasks of numerical features or discrete labels with feature descriptions. Despite these limitations, we believe our contributions are important for the study of sharing knowledge across various tasks in meta-learning.\n\n> A baseline in which the neural network with feature descriptions is not meta-trained could be considered, which could evaluate the necessity of meta-training despite using the feature description.\n\nNN in our experiments corresponds to the neural network with feature descriptions without meta-training. The better performance of the proposed method compared with NN demonstrates the effectiveness of meta-learning in the proposed method.\n\n> Whether the feature description sets or feature type sets intersect between meta-training datasets and meta-test datasets.\n\nAs described above, there are intersections of feature descriptions and types between meta-training datasets and meta-test datasets.", " Thank you for your constructive comments.\n\n> Despite the idea being general, the evaluation setting looks very restricted. The datasets are not commonly used in the existing meta-learning literature as far as I know. I would suggest the author justify why they chose these datasets, and test their method more broadly on other datasets too.\n\nWe did not use the datasets commonly used in the existing meta-learning literature, e.g., Omniglot, Mini-imagenet, and the datasets in Meta-Dataset [20], because they do not contain feature descriptions, as described in the last sentence of Sec 4.1. We used e-Stat and Eurostat since they contain feature descriptions, and there are many tasks that are necessary for meta-learning. We will add more justification for why we chose these datasets in the revised paper.\n\n> It's not clear what the tasks are in their datasets and how related they are. How different are the features of different tasks? The author should give some examples for readers to better understand the difficulty of the generalization across these tasks.\n\nIn Eurostat, there were 8059 unique feature descriptions. 3881 out of the 8059 appeared only in one task. The average number of tasks in which a unique feature description appears is 13. The difficulty of the generalization across these tasks is shown by the low performance of the comparing methods. For example, due to the difficulty of generalization across tasks without feature descriptions, the error of HML was high. The low performance of NN (using BERT representations of feature descriptions as input to a neural network) also shows the difficulty. We will add examples to show how the features of different tasks differ. \n\n> From the results in Table 4, it seems most of the improvement is from using pretrained models (comparing NN v.s. 
ours), which makes me question the contributions of other parts of the proposed method.\n\nThe better performance of Ours compared with NN demonstrates the effectiveness of the task adaptation described in Sec 3.2.2. Table 3 explains the difference between Ours and the other comparing methods. The better performance of Ours compared with MDK+W shows the effectiveness of the use of pretrained BERT. The better performance compared with MDK+B shows the contribution of the translation by the sentence encoder in Eq.(1). The better performance compared with Ours-M shows the contribution of the mean function in Eq.(4). We will clarify the contribution of each part of the proposed method.\n\n> Regarding technical details, the author averages the embeddings of different features (equation 3), which raises a concern about how representative the resulting vector can be if the number of features increases. In this work, it seems not a big issue because the number of features for each task is very small (4-5), but I question this if the proposed method is used for many real machine learning problems which usually have thousands of features.\n\nAs shown in the Deep Sets paper [29], permutation-invariant functions can be represented by $\rho(\sum_x \phi(x))$ using suitable transformations $\phi$ and $\rho$. We use neural networks $f_{FE}$ and $f_{IE}$ before and after the summation, which is a structure similar to Deep Sets. We used the average instead of the summation because the average is often more stable than the summation. Averaging has been successfully used for modeling permutation-invariant functions, including cases with a large number of embeddings (e.g., [Garnelo et al., Conditional Neural Processes, ICML, 2018]). The representation power can be improved by using attention networks for $f_{FE}$ before the averaging, which is one of our future works, as described in the Conclusion. (A minimal code sketch of this permutation-invariant encoder is included after the reviews below.)", " This paper proposes a meta-learning method that uses natural language to describe features. The natural language feature description can be encoded with pretrained models in the same way for different tasks, thus enabling the transferability of these features across tasks. The author conducted experiments using some statistical datasets that have categorical features and numerical labels and split these datasets into subsets for meta-training, meta-validation, and meta-testing. Experiments show that the proposed method achieves faster adaptation to a new task with a few task-specific examples. Strengths:\n1. The idea of describing features in natural language in the meta-learning setting looks novel and reasonable because such natural language features can be easily shared.\n2. The problem formulation and method description are very clear.\n\nWeaknesses:\n1. Despite the idea being general, the evaluation setting looks very restricted. The datasets are not commonly used in the existing meta-learning literature as far as I know. I would suggest the author justify why they chose these datasets, and test their method more broadly on other datasets too.\n2. It's not clear what the tasks are in their datasets and how related they are. How different are the features of different tasks? The author should give some examples for readers to better understand the difficulty of the generalization across these tasks.\n3. From the results in Table 4, it seems most of the improvement is from using pretrained models (comparing NN v.s. ours), which makes me question the contributions of other parts of the proposed method. \n4. 
Regarding technical details, the author averages the embeddings of different features (equation 3), which raises a concern about how representative the resulting vector can be if the number of features increases. In this work, it seems not a big issue because the number of features for each task is very small (4-5), but I question this if the proposed method is used for many real machine learning problems which usually have thousands of features. See the weaknesses above. Yes, the author discussed the limitations quite well.", " The paper proposes a meta-learning method that builds relationships across supervised learning tasks with different feature spaces using feature descriptions written in natural language. When different tasks share similar or related feature descriptions, the proposed method, benefiting from a pre-trained language model (e.g., BERT) that implicitly contains different kinds of knowledge, could improve the generalization performance on unseen tasks with small labeled data. The authors also empirically demonstrate that the proposed method outperforms the existing meta-learning methods on real-world datasets. Strengths:\n1. The novel problem setting is reasonable and challenging, where instances have the same feature space in the same task and different feature description sets and feature spaces among different tasks.\n\n2. Extensive experiments and ablation studies are provided to determine the effectiveness of the proposed method. \n\n3. The paper is well-written and easy to follow.\n\nWeaknesses:\n1. Despite the plausible setting, there are still a few points that the authors fail to explain elaborately. For example, whether the feature description sets or feature type sets intersect between meta-training datasets and meta-test datasets. The question is out of my curiosity about whether the meta-trained model using feature descriptions could really generalize to those unseen tasks which contain entirely different feature description sets or even feature types from the meta-training tasks.\n\n2. As the authors mentioned in the Limitation section, the proposed method has several limitations. First, the method seems to depend on the number of meta-training datasets. Second, in this work, it is to be doubted whether the method could extend to datasets where instances contain numerical features or the corresponding labels are discrete. It could be better to find a dataset conforming to the characteristics described above to evaluate the effectiveness of the method.\n\n3. A baseline in which the neural network with feature descriptions is not meta-trained could be considered, which could evaluate the necessity of meta-training despite using the feature description. Do the feature description sets or feature type sets intersect between meta-training datasets and meta-test datasets? The authors have addressed the limitations.", " This work proposes to use the BERT representation of the textual description of the feature value as an input feature to improve model generalization, particularly in the meta-learning setup. Empirically, compared with baseline meta-learning methods on 2 datasets, the proposed method achieves better or competitive performances. The idea of the work is very interesting and intuitive. The paper is well written and easy to understand. The empirical evaluation shows some promising results. 
\n\nHowever, the empirical evaluation is done only on two relatively small datasets, which makes it difficult to judge how general the proposed method is in practice. It would be much more informative to try this method on large datasets.\n- For categorical features that do not come with descriptions, is there any systematic way to generate the feature description?\n- Since the feature description is manually written, will the model quality be sensitive to the choice of the description? The work only tests the proposed method on two relatively small datasets. Hence, it's hard to judge how general the conclusions are.", " This paper considers the problem of heterogeneous meta-learning, where the different datasets potentially have different numbers and types of features.\nTo enable using the same model for all the datasets, the authors consider an approach where they use textual feature descriptions, which makes the model permutation invariant with respect to the features and also agnostic to the number of features.\nThis allows the same model to be used for all tasks, regardless of their composition of features, because new features can simply be encoded using a sentence encoder.\nThey show that they outperform other approaches on two datasets. **Strengths**\n1. The approach is very sound and in line with other recent approaches which propose to use descriptions of classes, features, and tasks.\n1. The method was compared with a lot of other baselines, which helps put the scores in context.\n1. The gains seem to be consistent and strong on both meta-tasks considered.\n\n\n**Weaknesses**\n1. It looks like the only difference between the proposed method and a baseline (`MDK + B`) is the usage of the feature encoder (`Fig 1`), which is a 3 layer neural network. It looks like the authors agree with that as well (line 220). So the technical novelty (although guided by good intuition) seems to boil down to the addition of a single layer on top of a baseline.\n1. I think it is important to experiment with different types of sentence encoders. Given that the descriptions they consider are very short, it's possible that simple word2vec vectors (bag-of-words averaging) could do the trick.\n1. Another baseline could have been added, which fine-tunes the `NN` model on the downstream task.\n1. Is `Ours + T` the same as `Ours + F` in Table 4?\n1. How do you think the model would fare if you directly used the instance embeddings (`z`) to make the prediction through a function approximator which is fine-tuned on the support set rather than using a GP. In other words, what if you did few-shot fine-tuning on the support set? I believe this would be equivalent to `NN` with few-shot fine-tuning. Has been addressed in the supplementary PDF." ]
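To make the encoder design discussed in the rebuttals above concrete (BERT embeddings of feature descriptions, aggregated permutation-invariantly in the Deep-Sets style $\rho(\sum_x \phi(x))$, with averaging in place of the sum), here is a minimal sketch. The model name, layer sizes, and the `f_FE`/`f_IE` definitions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: encode feature descriptions with a pretrained LM,
# then aggregate them permutation-invariantly as rho(mean(phi(x))).
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

d_bert, d_hidden = 768, 128
f_fe = nn.Sequential(nn.Linear(d_bert, d_hidden), nn.ReLU())   # per-feature map (phi)
f_ie = nn.Sequential(nn.Linear(d_hidden, d_hidden), nn.ReLU())  # instance map (rho)

def encode_instance(feature_descriptions):
    # One embedding per feature description ([CLS] token of BERT).
    batch = tokenizer(feature_descriptions, padding=True, return_tensors="pt")
    with torch.no_grad():
        emb = bert(**batch).last_hidden_state[:, 0]             # (n_features, 768)
    # Averaging makes the representation invariant to feature order
    # and independent of the number of features.
    return f_ie(f_fe(emb).mean(dim=0))                          # (d_hidden,)

z = encode_instance(["country name", "year", "employment rate"])
print(z.shape)  # torch.Size([128])
```

The resulting instance embedding could then feed a GP head, as in MDK, or any few-shot predictor; tasks with different feature spaces differ only in their lists of description strings.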
[ -1, -1, -1, -1, 5, 6, 4, 7 ]
[ -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "LXGKjsbz1xV", "xxFXC9XmR5B", "nraSoK-9UW", "NHc9-6Ss-zW", "nips_2022_kyY4w4IgtM8", "nips_2022_kyY4w4IgtM8", "nips_2022_kyY4w4IgtM8", "nips_2022_kyY4w4IgtM8" ]
nips_2022_YxUdazpgweG
MultiScan: Scalable RGBD scanning for 3D environments with articulated objects
We introduce MultiScan, a scalable RGBD dataset construction pipeline leveraging commodity mobile devices to scan indoor scenes with articulated objects and web-based semantic annotation interfaces to efficiently annotate object and part semantics and part mobility parameters. We use this pipeline to collect 230 scans of 108 indoor scenes containing 9458 objects and 4331 parts. The resulting MultiScan dataset provides RGBD streams with per-frame camera poses, textured 3D surface meshes, richly annotated part-level and object-level semantic labels, and part mobility parameters. We validate our dataset on instance segmentation and part mobility estimation tasks and benchmark methods for these tasks from prior work. Our experiments show that part segmentation and mobility estimation in real 3D scenes remain challenging despite recent progress in 3D object segmentation.
Accept
The reviewers tend to agree on the value of this 3D dataset, but point to some questions about labelling and accuracy. The rebuttal very convincingly addresses these points, clarifying the novelty and value of this new dataset. I agree with the authors that datasets are clearly in scope for the main NeurIPS program and that the datasets track explicitly includes as a FAQ: "My work is in scope for this track but possibly also for the main conference. Where should I submit it?" with the answer "This is ultimately your choice".
test
[ "SnjXEVQ8Vm2", "hGjTYegkgu", "fS8cfhPUsxO", "g9U8dcH7hP", "W0JTrPUrMO8x", "mLzE2MzwYtc", "yLHZ0mfWAq", "K5ADA_8ixwL", "XKWPZvuyPI", "sG-YfIucLF", "VTSCtlgUGnr", "6E4lU7FKknX", "biZuN7V3kJj" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their effort in reviewing our paper and recognize the reviewer's opinion. However, we would like to point out that the NeurIPS call for paper explicitly lists \"Infrastructure (e.g., datasets, competitions, implementations, libraries)\" as one of the paper topics sought by the main track of NeurIPS (see https://neurips.cc/Conferences/2022/CallForPapers). In our opinion, the existence of a parallel datasets and benchmarks track should not be the sole grounds for rejection of otherwise legitimate work. We trust that the reviewers and area chair will interpret the NeurIPS call for papers and associated policies without taking this restrictive view which can in our opinion be quite harmful for the NeurIPS community as a whole and disincetivize dataset and benchmark contributions at the main track of NeurIPS.", " Thanks to the authors to spend time addressing my concerns. I think this paper should be submitted to the dataset track, not the main program. My rating remains as reject.", " Thank you R#bG7Q for your time and feedback. Please let us know if we can clarify any remaining questions or concerns. In particular, we hope we have adequately addressed your questions about the scanning hardware and its impact on quality. We are happy to provide additional information during the discussion period.", " Thank you R#dEd3 for your time and feedback. Please let us know if we can clarify any remaining questions or concerns. In particular, we hope we have adequately addressed your questions about dataset and annotation quality, the mobility prediction task definition, and differences compared to Rescan. We are happy to provide additional information during the discussion period.", " Thank you R#gq65 for your time and feedback. Please let us know if we can clarify any remaining questions or concerns. In particular, we hope we have adequately addressed your questions about the hierarchical part labeling and differences compared to ScanNet. We are happy to provide additional information during the discussion period.", " We thank all the reviewers once again for their time and feedback. Please don’t hesitate to follow up if you have any remaining questions or concerns that we can clarify during the discussion period.", " **1. Depth map resolution: Compared to other RGBD datasets with a relatively high-resolution (e.g. at least 480*640) depth maps, the resolution in this dataset is much lower since it used mobile device for scanning. I suspect this could cause problems like losing details of object geometry when doing TSDF Fusion using low-res depth maps. Can the authors comment on this?**\n\nThis is an excellent point. The hardware we used for our data collection (iPhone and iPad devices with built-in LiDAR sensor) provides 256x192 resolution depth which is lower than the nominal spatial resolution of sensors used by some prior work (Kinect v1 for NYUv2, Structure ST01 for ScanNet, and Kinect v2 or “Azure Kinect”). These sensors can provide depth at 640x480 or higher nominal resolution. 
However, the effective depth resolution is significantly lower and correlated with the resolution of the projected infrared (IR) fixed dot pattern for IR-based sensors (Kinect v1 and Structure ST01), or the use of pixel binning to improve range and range resolution for time-of-flight sensors (Kinect v2 / Azure Kinect).\n\nIn early prototyping of our data collection workflow, we compared overall 3D reconstruction quality between the Kinect v1 / Structure sensors used for ScanNet and the built-in LiDAR on iOS devices. We found that the quality obtained with the latter overall matched and sometimes even exceeded that obtained by depth sensors with higher nominal depth resolution. We attribute this to improved depth accuracy and improved frame-to-frame tracking. Moreover, the devices we used operate at a fairly high RGB resolution (1920x1440) and at high frequency (60Hz), enabling better tracking and high-resolution surface texture acquisition, which is a significant reconstruction quality differentiator relative to prior work (see statistics comparing MultiScan with ARKitScenes in the common response to all reviewers).\n\n**2. Although using a mobile device for scanning can certainly help the scalability of the data acquisition, they often provide worse depth maps than dedicated depth sensors, such as the Microsoft Azure Kinect. Did the authors compare the depth map quality between depth sensors when deciding on the scanning device?**\n\nWe agree that using dedicated depth sensors can give higher-quality depth maps that can improve reconstruction. However, our focus is to develop a pipeline that can be used with commodity hardware available to many people. We believe that research on real-world 3D scene representations is bottlenecked by the availability of 3D data, and that by using sensors on mobile devices we can enable collection of more spaces. Note that our pipeline for reconstruction and annotation can still be used by those with access to the Azure Kinect and other dedicated depth sensors.\n\n**3. It would be helpful if the author could provide an example of the scan and annotations.**\n\nPlease see the common response for a description of more examples of scan reconstruction quality available at this anonymous URL: [https://multiscan3d.github.io/](https://multiscan3d.github.io/).\n\n**4. Missing explanations of some details**\n\na. *At line 132, “We also filter out pixels where depth changes by more than 5cm between adjacent frames.” Why is this useful and correct?* We found empirically that depth values that change rapidly between adjacent frames tend to be noisy. When these values are filtered out, the resulting reconstruction is cleaner, with fewer floating artifacts (see supplement L49-54, and supplement Figure 2). We found that using depth confidence values alone was not sufficient for removing such floating artifacts.\n\nb. *At lines 135-136, how are the voxel size and truncation distance selected?* We tried different voxel sizes and SDF truncation values on reconstructions of several sample scans. We found that a truncation distance of 0.08m and a voxel size of 9.77mm gave the best tradeoff between having sufficient resolution and introducing noise. More specifically, we started with the Open3D defaults and compared truncation values of 0.05m and 0.08m, and voxel sizes of 5.86mm, 9.77mm, and 19.10mm, and found that values of 0.08m and 9.77mm respectively worked best overall. Smaller voxel sizes for the TSDF volume provide higher resolution and can give more detailed surface reconstructions. 
However, if the depth maps are not accurate, higher resolutions also introduce additional reconstruction noise. A smaller truncation distance helps preserve more details in the scene, but is again vulnerable to depth noise. (A minimal Open3D sketch of this integration step, using these values, is included after the reviews below.)\n\n**5. Scanning device and depth sensor consistency. The authors mentioned they developed both iOS and Android apps for scanning, but in the paper it seems like only iOS devices are used. Is that correct?**\n\nYes, we only used iOS devices for data collection. This was because we did not have access to Android devices with depth sensors. We developed our pipeline to also support Android devices so that it is useful to a larger number of potential users.", " **1. Lack of examples of 3D scans to assess the quality of the reconstruction**\n\nPlease see the common response for a description of more examples of scan reconstruction quality available at this anonymous URL: [https://multiscan3d.github.io/](https://multiscan3d.github.io/).\n\n**2. Scans appear to be of lower quality than other similar datasets such as ARKit scenes**\n\nARKitScenes [8] scans are PLY format mesh files with vertex colors only, unlike the textured mesh reconstructions we create in MultiScan. In addition, ARKitScenes does not provide semantic instance segmentations (only object bounding boxes). See the common response for some comparative statistics between ARKitScenes and MultiScan.\n\n**3. The main body of the paper should clearly state what annotations are provided with the dataset**\n\nThank you for the suggestion. We will move relevant information from the supplement (Section A.3, L95-97, L122-143) to the main paper.\n\n**4. Are the instance segmentations provided consistent across scans? What if there are multiple instances of the same object?**\n\nThese are annotated as distinct object instances. Note that the focus of our work is not to identify which objects in a scan are the “same object repeated”, but to connect objects in different temporal states. Unlike Rescan [9], which has a proportionally large number of scenes with the same object repeated multiple times (same style of chair), our scenes are mostly taken from home environments where this occurs less frequently. Our focus is also more on articulated objects such as kitchen cabinets and their changing states. While instances of such objects can be similar to one another, they are typically fixed in place and cannot be moved from one position to another.\n\n**5. Most of the related work section is focused on static datasets**\n\nPlease see L64-77 for a discussion of “Interactive environments and objects”, which includes datasets of articulated objects and CAD scenes, and L78-93 for a discussion of “Reconstruction of articulated objects”. If there are any additional dynamic datasets that should be discussed, please let us know.\n\n**6. “I wasn't able to completely understand the problem statement in 5.2 (Mobility prediction). From my understanding, the input is a point cloud Q at time t and the goal is to predict point cloud Q' at time t`. Is this problem even possible?”**\n\nThis is a misunderstanding. That is not our mobility prediction problem statement. We set up mobility prediction similarly to prior work, where the input is a point cloud Q and the goal is to predict what parts can move and their motion parameters in 3D. Specifically, we predict a set of moving parts with their motion types, motion axis direction, and origin for rotational joints (see L257-260). 
With this predicted information, it is possible to convert the point cloud Q into a dynamic point cloud (e.g., the cabinet door can be opened/closed by taking the points associated with the cabinet door and rotating them about the motion axis). However, we do not attempt to predict a different point cloud Q’ at time t’.\n\n**7. What sort of articulated objects captured by this approach would not be possible with the Rescan methodology?**\n\nOur annotation pipeline allows for the annotation of parts and part mobility information, which was not possible with the Rescan [9] annotation pipeline. Also note that similarly to ARKitScenes, Rescan does not provide textured mesh reconstructions that can capture finer details which are useful in annotating object parts.\n", " **1. Details of the hierarchical part labels**\n\nThe hierarchical part labels are explained in the supplement L122-124 (*“Annotators provide a label of the form `object_id:part_id = object_category.object_index:part_category.part_index` that is used to identify the object and part category and instance.”*). For instance, we annotate a cabinet with two openable doors as having: one static part `cabinet.1:cabinet.1` and two doors `cabinet.1:door.1` and `cabinet.1:door.2`. Each of the two doors will also have annotated motion parameters (see common response).\n\n**2. How can the hierarchical part labels be used for benchmarks?**\n\nIn our submission, we take the hierarchical part labels and construct three benchmark experiments on different levels of the hierarchy (see Section 5.1).\n1. *Object instance segmentation given the entire scene.* For this task, all part labels belonging to the same object are aggregated into a single object instance (e.g., the three cabinet part labels are combined into one object label `cabinet.1`). See Figure 1 third column and Table 2.\n2. *Part instance segmentation given ground truth object segmentation.* Having the hierarchical annotation allows us to extract individual objects from the scene and construct a benchmark that focuses on the part instance segmentation given the object segmentation. See Figure 1 fourth column and Table 3 left.\n3. *Part instance segmentation given predicted object segmentations.* As we have both the object and part level annotations we then create a combined benchmark where we investigate the performance of part-level instance segmentation at the scene level. We take a two-stage approach where we first predict the objects and then for each predicted object, we perform part-level segmentation. See Table 3 right. As expected, this approach results in lower performance than when ground-truth object segmentation is provided.\nThese three experiments provide initial benchmarks with the MultiScan object-part hierarchy. The MultiScan data can allow for the development of additional tasks (each with their appropriate evaluation metrics). For instance, we also benchmark part mobility prediction for each object (given ground-truth object segmentations) on our dataset.\n\n**3. How is the hierarchical annotation different from ScanNet?**\n\nScanNet [7] does not provide hierarchical annotation of objects and their parts. Please see common response for a detailed list of the annotations that MultiScan provides that are not in ScanNet.\n", " We thank all reviewers for their time and thoughtful feedback. 
The reviewers noted that we contribute a “large scale dataset with a comprehensive pipeline for annotations” (R#gq65), that MultiScan is the “largest dynamic scans dataset to date” (R#dEd3) and is “useful to researchers working on dynamic reconstruction and part mobility prediction” (R#dEd3). Furthermore, R#bG7Q states that “articulation annotation is useful for the community” and that the “efficient capturing and annotation pipeline” enables “other researchers could build upon the code to expand the dataset”. In addition to the scalable acquisition and annotation pipeline and large-scale articulated scan dataset the reviewers have noted, we also carried out a systematic benchmark of methods for part instance segmentation and part mobility parameter estimation, laying foundations for future work on these challenging tasks.\n\nHere, we address questions that are common across reviewers. We also provide responses to specific reviewer questions directly below each review.\n\n**Novelty of MultiScan relative to prior datasets**\n\nMultiScan is the first dataset of articulated, interactive scans of indoor scenes. While there have been prior efforts on articulated single object datasets (both synthetic CAD models and scanned real objects), we provide the first dataset of indoor scenes annotated with movable object parts and their motion parameters. Several annotations are unique to MultiScan:\n1. Semantic instance segmentations for both objects and their parts\n2. Semantically meaningful oriented bounding boxes (OBBs) with consistently defined front and up orientations for every object\n3. Annotation of object parts that can move and how they move, with motion parameters including motion type (revolute vs prismatic), motion axis and origin, and a semantically meaningful motion range (fully closed to fully open)\n\nWe provide the details of the annotation interface and process in the supplement (Section A.3). Also see Table 1 and Section 2 in the main paper for more details on how Multiscan differs from related efforts.\n\n**Scan and annotation quality is hard to judge**\n\nR#dEd3 mentioned it is “difficult to assess the quality of the 3D scans” and R#bG7Q that it would be “helpful if the author could provide an example of the scan and annotations.” We provide higher-resolution animations and interactive 3D mesh visualizations of example scans and annotations from MultiScan at this anonymized URL: [https://multiscan3d.github.io/](https://multiscan3d.github.io/) The page includes all examples from Fig 11 of the supplement as well as additional scans.\n\nNote that our annotation pipeline can also be applied to earlier datasets (ScanNet [7], ARKitScenes [8], Rescan [9], RIO [41]) for part and part mobility annotation. However, these datasets lack textured mesh reconstructions and fine-grained (i.e. at the level of mesh triangles) semantic instance annotations. In MultiScan, we use textured mesh reconstructions and triangle-level semantic annotation to capture finer details that are important for part segmentation. R#dEd3 specifically inquired about MultiScan reconstruction quality relative to ARKitScenes. Unlike MultiScan, ARKitScenes reconstructions are vertex-colored PLY format meshes that do not use texture maps for fine-grained surface detail. Thus, ARKitScenes scans are limited by the geometric resolution of the mesh and often miss or “blur out” details such as handles, knobs and cabinetry edges etc. which are particularly important for moving part annotation. 
To give a general sense of the surface detail captured in our reconstructions, we compare the vertex-based color resolution of ARKitScenes against the texture-based color resolution of MultiScan, in terms of the mean number of color values per mesh surface area unit. ARKitScenes has $0.658\frac{\text{vertices}}{\text{cm}^2}$, whereas MultiScan has $79.5 \frac{\text{texels}}{\text{cm}^2}$ (a more than 100x difference in reconstructed surface color value resolution).\n\nWe are happy to provide further information to clarify any additional questions by the reviewers.\n", " This paper presents MultiScan, an RGBD dataset for indoor scenes with semantically annotated 3D objects. The acquisition approach is similar to ScanNet [7], which used many users with commodity iOS and Android devices with active LiDAR sensors. Compared to existing 3D datasets, this paper achieved dense textured meshes, multi-level hierarchical annotations (needs elaboration), and object-part correspondences with respect to time and motion changes. \n\nWhile the paper presented an interesting extension to existing large scale datasets, I think this paper is more suited for the NeurIPS Datasets and Benchmarks track. The strength of this paper is that it presented a large scale dataset with a comprehensive pipeline for annotations. It also has advantages for recording scenes at multiple timestamps with object motions such as opening a drawer/window. \n\nFor the weaknesses, the paper does not present the details of the hierarchical part labels and how they can be evaluated/benchmarked. It is also not clear how the hierarchical annotation is different from ScanNet [7]. None. I think the limitations have been addressed.", " The paper proposed a new dataset and annotation methodology for 3D scans with articulated objects. Unlike previous datasets such as ScanNet or ARKit scenes, this dataset includes part-level semantic segmentations and part mobility parameters. The authors create a specialized annotation tool for labeling. Strengths:\n* This is the largest dynamic scans dataset to date. The authors propose an annotation pipeline which allows scans to be labeled and collected at scale.\n* Scans are captured using common devices which reflect real use cases.\n* I think this dataset could be useful to researchers working on dynamic reconstruction and part mobility prediction.\n\nWeaknesses:\n* It's difficult to assess the quality of the 3D scans in the paper and supplemental pdf. From the visual examples provided in the paper, they appear to be of lower quality than other similar datasets such as ARKit scenes. Given that this is a dataset paper, some example meshes should have been included in the supplemental to better assess the quality of the reconstructions.\n* The main body of the paper should clearly state what annotations are provided with the dataset. All descriptions of the mobility annotations are quite vague. The most informative descriptions I could find are:\nLn 43: a dataset of densely annotated 3D interiors with object, part, and part mobility annotations\nLn 159: Object and part instances are correlated across scans of the same scene with consistent object and part IDs\nLn. 172: We define an articulated object to be an object consisting of rigid parts that are connected by joints\n\nFrom these descriptions I am still not sure exactly what form of annotations is provided (i.e. object poses, axes of rotation, motion constraints?). 
Also, what if the scene contains multiple instances of the same object (for example, multiple instances of the same chair)? In that case, it doesn't seem possible to match instances. I believe Rescan [9] solved this problem using a permutation matrix between object instances.\n\nBasically, the Dataset section (Sec 4) of the paper really needs to be expanded, whereas some of the details in the acquisition and processing stages can be moved to the supplemental.\n\n* Most of the related work section is focused on static datasets (e.g. ScanNet, ARKitScenes) whereas the overall goal of this dataset is more similar to Rescan [9]. I think more discussion of dynamic datasets is warranted. \n* The motion parameter annotation methodology is not really a novel contribution as the authors note in the supplementary material that it is adopted from Xu et al. [47] with the addition of defined open/closed states. \n* I wasn't able to completely understand the problem statement in 5.2 (Mobility prediction). From my understanding, the input is a point cloud Q at time t and the goal is to predict point cloud Q' at time t`. Is this problem even possible? If a door is opened, wouldn't it be equally probable that it (1) opens, (2) remains half-open, or (3) becomes fully open? I feel like I am missing something here; if the authors could clarify, that would be helpful.\n* I am not really sure if there are any particular benefits of this dataset that could not be achieved by scaling up Rescan [9]. What sort of articulated objects captured by this approach would not be possible with the Rescan methodology?\n* Are the instance segmentations provided consistent across scans? What if there are multiple instances of the same object?\n* Regarding the last point above, could the authors provide some clarification regarding the mobility prediction task? Limitations are adequately discussed", " This paper presents a new RGBD dataset called MultiScan focusing on articulated objects. When it is released, this dataset will provide raw RGBD data, camera poses, geometry reconstruction, and semantic annotation at the object level for each scan. The paper provides a detailed explanation of data acquisition and annotation. It also provides benchmarks on instance/part segmentation and mobility prediction in the experiment section. Strengths:\n- The dataset, especially with the articulation annotation, is useful for the community.\n\n- The code and data will be publicly available under permissive MIT and CC-BY-NC licenses. Since the capturing and annotation are quite efficient, other researchers could build upon the code to expand the dataset.\n\nWeaknesses:\n- Depth map resolution:\nCompared to other RGBD datasets with relatively high-resolution (e.g. at least 480*640) depth maps, the resolution in this dataset is much lower since it used a mobile device for scanning. I suspect this could cause problems like losing details of object geometry when doing TSDF Fusion using low-res depth maps. Can the authors comment on this? 
\n\n- Demo in supplementary material\nIt would be helpful if the author could provide an example of the scan and annotations.\n\n- missing explanation on some details\nAt line 132, \" We also filter out pixels where depth changes by more than 5cm between adjacent frames\" Why is this useful and correct?\nAt line 135-136, how the voxel size and truncated distance are selected? - Scanning device and depth sensor consistency\nThe authors mentioned they developed both iOS and Android app for scanning, but in the paper, seems like only iOS devices are used. Is it correct? If both iOS and Android are used, presumably they use different depth sensors, then should make sure the same scene scanned by different devices should produce the consistent reconstruction. N/A" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "hGjTYegkgu", "XKWPZvuyPI", "yLHZ0mfWAq", "K5ADA_8ixwL", "XKWPZvuyPI", "sG-YfIucLF", "biZuN7V3kJj", "6E4lU7FKknX", "VTSCtlgUGnr", "nips_2022_YxUdazpgweG", "nips_2022_YxUdazpgweG", "nips_2022_YxUdazpgweG", "nips_2022_YxUdazpgweG" ]
nips_2022_qq84D17BPu
Toward Equation of Motion for Deep Neural Networks: Continuous-time Gradient Descent and Discretization Error Analysis
We derive and solve an ``Equation of Motion'' (EoM) for deep neural networks (DNNs), a differential equation that precisely describes the discrete learning dynamics of DNNs. Differential equations are continuous but have played a prominent role even in the study of discrete optimization (gradient descent (GD) algorithms). However, there still exist gaps between differential equations and the actual learning dynamics of DNNs due to discretization error. In this paper, we start from gradient flow (GF) and derive a counter term that cancels the discretization error between GF and GD. As a result, we obtain EoM, a continuous differential equation that precisely describes the discrete learning dynamics of GD. We also derive discretization error to show to what extent EoM is precise. In addition, we apply EoM to two specific cases: scale- and translation-invariant layers. EoM highlights differences between continuous and discrete GD, indicating the importance of the counter term for a better description of the discrete learning dynamics of GD. Our experimental results support our theoretical findings.
Accept
Reviewers were unanimous in recommending that the paper be accepted, and I accordingly recommend the same. I encourage the authors to take into account suggestions made by reviewers so as to further improve the text in the camera-ready version.
test
[ "AqAhq4j-fb", "KQgPqkd6lCc", "t0Cfu9_AZjx", "EXsuZlV5WIm", "OqeDadyeolJw", "tq2sm42Fpnm", "gZNtkH7td47i", "T6EyO-po4xw", "D20lsHVxobp", "y6_7_ZmMu3o", "wG_B2fsmud", "wXJelpaxe_-", "m2ublMtaNAc", "Iz50w-RRmB", "oy8p_VG_H5C", "w9yZ-gYqS5", "L-ZkcQZ90xd", "X_Fgcbk4OfT", "EJ1OTwhJ3xD", "MzXKD3ywbEb" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We really appreciate your suggestions!\nUsing a tiny synthetic dataset and an extremely small network would be a nice idea.\nWe will keep trying for possible future updates.\nWe agree it would make our paper much stronger.", " This reasoning is understandable and I accept it. Could it be possible on a network of maybe even 10 parameters, using some sort of synthetic system rather than real data. Maybe it could be a single layer? Or maybe a full network could be trained first, with the first layer being a convolutional layer with possibly 9 parameters. Then freeze all layers except the first, reinitialize the first layer, then train from there?\n\nI understand this is very difficult with the remaining time, let alone the computational/memory costs. But I do think it would make the paper so much stronger, I endorse the paper regardless.", " Thank you for your support!", " I appreciate the authors’ thorough response to my questions. I still think the paper is worthy of publication, thus keep my score (7) with a higher confidence.", " Thank you very much!", " I thank the authors for the clarifications and the details response delineating the differences with the previous work. With this in mind, I will update my score.", " Thank you for your response!\n\n> I would still like to see an experiment on possible the second order term, just to show the difference it can make (which is hopefully small). Maybe this could be done on a very small network? With only a few hundred parameters, just to show the results I think this would improve this paper even more.\n\nWe tried to run experiments using the second-order term, but we have a hard time removing the OOM error even though we use a network with only a few hundred parameters.\nTo use the second-order term, we have to compute $\\boldsymbol{g}^\\top \\nabla H \\boldsymbol{g}$, $H^2 \\boldsymbol{g}$, and $H\\boldsymbol{g}$ at a time. The first problem is the computation of $\\nabla H$, i.e., the third derivative of the loss function. The second problem is the multiplications of $\\nabla H$, $H$, and $\\boldsymbol{g}$ (In the experiments in the paper, we addressed this problem by using a numerical differentiation described in Appendix E).\nWhat is worse, the memory consumption multiplicatively increases due to the third problem, namely, the full-batch training.\nWe will keep addressing these problems.\n\nThe second-order term is $\\eta$ $(\\sim 10^{-2}, 10^{-3})$ times smaller than the first-order term. Thus, we believe that although adding this term will reduce the discretization error in accordance with our theory, the reduction would be very small.\n\nAgain, thank you for strongly supporting our paper!", " Dear authors,\n\nThank you for your responses, which answer the majority of my questions well. I would still like to see an experiment on possible the second order term, just to show the difference it can make (which is hopefully small). Maybe this could be done on a very small network? With only a few hundred parameters, just to show the results I think this would improve this paper even more.\n\nIn light of other reviews placing the work into more context with surrounding literature (thank you to the other reviewers), I have to drop my score slightly. 
I still recommend accept and think this is a nice paper.", " Thank you for your time and efforts.\nWe truly appreciate your perspective on the significance and our contributions.\nWe will include all the suggestions in an updated version.\n\n\n> Line 240 (Definitions), a symmetry is defined as a transformation of parameters such that the loss function is unchanged. Is this definitely correct? Should it not be a transformation that leaves the predictions unchanged?\n\nThere are mainly two definitions of symmetry of weight parameters: one is the invariance of the output of the network (e.g., [[Godfrey+, 2022](https://arxiv.org/abs/2205.14258)]), and the other is the invariance of the loss function (e.g., [31, 32]). Our definition follows the latter.\n\n\n> Is it possible to run similar experiments for layers that do not have any symmetry, such as a standard linear layer? This could make a nice piece of future work.\n\nThank you for the suggestion!\nIt is interesting to analyze the differences between the dynamics of weight parameters with and without symmetry.\nWe expect that \n1) compared with scale-invariant layers, the decay dynamics of weight norms are dominated by not only weight decay but also the loss gradients (in the sense that the weight update contains a lower-order contribution of $\eta$ than for scale-invariant layers) if the symmetry is absent, which makes the analysis difficult except for the limiting phase ($t \rightarrow \infty$), and\n2) compared with translation-invariant layers, not only $\theta_{A\parallel}$ but also $\theta_{A\perp}$ is affected by the loss gradients if the symmetry is absent.\n\nThus, we will see clear empirical differences between the layers with and without symmetry.\n\n> It seems that the majority of experiments were run using the first term in the expansion of $\xi$. Is it possible to run an ablation where we see how adding the next term affects results, and on and on until say the 5th term (to justify using only the first term)? I appreciate this requires huge amounts of memory for the higher derivatives.\n\nThat is an important experiment to check the validity of using only the first term, but the experiment with the second- and even higher-order terms requires computing higher-order derivatives of the loss function, which is prohibitively heavy (Section 6), as you mentioned.\nSuch a heavy computation could be circumvented by applying Hessian-free optimization [51] and other related techniques, but it is out of our current scope because it requires additional implementations of other technical algorithms.\nHowever, we can infer under what conditions the higher-order corrections cannot be neglected (Figures 2 and 3). \nFigure 2 shows that the higher-order correction dominates the early phase of training.\nFigure 3 shows that we need the higher-order corrections to keep discretization error small for a large learning rate of the order of $\sim 10^{-1}$.\nWe avoid these two effects in the experiments in Section 5 by 1) fitting the decay curves in sufficiently long runs to avoid the effects of the early training phase and 2) using small learning rates ($\ll 10^{-1}$). 
They make the first-order approximation reliable.\n\n> Apart from the theoretical contributions (which are already fantastic), do you envision any practical applications of this work?\n\nThank you for the question.\nIt would be difficult to directly apply our theory to practical problems, but our work lays foundations for formally quantifying the discrepancy between GF and GD, which is often missing in the literature, and thus bridges discrete and continuous analyses of GD algorithms. \nSuch an analysis eventually leads to a deep understanding of the learning dynamics of DNNs, which will be the foundation of new learning algorithms and models for real-world problems.\n\nThank you again for the support and helpful feedback!", " > The proposed method is not guaranteed for GD with a large learning rate, thus cannot be used for explaining some interesting phenomena, e.g., the regularization effect of an initial large learning rate. However, I think it is not a crucial drawback of this paper, considering the essential assumption of GF.\n\nWe agree that the analysis of GD's learning dynamics with a large learning rate is an exciting direction for future work.\nWe are aware that our analysis cannot be applied to the large-learning-rate regime because we assume learning rates must be sufficiently small. \nThis condition is hard to avoid because, as you mentioned, it is the essential assumption of general GF-type methods.\n\nOne approach is to include as many higher-order corrections to EoM as possible, which prevents discretization error from diverging.\nHowever, this approach is not relevant when the step size is as large as $O(1)$, where the series expansion with respect to the step size diverges.\n\nThere are interesting phenomena in the large-learning-rate regime: unstable convergence with large learning rates, escaping from local minima, and the regularization effect of an initial large learning rate.\nWe do not have any other concrete and relevant ideas to tackle these interesting problems so far, but discussions are welcome!\n\n\n> Is there any idea that can tighten the learning rate bound (13) more efficiently?\n\nThat is a very interesting question.\nBasically, it may be difficult to tighten the bound in our setting because this bound is deeply related to the error bound of the classic Euler method. Tightening the bound may be possible by restricting the conditions on the objective function, e.g., assuming convexity.\n\n---\nIn light of our responses, we would greatly appreciate it if you would be able to consider raising your score. Thank you!", " Thank you for your time and efforts.\nWe agree with the summary and strengths given in the review, and we really appreciate your perspective on the significance and novelty of our contributions.\nWe will incorporate all the suggestions in an updated version.\n\n> I am not sure whether the approach can be easily generalized for the mini-batch stochastic gradient descent method.\n\nThere are a few gaps in generalizing our approach to mini-batch SGD (and other practical optimizers).\nAn interesting approach is to extend the error analysis of the Euler-Maruyama method to include the counter term. \nThe first-order error analysis is given in [23, 24], and a general error analysis between SGD and SDE is given in [67]. They can be a good starting point. 
\n\nIn addition, several recent papers [7, 9, 13, 14, 16, 54, 55, 56] proposed variants of gradient flow, and we believe our work lays foundations for formally quantifying the discrepancy between these variants and the practical optimizers they aim to represent.\n\nWe mention these points in Section 7 and Appendix G.\n\n> While the authors theoretically prove high-order corrections are required to cancel the leading order of discretization error, it will be great if the authors (1) experimentally show the discrepancy between the GF with the proposed correction and that with a first order correction, and (2) demonstrate the former can approximate GD well compared the latter, e.g., in Figure 2 or Figure 4.\n\nThank you for the constructive suggestion! \nYes, it is definitely interesting to experimentally compare the first-order correction with the higher-order corrections.\nUnfortunately, we use only the first-order correction in the experiment because the higher-order corrections require the third- and higher-order derivatives of the loss function and are extremely memory-consuming (Section 6). \nSuch a heavy computation could be circumvented by applying Hessian-free optimization [51] and any other related techniques, as discussed in Section 6, but it is out of our scope because it requires additional implementations of other technical algorithms.\nInstead, we focus on theoretical analysis of our approach and on experiments in a minimum setting for proof of concept.\nOur experimental settings highlight the differences between the counter term equal to zero and the non-zero counter term, using only the first-order correction (Section 4.1).\n\nPlease note that we can infer the effect of the higher-order corrections in the experiment only with the first-order correction. \nWe discuss in Section 4.1 (Figure 2) that the higher-order corrections dominate the early phase of training, and therefore we can expect that the presence of the higher-order corrections approximates GD better even in the early phase of training.", " > the authors do apply their analysis to characterize the learning dynamics of scale and translation invariant layers and show that with the inclusion of the first order (adding higher order counter term is going to be computationally expensive) they are able to better predict the decay of parameter norm, which is interesting but not that surprising.\n\nLet us clarify our contributions in Section 5, where we applied EoM to scale-invariant layers and translation-invariant layers.\n- We showed that a modification to the decay rate of the weight norm given in [33] is needed because [33] ignores discretization error. \n- We also showed that GF cannot reproduce the limiting dynamics ($t\\rightarrow\\infty$) of the weight norm and angular update of scale-invariant layers unless the counter term is included; in other words, we explicitly provide a counterexample that GF cannot fully explain the learning dynamics of GD due to discretization error. Therefore, our work sheds light on the importance of discretization error, which is often missing in the literature on continuous-time GD. More discussions on the differences between GF and GD are given in Appendix C and D.\n- We are the first to derive the dynamics of the whole weights in translation-invariant layers, which are equipped with almost all the networks that use the softmax loss.\nwe empirically showed that our theoretical prediction of decay rates dramatically matches empirical results (Table 1). 
Notably, Table 1 shows that our theoretical prediction captures the differences of the order of $5 \\times 10^{-7} \\sim 5 \\times 10^{-10}$. \nThere are few works that conduct such a precise experiment in deep learning, to our knowledge.\n\n---\nIn view of our responses, we would greatly appreciate it if you would be willing to consider raising your score. Thank you very much!", " We thank the reviewer for their time and efforts. \nWe will incorporate all the suggestions in an updated version.\n\n> My main concern is that this paper does not provide any interesting new result.\n\nWe respectfully disagree with this point. \nLet us clarify our contributions and novelty and solve your concerns below.\n\n\n> The main novelty of the paper is the general form of the counter term (which is derived in Theorem 3.3), as opposed to previous work for example Barrett and Dherin (Implicit gradient regularization) which only uses the first order term as their regularization term.\n\nThank you for appreciating the novelty. Yes, it is one of our contributions.\n\n> While the authors mention that they derive the discretization error (in Corollary 4.1), the precise formulation is not provided ...\n\nPlease see Appendix A.4, where we provide the precise formulation (proof) of Corollary 4.1.\n\n> ... (in Corollary 4.1) ... the rate is provided as an upper bound (using Big-OH) which I believe is an artifact of standard series expansion results.\n\nCould you please clarify what you mean by ``an artifact of standard series expansion results''?\nWe are open to discussion!\n\n\n> Furthermore, for most of the results on the discretization error bounds and the upper bound on the learning rate, the authors assume that the counter term is either equal to zero or assume the first order counter term (i.e., the term in equation 9).\n\nWe use the counter term equal to zero and the first-order counter term here (Section 4.2 and Appendix A.6) in order to 1) compare bounds with a previous study [18] (Corollary 4.2) and 2) confirm that the counter term improves the bound compared with the counter term equal to zero (Corollary A.1).\nNote that these results can be easily extended to higher-order terms, as is mentioned at the end of Section 4.2. Specifically, the bound is $\\eta < O((\\epsilon / k)^{1/(\\gamma+2)})$ when we use a $O(\\eta^\\gamma)$ counter term.\n\n\n> With the counter term equal to 0, the analysis matches a lot of the previous work (example Elkabetz and Cohen, [18]) and with the first order term the main theoretical results are very similar to the ones already established in Barrett and Dherin (who introduce the first order counter term as the implicit regularizer and the error is discussed in Theorem 3.1)\n\nWith the counter term equal to zero, our analysis DOES NOT exactly match [18]. 
\nThe comparison regarding this point is provided in Lines 223--228.\nSpecifically, the bound given in [18] includes factors that depend on both the step size and the smoothness of the loss, which make it difficult to separate the dependence of the bound on the step size, while our bound has an explicit and separate dependence on the step size and the smoothness of the loss (Corollary 4.2).\n\nWith the first order term, our theoretical results (Corollary 4.1) have crucial differences from Theorem 3.1 in Barrett & Dherin [27]: 1) [27] is a local error analysis (one-step discretization error), while ours is a global error analysis (whole-steps discretization error), 2) [27] gives only the order of local error with respect to the step size, while we provide the pre-factors of the step size as well (Corollary A.1), 3) [27] is the first-order error analysis, while ours can be extended to the all-order error analysis. \n\nTherefore, our results provide independent contributions compared with [18] and [27].", " > Can you clarify the connection between this correction term and the global truncation error of Euler's method (i.e. a bound on $e_k$)? I expected them to look very similar but seems to not be the case.\n\nA well-known global truncation error of Euler's method is $|| {\\boldsymbol{e}}_k || \\leq \\frac{c \\eta}{L}(e^{Lt}-1)$, where $\\eta$ is the step size, $L$ is the Lipschitz constant, and $c$ is a constant related to the second derivative. \nThe proof is strongly based on Lipschitz smoothness. \nIf we use the Taylor series instead of it, we can obtain $|| {\\boldsymbol{e}}_k || \\leq k\\eta^2 C + O(\\eta^3)$ (Equation (50) in Appendix A.5), where $C$ is a constant related to the smoothness of the trajectory. \nThis is what is done in our proof of Corollary 4.2 (Appendix A.5).\nThough these two bounds are not exactly the same, they share several characteristics; the bound becomes large when 1) $k$ (or $t$) is large, 2) the objective function is non-smooth, and 3) the step size is large. \n\n> Depending on the step size, Euler's method can result in inconsistencies where the discretization completely diverges from the gradient flow, in particular when the step size is much larger than the Lipschitz constant of g, and results in a discretization that cannot be meaningfully modeled by any ODE. Does the equation of motion reflect this kind of behavior in any way?\n\nYes. Please see Figure 3, where we show that the discretization error between GF and EoM blows up for a large learning rate.\nThis result is consistent with our theoretical findings (Corollary A.1).\nWe can expect that this divergence becomes less likely when we add higher-order counter terms to EoM (Corollary 4.1).\n\n> Can the correction term be used to correct the drift of the SDE formulation to stochastic gradient descent?\n\nThank you for the question. \nIt is exactly a part of our future work discussed in Section 7 and Appendix G, where we cite [23, 24, 67].\nAn interesting approach is to extend the error analysis of the Euler-Maruyama method to include the counter term. \nThe lowest-order error analysis is given in [23, 24], and a general error analysis between SGD and SDE is given in [67], and thus they can be a good starting point. 
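\n\nAs a side note on the truncation-error question above, the growth of the GD-GF gap is easy to probe numerically (a toy illustration of ours, not from the paper; the constants `lam`, `theta0`, and the step sizes are arbitrary choices). On a quadratic loss the gradient flow is available in closed form, so the global discretization error can be measured exactly:\n\n```python\nimport numpy as np\n\n# L(theta) = 0.5 * lam * theta**2: gradient flow gives theta(t) = theta0 * exp(-lam * t),\n# while GD (the Euler scheme) gives theta_k = theta0 * (1 - eta * lam)**k.\nlam, theta0 = 1.0, 1.0\nfor eta in [1e-3, 1e-2, 1e-1]:\n    k = np.arange(1, 201)\n    gd = theta0 * (1.0 - eta * lam) ** k\n    gf = theta0 * np.exp(-lam * eta * k)  # gradient flow evaluated at t = eta * k\n    print(eta, np.abs(gd - gf).max())     # worst-case global error over the first 200 steps\n```\n\nConsistent with the discussion above, the measured gap shrinks roughly linearly in $\\eta$ for small step sizes.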
\n\nIn addition, several recent papers [7, 9, 13, 14, 16, 54, 55, 56] proposed variants of gradient flow, and we believe our work lays foundations for formally quantifying the discrepancy between these variants and the practical optimizers they aim to represent.\nWe discuss these points in Section 7 and Appendix G as well.\n\n> Can Corollary 4.2 be experimentally verified?\n\nYes. Please see Figures 3 and 4. \nThey are consistent with Corollary 4.2; i.e., 1) the discretization error blows up for a large learning rate ($\\eta=10^{-1}$ in Figure 3), 2) it increases as the number of steps increases (Figure 4), and 3) most of it is produced in the early phase of training, where the objective function tends to be non-smooth, and the gradients tend to be large.\n\n---\nIn light of our responses, we kindly ask that you consider raising your score. Thank you very much!\n", " We thank the reviewer for their time and efforts, as well as their valuable comments. \nBelow, we address the comments and questions raised in the review.\n\n> I imagine the theory (of estimating discretization error of ODEs) has been done before, though perhaps not in this exact context. \n\nWe discuss this point in Section 3.3.\n\nEquation (8), which gives the discretization error in accordance with Corollary 4.1, can be found in [35] as a higher-order backward error analysis (different context from ours, though). However, our derivation has independent contributions: 1) we clarify that the counter term cancels the leading order of discretization error (Theorem 3.2), and 2) we find that the discretization error itself is also given by the counter term (Corollary 4.1).\n\n> The zero-th order term (Eq 9) has shown up in machine learning studies before.\n\nWe also discuss this point in Section 3.3.\n\nEquation (9) often appears in the literature on backward error analysis [21, 35] and its related topics in machine learning, e.g., [23, 24, 27, 28, 31, 41]. \nTypically, Equation (9) is added to gradients in continuous equations (e.g., SDE) to close the gap between continuous equations and discrete algorithms (e.g., SGD) by canceling (at least **first-order**) discretization error. \nHowever, **higher-order** discretization error is neglected in these studies.\nIn contrast, our solution (Equation (8)) cancels **all** orders of discretization error.\n\n> It's not clear to me if we gain anything from using the higher-order terms in the derivation, as the analysis and experimental results all use only the zero-th order term.\n\n**Not** all the analysis and experimental results use the zeroth-order term. \nLet us clarify why we used the zeroth-order term in a part of our paper and summarize our contributions related to the higher-order terms.\n\nMost of the texts up to Section 4.1 (except for the experiment) are dedicated to the all-order counter term. Equations (9) (zeroth), (10) (first), and (12) (zeroth) are presented to simply give an intuition to complicated equations by showing simple low-order examples. 
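\n\nFor intuition, the lowest-order correction can be computed directly with automatic differentiation. The sketch below uses the commonly cited form $\\frac{1}{4}\\nabla \\|\\nabla L\\|^2$ (a generic illustration; the function name and toy loss are ours, and this is not a verbatim transcription of our Equation (9)):\n\n```python\nimport torch\n\ndef corrected_flow_field(loss_fn, theta, eta):\n    # Gradient-flow vector field plus the leading counter term\n    # (1/4) * grad ||grad L||^2, scaled by the learning rate eta.\n    g = torch.autograd.grad(loss_fn(theta), theta, create_graph=True)[0]\n    xi0 = 0.25 * torch.autograd.grad((g ** 2).sum(), theta)[0]\n    return -(g + eta * xi0)\n\n# Usage: theta = torch.randn(5, requires_grad=True)\n#        field = corrected_flow_field(lambda p: (p ** 4).sum(), theta, eta=1e-2)\n```\n\nHigher-order terms follow the same pattern but require third- and higher-order derivatives of the loss, which is exactly the memory bottleneck discussed below.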
\nAt the end of Section 4.1, we demonstrate that the higher-order terms dominate the early phase of training.\n\nIn Section 4.2 and Appendix A.6, we provide the learning rate bounds for 1) the counter term equal to zero and 2) the zeroth-order counter term in order to 1) compare bounds with a previous study [18] and 2) confirm that the counter term improves the bound.\nThese results are not limited to these low-order cases but can be easily extended to higher-order terms, as is mentioned at the end of Section 4.2.\n\nIn Section 5 and experiments, we use the zeroth-order term because 1) we would like to simplify the analysis, 2) we need to avoid computing higher-order derivatives of the loss function (e.g., the derivative of a Hessian) to simulate EoM, which is prohibitively memory-consuming, and 3) we can at least see the differences between GF with and without the counter term (mentioned at the beginning of Section 5) even when we only use the zeroth-order term.\nHowever, please note that the zeroth-order term dominates the counter term and thus is sufficient to analyze the effect of the counter term when the step size is small.\n\nIn Appendix G, we provide a higher-order correction to the decay rate of scale-invariant layers, which implies that the higher-order corrections increase the decay rate and allow it to approach GD's decay rate.\n\nOur paper reveals the above findings from the higher-order terms.", " We thank the reviewers for their careful reading to appreciate the strengths of the paper:\n- Excellent presentation [NdAp]\n- Excellent soundness [UB6r]\n- Excellent contribution [UB6r]\n- Interesting take which attempts to correct the theoretical ODE formulation in order to match practice [NdAp]\n- Novelty of the general form of the counter term [2BkU]\n- The derived counter term is novel and seems to be useful to predict and interpret complicated learning dynamics of deep neural network models [BEXW]\n- Able to better predict the decay of parameter norm, which is interesting [2BkU]\n- Contribution that the authors introduce a technique in the numerical analysis field to the deep learning field [BEXW]\n- The experiments are extensive and support the theory [UB6r]\n- The paper shows many impressive theoretical results [UB6r]\n- The motivation for the paper is very clear [UB6r]\n\nPlease find our official comments in each of the review threads. \n\nWe are looking forward to discussing our paper with all of you!", " This paper derives a counter term to the gradient flow ODE formulation that reduces the discretization error from Euler's method, which is gradient descent. When this correction term is expanded as a Taylor series, adding a select number of terms reduces the discretization order accordingly. This is then used to analyze the behavior of GD under symmetry constraints, specifically scale- and translation-invariant parameters. Specifically, this adds learning-rate-dependent correction terms to the decay rates of certain quantities, which matches gradient descent in practice. Pros:\n - Quite an interesting take which attempts to correct the theoretical ODE formulation in order to match practice (as opposed to bridging this gap by using higher order solvers, for example). \n - The motivation, process, and theoretical results are presented very well. 
I could follow and understand every result (just the results; not the proofs) despite not being an expert in the theory of gradient descent.\n\nCons:\n - I imagine the theory (of estimating discretization error of ODEs) has been done before, though perhaps not in this exact context. The zero-th order term (Eq 9) has shown up in machine learning studies before.\n - It's not clear to me if we gain anything from using the higher order terms in the derivation, as the analysis and experimental results all use only the zero-th order term. Can you clarify the connection between this correction term and the global truncation error of Euler's method (i.e. a bound on $e_k$)? I expected them to look very similar but that seems not to be the case.\n\nDepending on the step size, Euler's method can result in inconsistencies where the discretization completely diverges from the gradient flow, in particular when the step size is much larger than the Lipschitz constant of g, and results in a discretization that cannot be meaningfully modeled by any ODE. I imagine this would correspond to the correction term becoming infinite and the zero-th order approximation to the correction term becoming somewhat meaningless. Does the equation of motion reflect this kind of behavior in any way?\n\nCan the correction term be used to correct the drift of the SDE formulation to stochastic gradient descent? \n\nCan Corollary 4.2 be experimentally verified? .
\n\nFurthermore, for most of the results on the discretization error bounds and the upper bound on the learning rate, the authors assume that the counter term is either equal to zero or assume the first order counter term (i.e., the term in equation 9).\n\nWith the counter term equal to 0, the analysis matches a lot of the previous work (example Elkabetz and Cohen, [18]) and with the first order term the main theoretical results are very similar to the ones already established in Barrett and Dherin (who introduce the first order counter term as the implicit regularizer and the error is discussed in Theorem 3.1)\n\nBesides this, the authors do apply their analysis to characterize the learning dynamics of scale and translation invariant layers and show that with the inclusion of the first order (adding higher order counter term is going to be computationally expensive) they are able to better predict the decay of parameter norm, which is interesting but not that surprising. In the current state the results seem to be very similar to the previous work (as I have mentioned in the previous comment), and besides the series expansion in Theorem 3.3, it is very hard to adjudge the precise novelty of the work, esp since most of the results discussed in the next sections assumes that the counter term is either 0 or they assume the first order counter term. I would really appreciate if the authors could clarify their contribution. \n\nAnother minor comment, I think it would be easier to read the paper (esp the proofs) if the author provided a sketch after the main statement in the main paper as well as in the appendix. Yes the authors have discussed the limitations of their work. ", " This paper deals with the discrepancy between the actual discretized gradient descent and its continuous version, i.e., a gradient flow, for describing the equation of motion of learning dynamics more precisely. The discrepancy error is formally introduced by using the backward error in numerical analysis. The authors derive a counter term, which can compensate for such a discrepancy of the gradient flow, thus can describe the actual discretized trajectories in a continuous manner. While the derived counter term is a complicated functional integral equation, it can be analytically solved (for all orders) by assuming the underlying solution is a power series. As an application, the authors use the derived dynamics with the proposed counter term for investigating scaling- and translation-invariant layers. [Note]\n\nBecause I am not an expert on learning theory, my evaluation might not be exhaustive. Also, I did not read the proofs in the supplementary material carefully.\n\n[Strengths]\n\nTo me, the derived counter term is novel and seems to be useful to predict and interpret complicated learning dynamics of deep neural network models. Although there have been previous studies that incorporate some correction terms with respect to the backward analysis error, to my knowledge they are restricted to 1-st order compensation $\\frac{1}{4} \\nabla || \\nabla f(\\theta)||$, which is generally called an implicit gradient regularization. The proposed counter term is generalized for higher orders and can recover the previous studies well as in (9). While the main result (8) seems to be a known technique in the numerical analysis field, I would like to give appropriate credit to the authors for the contribution that introducing such a technique to the deep learning field well. 
\n\n[Weakenesses]\n\nThe authors address a full-batch gradient descent only. The authors also mention such a limitation of this work in Conclusion and Limitations. I am not sure whether the approach can be easily generalized for the mini-batch stochastic gradient descent method.\n\nWhile the authors theoretically prove high-order corrections are required to cancel the leading order of discretization error, it will be great if the authors (1) experimentally show the discrepancy between the GF with the proposed correction and that with a first order correction, and (2) demonstrate the former can approximate GD well compared the latter, e.g., in Figure 2 or Figure 4.\n\nThe proposed method is not guaranteed for GD with a large learning rate, thus cannot be used for explaining some interesting phenomena, e.g., the regularization effect of an initial large learning rate. However, I think it is not a crucial drawback of this paper, considering the essential assumption of GF.\n\nThe paper is very dense and thus hard to read. A journal format might be more suitable for a clear representation of this work.\n\n\n\n As I mentioned previously, it will be nice if the authors can experimentally elaborate on the usefulness of the proposed generalized counter term compared to the first-order counter term.\n\nIs there any idea that can tighten the learning rate bound (13) more efficiently?\n The authors discuss the limitations of the proposed method, e.g., the lack of concerning the mini-batch stochastic GD and other optimizers beyond GD, in Conclusion and Limitations section. It will be nice if the authors also address the questions raised above.", " This paper is concerned with a theoretical understanding of modelling the dynamics of gradient descent with a differential equation. Previous work (Gradient Flow) describes the differential equation as:\n\n$\\frac{d\\theta}{dt} = -\\nabla_{\\theta} L(\\theta)$\n\nWhich is the Euler discretisation of Gradient Descent:\n\n$\\theta_{t+1} = \\theta_{t} - \\eta \\nabla_{\\theta}L(\\theta_t)$\n\nHowever, discretisation error exists, such that Gradient Flow and Gradient Descent diverge. This paper derives a counter term to Gradient Flow, labelled by $\\xi$:\n\n$\\frac{d\\theta}{dt} = -\\nabla_{\\theta} L(\\theta) - \\eta\\xi(\\theta)$\n\nThe counter term is a functional integral, the paper approximates the counter term with a series solution in $\\eta$, with a recursive relationship existing to get from term $k$ to term $k+1$, the new dynamics are called Equation of Motion (EoM). A limit for the learning rate is also derived, which allows accurate simulation of gradient descent using (EoM) with larger step sizes.\n\nFinally these findings are tested on scale-invariant layers and translation-invariant layers and the results support the theoretical findings. Strengths:\n\nThis is not an area I have significant expertise in, however, overall this paper is very good in my opinion. Specifically:\n\n- The paper shows many impressive theoretical results\n- The experiments are extensive and support the theory\n- The motivation for the paper is very clear\n\nWeaknesses:\n\nAgain, I believe the paper is very good. I think it presents good theoretical results, with sufficient experiment to support this. The only weaknesses overall are in the writing style and presentation, I think the paper is quite math heavy currently, which makes it less accessible. However, this is down to personal preference. 
For example lines 239-245 (Definitions) feel like quite a complicated way of saying most of the meanings. One definition is $\\alpha_{\\mathcal{A}} = \\alpha I_\\mathcal{A} + I_{\\mathcal{A}^{C}}$, but it is easier in my opinion to say $\\alpha_{\\mathcal{A}}$ is $\\alpha$ for the parameters in layer $\\mathcal{A}$ and $1$ for the others. This can be extended for most of the definitions in this paragraph. I think the best way the paper can be improved is by having as much intuition as possible in the main text, with theorems included, and then possibly having more mathematical detail in the appendix.\n\nOther small points are the presentation of results. Figure captions don't have **Figure n** in line with the caption, but over to the left which feels weird. Table 1 could also be improved, rather than listing the decay rates, it might be more informative to list the differences (and relative differences) in decay rates between GF & GD and EoM & GD. The few questions/clarifications I have are:\n\n- Line 240 (Definitions), a symmetry is defined as a transformation of parameters such that the loss function is unchanged. Is this definitely correct? Should it not be a transformation that leaves the predictions unchanged?\n- Is it possible to run similar experiments for layers that do not have any symmetry, such as a standard linear layer? This could make a nice piece of future work.\n- It seems that the majority of experiments were run using the first term in the expansion of $\\xi$. Is it possible to run an ablation where we see how adding the next term affects results, and on and on until say the 5th term (to justify using only the first term)? I appreciate this requires huge amounts of memory for the higher derivatives.\n- Apart from the theoretical contributions (which are already fantastic), do you envision any practical applications of this work? The authors have been upfront with the limitations of their work. These are given in the conclusion and provide a nice avenue for future research, they are about different optimizers and how using minibatches are not accounted for in the current work. I cannot think of any further limitations." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 3 ]
[ "KQgPqkd6lCc", "gZNtkH7td47i", "EXsuZlV5WIm", "y6_7_ZmMu3o", "tq2sm42Fpnm", "m2ublMtaNAc", "T6EyO-po4xw", "D20lsHVxobp", "MzXKD3ywbEb", "wG_B2fsmud", "EJ1OTwhJ3xD", "m2ublMtaNAc", "X_Fgcbk4OfT", "oy8p_VG_H5C", "L-ZkcQZ90xd", "nips_2022_qq84D17BPu", "nips_2022_qq84D17BPu", "nips_2022_qq84D17BPu", "nips_2022_qq84D17BPu", "nips_2022_qq84D17BPu" ]
nips_2022_QFQoxCFYEkA
DENSE: Data-Free One-Shot Federated Learning
One-shot Federated Learning (FL) has recently emerged as a promising approach, which allows the central server to learn a model in a single communication round. Despite the low communication cost, existing one-shot FL methods are mostly impractical or face inherent limitations, \eg a public dataset is required, clients' models are homogeneous, and additional data/model information need to be uploaded. To overcome these issues, we propose a novel two-stage \textbf{D}ata-fre\textbf{E} o\textbf{N}e-\textbf{S}hot federated l\textbf{E}arning (DENSE) framework, which trains the global model by a data generation stage and a model distillation stage. DENSE is a practical one-shot FL method that can be applied in reality due to the following advantages: (1) DENSE requires no additional information compared with other methods (except the model parameters) to be transferred between clients and the server; (2) DENSE does not require any auxiliary dataset for training; (3) DENSE considers model heterogeneity in FL, \ie different clients can have different model architectures. Experiments on a variety of real-world datasets demonstrate the superiority of our method. For example, DENSE outperforms the best baseline method Fed-ADI by 5.08\% on CIFAR10 dataset.
Accept
This work proposes a new one-shot FL algorithm. It consists of two steps on the server: a data generation step that trains a GAN to synthesize data utilizing the local models and a distillation step that distills the ensemble of local models using the generated data. The method has several advantages in comparison with other one-shot FL algorithms. The performance is verified by experiments. One major concern in the reviews was regarding novelty. This has been addressed by the author. Please clarify the following in the final version: 1. Teacher's model: What is the quality of the ensemble model (teacher) in the experiment? Does the distilled model improve over the teacher (similar to self-distillation)? Showing the distillation gap is important to understand how the method works. 2. Contribution of GAN in quality: From a pure quality point of view, what if the original data is used to train the ensemble and distilled models? Please also consider adding privacy-utility trade-offs in the future work. It is true that one-shot FL is in general more secure than multi-round methods, and some DP work can be applied here directly. But showing an on-par or better privacy-utility trade-off is an important justification of why it should be adopted.
train
[ "Lg1fP8eq_zo", "GfW2Dp7GWK", "zQoMREJaEQJ1", "3kaITUVELstv", "-hRE4n1ihee", "qkRAm9yoCH9", "xUsK_edVQ_", "9-lTTxgIgfg", "uw3aVFOFvwD", "4I0WqBZPz6W", "CiwZuDY7r9O", "GQ27_-i_f5X", "2y71PkQ1vb4", "ChTm8qxpsaM", "PRrMzeha2kj", "FmHfLYha6an" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " i have updated my score, thanks.", " Hi\n\nThank you for your detailed response and I have improved your score. ", " Dear Reviewer fNZT,\n\nThank you again for your support of our work and valuable feedback! We tried our best to address all mentioned concerns/problems. Are there unclear explanations? We could further clarify them. Could you please kindly re-evaluate our paper based on the current situation? If you have any further questions, we are very glad to discuss them.\n\nWe highly appreciate knowing if our responses have addressed your initial questions. \n\nThank you a lot!\n", " Dear Reviewer 4PjE, \n\nWe would like to thank the reviewer for taking the time to review our paper and for the comments.\n\nWe have now clarified the significance and the novelty of our method and also show a detailed comparison with FedGen. Note that more detailed information is shown in our rebuttal summary.\n\nPlease kindly let us know if anything is unclear. We truly appreciate this opportunity to improve our work and shall be most grateful for any feedback you could give to us.\n\nThanks again!\n", " > if you need to use any labelled data to train the ensemble on the server\n\nThank you very much for your quick feedback! In fact, as you can see from the code above, we do not require any labeled data for both data generation and model distillation. In our method, the ensemble model is always frozen, and only the **unlabeled synthetic data** is used to train the student model. We hope these new discussion have addressed your concerns about the data privacy issue. \n\nAgain, we highly appreciate knowing if our responses have addressed your questions. We are delighted to answer your remaining concern. We appreciate your inputs and feedback very much. Thank you!", " Thanks lot for your response! I am curious if you need to use any labelled data to train the ensemble on the server? If it does, I am concerned the data privacy issue.", " Dear reviewer,\nCan you please take a look at the author's reply and other reviews and see if they addressed your concerns in any way? Thanks.", " > Please explain the important steps in algorithm 1\n\nSorry for the confusion. Following is a detailed description of our method.\n1) At first, the server side will collect $n$ local models that have been trained to converge (e.g., for 200 epochs). Then ensemble these $n$ models as a teacher model $T$. Randomly initializes a generator $G$ and a student model $S$.\n2) Afterwards, train the generator $G$ and the student model $S$ alternately on the server. \nWe will quickly cleaned our code, benchmark datasets and pre-trained model. Our code will soon be available. 
We attach our key implementation of DENSE below, lightly cleaned up so that it runs as written (the `Hook` utility stores the BatchNorm statistics-matching loss in `r_feature`, as in DeepInversion, and `get_saved_data` returns the synthetic batches saved during generation; their definitions are omitted here).\n\n```python\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nfrom tqdm import tqdm\n\n\ndef ensemble_distill(teacher, student, generator, optimizer_s):\n    # Distill the frozen teacher ensemble into the student using only\n    # unlabeled synthetic data produced by the generator.\n    syn_data = generator.get_saved_data()\n    for syn_images in tqdm(syn_data):\n        with torch.no_grad():\n            t_out = teacher(syn_images)\n        s_out = student(syn_images.detach())\n        distill_loss = F.kl_div(F.log_softmax(s_out, dim=1),\n                                F.softmax(t_out, dim=1),\n                                reduction='batchmean')\n        optimizer_s.zero_grad()\n        distill_loss.backward()\n        optimizer_s.step()\n\n\ndef generate_data(teacher, student, generator, optimizer_g, args):\n    # Hook every BatchNorm layer of the (frozen) ensemble to align the\n    # statistics of synthetic batches with the stored running statistics.\n    hooks = [Hook(m) for m in teacher.modules() if isinstance(m, nn.BatchNorm2d)]\n    for it in tqdm(range(args.iterations)):\n        z = torch.randn(args.batch_size, args.nz)                      # random noise\n        targets = torch.randint(args.num_classes, (args.batch_size,))  # random labels\n        inputs = generator(z)\n        t_out = teacher(inputs)\n        loss_bn = sum(h.r_feature for h in hooks)  # BN statistics alignment\n        loss_oh = F.cross_entropy(t_out, targets)  # cross-entropy on the random labels\n        s_out = student(inputs)\n        # Adversarial term: emphasize samples where student and teacher disagree.\n        mask = (s_out.max(1)[1] != t_out.max(1)[1]).float()\n        kl = F.kl_div(F.log_softmax(s_out, dim=1), F.softmax(t_out, dim=1),\n                      reduction='none').sum(1)\n        loss_adv = -(kl * mask).mean()\n        loss = args.bn * loss_bn + args.oh * loss_oh + args.adv * loss_adv\n        # Only the generator is updated here; the frozen models keep their weights.\n        optimizer_g.zero_grad()\n        loss.backward()\n        optimizer_g.step()\n\n\nfor epoch in range(args.epochs):\n    # 1. Data generation\n    generate_data(teacher, student, generator, optimizer_g, args)\n    # 2. Ensemble distillation\n    ensemble_distill(teacher, student, generator, optimizer_s)\n```\n", " We thank Reviewer fNZT for the careful review. We clarify the points mentioned in the comments as follows.\n\n> Attackers on the server side can still recover the private data based on the model parameters.\n\nAs Reviewer e2Cu said, \"one-shot FL is already more secure than multi-round FL\". We would like to emphasize that our method is more secure than multi-round FL. The reasons are as follows:\n\n- Assuming the client is malicious: Compared to multi-round FL, our method greatly mitigates privacy and security risks during communication. In multi-round FL, attackers can continuously modify the data (data poisoning) or the model (model update poisoning) to change the behavior of the model in some undesirable way. However, if there is only one round of communication, an attacker would have difficulty launching a successful attack.\n\n- Assuming the server is malicious: Yes, some studies [1,2] have shown that even without any real training data, attackers can still recover the data from model parameters. Imagine attacking a well-trained model (84% accuracy) trained in multi-round FL versus a model (63% accuracy) trained in one-shot FL. It is easier for attackers to recover private information from the better model. Thus, we believe that once the central server becomes malicious, the attacker can recover the private data more easily in multi-round FL than in one-shot FL.\n\nIn light of the above discussion, we believe that our approach does not pose any additional privacy concerns in comparison with multi-round FL.\n\nFurthermore, several existing privacy-preserving methods can be incorporated into our framework to protect clients from adversaries. We leave this as our future work.\n\n[1] Yin H, Molchanov P, Alvarez J M, et al. \"Dreaming to distill: Data-free knowledge transfer via DeepInversion\", CVPR 2020.\n\n[2] Yin H, Mallya A, Vahdat A, et al. \"See through gradients: Image batch recovery via GradInversion\", CVPR 2021.\n\n---\n\n> if the labelled data in the server would leak the data privacy?\n\nWe visualize the generated data in Figure 6 (learned from models pretrained on the CIFAR10 and SVHN datasets). 
Clearly, the synthetic data are not similar to the original data, which can effectively reduce the probability of leaking sensitive information of clients.\n\n---\n\n> Why not compare the proposed method with the FedGen method that is data-free FL with knowledge distillation?\n\nThanks for your suggestion! First, we would like to emphasize that FedGen[2] needs to broadcast the generator parameters in each communication round, which means it heavily relies on frequent communication to continuously regulate the local training. It is also worthwhile to note that FedGen provides results only on simple datasets, such as MNIST and EMNIST. However, our approach can be adapted to more complex datasets, such as tiny-imagenet and CIFAR100.\n\nBased on your suggestion, we tried our best to compare FedGen with our method under the same configuration. We conducted experiments on the CIFAR10 and MNIST datasets with $\\alpha$=0.1. Below is a comparison of the two methods in one-shot FL:\n\n| Method | MNIST | CIFAR10 |\n| :----: | :----: | :----: | \n| FedGen | $51.32_{\\pm 1.62}$ | $28.31_{\\pm1.93}$ |\n| Ours | $66.57_{\\pm1.31}$ | $50.31_{\\pm1.56}$ |\n\nWe observed poor performance when applying FedGen to the CIFAR10 dataset. Several researchers have also found that FedGen performs poorly on CIFAR10 (see the related issues in FedGen's GitHub repository). We hope these new results have addressed your concerns. \n\n---\n\n> Please conduct experiments with multiple random seeds\n\nThanks for your valuable comments. The following table shows the results on CIFAR10 and CIFAR100 ($\\alpha=0.5$) using 10 random seeds and reports their average and standard deviation. \n\n| Method | Fed-DAFL | Fed-ADI | Ours |\n| :----: | :----: | :----: | :----: | \n|CIFAR10|\t$58.52_{\\pm 1.37}$ |\t$59.31_{\\pm1.21}$|\t$63.06_{\\pm 1.32}$|\n|CIFAR100|\t$38.34_{\\pm2.03}$|\t$40.06_{\\pm0.95}$|\t$42.56_{\\pm1.41}$|\n\nWe hope these new results have addressed your concerns about the experimental results.\n", " \nThank you very much for your very detailed and supportive comments! We indeed highly appreciate your in-depth thoughts and discussion about our paper. \n\n> move the full algorithm (Algorithm 1) in the main context\n\nThanks for your valuable suggestion. We will move Algorithm 1 to the main text in our final version.\n\n> How does table 6 obtain the results? What is the hyperparameter used?\n\nIn the main text (line 240), we described that the default setting is $\\alpha=0.5$ for non-IID settings, and Table 6 shows the experimental results when $\\alpha=0.5$ on three datasets. Then we report the contributions of the different loss functions used in our method. And yes, we also believe that $z$ and $y$ are somewhat related; here we use the generator to learn a proper transformation.\n\n> How does the author handle the case that for client models, their logits distributions are very different?\n\nThanks for pointing this out. In non-IID FL, we think it is likely to be a case of model overfitting when a certain model makes an overconfident judgment. For example, when there are too many samples in class 0 and too few samples in class 1, the local model is likely to classify test samples belonging to class 1 as class 0. In this way, our ensemble method has been shown to yield robust measures of uncertainty and is capable of distinguishing between different forms of uncertainty. Averaging model logits provides a simple and effective solution, as sketched below. 
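\n\nFor concreteness, the averaging amounts to something like the following minimal sketch (the helper name is illustrative and not from our released code; the heterogeneous client models only need to share the same label space):\n\n```python\nimport torch\n\ndef ensemble_logits(client_models, x):\n    # Average the raw logits of all client models so that no single\n    # over-confident (overfitted) client dominates the ensemble prediction.\n    with torch.no_grad():\n        return torch.stack([m(x) for m in client_models]).mean(dim=0)\n```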
\n\n\n> From algorithm 1, in the inner loop where generator is being updated, the global model's parameter theta_S is fixed. This would affect the quality of loss terms in equation (5) because the global model is still bad\n\nYes, in the early stages of our method, the generator performance was not satisfactory and the student model did not converge. By alternately training the generator and student models, the two models will gradually become more accurate. The idea is somehow similar to adversarial training in GAN. We hope that the student model will be able to learn useful information from synthetic data and ensemble models. \n\n> In global model training (distillation), the author uses KL divergence as objective (equation (6)). Have the authors tried other losses such as cross entropy?\n\nThank you for your question. As distillation loss is not our main contribution, we did not devote much time to tuning the distillation loss function. According to some studies[1,2], when performing model distillation, using the KL loss function to constrain the output of each model's logits is more effective than using the cross entropy loss. It is because the logits contain more information than the one-hot label. Those[3,4] who use cross entropy loss for distillation often use a temperature hyperparameter to soften the output value of softmax to some extent. Due to this, we think that KL loss or the cross-entropy loss function with temperature parameters should be suitable for model distillation. Here are experiments on CIFAR10 and SVHN.\n\n| Loss | KL loss | Cross-Entropy loss|\n| :----: | :----: | :----: | \n| CIFAR10 | 62.56 | 60.13|\n|SVHN| 79.64 | 77.83 | \n\n[1] Zhang, Jie, et al. \"QEKD: Query-Efficient and Data-Free Knowledge Distillation from Black-box Models.\" arXiv preprint arXiv:2205.11158 (2022).\n\n[2] Truong, Jean-Baptiste, et al. \"Data-free model extraction.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\n[3] Nayak, Gaurav Kumar, et al. \"Zero-shot knowledge distillation in deep networks.\" International Conference on Machine Learning. PMLR, 2019.\n\n[4] Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. \"Distilling the knowledge in a neural network.\" arXiv preprint arXiv:1503.02531 2.7 (2015).\n\n> The paper does not have a lot of theoretical analysis\n\nThank you for your valuable suggestion, and we also believe that adding theoretical analysis can further enhance the quality of our paper. Just as you mentioned, in the area of training with synthetic data, especially the data-free setting, the development of practice is usually ahead of theory. Advancing the theoretical progress in this field is a valuable future research direction.\n", " We thank Reviewer e2Cu for the very positive and constructive feedback. We indeed highly appreciate your in-depth comments and summary about our paper. \n\n> What is the architecture of GAN? Is a larger GAN better for model distillation?\n\nFor fair comparisons, we use the same generator for all methods in our experiments. We also introduce the effects of different sizes of generators, as shown in the table, where DCGAN, StyleGAN and Transformer-GAN have small, medium and large parameters. Different generative models have relatively minor effect on the performance. 
\n\n| Generator | DCGAN | StyleGAN| Transformer-GAN |\n| :----: | :----: | :----: | :----: | \n| Ours | 62.64 | 63.21 | 63.83 |\n\n\n> In Fig.3, why the performance of Fedavg decreases with longer training epochs?\n\nAs shown in Fig.3, the global model achieves the best performance (test accuracy=34\\%) when $E=40$, while a larger value of $E$ can cause the model to degrade even collapse. This result can be attributed to the inconsistent optimization objectives with non-IID data, which leads to weight divergence. Thus, it is not suitable to use FedAvg if there are too many local training rounds as the model parameters will fluctuate too much.\n\n\n\n> In Table 6, how about the results for \"w/o l_CE\"? Is l_CE necessary?\n\nYes, the cross-entropy loss function is the most basic way to update the generator.\nIf the cross-entropy loss function is removed, the image will appear more like random noise.\n\n> a few important papers are recommended\n\nThanks for the valuable suggestion! In the final version, we will include these recent related papers.", " \nWe thank Reviewer 4PjE for the comments and summary of our paper. We have addressed all your questions in the following.\n\n> The novelty is limited.\n\n\nFirst, we want to point out that this paper is primarily focused on one-shot FL. Although some studies [1,2] have used data-free distillation in FL, these methods are impractical in one-shot FL. For example, [1] requires frequent communication with clients, and FedGen[2] needs to broadcast the generator parameter in each communication round, which means it heavily relies on frequent communication to continuously regulate the local training. The dependency on frequent communication makes these methods exhibit poor performance in one-shot FL scenarios.\n\nMoreover, we provide a detailed comparison with FedGen[2] as follows:\n\n| Method | #Communication | Broadcast | Distillation|\n| :----: | :----: | :----: | :----: | \n|FedGen|\tmulti-round\t|predictor+generator|\tAt client side, use a generator to generate data in local training|\n|Ours\t|single round|\tlocal model|\tAt server side, ensemble distillation with synthetic data|\n\n\nEssentially, FedGen[2] uses the generator to regulate local training. Thus FedGen[2] heavily relies on frequent communication to update the generator. But for our method, there is no need to send the generator to the client. Once the server has collected local models, all training can be completed on the server. It is also worthwhile to note that FedGen provides results only on simple datasets, such as MNIST and EMNIST. However, our approach can be adapted to more complex datasets, such as tiny-imagenet and CIFAR100.\n\nThe above comparison can highlight the technical characteristics of our work compared to existing methods. We believe that the innovative use of data-free distillation can advance the field of one-shot FL.\n\n[1] Lin, Tao, et al. \"Ensemble distillation for robust model fusion in federated learning.\" NeurIPS 2020.\n\n[2] Zhu, Zhuangdi, et al. \"Data-free knowledge distillation for heterogeneous federated learning.\" ICML, 2021.\n\n---\n\n> The one-shot relies heavily on the 'well-trained' local models.\n\n\nThank you for pointing this out. “well-trained” refers to the locally trained model which has converged, not the model with particularly good performance. For example, the local model in the table below has only 35.21% accuracy, whereas our method obtains a model with 49.76% accuracy. 
In our method, users only have to train their own models based on their actual circumstances (e.g. , users with limited resources can design smaller models). Moreover, as shown in Tables 2 and 3, our method outperforms the baseline methods in scenarios with small models and limited data.\n\n---\n\n> The attackers can have the potential to reconstruct local models or local samples if the server is adversary-oriented.\n\n\nAs reviewer e2Cu said, \"one-shot FL is already more secure than multi-round FL\". We would like to emphasize that our method is more secure than multi-round FL. The reasons are as follows:\n\n- Assumes the client is malicious: Compared to multi-round FL, our method greatly mitigates privacy and security risks during communication. In multi-round FL, attackers can continuously modify the data (data poisoning) or model (model update poisoning:) to modify the behavior of the model in some undesirable way. However, if there is only one round of communication, an attacker would have difficulty launching a successful attack.\n\n- Assumes the server is malicious: Yes, some studies[3,4] have shown that even without any real training data, attackers can still recover the data through model parameters. Imagine attacking a well-trained model (84% accuracy ) trained in multi-round FL and a model (63% accuracy) trained in one-shot FL. It is easier for attackers to recover private information from a better model. Thus, we believe that once the central server becomes malicious, the attacker can recover the private data more easily in multi-round FL than in one-shot FL.\n\nIn light of the above discussion, we believe that our approach does not pose any additional privacy concerns in comparison with multi-round FL.\n\nFurthermore, several existing privacy-preserving methods can be incorporated into our framework to protect clients from adversaries. We leave this as our future work.\n\nBesides, we visualize the generated data in Figure 6 (learn from the models pretrained on CIFAR10 and SVHN datasets ). Clearly, the synthetic data are not similar to the original data, which can effectively reduce the probability of leaking sensitive information of clients.\n\n[3] Yin H, Molchanov P, Alvarez J M, et al. \"Dreaming to distill: Data-free knowledge transfer via deepinversion\", CVPR 2020.\n\n[4] Yin H, Mallya A, Vahdat A, et al. \"See through gradients: Image batch recovery via gradinversion\" , CVPR 2021.\n\n---\n\n> The writing and presentation should be improved.\n\n\nThanks for your advice. We will carefully revise it in the final version.", " This paper considers the problem of training a global model with only one-shot communication between the server and clients in the federated learning setting. This problem is addressed by designing a generator based on the ensemble of 'trained' local models, to generate synthetic data samples, which can then be further utilized to distill the ensemble models to the global model in a knowledge distillation paradigm. [Strengths]\n\n- The one-shot federated learning is an important problem in both areas of machine learning and distributed computing.\n\n- Using a generator trained from local models can provide synthetic data samples for the knowledge distillation between the ensemble models and the global model, which is also a simple solution for data-free distillation.\n\n- The evaluation has been conducted on various datasets, which is good.\n\n[Weaknesses]\n\n- The novelty is limited. 
The setting of 'data-free distillation' + 'heterogeneous local models' has been already addressed by some previous works. For example, FEDGEN [1] also trains a generator to provide data-free knowledge distillation between the server and clients, wherein, their local models are also in heterogeneous architectures. The idea is almost the same.\n\n- The one-shot relies heavily on the 'well-trained' local models, to reduce the communication overhead between the server and clients. However, such a setting is not so practical when the local devices are resource constrained, in either data or computing capability.\n\n- Using the ensemble output results of uploaded local models may cause serious privacy issues. In that case, each individual local model can be examined on both input and output, thus the attackers can have the potential to reconstruct local models or local samples if the server is adversary-oriented.\n\n- The writing and presentation should be improved.\n\n[1] Data-Free Knowledge Distillation for Heterogeneous Federated Learning, ICML 2021. \n\nPlease refer to item **'Weaknesses'** in the previous section. N/A", " The paper focuses on one-shot federated learning, i.e., the server can learn a model with a single communication round. The proposed FedSyn method has two stages: first, training a generator from the ensemble of models from clients; second, distilling the knowledge of the ensemble into a global model with synthetic data. The authors validate the efficacy of FedSyn by conducting extensive experiments on 6 different datasets with various non-IID settings generated from Dirichlet distributions. Results can well support that the proposed method consistently outperforms all the baselines. Strengths\n1. This paper focuses on one-shot FL, an interesting but less explored topic. From my understanding, the proposed method is by far the most practical one considering that:1) DENSE requires no additional information to be transferred between clients and the server; 2) DENSE does not require any auxiliary dataset for training; 3) DENSE considers both model heterogeneity, i.e., different clients with different model architectures. Generally, I think the investigated problem is sound and interesting. I think this can be an extremely strong paper in one-shot FL.\n\n2. The method using data-free ensemble distillation is inspiring and novel. The experimental evaluation of applying the data-free ensemble distillation to one-shot FL significantly improves the performance of the global model. Fig. 1 illustrates the method clearly and is well drawn. \n\n3. The experimental settings are well motivated, and the related analysis is convincing. My favorite parts of the paper are the discussion and the ablation study, which offer sound support for the proposed DENSE. \n\nOverall, this paper is very interesting and to my knowledge novel. It seems like a pioneering contribution towards practical one-shot federated learning. Hence, I would like to vote for strong acceptance. \n\nWeaknesses.\n\nBelow questions need to be addressed to further improve the quality of this paper.\n1. What is the architecture of GAN? Is a larger GAN better for model distillation?\n\n2. In Fig.3, why the performance of Fedavg decreases with longer training epochs?\n\n3. In Table 6, how about the results for \"w/o l_CE\"? Is l_CE necessary?\n\n4. Even though authors provide sufficient references, a few important papers are recommended. 
For example, a recent IJCAI paper “Data-Free Adversarial Knowledge Distillation for Graph Neural Networks” that also studied data-free distillation. These papers did not study exactly the same topic as this paper, but would certainly further enrich the literature review.\n 1. What is the architecture of GAN? Is a larger GAN better for model distillation?\n\n2. In Fig.3, why the performance of Fedavg decreases with longer training epochs?\n\n3. In Table 6, how about the results for \"w/o l_CE\"? Is l_CE necessary?\n\n4. In the future direction, authors mentioned “defend privacy attacks in one-shot FL”, could you elaborate more on this? From my understanding, one-shot FL is already more secure than multi-round FL. \n The analysis, theory, and method are sound to me, but I didn't check the privacy concern.", " This paper proposed a novel data-free one-shot FL framework named DENSE based on data generation and knowledge distillation techniques so that it can be applied to communication-efficient heterogeneous FL. Evaluation results on multiple benchmark datasets illustrate the good performance of the proposed FL framework. Strengths:\n[1] This paper proposed a novel data-free one-shot FL framework consisting of two stages: i) trains a generator based on ensemble models uploaded from clients and random labels; ii) adopts knowledge distillation to transfer the output from the teacher model to the global model. The idea of data-free one-shot FL is very interesting. \n[2] Extensive experiments have been conducted to evaluate the performance of the proposed framework\n\nWeakness:\n[1] I am concerned about the privacy as the ensemble models is uploaded to the central sever. As far as I know, attackers in the server side can still recover the privacy data based on the model parameters.\n[2] I am wondering if the labelled data in the server would leak the data privacy? Such as race and gender.\n[3] Please explain the important steps in algorithm 1 in Appendix with mode details. It seems hard to understand the algorithm.\n[4] Please conduct experiments with mutiple random seeds, and then report the mean and variance. Otherwise, the experimental results seem not very convincing. Why not compare the proposed method with the FedGen method that is data-free FL with knowledge distillation? I know it requires to send the model parameters of generator between server and clients multiple rounds. Yes, the authors have described the limitations of this work.", " The authors proposed a novel one-shot data-free Federated Learning algorithm:\n1. The algorithm only requires a single communication rounds. The clients locally train the model and uploads the model to the central server.\n2. In the server, the ensemble of the client model is utilized to train a generator model that generates synthetic data. The generator is trained by giving a random number and a random label, such that the logits computed from the generated image by averaging the client models minimize the cross entropy.\n3. Then the generator is used to generate a large amount of synthetic data to be labeled by the client ensemble. One then distill the knowledge of the ensemble to a global model by train it with synthetic data.\n4. The authors also includes additional loss terms for generator training to ensure similarity, stability, transferability of the data.\n5. Finally, extensive experiments are carried out, comparing with existing Federated learning algorithm as well as ablation study. The paper shows strength in the following perspectives:\n1. 
Originality. The algorithm proposed is novel to the best of my knowledge. It addresses several issues of applying federated learning to real-world applications. First, the algorithm is genuinely data free: no client data, distillation data or other data summary will be uploaded to the server. Second, the algorithm admits heterogeneity of client models, such that in real-world applications clients have the freedom of picking local models. Third, the algorithm is one-shot in the sense that only one round of communication is needed, although the author mentioned that more rounds can improve the quality of the model.\n2. Quality. The experiments conducted in the paper are extensive and sound. The author explains why some of the comparisons are not included, such as regularization-based methods. There are also plenty of experiments over a wide range of the parameter space to demonstrate the effectiveness of the algorithm. Those include balancedness of the client data, number of clients, number of rounds and effects of added terms.\n3. Clarity. The paper is well-written. The author reviews the existing limitations of the literature, and gives a clear structure for the components of the algorithm.\n4. Significance. The paper's result is important from both an academic and a practical point of view. For the latter, it will stimulate the research community toward real-world applications of FL, such that the whole infrastructure is secure, privacy-preserving, and high quality.\n\nThe weakness of the paper, in my opinion, has the following aspects:\n1. Quality. The paper does not have a lot of theoretical analysis, especially on why the algorithm would converge regardless of which dataset one is using, the data distributions among clients, etc. \n2. Clarity. While it is OK to move the backgrounds and preliminaries to the appendix, in my opinion it would be better to move the full algorithm (Algorithm 1) into the main text - readers may constantly go to the table and refer to it. There are a few questions regarding the technical details:\n\n1. The generator G(z) is trained by first sampling random pairs of random number and label (z, y). Then the generated input x = G(z) is used such that x would minimize some loss. The author first shows that cross entropy loss alone does not work well and conjectures that it is due to non-IID data. However, in an ablation study one could certainly make the data IID for all clients (Table 6). How were the results in Table 6 obtained? What are the hyperparameters used? \n\nIn my opinion the poor performance of cross entropy alone could also be due to the sampling setup: we generate a random pair (z, y). There may not exist a proper transformation from z to x such that (z, y) is independent. In other words, (z, y) could be correlated. \n\n2. How does the author handle the case where, for client models, their logits distributions are very different (line 105-106)? In an extreme case, suppose one model produces a set of logits of value O(1e6) while other models produce logits of value O(1e-6). Then one model could clearly dominate the average of the ensemble. Is this a concern for the averaging? \n\n3. From Algorithm 1, in the inner loop where the generator is being updated, the global model's parameter theta_S is fixed. This would affect the quality of the loss terms in equation (5) because the global model is still bad. Could the author comment on this issue?\n\n4. In global model training (distillation), the author uses KL divergence as the objective (equation (6)). Have the authors tried other losses such as cross entropy? 
\n\n The author has described the limitations of the current work and suggests directions for future investigation. They have not discussed the negative societal impact, which I believe is OK." ]
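The exchange above keeps returning to the two server-side stages of DENSE: training a generator against the frozen client ensemble with a cross-entropy objective on random labels, then distilling the ensemble into a global model with a KL objective. As a reading aid, here is a minimal PyTorch sketch of those two steps under stated assumptions — all names, dimensions, and hyperparameters are hypothetical, the paper's additional generator regularizers (the similarity/stability/transferability terms the reviews mention) are omitted, and this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ensemble_logits(client_models, x):
    # The "teacher": average the logits of the uploaded client models.
    return torch.stack([m(x) for m in client_models]).mean(dim=0)

def generator_step(generator, client_models, opt_g, batch=64, z_dim=100, n_cls=10):
    # Stage 1: sample random noise z and random labels y, and push the
    # generated inputs toward samples the ensemble classifies as y
    # (the cross-entropy term discussed in the reviews).
    z = torch.randn(batch, z_dim)
    y = torch.randint(0, n_cls, (batch,))
    loss = F.cross_entropy(ensemble_logits(client_models, generator(z)), y)
    opt_g.zero_grad()
    loss.backward()
    opt_g.step()
    return loss.item()

def distill_step(student, generator, client_models, opt_s, batch=64, z_dim=100, T=1.0):
    # Stage 2: knowledge distillation on synthetic data, matching the
    # student's softened outputs to the ensemble's via KL divergence.
    with torch.no_grad():
        x = generator(torch.randn(batch, z_dim))
        teacher = F.softmax(ensemble_logits(client_models, x) / T, dim=1)
    loss = F.kl_div(F.log_softmax(student(x) / T, dim=1), teacher,
                    reduction="batchmean")
    opt_s.zero_grad()
    loss.backward()
    opt_s.step()
    return loss.item()
```

Note that this sketch deliberately leaves out the interaction between the generator objective and the still-training global model that the reviewer's third technical question probes.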
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 8, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 4 ]
[ "xUsK_edVQ_", "zQoMREJaEQJ1", "PRrMzeha2kj", "2y71PkQ1vb4", "qkRAm9yoCH9", "9-lTTxgIgfg", "GQ27_-i_f5X", "PRrMzeha2kj", "PRrMzeha2kj", "FmHfLYha6an", "ChTm8qxpsaM", "2y71PkQ1vb4", "nips_2022_QFQoxCFYEkA", "nips_2022_QFQoxCFYEkA", "nips_2022_QFQoxCFYEkA", "nips_2022_QFQoxCFYEkA" ]
nips_2022_kImIIKGqDFA
Large-batch Optimization for Dense Visual Predictions
Training a large-scale deep neural network in a large-scale dataset is challenging and time-consuming. The recent breakthrough of large-batch optimization is a promising way to tackle this challenge. However, although the current advanced algorithms such as LARS and LAMB succeed in classification models, the complicated pipelines of dense visual predictions such as object detection and segmentation still suffer from the heavy performance drop in the large-batch training regime. To address this challenge, we propose a simple yet effective algorithm, named Adaptive Gradient Variance Modulator (AGVM), which can train dense visual predictors with very large batch size, enabling several benefits more appealing than prior arts. Firstly, AGVM can align the gradient variances between different modules in the dense visual predictors, such as backbone, feature pyramid network (FPN), detection, and segmentation heads. We show that training with a large batch size can fail with the gradient variances misaligned among them, which is a phenomenon primarily overlooked in previous work. Secondly, AGVM is a plug-and-play module that generalizes well to many different architectures (e.g., CNNs and Transformers) and different tasks (e.g., object detection, instance segmentation, semantic segmentation, and panoptic segmentation). It is also compatible with different optimizers (e.g., SGD and AdamW). Thirdly, a theoretical analysis of AGVM is provided. Extensive experiments on the COCO and ADE20K datasets demonstrate the superiority of AGVM. For example, AGVM demonstrates more stable generalization performance than prior arts under extremely large batch size (i.e., 10k). AGVM can train Faster R-CNN+ResNet50 in 4 minutes without losing performance. It enables training an object detector with one billion parameters in just 3.5 hours, reducing the training time by 20.9×, whilst achieving 62.2 mAP on COCO. The deliverables will be released at https://github.com/Sense-X/AGVM.
Accept
The authors describe a new method of large-batch optimisation for dense prediction computer vision tasks. The reviewers appreciate the simplicity of the method, convincing experiments and the potential practical importance. AC recommends acceptance.
train
[ "fuuyAj-Zwmx", "kyVABKuuTdV", "47H_oAl-m8P", "r7ga24fUZXZ", "6ZWFIaux64v", "M_4W8U5AYSM", "YmwI94VBbV6", "Q-sfGuAHgGT", "Si4jFbZ2JWj", "0TPnDMFk3wu", "11RhDmpM_Oa", "d2k7q0F7EVh", "T-RylbySvly", "ayqSR-1EV71", "VZeyRh3unE_", "vLePTGS36Gn", "WLLtGj3ZObn", "cUjnhTO9-fc", "vh9tDLrExpA", "_cLFnqNu5vL" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer KpQK,\n\nWe sincerely thank the reviewer for the constructive feedback and support!", " I would like to thank the authors for addressing my questions. Also, I appreciate my fellow reviewers' comments that lead to in-depth discussions with the authors.\n\nThe authors well addressed my concerns. Specifically, the strategies that are introduced by the authors to improve numerical stability make sense to me. \n\nAdditionally, I went over other reviewers' comments. As pointed out by reviewer 8EAD, the concerns are addressed at least to an acceptable level in my opinion.\n\nTherefore, I'd like to raise my rating to 6, i.e., weak accept.", " We sincerely thank the reviewer for the constructive feedback and the kind support of this work! We believe AGVM helps democratize training in several computer vision problems. If our paper is accepted, we will definitely open source our codes.", " I have read the response and appreciate the additional evaluations and discussion.\nI have also read through the concerns raised by reviewrs 8KtA and q5yJ and the corresponding author responses.\nFor the practical concerns, I subjectively believe that authors have properly addressed these.\nIf reviewers 8KtA and q5yJ disagree, I would be eager to discuss that in the remaining time.\n\nUnfortunately, I cannot comment on theoretical issues in reviewer KpQK's position due to lack of expertise.", " **Q1: Estimate the variance term with respect to the mini-batch size b and iteration T to make the theoretical contribution more clear and solid**\n\nThanks for your sincere comments on the theoretical proof. We would like to clarify the contributions of our work to hope that reviewer can evaluate our paper from the **core contribution** of our paper. \n\nWe focus on solving a challenging problem in large-scale vision system. Our purpose is to contribute a new large-batch training algorithm for computer vision practical applications, not a theoretical algorithm to explore how to prove its' convergence very rigorously. Furthermore, our extensive empirical experiments have demonstrated the convergence and effectiveness of AGVM. Similar to previous works published on top machine learning conference [1,2,3,4], the theoretical analysis is only a **minor** insight and help further understand the properties of the method.\n\nThe dense visual prediction tasks such as object detection, instance segmentation and semantic segmentation, are significantly challenging in practical applications due to the large-scale datasets and time-consuming training. Increasing the batch size by adding the GPU resources is an efficient manner to reduce the training time. However, this suffers from poor generalization issues under the large-batch training scenario. Improving the stability and scalability of large-batch optimization is an essential and significant topic for dense visual prediction tasks in many practical applications, e.g., computer vision in smart city and visual robots. Thus we propose a novel and effective large-batch optimization method AGVM for various dense prediction tasks in this paper, which shows overwhelming superiority over all of the previous state-of-the-art methods. \n\nAs shown by Reviewer 8EAD's opinions, he is strongly in favor of paper acceptance and thinks that we propose a practically useful solution for large-batch training, which could help \"democratize\" training in several computer vision problems. 
Therefore, we hope that you can compare our work with the previous methods in this research field and re-examine our work in terms of its practical value and its contribution to the research community. \n\n**Last, the NeurIPS conference has the following acceptance standard on machine vision: Novelty of algorithm/application, Difficulty of application, Quality of results, Insight conveyed, and Rigorous empirical evaluation (https://nips.cc/Conferences/2016/PaperInformation/EvaluationCriteria). Specifically, a NeurIPS paper on machine vision should propose a machine learning algorithm or system that can be used by a computer vision researcher to help solve a difficult computer vision problem. We firmly believe our paper and contribution deserve a positive score, at least according to the NeurIPS standard. If our paper is accepted, we will definitely open source our codes.** \n\n[1] You Y, Li J, Reddi S, et al. Large batch optimization for deep learning: Training BERT in 76 minutes. ICLR 2020\n\n[2] Liu Y, Chen X, Cheng M, et al. Concurrent adversarial learning for large-batch training. ICLR 2022\n\n[3] Qin H, et al. SimiGrad: Fine-Grained Adaptive Batching for Large Scale Training using Gradient Similarity Measurement. NeurIPS 2021\n\n[4] Keskar N, et al. On large-batch training for deep learning: Generalization gap and sharp minima. ICLR 2017\n\n\n**Q2: The variance visualization in Fig 3 (in appendix) does not convince me.**\n\nSorry for the confusing figure. We have uploaded another version. In the previous appendix Figure 3, we gave the variances $\mathrm{Var} (g_{t}^{(i)})$ of the gradient $g_{t}^{(i)}$ to show that AGVM could avoid training failure, but we did not apply the AGVM coefficient in that figure.\n\n\nWith AGVM, we use the modified gradient $\mu_{t}^{(i)} g_{t}^{(i)}$ to update the parameters and balance gradient variances. In the current appendix Figure 3, we plot $\mathrm{Var} (\mu_{t}^{(i)} g_{t}^{(i)})$, where $\mu_{t}^{(i)}$ is the coefficient of AGVM. As a result, the variances have been balanced.\n", " Thanks for the authors' detailed response. Most of my concerns have been solved. However, the reviewer still has the following concerns: \n\n(1) The rigorous guarantee for Eq (7) is still not convincing. I recommend the authors estimate the variance term with respect to the minibatch size $b$ and the iteration $T$. Based on the current analysis in the response, the variance term is related to an unknown mini-batch size $b$, which may appear in the upper bound of the Theorem. Thus, the linear speedup property may be broken. In addition, a similar issue arose in the analysis of AGVM+Adam: $1-\beta_2$ is a constant rather than approaching 0, which is the key difficulty in estimating the convergence of Adam-type methods. \n\nThus, I highly recommend the authors provide a rigorous estimation for the variance term, which will make the theoretical contribution more clear and solid. \n\n(2) The variance visualization in Fig 3 (in appendix) does not convince me. Compared with Fig 1 (main file) and Fig 3 (appendix), the variance gap between the modules still exists and is almost the same. Hence, the motivation of \"the gradient variance misalignment\" in this work is questionable. \n\nBased on the current response, I will keep my initial score. Hope the authors can provide a more rigorous proof and make the motivation clearer.", " Dear Reviewer q5yJ:\n\nWe thank you for the precious review time and valuable comments. 
We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work. If your concerns have been well addressed, please consider raising your rating, thanks.\n\nBest, \n\nAuthors", " Dear Reviewer 8KtA:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work. If your concerns have been well addressed, please consider raising your rating, thanks.\n\nBest, \n\nAuthors", " We sincerely appreciate all reviewers’ time and efforts in reviewing our paper. We are glad to find that reviewers generally recognized our contributions:\n\n * **Highlights.** The proposed method could help democratize training in several computer vision problems [8EAD].\n * **Theory.** The proposed method has a theoretical guarantee [8KtA].\n * **Experiments.** The proposed approach for large batch size training is simple and easy to implement [q5yJ]; Showing remarkable and promising results across different pipelines, optimizers, and datasets [8EAD, KpQK, 8KtA, q5yJ].\n * **Writing.** The paper is easy to follow and well-structured [8EAD, KpQK, 8KtA, q5yJ].\n\n\nAnd we also thank all reviewers for their insightful and constructive suggestions, which help a lot in further improving our paper. In addition to the pointwise responses below, we summarize supporting experiments and theoretical analysis added in the rebuttal according to the reviewers’ suggestions.\n\n**New Experiments:**\n * The proposed AGVM is more clearly positioned [8EAD];\n * Measuring overhead [8EAD];\n * Results on imbalanced object detection [KpQK];\n * Comparisons with Lena [8KtA];\n * Results for AdamW in Table 3 [8KtA];\n * Complete results for UniNet-G [8KtA];\n * The Same Variance-iteration Figure as the Figure 1 with AGVM [8KtA].\n * Experiments and clarification on the training with batch size 10k [q5yJ];\n\n**New Theoretical Analysis:**\n * A rigorous guarantee for Eq (7) in the appendix (Eq. (12) in the rebuttal version) [8KtA];\n * Linear speedup property of AGVM [8KtA].\n\nWe hope our pointwise responses below could clarify all reviewers’ confusion and alleviate all concerns. We thank all reviewers’ time again. **We have uploaded the rebuttal version and the revised parts are highlighted in red.**", " **Q4: When more GPUs have been adopted, the accelerator is a little bit lower**\n\nIdeally, increasing the batch size will linearly decrease the training iterations and so we can shorten the training time linearly. However, in practice, it is difficult for us to achieve this ideal state. More specifically, it requires more GPUs to achieve a larger batch size which increases the communication burden for many necessary synchronization operations such as gradient synchronization between nodes after back-propagation. 
This overhead is influenced by the GPU numbers and the cluster topology, which directly means that we cannot achieve a 48-times speed-up when increasing the batch size 48 times and adopting 48 times the computational resources.\n\n**In detail, for Faster R-CNN, a gradient synchronization operation takes less than 5ms with 16 GPUs, but takes 20-40ms with 768 GPUs at each iteration because of the barrier synchronization.** Therefore, the gradient synchronization operation is the bottleneck for the imperfect speedup. The distributed deep learning framework will also influence the system throughput when increasing GPUs. It is also impractical to achieve linear speedup in Google TPU clusters (see Table 1 in [2]).\n\nTo sum up, the reasons for the imperfect speedup are mainly related to cluster performance, not AGVM. AGVM only introduces a negligible extra overhead compared to the regular set-up (see the response to Reviewer 8EAD). You can view the detailed reasons for the imperfect speedup for UniNet-G in the response to Q2 from Reviewer 8KtA.\n\n**Q5: Convergence results in Table 4**\n\nThanks for this comment. In a nutshell, all of the convergence results in Table 4 are 36.6 mAP@0.5:0.95 for Faster R-CNN+ResNet50.\nIn Section 4.1, we have demonstrated that we explore how fast AGVM can reach the **36.6 mAP@0.5:0.95** reported in [7] to make a fair comparison with [7] (PMD-LAMB, which needs 12 minutes to train). We follow the same settings as [7] and report the training time in Table 4. We reduce the original small-batch training time from 2.5 hours to only 4.2 minutes, which is the fastest record to our knowledge.\n\n[1] You Y, Zhang Z, Hsieh C J, et al. ImageNet training in minutes[C]//Proceedings of the 47th International Conference on Parallel Processing. 2018: 1-10.\n\n[2] You Y, Li J, Reddi S, et al. Large batch optimization for deep learning: Training BERT in 76 minutes[J]. arXiv preprint arXiv:1904.00962, 2019.\n\n[3] Goyal P, Dollár P, Girshick R, et al. Accurate, large minibatch SGD: Training ImageNet in 1 hour. arXiv 2017[J]. arXiv preprint arXiv:1706.02677, 2017.\n\n[4] Liu Y, Mai S, Chen X, et al. Towards efficient and scalable sharpness-aware minimization[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 12360-12370.\n\n[5] Liu Y, Chen X, Cheng M, et al. Concurrent adversarial learning for large-batch training[J]. arXiv preprint arXiv:2106.00221, 2021.\n\n[6] You, Yang, et al. \"The limit of the batch size.\" arXiv preprint arXiv:2006.08517 (2020).\n\n[7] Wang T, Zhu Y, Zhao C, et al. Large Batch Optimization for Object Detection: Training COCO in 12 minutes[C]//European Conference on Computer Vision. Springer, Cham, 2020: 481-496.\n\n[8] Zhao S Y, Xie Y P, Li W J. Stochastic Normalized Gradient Descent with Momentum for Large Batch Training[J]. arXiv preprint arXiv:2007.13985, 2020.", " Dear Reviewer q5yJ\n\nThank you for the detailed review. We will address your concerns below.\n\n**Q1: The performance gap is obvious if a larger batch size is adopted, especially for the object detection task in Table 2**\n\nThanks for this question. As large-scale datasets and big models become democratized, shortening the training time is very important for exploring them. Large-batch training is a promising way to achieve this, and it is becoming a very challenging research field.\n\nRecently, the whole research community has been working on eliminating the performance drop in large-batch optimization [1][2][3][4][5][6][7]. 
They concentrate on the basic image classification task and a specific task in dense visual predictions, i.e., object detection. Even though great effort has been made by the researchers, the performance loss still exists in the large-batch setting. Theoretically, under fixed computation complexity (i.e., the same training epochs), a performance drop is **inevitable** when the batch size is very large [8].\n\nWe argue that we contribute two main advances to the community. One is that we are the first to extend the large-batch optimization problem to dense visual predictions rather than being limited to a specific object detection task. We further reveal that the essential reason for this optimization problem is the significant effective batch size misalignment between different modules. The other is that we propose a generalized AGVM, which shows overwhelming superiority over all of the previous methods. **It establishes many new state-of-the-art performances for dense visual prediction tasks (more than twenty) and improves the maximum batch size at which the algorithm works without performance loss**.\n\n**Q2: Why ResNet18? How about pipelines other than RetinaNet?**\n\nThanks for your constructive comment.\nWe further conduct experiments with the ResNet50 backbone based on RetinaNet and Faster R-CNN.\n\n**RetinaNet+ResNet50 (2x)**\n\n| Batch Size | 32 | 256 | 1k | 2k | 4k | 10k |\n|----------|----|----|----|----|----|----|\n| PMD-LAMB | 36.5 | 36.6 | 34.8 | 31.5 | 27.1 | NaN |\n| AGVM | 37.1 | 37.1 | 36.7 | 34.1 | 33.0 | 30.8 |\n\n**Faster R-CNN+ResNet50 (1x)**\n\n| Batch Size | 32 | 256 | 1024 | 1536 | 2k | 4k |\n|----------|----|------|------|------|------|-----|\n| PMD-LAMB | 36.6 | 36.7 | 35.3 | 33.5 | 28.7 | NaN |\n| AGVM | 37.1 | 37.2 | 37.0 | 36.6 | 33.8 | 25.2 |\n\n**Q3: The reported results of the 10K batch size are much lower than those of batch size 32**\n\nSorry for the confusing description that the batch size can be scaled to 10K (this does not refer to the batch size without performance drop); we will clarify this below. 
The evaluation of large-batch training in the research community can be divided into two aspects: **the maximum batch size at which the algorithm works without performance loss** and **the extreme batch size at which the algorithm converges without NaN**.\n\n**RetinaNet+ResNet50**\n\nFor the maximum batch size at which the algorithm works without performance loss, we list the detailed comparison, where the algorithm is required to achieve 36.6 mAP@0.5:0.95 within 24 epochs.\n\n| Method | MegDet | LAMB | PMD-LAMB | LARS | AGVM |\n|-----------|--------|------|----------|------|------|\n| Batch size | 256 | 128 | 256 | 128 | 1k |\n\nFor the extreme batch size at which the algorithm converges without NaN, we conduct multiple experiments to evaluate this.\n\n| Method | MegDet | LAMB | PMD-LAMB | LARS | AGVM |\n|-------------------------------|--------|------|----------|------|------|\n| Batch size | 2k (NaN) | 6k (NaN) | 6k (NaN) | 2k (NaN) | 10k (30.8) |\n\nNote that even with a 10k batch size, AGVM still converges well, and due to the limitation of GPU resources, we could not explore a larger batch size.\n\n**Faster R-CNN+ResNet50**\n\nFor the maximum batch size at which the algorithm works without performance loss, we list the detailed comparison, where the algorithm is required to achieve 36.6 mAP@0.5:0.95 within 12 epochs (defined in [7]).\n\n| Method | MegDet | LAMB | PMD-LAMB | LARS | AGVM |\n|-----------|--------|------|----------|------|------|\n| Batch size | 128 | 128 | 320 | 128 | 1.5k |\n\nFor the extreme batch size at which the algorithm converges without NaN, we conduct multiple experiments to evaluate this.\n\n| Method | MegDet | LAMB | PMD-LAMB | LARS | AGVM |\n|-------------------------------|--------|------|----------|------|------|\n| Batch size | 2k (NaN) | 4k (NaN) | 4k (NaN) | 2k (NaN) | 4k (25.2) |\n\nAGVM demonstrates consistent superiority over prior art in these two aspects.", " **Q3: Lack of discussion on imbalanced object detection tasks.**\n\nThanks for your constructive comment.\n\nThe purpose of AGVM is to solve the large-batch training problem. The core problem it solves is that the variances of GD and SGD are not balanced between different modules due to the significant **effective batch size misalignment**, as demonstrated in Section 4.2. This is also a form of imbalance problem under large-batch training in dense visual prediction tasks.\nAs concluded in [1], there are various other imbalance problems in object detection, such as Fg-Bg class imbalance, objective imbalance, scale imbalance, spatial imbalance, and Fg-Fg class imbalance. The essence of these imbalances is quite different from that resolved by AGVM. 
Thus, AGVM is specifically for large-batch optimization and can't directly solve these inherent imbalance problems in object detection.\nWe compare the essential reasons for these imbalance problems and the imbalanced gradient variance in large-batch dense visual predictions.\n\n| Imbalance problem | Key reason | Core solution | Method |\n|-----------------------|----------------------------------------------|-----------------------------------------------|------------------------------------|\n| Fg-Bg class imbalance | Multiple negative classes (Background) | Soft/hard sampling, Generative methods | Focal loss, OHEM, GHM |\n| Objective imbalance | Multi-task loss in object detection | Tasks re-weighting or modifying loss function | Task Weighting, Guided Loss |\n| Scale imbalance | Objects with various scales and numbers | Multi Scale features/images | Multi Scale CNN, FPN, NAS-FPN |\n| Spatial imbalance | Different sizes, shapes, locations of boxes | Cascade head and modifying regression loss | Cascade R-CNN, Smooth L1/IoU loss |\n| Fg-Fg Class imbalance | Different objects' frequencies in nature | Modifying sampling strategy, loss re-weighting| OFB sampling, RFS, Seesaw Loss |\n| Gradient variance imbalance | **Effective batch size misalignment** | Modulating gradient variance | AGVM |\n\nEven so, AGVM still works well for large-batch training under these imbalance problems. For example, Fg-Bg class imbalance, objective imbalance, scale imbalance, and spatial imbalance are naturally presented in COCO dataset and the results on COCO in this paper demonstrate the effectiveness of AGVM. For Fg-Fg class imbalance, we further conduct experiments with Mask R-CNN on LVIS (a long-tailed dense visual prediction benchmark) and report the Bbox mAP. We see AGVM still outperforms the baseline with large-batch setting even suffering from the long-tailed problem.\n\n| Batch size | Megdet | AGVM |\n|------------|---------|---------|\n| 32 |21.4|21.4|\n| 128 |20.8|21.3|\n| 256 |20.1|20.7|\n| 512 |NaN|19.9|\n\n[1] Oksuz, Kemal, et al. \"Imbalance problems in object detection: A review.\" IEEE transactions on pattern analysis and machine intelligence 43.10 (2020): 3388-3415.", " Dear Reviewer KpQK,\n\nThanks for your advice. We will address your concerns below.\n\n**Q1: $\\mu_{t}^{(i)}$ is sensitive to $\\Phi_{t}^{(i)}$. $\\Phi$ will be a large value when $G_{t,1}^{(i)}$ and $G_{t,2}^{(i)}$ are similar**\n\nThanks for this valuable question. You may question $\\mu_{t}^{(i)}$ will be a large value rather than $\\Phi$, since $\\Phi$ is bounded between 0 and 2 in practice.\n\nTheoretically, when the batch size is extremely large (e.g., full batch), $G_{t,1}^{(i)}$ and $G_{t,2}^{(i)}$ will be similar and the gradient variance $\\Phi_{t}^{(i)}$ and $\\Phi_{t}^{(1)}$ tend to be zero ($\\Phi_{t}^{(1)}$ will be slightly larger than $\\Phi_{t}^{(i)}$). This will lead to the unstable $\\mu_{t}^{(i)}$.\n\nHowever, in practice, we find it's hard to achieve this. As demonstrated in **Appendix Figure 2**, even with a very large batch size 10k (requires 1280 GPUs with batch size 8 for a single GPU), the $\\mu_{t}^{(i)}$ is still controllable and AGVM also has an appealing convergence property.\n\nIn the practical implementation, we add a small epsilon value $\\mu_{t}^{(i)}=\\sqrt{\\frac{\\Phi_{t}^{(1)}+\\epsilon}{\\Phi_{t}^{(i)}+\\epsilon}}$ in Eq.(4) to avoid the large value and also clip the $\\mu_{t}^{(i)}$ to [0.1, 10]. 
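To make the stabilization just described concrete, the following is a hedged sketch of the coefficient computation. It assumes $\Phi$ is the 1 − cosine-similarity statistic between the two gradient groups (consistent with the remark above that $\Phi$ is bounded between 0 and 2); the momentum constant and all names are illustrative, not the paper's exact code.

```python
import torch
import torch.nn.functional as F

def agvm_coefficients(g1, g2, mu_prev, eps=1e-8, beta=0.97, clip=(0.1, 10.0)):
    """g1, g2: flattened per-module gradients from the two random halves
    of the batch; index 0 is the anchor module (the backbone).
    mu_prev: coefficients from the previous update; beta is illustrative."""
    # Phi_i = 1 - cos(G1_i, G2_i), assumed bounded in [0, 2].
    phi = [1.0 - F.cosine_similarity(a, b, dim=0) for a, b in zip(g1, g2)]
    mu = []
    for i, phi_i in enumerate(phi):
        # Eq. (4) with the epsilon guard, then clipping to [0.1, 10];
        # the anchor module (i = 0) stays at exactly 1.
        m = torch.sqrt((phi[0] + eps) / (phi_i + eps)).clamp(*clip)
        # Momentum smoothing (Eq. (5)) damps occasional unstable spikes.
        mu.append(beta * mu_prev[i] + (1.0 - beta) * m)
    return mu  # scale module i's gradient by mu[i] before the optimizer step
```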
We will emphasize these strategies in the revision.\n\nIn addition, at the training stage, there may be some unpredictable, instantaneously large values of $\mu_{t}^{(i)}$. To alleviate this, we further introduce a momentum update in Eq. (5) to reduce the influence of an unstable $\mu_{t}^{(i)}$ in occasional iterations.\n\n**Q2: Why does the proposed method target the object detection and segmentation tasks? Can the proposed method generalize to the image classification task?**\n\nSpecifically, AGVM is proposed to alleviate the large-batch training problem. Whether it works for a given task depends on two facts: one is that the given task contains multiple sub-modules, and the other is that there is significant **effective batch size misalignment** between different sub-modules, as we investigated in Section 4.2, which is the essential reason for the inconsistent gradient variance.\n\nThe feature pyramid network (FPN), region proposals, and the shared head between different levels are the core design ideas in dense visual prediction tasks. We find this inevitably introduces the **effective batch size misalignment**, leading to the inconsistent gradient variances of different modules. Based on this, AGVM is proposed to alleviate this inconsistency.\n\nThe pipeline used in the image classification task is often a single module (only a backbone that directly predicts the class-aware probability), which does not satisfy the two facts above. Therefore, the application scenario of the AGVM method is dense visual prediction tasks.", " Dear Reviewer 8EAD,\n\nThank you for appreciating our approach. We will address your concerns below.\n\n**Q1: Dense vision tasks vs modular tasks**\n\nThanks for this valuable comment.\nSpecifically, AGVM is proposed to alleviate the large-batch training problem. Whether it works for a given task depends on two facts: one is that the given task contains multiple sub-modules, such as ELECTRA, DDPG, SAC, and GANs listed by the reviewer, and the other is that there is significant **effective batch size misalignment** between different sub-modules, as we investigated in Section 4.2, which is the essential reason for the inconsistent gradient variance. The feature pyramid network (FPN), region proposals, and the shared head between different levels are the core design ideas in dense visual prediction tasks. We find this inevitably introduces the **effective batch size misalignment**, leading to the inconsistent variance of different modules between GD and SGD. Based on this, AGVM is proposed to alleviate this inconsistency. After our verification, we find that the methods listed in the fields of NLP, RL and GANs do not satisfy the second fact above. Even so, we still agree with the reviewer's opinion that there will be other modular tasks satisfying both of the two facts. We will emphasize this in the revision and explore this in more research areas in future work.\n\n**Q2: On measuring overhead**\n\nThanks for your constructive comment. Compared to the traditional data-parallel set-up, the only extra overhead in AGVM is the **additional all-reduce call at the t-th iteration when $t$%$\tau$=0**, and this overhead is highly related to the hardware set-up, such as the cluster communication bandwidth.\nWe will clarify the following issues in detail.\n\n**Communication per step**: AGVM doubles the required communication at iteration t when $t$%$\tau$=0.\n\n**Maximum memory**: AGVM needs an additional gradient buffer to compute the cosine similarity on rank 0. 
In other words, AGVM needs extra GPU memory to store the copy of $G_{t,2}^{i}$, which is proportional to the model size. Taking Faster R-CNN as an example, it needs an extra buffer to store these 42M gradients, but this is very small compared to the regular GPU memory used for forward and backward propagation.\n\n**Measured quantity**: The overhead in Table 1 represents the extra training time with AGVM compared with the basic setting, resulting in **0.12%** extra training time per epoch. The basic setting is 128 NVIDIA A100s with the regular data-parallel set-up, Faster R-CNN+ResNet50 as the detector, a 1024 batch size, and $\tau$=5. Even if we adopt batch size 2 per GPU (128 GPUs in total), the extra overhead is still negligible, e.g., less than 1%.\n\n**How the proposed overhead scales with network bandwidth and the number of GPUs**: The only extra overhead in AGVM is an additional all-reduce call at the t-th iteration when $t$%$\tau$=0. In our cluster, each computing node is connected via 8 × 200Gb/s InfiniBand links. The extra overhead of an all-reduce call for Faster R-CNN in our cluster is given as follows:\n\n| GPUs | 16 | 32 | 64 | 128 | 256 | 512 |\n|----------|-------|-------|-------|-------|-------|-------|\n| Overhead | 3.1ms | 3.3ms | 3.7ms | 4.7ms | 5.6ms | 7.0ms |\n\n**Minors**: We have uploaded the rebuttal version to correct the minor typos.", " **Q5: Comparisons with LENA [2]**\n\n LENA adopts the \"gradient variance\" from the \"gain ratio\" in AdaScale [3] to modify layer-wise learning rates adaptively. The differences between LENA and AGVM are as follows:\n\n1. LENA keeps increasing the learning rate throughout the whole training process, resulting in very unstable training for object detection and segmentation.\n2. AGVM is motivated by the effective batch size misalignment in dense visual predictions to modify module-wise learning rates, whereas LENA focuses on image classification problems and modifies layer-wise learning rates.\n3. LENA cannot generalize well to dense visual prediction tasks such as object detection. We give the experiment results with the Faster R-CNN+ResNet50 detector on COCO. We implement LENA by borrowing its official implementation (https://github.com/yy-ko/lena-www22) and adopt the recommended $\theta$ and $\alpha$, but we cannot obtain any reasonable performance:\n \n| Batch size | LENA | AGVM |\n|------------|------|------|\n| 256 | 32.5 | 36.7 |\n| 512 | 25.9 | 36.7 |\n| 1024 | 18.6 | 35.4 |\n \n4. LENA needs an experienced engineer to tune the hyper-parameters, while AGVM does not.\n\n**Q6: A rigorous guarantee for Eq (7) (Eq.(12) in the rebuttal version) in the appendix**\n\nThanks for your constructive comment. 
Because the samples are randomly divided into two groups, according to the law of large numbers, when batch size $b$ goes to infinity, we have:\n\n$$\n\\mathbb{E}\\left[cos(G_{t,1}^{(j)},G_{t,2}^{(j)})\\right] \\to 1, \\forall j\\geq 1.\n$$\n\nFor $b=2$, each group only has one sample that comes from the same training distribution, we have:\n\n$$\n\\mathbb{E}\\left[cos(G_{t,1}^{(j)},G_{t,2}^{(j)})\\right] \\to 0, \\forall j\\geq 1.\n$$\n\nTherefore, there exists a $\\hat{b}$ that makes the following equation hold,\n\n$$\n\\mathbb{E}\\left[cos(G_{t,1}^{(j)},G_{t,2}^{(j)})\\right] \\leq \\frac{1}{2}, {\\rm if}\\, b\\leq \\hat{b}, \\forall j\\geq 1.\n$$\n\nSince the effective batch size of backbone is smaller than that of other modules, the gradient variance of backbone is larger than that of other modules, which means:\n\n$$\n\\mathbb{E}\\left[cos(G_{t,1}^{(1)},G_{t,2}^{(1)})\\right] \\leq \\mathbb{E}\\left[cos(G_{t,1}^{(i)},G_{t,2}^{(i)})\\right], \\forall i > 1.\n$$\n\nWhen $b<\\hat{b}$, we further have:\n\n$$\n\\mathbb{E}\\left[cos(G_{t,1}^{(1)},G_{t,2}^{(1)})\\right](1 - \\mathbb{E}\\left[cos(G_{t,1}^{(1)},G_{t,2}^{(1)})\\right]) \\leq \\mathbb{E}\\left[cos(G_{t,1}^{(i)},G_{t,2}^{(i)})\\right](1 - \\mathbb{E}\\left[cos(G_{t,1}^{(i)},G_{t,2}^{(i)})\\right]), \\forall i > 1.\n$$\n\nThen we get the Eq.(7).\n\n**Q7: Linear speedup property of the proposed AGVM**\n\nThanks for your suggestion. We give the linear speedup property for AGVM+synchronous SGD w.r.t. batch size as a corollary. First, we will prove **gradient variance decreases linearly with batch size b**. For ease of understanding, we assume that $\\nabla f(w)$, $g$, $r$ represent the gradient of the full dataset, the mini-batch with size $b$ and the single sample, respectively. Then we have the following covariance matrix:\n\n$$\n\\Sigma(w):=\\operatorname{cov}\\left[r\\right]=\n\\frac{1}{n} \\sum_{i=1}^{n}\\left(r_i-\\nabla f(w)\\right)\\left(r_i-\\nabla f(w)\\right)^{T},\n$$\n\nwhere n indicates the total number of training samples. Likewise, a stochastic gradient $g$ computed on a randomly-drawn mini-batch is a random variable with mean $\\nabla f(w)$. Assuming that it is composed of $b$ samples drawn independently with replacement, its covariance matrix is:\n\n$$\n\\operatorname{cov}[g]=\\frac{\\Sigma(w)}{b}.\n$$\n\nAccording to the Central Limit Theorem, g can be approximately normally distributed:\n\n$$\ng \\sim \\mathcal{N}\\left(\\nabla f(w), \\frac{\\Sigma(w)}{b}\\right).\n$$\n\nAs assumed in Appendix A.4.1 section, the variance of stochastic gradients with batch size $b_i$ meets $\\mathbb{E}\\left\\|g^{(i)}-\\nabla_{i} f(w)\\right\\|^{2} \\leq \\sigma_{i}^{2}$ for all $w \\in \\mathbb{R}^{d}$ and $i \\in[1,h]$.\nSo when we increase the batch size from $b_i$ to $Mb_i$, we have:\n\n$$\n\\mathbb{E}\\left\\|g^{(i)}-\\nabla_{i} f(w)\\right\\|^{2} \\leq \\frac{\\sigma_{i}^{2}}{M}.\n$$\n\nBy substituting $\\sigma_{i}^{2}$ with $ \\frac{\\sigma_{i}^{2}}{M}$ for all $i \\in [1,h]$, we get:\n\n$$\n\\frac{1}{T}\\sum_{t=1}^{T}\\mathbb{E}\\left[ \\|\\nabla f\\left(w_{t}\\right)\\|^{2}\\right]\\leq \\frac{2\\left(f\\left(w_{1}\\right)-f_{inf}\\right)}{T\\eta_{t}}+\\sum_{i=1}^{h}\\eta_{t} L_{i}\\left(K\\frac{\\sigma_{1}^{2}}{M}+(1+\\alpha_{0})\\frac{\\sigma_{i}^{2}}{M}\\right).\n$$\n\nLet $\\eta_{t}=\\sqrt{\\frac{M}{T}}$, we obtain a $O(1/\\sqrt{MT})$ convergence rate.\n\n[1] You Y, Li J, Reddi S, et al. Large batch optimization for deep learning: Training bert in 76 minutes[J]. ICLR, 2020.\n\n[2] Ko Y, Lee D, Kim S W. 
Not All Layers Are Equal: A Layer-Wise Adaptive Approach Toward Large-Scale DNN Training[C]//Proceedings of the ACM Web Conference 2022. 2022: 1851-1859.\n\n[3] Johnson T, Agrawal P, Gu H, et al. AdaScale SGD: A user-friendly algorithm for distributed training[C]//International Conference on Machine Learning. PMLR, 2020: 4911-4920.", " We thank the reviewer for taking the time to review our paper, and we give a point-by-point response below. We really hope our response addresses your concerns. If so, please consider raising your rating, thanks. \n\n**Q1: Why choose the backbone as the anchor module?**\n\nThe significant **effective batch size misalignment** between different sub-modules, as we investigated in Section 4.2, is the essential reason for unstable large-batch training. This eventually leads to the imbalanced gradient variance in different modules. Intuitively, the effective batch size of the backbone is equal to the actual input batch size. In contrast, the effective batch size of the other modules is difficult to quantify and, from our observation as shown in Figure 1, it is also more volatile under different batch sizes.\n\nWe further conduct experiments to evaluate this. As shown in Table 7 in our paper (Mask R-CNN with batch size 512), we choose different modules as the anchor, and we can see that adopting the backbone as the anchor leads to the best performance.\n\n**Q2: UniNet-G performances and why not a linear speed-up ratio. If the bottleneck is data loader IO, please report its time.**\n\nThanks for your suggestion. We first give the performance of AGVM with batch sizes 128 and 512:\n\n| batch size | method | Box mAP | Seg mAP | Iterations | Training time |\n| ---------- | ------ | ------- | ------- | ---------- | ------------- |\n| 128 | AGVM | 62.6 | 53.8 | 11004 | 21.3 hours |\n| 128 | AdamW | 62.5 | 53.7 | 11004 | 21.1 hours |\n| 512 | AGVM | 62.5 | 53.7 | 2760 | 5.9 hours |\n| 512 | AdamW | 61.8 | 53.0 | 2760 | 5.8 hours |\n\n**Why not a linear speed-up**:\n\nTheoretically, increasing the batch size will linearly decrease the training iterations, and so we can shorten the training time linearly.\nHowever, in practice, it is difficult for us to achieve this ideal state. More specifically, it requires more GPUs to achieve a larger batch size, which increases the communication burden for many necessary synchronization operations such as gradient synchronization between nodes after back-propagation. This overhead is influenced by the GPU numbers and the cluster topology, which directly means that we cannot achieve a 30-times speed-up when increasing the batch size 30 times and adopting 30 times the computational resources.\n\n**In detail, for UniNet-G, a gradient synchronization operation takes less than 0.3s with 16 GPUs, but takes 2-2.5s with 480 GPUs at each iteration because of the barrier.** Therefore, the gradient synchronization operation is the bottleneck for the imperfect speedup, rather than data loader IO (which costs less than 50ms). Furthermore, the overhead of an all-reduce call will increase with the number of GPUs for UniNet-G. An all-reduce call takes 68.9ms with 16 GPUs, but takes 148.1ms with 480 GPUs. The distributed deep learning framework will also influence the system throughput when increasing GPUs. It is also impractical to achieve linear speedup in Google TPU clusters (see Table 1 in [1]).\n\nTo sum up, the reasons for the imperfect speedup are mainly related to cluster performance, not AGVM. AGVM only introduces a negligible extra overhead compared to the regular set-up. 
(see the response to Reviewer 8EAD Q2)\n\n**Q3: Results with AdamW**\n\nThanks for your comment. We list the results below:\nFaster R-CNN + ResNet50\n\n| Batch size | AdamW | AGVM+AdamW | Iterations |\n| ---------- | ----- | ---------- | ---------- |\n| 16 | 37.1 | 37.2 | 87960 |\n| 32 | 37.1 | 37.1 | 43980 |\n| 256 | 36.9 | 37.2 | 5508 |\n| 512 | 36.2 | 36.8 | 2760 |\n| 1024 | 36.2 | 37.0 | 1380 |\n| 1536 | 35.9 | 36.6 | 924 |\n\nFaster R-CNN + Swin-Tiny\n\n| Batch size | AdamW | AGVM+AdamW | Iterations |\n| ---------- | ----- | ---------- | ---------- |\n| 16 | 43.7 | 43.7 | 95350 |\n| 32 | 43.6 | 43.7 | 47675 |\n| 256 | 43.4 | 43.5 | 5967 |\n| 512 | 42.7 | 43.2 | 2990 |\n| 1024 | 42.4 | 42.8 | 1495 |\n\nAGVM demonstrates consistent superiority over basic AdamW with different batch sizes.\n\n**Q4: The same variance-iteration figure as Figure 1 with AGVM**\n\nThanks for your constructive comment. We attach the results in Appendix Figure 3 in the rebuttal version.\n\n", " Authors study the problem of large-batch optimization for various \"dense\" computer vision tasks, such as object detection or instance segmentation. Their primary stated objective is to enable training these models with extremely large batches using multiple GPUs.\nTo achieve this objective, authors analyze the behavior of gradient variance and observe that the variance of different network sub-modules (e.g. detector backbone vs FPN) becomes dissimilar during training with large batch sizes. Based on this observation, authors propose Adaptive Gradient Variance Modulator (AGVM) - an optimizer-agnostic technique meant to balance the gradient variance between the sub-modules. Authors conduct experiments on several vision tasks and demonstrate that AGVM can scale to very large batch sizes. The paper proposes a practically useful solution to large-batch training with convincing practical experiments and hyperparameter sensitivity analysis.\nBy extending the applicability of large-batch training, authors not only allow training models in minutes on GPU clusters, but also make it feasible to train \"dense prediction\" tasks outside of compute clusters, such as in federated learning[1] or even volunteer computing[2], where large-batch training allows one to mitigate high communication latency. While I have many low-level concerns about the paper positioning and quality, I am strongly in favor of paper acceptance, since the proposed method could help \"democratize\" training in several computer vision problems. I list my concerns below.\n\n### [conceptual] Dense vision tasks vs modular tasks\n\nThe paper positions AGVM as a solution to dense visual prediction tasks. However, the method itself does not seem specific to dense prediction (and does not necessarily work for all dense prediction tasks, as authors admit in L319-320). Instead, AGVM appears to rely on the fact that a given task contains multiple sub-modules. 
Naturally, there are many \"modular\" tasks outside the area of dense prediction:\n- NLP pre-training with ELECTRA[3] and derivative work\n- reinforcement learning: DDPG[4] or SAC[5] (actor and critic networks)\n- generative adversarial networks[6] (generator vs discriminator); cyclic GAN[7] contains 4 sub-modules\n\nIn short, there is a plethora of other tasks that appear to fit AGVM's motivation of sub-module variance.\nEither the proposed AGVM fits those tasks - in which case, it is not specific to dense prediction - or there is some reason why it doesn't.\nI believe that the paper would be more clearly positioned if authors explain why AGVM is specific to dense prediction, or evaluate it on more general tasks.\n\n\n### [practical] On measuring overhead\n\nIn Table 1 (and later), authors report \"extra overhead\" of less than 1%. In its current form, it is unclear what exactly this overhead means - and hence, how it will generalize to other hardware setups. At the very least, I would recommend clarifying the following issues:\n\n- __communication per step:__ does AGVM double the required communication (MB / step) from the additional gradient all-reduce?\n- __maximum memory:__ does AGVM need additional gradient buffers? (and hence, extra GPU memory in proportion to the model size)\n- __measured quantity:__ does the overhead from Table 1 represent time, communication, flops, or energy overhead?\n\nIdeally, it would be insightful to evaluate how the proposed overhead scales with network bandwidth and the number of GPUs.\nIf needed, the former can be emulated using `tc qdisc` on the active network interface.\n\n[1] https://arxiv.org/pdf/1902.01046.pdf\n\n[2] https://arxiv.org/pdf/2106.10207.pdf\n\n[3] https://arxiv.org/pdf/2003.10555.pdf\n\n[4] https://arxiv.org/pdf/1509.02971.pdf\n\n[5] https://arxiv.org/pdf/1801.01290.pdf\n\n[6] https://arxiv.org/pdf/1406.2661.pdf\n\n[7] https://arxiv.org/pdf/1703.10593.pdf \n\n### Minor comments / typos:\n\n> L159 they can be easily implemented using the popular deep learning platform e.g., PyTorch\n\nPerhaps it would be better to paraphrase?\n(A) **a / any** … platform **e.g.** PyTorch\n(B) **the** … platform **i.e.** PyTorch\n\n> Figure 1\n\nThe first row colors can be unduly associated with the second row colors. Would recommend using different color schemes.\n\n> Table 3:\n\nWhy is batch 1536 in bold? (the caption indicates that bold denotes best results) If this is a deliberate formatting choice, I would recommend changing the caption or highlighting that batch size in a different manner.\n\n> L192 Pytorch\n\nPy**T**orch (consistency)\n\n> The deliverables are released at https://anonymized-agvm.github.io/.\n\nAs of the first week of July, the above link only contains figures from the paper.\n[Nit] While it does not affect my recommendation, I would still recommend either using \"deliverables **will be** released\" or actually providing them during submission.\n\n To the best of my understanding, the proposed problem (variance mismatch) and the solution (AGVM) appear more general than authors position them to be. As such, I believe it would be best to explain how AGVM is specific to dense visual predictions - or remove this limitation and evaluate it more generally. 
Furthermore, the \"extra overhead\" could be addressed in more detail.\nI elaborate on both these concerns above, in the \"Strengths And Weaknesses\" section.", " This work addresses the heavy performance drop that takes place in the large-batch training regime for object detection and segmentation. Specifically, the authors propose an algorithm named Adaptive Gradient Variance Modulator (AGVM) that is able to work with a very large batch size. This work provides extensive experiments on MS COCO and ADE20K, which verify the superiority of the proposed method. Strengths:\n\n+ The paper is easy to follow and well-structured.\n\n+ The experiments are well executed. They show that the proposed method can work with different deep learning backbones (e.g., CNNs and Transformers) and different optimization methods on MS COCO and ADE20K.\n\nWeaknesses:\n\n- It seems that $\mu_{t}^{(i)}$ is sensitive to $\Phi_{t}^{(i)}$ in Equation (4). When $G_{t,1}^{i}$ is similar to $G_{t,2}^{i}$, the expectation of the cosine of the two groups of the gradient estimation is close to 0. Then $\Phi$ will be a large value.\n\n- According to Section 3, it seems that the proposed method is generic enough to apply to most computer vision tasks, like image classification. Why does the proposed method target the object detection and segmentation tasks? Can the proposed method generalize to the image classification task? \n\n- The paper may lack a discussion about whether the proposed method is able to generalize to the imbalance problems in object detection [r1].\n\nReferences:\n[r1] Oksuz, Kemal, et al. \"Imbalance problems in object detection: A review.\" IEEE Transactions on Pattern Analysis and Machine Intelligence 43.10 (2020): 3388-3415.\n Please refer to the weaknesses. I will check other reviewers' comments and the authors' responses. I will adjust my rating accordingly. It seems that the authors addressed the limitations and potential negative societal impact.", " This paper proposes an Adaptive Gradient Variance Modulator (AGVM) to achieve large-batch optimization for dense visual tasks, i.e., object detection and segmentation. The paper clearly states the motivation, which is the high gradient variance across the different modules in large-batch dense visual tasks. To overcome the issue in large-batch dense visual tasks, the authors propose AGVM to make the gradient variance of different modules consistent and use a moving average to stabilize the optimization. AGVM has a theoretical guarantee, and sufficient experiments have also proved its effectiveness. Strength\n1. The paper is clear and well written. The paper clearly studies the difference between dense visual tasks and normal classification tasks.\n2. The proposed AGVM method has a theoretical guarantee.\n3. The method is evaluated on different tasks and with different optimizers. The experiments show comparable or better performance compared to the previous state-of-the-art optimization methods.\n\n\nWeakness\n1. Eq. (4) shows that AGVM uses the backbone as the anchor, so the ratio is \sqrt{\frac{\Phi^1}{\Phi^i}}. Why choose the backbone? Will using other modules lead to training failure?\n2. Table 8 shows the results at different batch sizes. What are the experimental results for other batch sizes, e.g., 128 and 512? Also, the batch size increased 30 times from 32 to 960, but the training time was shortened only 20 times; why does AGVM not achieve linear speedup? If the bottleneck is data loader IO, please report its time.\n3. 
Table 3 shows the performance at different batch sizes with different optimizers. We wonder about the results of AdamW, since AdamW works well at small batch sizes, i.e., 16, 32.\n4. The paper claims that the high gradient variance is the reason for the failure of large-batch dense visual training. It’s recommended to report the same variance-iteration figure as Figure 1 with AGVM to complete the paper.\n5. Recently, the work [r1] also studies large-scale DNN training from the perspective of gradient variance misalignment. The authors should give a fair comparison and discussion on this related work.\n6. About the theoretical analysis. Can the authors provide a rigorous guarantee for Eq (7) in the appendix, which is merely a numerical observation rather than a theoretical analysis? \n7. The theoretical contribution is mild. We recommend the authors provide the linear speedup property of the proposed AGVM with respect to the number of workers or the mini-batch size. \n\n[r1] Ko, Yunyong, Dongwon Lee, and Sang-Wook Kim. \"Not All Layers Are Equal: A Layer-Wise Adaptive Approach Toward Large-Scale DNN Training.\" Proceedings of the ACM Web Conference 2022. 2022.\n See the weakness. I will consider raising the score if the authors address the above issues. Yes.", " The paper presents a new approach called Adaptive Gradient Variance Modulator (AGVM) for large batch-size training. The approach is well motivated with a simple implementation. Experimental results on different kinds of dense prediction tasks like object detection, instance segmentation, and panoptic segmentation validate the effectiveness of the approach. More specifically, the paper claims to be able to train with a batch size of 10K. strengths:\n1. The proposed approach for large batch-size training is simple and easy to implement. \n2. The proposed AGVM can be generally applied to different visual prediction tasks like instance segmentation and panoptic segmentation.\n\n\nweakness:\n1. Training with a large batch size with the proposed approach leads to a performance drop. The performance gap is obvious if a larger batch size is adopted.\n2. The speed-up ratio is not constant as more GPUs are utilized. 1. According to Table 2, the results with batch size 1024 (and 512) are usually lower than the setting with a batch size of 32 (or 256), especially for the case of object detection: 35.4 vs 36.8. The performance loss is not negligible in the large-batch setting. \n\n2. In the abstract, the paper claims that the batch size can be scaled to 10K. But the experiments in Table 5 cannot fully support the claim. First, the training setting is compromised with a small backbone like ResNet18. How about the performance of the ResNet50 backbone, which is widely used in the other experimental settings? Also, how about detection methods other than RetinaNet? Moreover, the reported results of the 10K batch size are much lower than those of batch size 32. \n\n3. For the results discussed in Table 4, only the training time is reported without the convergence results. Also, when more GPUs have been adopted, the acceleration is a little bit lower. For example, the training time is 4.2 mins with 1536 GPUs vs 148 mins with 32 GPUs. The authors addressed the limitations when the batch size cannot be estimated effectively, like heatmap-based pose estimation. It also addressed the negative societal impact when used for deepfake training. \n\nOne limitation which has not been reported is the performance drop when a large batch size is utilized. " ]
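Editorial aside for readers of the thread above: several exchanges (the 8EAD overhead questions, the speed-up answers for Faster R-CNN and UniNet-G) hinge on one implementation detail — AGVM adds a single extra all-reduce every $\tau$ iterations. The sketch below shows where that call sits in a data-parallel step; the even/odd rank grouping is an assumption for illustration, since the rebuttal does not spell out how the two gradient groups are formed, and all names are hypothetical.

```python
import torch.distributed as dist

def make_half_group():
    # One-off setup (assumed grouping): split the ranks into two halves whose
    # within-half averaged gradients play the roles of the estimates G1 / G2.
    world = dist.get_world_size()
    even = dist.new_group(ranks=list(range(0, world, 2)))
    odd = dist.new_group(ranks=list(range(1, world, 2)))
    return even if dist.get_rank() % 2 == 0 else odd

def sync_gradients(t, grads, half_group, tau=5):
    # grads: flattened per-module local gradients on this rank.
    if t % tau == 0:
        # The one extra collective AGVM issues: average only within this
        # rank's half to obtain a group estimate. Communication therefore
        # doubles at iterations with t % tau == 0 and is unchanged otherwise.
        group_est = [g.clone() for g in grads]
        for g in group_est:
            dist.all_reduce(g, op=dist.ReduceOp.SUM, group=half_group)
        # (Rank 0 then turns the two estimates into Phi and mu.)
    for g in grads:
        dist.all_reduce(g, op=dist.ReduceOp.SUM)  # standard data-parallel sync
```

The 3.1–7.0 ms per-call figures and the ~0.12% per-epoch overhead reported in the rebuttal then follow from how fast this single collective runs on the given interconnect and GPU count.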
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "kyVABKuuTdV", "cUjnhTO9-fc", "r7ga24fUZXZ", "ayqSR-1EV71", "M_4W8U5AYSM", "Q-sfGuAHgGT", "_cLFnqNu5vL", "vh9tDLrExpA", "nips_2022_kImIIKGqDFA", "11RhDmpM_Oa", "_cLFnqNu5vL", "T-RylbySvly", "cUjnhTO9-fc", "WLLtGj3ZObn", "vLePTGS36Gn", "vh9tDLrExpA", "nips_2022_kImIIKGqDFA", "nips_2022_kImIIKGqDFA", "nips_2022_kImIIKGqDFA", "nips_2022_kImIIKGqDFA" ]
nips_2022_35I4narr5A
Few-Shot Continual Active Learning by a Robot
In this paper, we consider a challenging but realistic continual learning problem, Few-Shot Continual Active Learning (FoCAL), where a CL agent is provided with unlabeled data for a new or a previously learned task in each increment and the agent only has limited labeling budget available. Towards this, we build on the continual learning and active learning literature and develop a framework that can allow a CL agent to continually learn new object classes from a few labeled training examples. Our framework represents each object class using a uniform Gaussian mixture model (GMM) and uses pseudo-rehearsal to mitigate catastrophic forgetting. The framework also uses uncertainty measures on the Gaussian representations of the previously learned classes to find the most informative samples to be labeled in an increment. We evaluate our approach on the CORe-50 dataset and on a real humanoid robot for the object classification task. The results show that our approach not only produces state-of-the-art results on the dataset but also allows a real robot to continually learn unseen objects in a real environment with limited labeling supervision provided by its user.
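To make the pipeline in the abstract easier to picture, here is a schematic Python sketch of its three ingredients — per-class Gaussian mixtures over features, uncertainty-driven sample selection under a labeling budget, and pseudo-rehearsal. The component count, diagonal covariances, scoring rule, and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_class_gmms(features_by_class, k=3):
    # One uniform-weight GMM per learned object class, fit on CNN features.
    gmms = {}
    for cls, feats in features_by_class.items():
        gm = GaussianMixture(n_components=k, covariance_type="diag").fit(feats)
        gm.weights_ = np.full(k, 1.0 / k)  # enforce a uniform mixture
        gmms[cls] = gm
    return gmms

def select_for_labeling(gmms, unlabeled_feats, budget):
    # Active learning: spend the labeling budget on the samples the current
    # class models explain worst (lowest best-class log-likelihood).
    scores = np.stack([gm.score_samples(unlabeled_feats) for gm in gmms.values()])
    uncertainty = -scores.max(axis=0)
    return np.argsort(uncertainty)[-budget:]  # most informative sample indices

def pseudo_rehearsal(gmms, per_class=50):
    # Replay: draw synthetic feature vectors from old-class GMMs to mitigate
    # catastrophic forgetting when training on a new increment.
    return {cls: gm.sample(per_class)[0] for cls, gm in gmms.items()}
```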
Accept
All reviewers appreciated the importance of the problem being tackled, and the effectiveness of the proposed method. There were a number of concerns about ablations and use of pre-trained feature extractors, but these have been sufficiently addressed in the authors' rebuttal. I agree with the reviewers in recommending acceptance.
train
[ "VnRpvUYZ2tI", "tzSh4pv1CMv", "gLiDdSXDWix", "SWqVULGF5U4", "OE-fiT_6A5m", "brcdAE7VBVd" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank you for your insightful comments and have used these comments to improve the paper.\n\nWeaknesses:\n\nMemory Usage: We have added a discussion about the memory usage of all the approaches in the paper (L 258-271). In particular, GBCL requires only 0.97 MB of space to store GMMs of the previous classes. In contrast, FLB requires 43.08 MB (44 times more than GBCL). For other approaches, FT does not store any data, while LWF, EWC, and CWR store minimal information, and they all lead to catastrophic forgetting. iCaRL stores 2000 raw images for the previous classes, which requires 393 MB. Finally, NCM stores only 10 centroids for the previous classes, and CBCL stores 315 centroids which require 0.02 MB and 0.64 MB, respectively. All the approaches also store a ResNet-18 model trained on the previous classes which require an extra 83 MB of space. This analysis shows that GBCL provides the best trade-off between memory storage and overall performance in comparison with the other approaches. \n\nNetwork Pre-training: Yes, all approaches start with a ResNet-18 pre-trained on the ImageNet dataset. However, FT, FLB, CBCL, GBCL, CWR, and NCM use the pre-trained network as a frozen feature extractor, while iCaRL, LWF, and EWC finetune the pre-trained network on the new object data. \n\nPre-trained Feature Extractor: Please see our answer to reviewer 6wg4 comments. Particularly, we have added a new experiment (Section 7 in the supplementary file) to compare our approach with FSCIL approaches that train the feature extractor for few-shot class incremental learning (FSCIL). The results show that GBCL is able to avoid drastic accuracy decreases over a large number of increments while using a pre-trained feature extractor. In contrast, FSCIL approaches that also update the feature extractor suffer from a steep decline in accuracy. These results show that it might be beneficial to keep the CNN feature extractor fixed when continually learning from smaller sample sizes over a large number of increments. \n\nExperiments on Pepper Dataset: \nThank you for suggesting testing other methods on the Pepper dataset. We have tested some of the other approaches on our robot’s dataset and added the results in the supplementary file (Section 5). These results are similar to the results on the CORe-50 dataset, and all the approaches show the same trend in terms of overall accuracy. In particular, GBCL shows similar performance to the batch learning approach (FLB) that stores and retrains on all the feature vectors of the previous classes. We will release the dataset as a part of this paper so that future approaches can use it as a benchmark for FoCAL evaluations.\n\nQuestions:\n\n• “GBCL curiosity” was a typo. We have fixed it in the paper.\n\n• The few-shot learning baseline (FLB) in our experiments stores all the feature vectors of the previous classes, and GBCL’s performance seems to be very close to this batch learning baseline (Figure 2 in the paper, Figure 4 in the supplementary file).\n\n• The network is doing classification on the object level.\n\nLimitations:\n\nThank you for the helpful suggestion. 
We have added further discussion about the limitations of the pre-trained feature extractor and the detector to Section 5 of the paper.\n", " We thank you for your insightful comments and have used these comments to improve the paper.\n\nPre-trained Feature Extractor: \n\nOne of the main limitations of the Few-Shot Learning (FSL) and Few-Shot Class Incremental Learning (FSCIL) approaches is that if a neural network is trained from scratch using only a few examples per class, it does not produce good accuracy. Therefore, for a few images/samples per class, FSL and FSCIL approaches first train the neural network model on a large number of base classes with a large number of images per class to learn a good feature representation. Some FSL approaches keep the feature extractor fixed [24], while others (such as few-shot meta-learning approaches) update the feature extractor as well when learning from a few classes. Most FSL and FSCIL approaches first train the CNN on a set of base classes from the same dataset as the one used in the few-shot phase. However, for the FoCAL setup, particularly for the robotics applications, it might not be possible to capture a large amount of data from the same environment where the system is deployed later on. In other words, we do not have the base classes for the same dataset in FoCAL. Therefore, in this work, we choose to use a generic feature extractor pre-trained on a large dataset, such as ImageNet. Further, it has been shown in [24] that when the base classes are coming from a different dataset (such as ImageNet) and the few-shot classes are from another dataset (such as CUBS dataset), the baseline approaches with the fixed feature extractor outperform the meta-learning approaches. Therefore, for FoCAL it might be best to use a fixed feature extractor.\n\nTo further explore this point, we performed another experiment to compare GBCL against the FSCIL approach that also updates the feature representation when learning continually from a few classes (Section 7 in the supplementary file). As the FSCIL approach (termed TOPIC [6]) is not designed for FoCAL, we tested GBCL on the FSCIL setup described in [6] and removed the active learning component from GBCL (i.e. GBCL only used the GMM representation and pseudo-rehearsal and not the active learning techniques). For a fair comparison, we used the same settings as used by TOPIC [6] i.e. we trained a ResNet-18 on 60 classes in CIFAR-100 in the first increment and then learned the rest of the 40 classes over 8 increments with 5 classes per increment (5 images per class). However, unlike TOPIC, we do not continue to update the CNN in the 8 increments and use it as a fixed feature extractor after the first increment. As shown in Figure 6 in the supplementary file, GBCL produces a lower accuracy than TOPIC in the second increment only, because TOPIC adapts its feature representation to the new classes while GBCL uses the fixed feature representation. However, for the rest of the 7 increments, GBCL’s accuracy stays steady but TOPIC’s accuracy decreases drastically. The reason is that by adapting the feature representation to the new classes with only a few examples per class, the feature representation becomes too specific to the newly learned classes, which results in forgetting the previous classes. In contrast, GBCL continues to use the fixed feature representation learned from a large number of classes in the first increment and instead learns the complex distribution of new data in terms of GMMs. 
These results show that GBCL can address the limitations of the pre-trained feature representation by learning the complex distributions of the classes using GMMs, and that it can avoid forgetting by using pseudo-rehearsal. Therefore, it can produce significantly higher accuracy (8\\% higher) than FSCIL approaches that also learn the feature representation. \n\nRelated Works:\n\nThank you for suggesting related works [Hadsell et al. 2020, Mundt et al. 2020, 2022] that explored different directions for continual learning. We have added them to the paper and also positioned our paper with respect to these works (L 39-41, L356-360 in the paper). We have also added some works related to open-set recognition in the paper (L 351-355).\n\nLimitations/Discussion: As per your suggestion, we have moved the complete discussion from the supplementary to the main paper. However, we had to move Figures 2 and 4 (in the previous version of the paper) to the supplementary file (Figures 3 and 2 in the supplementary file) to create space for the complete discussion in the paper. Finally, we have also added the pre-trained feature extractor as one of the limitations in the paper.", " We thank you for your insightful comments and have used these comments to improve the paper.\n\nAblation Studies:\n\nWe have added the results of two different ablation studies to test the effects of hyperparameters $\\delta$ and $P$ (Section 6 in the Supplementary File). We performed both ablation studies on the data collected by the Pepper robot, which contains 240 objects in the training set and 60 objects (different from the training set) in the test set. (Note that the first suggestion, to check the effect of using only one of the active learning techniques, is covered in the ablation study for varying $\\delta$.)\n\n1. For the first ablation study, we trained our GBCL model with different values of $\\delta$ ranging from 0 to 1 (Figure 5 (a) in the supplementary file). $P$ was set to 0.2 in these experiments. The lowest accuracy is achieved when $\\delta$ is 0 or 1 because the model then uses only one of the active learning techniques. That said, there does not seem to be any significant difference in accuracy between the two active learning techniques when either is used alone. For the other values of $\\delta$, the model uses a combination of entropy and consistency scores. The best performance is achieved when $\\delta$ is set to 0.7. Further, $\\delta$ values close to 0.7 (e.g., 0.6) also achieve similar test accuracy. These results show that the best performance is achieved when using a combination of the two active learning techniques. Further, the accuracy of our approach is not highly sensitive to the choice of $\\delta$ within a range of values close to 0.7.\n2. For the second ablation study, we performed the same experiment on the Pepper dataset but changed the probability threshold for GBCL. $\\delta$ was set to 0.7 in all of these experiments. Figure 5 (b) in the supplementary file shows the results of this ablation study. There is a significant drop in the model accuracy when using $P$ values close to 0. The reason is that for $P$=0 the model simply stores a single Gaussian distribution to represent each class, which might not be sufficient to capture the complex distribution of the object classes. As the $P$ value increases, the model starts to assign more mixture components (Gaussian distributions) to each class to capture the complex relationship between the classes. 
For $P$=0.2, the model achieves the highest accuracy and uses 83 total Gaussian distributions to represent the 20 object classes. Varying the $P$ value within a small range does not impact the accuracy significantly (less than a 3% decrease). However, for values higher than 0.2, the model starts to generate more distributions for the classes, thus requiring more memory. As the threshold increases further toward 0.7, the accuracy starts to increase again. The reason is that with the increase in the $P$ value, GBCL starts to recruit a large number of clusters for each class, with a small number of images per cluster, and thus it gets closer and closer to the batch learning case in which the model stores all the feature vectors separately for each class (when $P$=1). Therefore, with $P$=0.2 our model achieves a good trade-off between memory storage and accuracy.\n\nBoth ablation studies showed the contributions of different components of our model and also confirmed that GBCL is relatively insensitive to the two hyperparameters over a large range of possible values. Finally, note that we found both hyperparameters using cross-validation on the CORe-50 dataset and not on our dataset. However, even on our dataset the same hyperparameter values produce the best results. This shows that the chosen hyperparameter values are not dependent on a single dataset. ", " This paper proposes a setup of few-shot continual active learning (FoCAL), where the agent continually learns from unlabeled data with a limited labeling budget. This problem setup is meaningful, because previous continual learning methods focused on the scenario in which the data of the current task are available and labeled. In contrast, the new setup assumes that the robot doesn't have access to the labeled data, and needs to ask a human teacher for supervision with limited availability (hence, active learning). This paper proposes FoCAL for online learning on the image classification task. The framework is well summarized in Figure 1. The features extracted from unlabeled objects go through the acquisition function a(,) to get the most informative samples (k), which are labeled by the oracle. These labeled samples from the oracle are used to update the GMM representations. A pseudo-rehearsal process is used to replay old data. In the final classifier stage, the samples from old classes and the labeled features are used. A description of the framework design is given, and experimental validations are presented. - A strength of this paper is the novelty of GMM (Gaussian mixture model) based continual learning. The authors argue that the system should recognize with ease how different an incoming object is (how novel its class is) compared to previously learned object classes. To address this, they use a clustering-based approach; however, clustering-based approaches normally use mean feature vectors to represent object classes; instead, the authors use a GMM to model each class's data. Once the oracle gives labels for the selected k feature vectors, GBCL is applied to learn GMMs for the class. The centroid updates are done by a weighted mean of the previous centroids and new inputs; the covariance matrix updates are done similarly. \n- Another notable part of the paper is Section 2.2, active learning using GMMs. In particular, FoCAL employs two techniques for active learning. The first technique is the use of prediction entropy as the acquisition function. 
The second technique is the use of viewpoint consistency: inconsistent predictions for different views of the same object generate a high reward for acquiring the object's label, as opposed to consistent predictions.\n- Weak points: Are there ablation studies for using these two techniques? \n- Experiments and evaluations of FoCAL on the CORe-50 dataset, and evaluations on the Pepper robot, are presented. \n- Comparison baselines are sufficient (FT, LWF, EWC, CWR, etc.). Q1. I think Section 2.2, active learning using GMMs, is an important part of the paper, and there are two essential techniques in this algorithm, namely (1) prediction entropy as the acquisition function and (2) viewpoint consistency. Are there any ablation studies for this, such as using just technique 1 in one experiment and just technique 2 in another, and comparing the performance with the full algorithm?\n\nQ2. Choice of hyperparameters. There are parameters P and \\delta that affect the performance of the algorithm. The authors mentioned that P=0.2 and \\delta=0.7 were chosen, and these values were chosen based on cross-validation. Is there a graph or table that shows the performance of the GBCL algorithm for different sets of those hyperparameters? What is the trend?\n As the authors mentioned, the main limitation of this work is that the experiments are quite limited and simple. It would have been a better paper if they had added more domains and environments or a different dataset. Some more experimental analysis on parameters and ablation studies would strengthen this paper. ", " This paper operates in the few-shot continual active learning (FoCAL) setting as exemplified by visual object category learning, but without concerning itself with continual representation learning. Instead, attention is mainly given to active, sample-efficient learning of a classifier on top of pre-trained representations. A human-in-the-loop interaction model motivates the FoCAL setup; hence the paper not only proposes a solution for continual classifier learning, but also motivates choices for the acquisition function used to select which samples should be labelled during the active continual learning process. Experimental validation is done on CORe-50 and a custom collected dataset using a robot. Strengths:\n* The paper identifies an important application of open-world learning in the particular case of visual concept learning with the constraints of a real-world robotic setup.\n* The few-shot active classifier learning problem is interesting even outside of the continual learning paradigm.\n\nWeaknesses:\n* The paper does not focus on continual representation learning, since the feature extractor is pre-trained and fixed. This means reduced significance, since there exist several approaches for “shallow” continual learning, e.g. ELLA [Ruvolo et al. 2013].\n\n\nFoCAL is not only about visual concept learning; realistic robotic settings should also address issues of control, for which pre-training or hard-coding of behaviours are major limitations. The paper should limit claims to visual category classifier learning; this detracts from the clarity of the paper at the moment.\n\nIt would also help clarity if the work positioned itself better with respect to other continual learning works and made its assumptions explicit, see for context: [Hadsell et al. 2020, Mundt et al. 
2020, 2022]\n\nSeveral baselines [LWF, EWC, iCaRL] are designed for continual representation learning under different assumptions, and this should be discussed, also to improve clarity.\n\nFew-shot meta-learning with pre-trained representations is well known to be effective [6], so it is difficult to argue for the novelty of the approach or the experimental setup.\n\nIt is difficult to argue for the high significance of the work, since it does not address open problems such as continual representation learning. It is not surprising that pre-trained visual representations are useful in few-shot settings with novel visual categories; see the few-shot meta-learning works.\n\nThe paper should mention few-shot meta-learning works which have explored several of the concepts used, e.g., see the citations in [6]. There are too few references to works on open-world learning, a related field.\n\n\n\n### References:\n\n[Ruvolo et al. 2013] Paul Ruvolo, Eric Eaton. ELLA: An Efficient Lifelong Learning Algorithm. Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):507-515, 2013.\n\n[Hadsell et al. 2020] Raia Hadsell, Dushyant Rao, Andrei A. Rusu, Razvan Pascanu. Embracing Change: Continual Learning in Deep Neural Networks. TiCS 2020.\n\n[Mundt et al. 2020] Martin Mundt, Yong Won Hong, Iuliia Pliushch, and Visvanathan Ramesh. A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning. CoRR 2020.\n\n[Mundt et al. 2022] Martin Mundt, Steven Lang, Quentin Delfosse, and Kristian Kersting. CLEVA-Compass: A Continual Learning Evaluation Assessment Compass to Promote Research Transparency and Comparability. ICLR 2022.\n * Is there no need for further representation learning? Is such an assumption realistic?\n* How are the limitations of pre-trained representations addressed by the machinery on top?\n Limitations should be discussed at length in the main text, not the appendix.", " This paper proposed a new task combining few-shot learning, active learning, and continual learning, in the context of robots perceiving and interacting with unseen objects. In this task, new instances of different object categories emerge, and the robot needs to identify the unseen ones and ask a human for labels. To tackle this problem, the paper proposed GMM-Based Continual Learning (GBCL), where Gaussian mixture models are learned continually for each object, given a pretrained and fixed feature extractor. The confidence and consistency predicted by the GMMs are used to select novel object instances for active learning. The experiments are performed on the CORe-50 dataset and on a real robot. The results demonstrate the effectiveness of the proposed method over baselines. \n Strengths:\n* The proposed task is very interesting and has huge value for real-world applications of vision and robotic systems. Robots deployed in the real world need to interact with potentially unseen objects every day. This task is a combination of few-shot, active, and continual learning. I think this paper formulates this task in a decent way. The experiments are performed both on a purely static dataset (CORe-50) and on real robots with human labeling in the loop. \n* The application of GMMs to continual and active learning is novel, and the results exceed the state of the art. \n\nWeaknesses:\n* It’s unclear whether the comparison with other continual learning baselines is fair or not. 
How many images or latent features are stored for each method? Which method stores a checkpoint of the network and which does not?\n * What’s the number of Gaussian mixtures during the learning of GBCL? How does this compare to the memory usage of other CL methods?\n * Are the networks for different CL methods all pretrained in the same way? During continual learning on CORE-50, do they all use the fixed feature extractor or the entire network is trained continually?\n* The feature extractor for each object is pretrained and fixed. The object detector used in the robot experiment is also fixed. It is possible that the feature extractor is not capable enough and can achieve better performance if they are learning on new objects observed. \n* It would be better if some baseline methods can be also tested on the robot experiment and compared to GBCL.\n * What is “GBCL-curiosity” in Line 240-241?\n* How will a baseline that stores all feature vectors from all increments perform?\n* Is the network doing classification on object-level or instance-level? \n I appreciate the authors’ discussion on limitations in the conclusion section. In addition, I think the usage of a pretrained and fixed feature extractor and the detector is another major limitation of this paper. \n" ]
[ -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, 3, 5, 3 ]
[ "brcdAE7VBVd", "OE-fiT_6A5m", "SWqVULGF5U4", "nips_2022_35I4narr5A", "nips_2022_35I4narr5A", "nips_2022_35I4narr5A" ]
nips_2022_s7SukMH7ie9
Adversarial Training with Complementary Labels: On the Benefit of Gradually Informative Attacks
Adversarial training (AT) with imperfect supervision is significant but receives limited attention. To push AT towards more practical scenarios, we explore a brand new yet challenging setting, i.e., AT with complementary labels (CLs), which specify a class that a data sample does not belong to. However, the direct combination of AT with existing methods for CLs results in consistent failure, unlike a simple baseline of two-stage training. In this paper, we further explore the phenomenon and identify the underlying challenges of AT with CLs as intractable adversarial optimization and low-quality adversarial examples. To address the above problems, we propose a new learning strategy using gradually informative attacks, which consists of two critical components: 1) Warm-up Attack (Warm-up) gently raises the adversarial perturbation budgets to ease the adversarial optimization with CLs; 2) Pseudo-Label Attack (PLA) incorporates the progressively informative model predictions into a corrected complementary loss. Extensive experiments are conducted to demonstrate the effectiveness of our method on a range of benchmarked datasets. The code is publicly available at: https://github.com/RoyalSkye/ATCL.
Accept
This paper focuses on a significant and challenging problem: adversarial training (AT) with complementary labels. A naive combination of AT with existing complementary learning techniques fails to achieve good performance. The authors conduct both theoretical and empirical analyses of this phenomenon and identified two key challenges including intractable adversarial optimization and low-quality adversarial examples. Furthermore, two attack approaches are proposed accordingly: a warm-up attack to ease the adversarial optimization and a pseudo-label attack to improve the adversarial example quality. All reviewers recognize the effectiveness of the proposed method through experimental evaluations. During the discussion, the authors also successfully addressed the reviewers' questions on the problem settings, the novelty of the pseudo-label attack, warm-up strategies, etc. Based on the positive reviews and thorough discussions, we recommend the acceptance of the paper.
train
[ "tLaYQLJoIgX", "Yvkkotsj0dO", "91BuuBK9q_t", "BPTYboF200m", "-taFj7xoiKPp", "wIIszxf11eP", "T-K_K-JbzBZ", "rBUqOzjChV3", "vb5Bqa2ROSp", "MmF1rhW5Pl2", "9xvb8fTBBF6", "YiGAFtkXZz", "6e-wYzVGJsy", "GGVK5p28IC", "OOFm6Z2i248e", "IiWr7mal4jxs", "LTTDjAk26Zg", "SYs7QX0VnU-", "C8dUB90z2or", "D92bm06NeyQ", "q4E_h-xfUUc" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks the author for the through response and sorry for the late reply. The author has addressed most of my concerns, so I would raise my initial score and recommend this work.", " Dear Reviewer Jbnu,\n\nWould you mind acknowledging our rebuttal? As the discussion due is approaching, if you still have some questions, let us discuss in the openreview system.\n\nBest regards,\n\nAuthors", " Dear Reviewer Jbnu,\n\nWe appreciate your efforts in reviewing our paper. We have addressed all your questions in detail. Would you mind checking our response, and confirming if you have further questions?\n\nBest regards, \nAuthors", " Dear Reviewer Jbnu,\n\nWe appreciate your efforts in reviewing our paper. Would you mind checking our response, and is there any unclear point so that we could further clarify?\n\nBest regards, \nAuthors", " I appreciate the authors' detailed response, which addressed my concerns on the settings of this paper. Based on the thorough theoretical analysis as well as detailed empirical evaluation in the paper, I would like to raise my score and recommend this paper to be accepted.", " The authors have addressed my concerns. I would raise my score.", " Since there is a shared concern about the two problem settings *learning with noisy labels* and *learning with complementary labels*, we'd like to further clarify their connections and differences with some details.\n\nA quick conclusive message is that although CL as a problem setting is a special case of NL, **not every algorithm designed for the NL problem can be employed to solve the CL problem**. When talking about problem settings, supervised learning (where all labels are correct) and learning with CLs (where all labels are incorrect) are **the two ends of a line** named learning with NLs. Note that the range of this line is too wide, and we cannot require or expect an algorithm to cover the full range. The *critical point* is when all labels are random in which case learning a classifier is indeed impossible. For any algorithm working on the CL side of the NL line, a special knowledge (i.e., the problem is on the CL side) is needed in the algorithm design. This makes sense --- if we simply apply a general-purpose NL algorithm to supervised learning, it doesn't work well either (due to unnecessary robustness reducing statistical efficiency); similarly, if we simply apply a general-purpose NL algorithm to the CL problem, unfortunately, it might not work at all.\n\nSome machine learning methods such as *robust loss* and *robust regularization* are designed for robustness in the general sense rather than robustness against NLs. They can only work in a narrow range of the NL line when the noise rate is low enough. *Loss correction*, *sample selection*, and *label correction* are specially designed for robustness against NLs and thus can work in a wide range of the NL line when the noise rate is fairly high, provided that **correct labels can still dominate incorrect labels**. 
On class-balanced benchmarks (such as the MNIST family and CIFAR-10), we theoretically need $T$ to be diagonally dominant to enable *learnability* in the asymptotic case, and we empirically need $T$ to be **diagonally dominant by a sufficient margin** to enable *stable training* in the finite-sample case.\n\n- More specifically, sample selection and label correction require the row diagonal dominance of $T$ --- for example, on MNIST and CIFAR-10, the noise rate should be lower than 81% for symmetric noise (where the original class gets 19% of its data and each of the other classes gets 9% of the data of that class) or 45% for pairwise noise (where the original class gets 55% of its data and the next class gets 45% of the data of that class), so we require a margin of 10% beyond diagonal dominance.\n\n- On the other hand, loss correction itself has no constraint on $T$ if it is known in advance, while the estimation of $T$ requires its column diagonal dominance, or equivalently, $\arg\max_{\tilde{y}}p(\tilde{y}|x)=\arg\max_{y}p(y|x)$ for all $x$, to determine the class membership of given or found (likely) *anchor points*.\n\nTherefore, we can see that only backward/forward loss corrections and related methods built on top of them can work for the problem of learning with CLs, and at the same time the derived algorithms must also utilize the special knowledge about $T$.\n\nRegarding the **unpublished** paper entitled \"Understanding the Interaction of Adversarial Training with Noisy Labels\" (here, the word \"with\" modifies interaction rather than training), note that its goal is indeed not adversarial training with noisy labels, but **using the number of PGD steps to improve sample selection quality and thus improve the natural accuracy of learning with NLs**. According to the one-sentence summary of that paper,\n\n> Adversarial training (AT) itself is noisy labels (NL) correction; \"PGD step number\" in AT is a new criterion for sample selection.\n\nand according to its abstract,\n\n> Firstly, we find if a point is too close to its noisy-class boundary (e.g., one step is enough to attack it), this point is likely to be mislabeled, which suggests to adopt the number of PGD steps as a new criterion for sample selection to correct NL. Secondly, we confirm that AT with strong smoothing effects suffers less from NL (without NL corrections) than standard training, which suggests that AT itself is an NL correction.\n\nAs a result, it is not even possible to directly apply that method in a two-stage manner (i.e., sample selection --> adversarial training). On the CL side of the NL line, none of the labels are correct, and thus **there is no sample to select for standard or adversarial training**. Therefore, even if the aforementioned paper had been published, it would not compromise the novelty of the current paper under consideration.\n\nLast but not least, note that complementary labels are the most adversarial (yet structured) label modification, whereas adversarial examples are the most adversarial instance modification. As a consequence, showing the possibility of better classifier training than first CL and then AT is really not a small step and needs a lot of unique insights. 
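To ground the loss-correction argument: under the uniform complementary-label assumption, the transition matrix T is fully known, so forward correction needs no estimation step at all. The sketch below is a minimal illustration of that idea in the spirit of backward/forward correction; it is not code from this paper or from the cited work.

```python
import torch
import torch.nn.functional as F

K = 10
# Uniform complementary-label transition matrix: T[i, j] = P(ybar = j | y = i),
# i.e. every wrong class is equally likely to be given as the complementary label.
T = (torch.ones(K, K) - torch.eye(K)) / (K - 1)

def forward_corrected_loss(logits, ybar):
    """Push the estimated clean posterior through T, then score the observed
    complementary label, in the spirit of forward loss correction."""
    p = F.softmax(logits, dim=1)   # estimated p(y | x)
    q = p @ T                      # implied p(ybar | x); T is symmetric here
    return F.nll_loss(torch.log(q + 1e-12), ybar)

logits = torch.randn(4, K, requires_grad=True)
ybar = torch.randint(0, K, (4,))
forward_corrected_loss(logits, ybar).backward()
```

Because this T is known exactly, the column-diagonal-dominance condition needed for estimating T never enters the picture; the difficulty for CLs lies elsewhere, as the discussion above explains.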
For most cases vanilla AT under standard setting outperforms the proposed methods in CL setting in Table 1; What is the advantage of robust model trained from CL-based AT since it performs worse than models trained with normal AT?\"\n\nThe performance of AT with ordinary labels is **served as an oracle, and a reference** (measuring current performance gaps between AT with CLs and AT with ordinary labels) for future research in the studied problem setting, i.e., AT with CLs. Hence, it is not comparable to the methods with different settings. On the other hand, the vanilla AT would fail given complementary labels, and the naive combinations of vanilla AT with complementary learning methods have been widely verified not to be better than our proposed method.\n\n> I still not understand why it is necessary to introduce CL into AT to increase the difficulty of AT; why use CL to make the supervision imperfect.\n\nWe would like to mention that we are not introducing CLs deliberately to AT to achieve some objectives (e.g., try to lead to better robustness). In this paper, what we are focusing on is **\"how to equip the machine learning model with adversarial robustness when we only have complementary labels in the dataset\"**, which is a scientific/research problem, not an engineering one. We kindly refer to our response to Q1 for the motivation and research significance of AT with CLs.\n\nDue to the page limit, part of the discussion about the motivation and significance of AT with CLs is added to Appendix E, and will be moved to the introduction later.", " We sincerely appreciate all reviewers' time and efforts in reviewing our paper as well as the comments. We have updated our draft, including the additional experiments required by each reviewer, and the motivation and scientific significance of our studied setting. \n\nIn addition to the pointwise responses below, here we summarize our updates.\n\n- [Setting] We explain and highlight the motivation and scientific significance of our studied new setting (in Appendix E), namely, adversarial training with complementary labels (AT with CLs). The new problem setting itself is of scientific interests to both research areas of weakly supervised learning and adversarial learning. The studied setting has never been explored, and we hope our effort can make AT with CLs practically useful and let its trained models be able to be safely deployed in the real world in the near future.\n\n- [Method Novelty] We compare and highlight the novelty of our proposed method (in Appendix D.4). We proposed a unified framework, which naturally combines and adaptively controls two indispensable components (Warm-up Attack and Pseudo-labels Attack) throughout the adversarial optimization with CLs. Conceptually, the motivation and underlying principle of the two components in our method are closely related to the unique challenges identified in our new problem setting, which have never been revealed. Technically, the two components utilize the dynamic information of complementary labels during the adversarial optimization, which is also not been considered in previous literature. 
Comprehensive experiments are conducted to verify its rationality and effectiveness.\n- [Extra Experiments] We add the experiments about the empirical evaluation of pseudo-labels attacks (in Appendix D.6), various strategies of warm-up attack (in Appendix D.7), comparison with our newly designed methods for AT under noisy labels, and verification of proposed methods combined with more complementary learning baselines (in Appendix D.4).\n\nWe hope our responses below could address the reviewers' concerns, and we are welcome to discuss further if any point is unclear.\n\nThe authors of Paper1958.", " **Q3: Demonstrate the improvements of adding Warm-up Attack and Pseudo-label Attack with other complementary learning baselines**\n\n**A3:** To demonstrate the effectiveness of adding Warm-up (Warm-up) Attack and Pseudo-labels Attack (PLA), we conduct the experiments with other complementary learning baselines. To be specific, we run the experiments on Kuzushiji dataset following the settings described in the main paper. We summarize the results (mean with standard deviations) within 3 runs in Table 2.\n\n**Table 2.** Performance of adding Warm-up and PLA with other complementary learning baselines.\n\n| | Natural | PGD | CW | AA |\n| :-------------------- | :------------------: | :------------------: | :------------------: | :------------------: |\n| SCL_NL | 40.83($\\pm$24.03) | 32.82($\\pm$22.88) | 29.93($\\pm$22.53) | 20.86($\\pm$19.74) |\n| SCL_NL+Warm-up | **91.92($\\pm$0.29)** | 82.93($\\pm$0.58) | 79.77($\\pm$0.96) | 62.27($\\pm$0.68) |\n| SCL_NL+PLA | 39.64($\\pm$32.18) | 32.18($\\pm$30.82) | 28.37($\\pm$30.97) | 21.58($\\pm$23.01) |\n| Warm-up+PLA (SCL_NL) | 91.74($\\pm$0.85) | **86.09($\\pm$1.01)** | **83.77($\\pm$1.03)** | **67.87($\\pm$0.94)** |\n| | | | | |\n| SCL_EXP | 10.00($\\pm$0.00) | 10.00($\\pm$0.00) | 10.00($\\pm$0.00) | 8.21($\\pm$2.54) |\n| SCL_EXP+Warm-up | **89.09($\\pm$0.58)** | 80.94($\\pm$0.70) | 78.12($\\pm$0.74) | 61.17($\\pm$1.17) |\n| SCL_EXP+PLA | 10.00($\\pm$0.00) | 10.00($\\pm$0.00) | 10.00($\\pm$0.00) | 10.00($\\pm$0.00) |\n| Warm-up+PLA (SCL_EXP) | 87.58($\\pm$2.48) | **82.17($\\pm$2.45)** | **80.25($\\pm$2.41)** | **66.36($\\pm$2.02)** |\n| | | | | |\n| EXP | 10.00($\\pm$0.00) | 10.00($\\pm$0.00) | 10.00($\\pm$0.00) | 10.00($\\pm$0.00) |\n| EXP+Warm-up | **89.18($\\pm$0.21)** | **80.80($\\pm$0.21)** | **77.30($\\pm$0.35)** | 60.90($\\pm$0.25) |\n| EXP+PLA | 10.00($\\pm$0.00) | 10.00($\\pm$0.00) | 10.00($\\pm$0.00) | 10.00($\\pm$0.00) |\n| Warm-up+PLA (EXP) | 84.63($\\pm$0.94) | 79.37($\\pm$1.27) | 77.23($\\pm$1.42) | **65.04($\\pm$1.33)** |\n| | | | | |\n| LOG | 32.66($\\pm$25.50) | 26.87($\\pm$22.66) | 24.90($\\pm$21.20) | 18.78($\\pm$16.95) |\n| LOG+Warm-up | 91.31($\\pm$0.53) | 82.77($\\pm$0.33) | 80.31($\\pm$0.25) | 62.23($\\pm$0.42) |\n| LOG+PLA | 57.78($\\pm$33.80) | 53.10($\\pm$30.49) | 51.11($\\pm$29.08) | 39.65($\\pm$21.14) |\n| Warm-up+PLA (LOG) | **91.60($\\pm$0.49)** | **85.88($\\pm$0.48)** | **83.74($\\pm$0.35)** | **68.75($\\pm$0.68)** |\n\nSimilar to the results shown in Figure 8(c) of Appendix D.4, the improvement of adding Warm-up Attack and Pseudo-label Attack can be also found in other complementary learning baselines. We would also like to mention that *only with Warm-up* seems to be empirically effective for datasets like MNIST and Kuzushiji. 
However, based on our previous experiments, all complementary learning baselines equip only with Warm-up result in unsatisfactory robustness on the CIFAR-10 datasets (e.g., LOG+Warm-up can only achieve 28.25% of PGD20 test accuracy on CIFAR10 as shown in Figure 3(c)). Also, if *only with PLA*, the adversarial optimization tends to be unstable or even fail (e.g., the results for SCL+EXP+PLA and EXP+PLA in Table 2). The above two facts further verify the benefits of the proposed *unified framework* (Warm-up+PLA), which can achieve consistently better performance as demonstrated in our experiments. \n\nWe will update the comprehensive results and corresponding analysis in Appendix D.4.", " **Q2: About the novelty of this work. Both proposed attack methods appear in the literature.**\n\n**A2:** There are many facets to evaluate the novelty of the work. In our opinion, the main novelty of our work is the proposed new setting (AT with CLs), the proposed unified framework based on our theoretical and empirical analysis, and the potential insights for future research work on AT with various imperfect supervision. Complementary learning illustrates the possibility of training ordinary classifiers even if all the labels given for training are wrong. We take one step further by taking robustness benefits from adversarial training with ordinary labels, and tackling the unique challenges of AT with CLs. The studied new problem setting itself is of scientific interests to both research areas of weakly supervised learning and adversarial learning. \n\nAs for the technical novelty, first, the proposed method is conceptually novel since it is naturally based on the unique challenges identified in AT with CLs, which have both theoretical and empirical insights for the new research problem. It has a different motivation and underlying principle from other literature. Different from simply labeling the unlabeled data [d] in a one-shot manner (**the same as the two-stage baseline**), our Pseudo-label Attack (PLA) further utilizes the dynamics of adversarial optimization with CLs (e.g., weight between CL and PL according to the learning status). Different from improving training stability in the large batch training in [b] or escaping from the suboptimal random initialization in [c], our Warm-up Attack (Warm-up) is for easing the adversarial optimization with CLs considering the CLs information in different epsilon balls. \n\nSecond, except for the high-level differences, our proposed method has technical differences from the [b-d]. As for PLA, [d] only considers the pseudo-label (PL) generated by the fixed pre-trained model as below, which is not optimized further.\n\n> ```Meta-Algorithm 1 in section 4 of [d]:``` \n>\n> 1. Learn a model using standard training; \n> 2. Generate pseudo-label for unlabeled data using the fixed standard model;\n\nWhile our method generates PL based on the cashed probability to improve optimization stability with CLs, and adaptively considers both PLs and CLs during the optimization process (refer to Eq.(10)). As for Warm-up, we also consider different strategies (refer to Section 4.3 and Appendix D.7) that can achieve our unique objective instead of only controlling the radius of the epsilon ball or learning rate that is closely related to their corresponding research problem in [b,c]. 
\n\nMore importantly, our proposed method is a *unified framework* that integrates the two critical components together in a quite natural way and controls them adaptively throughout the adversarial optimization with CLs. Both two components are indispensable, and are proposed to solve the corresponding challenges of AT with CLs, based on our theoretical and empirical analysis. We kindly refer to our response to Q3 for the ablation study of each component combined with several complementary learning baselines, without either of the two components and reasonable scheduling of them, the adversarial optimization with CLs will be extremely unstable and result in a model with poor robustness, or even failure (see also in Section 5.2 and Appendix D.4). \n\n[b] A closer look at deep learning heuristics: Learning rate restarts, warmup and distillation.\n\n[c] On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them.\n\n[d] Unlabeled Data Improves Adversarial Robustness.", " Thanks for reviewing our paper, with valuable comments and suggestions. Here are our detailed responses to your suggestions/questions.\n\n**Q1: Compare with the literature considering the interaction of AT with Noisy Labels [a].**\n\n**A1:** Thanks for your suggestion. However, we would like to mention that paper [a] does not target for adversarial training under imperfect supervision. It studied and explored the interaction of AT with Noisy Labels (NLs). That paper found AT itself is more robust than natural training and the number of PGD steps needed to attack a point can be used together with the small-loss trick (i.e., **they use AT to help label noise**). Except for the main difference, combining AT with robust learning methods for noisy labels is also not compatible with our problem setting. Below, we briefly design a new algorithm in a three-stage way by leveraging the insights in [a], and conduct experiments to evaluate its performance on AT with CLs:\n\n- First of all, [a] did not focus on solving the adversarial training on noisy datasets. It explored the interactions of adversarial training and noisy labels (e.g., the smoothing effect of adversarial training). Note that [a] did not propose a specific method targeted for conducting AT under imperfect supervision. Here, we leverage the specific empirical insights from noisy labels [1], that is, clean samples tend to have small losses and small pgd steps proposed in [a]. However, it is not applicable to complementary learning (CL is a special case of NL with a 100% noise rate). Therefore, it is hard to design a one-stage method leveraging their empirical insight from NLs, for AT with CLs. \n- Moreover, if we consider the two-stage baseline, it is possible to add a new stage that conducts sample selection based on the empirical insights from [a], after the first (complementary learning) stage. In this way, the newly designed algorithm consists of three stages (i.e., complementary learning -> sample selection -> vanilla AT). Specifically, it first converts the given complementary dataset $(x, \\bar{y})$ into a noisy dataset $(x, \\tilde{y})$ through the technique of complementary learning, then leverages the insights of [a] to select potential clean samples on the generated noisy dataset, and finally conducts vanilla AT with the selected samples. 
However, the sample selection phase usually requires knowing the noise rate in advance, which is impractical/impossible to accurately estimate in our settings (since we do not have any information about the ordinary label). \n- However, we still design the algorithm in a three-stage way and conduct an experiment on Kuzushiji following the setting of the two-stage baseline (as described in section 5 and Appendix D.1), by assuming the ordinary label of each data $(x, y, \\bar{y})$ is known in advance. During the sample selection phase, we set the estimated noise rate as 1-$ACC_{Tr}$, where $ACC_{Tr}$ is the natural training accuracy of the model in the last epoch. But note that this is an unfair setting since only an *oracle* is able to know that. As Table 1 shown, we do not observe superior performance in the experiment.\n- Actually, it is expected that the performance of the designed three-stage method is similar to that of the two-stage baseline, since there are no essential differences between the label correction (e.g., the first stage) and the following sample selection (i.e., the second stage). The generated noisy dataset $(x, \\tilde{y})$ (after the first stage) is instance-dependent noise (IDN) rather than class-conditional noise (CCN) on which most of the methods in the literature of CLs / NLs are focusing.\n\nOverall, we summarize the comparison results in Table 1.\n\n**Table 1.** Performance Comparison.\n\n| Method | Natural | PGD | CW | AA |\n| :---------------------------: | :------------------: | :------------------: | :------------------: | :------------------: |\n| Two-stage baseline | 89.75($\\pm$0.42) | 82.91($\\pm$1.01) | 80.21($\\pm$1.27) | 64.57($\\pm$1.79) |\n| Three-stage with [a] insights | 89.00($\\pm$0.67) | 82.37($\\pm$1.25) | 80.29($\\pm$2.00) | 64.09($\\pm$1.34) |\n| Warm-up+PLA | **91.60($\\pm$0.49)** | **85.88($\\pm$0.48)** | **83.74($\\pm$0.35)** | **68.75($\\pm$0.68)** |\n\n[a] Understanding the Interaction of Adversarial Training with Noisy Labels.\n\n[1] A closer look at memorization in deep networks.", " Thank you for reviewing our paper, with constructive comments and strong support. Here are our detailed responses to your suggestions/questions.\n\n**Q1: About the novelty of the proposed method. The performance improvement above the simple two-stage baseline is not significant.**\n\n**A1:** First, the proposed method is conceptually novel since it is naturally based on the unique challenges identified in the new setting, i.e., AT with CLs, which have both theoretical and empirical insights for the new research problem. It has a different motivation and underlying principle from other literature. Different from engineeringly simplifying the problem in the two-stage baseline, our Pseudo-label Attack (PLA) further utilizes the dynamics of adversarial optimization with CLs. Different from escaping from the suboptimal random initialization in [1], our Warm-up Attack (Warm-up) is for easing the adversarial optimization with CLs considering the CLs information in different epsilon balls.\n\nSecond, except for the high-level differences, our proposed method has technical differences from the above-mentioned methods. As for PLA, the two-stage baseline does not consider the information of CLs (at the second stage), while our method adaptively considers both PLs and CLs during the optimization process. 
As for Warm-up, we also consider different strategies (refer to Section 4.3 and Appendix D.7) that can achieve our unique objective instead of only controlling the radius of the epsilon ball that is closely related to the research problem in [1]. \n\nMore importantly, our proposed method is a *unified framework* that integrates the two critical components together in a quite natural way and controls them adaptively throughout the adversarial optimization with CLs. Both two components are indispensable, and are proposed to solve the corresponding challenges of AT with CLs, based on our theoretical and empirical analysis. We kindly refer to our analysis in Section 5.2 and Appendix D.4, without either of the two components and reasonable scheduling of them, the adversarial optimization with CLs will be extremely unstable and result in a model with poor robustness, or even failure.\n\nAs for the performance improvement above the two-stage baseline, as shown in Table 1, on Kuzushiji/CIFAR-10, our method could improve 4.18%/1.39% in terms of AA and 1.85%/0.90% in terms of natural accuracy, which are considered as significant improvements in the literature of adversarial training. \n\n[1] On the loss landscape of adversarial training: Identifying challenges and how to overcome them.", " **Q2: Other discussions about AT with imperfect supervision**\n\n**A2:** \n\n> It can be more reasonable if the authors can demonstrate their CL-based AT performs better than vanilla AT when the training data is noised, instead of first using CL to make the supervision imperfect and then try to conduct AT under imperfect supervision.\n\nFirst, considering AT with noisy labels and complementary labels are two different research problems. Recently, some work studied the interaction of adversarial training with noisy labels [1] (e.g., discovered the smoothing effect of AT). However, they did not target for AT under imperfect supervision, and did not mainly study how to conduct AT with noisy labels (they used AT to help label noise instead). Therefore, \"*how to achieve high robustness if the clean dataset is partly corrupted*\" is still an open research problem. While the CL is a special case of the NL since the clean dataset is fully corrupted. We kindly refer to our discussion in the response to Q1 of Reviewer Jbnu, we briefly design a new three-stage algorithm that leverages their empirical insights [1], and observe that its performance is not superior to the two-stage baseline, even in an unfair setting. Second, we would like to clarify that we are considering AT under CLs, instead of using CLs to further enhance AT in the normal setting with perfect supervision. Finally, we conduct the performance evaluation of vanilla AT and our method on Kuzushiji and CIFAR10 with symmetry noise rates of 95% and 100% (similar to CLs where the labels are **fully corrupted**), and show the results in Table 1. Note that we keep the same setups as described in our paper, and do not fine-tune any hyperparameters. 
It is clearly observed that our method performs consistently better than the vanilla AT in such cases.\n\n**Table 1.** Performance evaluation of vanilla AT and our method on the noisy dataset [Last/Best checkpoints].\n\n| | Natural | PGD | CW | AA |\n| :------------------: | :---------------: | :---------------: | :---------------: | :---------------: |\n| ***Kuzushiji-95%*** | | | | |\n| Vanilla AT | 10.00 / 10.00 | 10.00 / 10.00 | 10.00 / 10.00 | 9.97 / 10.00 |\n| Ours | **74.09 / 74.22** | **68.07 / 68.78** | **66.88 / 67.67** | **56.04 / 57.21** |\n| ***Kuzushiji-100%*** | | | | |\n| Vanilla AT | 10.00 / 10.00 | 10.00 / 10.00 | 10.00 / 10.00 | 10.00 / 10.00 |\n| Ours | **90.65 / 90.92** | **84.31 / 85.27** | **82.14 / 83.27** | **66.85 / 67.80** |\n| ***CIFAR10-95%*** | | | | |\n| Vanilla AT | 10.00 / 10.00 | 10.00 / 10.00 | 10.00 / 10.00 | 10.00 / 10.00 |\n| Ours | **26.92 / 27.34** | **21.92 / 22.10** | **21.67 / 21.79** | **21.59 / 21.71** |\n| ***CIFAR10-100%*** | | | | |\n| Vanilla AT | 10.00 / 10.00 | 10.00 / 10.00 | 10.00 / 10.00 | 10.00 / 10.00 |\n| Ours | **64.02 / 63.84** | **41.98 / 42.33** | **40.04 / 40.47** | **39.23 / 39.75** |\n\n[1] Understanding the Interaction of Adversarial Training with Noisy Labels.", " Thanks for reviewing our paper, with the concerns about our setting. Here are our detailed responses to your suggestions/questions.\n\n**Q1: Why should we consider AT with CLs?**\n\n**A1:** First of all, ordinary training with complementary labels (i.e., complementary learning) is a promising problem setting in weakly supervised learning. It has been included as a chapter in the book \"Machine Learning from Weak Supervision: An Empirical Risk Minimization Approach\", Adaptive Computation and Machine Learning series, The MIT Press. Its success illustrates a possibility of training ordinary classifiers even when all the labels given for training are wrong (thus the only possibility where we cannot train classifiers is that the instances and the labels are statistically independent, i.e., the labels are completely random). Ordinary training with complementary labels can be employed in certain cases to reduce labeling burden or even make labeling a specific unlabeled dataset from impossible to possible, since the annotators do not need to be domain experts now.\n\nOn the other hand, adversarial training with ordinary labels is extremely popular nowadays. According to the Test of Time Award speech of ICML 2022, there are already more than 10K related papers in the more general adversarial learning where adversarial training is a major branch. Most trustworthy machine learning and computer security researchers, if not all of them, think that adversarial robustness is already a sanity check before deploying a trained model in the real world. In other words, a model with high natural accuracy but low adversarial accuracy can only be deployed in some strictly controlled environments, while a model to be deployed in the wild must have both high natural accuracy and high adversarial accuracy.\n\nTherefore, we are interested in going one step further for ordinary training with complementary labels by taking robustness benefits from adversarial training with ordinary labels, namely, we are considering adversarial training with complementary labels. Unfortunately, naive combinations of the two cannot work under the new problem setting, and we have proposed conceptually and technically novel methods to handle the new problem setting. 
That being said, the new problem setting itself is of scientific interests to both research areas of weakly supervised learning and adversarial learning. Note that NeurIPS is a scientific conference, and hence usefulness is not the exclusive measure of importance, especially for papers proposing new problem settings. Only if a new problem setting has been proposed and the corresponding paper has been published, can follow-up methods be proposed, making a topic more and more popular and at the same time more and more useful in practice. We hope our effort can make adversarial training with complementary labels practically useful and let its trained models be able to be safely deployed in the real world in the near future. Please focus on science and evaluate our scientific contributions.", " **Q2: Ablation study on strategies of Warm-up attack.**\n\n**A2:** Thanks for your suggestion. We try two more strategies of Warm-up attack, and conduct experiments on Kuzushiji dataset: (a) we only change the number of attack steps $k$; (b) similar to the original implementation, we still control the radius of the epsilon ball, but accompanied by the proportional increase of $k$ instead of the step size $\\alpha$. We summarize the results in Table 3.\n\n**Table 3.** Ablation study on strategies of Warm-up Attack [Last/Best checkpoints].\n\n| | Natural | PGD | CW | AA |\n| :------: | :----------------------------------: | :----------------------------------: | :----------------------------------: | :----------------------------------: |\n| original | 91.26($\\pm$0.45) / 91.60($\\pm$0.49) | 84.96($\\pm$0.46) / 85.88($\\pm$0.48) | 82.79($\\pm$0.50) / 83.74($\\pm$0.35) | 67.51($\\pm$0.74) / 68.75($\\pm$0.68) |\n| (a) | 60.09($\\pm$35.42) / 89.22($\\pm$0.66) | 55.55($\\pm$32.21) / 81.94($\\pm$0.90) | 54.02($\\pm$31.13) / 79.14($\\pm$0.93) | 45.79($\\pm$25.31) / 64.41($\\pm$0.94) |\n| (b) | 91.48($\\pm$0.39) / 91.82($\\pm$0.39) | 85.43($\\pm$0.23) / 86.12($\\pm$0.37) | 83.26($\\pm$0.10) / 84.04($\\pm$0.43) | 67.52($\\pm$1.37) / 69.43($\\pm$0.40) |\n\nThe results demonstrate that trial (a) may suffer from unstable adversarial optimization, while trial (b) is comparable and slightly better than our original implementation. We update the detailed results with corresponding analysis in Appendix D.7 and Figure 9(b). ", " Thank you for reviewing our paper, with constructive comments and strong support. Here are our detailed responses to your suggestions/questions.\n\n**Q1: Empirical evaluation of pseudo-labels.**\n\n**A1:** Thanks for pointing this out, we conduct two experiments for the empirical evaluation of pseudo-labels to comprehensively verify this point. \n\nFirst, we report the Acc. of pseudo-labels (%) w.r.t. the slowly increased $\\epsilon$ ball in Table 1, where the $\\epsilon$ ball is relatively small. To be specific, the radius of the epsilon ball is increased from 0 to 0.3 within $E_\\mathrm{s}=50$ epochs, following our proposed scheduler as shown in Section 4.3 and Figure 3(a). To avoid the effect of (natural complementary learning) warmup period on the analysis of accuracy dynamics of pseudo-labels, we set the epoch of warmup $E_\\mathrm{i}=0$. We keep the other setups fixed and rerun the experiments on Kuzushiji dataset. 
\n\nThe results show that the accuracy of pseudo-labels rises rapidly at the early stage of adversarial optimization with CLs, which demonstrates that a small epsilon ball (e.g., $\epsilon$ is increased from 0 to 0.029 within the first 10 epochs) is helpful for the formation of a discriminative model that tends to assign high confidence to the ordinary labels.\n\n**Table 1.** Acc. of pseudo-labels (%) w.r.t. the slowly increased $\epsilon$ ball in each Epoch. \n\n| Epoch | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 50 |\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| **Acc. of pseudo-labels (%)** | 12.52($\pm$0.42) | 25.44($\pm$2.26) | 46.78($\pm$4.27) | 61.57($\pm$4.53) | 71.81($\pm$6.13) | 79.70($\pm$4.20) | 85.27($\pm$1.63) | 87.55($\pm$1.05) | 89.00($\pm$1.05) | 90.39($\pm$1.15) | 97.22($\pm$0.18) |\n| $\epsilon_e$ | 0.0003 | 0.0012 | 0.0027 | 0.0047 | 0.0073 | 0.0105 | 0.0143 | 0.0186 | 0.0234 | 0.0286 | 0.3000 |\n| $\epsilon_e/\epsilon_{\max}$ | 0.10% | 0.39% | 0.89% | 1.57% | 2.45% | 3.51% | 4.76% | 6.18% | 7.78% | 9.55% | 100.00% |\n\nSecond, we report the Acc. of pseudo-labels (%) w.r.t. the rapidly increased $\epsilon$ ball in Table 2, where we only modify $E_\mathrm{s}=10$ without changing other setups. \n\nIn this way, at the beginning of training, the adversarial data are found within a relatively large epsilon ball (also with a rapid growth rate). The results demonstrate that the model fails to assign high confidence to the ordinary label in such a case, and the adversarial optimization may even fail (observed in 2 out of 3 runs). \n\n**Table 2.** Acc. of pseudo-labels (%) w.r.t. the rapidly increased $\epsilon$ ball in each Epoch. \n\n| Epoch | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 50 |\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| **Acc. of pseudo-labels (%)** | 11.08($\pm$0.04) | 12.66($\pm$1.21) | 12.93($\pm$1.48) | 12.98($\pm$1.52) | 13.00($\pm$1.55) | 13.02($\pm$1.56) | 13.02($\pm$1.56) | 13.02($\pm$1.56) | 13.02($\pm$1.56) | 13.01($\pm$1.55) | 13.02($\pm$1.56) |\n| $\epsilon_e$ | 0.0073 | 0.0286 | 0.0618 | 0.1036 | 0.1500 | 0.1964 | 0.2382 | 0.2714 | 0.2927 | 0.3000 | 0.3000 |\n| $\epsilon_e/\epsilon_{\max}$ | 2.45% | 9.55% | 20.61% | 34.55% | 50.00% | 65.45% | 79.39% | 90.45% | 97.55% | 100.00% | 100.00% |\n\nOverall, the results demonstrate that the model tends to assign high confidence to ordinary labels when the epsilon ball is small. Along with training, we can obtain pseudo-labels with high accuracy using our proposed cached (exponential moving average) probability. We update the whole results in Appendix D.6 and Figure 9 (a).", " The authors explore a brand new yet challenging setting that studies adversarial training (AT) with complementary labels (CLs). Generally, it is valuable to involve CLs in AT since imperfect supervision can be common in real AT scenarios. 
However, the direct combination of AT and CL consistently leads to failure according to extensive empirical observations.\n\nTo explore this issue, the authors provide theoretical evidence that there exists an inconsistency between the complementary risk and the ordinary risk of adversarial optimization with limited CLs. Together with empirical studies of gradients, they identify two key challenges: intractable adversarial optimization and low-quality adversarial examples.\n\nBased on this analysis, a new attack strategy is introduced. A warm-up is adopted to ease the difficulty of adversarial optimization. With the model prediction as supplementary information, the adversarial training gradually incorporates the pseudo labels predicted by the model. The authors conduct extensive experiments on different datasets and compare the proposed algorithm with various baselines to demonstrate its effectiveness. Pros:\n\n* In general, this paper is well-written and easy to follow. The motivation is clear. The studied setting is both significant and challenging.\n* Both the theoretical analysis of the inconsistency between empirical risks under the assumption of limited CLs and the empirical analysis of the difficulty of adversarial optimization as well as adversarial example generation are intriguing.\n* The proposed techniques, including warm-up and pseudo-labeling, are grounded in the theoretical and empirical analysis and are simple yet natural.\n* The authors provide a sufficient evaluation of the proposed algorithm. The evaluation is conducted on MNIST, Kuzushiji, CIFAR-10 and SVHN, and includes various SOTA complementary losses for comparison. The proposed algorithm consistently achieves better adversarial robustness as well as stability.\n\nCons:\n\n* In Section 4.3, the authors propose to use the model prediction as strong supplementary information, since the model tends to assign high confidence to the ordinary label when the epsilon ball is small enough. However, it is difficult to find empirical evidence of this in the experimental section. It would be better for the authors to report the accuracy of pseudo labels in some scenarios.\n* The authors introduce a warm-up attack which controls the radius of the epsilon ball. However, the number of attack steps is fixed during warm-up. It would be better for the authors to conduct more ablation studies on it. 1. Please provide an empirical evaluation of pseudo labels.\n2. Please include more ablation studies of warm-up strategies of attacks. Yes, the authors have discussed the limitations and potential negative societal impacts in Appendix E.", " This paper focuses on how to make adversarial training (AT) applicable in a new setting where complementary labels (CLs) instead of ground-truth labels are given for AT. The authors claim that the main obstacles for CL-based AT are intractable adversarial optimization and low-quality adversarial examples. Based on this, the authors propose to solve the problems with a warm-up attack and a pseudo-label attack. Experimental results show that the proposed method successfully builds robust models in the CL setting while many baselines fail to obtain a robust model.\n\nStrength: the proposed method is neat and reasonable. The authors first analyze the reasons why AT fails in the CL setting and then design corresponding solutions to mitigate the problems. \n\nWeakness: \n\n1. I do not quite agree with the setting proposed by the authors. AT with complementary labels considers the case where no perfect supervision is available for training a robust model. 
However, the authors did not make it clear why we should consider AT in such a setting. I admit that exploring the performance of AT on noisy data might be necessary and practical, but exploring the performance of AT in the CL setting seems to make adversarial training unnecessarily harder. It would be more reasonable if the authors could demonstrate that their CL-based AT performs better than vanilla AT when the training data are noisy, instead of first using CLs to make the supervision imperfect and then trying to conduct AT under imperfect supervision.\n2. Consequently, training under the CL setting does not lead to better robustness, mostly because of the imperfect supervision, as shown in the experiments. In most cases, vanilla AT under the standard setting outperforms the proposed method in the CL setting in Table 1. Though such an unfair setting gives vanilla AT an advantage, I still think the proposed setting is not reasonable. To better demonstrate the effectiveness, a noisy dataset for AT may be considered, and the authors can compare the performance on such a dataset between vanilla AT and their CL-based AT method. I still do not understand why it is necessary to introduce CLs into AT to increase the difficulty of AT. As discussed above, why is a noisy dataset for AT not considered? I would really appreciate it if the authors could explain further why CLs are used to make the supervision imperfect. What is the advantage of a robust model trained with CL-based AT, since it performs worse than models trained with normal AT? The authors have addressed the limitations and potential negative societal impacts.", " This paper proposes to address a new problem: adversarial training with Complementary Labels (CLs). A naive combination of adversarial training and CLs fails to yield good performance. The authors identify the problem with this naive combination and propose to use the Warm-up Attack and Pseudo-Label Attack to address it. The proposed method yields a performance improvement over the naive combination and a simple two-stage method. Strengths:\n1. The writing is good and easy to follow\n2. Thorough theoretical analysis is provided\n3. The target problem has never been explored before\n4. The unique challenge of this problem is identified and solved\n\n\nWeakness:\n1. Limited novelty in the proposed method. The proposed Pseudo-Label Attack is very similar to the simple two-stage baseline. Also, the warm-up attack has been explored in the previous work [1]. Meanwhile, the performance improvement over the simple two-stage baseline is not significant.\n\n[1] Liu, Chen, et al. "On the loss landscape of adversarial training: Identifying challenges and how to overcome them." Advances in Neural Information Processing Systems 33 (2020): 21476-21487. My main concern is the novelty of the proposed method. It would be good if the authors could further explain this. No negative societal impact is found.", " This paper aims to propose an effective adversarial training (AT) method for the scenario where only imperfect supervision is available. More specifically, the paper shows that when complementary labels (CLs) (i.e., labels for non-ground-truth classes) are available, a direct combination of AT and CL will fail. To address this limitation, two attack approaches, the warm-up attack and the pseudo-label attack, are proposed. The former gradually increases the attack budget over the training epochs, and the latter uses the pseudo-label of the model prediction to generate the adversarial example. 
Experiments demonstrate the effectiveness of the proposed attack methods under the imperfect supervision scenario. Strength\n\n1. This work first proposes adversarial training together with complementary labels. To the best of my understanding, this is the first work to study this problem.\n\n2. The motivation for studying adversarial training in the imperfect-data scenario is strong and practical. \n\n3. Algorithm 1 is clear and simple to follow.\n\nWeakness\n\n1. In L24, the authors mentioned that adversarial training (AT) with imperfect supervision has received less attention. While this work mainly studies the use of complementary labels, there are papers like [a], which studies AT with noisy labels. The authors are suggested to compare the proposed method with [a], which also studies AT in the imperfect-data regime.\n\n2. The novelty of the proposed method is a concern of this work. The warm-up technique has been widely used in deep learning. For example, [b] studies the warm-up technique used to adjust the learning rate. [c] is somewhat similar to the proposed Warm-up Attack, where the attack budget gradually increases (see Section 5 of [c]). On the other hand, the pseudo-label attack mainly stabilizes the adversarial example generation by using the pseudo-label from the model prediction. This has been widely used, for example, when applying adversarial training under semi-supervised learning [d] (see Meta-Algorithm 1 in Section 4 of [d]). Since both proposed attack methods appear in the literature, the authors are suggested to highlight the novelty of the proposed methods.\n\n3. Figure 3 only conducts an ablation on the LOG method; does the improvement from adding the Warm-up Attack and Pseudo-label Attack also apply to other complementary learning baselines?\n\n\n\n[a] Understanding the Interaction of Adversarial Training with Noisy Labels\n[b] A closer look at deep learning heuristics: Learning rate restarts, warmup and distillation\n[c] On the Loss Landscape of Adversarial Training: Identifying Challenges and How to Overcome Them\n[d] Unlabeled Data Improves Adversarial Robustness\n The authors are suggested to address the concerns in the weakness section, especially the novelty of this work (e.g., the proposed 2 attacks). Yes, the authors have provided the checklist and discussed the broader impact in the supplemental material. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "q4E_h-xfUUc", "q4E_h-xfUUc", "q4E_h-xfUUc", "q4E_h-xfUUc", "OOFm6Z2i248e", "6e-wYzVGJsy", "nips_2022_s7SukMH7ie9", "C8dUB90z2or", "nips_2022_s7SukMH7ie9", "q4E_h-xfUUc", "q4E_h-xfUUc", "q4E_h-xfUUc", "D92bm06NeyQ", "C8dUB90z2or", "C8dUB90z2or", "SYs7QX0VnU-", "SYs7QX0VnU-", "nips_2022_s7SukMH7ie9", "nips_2022_s7SukMH7ie9", "nips_2022_s7SukMH7ie9", "nips_2022_s7SukMH7ie9" ]
nips_2022_DGwX7wSoC-
Stationary Deep Reinforcement Learning with Quantum K-spin Hamiltonian Equation
A foundational issue in deep reinforcement learning (DRL) is that \textit{Bellman's optimality equation has multiple fixed points}---failing to return a consistent one. A direct evidence is the instability of existing DRL algorithms, namely, the high variance of cumulative rewards over multiple runs. As a fix of this problem, we propose a quantum K-spin Hamiltonian regularization term (H-term) to help a policy network stably find a \textit{stationary} policy, which represents the lowest energy configuration of a system. First, we make a novel analogy between a Markov Decision Process (MDP) and a \textit{quantum K-spin Ising model} and reformulate the objective function into a quantum K-spin Hamiltonian equation, a functional of policy that measures its energy. Then, we propose a generic actor-critic algorithm that utilizes the H-term to regularize the policy/actor network and provide Hamiltonian policy gradient calculations. Finally, on six challenging MuJoCo tasks over 20 runs, the proposed algorithm reduces the variance of cumulative rewards by $65.2\% \sim 85.6\%$ compared with those of existing algorithms.
Reject
The paper proposes to add a regularisation term H to RL algorithms in order to work around issues caused by the multiple fixed points of the Bellman’s optimality equation. The added H term is inspired by quantum field theory, specifically the K-spin Ising model. All reviewers thought this was an interesting idea, but by the end of the review period, there remained some problems with this paper. Indeed, this paper is not a theory paper, and there is no mathematical proof that the added H term does accomplish the stated goal of variance reduction. This leaves us with empirical evidence. Unfortunately, as was pointed out by reviewers, "Experiment is limited to the 6 MuJoCo tasks", which is not enough to convince that the algorithm should generally work. Finally, many reviewers were confused by the claim that PPO solves the Bellman Optimality Equation. By the end of the review, not all reviewers were convinced this problem had been resolved. This point should be clarified, and it would be better for the paper to go through a new round of reviews before being accepted for publication.
train
[ "u0GE2bFfrf", "3nHlaClIgP9", "YlGgTa6ideC", "Ib90PbzwOEx", "3mtHRjay2s3", "Q2BP7OYq4l", "PeIbuVsHfS", "Dr_eklqeIRL", "_Y8eI4Qyivp", "2G3QBX5tBZ4", "Tf6OPfb20QI", "gEuv_7cukjg", "E48uwlhVFLb", "fXOT2Umw8mw", "zgiUUrgpkmS", "km2lSDnl8a3", "kUSX5cE6uoF", "rHkhf77PymIS", "ih8Veh64rSC", "BR-PBT8GD3h", "1-HzbwRF6Rq", "69fOyPd0BmG", "df-3O7rWju", "48QfxQ3EYmM", "QMXeup90k36" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors sincerely thank all reviewers and area chair. The authors enjoy the discussions and are happy that some key points reached a consensus. \n\nTo recap, this work has made the following major contributions.\n1. Per Reviewer 65t2 and Reviewer Reviewer UzCH’s suggestion, the authors added Appx. E to include theoretical analysis of gradient’s variance reduction brought by the proposed H-term.\n1. The authors had an extensive discussion with Reviewer 65t2 about the motivation. This work targets a foundational issue that Bellman’s optimality equation has multiple fixed points—failing to return a consistent one. Such an issue of multiple fixed points is quite common in RL practice, which is suspected to be an unavoidable obstacle when addressing practitioners’ major criticism of the highly unstable performance of existing DRL algorithms.\n1. The authors had a fruitful interaction with Reviewer JucZ. As a result, Section 3.2 has been substantially revised to investigate the Ising formulation of MDP/RL from a glass of Monte Carlo Gradient Estimator.\n1. An interesting point to restate is that under the Ising model (a quite universal model) analogy of MDP/RL, the Hamiltonian equation has a clear physical meaning, namely, it measures the “energy” of a policy. For this physically-inspired H-term, we derived a variant of the policy gradient estimation, as a regulation term (Alg. 1 in Line 15). Such an add-on term turns out to be rather simple to implement and delivers substantial performance improvements.\n\nEven though there are so many great works in the DRL community, the authors are happy to address such a foundational issue and present to the community a simple and effective add-on H-term, inspired by Ising Model and Hamiltonian equation.\n\nNote that the manuscript has been updated in accordance with the above responses, mainly Section 3.2 and Appx. E.\n", " > Finally, the argument that the Hamiltonian helps reduce the variance is still not convincing to me, given that there is no difference between the Hamiltonian raised in (7) and cumulative rewards. It would be more convincing if the authors could provide a more rigorous mathematical justification.\n\nIf the reviewer agrees with the above responses regarding H-term reformulation and Bellman’s optimality equation and agrees that the Hamiltonian equation in (7) is a novel foundational perspective, the authors would like to reason the impressive variance reduction results in Table 2 as follows:\n1. There are multiple fixed points (policies) in existing DRL algorithms, due to Bellman’s Optimality Equation, as shown by three examples in Fig. 1 (the case of $\\gamma = 1$) and Fig. 5 (the case of $\\gamma \\in (0, 1)$) which is further supported by the empirical experiments in Fig. 2.\n1. The conventional PPO algorithm (note that we also tested other DRL algorithms) randomly converges to one of several policies, resulting in high variance. This is empirically shown in the third column of Table 2, \n1. Ising Model in Table 1 measures the “energy” of a policy, thus the proposed $H$-term helps the policy network stably converge to a physically stationary policy, namely, the lowest-energy configuration of a system.\n\nTherefore, the Alg. 1 will converge to a policy with the smallest $H$-value, thus the variance of multiple trainings will be reduced.\n\nIn Appx. 
E (newly updated version), we also provide a rigorous mathematical justification that the added H-term will result in a reduced variance of the gradient.", " Next, we would like to provide our response to “the reviewer expects to see a theorem attached to this claim with a mathematical proof of how the issue is fixed under the proposed framework” and “a proof of how the variance issue is addressed by Hamiltonian regularization is needed.”\n\nIn Appx. E (newly updated version), we also provide a rigorous mathematical justification that the added H-term will result in a reduced variance of the gradient. And the authors would like to reason the impressive variance reduction results in Table 2 as follows:\n1. There are multiple fixed points (policies) in existing DRL algorithms, due to Bellman’s Optimality Equation, as shown by three examples in Fig. 1 (the case of $\\gamma = 1$) and Fig. 5 (the case of $\\gamma \\in (0, 1)$) which is further supported by the empirical experiments in Fig. 2.\n1. The conventional PPO algorithm (note that we also tested other DRL algorithms) randomly converges to one of several policies, resulting in high variance. This is empirically shown in the third column of Table 2, \n1. Ising Model in Table 1 measures the “energy” of a policy, thus the proposed H-term helps the policy network stably converge to a physically stationary policy, namely, the lowest-energy configuration of a system.\n1. Therefore, the Alg. 1 will converge to a policy with the smallest $H$-value, thus the variance of multiple trainings will be reduced.\n", " Thanks very much for this detailed clarification of possible confusion. The authors would like to take this opportunity to communicate with the reviewer.\n\nFirst, the authors agree that “the Bellman operators operate on the space of bounded real-valued functions over $S$ or $S\\times A$, not the space of policies.” In Fig. 1 (the case of $\\gamma = 1$) and Fig. 5 (the case of $\\gamma \\in (0, 1)$), we show that there are multiple feasible solutions for the value function, not limiting to the reviewer’s mentioned case “the same optimal value function, e.g., when $Q^*(s,a1)=Q^*(s,a2)$, or $V^\\pi_1(s)=V^\\pi_2(s)=V^*(s)$”, but different functions $Q$’s or $V$’s.\n\nSecond, the authors are quite aware of the well-known uniqueness result that “the Bellman optimality operator has a unique fixed point due to the monotonicity and contraction properties, and the Banach fixed-point theorem”. For example, the Bellman optimality operator is a contraction over a complete metric space of real numbers with a metric L-infinity norm. The authors would like to point out that such a result holds under certain sufficient conditions.\n\nThird, the authors would like to list the following evidence that there are multiple policies,\n1. In the three examples of the case of $\\gamma \\in (0, 1)$ in Fig. 5, there are multiple policies with different value functions. More examples can be found in Ch. 3.1 of the Bertsekas ADP textbook [1] (http://web.mit.edu/dimitrib/www/AbstractDP_ED3_TEXT_2021.pdf)\n1. Section 3 Counterexamples and Section 3.1 Multiple Fixed Points of [2] give counter-examples of the uniqueness solution and also examples for multiple fixed points. Also, [3] pointed out that “However, it is not then possible to assure uniqueness of the fixed point on $C(X)$. Also, in this case, a convergence of the successive approximations from an arbitrary element of $C(X)$ can fail.”\n1. For the Six MuJoCo tasks in Fig. 
2 and Table 2 and more tasks in [4][5][6], there is empirical evidence that "a trained agent randomly converges to one of the multiple policies".\n1. Besides the above robotic control tasks, the authors observed the multiple-policies issue by checking the MDP instances of several NP-hard problems, e.g., Graph MaxCut, minimum set cover, and mixed integer programming problems (MILP). Actually, the issue of multiple policies is currently a major challenge for DRL solutions, which do not always beat commercial solvers (Gurobi and SCIP).\n\nEven though there are so many great works, the authors are happy to address such a foundational issue and present to the community an add-on term to mitigate the highly unstable performance of existing DRL algorithms, which is a major criticism from practitioners. \n\n\n* [1] Bertsekas D. Abstract dynamic programming[M]. Athena Scientific, 2022.\n* [2] Kamihigashi, T. (2012). Existence and uniqueness of a fixed point for the Bellman operator in deterministic dynamic programming (No. DP2012-05).\n* [3] Rincón‐Zapatero, Juan Pablo, and Carlos Rodríguez‐Palmero. "Existence and uniqueness of solutions to the Bellman equation in the unbounded case." Econometrica 71.5 (2003): 1519-1555.\n* [4] Duan, Yan, et al. "Benchmarking deep reinforcement learning for continuous control." International Conference on Machine Learning. PMLR, 2016.\n* [5] Eysenbach, Benjamin, et al. "Diversity is all you need: Learning skills without a reward function." International Conference on Learning Representations. 2018.\n* [6] Recht, Benjamin. "A tour of reinforcement learning: The view from continuous control." Annual Review of Control, Robotics, and Autonomous Systems 2 (2019): 253-279.", " > > > > Thanks very much for your clarification on your question "our motivation and empirical results" and "intuitive motivation". It is indeed a good question and a good opportunity for the authors to emphasize the foundational contributions. Before replying, the authors thoroughly rechecked Sutton's RL book and surveyed existing DRL algorithms.\n> > > > \n> > > > The authors would like to state some background on current deep reinforcement learning algorithms. First, the Optimality Bellman Equation (with a $\max_a$ operation) is an optimality condition, which is a necessary condition, namely, any optimal policy of MDP and Dynamic Programming problems should satisfy the Optimality Bellman Equation. Note that it is not about algorithm design; however, several (deep) RL algorithms use it, such as Q-learning and DQN. Please do not use the Q-learning and DQN algorithms as examples of how the Optimality Bellman Equation should be used.\n> > > > \n> > > > Here, the authors would like to point out that the optimality condition does not discuss how an algorithm should be designed or implemented, but is a mathematical principle that any optimal policy should satisfy, as long as the target problem space possesses the MDP structure. Even if the $\max_a$ operation is NOT used in an RL algorithm, an optimal policy should satisfy the Optimality Bellman Equation. However, such an optimal policy under the Optimality Bellman Equation is not unique. In practice, an algorithm will randomly converge to one of many policies. 
This foundational issue of Optimality Bellman Equation is our strong motivation to consider an alternative, Hamiltonian equation, which is universally used in modern physics.\n> > > > \n> > > > Second, the currently most widely used Actor-Critic algorithms (both DDPG and PPO) use Bellman equation (not the optimal one) for training the critic network (for value estimation), and the critic network converges when the Optimality Bellman Equation is satisfied. That is to say, since the Optimality Bellman Equation has multiple fixed points, the obtained critic network will randomly converge to one of many fixed points, thus the trained Actor-Critic agent also randomly converges to one of many fixed points.\n> > > > \n> > > > Third, Fig. 2 and Table. 2 (vanilla PPO) have given empirical verification about the observation that an DRL algorithm will randomly converge to one of many policies. Such an observation is widely recognized by the DRL community, there are YouTube videos (for example, an upside down policy of the HalfCheetah task: https://www.youtube.com/watch?v=qU8Nd9lyxlw). The authors believe that Fig. 1 (and descriptions in Introduction), Fig. 2, and Table. 2 together strongly motivate our work. Even though there are so many great works, the authors are happy to address such a foundational issue and present to the community an add-on term to mitigate the highly unstable performance of existing DRL algorithms, which is a major criticism from practitioners.\n> > > > Fourth, differentiating “the training/learning process” and “the obtained optimal policy” is the key to understanding the novel analogy between MDP and K-spin Ising model. “Spin in quantum physics is either 1 or -1 when being measured”, similarly, an optimal policy assigns either 1 or 0 to each state-action pair, which is obtained when optimality is achieved. Please note that an optimal policy is deterministic, i.e., either 1 or 0 for each state-action pair. The authors believe that this fact may be the cause of the reviewer’s confusion. In other words, during the training process, a non-optimal policy is just like in a quantum superposition state; when the training process ends, an optimal policy is “measured” (when the algorithm is converged). Since both the initialization and the training process are random, it is natural to treat the training process as a quantum superposition state; and an optimal policy after convergence (and the Bellman’s optimality equation is satisfied) is just like being “measured” and becomes deterministic.\n> > > > \n> > > > Furthermore, both the Ising model and lowest-energy state are fundamental in physics. The authors are quite impressed by the fact that the Hamiltonian equation (simple and easy-to-implement) can be used as an add-on term to most actor-critic DRL algorithms (note that we tested over 5 algorithms, i.e., DDPG, PPO, SAC, TD3), and such an add-on term effectively addresses practitioners’ major criticism “unstable”. We are confident that this work will be highly recognized by both NeurIPS community members and industrial practitioners. ", " > Second, PPO is an on-policy PG method that only solves the Bellman equation for the current policy. It does not solve the Bellman optimality equation.\n\nThis is a great question and Reviewer Zi6K raises a very similar one. 
Therefore, the authors would like to refer to the relevant discussions.\n\n> The main motivation is based on the fact that Bellman's optimality equation, which is the basis of Q-learning-like algorithms such as DDPG, has multiple fixed points. But the algorithm also works well on PPO, which does not even use the Bellman equation to learn the value function. Can the authors provide any explanation about why PPO+H works so well?\n> \n> > The authors believe the reviewer made a factually incorrect comment that "PPO does not use the Bellman equation to learn the value function". PPO is a policy gradient algorithm with advantage function estimation.\n> > \n> > In the following reference [1], it is theoretically clear how a critic is plugged into the policy gradient theorem in equations (8) and (9). Thus, all actor-critic DRL algorithms use the Bellman equation to learn the value function.\n> > \n> > As mentioned in lines 216~218, the authors use GAE [28] for advantage estimation, where a value function is approximated as a baseline and optimized via the Bellman equation. The authors follow several benchmark implementations, as in Stable Baselines3, RLlib, Tianshou, etc., which update the value function by minimizing the TD residual. Therefore, it is reasonable that the H-term also works with PPO.\n> > \n> > [1] Wen, Junfeng, et al. "Characterizing the gap between actor-critic and policy gradient." International Conference on Machine Learning. PMLR, 2021.\n> > \n> > > I appreciate the authors' reply to my questions.\n> > > \n> > > First, for PPO, what I meant was that PPO, which is an on-policy algorithm, does not use an Optimality Bellman Equation, i.e., there is no max_a operation in value learning. However, the multiple fixed points problem only occurs when the max_a operation is used, as in Q-learning. This makes PPO fail to be empirical evidence supporting the authors' foundational motivation (while I agree it is good to see that PPO+H works well). I also checked the other reviewers' comments, which also raise questions about the correlation between the motivation (the BE has multiple fixed points) and the empirical results.\n> > > \n> > > Second, for the analogy between policy and spin angular momentum: I am still confused about the analogy. Spin in quantum physics is either 1 or -1 when being measured. The authors said that a non-optimal policy is the "orientation"; what I understand here is that the non-optimal policy is the spin vector's z-axis component on the Bloch sphere. Is this what the authors meant? If so, I do not see an intuition here for considering the optimal policy as the "being-measured policy". Could the authors explain in more detail? I have a certain background in physics, but I find it hard to fully grasp the intuition behind the analogy.\n> > > \n> > > Since most readers of the NeurIPS conference come from computer science and computational neuroscience, and given that this work is not quantum RL but physics-inspired conventional deep RL, I think some effort needs to be made to help the audience understand at least the intuitive motivation.\n> > > ", " > First, in (2) the cumulative reward is the expectation of $Q^{\pi_\theta}$ with respect to the initial state distribution, which I believe is exactly the same as (7) except for the negative sign.\n\nThe authors realize that the equal sign in (7) may lead to some confusion and would like to clarify it as follows. 
\n\nPlease note that both the optimization objectives (2) and (7) of reinforcement learning are probabilistic functions, and the "Monte Carlo" method (Chapter 5 in [1]; [2]) is broadly used for gradient estimation when the objective involves a significant random component.\n\n* [1] Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press.\n* [2] Mohamed, S., Rosca, M., Figurnov, M., & Mnih, A. (2020). Monte Carlo Gradient Estimation in Machine Learning. J. Mach. Learn. Res., 21(132), 1-62.\n\nTherefore, the authors show that (2) and (7) are NOT exactly the same through the lens of Monte Carlo gradient estimators. This explanation is also available in Section 3.2 (the newly updated version) and will be added to the Appendix in a future version.\n\nTo recap, inspired by [17], the authors formally reformulate (2) into a $K$-spin Hamiltonian equation\n\n$H(\theta) \triangleq -E_{S_0,A_0} [Q^{\pi_\theta}(S_0,A_0)] = -\lim_{K \rightarrow \infty}\sum_{k = 0}^{K-1} \sum_{\mu_0}^{\mathcal{S} \times \mathcal{A}} \cdots \sum_{\mu_k}^{\mathcal{S} \times \mathcal{A}} L_{\mu_0, ..., \mu_k}\, \pi_{\theta}(\mu_0)\cdots\pi_{\theta}(\mu_k) = -\lim_{K \rightarrow \infty} E_{\mu_0, \mu_1, ..., \mu_K} \Big[\sum_{k = 0}^{K-1} L_{\mu_0, ..., \mu_k}\Big],$\n\nwhere the expectation is taken over $S_0\sim d_0(\cdot),A_0\sim\pi_\theta(S_0,\cdot)$, and the density function $L_{\mu_0, ..., \mu_k}$ is given in (6).\n\n**Monte Carlo Estimator** [2]: Consider a general probabilistic objective $\mathcal{F}$ of the form\n\n$\mathcal{F} \triangleq \mathbb{E}_{p(x;\theta)}[f(x;\phi)],$\n\nin which a function $f$ of an input variable $x$ with *structural parameters* $\phi$ is evaluated on average with respect to an input distribution $p(x; \theta)$ with *distributional parameters* $\theta$.\n\nA Monte Carlo method evaluates the function by first drawing independent samples $\hat{x}^{(1)},..., \hat{x}^{(N)}$ from the distribution $p(x; \theta)$, and then computing the average\n\n$\widehat{\mathcal{F}}_N = \frac{1}{N} \sum_{i=1}^{N} f(\hat{x}^{(i)}), \quad \text{where } \hat{x}^{(i)} \sim p(x; \theta) \text{ for } i=1,...,N.$\n\nThe Monte Carlo estimator for the conventional objective (2) is\n\n$\widehat{J}(\theta) = \frac{1}{N}\sum_{i=1}^{N} R(\tau^{(i)}), \quad \text{where } \tau^{(i)} \sim P(\tau^{(i)} | \pi_{\theta}) \text{ for } i=1,...,N, \quad \text{and} \quad P(\tau^{(i)} | \pi_{\theta}) = d_0(s_0^{(i)}) \cdot \prod_{k = 0}^{T} \mathbb{P}(s_{k + 1}^{(i)}| s_k^{(i)}, a_k^{(i)})\, \pi_{\theta}(a_k^{(i)}|s_k^{(i)}).$\n\nThe Monte Carlo estimator for the Hamiltonian reformulation is\n\n$\widehat{H}(\theta) = \frac{1}{N'}\sum_{i=1}^{N'} \sum_{k = 0}^{K-1} L_{\mu_0^{(i)}, ..., \mu_k^{(i)}}, \quad \text{where } L_{\mu_0^{(i)}, ..., \mu_k^{(i)}} = \gamma^k\cdot R(\mu_k^{(i)})\cdot d_0(s_0^{(i)}) \cdot \prod_{\ell = 0}^{k - 1} \mathbb{P}(s_{\ell + 1}^{(i)}|\mu_{\ell}^{(i)}), \quad \text{for } i=1,...,N'.$\n\n**Remark:** The above two Monte Carlo estimators are quite different in the simulation process. 
The conventional Monte Carlo estimator samples a random trajectory by following an environment's stochastic transition and a policy. In contrast, our novel Hamiltonian Monte Carlo estimator measures a random path's discounted reward (the \"energy\") without following any policy, and the Hamiltonian equation combinatorially enumerates all possible paths of length $K$ over the state-action space. In other words, the simulation process of the Hamiltonian term does not rely on any policy. Therefore, the Hamiltonian term is a suitable regularizer for both on-policy and off-policy algorithms.\n\nThis fundamental difference is due to the Ising model in (5), which combinatorially enumerates all paths and separates the environment and the policy.", " Thanks for agreeing to raise the score, and very much appreciate your open attitude toward discussions.\n\nThe authors agree that the above three updates (related work of variance inhibiting in DRL, the difference from [23], and clarifying the confusion) are necessary. The authors can promise those updates appearing in the next version.\n", " Thanks for the response. I still find some points unclear to me. \n\nFirst, in (2) the cumulative reward is the expectation of $Q^{\\pi_\\theta}$ with respect to the initial state distribution, which I believe is exactly the same as (7) except for the negative sign. \n\n\nSecond, PPO is an on-policy PG method that only solves the Bellman equation for the current policy. It does not solve the Bellman optimality equation.\n\n\nFinally, the argument that the Hamiltonian helps reduce the variance is still not convincing to me, given that there is no difference between the Hamiltonian raised in (7) and cumulative rewards. It would be more convincing if the authors could provide a more rigorous mathematical justification.", " Upon reading the authors' response to all the reviews including mine, it seems to me that the paper suffers from a major confusion. The Bellman operators operate on the space of bounded real-valued functions over $\\mathcal{S}$ or $\\mathcal{S} \\times \\mathcal{A}$, **not the space of policies**. Therefore, the fixed point of the Bellman equations (or the Bellman operators) are not policies, but value functions. Said value functions are guaranteed to be unique whenever $\\gamma \\in [0, 1)$. However, multiple distinct policies may have the same optimal value function, e.g., when $Q^*(\\mathbf{s}, \\mathbf{a}_1) = Q^*(\\mathbf{s}, \\mathbf{a}_2)$, we may have $\\pi_1(\\mathbf{s}) = \\mathbf{a}_1$ and $\\pi_2(\\mathbf{s}) = \\mathbf{a}_2$, yet $V^{\\pi_1}(\\mathbf{s}) = V^{\\pi_2}(\\mathbf{s}) = V^*(\\mathbf{s})$. Seeing as the paper confuses the uniqueness of the value function (which is the fixed point of the Bellman operator) with the uniqueness of the policy, and this confusion is highlighted *10 times* throughout the paper, I think that the paper needs a major revision, so I am keeping my score.\n\nI also *disagree* with the authors' argument that the paper is of an empirical nature and therefore a proof of how the variance issue is addressed by Hamiltonian regularization is not needed. I think the paper makes big claims about (i) the high variance issue in RL arising from the non-uniqueness of the policy corresponding to the Bellman fixed-points, (ii) addressing the Bellman issue and consequently the variance issue. However, the empirical evidence provided is rather indirect and not sufficient to convince a wide audience of which I am a member. 
", " I appreciate the author's response to my questions. I like the idea that \"during the training process, a non-optimal policy is just like in a quantum superposition state\".The motivation appears much more clear given the authors' explanations. I am happy to raise my score if the authos promise to complement the manuscript with the following updates:\n\n1. Discussion about related work of variance inhibiting in DRL.\n2. Discussion about how this work differs from the paper \"K-spin Hamiltonian for quantum-resolvable markov decision processes\".\n3. Addressing the potentially confusing points appeared in the reviews from me and others.", " Thanks very much for your clarification on your question “our motivation and empirical results” and “intuitive motivation”. It is indeed a good question and a good opportunity for the authors to emphasize the foundational contributions. Before replying, the authors made a thorough rechecking of Sutton’s RL book and survey of existing DRL algorithms.\n\nThe authors would like to state several backgrounds of current deep reinforcement learning algorithms.\nFirst, there is a fact that the Optimality Bellman Equation (with a $\\max_a$ operation) is an optimality condition, which originally is a sufficient condition, namely, any optimal policy of MDP and Dynamic Programming problems should satisfy the Optimality Bellman Equation. Note that it is not about algorithm design, however, several (deep) RL algorithms used it, like Q-learning, DQN, etc. Please do not use Q-learning and DQN algorithms as an example of how the Optimality Bellman Equation should be used.\n\nHere, the authors would like to point out that the optimality condition does not discuss how an algorithm should be designed or implemented, but a mathematical principle that any optimal policy should satisfy, as long as the target problem space possesses the MDP structure. Even if the $\\max_a$operation is NOT used in an RL algorithm, an optimal policy should satisfy the Optimality Bellman Equation. However, such an optimal policy under the Optimality Bellman Equation is not unique. In practice, an algorithm will randomly converge to one of many policies. This foundational issue of Optimality Bellman Equation is our strong motivation to consider an alternative, Hamiltonian equation, which is universally used in modern physics.\n\nSecond, the currently most widely used Actor-Critic algorithms (both DDPG and PPO) use Bellman equation (not the optimal one) for training the critic network (for value estimation), and the critic network converges when the Optimality Bellman Equation is satisfied. That is to say, since the Optimality Bellman Equation has multiple fixed points, the obtained critic network will randomly converge to one of many fixed points, thus the trained Actor-Critic agent also randomly converges to one of many fixed points.\n\nThird, Fig. 2 and Table. 2 (vanilla PPO) have given empirical verification about the observation that an DRL algorithm will randomly converge to one of many policies. Such an observation is widely recognized by the DRL community, there are YouTube videos (for example, an upside down policy of the HalfCheetah task: https://www.youtube.com/watch?v=qU8Nd9lyxlw). The authors believe that Fig. 1 (and descriptions in Introduction), Fig. 2, and Table. 2 together strongly motivate our work. 
Even though there are so many great works, the authors are happy to address such a foundational issue and present to the community an add-on term to mitigate the highly unstable performance of existing DRL algorithms, which is a major criticism from practitioners.\nFourth, differentiating "the training/learning process" and "the obtained optimal policy" is the key to understanding the novel analogy between an MDP and the K-spin Ising model. "Spin in quantum physics is either 1 or -1 when being measured"; similarly, an optimal policy assigns either 1 or 0 to each state-action pair, which is obtained when optimality is achieved. Please note that an optimal policy is deterministic, i.e., either 1 or 0 for each state-action pair. The authors believe that this fact may be the cause of the reviewer's confusion. In other words, during the training process, a non-optimal policy is just like being in a quantum superposition state; when the training process ends, an optimal policy is "measured" (when the algorithm has converged). Since both the initialization and the training process are random, it is natural to treat the training process as a quantum superposition state; and an optimal policy after convergence (when Bellman's optimality equation is satisfied) is just like being "measured" and becomes deterministic.\n\nFurthermore, both the Ising model and the lowest-energy state are fundamental in physics. The authors are quite impressed by the fact that the Hamiltonian equation (simple and easy to implement) can be used as an add-on term to most actor-critic DRL algorithms (note that we tested over 5 algorithms, including DDPG, PPO, SAC, TD3), and such an add-on term effectively addresses practitioners' major criticism of instability. We are confident that this work will be highly recognized by both NeurIPS community members and industrial practitioners. ", " I appreciate the authors' reply to my questions.\n\nFirst, for PPO, what I meant was that PPO, which is an on-policy algorithm, does not use an Optimality Bellman Equation, i.e., there is no max_a operation in value learning. However, the multiple fixed points problem only occurs when the max_a operation is used, as in Q-learning. This makes PPO fail to be empirical evidence supporting the authors' foundational motivation (while I agree it is good to see that PPO+H works well). I also checked the other reviewers' comments, which also raise questions about the correlation between the motivation (the BE has multiple fixed points) and the empirical results. \n\nSecond, for the analogy between policy and spin angular momentum: I am still confused about the analogy. Spin in quantum physics is either 1 or -1 when being measured. The authors said that a non-optimal policy is the "orientation"; what I understand here is that the non-optimal policy is the spin vector's z-axis component on the Bloch sphere. Is this what the authors meant? If so, I do not see an intuition here for considering the optimal policy as the "being-measured policy". Could the authors explain in more detail? 
I have a certain background in physics, but I find it hard to fully grasp the intuition behind the analogy.\n\nSince most readers of the NeurIPS conference come from computer science and computational neuroscience, and given that this work is not quantum RL but physics-inspired conventional deep RL, I think some effort needs to be made to help the audience understand at least the intuitive motivation.\n", " > How does utilizing the Hamiltonian regularizer resolve the raised challenge that Bellman's optimality equation has multiple fixed points? Conjecturing that minimizing energy improves the stability is not sufficiently convincing, as the formulation of the Hamiltonian in equation (7) seems to be the same as the cumulative reward function. That said, minimizing the energy appears to be the same as maximizing the cumulative reward, which is also the objective of various existing policy gradient approaches. More importantly, the adopted AC-style approach is a policy gradient method, so it does not attempt to solve Bellman's optimality equation directly. In contrast, it improves the cumulative reward function by updating the policy directly.\n\nThe authors have to defend against this comment for several reasons.\n\n* First, as discussed in the response above, minimizing the energy by the Hamiltonian equation is NOT exactly maximizing the cumulative reward (the objective function of RL). Maybe the word "reformulate" caused some misunderstanding. However, they are quite similar, especially since the proposed H-term is an add-on term to regularize the policy network. \n* Second, taking an optimization perspective, multiple fixed points mean multiple critical points (including saddle points and local minima), and Bellman's optimality equation having this issue means that each run with a different random initialization will randomly converge to one of many critical points, which the authors believe is a fundamental cause of the high variance (the current highly unstable DRL algorithms). Exploiting a term that measures the "energy" of the policy will provide a guide for the training process, which helps converge to critical points with lower energy. Then, the problem becomes whether those critical points have a similar energy, or whether the Hamiltonian equation is a good metric. Since the Hamiltonian equation is universal across many physical systems (robotic control, movements in gaming, etc.), the authors are confident. \n* Some supporting evidence of its ubiquity:\n 1. We found this phenomenon (randomly converging to one of many critical points, as shown in Fig. 2) in combinatorial search problems, e.g., graph max-cut, mixed integer linear programming, the traveling salesman problem, and minimum independent cover;\n 2. We even found it in resource allocation of 5G/6G wireless communication systems, e.g., power allocation and beamformer design of MIMO base stations.\n* Third, the adopted AC-style approach is a policy gradient method, which involves an estimate of the Q-value (via Bellman's optimality equation). For RL, the Q-value estimation is a dual problem, while the primary problem is policy optimization (say, via policy gradient). One claim of this work is that since Bellman's optimality equation has the inherent issue of high variance (randomly converging to one of many critical points), we propose an add-on term to regularize the policy network. 
As shown by the experimental result in Section 5.2, we verify that prioritized experience replay (PER) on the policy network, achieved by the H-term, is better than PER on the critic network.\n* In summary, the dual problem of Q-value estimation via Bellman's optimality equation is itself problematic; thus, we hope this add-on H-term, applied directly to the policy network, can address a fundamental issue of DRL algorithms, namely how to reduce the variance of policies across different random seeds.", " Thank you for your insightful feedback. We would like to address your concerns and answer your questions in the following.\n\n> Is the regularizer term in (7) equivalent to (2), namely, the corresponding cumulative reward function? Why does adding the cumulative reward function as a regularizer improve the stability of the actor-critic? In fact, by rewriting the log of products of policies into the sum of log policies and reorganizing the sum, I find the gradient equivalent to REINFORCE (with a truncation of the trajectory to the $k$-th step). Can I understand the gradient of the Hamiltonian as a variant of policy gradient estimation?\n\nThe regularizer term in (7) is NOT exactly equivalent to (2) (details can be found in Appx. C, Equation (18)). \n\n* First, (2) is the cumulative reward function (the expectation is taken over trajectories); Equation (7) is derived from (1), which is the expectation of discounted rewards along a trajectory. We should have clearly specified $R(\tau)$ after (2).\n* Second, Fig. 3 compares REINFORCE's policy gradient in (12) and the proposed Hamiltonian gradient in (11). Yes, the Hamiltonian gradient is a variant of policy gradient estimation. \n* Third, as shown in Fig. 3, we used a truncation of K steps in (7), and the discounted reward $L(\cdot)$ in (6) is calculated through Monte Carlo simulation.\n* An interesting point to make is that under the Ising model analogy of MDP/RL, the Hamiltonian equation has a clear physical meaning, namely, it measures the "energy" of a policy. For this physically-inspired H-term, we derived a variant of the policy gradient estimation, used as a regularization term (Alg. 1, Line 15). Such an add-on term turns out to be rather simple to implement and delivers substantial performance improvements. ", " Thank you for your insightful feedback. We would like to address your concerns and answer your questions in the following.\n\n> Rather than an analogy between optimal policy and quantum field, should it be just policy with the quantum field?\n\nBoth the optimal policy $\pi^* \in$ {-1, 1} and the policy $\pi \in [0, 1]$ could be naturally mapped to a quantum field (a spin configuration). We agree that there was notation reuse and we did not make it explicit. In physics, a spin orients at an angle $\in [0, 2\pi)$ and takes a continuous value $\in [-1, 1]$, while the optimal spin configuration takes discrete values in {-1, 1}. Therefore, the optimal policy $\pi^*$ corresponds to the optimal spin configuration, and the policy $\pi$ corresponds to the case where spins take continuous values. The authors add a new row to Table 1 to help distinguish the mappings for the optimal policy $\pi^*$ and the policy $\pi$.\n\n\n> How much more computational cost is needed for the additional H term as compared to the baseline methods?\n\nThere is relatively little computational cost when the Hamiltonian gradient is truncated with a small K, say K=24 in Table 2. 
The authors provided a complexity analysis in Section 4.2, lines 201~205, in which the additional cost only involves a Hamiltonian gradient computation.\n\nRegarding the reviewer's concern about the computational cost: as mentioned in lines 263 and 295~296, the authors foresee potentially high computational costs for future works if the Bellman equations in RL were replaced by the Hamiltonian equation. Note that the accuracy of the H-term approximation is directly related to the K-truncation. Therefore, future works may require a very accurate estimate and need a larger K, which may incur high computational costs. \n\n\n> If the main point is to make an analogy between physical systems and MDPs, why is the quantum K-spin Ising model specifically chosen?\n\nThere are two main motivations behind it. The K-spin Ising model matches the sequential decision-making process, and the Hamiltonian equation measures the energy of an Ising model (here, our policy network).\n\nOn the other hand, the Ising model is a universal model, covering, e.g., NP-hard problems [1] and iterative optimization algorithms [2].\n\n* [1] Lucas, Andrew. "Ising formulations of many NP problems." Frontiers in physics (2014): 5.\n* [2] Li, Ke, and Jitendra Malik. "Learning to Optimize." ICLR, 2017.\n\n\n> In the Broader Impact Statement the authors state that they 'bring together the strengths of both approaches and yield new insights in both fields'. However, I'm not sure what this can bring to the quantum community?\n\nThere are two aspects in which our work will bring insights to the quantum community. \n\nFirst of all, our work is trying to bring the success of DRL algorithms to the quantum RL field, which is an active research area in the quantum machine learning community. \n\nOn the other hand, RL has been an alternative promising approach for solving quantum physics problems, such as CQ, QC, and QQ problems, depending on whether the agent (first symbol) or the environment (second symbol) is classical (C) or quantum (Q). One recent breakthrough is using RL to control nuclear fusion [1]. \n\nMoreover, the ML community is also very interested in borrowing quantum mechanisms, a major reason being that they may deliver quadratic improvements in learning efficiency and exponential improvements in performance over limited time periods [2, 3].\n\n* [1] Degrave, Jonas, et al. "Magnetic control of tokamak plasmas through deep reinforcement learning." Nature 602.7897 (2022): 414-419.\n* [2] Biamonte, Jacob, et al. "Quantum machine learning." Nature 549.7671 (2017): 195-202.\n* [3] Dunjko, Vedran, Jacob M. Taylor, and Hans J. Briegel. "Quantum-enhanced machine learning." Physical review letters 117.13 (2016): 130501.", " > I found the paper to be rather difficult to read. The paper could use copy editing.\n\nThe authors will update the manuscript to improve its readability.\n\n> The paper claims to fix the multiple fixed-point issue with the Hamiltonian regularization scheme, but only shows the effect of its usage for a few pedagogical examples. But I would expect to see a theorem attached to this claim with a mathematical proof of how the issue is fixed under the proposed framework.\n\nIn addition to showing the effectiveness of the H-term on three examples in both the undiscounted and discounted cases, the authors provide experimental results (with visualization results in the supplementary materials) on six MuJoCo tasks. 
Due to the high-dimensional continuous state and action space, these tasks are widely recognized as challenging tasks in robotic control [1].\n\nThe current work is not theoretical. It provides a physically-inspired algorithm design that is easy to implement, delivering significant improvements in performance. The target issue of instability of DRL algorithms is practically important for RL’s adoption in real-world tasks, say robotic control.\n \n\n> In L73, the discount factor is defined as $\gamma \in (0, 1]$. In L127, $\gamma \in (0, 1)$. Why the difference?\n\nHere we are discussing a practical case of discounted cumulative rewards. $\gamma < 1$ is required to guarantee a small approximation error.\n\n\n> L114: there is no summation in (6), so why is this called a cumulative reward?\n\nThanks for the careful reading, and the typo is fixed in the revised version.\n\n> In (7), what does a summation from $\mu_k$ to $\mathcal{S} \times \mathcal{A}$ mean?\n\nThe summation comes from the standard Hamiltonian equation as defined in (5).\n\n\n> L265: if memory budget permits replay buffer size 800 for K=24, for a fair comparison it would make sense to set the buffer size to 800 for K=8 and K=16 too.\n\nThanks for the suggestion, and the authors will provide an experiment using the buffer size 800 for all K values in the Appendix G.2 of the revised version.\n\n> $R(\tau)$ should be defined after (2).\n\nYes, it should be given right after (2). It is the cumulative reward along a trajectory $\tau$.", " Thank you for your feedback. We would like to address your concerns and your questions in the following.\n\n> In both theory and practice, practitioners typically set $\gamma < 1$ in infinite horizon settings, in which case the Bellman optimality operator has a unique fixed point due to the monotonicity and contraction properties, and the Banach fixed-point theorem. That being said, there exist some exceptional cases such as those discussed in Ch. 3 of the Bertsekas ADP textbook, where discounting with $\gamma < 1$ may fail to find the optimum policy and additional restrictions on the space of value functions is necessary (Bertsekas, 2019). However, I think saying that the Bellman optimality operator has multiple fixed points (in bold and italics, multiple times) without making it very clear early on that the setting involves $\gamma \in [0, 1]$ rather than $\gamma \in [0, 1)$ is misleading. \n\nThe reviewer’s objection seems to rely heavily on his/her misreading that our results ONLY hold for the undiscounted case $\gamma=1$. The authors believe there are several factual errors regarding “motivation”, “soundness” and “practical usefulness”, resulting in highly biased comments on this paper.\n\nFirst, the authors discussed the case of $\gamma=1$ and the case of $\gamma \in (0,1)$ separately. \n* In the Introduction (from line 21 to line 41), for easy understanding, the authors described the Bellman equation’s issue of multiple fixed points with three motivating examples of $\gamma=1$.\n* The authors deferred the more complex case of $\gamma \in (0,1)$ and pointed out that “more examples are given in Fig. 5 and Appx. A”. For some unknown reason, the reviewer ignored the continued discussion. 
In the revised version, the authors change the wording from “more examples” to “examples with $\\gamma < 1$”.\n\n> Furthermore, there are lots of other valid sources of variance in deep RL including but not limited to optimization of non-convex/non-stationary objectives, stochastic gradients, reward sparsity, initial conditions, complexity of learner function class, etc. that the wording in this paper neglects. I don't suppose existence of multiple fixed points is a primary concern since practitioners use $\\gamma < 1$ in infinite-horizon (and long-horizon) settings. So, I fail to see the motivation behind the proposed regularization approach and remain deeply skeptical of the soundness of this paper.\n\nSecond, the existence of multiple policies is also common in practical tasks with $\\gamma < 1$, as mentioned in recent studies (benchmarks) [1, 2, 3], which contradicts the reviewer’s comment “I don't suppose existence of multiple fixed points is a primary concern since practitioners use $\\gamma < 1$ in infinite-horizon (and long-horizon) settings”. The observational experiments on MuJoCo tasks in Section 2.2 further verify the issue, and the experiments in Section 5 demonstrate the practical usefulness of the H term, where MuJoCo tasks are standard benchmarking tasks for continuous control. \n\nThird, the authors believe that this paper targets the issue of multiple fixed points, while other sources like “optimization of non-convex/non-stationary objectives, stochastic gradients, reward sparsity, initial conditions, complexity of learner function class” are out of the scope. For completeness, we summarize the sources in the Introduction of the revised version. \n* [1] Duan, Yan, et al. \"Benchmarking deep reinforcement learning for continuous control.\" International Conference on Machine Learning. PMLR, 2016.\n* [2] Eysenbach, Benjamin, et al. \"Diversity is all you need: Learning skills without a reward function.\" International Conference on Learning Representations. 2018.\n* [3] Recht, Benjamin. \"A tour of reinforcement learning: The view from continuous control.\" Annual Review of Control, Robotics, and Autonomous Systems 2 (2019): 253-279.", " > There is a lack of discussion about related work in deep RL to reduce variance, e.g., https://openreview.net/pdf?id=9xhgmsNVHu\n\nThe authors acknowledge missing this closely related work, as that paper was presented two weeks before the NeurIPS’ submission deadline. After a careful review, the authors will add more relevant works, including the above one suggested by the reviewer:\n* [2] Bjorck, Johan, Carla P. Gomes, and Kilian Q. Weinberger. \"Is High Variance Unavoidable in RL? A Case Study in Continuous Control.\" International Conference on Learning Representations. 2021.\n* [3] Islam, Riashat, et al. \"Reproducibility of Benchmarked Deep Reinforcement Learning Tasks for Continuous Control.\" RML workshop, ICML, 2017.\n* [4] Nikishin, Evgenii, et al. \"Improving stability in deep reinforcement learning with weight averaging.\" Uncertainty in artificial intelligence workshop on uncertainty in Deep learning. 2018.\n\n> My personal opinion is that the analogy is confusing to people without quantum physics background. And in the end, the algorithm used the classic policy gradient algorithm to estimate the Nabla of Hamiltonian. I feel that Algorithm 1 itself is indeed intuitive (heuristic) without introducing quantum physics. 
My suggestion is to defer some details of the analogy to the appendix (then the authors can have more space to clearly explain it), and complement the main texts with contents such as related work and ablation studies.\n\nThe authors foresee the reading difficulties for scholars without a quantum physics background, thus providing Table 1 and lines 111~119 for a detailed presentation. The authors like to point out that the H term is a physically inspired algorithm and therefore believe it is necessary to provide the novel analogy in the main body. Given the universality of the quantum K-spin Ising model, it is reasonable that the derived formula is intuitive. \n\nThe authors have to say that ablation studies are naturally included. 1). H-term is an add-on term, so we compare algorithms with and without it; 2). For DDPG, we also compare with DDPG+PER for fairness; and 3). For the key parameter K (steps), we show the results for K=8, K=16, and K=24 in Table 2.\n\n> Line 47: I suggest to add \"often\" or \"sometimes\" because there are cases we want some diversity, e.g., option-critic and DIAYN.\n\n> Line 87: polices --> policies\n\n> Fig.4 is not centered.\n\nThe authors thank the reviewer for the careful reading, and typos are fixed in the revised version.", " > The main motivation is based on that the Bellman’s optimality equation, which is the base of Q-learning-like algorithms such as DDPG, has multiple fixed points. But the algorithm also work well on PPO, which even does not use Bellman equation to learn the value function. Can the authors provide any explanation about why it PPO+H works so well?\n\nThe authors believe the reviewer made a factual error comment that “PPO does not use Bellman equation to learn the value function”. PPO is a policy gradient algorithm with advantage function estimation. \n\nIn the following reference [1], it is theoretically clear how a critic is plugged into the policy gradient theorem in equations (8) and (9). Thus, all actor-ciitic DRL algorithms use the Bellman equation to learn the value function.\n\nAs mentioned in 216~218, the authors use GAE [28] for advantage estimation, where a value function is approximated as a baseline and optimized via the Bellman equation. The authors follow several benchmark implementations as in Stable Baseline3, RLlib, Tianshou, etc, which update the value function by minimizing the TD-residual. Therefore, it is reasonable that the H term also works with PPO.\n\n* [1] Wen, Junfeng, et al. \"Characterizing the gap between actor-critic and policy gradient.\" International Conference on Machine Learning. PMLR, 2021.", " Thank you for your thoughtful comments. We would like to address your concerns and your questions in the following.\n\n>Line 48: \"we make a novel analogy between an MDP and a quantum K-spin Ising model\". However, ref [21] proposed to model MDP with K-spin Hamiltonian. What is the difference?\n\nRef [21] modeled MDP with K-spin Hamiltonian (in Section III), and made an analogy between an MDP and classic field theory in Table I.\nThere are several major differences:\n1. The objective function (10) in [21] has a penalty term (for their quantum optimization approach), while in our deep reinforcement learning (DRL) approach, it is automatically satisfied by employing a softmax function. Moreover, (10) in [21] is the objective function of a quantum optimization task, while we used the Hamiltonian equation as an add-on term (a regularizer) for existing actor-critic DRL algorithms.\n2. 
Their quantum optimization approach relies on the variational optimality condition (analogy to the Bellman optimality in DRL) and is amenable to quantum simulated annealing algorithms. Here, we use the K-spin Hamiltonian equation to regularize the policy network. A new policy gradient is added in Alg. 1 (line 15).\n3. Their solution discussed the potential implementation on near-term quantum hardware. Here, our major conclusion is that the K-spin Hamiltonian can help reduce the high variance of DRL algorithms, which is caused by the Bellman equation’s issue of multiple fixed points.\n4. Actually, the analogy (in ref [21]) between an MDP and classic field theory in Table I is not physically right. One should replace the classic field by a transverse field (a quantum field). First, there are no corresponding concepts of classic field’s potential energy and kinetic energy in RL. Second, the most important “conservation law” of classic field theory does not have a counterpart in RL. In contrast, our analogy to a quantum K-spin Ising model is more accurate, since the K-spin Ising model matches the sequential decision-making process, and the Hamiltonian equation measures the energy of an Ising model (here our policy network).\n\n> The optimal policy function $\pi^*$ and a general policy $\pi$ are mixed-up. E.g., line 111, I understand the optimal policy $\pi^*(\mu_k) \in $ {0 ,1} can be mapped to spin operator, which is a common practice in quantum computation. However, how about $\pi(\mu_k)$, which is a continuous-value scalar? Also, in Table 1, the optimal policy is analogous to the spin operators, while the Hamiltonian is the functional of a non-optimal policy. I am a bit confused about this mixing-up.\n\nWe agree that there was notation reuse and we did not make it explicit on purpose. However, this is a quite standard routine in both algorithmic design and theoretical analysis. In physics, a spin orients at an angle $\in [0, 2\pi)$ and takes continuous values $\in [-1, 1]$, while the optimal spin configuration takes discrete values $\in$ {-1, 1}. Therefore, both the optimal policy $\pi^* \in$ {-1, 1} and the policy $\pi \in [0, 1]$ could be naturally mapped to a quantum field (a spin configuration), through the mapping in lines 111~114. Physicists study the simplified case with a spin $\in$ {-1, 1} since it already delivers theoretical results of phase transitions; in physical experiments, a spin takes values $\in [-1, 1]$.\n\nRevision: in Table 1, the authors add a new row to further clarify it. Note that the authors follow the routine in physics that the configuration can take either continuous values $\in [-1, 1]$ or discrete values $\in$ {-1, 1}, depending on the context.\nSome exemplar references, where the spin angle takes value $\in [0, \pi/2]$: \n* [1] Stoudenmire, Edwin, and David J. Schwab. "Supervised learning with tensor networks." Advances in Neural Information Processing Systems 29 (2016). \n* [2] Huggins, W., Patil, P., Mitchell, B., Whaley, K. B., & Stoudenmire, E. M. (2019). Towards quantum machine learning with tensor networks. Quantum Science and Technology, 4(2), 024001.", " The paper first suggests that Bellman's optimality equation has multiple fixed points. Then an analogy is made between an MDP and a quantum K-spin Ising model, and a reformulation of the expected return into a quantum K-spin Hamiltonian equation is proposed. It is argued that by regularizing the policy to have a stationary Hamiltonian, the model can 1). 
achieves a relatively high reward independent of the initialization; and 2). is robust to interference/noise in the inference stage, and thus reduces performance variance among random seeds. This idea has been practically implemented by randomly sampling consecutive trajectories from a specific replay buffer and minimizing the Hamiltonian by policy gradient. The experiments on MuJoCo robotic control tasks have shown the effectiveness of the proposed methods using both DDPG and PPO as base algorithms, in terms of slightly higher mean performance and significantly lower variance. Furthermore, the agents converged to a stationary policy at a substantially higher rate with the proposed method. [Strength]\n1. The paper touches on a relatively important problem in deep RL, namely how to reduce the variance of policies with different random seeds.\n2. The proposed method is simple and easy to implement.\n3. The experimental results are good, which show the effectiveness of the proposed methods by reducing variance by 65.2% ~ 85.6%.\n4. 3 simple yet motivated examples to show that Bellman’s optimality equation has multiple fixed points.\n\n[Weakness]\n1. Lack of discussion of related work.\n2. The analogy is hard for a reader without quantum physics background. \n3. Some of the paper's claims need to be further supported.\n4. The paper sometimes mixes up the optimal policy function $\pi^*$ and non-optimal policy $\pi$, making the analogy a bit confusing.\n\nSee below for the details of my concerns.\n\n---------------------- post-rebuttal -----------------\nThe author has resolved most of my concerns and agreed to update the manuscript to address the issues. Correspondingly, I update my score toward acceptance, mainly because the results (Table 2) are appealing with a relatively simple add-on (H term). Nonetheless, I expect the authors in the future to more comprehensively investigate the motivation using empirical and theoretical analysis to support their claims.\n\n [Major]\n- Line 48: \"we make a novel analogy between an MDP and a quantum K-spin Ising model\". However, ref [21] proposed to model MDP with K-spin Hamiltonian. What is the difference?\n- The optimal policy function $\pi^*$ and a general policy $\pi$ are mixed-up. E.g., line 111, I understand the optimal policy $\pi^*(\mu_k) \in $ {0 ,1} can be mapped to spin operator, which is a common practice in quantum computation. However, how about $\pi(\mu_k)$, which is a continuous-value scalar? Also, in Table 1, the optimal policy is analogous to the spin operators, while the Hamiltonian is the functional of a non-optimal policy. I am a bit confused about this mixing-up.\n- The main motivation is based on the premise that Bellman’s optimality equation, which is the base of Q-learning-like algorithms such as DDPG, has multiple fixed points. But the algorithm also works well on PPO, which does not even use the Bellman equation to learn the value function. Can the authors provide any explanation about why PPO+H works so well?\n- There is a lack of discussion about related work in deep RL to reduce variance, e.g., https://openreview.net/pdf?id=9xhgmsNVHu\n- My personal opinion is that the analogy is confusing to people without quantum physics background. And in the end, the algorithm used the classic policy gradient algorithm to estimate the Nabla of the Hamiltonian. I feel that Algorithm 1 itself is indeed intuitive (heuristic) without introducing quantum physics. 
My suggestion is to defer some details of the analogy to the appendix (then the authors can have more space to clearly explain it), and complement the main texts with contents such as related work and ablation studies.\n\n\n[Minor]\n- Line 47: I suggest to add \"often\" or \"sometimes\" because there are cases we want some diversity, e.g., option-critic and DIAYN.\n- Line 87: polices --> policies\n- Fig.4 is not centered.\n N/A", " The paper posits that the Bellman optimality operator has multiple fixed-points. It becomes apparent that in defining a discounted MDP, the paper allows discount factors $\\gamma = 1$ in the infinite-horizon setting contrary to conventional wisdom in RL. Arguing that the existence of multiple fixed-points is a key source of high variance in RL, the work draws inspiration from statistical mechanics to regularize actor-critic and policy gradient algorithms, and presents results over PPO and DDPG with reduced variance across seeds and improved average performance. Weaknesses:\n\n- In both theory and practice, practitioners typically set $\\gamma < 1$ in infinite horizon settings, in which case the Bellman optimality operator has a unique fixed point due to the monotonicity and contraction properties, and the Banach fixed-point theorem. That being said, there exist some exceptional cases such as those discussed in Ch. 3 of the Bertsekas ADP textbook, where discounting with $\\gamma < 1$ may fail to find the optimum policy and additional restrictions on the space of value functions is necessary (Bertsekas, 2019). However, I think saying that the Bellman optimality operator has multiple fixed points (in bold and italics, multiple times) without making it very clear early on that the setting involves $\\gamma \\in [0, 1]$ rather than $\\gamma \\in [0, 1)$ is misleading. Furthermore, there are lots of other valid sources of variance in deep RL including but not limited to optimization of non-convex/non-stationary objectives, stochastic gradients, reward sparsity, initial conditions, complexity of learner function class, etc. that the wording in this paper neglects. I don't suppose existence of multiple fixed points is a primary concern since practitioners use $\\gamma < 1$ in infinite-horizon (and long-horizon) settings. So, I fail to see the motivation behind the proposed regularization approach and remain deeply skeptical of the soundness of this paper.\n- I found the paper to be rather difficult to read. The paper could use copy editing.\n- The paper claims to fix the multiple fixed-point issue with the Hamiltonian regularization scheme, but only shows the effect of its usage for a few pedagogical examples. But I would expect to see a theorem attached to this claim with a mathematical proof of how the issue is fixed under the proposed framework.\n\nStrengths:\n- Despite the awkwardness of the motivation, story and positioning of the paper, I can see some merit in regularizing the policy in the way that this paper proposes. Indeed, temporal regularization has been shown to serve as a good variance reduction technique when applied to the value function [33]. While [8] proposes a form of temporal regularization on the policy with a control prior, their approach requires having access to some known dynamics, while this paper does not. So, I think a major revision of the paper with better motivation, presentation and discussion of related work has potential. - In L73, the discount factor is defined as $\\gamma \\in (0, 1]$. In L127, $\\gamma \\in (0, 1)$. 
Why the difference?\n- L114: there is no summation in (6), so why is this called a _cumulative_ reward?\n- In (7), what does a summation from $\mu_k$ to $\mathcal{S} \times \mathcal{A}$ mean?\n- L265: if memory budget permits replay buffer size 800 for K=24, for a fair comparison it would make sense to set the buffer size to 800 for K=8 and K=16 too.\n- $R(\tau)$ should be defined after (2). As far as I can tell, there is no discussion of limitations, except maybe the complexity trade-off due to the truncation parameter K. I'm curious as to whether there exist MDPs such that Hamiltonian regularization negatively impacts performance. ", " This work proposes to help a policy network stably find a stationary policy by making an analogy between an MDP and a quantum K-spin Ising model. To demonstrate the existence of multiple fixed points of the Bellman optimality equation, the authors used three examples from dynamic programming. The paper empirically evaluated the performance of the newly proposed method on 6 MuJoCo tasks. Strengths:\nInteresting problem and approach. The instability of DRL algorithms is definitely a major concern.\nDerivation seems to be sound.\n\nWeaknesses:\nThe experiment is quite limited (6 MuJoCo tasks)\nThe number of compared baselines is too small, and the paper did not compare the performance with any quantum RL/Hamiltonian mechanics method\nThe computational cost may prohibit its ability to scale to more complex problems. 1. Rather than an analogy between optimal policy and quantum field, should it be just policy with the quantum field? \n2. How much more computational cost is needed for the additional H term as compared to the baseline methods?\n3. If the main point is to make an analogy with physical systems and MDP, why is the quantum k-spin Ising model specifically chosen?\n4. In the Broader Impact Statement the authors state that 'bring together the strengths of both approaches and yield new insights in both fields'. However, I'm not sure what this can bring for the quantum community? 1. Experiment is limited to the 6 MuJoCo tasks. \n2. The analogy to 'lowest energy' makes me worry that this method only works for physical tasks (humanoid, hopper, etc.).\n3. Lacking theoretical analysis of the resulting new algorithm.\n4. Maybe can include a comparison of the computational cost of this method versus training ten times (depends on how many would be required to reach the same level of stability) of the original DRL agent.", " This paper aims to resolve the challenge that Bellman's optimality equation has multiple fixed points, which leads to instability in deriving its solution. To this end, the authors first observe that the evaluation of the cumulative reward function can be reformulated into a K-spin Ising model. The authors then propose to use the energy of such an Ising model as a regularizer in the actor-critic algorithm. The authors further conduct multiple experiments on the MuJoCo environment. $\textbf{Strength.}$\n\nThe quantum K-spin Ising model view of RL is interesting and novel to me. \n\n$\textbf{Weakness.}$\n\nThe challenge raised by the authors does not seem to be fully addressed by the authors. See Question 2 for the details.\n $\textbf{Question 1.}$\nIs the regularizer term in (7) equivalent to (2), namely, the corresponding cumulative reward function? Why does adding the cumulative reward function as a regularizer improve the stability of the actor-critic? 
In fact, by rewriting the log of products of policies into the sum of log policies and reorganizing the sum, I find the gradient equivalent to REINFORCE (with a truncation of the trajectory to the $k$-th step). Can I understand the gradient of the Hamiltonian as a variant of policy gradient estimation?\n\n$\textbf{Question 2.}$\nHow does utilizing the Hamiltonian regularizer resolve the raised challenge that Bellman's optimality equation has multiple fixed points? Conjecturing that minimizing energy improves the stability is not sufficiently convincing, as the formulation of the Hamiltonian in equation (7) seems to be the same as the cumulative reward function. That said, minimizing the energy appears to be the same as maximizing the cumulative reward, which is also the objective of various existing policy gradient approaches. More importantly, the adopted AC-style approach is a policy gradient method, so it does not attempt to solve Bellman's optimality equation directly. In contrast, it improves the cumulative reward function by updating the policy directly. N.A." ]
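For readers tracing Question 1 above: the equivalence the reviewer describes rests on the standard score-function identity, stated here in generic notation (the symbols are illustrative, not the paper's exact equation): $\nabla_\theta \prod_{k=1}^{K} \pi_\theta(a_k \mid s_k) = \Big(\prod_{k=1}^{K} \pi_\theta(a_k \mid s_k)\Big) \sum_{k=1}^{K} \nabla_\theta \log \pi_\theta(a_k \mid s_k)$. Hence a return-weighted product of K policy terms, once differentiated, yields a REINFORCE-like gradient truncated to K steps.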
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 3, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "nips_2022_DGwX7wSoC-", "3mtHRjay2s3", "Ib90PbzwOEx", "2G3QBX5tBZ4", "Q2BP7OYq4l", "PeIbuVsHfS", "_Y8eI4Qyivp", "Tf6OPfb20QI", "fXOT2Umw8mw", "rHkhf77PymIS", "gEuv_7cukjg", "E48uwlhVFLb", "ih8Veh64rSC", "zgiUUrgpkmS", "QMXeup90k36", "48QfxQ3EYmM", "rHkhf77PymIS", "df-3O7rWju", "BR-PBT8GD3h", "1-HzbwRF6Rq", "69fOyPd0BmG", "nips_2022_DGwX7wSoC-", "nips_2022_DGwX7wSoC-", "nips_2022_DGwX7wSoC-", "nips_2022_DGwX7wSoC-" ]
nips_2022_kHrE2vi5Rvs
Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization
Deep reinforcement learning (DRL)-based combinatorial optimization (CO) methods (i.e., DRL-NCO) have shown significant merit over the conventional CO solvers as DRL-NCO is capable of learning CO solvers less relying on problem-specific expert domain knowledge (heuristic method) and supervised labeled data (supervised learning method). This paper presents a novel training scheme, Sym-NCO, which is a regularizer-based training scheme that leverages universal symmetricities in various CO problems and solutions. Leveraging symmetricities such as rotational and reflectional invariance can greatly improve the generalization capability of DRL-NCO because it allows the learned solver to exploit the commonly shared symmetricities in the same CO problem class. Our experimental results verify that our Sym-NCO greatly improves the performance of DRL-NCO methods in four CO tasks, including the traveling salesman problem (TSP), capacitated vehicle routing problem (CVRP), prize collecting TSP (PCTSP), and orienteering problem (OP), without utilizing problem-specific expert domain knowledge. Remarkably, Sym-NCO outperformed not only the existing DRL-NCO methods but also a competitive conventional solver, the iterative local search (ILS), in PCTSP at 240$\times$ faster speed. Our source code is available at https://github.com/alstn12088/Sym-NCO.git.
Accept
All reviewers agree that the paper presents interesting results, hence I recommend acceptance. On the other hand there are several issues which need to be addressed in the final version of the paper: 1. The authors should add the experimental results listed in the responses, as these demonstrate more convincingly the significance of the results. 2. The mathematical formulation of the problem and the description of the solution is of extremely low quality (almost made me reject the paper). For example, nothing is defined in equation 1, neither the meaning nor the possible values of the different variables: What are the nodes? What values can features take? What is a solution? Going on to Section 2.1 and 2.2, it is again unclear what a solution is (not to mention a solution sequence), hence why we care about the corresponding MDP, what are the motivations in the definition of the MDP. What is a policy? What is a solution set? And so on. These must be written in a way which is understandable to a reader who is not already very familiar with the topic.
train
[ "wlnh4kdd-Ak", "W_MdHXC4Gc", "FmCL282S6Rz", "gpTtFwt34sb", "-GkwaHQvMN9", "vIGGjgve-4b", "tjfPV_fIkw", "DMDTe586xNL", "CgQtXscPaM", "kwo2irW1MD", "zYmVM69pJ11", "19v1ZJ1bPA", "MOhzeoPFroy", "D9w-HR-JryC", "018NFbibjUx", "BbpQrVofgkA", "XyXLAD3rdq6", "rDXFlM_zbcQ", "TbhJz1v8V_M", "PofddLQyC9m", "czJ_Nqv19cM", "j1rjzzhov7", "Xcz0e2MbDJT" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for addressing my concerns regarding the claims on expressive power and hard vs. soft invariant learning.\n\nI find the updated Figure 1 and accompany text more convincing. I acknowledge that the previously made claims regarding the expressive power of ENNs and the required expressive power for combinatorial optimisation tasks have now been removed.\n\nI agree that this work demonstrably improves the performance of NCO solvers by leveraging appropriate symmetries, and these empirical findings may be of interest for the broader community working on combinatorial problems beyond routing. I have updated my score with these considerations.", " First of all, I would like to sincerely thank you for your valuable and constructive comments in helping the authors write the manuscript more objectively. Your comments are not offensive at all, and we are rather grateful to you for acknowledging the merits of this study. And we wish you are now fully recovered from the illness.\n\n**[Ambiguous use of \"expressive power]**\n\nWe agree that the term \"expressive power\" can be misleading if one interprets it as a level of universal approximation (UA). In this study, we use the term \"expressive power\" to indicate the general performance of the solver constructed based on a specific architecture or learning method. So we would like to ensure that we did not intend to deny the UA property of an equivalent neural network (ENN). Therefore, respecting the reviewer's comment, we revised ‘expression power’ terms in our paper (see motivation, novelty, and figure 1).\n\n---\n\n**[Contemplation about the causes of performance differences between ENN and Sym-NCO]**\n\nWe think Sym-NCO achieved the significant performance because it can efficiently utilize effective architectures proven to be effective in RL-based CO routing fields (Kool et al., Kwon et al.). We have tried to employ EGNN-type architectures in NCO; however, the performance was unsatisfactory empirically. \n\nWe believe the unsatisfactory performance of ENN is not because of low \"expressive power\". We however believe local un-decomposability of routing problems has significantly different features compared with ENN’s target benchmark. ENN works are usually verified in several geometric deep learning benchmarks including point cloud and molecule. For example, SE3 transformer and EGNN are verified on the N-body system (point cloud style data) and QM9 (molecule sparse graph). The point cloud and molecule have strong local decomposability; local clustering such as K-nearest neighborhood (KNN) processing is extremely helpful and does not degrade performances and design constraints much. For example, a molecule graph can be clearly decomposed with molecule fragments (imagine the benzene-ring attached with other molecule components); several researches studied fragment-based molecule generation and optimization (Jin et al., 2018). \n\nOn the other hand, routing problems such as TSP are not decomposable because it has a global constraint on the Hamiltonian cycle (Ahn et al., 2020). Therefore, ENN’s technical approaches such as the KNN approximation of the SE3 transformer may not directly compete with SOTA in CO. 
\n\nHowever, we agree that these observations do not imply that the ENN structure is not capable of solving CO problems, but we think some delicate designing process is needed to increase performance and compete with SOTA, which may require a significant amount of additional research.\n\n---\n\n**[Clarified Novelty of Sym-NCO]**\n\nWe agree that the main contribution of the current paper is not on rigorously analyzing the difference between the ways to impose symmetricities: hard invariant learning (ENN) vs. soft invariant learning (Sym-NCO). Reflecting the reviewer's opinion, we exclude the argument comparing the pros and cons of the two methodologies. While mainly focusing on conveying the merits of Sym-NCO for achieving excellent performance and the simplicity of implementation, we introduce ENN as another alternative method that can reflect symmetricity and explain the difficulty of directly employing ENN for solving NCO. Although we haven't provided a mathematically rigorous analysis, we hope that the results of our Sym-NCO convey to the readers the message that we can improve the generalization performance of the learned NCO solver by exploiting the symmetricities with our proposed simple but novel approximation method. And we expect this study to lead to a discussion of different ways to reflect the symmetry inherent in many combinatorial optimization problems effectively.\n\n---\n\n**References**\n\nAhn, Sungsoo, Younggyo Seo, and Jinwoo Shin. \"Learning what to defer for maximum independent sets.\" International Conference on Machine Learning. PMLR, 2020.\n\nJin, Wengong, Regina Barzilay, and Tommi Jaakkola. \"Junction tree variational autoencoder for molecular graph generation.\" International conference on machine learning. PMLR, 2018.\n\nKool, Wouter, Herke Van Hoof, and Max Welling. \"Attention, learn to solve routing problems!.\" arXiv preprint arXiv:1803.08475 (2018).\n\nKwon, Yeong-Dae, et al. \"Pomo: Policy optimization with multiple optima for reinforcement learning.\" Advances in Neural Information Processing Systems 33 (2020): 21188-21198.\n", " My comment may sound aggressive so let me preface it by saying that I believe this is a borderline paper and it brings ideas that are of interest to the community. However, it is my duty as a reviewer to raise the following concerns to the ACs and other reviewers. It may be possible that I am mistaken.\n\n---\n\nI respectfully disagree with the above response and to the presentation of the expressive power of ENNs vs. approximately invariant models. I believe that the claims made are handwavy at best, and misleading at worst.\n\nFirstly, the authors have not precisely defined the terms 'expression power' or expressive power. Based on Fig.1, I am assuming that they are referring to the ability of a model to universally approximate any function to arbitrary accuracy. 
My understanding is based on this definition of expressive power, which is a precise and formal term.\n\nThus, the main theoretical claim by the authors is that ENNs are not expressive enough to universally approximate rotation-invariant functions over sets (or fully connected graphs).\nAnother theoretical claim is that combinatorial problems require 'more expressive power' than what is possible for ENNs.\nThe authors are making (seemingly) formal statements, but they have not pointed to any references or provided any proofs to support these claims.\n\n> Attentive structures, including AM and POMO, give powerful expression because multi-head attention for coordinates can powerfully represent (fully) edge connections between coordinates. However, EGNN (which is mainly designed for sparse graphs) uses a simple multi-layer perceptron (MLP) to represent each relative coordinate. \n\nIn my opinion, these claims are handwavy and lack justification. They are misleading for several reasons:\n1. The E(n)-equivariant GNNs paper did in fact work with fully connected graphs for all their experiments. This can be verified from the manuscript as well as the code. \n2. I do not believe there is any difference between message passing (where messages are constructed via MLPs on edges) vs. attentional (where messages are constructed via learnable scalar weights) in regards to expressive power. Adding attention to an architecture does not automatically equip it with higher expressivity. Attention may work better in practice, but I have yet to see a proof showing attentional aggregation being provably more powerful than message passing.\n3. There is some work showing that the model from the E(n)-equivariant GNNs paper is a universal approximator for group invariant/equivariant functions, e.g. [Appendix E](https://arxiv.org/pdf/2102.09844.pdf) of their paper, and [this work from Villar et al](https://arxiv.org/abs/2106.06610).\n4. So if we assume that (a) E(n)-equivariant GNNs are universal; and (b) Theorem 2.1. from this work holds, i.e. solutions have strict rotational symmetry, then in theory, E(n)-equivariant GNNs are expressive enough to learn the solution. (BTW, if Theorem 2.1. does not strictly hold, then there is a case to be made that the possible solutions lie outside the space of functions that E(n)-equivariant GNNs can learn.)\n\n> Designing an equivariant attentive structure for a fully connected graph is very challenging; we leave it for further research.\n\nI strongly disagree, I think it is trivial to replace message passing in E(n)-equivariant GNNs with an attentional aggregation. In fact, this has been done already in a popular GitHub codebase: https://github.com/lucidrains/En-transformer. The [SE(3)-Transformers](https://arxiv.org/abs/2006.10503) paper does the same, but using higher order spherical tensors instead of cartesian vectors.\n\n---\n\nTo return to my original point, I think this paper's main contribution is regarding enforcing approximate rotational invariance (the other contribution, symmetry w.r.t. the starting city, has been proposed previously in POMO). I do not think the advantage of approximate invariance over exact invariance has been justified in a rigorous manner at all. \n\nIn the revision and rebuttal, the authors have tried to justify this by making statements about the expressive power of ENNs and the expressive power needed to solve combinatorial problems. Neither of these arguments are supported by any references or formal proofs. 
In my opinion, the justifications are handwavy and potentially misleading.\n\nI will restate that I believe this is a borderline paper, even without any rigorous or math-y justification for approximate invariance outperforming exact invariance (it may be empirical). However, I would encourage the authors to revise their presentation. ", " Thank you for your response. We are always open to discuss any time at any issue. \n\nRouting-style combinatorial optimization problems, including TSP, are represented as fully connected input graphs. Attentive structures, including AM and POMO, give powerful expression because multi-head attention for coordinates can powerfully represent (fully) edge connections between coordinates. However, EGNN (which is mainly designed for sparse graphs) uses a simple multi-layer perceptron (MLP) to represent each relative coordinate. Designing an equivariant attentive structure for a fully connected graph is very challenging; we leave it for further research. ", " I would like to thank the authors for taking the time to go through my concerns and questions. In particular, their explanation of how they derived equations 5 and 6 significantly increased my confidence in the soundness of their approach and in the correctness of the paper. I now believe this paper should be accepted.", " I thank the authors for taking the time to carefully go through and address each of my concerns and questions. I particularly like the revised version's motivation explanations and the new Figure 1. I am happy to increase my recommendation score to accept.\n\nAs a final note, I suggest that the authors carefully go through their paper and correct grammatical mistakes (I think a few have been introduced in the revised text, but it would take a long time for me to list out each sentence!). It's a good paper, but perhaps e.g. Grammarly https://www.grammarly.com/ could help with some of the sentences for the camera-ready version if accepted.", " Thank you to the authors for partially addressing my concerns. I have updated my score based on the responses. In particular, I was not fully convinced regarding statements on the (lack of) expressive power of equivariant networks, e.g. “ENN scheme provably guarantees to trained in symmetric space; more expression power is needed for CO tasks”. I apologise that I am unable to engage in author discussions at this time as I have fallen ill after traveling.\n", " - Pg. 1 line 2: Introduce DRL-NCO acronym but unclear what the ‘N’ stands for (presumably ‘neural’, but should specify)\n\nN means 'neural', which we already mentioned in line 26. \n\n\n- Pg. 4 line 114: Should it not be ‘as the hidden representations of $x$ and $Q(x)$ rather than $x$ and $P(x)$?\n\nWe revised it; see our revision. \n\n- Pg. 4 line 119: There seems to be unnecessary extra brackets in the $g(\\cdot)$ term\n\nWe revised it; see our revision. \n\n\n- Pg. 6 line 197: You list PointerNet without saying which CO problem(s) you applied it to as you did for the other methods.\n\nWe revised it; see our revision. \n\n- Throughout the paper, you introduce many acronyms (e.g. S2V-DQN, AM, POMO, MDAM, etc.) without first stating what the full name of the acronyms are, which you should always give when first introducing a new acronym.\n\nWe revised it; see our revision. \n\n- It seems confusing to refer to the method of Nazari et al. 2018 as ‘RL’ since there are multiple other RL methods such as S2V-DQN.\n\nWe revised as 'RL' to 'Nazari et al.'; see our revision. 
\n\n- Citation [20] seems to be mis-formatted?\n\nWe revised it; see our revision. \n", " \n**Negative Social Impact**\n\nYour points about social impact are valuable. We will put an extra paragraph in the main text after the decision is made. \n\nDesign automation through NCO research affects various industries, including the logistics and transportation industries. From a negative perspective, this automation process can lead to unemployment in certain jobs. However, the automation of logistics, transportation, and design can increase the efficiency of industries, reducing CO2 emissions (by reducing total tour length) and creating new industries and jobs. \n\n---\n\n**Real World Usage of Rotational Invariance**\n\nRotational invariance is a training feature for a neural network regardless of its actual usage at test time. For example, a self-supervised learning scheme trains models to have invariant features between the original image of a cat and a 90-degree-rotated cat. In the real world, it is rare that a cat is rotated or strongly augmented. However, this self-supervision is beneficial for "learning" invariant features in neural networks and increases generalization capability. Sym-NCO has a similar motivation. See our revised introduction (motivation subsection) and figure 1 to understand the motivation of leveraging symmetricity. \n\n", " **Question 20: Missing analysis of Sym-NCO incurred overhead**\n\nWe measure VRAM allocation overhead using a single NVIDIA A100 GPU on TSP ($N=100$). The overhead of POMO + Sym-NCO (K=100, with variable L) is evaluated as:\n\n| L,K | Memory |\n| --- | --- |\n| 1,100 (POMO) | 7GB |\n| 2,100 | 12GB |\n| 4,100 | 23GB |\n\nWe can conclude that memory consumption is directly proportional to $L$.\n \n---\n\n**Question 21: Generality claim**\n\nThe current Sym-NCO has been verified on its target setting: Euclidean CO solvers trained with REINFORCE (easily extended to other on-policy schemes).\n\nThe $L_{inv}$ term can be extended to other learning approaches: supervised and unsupervised learning based Euclidean CO solvers.\n\nThe $L_{ss}$ term can be directly extended to other domains: graph CO solvers trained with an on-policy method. \n\nThe $L_{ps}$ and $L_{inv}$ terms can be extended to graph CO domains if a proper graph CO input-transformation rule is identified. \n\nThe overall concept of Sym-NCO can be extended to non-Euclidean graph-based methods when the problem symmetricity of graph input data is identified. If a suitable graph transformation rule is found, Sym-NCO can be directly applied to the graph CO domain.\n\n**Question 22: Euclidean vs. non-Euclidean problem clarification**\n\nWe agree the non-Euclidean problem is the ultimate goal for the neural CO domain, and it is relatively unexplored compared with Euclidean COPs. We acknowledge and follow up on non-Euclidean NCOs. Non-Euclidean NCO is important because there are several important CO applications that cannot be cast in Euclidean form. \n\nThe reason for setting Euclidean NCO as our baseline is that we wanted to show our scheme is valid in well-explored literature, which has various benchmarks, very high-performance baseline models, and easily identified symmetricity. \n\nWe agree the next step for COP is the non-Euclidean routing problem and the large-scale routing problem. 
To this end, our finding that leveraging the symmetricity of CO is important for generalization capability still holds for further work on non-Euclidean and large-scale settings and can become an important resource.\n\n", " **Question 10: Statistical significance of solver performance difference**\n\nNote that the performance of the NCO model is evaluated with a test dataset which is independently separated from the training dataset. Therefore, overfitting to the training dataset does not help increase performance at test time. Our method outperforms all neural baselines in four different tasks, where some methods were “focused” on specific tasks. Also, the test dataset has 10,000 instances, and the reported performance is the average over them. In TSP, the optimal value is around 7.76, so near the optimum there is little room for improvement. Therefore, Sym-NCO’s performance increment over POMO (which is the SOTA DRL-based neural model for TSP) is statistically meaningful. \n\nFurthermore, as shown in the scalability result above (CVRP N = 500, 1000), our model has higher transferability than the baseline model, which is statistically significant evidence against overfitting. \n\n\n---\n\n**Question 11: Optimality gap calculation**\n\nOptimal value of TSP (N=100) on the 10,000-instance dataset proposed by Kool et al.: 7.76455\n\nPOMO + Sym-NCO (ours): 7.8375\n\nOptimality gap: (7.8375 - 7.76455)/7.76455 * 100 = 0.94%\n\n**Question 12: Unclear Sym-NCO integration with existing ML solvers**\n\nAs we mentioned in section 5.2:\n\nTable 1: POMO + Sym-NCO\n\nTable 2: AM + Sym-NCO\n\n\nWe revised our manuscript; see tables 1 and 2. \n\n---\n\n**Question 13: Missing experimental data**\n\nWe do not directly reproduce S2V-DQN because it is far from SOTA. We refer to the values reported by Kool et al.\n\n**Question 14: Negative optimality gaps**\n\nIn PCTSP, following Kool et al., ILS is the best solver (but not the optimal solver). Therefore, the optimality gap may be misleading. The optimality gap (as Kool et al. did) indicates a gap from the current best-known solver. Therefore, in PCTSP we outperformed the best-known solver, to the best of our knowledge. \n\n**Question 15: Unclear and inconsistent results**\nThat is because we reported early steps of the training process. Full training results are below:\n\n| | TSP (N=100) | CVRP (N=100) |\n| --- | --- | --- |\n| PointerNet | 8.60 | - |\n| PointerNet + Sym-NCO (ours) | **8.57** | - |\n| AM | 8.12 | 16.80 |\n| AM + Sym-NCO (ours) | **7.90** | **16.35** |\n\n\nHyperparameters:\n\nBatch size = 512\n\nNumber of Epochs: 100\n\nNumber of Instances per Epoch: 1,280,000\n\nL (problem sampling for L_ps): 10\n\nK (solution sampling per problem): 1\n\nInference: Greedy Rollout\n\nNote that the results of PointerNet reported in Kool et al. (2019) and the model reproduced from their source code are different (Kool et al. proposed AM; PointerNet was just for verifying their rollout baseline scheme). In Table 1, we simply followed the value reported in the paper of Kool et al. (2019). We think the actual expressive power of PointerNet is not the main point of this paper; the important fact is that Sym-NCO can also improve an NCO model proposed in 2015.\n\n---\n**Question 16: Fig 4a**\n\nPOMO is already a good solver in TSP, as we described in section 3.2. As shown in Fig. 4a, the gap becomes larger when solution symmetricity is hard to identify (TSP has a trivial solution symmetricity where the initial visiting node can be permuted). 
In TSP, there is a small gap improved by Sym-NCO.\n\n**Question 17: Stopping Criteria**\n\nStopping criteria were set by each existing paper’s stopping rule. For comparison with POMO, we follow the POMO paper’s post-processing rule. For comparison with MDAM, we also follow MDAM’s post-processing rule. \n\nIn greedy rollout results, which are very important for seeing the zero-shot capability of the neural network model, Sym-NCO clearly outperformed the other ML baselines. There are several other techniques for post-processing (such as 2-opt). Moreover, the graph in Figure 4 is log-scaled (which is why a baseline may “seem” to surpass ours when it does not). For example, Sym-NCO zero-shot greedy rollout outperforms MDAM, which has a 1000× larger time budget.\n\n---\n**Question 18: Sensitivity Analysis**\n\nOur training resources are not enough to tune hyperparameters because training POMO requires more than 2 weeks per task. However, we simply set alpha = 0.1, and beta to 0 or 1, for every method. If we tuned hyperparameters, performance might improve even further. \n\n---\n**Question 19: Missing analysis and discussion of Sym-NCO design choice**\n\nYes, there are many problems that have sufficient solution symmetricity. In these cases, we can simply set K=1 with a large L (e.g., L=10) to automatically identify solution symmetricity. If there is sufficient solution symmetricity, input rotation may help to find solution symmetricity because input rotation increases the randomness of the neural network to find a different solution. If there is not sufficient solution symmetricity, symmetricity will only be captured by $L_{\text{inv}}$ and $L_{\text{ps}}$. See our revised manuscript Appendix C.2. \n\n", " \n**Question 7: Unclear REINFORCE methodology and integration** \n\nThe total loss of Sym-NCO is $L_{total} = L_{Sym-RL} + L_{inv} = L_{ps} + L_{ss} + L_{inv}$\n\n- $L_{inv}$ is a loss term for representation learning and thus is not related to a general RL loss term (Eq.1).\n- Eq. 1 denotes a general RL loss term $L$, and this loss term is extended to define $L_{ps}$ and $L_{ss}$ to introduce the problem symmetricity and solution symmetricity, respectively, as:\n \n $L_{ss} = E_{\pi \sim F}[R(\pi)]$\n \n $L_{ps} = E_{Q^l \sim Q}E_{\pi \sim F}[R(\pi)]$\n \n- Eq. 5 and 6 are computed by differentiating $L_{ps}$ and $L_{ss}$ as:\n \n $\nabla L_{ss} = E_{\pi \sim F}[(R(\pi)-b)\nabla \log F] \approx \frac{1}{K}\sum_{j=1}^{K}[(R(\pi^{j}) - \frac{1}{K}\sum_{k=1}^{K}R(\pi^{k}))\nabla \log F]$\n \n $\nabla L_{ps} = E_{Q^l \sim Q}E_{\pi \sim F}[(R(\pi)-b)\nabla \log F] \approx \frac{1}{LK}\sum_{i=1}^{L}\sum_{j=1}^{K}[(R(\pi^{i,j}) - \frac{1}{LK}\sum_{i=1}^{L}\sum_{j=1}^{K}R(\pi^{i,j}))\nabla \log F]$\n \n\nThe gradient of the loss is derived with the policy gradient baseline trick and approximated with the sample mean. See our revised paper. (A minimal code sketch of this estimator is given after the answer to Question 8 below.)\n\n---\n**Question 8: Missing related work and context** \n\nSym-NCO in this paper focuses on integration with Euclidean NCO methods. However, research on non-Euclidean NCO (graph NCO) is also a very important research direction. We also mentioned in the discussion that Sym-NCO can be further extended to graph NCO models. We revised our manuscript by adding all the graph NCO literature you mentioned to the discussion section. 
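As referenced in the answer to Question 7 above, here is a minimal sketch of the shared-baseline estimator; the function name and tensor shapes are illustrative assumptions, while the baseline and advantage follow the equations above.

```python
import torch

def sym_nco_policy_loss(log_likelihood: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    """REINFORCE surrogate for the problem/solution symmetricity loss (a sketch).

    Both tensors are assumed to have shape [L, K]: L transformed (e.g.,
    rotated) copies of one problem instance, with K sampled solutions each.
    `log_likelihood[i, j]` is log F(pi^{i,j}) and `rewards[i, j]` is
    R(pi^{i,j}) (e.g., the negative tour length).
    """
    # shared baseline b = (1/LK) * sum_{i,j} R(pi^{i,j}); advantages sum to
    # zero over the L*K samples, so the baseline is unbiased.
    advantage = rewards - rewards.mean()
    # descending on this loss follows (R - b) * grad log F for each sample.
    return -(advantage.detach() * log_likelihood).mean()
```

Setting L=1 recovers the solution-symmetricity term $\nabla L_{ss}$ alone.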
\n\n---\n\n**Question 9: Small CO instances**\n\nFirst of all, graph CO problems such as max-cut, maximum independent set (MIS), and min-cut are locally decomposable problems, which are easier to scale than routing problems such as TSP and CVRP. Specifically, Ahn et al. 2020 proposed “learning what to defer”, which hierarchically decomposes the decision process of graph CO into smaller pieces and solves problems with over 1,000,000 nodes. However, Ahn et al. mentioned that TSP’s routing constraint is not locally decomposable, making it hard to apply their scalable method. \n\nSym-NCO can also scale beyond N=100, as shown in the TSPLIB results where the maximum is N=250. The table below shows results using POMO + Sym-NCO trained on N=100 and performing inference on larger-scale problems.\n\n**Exp setting**\n\nTraining: same pre-trained model as reported in Table 1\n\nInference: sampling width = 20\n\nTable for TSPLIB\n\n| | Opt | POMO | Gap | Ours | Gap |\n| --- | --- | --- | --- | --- | --- |\n| KroA200 | 29,368 | 29,937 | 1.94% | **29,816** | **1.53%** |\n| ts225 | 126,643 | 131,811 | 4.08% | **127,742** | **0.87%** |\n| tsp225 | 3,919 | 4,149 | 5.87% | **4,126** | **5.27%** |\n| pr226 | 80,369 | 82,428 | 2.56% | **82,337** | **2.45%** |\n\nFurthermore, Sym-NCO can be scaled further up using (1) pretraining on a small-scale problem and (2) transferring the pretrained model to a larger-scale problem (Hottung et al. 2022). Therefore we provide additional experiments on **large-scale CVRP (N=500,1000)**.\n\n**Transfer Learning to Large Scale Experiments** \n\nTraining: the same pretrained model as reported in Table 1\n\nTransfer Adaptation: We used the source code of Hottung et al. 2022 (https://github.com/ahottung/EAS) \n\n- Adaptation Method: EAS-lay\n- Adaptation Shot: 200\n\n| | CVRP (N=500) | CVRP (N=1000) |\n| --- | --- | --- |\n| LKH3 | 60.37 (0.00%) | 115.74 (0.00%) |\n| POMO + EAS {200} | 63.30 (4.85%) | 126.56 (9.24%) |\n| Ours + EAS {200} | **62.41 (3.37%)** | **121.85 (5.92%)** |\n\n**Few-shot Scale Transfer Adaptation Experiment**\nTraining: the same pretrained model as reported in Table 1\n\nTransfer Adaptation: We used the source code of Hottung et al. 2022 (https://github.com/ahottung/EAS) \n\n- Adaptation Method: EAS-lay\n- Adaptation Shot: 1, 2, 5, 10\n\n\n\n| **CVRP ($N=500$)** | K = 1 | K=2 | K=5 | K=10 |\n| --- | --- | --- | --- | --- |\n| POMO + EAS | 136.91 | 116.77 | 77.59 | 69.90 |\n| Ours + EAS | **75.85** | **69.72** | **67.26** | **66.33** |\n\n| **CVRP ($N=1,000$)** | K = 1 | K=2 | K=5 | K=10 |\n| --- | --- | --- | --- | --- |\n| POMO + EAS | 366.61 | 311.41 | 189.26 | 162.64 |\n| Ours + EAS | **192.12** | **163.92** | **139.66** | **134.61** |\n\nWe view large-scale routing problem research as very important future work. Sym-NCO can be positioned as a pretraining scheme. Because Sym-NCO supports learning symmetricity, which is a shared feature even for large-scale problems, pretraining with Sym-NCO will help to improve scalability further. \n\n**References**\n\n- Ahn, Sungsoo, Younggyo Seo, and Jinwoo Shin. "Learning what to defer for maximum independent sets." International Conference on Machine Learning. PMLR, 2020.\n\n- André Hottung, Yeong-Dae Kwon, and Kevin Tierney. Efficient active search for combinatorial\noptimization problems. arXiv preprint arXiv:2106.05126, 2021.\n\n", " **Question 1: Solution symmetricity and shared features' clarification**\n\nWe agree with your comment. 
We have revised it into “the solution symmetricity refers to the property that solutions have identical output values” in the revised manuscript.\n\n---\n\n**Question 2: Pre-identified symmetricities**\n\nThe pre-identified symmetricities indicate problem symmetricities, such as rotational and reflectional invariance, whose symmetricity is provably guaranteed.\n\n---\n\n**Question 3: Difficulty of solution symmetricity identification**\n\nChecking solution symmetricity for two given solutions is easy because we can just compare their solution values. However, finding a set of solutions with the same value is difficult. This is only possible for some CO problem classes whose solution structures are well understood. For example, for TSP, we know that the traveling cost of a determined route will be the same regardless of the first starting node. Thus we can identify solution symmetricity explicitly. However, for general CO problems, identifying such solution symmetricity is not straightforward, which is why we aim to identify such solution symmetricities through learning.\n\n---\n\n**Question 4: Overall motivation and intuition**\n\nLeveraging symmetricity is important to train CO models for two major reasons. Firstly, symmetricity is a strong inductive bias that can support the training process of DRL by making the training space compact, as shown in the newly updated Figure 1. Secondly, learning symmetricity is beneficial for increasing generalization capability on unseen CO problems because symmetricity induces the invariant representation that every COP contains. See figure 1 in the revised paper. \n\n---\n\n**Question 5: Inconsistent jargon**\n\nThe ‘invariant representation symmetricity’ and ‘problem symmetricity’ are similar but have different meanings. \n\nProblem symmetricity indicates the relationship between problems having the same optimal solution set (Def 2.1 in the main text). \n\nInvariant representation symmetricity is the objective of $L_{inv}$ that forces encoder representations from problems in the same problem-symmetric class to share features in some projected space.\n\n---\n\n**Question 6: Solution sampling methodology**\n\nThe training policy samples K × L solutions (i.e., on-policy). \n\n---\n\n**Question 6: [Advantage function-R(π(P)) the greatest rewards attained by policy π for problem P across all K samples?]**\n\n**R(π(P)) is one of the sample rewards from all K samples. Specifically,** \n\n$\nabla L_{ss} = E_{\pi \sim F}[(R(\pi)-b)\nabla \log F]$ \n\n$\approx \frac{1}{K}\sum_{j=1}^{K}[(R(\pi^{j}) - \frac{1}{K}\sum_{k=1}^{K}R(\pi^{k}))\nabla \log F]$\n\n**See our revised manuscript (equations 3, 4, 5)**\n\n---\n\n**Question 7: [Advantage function-Could the authors please explain how the proposed advantage function means the advantage will be negative if a proposed solution has worse optimality than the K solutions sampled?]**\n\nIn the case of solution symmetricity, we approximate the gradient of $L_{ss}$ using K sampled solutions as\n\n$\nabla L_{ss} = E_{\pi \sim F}[(R(\pi)-b)\nabla \log F] \approx \frac{1}{K}\sum_{j=1}^{K}[(R(\pi^{j}) - \frac{1}{K}\sum_{k=1}^{K}R(\pi^{k}))\nabla \log F]$\n\nAs you said, if the sampled solution $\pi^j$ underperforms compared to the average performance of the $K$ sampled solutions, the advantage becomes negative (e.g., with K=3 sampled rewards of -8, -10, and -12, the baseline is -10 and the advantages are +2, 0, and -2). Thus, this cost term is designed to push each sampled solution to perform better than average, until all the solutions have the same value. 
\n\nThus, this term optimizes the policy (i.e., makes each sampled solution perform better) while simultaneously imposing solution symmetricity. Note that the advantage terms of the K sampled solutions average to zero, which means our baseline is unbiased.\n\nSimilarly,\n\n$\\nabla L_{ps} = E_{Q^l \\sim Q}E_{\\pi \\sim F}[(R(\\pi)-b)\\nabla \\log F] \\approx \\frac{1}{LK}\\sum_{i=1}^{L}\\sum_{j=1}^{K}\\Big[\\Big(R(\\pi^{i,j}) - \\frac{1}{LK}\\sum_{i'=1}^{L}\\sum_{j'=1}^{K}R(\\pi^{i',j'})\\Big)\\nabla \\log F(\\pi^{i,j})\\Big]$,\n\nwhere the baseline is now shared across all L sampled problem transformations and the K solutions sampled per transformation.
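\n\nExtending the sketch above, problem symmetricity can be imposed by sharing the baseline across L randomly rotated copies of each instance. This is again an illustrative sketch under our notation; the `policy` callable and its interface are simplifying assumptions rather than our exact code:\n\n```python\nimport torch\n\ndef problem_symmetric_loss(policy, coords: torch.Tensor, L: int, K: int) -> torch.Tensor:\n    """Shared-baseline REINFORCE over L rotated problems x K solutions each.\n\n    coords: [B, N, 2] node coordinates of a batch of Euclidean routing instances.\n    policy: assumed callable returning ([B, K] rewards, [B, K] log-likelihoods).\n    """\n    R_all, logp_all = [], []\n    for _ in range(L):\n        theta = torch.rand(coords.size(0)) * 2 * torch.pi  # random rotation angles\n        rot = torch.stack(\n            [torch.stack([theta.cos(), -theta.sin()], dim=-1),\n             torch.stack([theta.sin(), theta.cos()], dim=-1)], dim=-2)  # [B, 2, 2]\n        R, logp = policy(coords @ rot, K)  # solve the rotated (symmetric) problem\n        R_all.append(R)\n        logp_all.append(logp)\n    R = torch.cat(R_all, dim=1)        # [B, L*K]\n    logp = torch.cat(logp_all, dim=1)  # [B, L*K]\n    advantage = R - R.mean(dim=1, keepdim=True)  # baseline shared over all L*K samples\n    return -(advantage.detach() * logp).mean()\n```\n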
\n\n", " Thank you for your valuable and specific comments. They were very constructive in improving the paper. We have uploaded the revised manuscript with the modified portions marked in blue font, and we kindly request the reviewer to see the revised manuscript.\n\nThe reviewer largely raised four limitations, and our answers are briefly summarized as follows. Specific answers to individual questions are presented further below.\n\n- **Presentation**: We revised our manuscript and added Figure 1 to improve the clarity of the paper.\n\n- **Large Scale CO**: We have conducted an additional experiment to test the scalability of our method on large-scale COs (CVRP N = 500, 1000). We observe that the model pretrained with Sym-NCO improves scale transferability significantly: our method achieves gaps of 3.37% (N=500) and 5.92% (N=1000) from LKH3, while POMO shows 4.85% and 9.24%. See Appendix D.3 and the detailed responses regarding the scalability tests.\n\n- **Experiments inconsistencies**: These criticisms seem to stem from an unclear description of the experimental setups and results. To resolve this, we have addressed your questions in detail with concrete experimental results.\n\n- **Related Works**: We revised the discussion section to discuss the relationship between the non-Euclidean NCO models and our Sym-NCO.\n", " The first reason that we do not present empirical comparisons with improvement-style NCO is that improvement-style NCO is complementary to constructive-style NCO. Improvement-style NCO learns policies that improve a current solution; therefore, it can also improve solutions produced by constructive-style NCO, yielding a hybrid NCO method.\n\nConstructive NCO has strong benefits over improvement NCO, as we mentioned in the introduction: it easily generates feasible solutions on hard-constrained tasks and is extremely fast.\n\nHowever, we also provide an empirical comparison with state-of-the-art improvement NCO:\n\n| | TSP (N=100) | | CVRP (N=100) | |\n| --- | --- | --- | --- | --- |\n| | Gap | Time | Gap | Time |\n| Wu et al. (I=5K) | 1.42% | 2h | 2.47% | 5h |\n| DACT (I=1K) | 1.62% | 48s | 3.18% | 2m |\n| DACT (I=5K) | 0.61% | 4m | 1.55% | 8m |\n| Ours (s.100) | 0.39% | 12s | 1.46% | 16s |\n| Ours (s.800) | **0.14%** | 1m | **0.90%** | 2m |\n\nSee the detailed experiments in the revised Appendix D.4.\n\n**References**\n\n- **[DACT]** Yining Ma, Jingwen Li, Zhiguang Cao, Wen Song, Le Zhang, Zhenghua Chen, and Jing Tang. Learning to iteratively solve routing problems with dual-aspect collaborative transformer. Advances in Neural Information Processing Systems, 34, 2021.\n- Yaoxin Wu, Wen Song, Zhiguang Cao, Jie Zhang, and Andrew Lim. Learning improvement heuristics for solving routing problems, 2020.\n\n", " \n\n**Question 4: Section 6.1 results are counterintuitive**\n\nYour question about EGNN and our method touches a critical part of our research. Thank you very much for providing us with the opportunity to clarify this important topic. If we focus only on symmetricity among the many conditions that an optimal solver must satisfy, the results presented in this study may look unintuitive: EGNN, which has the desired symmetricity exactly, does not perform as well as Sym-NCO, which has the symmetricity only approximately.\n\nWe believe rotational symmetricity is a necessary condition for an optimal solver but not a sufficient one (see the newly added Figure 1 in the revised manuscript). To support this claim, we provide the following two arguments with evidence:\n\n- **Rotational symmetricity is necessary to improve performance.** Rotational invariance is an important property for improving the generalization capability of the model on CO tasks. Although a test instance has never been seen during training, the solver can exploit the fact that the test instance is also rotation invariant, which is why Sym-NCO generalizes better than exactly the same model trained without rotational invariance. Tables 1 and 2 and Figure 6 clearly show that Sym-NCO, which utilizes problem symmetricity, outperforms the same model without rotation-invariant learning.\n- **Rotational symmetricity is not sufficient to improve the performance of the solver.** Figure 6 shows that EGNN, which has the desired symmetricity exactly, underperforms Sym-NCO, which has it only approximately. This trend implies that rotational symmetricity is not a sufficient condition for an optimal solver. We believe the performance difference comes from the different representation power of EGNN and Sym-NCO: our method can utilize existing powerful CO models such as AM and POMO, which have extremely high representation capability on CO tasks, and it integrates with them simply through the proposed regularization scheme. In contrast, EGNN is difficult to combine with such effective NCO solver architectures.\n\nWe agree that it would be ideal to have provably equivariant neural networks that satisfy equivariance with respect to several CO symmetricities while maintaining the representation capability of existing CO models. We leave this direction to future work.\n\n---\n\n**Question 5: Continuation of Question 4**\n\nFor your continuation question, we first thank you for helping us analyze our results. However, we respectfully disagree, to a degree, with both (A) and (B).\n\n- **(A) Rotational invariance is indeed important.** As explained above, rotational invariance is an important property for improving the generalization capability of the model on CO tasks. Rotational invariance is a shared invariant feature that every CO problem contains; even CO problems unseen at training time have the rotational invariance property, which makes it easier for the model to adapt to such new test problems. Tables 1 and 2 and Figure 6 clearly show that Sym-NCO, which utilizes problem symmetricity, outperforms the same model without rotation-invariant learning.\n- **(B) Our method avoids overfitting.** Our “regularization” scheme improves generalization capability. First, CO tasks are all about generalization because the solver must handle unseen problems: even when the training distribution and scales are identical to the test distribution, the training and test data have different instances, optimal values, and optimal solutions. Therefore, a model overfitted to the training dataset will perform poorly on the test dataset. Furthermore, our method is verified to improve generalization capability on problems of different scales (see Appendix D.3).\n\n---\n\n**Question 6: Implementation details of EGNN**\n\nWe used 6 EGNN encoder layers with embedding dimension 128. An EGNN layer requires three input components: edges, coordinates, and nodes. We use the all-pairs distance matrix of the cities as the edges and the city coordinates as the coordinates. Lastly, we use demand and prize (the feature f in Section 3.1) as the input nodes; TSP has no demand or prize, so we simply use a zero vector. Note that the decoder layer is the same as in POMO.\n[Note that we will upload the source code after the decision is made.]\n\n---\n\n**Question 7: Symmetricity "imposed" can be a misleading expression.**\nWe agree with this argument and have revised it; please check the revised manuscript.\n\n", " **Question 1: Comment of Abstract**\n\nThe authors agree that the sentence can be misleading in that it suggests “supervised learning”. Our intention was to say that DRL is able to learn an NCO solver just by interacting with the target problem (environment), without having to rely on domain expert knowledge. We revised the sentence as follows:\n\n“Deep reinforcement learning (DRL)-based combinatorial optimization (CO) methods (i.e., DRL-NCO) have shown significant merit over the conventional CO solvers, as DRL-NCO is capable of learning CO solvers without having to rely on domain expert knowledge.”\n\n---\n\n**Question 2: Question of Loss Terms**\n\nThe total loss of Sym-NCO is $L_{total} = L_{inv} + L_{ps} + L_{ss}$.\n\n- $L_{inv}$ is a loss term for representation learning and thus is not related to the general RL loss term (Eq. 1). \n- Eq. 1 denotes a general RL loss term $L$, and this loss term is extended to define $L_{ss}$ and $L_{ps}$, which introduce the solution symmetricity and problem symmetricity, respectively, as: $L_{ss} = E_{\\pi \\sim F}[R(\\pi)]$, $L_{ps} = E_{Q^l \\sim Q}E_{\\pi \\sim F}[R(\\pi)]$.\n- Eqs. 5 and 6 are computed by differentiating $L_{ss}$ and $L_{ps}$ as:\n$\\nabla L_{ss} = E_{\\pi \\sim F}[(R(\\pi)-b)\\nabla \\log F] \\approx \\frac{1}{K}\\sum_{j=1}^{K}\\big[\\big(R(\\pi^{j}) - \\frac{1}{K}\\sum_{k=1}^{K}R(\\pi^{k})\\big)\\nabla \\log F(\\pi^{j})\\big]$\n$\\nabla L_{ps} = E_{Q^l \\sim Q}E_{\\pi \\sim F}[(R(\\pi)-b)\\nabla \\log F] \\approx \\frac{1}{LK}\\sum_{i=1}^{L}\\sum_{j=1}^{K}\\big[\\big(R(\\pi^{i,j}) - \\frac{1}{LK}\\sum_{i'=1}^{L}\\sum_{j'=1}^{K}R(\\pi^{i',j'})\\big)\\nabla \\log F(\\pi^{i,j})\\big]$.\nThe gradient of the loss is derived with the policy-gradient baseline trick and approximated with the sample mean. See our revised paper.
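\n\nTo make the combination concrete, here is a schematic PyTorch-style sketch of how the three terms are assembled. The `projection_head` module and the loss weight are illustrative assumptions (cf. the $\\alpha$ discussed in the appendix), not our exact implementation:\n\n```python\nimport torch\nimport torch.nn.functional as F\n\ndef sym_nco_total_loss(h_orig, h_rot, loss_ss, loss_ps, projection_head, alpha=0.1):\n    """L_total = L_ps + L_ss + alpha * L_inv (weighting is illustrative).\n\n    h_orig, h_rot: [B, N, H] encoder embeddings of an instance and a rotated copy.\n    loss_ss, loss_ps: scalar REINFORCE losses from the earlier sketches.\n    projection_head: assumed small MLP mapping embeddings to a projected space.\n    """\n    z1 = projection_head(h_orig)  # compare representations in a projected space\n    z2 = projection_head(h_rot)\n    # L_inv: pull representations of problem-symmetric instances together\n    l_inv = -F.cosine_similarity(z1, z2, dim=-1).mean()\n    return loss_ps + loss_ss + alpha * l_inv\n```\n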
\n\n---\n\n**Question 3: Sym-NCO with other Models**\n\nAs we mentioned in the experimental setting (Section 5.2), we applied Sym-NCO as follows:\n\n- Table 1: POMO + Sym-NCO.\n- Table 2: AM + Sym-NCO.\n\nAs per your suggestion, we report **PointerNet + Sym-NCO** and **AM + Sym-NCO** as follows:\n\n| | TSP (N=100) | CVRP (N=100) |\n| --- | --- | --- |\n| PointerNet | 8.60 | - |\n| PointerNet + Sym NCO (ours) | **8.57** | - |\n| AM | 8.12 | 16.80 |\n| AM + Sym NCO (ours) | **7.90** | **16.35** |\n\n**Hyperparameters**:\n\n- Batch size = 512\n\n- Number of Epochs: 100\n\n- Number of Instances per Epoch: 1,280,000\n\n- L (problem sampling for L_ps): 10\n\n- K (solution sampling per problem): 1\n\n- Inference: Greedy Rollout\n\nNote that the PointerNet results reported in Kool et al. (2019) and those of the model reproduced from their source code (https://github.com/wouterkool/attention-learn-to-route) differ (Kool et al. (2019) proposed AM; PointerNet was included only to verify their rollout baseline scheme). In Table 1, we simply followed the values reported in Kool et al. (2019).\n\nWe remark that PointerNet and POMO do not support PCTSP and OP in Table 2.\n\n", " Thank you for your valuable comment. It was very constructive for revising our manuscript. In particular, we could improve the presentation of the current paper based on your detailed comments.\n\nBefore answering your specific questions one by one, we have summarized the major changes and the effort we made to respond to the limitations the reviewer raised.\n\n**[Insufficient discussions on related work (Symmetricity based NCO)]**\n\n**Table 1** shows that our method outperforms all baselines while being the fastest. **Table 2** shows that our method covers a wide range of CO tasks. We have included these results in Appendix D.5.\n\n| **Table 1** | TSP (N=100) Optimal Gap | Evaluation Time | GPU Usage |\n|--------------------------------------|-------------------------------|-----------------------|--------------------|\n| Ouyang et al. (local search) | 2.61% | 1.3m | GTX 1080Ti |\n| Hudson et al. (local search) | 0.698% | 28h | Tesla P100 |\n| Ma et al. ($I=1K$) | 1.62% | 4m | Titan RTX |\n| Ours (s.100) | **0.39%** | **12s** | RTX 2080Ti |\n* $I$ indicates the number of iterations.\n* $s$ indicates the number of solutions sampled from an identical problem.\n\n| **Table 2** | Learning Method | Verified Tasks |\n|--------------------------------------|-------------------------------|-----------------------|\n| Ouyang et al. (local search) | RL | TSP |\n| Hudson et al. (local search) | SL | TSP |\n| Ma et al. ($I=1K$) | RL | TSP, CVRP |\n| Ours (s.100) | RL | TSP, CVRP, PCTSP, OP |\n\n---\n\n**[Literature Review]**\n\nWe have revised Section 4 to add the literature review you suggested. The following paragraphs were added to expand the literature review in Appendix D.5:\n\n“Ouyang et al. have a similar purpose to Sym-NCO, in that both are DRL-based constructive heuristics, but they apply a rule-based input transformation (relative position from the first visited city) to satisfy equivariance. In contrast, our method learns to impose symmetricity approximately on the neural network with a regularization loss term. We believe ours is a more general approach to tackling symmetricity (see Table 2), because not every task can be represented by relative positions with respect to the first visited city.\n\nHudson et al. is an extension of Joshi et al., where a graph neural network produces a sparse graph from the fully connected input graph, and a search method extracts a feasible solution from the sparse graph. This method is based on a supervised learning scheme that requires expert labels. Moreover, it is not guaranteed to generate feasible solutions on hard-constrained CO tasks, because the GNN pruning process may eliminate feasible trajectories (in TSP this may work, but on other tasks the method must address feasibility issues). Regardless of this limitation, we view the line-graph transformation as novel and helpful in terms of symmetricity.\n\nMa et al. proposed a DRL-based improvement heuristic exploiting the cyclic nature of TSP and CVRP. The purposes of Ma et al. and our Sym-NCO are different: the objective of Sym-NCO is to approximately impose the symmetric nature, whereas the objective of Ma et al. is to improve the iteration process of an improvement heuristic with finely designed positional encodings for TSP and CVRP. Note that Sym-NCO (a constructive method) and Ma et al. (an improvement method) are complementary and can support each other. For example, a pretrained constructive model can generate an initial high-quality solution, whereas the improvement method can iteratively improve the solution quality.”\n\n**[Counter-intuitive Results, Unclear Writing, and Presentation]** We provide detailed responses below to your questions from the weakness comments.\n\n**References**\n\n- Wenbin Ouyang, Yisen Wang, Paul Weng, and Shaochen Han. Generalization in deep RL for TSP problems via equivariance and local search. arXiv preprint arXiv:2110.03595, 2021.\n- Benjamin Hudson, Qingbiao Li, Matthew Malencia, and Amanda Prorok. Graph neural network guided local search for the traveling salesperson problem. arXiv preprint arXiv:2110.05291, 2021.\n- Yining Ma, Jingwen Li, Zhiguang Cao, Wen Song, Le Zhang, Zhenghua Chen, and Jing Tang. Learning to iteratively solve routing problems with dual-aspect collaborative transformer. Advances in Neural Information Processing Systems, 34, 2021.\n\n", " Thank you for your valuable comments.\n\n**Question 0: Why is the training step of Figure 6 not consistent?**\n\nThe training steps were consistently reported; we present the training graphs for the first 50,000 steps. The training graph of PointerNet is presented from step 25,000 to 50,000 because the training curve of PointerNet is unstable when the step T<25,000.\n\nNote that the full training results of POMO + Sym-NCO were reported in Table 1. We also report PointerNet + Sym-NCO and AM + Sym-NCO in the response to your Question 3 below.\n\n**Question 1: Equation Clarification**\n\nThe total loss of Sym-NCO is $L_{total} = L_{inv} + L_{ps} + L_{ss}$.\n\n- $L_{inv}$ is a loss term for representation learning and thus is not related to the general RL loss term (Eq. 1). \n- Eq. 1 denotes a general RL loss term $L$, and this loss term is extended to define $L_{ss}$ and $L_{ps}$, which introduce the solution symmetricity and problem symmetricity, respectively, as: $L_{ss} = E_{\\pi \\sim F}[R(\\pi)]$, $L_{ps} = E_{Q^l \\sim Q}E_{\\pi \\sim F}[R(\\pi)]$.\n- Eqs. 5 and 6 are computed by differentiating $L_{ss}$ and $L_{ps}$ as:\n$\\nabla L_{ss} = E_{\\pi \\sim F}[(R(\\pi)-b)\\nabla \\log F] \\approx \\frac{1}{K}\\sum_{j=1}^{K}\\big[\\big(R(\\pi^{j}) - \\frac{1}{K}\\sum_{k=1}^{K}R(\\pi^{k})\\big)\\nabla \\log F(\\pi^{j})\\big]$\n$\\nabla L_{ps} = E_{Q^l \\sim Q}E_{\\pi \\sim F}[(R(\\pi)-b)\\nabla \\log F] \\approx \\frac{1}{LK}\\sum_{i=1}^{L}\\sum_{j=1}^{K}\\big[\\big(R(\\pi^{i,j}) - \\frac{1}{LK}\\sum_{i'=1}^{L}\\sum_{j'=1}^{K}R(\\pi^{i',j'})\\big)\\nabla \\log F(\\pi^{i,j})\\big]$.\nThe gradient of the loss is derived with the policy-gradient baseline trick and approximated with the sample mean. See our revised paper.
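\n\nFor completeness, the baseline trick referenced here can be spelled out as follows (a standard derivation, reproduced for the reviewer's convenience). For any baseline $b$ that does not depend on the sampled solution $\\pi$,\n\n$E_{\\pi \\sim F}[b\\,\\nabla \\log F(\\pi)] = b\\sum_{\\pi} \\nabla F(\\pi) = b\\,\\nabla \\sum_{\\pi} F(\\pi) = b\\,\\nabla 1 = 0$,\n\nso that\n\n$\\nabla E_{\\pi \\sim F}[R(\\pi)] = E_{\\pi \\sim F}[R(\\pi)\\nabla \\log F(\\pi)] = E_{\\pi \\sim F}[(R(\\pi)-b)\\nabla \\log F(\\pi)]$,\n\nand replacing the expectation with the sample mean over the K (or LK) sampled solutions yields the approximations above.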
\n\n---\n\n**Question 2: What does 'gr' refer to in Table 1?**\n\n'gr' refers to greedy rollout (the rollout of the maximum-probability trajectory). We revised 'gr' to 'greedy' in Tables 1 and 2.\n\n---\n\n**Question 3: Which model is trained by Sym-NCO in Tables 1 and 2?**\n\nAs we mentioned in the experimental setting (Section 5.2), we applied Sym-NCO as follows:\n\n- Table 1: POMO + Sym-NCO.\n- Table 2: AM + Sym-NCO.\n\nAs per your suggestion, we report **PointerNet + Sym-NCO** and **AM + Sym-NCO** as follows:\n\n| | TSP (N=100) | CVRP (N=100) |\n| --- | --- | --- |\n| PointerNet | 8.60 | - |\n| PointerNet + Sym NCO (ours) | **8.57** | - |\n| AM | 8.12 | 16.80 |\n| AM + Sym NCO (ours) | **7.90** | **16.35** |\n\n**Hyperparameters**:\n\n- Batch size = 512\n\n- Number of Epochs: 100\n\n- Number of Instances per Epoch: 1,280,000\n\n- L (problem sampling for L_ps): 10\n\n- K (solution sampling per problem): 1\n\n- Inference: Greedy Rollout\n\nNote that the PointerNet results reported in Kool et al. (2019) and those of the model reproduced from their source code (https://github.com/wouterkool/attention-learn-to-route) differ (Kool et al. (2019) proposed AM; PointerNet was included only to verify their rollout baseline scheme). In Table 1, we simply followed the values reported in Kool et al. (2019).\n\nWe remark that PointerNet and POMO do not support PCTSP and OP in Table 2.\n\n**Question 4: EGNN + Sym-NCO**\n\nEGNN and our method are not complementary. EGNN explicitly imposes symmetricities through the neural network architecture (hard-constrained equivariant learning), while Sym-NCO imposes symmetricities through regularization costs. An EGNN designed for a particular symmetricity makes the regularization cost designed to impose that symmetricity exactly zero. Thus, using both approaches at the same time would not boost performance.\n\nThis may make EGNN sound like a more direct and effective approach; however, identifying the proper symmetricities for each NCO task is not always easy. In addition, explicitly imposing symmetries often restricts the expressive power of a network, resulting in performance degradation.\n\nWe believe your suggestion is meaningful because the two approaches could be used to impose different types of symmetricities. In future research, we will consider developing a more effective method by hybridizing these two approaches.\n\n**References**\n\n- Wouter Kool, Herke van Hoof, and Max Welling. Attention, learn to solve routing problems! In International Conference on Learning Representations, 2019.\n\n", " To begin with, we thank every reviewer who helped improve our manuscript.\n\nWe have revised our manuscript significantly according to the reviewers’ comments. The revised portions are indicated in blue font. We kindly request the reviewers to see our revised manuscript while reading our responses below. The major updates are as follows:\n- **Motivation**. To clearly explain the motivation for utilizing the symmetricity found in COPs, we have included a “motivation” paragraph in the Introduction section. We also newly included Figure 1 to conceptually and intuitively explain the benefit of Sym-NCO over existing DRL-NCO methods and Equivariant Neural Network (ENN) schemes.\n- **Detailed mathematical description of the training method**. We have provided the detailed procedure for deriving the policy gradient terms of the REINFORCE algorithm with specially designed baseline terms to impose both problem and solution symmetricities.\n- **Additional Experiments**. We have included three additional experimental results in the supplementary document: an evaluation of transferability on large-scale CVRP (D.3), a comparison with the DRL improvement heuristics (D.4), and a comparison with other symmetric NCOs (D.5).\n\nThe current manuscript has received positive evaluations regarding its novelty and impact:\n- **Reviewer 5sgV**: the proposed method is “innovative,” with extremely significant results on four benchmarks.\n- **Reviewer 186o**: the proposed method has clear motivation.\n- **Reviewer oKeQ**: contains a novel and original loss scheme, is easy to integrate with prior methods, and targets significant application areas of ML.\n\nThe three key points that reviewers commonly pointed out, and the authors' responses to them, are as follows:\n- **Unclear derivation procedure for the REINFORCE gradients (**Reviewer 186o**, **Reviewer oKeQ**)**: We have provided the detailed procedure for deriving the policy gradient terms of the REINFORCE algorithm with specially designed baseline terms to impose both problem and solution symmetricities. Please see Section 3.1 of the revised manuscript.\n- **Unclear motivation for imposing symmetricities (**Reviewer 186o**, **Reviewer oKeQ**)**: To clearly explain the motivation for utilizing the symmetricity found in COPs, we have included a “motivation” paragraph in the Introduction section. We also newly included Figure 1 to conceptually and intuitively explain the benefit of Sym-NCO over existing DRL-NCO methods and Equivariant Neural Network (ENN) schemes.\n- **Scalability (**Reviewer oKeQ**)**: We have evaluated the transferability of the method on large-scale CVRP (N=500, 1000) and included these additional results in Appendix D.3. The results show that Sym-NCO greatly improves scale transferability, significantly outperforming the previous DRL-NCO model's transferability (5 times faster adaptation at N=500). These results clearly strengthen our claim that learning symmetricity improves trainability by giving a compact training space and generalization capability.\n\n**We provide specific responses for each reviewer**.\n\n", " Combinatorial optimization problems often have numerous symmetries. This work proposes to leverage these symmetries to improve the training of neural networks that have been proposed in other works to solve such combinatorial optimization problems.\nMore specifically, the paper:\n * precisely defines two types of symmetries it wants to leverage, covering both symmetries that are inherent to the formulation of these problems and symmetries that are intrinsic to the solution spaces of such problems.\n * proposes a regularization scheme to help the neural network learn the symmetries.\n * evaluates their approach extensively on 4 common combinatorial optimization problems. Overall the paper is well written and fairly easy to follow. However, it's unclear how the authors came up with equations 5 and 6. It looks like the authors don't need to mathematically define Lss and Lps (which is why they never do so in the paper), but instead directly tweak the corresponding gradients in order to build two regularization gradients. The paper would be much easier to follow if that was clearly stated, and the process the authors went through in order to build these two gradient terms was spelled out.\n \nSince I am not sure how the authors came up with equations 5 and 6, I didn't check the correctness of the maths, so at this point I cannot vouch for the theoretical soundness of the approach. 
That said, the experimental results look good, so I'm optimistic that the maths will check out. I am looking forward to more explanations from the authors in the rebuttal. \n\nThe various graphs in figure 6 are drawn for varying numbers of training steps. Why not use a consistent number of steps?\n\nPrevious works have proposed to take advantage of symmetries to improve generalization, mainly by leveraging symmetry-invariant neural network architectures or by leveraging problem-specific symmetries. As far as I know, this is the first work that proposes a solution that can be applied to any neural network, which is a significant innovation.\n\nThe evaluation shows that Sym-NCO results in better-quality solutions on all 4 benchmarks the approach was evaluated on. Furthermore, it gets these results at least as fast as the fastest other approach it was compared against. This is extremely significant.\n\n Can you detail how you came up with equations 5 and 6? If need be, you can add the relevant text to the supplemental material as you did for the proof of theorem 2.1. This would really help convince me of the soundness of your approach.\n\nWhat does gr refer to in table 1?\n\nIn section 5.2 you mention that you applied Sym-NCO to POMO, AM, and PointerNet. Which one did you use to gather the results in Tables 1 and 2? It would be interesting to be able to compare the 3 versions against their corresponding baselines whenever possible.\n\nWould it be possible to extend your comparison with EGNN to also train the EGNN model with your loss? I'd love to see to what extent your approach is complementary to that of EGNN, in which case you should be able to improve on the performance reached by the EGNN model by training it with your loss. \n\n \n Nothing of note here.", " This work is in the area of learning to approximately solve TSP, CVRP, and the associated class of routing problems using deep reinforcement learning.\n\nThe main methodological contribution is to identify that routing problems and their solutions often contain symmetries such as **rotational symmetry** or the **cyclical nature** of the solutions. The paper proposes **new loss functions** that **softly incorporate symmetries** for the encoder-decoder architectures from AM (Kool et al.) and POMO (Kwon et al.). \n\nTraining models via the proposed loss functions (termed 'Sym-NCO') improves over the models trained via previously proposed loss functions on random routing problem instances as well as on real-world instances from TSPLib.\n\n==========\n\nPost rebuttal: Thank you to the authors for partially addressing my concerns. I have updated my score based on the responses. In particular, I was not fully convinced regarding statements on the (lack of) expressive power of equivariant networks, e.g. “ENN scheme provably guarantees to trained in symmetric space; more expression power is needed for CO tasks”. I apologise that I am unable to engage actively in author discussions at this time as I have fallen ill after traveling.\n\n==========\n\nPost author discussions: Thank you for addressing my concerns regarding the claims on expressive power and hard vs. soft invariant learning. I believe this work demonstrably improves the performance of NCO solvers by leveraging appropriate symmetries, and these empirical findings may be of interest for the broader community working on combinatorial problems beyond routing. I have updated my score with these considerations. Strengths:\n- **Clear motivation**: The paper does a good job at presenting ideas around how routing problems and their solutions often contain symmetries such as rotational symmetry or the cyclical nature of the solutions. The figures are very instructive.\n\nWeaknesses:\n- **Unclear writing and presentation**: There are several aspects of the paper where the writing and presentation were unclear to me. I have listed these under the **Questions** part of my review.\n- **Counterintuitive results** and **lack of justification**: The results in Section 6.1 around approximate vs. exact rotation invariance are counterintuitive, as they suggest to me that rotation invariance is in fact not desirable (because when it is enforced exactly, the model is unable to perform at all). However, these findings are not sufficiently justified or understood (see **Questions** section). I may have misunderstood the results or may be missing something here.\n- **Insufficient discussions on related work**: There are several recent works which are based on incorporating symmetries to improve learning for routing problems, but are not discussed in this paper, e.g. [Hudson et al.](https://arxiv.org/abs/2110.05291) obtained strong results via a **rotationally invariant GNN** via converting graphs to line graphs, [Ma et al.](https://arxiv.org/abs/2110.02544) proposed positional encodings that incorporated the **cyclical nature** of routing problems, [Ouyang et al.](https://arxiv.org/abs/2110.03595) performed **preprocessing** to make the model (approximately?) invariant to rotations. The Related Work section does not do enough to contextualise the present work w.r.t. recent advances in the community, which may be doing something similar or tangential to the present approach. In the best case, it would also be good to empirically compare to these techniques.\n\nFollowing the **Weaknesses** part of the review, there were several parts of the paper where the writing and presentation were unclear to me:\n- Lines 1-3, Abstract, state that DRL has merits over traditional CO solvers because DRL does not need supervised data. To the best of my knowledge, traditional CO solvers (which include Concorde or LKH) also **do not** need supervised data. Could the authors clarify this statement?\n- I was uncertain what the relationship of Eq. 4 and Eq. 5/6 was to Eq. 1. I understand that they are components of the overall loss L_sym in Eq. 3; however, **how is L_sym used to model Eq. 1**?\n- In the results in Tables 1 and 2, it appears as if Sym-NCO is a model by itself. On the other hand, the experimental setting states that Sym-NCO is applied to POMO, AM, and PointerNet. Could the authors clarify which of the **underlying models** were trained for each row of results highlighted as Sym-NCO? \n\nIn addition to the questions on writing/presentation, here are additional questions regarding **approximate vs. exact rotation invariance** in Section 6.1:\n- I found the results in Section 6.1 extremely counterintuitive. The results seem to suggest that provable rotation invariance is **actually undesirable**, and the authors have tried to justify this by saying that nudging the model to be approximately rotationally invariant makes it more **'flexible'** and maintains its 'representation capability'. In my opinion, the justifications are rather hand-wavy. Could the authors be more precise in defining these terms?\n- As a continuation of the previous point, in my opinion, the curves in Fig. 
6 show one of two things: (A) rotational symmetry is irrelevant to the problem, so EGNNs are not useful and unable to learn the task well (as seen by their fluctuating performance); or (B) the approximately rotationally invariant models are **overfitting on the data distribution** that is being used for training and validation (both are randomly generated in the same size range). Is this how one should interpret these results, or would the authors disagree? \n- Finally, could the authors provide some implementation details of the EGNN used?\n\nFinally, I had some other minor questions or nitpicks to mention:\n- Titles of Sections 3.1 and 3.2 state that the symmetries are being **'imposed'**. However, in my opinion, this may be **misleading** as the symmetries are only approximated, not imposed. E.g. the encoder model's output features are only approximately invariant to rotations (unless using models which are provably rotation invariant).\n- It was unclear to me why there are no empirical comparisons to the **improvement-style NCO models**. Could the authors justify this decision? The authors have included several technical limitations of the present work and avenues for future research. They have **not included** any sections on **potential negative social impact**, and this should at least be addressed. Is it likely that this type of research may be put in production in the logistics or transportation industry, and if so, what may be some considerations to make?\n\nIn my opinion, one major limitation of this work is the **lack of justification** or understanding of **approximate vs. exact rotation invariance** in the hidden representations of the encoder. E.g. is rotational symmetry even relevant in real world datasets beyond synthetic and random TSPs/CVPRs? As such, **real world** maps and cities have a **canonical set of coordinates** and **directions** (north, south, east, west). This is to say that, in a city where I may have many locations that I would like to navigate through, I always have an aligned or fixed set of coordinates as inputs - there doesn't seem to be a good reason to arbitrarily rotate them. \n\nIt is worth considering whether methodological developments on synthetic tasks may be useful for the corresponding real-world applications in routing.", " This paper considers the problem of training neural networks to solve NP-hard CO problems. Recent works have sought to apply ML to CO, but often fall short of outperforming the state-of-the-art handcrafted heuristics; methods which neural networks have the potential to surpass in both optimality and solving time. To address this, the authors first identify that CO problems have underlying symmetries in both their problem definition and their corresponding solutions. The authors reason that explicitly learning these symmetries will implicitly aid learning to find near-optimal solutions quickly. To this end, the authors propose a new loss function which guides the network towards finding optimal solutions whilst also learning embeddings which retain these symmetries; an achievement which the authors implicitly reason makes learning to solve such CO problems more tractable for RL. Moreover, the authors claim that their loss function formulation is agnostic to the specific CO problem considered, and that it can be applied to and improve any prior neural CO solver from the literature. 
The authors demonstrate their proposed loss function's efficacy on four CO problems (TSP, CVRP, PCTSP, and OP), and claim to surpass all ML baselines on all four CO problems. Strong points:\n* Addresses an important and significant application area of ML, namely solving NP-hard CO problems.\n* Proposes a novel and original loss function which guides the network towards learning the underlying symmetries of CO problems in addition to learning to find near-optimal solutions.\n* The proposed approach is seemingly easy to integrate with existing ML-CO solvers and is therefore complimentary to a broad variety of prior work.\n\nWeak points:\n* The paper/writing is unclear in multiple areas (see below).\n* The $100$ node problem sizes considered are significantly smaller than those of prior works.\n* The Experiments section in general has multiple shortcomings and inconsistencies (see below).\n* The Related Work section is incomplete and may not sufficiently place this work in the context of the current literature. ### Introduction & Methodology\n\n* **'Solution symmetricity and shared features' clarification:** In the Introduction when the authors introduce the two types of CO symmetry they consider, they say: ‘Second, the solution symmetricity, which is the shared feature among solutions having identical optimal values.’ I find this sentence difficult to understand - do the authors just mean that solution symmetricity is where solutions have the same value? What is the ‘shared feature’, and why do the values necessarily need to be optimal? In Definition 2.2, the authors state that two solutions are symmetric when their total returns are equal; nothing about optimality is mentioned as far as I can tell.\n\n* **'Pre-identified symmetricites' clarification:** In the Introduction, the authors mention that their novel learning scheme ‘imposes symmetricities by leveraging the pre-identified symmetricities’. Does ‘pre-identified symmetricities’ refer to the problem and solution symmetricities, or some pre-identification method? This jargon should be explicitly defined in my view for clarity.\n\n* **Difficulty of solution symmetricity identification:** In the Introduction, the authors state that problem symmetricity is found in all CO problems in the form of rotational symmetricity, but that solution symmetricity cannot be identified easily as the ‘properties of the solutions are distinct for every CO problem’. What specific properties are distinct? Is solution symmetricity where two solutions result in the same return (as stated in Definition 2.2), in which case why is it not easy to evaluate whether two solutions are symmetric?\n\n* **Overall motivation and intuition:** I think the Introduction and Discussion is missing some motivation for the overall idea. Why do the authors think it is fundamentally beneficial to account for symmetricities in the learning scheme? Are there state-of-the-art solvers which do this, or are the authors relying on some intuition that explicitly learning CO symmetricities will lead to a parameterised network more able to find solutions in fewer steps since it will have learned to find multiple policies which lead to the same near-optimal reward, and therefore have greater chance of generalisation at test time? I think a discussion of what motivated the idea and what the intuition is behind it is currently missing from the paper. 
\n\n* **Inconsistent jargon:** In the Sym-NCO Methodology, is the ‘invariant representation symmetry’ just the ‘problem symmetricity’ referred to elsewhere? If so, I think it would be good to keep jargon to a minimum by consistently referring to the various phenomena being discussed by the same names. If not, then I have misunderstood what is being discussed here.\n\n* **Solution sampling methodology:** How exactly are the $K$ and $L$ solutions sampled from the REINFORCE policy? Is this done with some stochastic exploration policy?\n\n* **Advantage function:** \n * In equation 5, is $R(\\pi(P))$ the greatest reward attained by policy $\\pi$ for problem $P$ across all $K$ samples? I.e. is $R(\\pi(P))$ a subset of $R(\\{\\pi^{k}\\}_{k=1}^{K})$? If not, what is the difference between how $R(\\pi(P))$ and $R(\\{\\pi^{k}\\}_{k=1}^{K})$ are generated?\n\n * Could the authors please explain how the proposed advantage function means the advantage will be negative if a proposed solution has a worse optimality than the K solutions sampled? Will it not only be negative if it is worse than the mean of the $K$ sampled solutions, since the baseline is just the mean return of $K$ sampled solutions? Same for $L$ in Equation 6.\n\n* **Unclear REINFORCE methodology and integration:** Where is $L_{inv}$ (and by extension $L_{sym}$) actually incorporated into training with REINFORCE? The policy gradient theorem defined in Equations 5 and 6 only seems to include the $L_{ss}$ and $L_{ps}$ terms, so where is $L_{inv}$? Where are the three symmetric loss functions combined? Furthermore, are Equations 5 and 6 separate, or does Equation 6 contain Equation 5 with the $L$ and $K$ summations? I would like to see how all of this is tied together, since at the moment I do not think it is clear.\n\n\n### Related Work\n\n* **Missing related work and context:** This section is missing some important work and fails to introduce some of the baselines considered in the later Experiments section (e.g. S2V-DQN) and the context around them. I do not think it necessary to add all prior works to the set of baseline comparisons, since the authors are showing that the Sym-NCO method can improve a range of different architectures and methods, but I think a discussion in the related work of a few more pieces of literature should be included to put the Sym-NCO work in context. In particular, the related work is missing a discussion of where Sym-NCO fits in the context of some of the key state-of-the-art non-GNN, GNN, supervised learning, and reinforcement learning works (e.g. Bello et al. 2016, Gu and Yang 2020, Dai et al. 2017, Abe et al. 2019, Li et al. 2018, Barrett et al. 2020 and 2022, Drori et al. 2020, Hottung et al. 2022).\n\n\n\n### Experiments\n\n* **Small CO instances:** At $100$ nodes, the CO problems considered are very small compared to those considered by e.g. Gu and Yang 2020 ($300$ nodes), Drori et al. 2020 ($1,000$ nodes), and Barrett et al. 2022 ($10,000$ nodes). Does Sym-NCO scale? Does the requirement to sample $L \\times K$ solutions for the advantage function and to gain enough data to learn the symmetricities hinder scalability? This would be interesting information to include in the paper’s experiments and discussion.\n\n* **Statistical significance of solver performance differences:** On $O(100)$ node problems of the size considered, many of the ML-CO methods in Table 1 seem to obtain similar costs (e.g. AM gets $7.94$, POMO and MDAM get $7.80$, and Sym-NCO gets $7.79$). 
Is this a statistically significant difference in the optimality gap? It would seem that on such small CO problems there is not much room for differing costs, and that the results of Sym-NCO can be easily made to overfit until the reported result is achieved.\n\n* **Optimality gap calculation:** How were the optimality gaps in Table 1 calculated? E.g. If the optimal solution of TSP is $7.76$ and Sym-NCO finds a solution of $7.84$, does this not mean that Sym-NCO’s solution cost is $1.03$% higher than the optimal solution (the authors have recorded a gap of $0.94$%)?\n\n* **Unclear Sym-NCO integration with existing ML solvers:** Table 1 and Fig 4 do not indicate which ML method(s) Sym-NCO was applied to. They just state Sym-NCO as a standalone method, but is Sym-NCO not applied to at least one of the other baselines to get the results in the table? At the beginning of the Experiments section, the authors state that they apply Sym-NCO ‘on top of POMO, AM, and PointerNet’, but it is not clear from Table 1 and Fig 4 which underlying method was used for Sym-NCO.\n\n* **Missing experimental data:** In Table 1, why are PointerNet and S2V-DQN missing solving time values?\n\n* **Negative optimality gaps:** In Table 1, having a negative 'optimality gap' does not make sense - how can a solution be found which is more optimal than the optimal solution?\n\n* **Unclear and inconsistent results:** Table 1 and Figs 3 and 4 are not consistent in multiple regards:\n * Fig 3 shows PointerNet getting a minimum validation cost of $\\approx9.50$ for TSP, but Table 1 shows PointerNet to have a cost of $8.30$. Also, none of the Sym-NCO validation curves in Fig 3 reach the $7.84$/$7.79$ TSP costs claimed in Table 1. Why? \n * Fig 4a: Sym-NCO and POMO get almost exactly the same $\\approx0.79$ result, yet POMO is recorded as having achieved a cost of $0.80$ in Table 1 compared to Sym-NCO’s recorded $0.79$.\n * What are the stopping criteria to stop running each solver? In Fig 4, it seems that the algorithms were still improving their solutions when they were stopped, and it also seems as though some of the other ML baselines were on track to surpass the optimality of Sym-NCO; it is important to report if Sym-NCO converges faster but to less optimal solutions.\n\n* **Missing sensitivity analysis to the introduced hyperparameters:** What is the sensitivity of Sym-NCO to $\\alpha$, $\\beta$, $L$, and $K$? In the Appendix, the authors only show results for $\\alpha = \\{0.1, 0.2\\}$, but $\\alpha, \\beta \\in [0, 1]$. In proposing a new training scheme which introduces additional hyperparameters, the authors should have a comprehensive study of the influence of these hyperparameters on training and validation performance across different problems, since this is relevant information for practitioners who may wish to use the work.\n\n* **Missing analysis and discussion of Sym-NCO design choices:** Which factors determine the values of $K$ and $L$ in Sym-NCO? Presumably they change as the size and nature of the CO problem changes, since some CO problems will be difficult to sample trajectories for which sufficiently encapsulate the symmetricities (e.g. some CO problems will have a vast number of possible solutions, and only a few of these might have the same objective function value; will this influence the ability with which Sym-NCO can learn the underlying solution symmetry of the problem? 
Is there a requirement on how many solutions with the same/similar objective value are needed to make learning solution symmetry tractable? Can solution symmetry still be imposed if no solutions with the same objective are found?)\n\n* **Missing analysis of Sym-NCO incurred overhead:** What is the incurred training overhead of sampling the $K \\times L$ solutions needed for the Sym-NCO advantage function baseline?\n\n\n### Other\n\n* **Generality claims:** The paper makes claims (in the title, abstract, and throughout the paper) to be a general neural CO solver training scheme. However, Sym-NCO has been specifically designed for policy-gradient RL and was only applied to REINFORCE. Moreover, it only considers graph-based CO problems which are variants of TSP and, for that matter, only looks at instances which can be projected onto Euclidean space. Do other ML-CO methods consider non-Euclidean problems (e.g. Dai et al. 2017, Barrett et al. 2020 & 2022, Drori et al. 2020) where graphs trained and inferred on can have differing structures? Are the authors claiming that Sym-NCO can be generalised to ML paradigms (supervised and unsupervised learning) and CO problems (non-graph-based and/or non-Euclidean graph-based) other than those considered in the paper? Can the existence of the same symmetries considered in this paper be universally assumed to exist for all CO problems? \n\n* **Euclidean vs. non-Euclidean problem clarification:** Was there a particular reason for only considering Euclidean problems rather than utilising the now commonplace graph neural network architectures such as those used in prior works (mentioned above) which can handle non-Euclidean inputs with varying structures? What are the limitations of only handling Euclidean CO problems in terms of applications and the state-of-the-art literature? Do the authors have to hand-pick problems to meet this Euclidean constraint?\n\n\n### Miscellaneous minor issues\n* Pg. 1 line 2: Introduce DRL-NCO acronym but unclear what the ‘N’ stands for (presumably ‘neural’, but should specify)\n\n* Pg. 4 line 114: Should it not be ‘as the hidden representations of $x$ and $Q(x)$’ rather than ‘$x$ and $P(x)$’?\n\n* Pg. 4 line 119: There seems to be unnecessary extra brackets in the $g(\\cdot)$ term\n\n* Pg. 6 line 197: You list PointerNet without saying which CO problem(s) you applied it to as you did for the other methods.\n\n* Throughout the paper, you introduce many acronyms (e.g. S2V-DQN, AM, POMO, MDAM, etc.) without first stating what the full name of the acronyms are, which you should always give when first introducing a new acronym.\n\n* It seems confusing to refer to the method of Nazari et al. 2018 as ‘RL’ since there are multiple other RL methods such as S2V-DQN.\n\n* Citation [20] seems to be miss-formatted?\n\n### References\n\n1. Hanjun Dai, Elias B. Khalil, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. In Advances in Neural Information Processing Systems, 2017\n2. Thomas Barrett, William Clements, Jakob Foerster, and Alex Lvovsky. Exploratory combinatorial optimization with reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020\n3. Iddo Drori, Anant Kharkar, William R. Sickinger, Brandon Kates, Qiang Ma, Suwen Ge, Eden Dolev, Brenda Dietrich, David P. Williamson, and Madeleine Udell. Learning to solve combinatorial optimization problems on real-world graphs in linear time. arXiv:2006.03750, 2020\n4. 
Andre Hottung, Yeong-Dae Kwon, and Kevin Tierney. Efficient active search for combinatorial optimization problems. International Conference on Learning Representations, 2022.\n5. Thomas D. Barrett, Christopher W. F. Parsonson, and Alexandre Laterre. Learning to solve combinatorial graph partitioning problems via efficient exploration. arXiv:2205.14105, 2022\n6. Irwan Bello, Hieu Pham, Quoc V Le, Mohammad Norouzi, and Samy Bengio. Neural Combinatorial Optimization with Reinforcement Learning. arXiv:1611.09940, 2016\n7. Shenshen Gu and Yue Yang. A Deep Learning Algorithm for the Max-Cut Problem Based on Pointer Network Structure with Supervised Learning and Reinforcement Learning Strategies. Mathematics, 2020\n8. Kenshin Abe, Zijian Xu, Issei Sato, and Masashi Sugiyama. Solving NP-Hard Problems on Graphs by Reinforcement Learning without Domain Knowledge. arXiv:1905.11623, 2019\n9. Zhuwen Li, Qifeng Chen, and Vladlen Koltun. Combinatorial Optimization with Graph Convolutional Networks and Guided Tree Search. In Advances in Neural Information Processing Systems, 2018\n10. MohammadReza Nazari, Afshin Oroojlooy, Lawrence Snyder, and Martin Takac. Reinforcement learning for solving the vehicle routing problem. In Advances in Neural Information Processing Systems, 2018 N/A" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "W_MdHXC4Gc", "FmCL282S6Rz", "gpTtFwt34sb", "tjfPV_fIkw", "czJ_Nqv19cM", "DMDTe586xNL", "j1rjzzhov7", "kwo2irW1MD", "018NFbibjUx", "zYmVM69pJ11", "19v1ZJ1bPA", "MOhzeoPFroy", "D9w-HR-JryC", "Xcz0e2MbDJT", "BbpQrVofgkA", "XyXLAD3rdq6", "rDXFlM_zbcQ", "j1rjzzhov7", "czJ_Nqv19cM", "nips_2022_kHrE2vi5Rvs", "nips_2022_kHrE2vi5Rvs", "nips_2022_kHrE2vi5Rvs", "nips_2022_kHrE2vi5Rvs" ]
nips_2022_PZtIiZ43E2R
List-Decodable Sparse Mean Estimation
Robust mean estimation is one of the most important problems in statistics: given a set of samples in $\mathbb{R}^d$ where an $\alpha$ fraction are drawn from some distribution $D$ and the rest are adversarially corrupted, we aim to estimate the mean of $D$. A surge of recent research interest has been focusing on the list-decodable setting where $\alpha \in (0, \frac12]$, and the goal is to output a finite number of estimates among which at least one approximates the target mean. In this paper, we consider that the underlying distribution $D$ is Gaussian with $k$-sparse mean. Our main contribution is the first polynomial-time algorithm that enjoys sample complexity $O\big(\mathrm{poly}(k, \log d)\big)$, i.e. poly-logarithmic in the dimension. One of our core algorithmic ingredients is using low-degree {\em sparse polynomials} to filter outliers, which may find more applications.
Accept
This paper studies the problem of list-decodable mean estimation under the assumption that the true mean is *sparse* and the clean distribution is Gaussian with identity covariance. In this setting, we are given n data points and a parameter $0<\alpha \leq 1/2$ such that: (1) an unknown $\alpha$-fraction of the dataset consists of iid samples from $N(\mu, I)$, where the target mean $\mu$ is $k$-sparse (i.e., supported on an unknown set of at most $k$ coordinates), and (2) no assumptions are made on the remaining points. The goal is to output a list of $O(1/\alpha)$ many vectors such that with high probability at least one of these vectors is close to $\mu$, in L2 distance. This list-decodable mean estimation problem has been well-studied in the dense case (i.e., when $k = d$ where $d$ is the dimension). The authors give an efficient algorithm for the sparse case achieving significantly better sample complexity than in the dense case. The submitted version of the paper achieves error $O(\alpha^{-1/2})$, relying on degree-$2$ polynomials. On August 8, the authors updated their manuscript, achieving improved error using higher degree polynomials. The proposed algorithm (both the initial version and the updated version) uses the multi-filtering technique of Diakonikolas, Kane, Stewart from STOC'18 [DKS18b]. Their approach crucially builds on the multi-filtering technique of [DKS18b] to a degree that the pseudocode of the algorithm and the analysis itself are very similar. On the other hand, the work includes some non-trivial steps to adapt the multi-filtering technique to the sparse setting. The reviewers eventually agreed that the paper is above the acceptance threshold. The current scores represent the updated scores by the reviewers after the August update of the submission's results. One issue to note here is that the reviewers did not have time to verify (or even read in any detail) the updated version at a technical level; hence, I have low confidence on its correctness. Overall, the paper seems to be slightly above the acceptance threshold, assuming that the updated version of the paper is correct.
train
[ "2U5M9o7uA3u", "nctcBLhkWwq", "kI-b7SFliXw", "2Sxc3h-pO1v", "K-8gCOWBdlFr", "bdY3wuzpdLg", "jhCHB0SX87", "3jydUowCshb", "fyw3cJQeIAh", "Djyzc4Bizay", "ftDFbFtKV19" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. After reading the response and also other reviews, I would like to adjust my scores. However, I still have doubt on the technical novelty and the presentation for it in the manuscript, which in part also pointed out by Reviewer 7URr. Since I need to evaluate the paper as submitted, I think it is not reasonable to raise the score further.", " Hi Reviewer 7URr,\n\nWe are checking in to see if our initial responses have addressed your concerns and whether you have any follow-up questions. We are delighted to respond to them during the author-reviewer discussion period if any.\n\nThe manuscripts are updated with the result that improves the error bound to $\\alpha^{-1/\\ell}$ via degree-$\\ell$ polynomials.", " Hi Reviewer yoWU,\n\nCan you look into our initial response and let us know whether we addressed your concern? Since your rating diverges from the other two reviewers (who rated '6' and '7' with confidence '4'), we believe it is better off to communicate with you during the author-reviewer discussion period to clear your questions. Thanks!", " We would like to thank the reviewer for appreciating our contribution. The rebuttal revision has been uploaded with the results for higher-degree cases. The reviewer can feel free to check it out at any time.", " Thanks for the response. Given that the authors claimed the techniques can support the improved error bound of O(1/alpha^{eps}), I think this paper should be make a good contribution to the robust statistics literature. Namely, showing that the techniques for list-decodable mean estimation can handle the sparse case in an attribute efficient manner. ", " Thank you very much for recognizing the significance of our contribution in obtaining a sparse estimator for the list-decodable statistical problems. We are more than happy to address your concerns with the following responses.\n\n**Q1:** It would be great to have an intuition section detailing the key points of difference from prior work and where sparsity complicates the design of an effective multi-filter.\n\n**Response:** Thank you for pointing out this for us to improve the manuscripts. The challenges introduced by the sparse setting are as follows: 1. Unlike it in the dense setting, the algorithm has access to only poly(k, log d) samples, meaning that the good data is less representative than those in the dense setting (see Def 7). 2. In [DKS18b], the algorithm clusters the points using L_2 distance because the Gaussian samples are naturally concentrated in an L_2 ball of radius sqrt{d}. However, this does not work for the sparse setting: if we were to bound the L_2 distance for every k-sparse support set, the total sample complexity would blow up. This motivated us to appeal to L_infty norm in clustering. 3. The hardness of searching the k-sparse directions complicates the filtering scheme as we are in the dilemma between (a) paying exponential time to iterate through all combinations, and (b) continuing with the current support set (Step 6 of Alg 3) but the algorithm might not filter any samples at all. This is also the necessity of designing Step 7 and proving that it must work given the conditions in Step 3 being violated. We will include these discussions in our future version.\n\n\n**Q2:** The results obtained in the paper are sub-optimal both in terms of recovery guarantees and sample complexity\n\n**Response:** Thank you for the insightful discussion! 
In our original submission, we aimed to provide the first attribute-efficient algorithm that bears with a sample complexity in poly(k, log d). For the recovery guarantees, a natural extension of our analysis to the degree-\\ell polynomial techniques can further improve the estimation error to alpha^{-1/\\ell} (for which we have some guarantees included in the revision to be uploaded). As for the sample complexity, we notice that even in the low-dimensional setting, the seminal work of [DKS18] has a sample complexity O(n^4). Given that our algorithm is computationally efficient, the traditional bound O(k^2) was further blowed up due to the search of k^2-sparse directions. We believe that this bound can be further improved if involving new techniques (e.g. a tighter concentration bound), which we are happy to take as our future direction. ", " We thank the reviewer for the detailed review of our paper and for endorsing its originality and clarity! Indeed, the main goal of our paper is to design an attribute-efficient algorithm for the list-decodable mean estimation problem in a high-dimensional setting. We didn't focus on the optimality of the error rate. However, we did consider this problem after the submission and derived an improved error bound of O(1/alpha^epsilon) by using degree-1/epsilon polynomials in filtering. We are happy to include this result in our revision and are happy to take any follow-up questions.\n", " Thank you very much for your valuable comments!\t\n\n**Q1:** The novelty of the algorithm is not clearly explained. \n\n**Response:** The main contribution of our work is to propose the first polynomial-time algorithm that enjoys sample complexity poly-log in d for the list-decodable mean estimation problem. It is highly non-trivial to design attribute-efficient algorithms in the robust mean estimation regime. Even for the milder corruption setting where alpha>1/2, only until very recently has some guarantees established [BDLS17, DKK+19, CDK+21]. Before this work, all known algorithms for list-decodable learning have sample complexity polynomial in d [CSV17, DKS18b, CMY20, DKK20a, DKK+21a]. The main technical novelty falls in leveraging sparse harmonic polynomials to efficiently filter the outliers. This is because the algorithm is given only poly(k, log d) samples and the estimation thus has an error bound of 1/poly(k, log d) (See Definition 7), which is in stark contrast with all prior works. \n\nIn addition, there are several technical challenges when extending the literature to the sparse setting: 1. Searching for the sparse direction where the samples' behavior the most unlike a Gaussian is known as an NP-hard problem. A heuristic algorithm exists when one considers directly applying the algorithm of [DKS18b] to all k-sparse support sets. However, the runtime, sample complexity, and the obtained list size would be exponential, i.e. poly(d^k). 2. When filtering the outliers in list-decodable setting, if the polynomial has degree >1, the traditional techniques fail immediately because the performance of these polynomials relies heavily on how good the estimated mean is (which usually is not good enough as alpha<=1/2). 
Here, the Hermite basis is the key to filtering.\n\nWe encourage the reviewer to check out lines 75-108 and also the technical challenges recognized by Reviewer zY2p.\n\n**Q2:** It is hard to grasp the improvement since it lacks a comparison with existing algorithms, especially numerical experiments.\n\n**Response:** The most distinctive property of our algorithm relative to existing ones is that it works in the high-dimensional setting, where n<<d. As we study the problem from a theoretical perspective and have established provable performance guarantees, we believe experiments are not required.", " The authors consider a mean-estimation problem for a sample with (adversarially) corrupted entries. Assuming that the true mean is sparse, they provide an approximate algorithm, poly-logarithmic in dimension. The proof is based on the use of sparse polynomials. Strengths: A concrete, efficient algorithm for mean estimation is provided. The manuscript is generally well-written.\n\nWeaknesses: The novelty of the algorithm is not clearly explained. It is hard to grasp the improvement since it lacks a comparison with existing algorithms, especially numerical experiments. It is desirable to explain more about the novelty of the current work. This might include heuristic ideas about why an efficient algorithm such as the one proposed in the manuscript is possible when the entries are corrupted.\nIs the key idea to use the Hermite basis? Yes.", " The setup for list-decoding of mean estimation (no sparsity yet) is the following:\n\nThere is an unknown d-dimensional Gaussian G with mean mu and identity covariance, and an algorithm receives a set of n samples x_1,..., x_n with the promise that alpha * n of the samples are drawn from the unknown Gaussian. The task is to output a list of vectors hat{mu}_1, ..., hat{mu}_l such that one of them is close to mu.\n\nThe notable aspect of this setting is that alpha may be significantly smaller than 1/2, so that *most* of the dataset is actually adversarially corrupted. It turns out that the best one can hope for is to output this list of candidate means hat{mu}_1,..., hat{mu}_l, with l = O(1/alpha), such that at least one of them is close to mu. If we want our algorithm to be computationally efficient (i.e., run in time polynomial in the input -- which is sample size * d), then \"close\" to mu actually depends on alpha. \n\nThe first such results along these lines were from [Charikar, Steinhardt, and Valiant (CSV) '17], who got ~O(1/sqrt{alpha}) closeness, and then [Diakonikolas, Kane, Stewart (DKS) '18] improved the closeness parameter to O(1/alpha^{eps}), where the running time of the algorithm is a polynomial of degree O(1/eps). In this setup, the sample complexity is polynomial in d, so the resulting algorithms run in time which is polynomial in d.\n\nThis paper considers a very natural scenario, where d is very large, such that polynomial-in-d sample complexity is unacceptable. In this case, one needs an additional assumption on the underlying distribution, and the authors consider the case that mu is k-sparse. This is a very typical assumption to make, and the parametrization in terms of k leads to a sample complexity which is linear in k but logarithmic in d. Hence, the question is whether these list-decodable mean estimation algorithms can be made to run with only poly(k log d) samples. \n\nThe main result of this work is an algorithm for achieving this. 
For any alpha, an algorithm receives poly(k log d) samples, with the promise that an alpha-fraction of the samples come from an identity covariance Gaussian with mean mu which is k-sparse. The algorithm runs in time polynomial in the sample complexity and d, and outputs a list of O(1/alpha) candidates hat{mu}_1,..., hat{mu}_l, one of which is within O(1/sqrt{alpha}) of mu.\n\nAs the authors point out, it may be possible to use techniques from [DKS] to improve the error to O(1/alpha^{eps}) with algorithms whose running time is a polynomial of degree O(1/eps). \n\nThe approach that this work takes is the 'filtering' approach, which, at a high level, proceeds in the following way. First, one comes up with a candidate list of subsets of samples. In each step, an algorithm will argue that either the empirical mean of the subset of samples is good enough for estimation, or else it can remove some samples from the set such that the number of 'false' samples (outliers) removed is much larger than the number of true samples. While the paper follows this approach, there are significant technical challenges in adapting the tools to this setting. Since the number of samples that we have is only poly(k log d), anything which we use samples to estimate will have an error on the order of 1/poly(k log d). The paper considers finding directions to filter outliers by only considering sparse polynomials. Another challenge is that one cannot iterate through all choices of k coordinates in [d], which causes additional technical difficulties.\n Strengths: \n\nThis is a natural problem and follows a line of work on robust algorithmic statistics. The work is original and clear. It is significant that the techniques developed for 'dense' Gaussians can be adapted for a poly(k log d) dependence when including sparsity.\n\nWeaknesses:\n\nOne may consider that this paper leaves the question of 1/alpha^{eps}, involving techniques in [DKS], to future work. In that sense, the paper may be quickly improved.\n\n**** This has since been incorporated! While I cannot verify correctness of the change (I am not an expert in the area), I think that the contribution now contains all elements of a solid paper. The problem is natural, timely, and available avenues have been explored. Therefore, I am changing my score to accept. I don't think I have any particular questions. The limitations in the theorems are thoroughly discussed.", " \nThe field of algorithmic robust statistics is concerned with the development of efficient algorithms for statistical estimation problems in settings where the observed data is extremely (often adversarially) noisy. For the canonical problem of mean estimation, a single grossly corrupted data point can completely invalidate the performance of the sample average as a natural estimate of the population mean. In settings usually considered in this domain, one assumes that the algorithm observes $n$ data points generated in the following manner:\n\n1. First $\alpha n$ ``good'' data points are generated from the true underlying distribution $D$.\n2. An adversary then inspects the generated samples and adds an arbitrary set of $(1 - \alpha) n$ points to the dataset. Note that the algorithm is given no knowledge of where the corrupted data points are. \n\nRestricting to the specific problem of mean estimation in high dimensions, where the data set $X = \left\\{ x_i \right\\}_{i = 1}^n \subset \mathbb{R}^d$, the goal is to recover the mean of the distribution $D$ generating the good data points. 
This task is made complicated by the fact that the algorithm does not actually know which points these are, and natural approaches such as distance-based thresholding yield sub-optimal results. In the standard setting when $\alpha \in (1/2, 1]$, approximate identification of $\mu$ is possible with error (in Euclidean norm) ranging between $\sqrt{1 - \alpha}$ and $\alpha$, and computationally and statistically efficient estimators have been designed. In the more challenging list decoding setting when $\alpha < 1/2$, even approximate identification is not possible and one instead returns a list of $1 / \alpha$ estimates, one of which is guaranteed to be close to $\mu$. Note that a list of this size is necessary, as can be seen by considering the special case where the data is generated from a mixture of $1 / \alpha$ well-behaved distributions. Efficient estimators have also been proposed in this setting with the guarantee that at least one of the elements in the list is close to $\mu$. The degree of closeness is $1 / \sqrt{\alpha}$ when only second-moment assumptions are placed on $D$, with improvements possible when stronger restrictions are placed on the distribution. Typically, these involve higher-order moments of the distribution and the recovery error correspondingly improves to $(1 / \alpha)^{O (1 / t)}$ if $t$ moments are available. \n\nThis paper considers the list-decodable setting in the sparse regime where $\mu$ is assumed to be $k$-sparse and $D$ is an isotropic Gaussian centered at $\mu$. In line with prior work on sparse estimation, the goal of the paper is to build an estimator whose sample complexity depends very mildly on the ambient dimension. All prior work incurs sample complexity at least $d / \alpha$, which, while optimal under no additional assumptions, may be too large when the ambient dimension is very large. The paper constructs a polynomial-time estimator which achieves sample complexity $\mathrm{poly} (k, 1 / \alpha, \log d)$ with recovery error $1 / \sqrt{\alpha}$. While the recovery error and sample complexity are sub-optimal (one would expect information-theoretic recovery error arbitrarily close to $\sqrt{\log (1 / \alpha)}$ and sample complexity $O(k \log (d) / \alpha)$), the number of samples is nearly independent of $d$. \n\nThe algorithm is based on the ``multi-filter'' framework for list-decodable estimators. Intuitively speaking, one starts with a candidate set of good data points and infers one of the following:\n\n- The dataset is well behaved (typically in terms of its moments) and the empirical mean is a good estimate\n- Alternatively, the dataset is poorly behaved but a certificate of this fact (usually a direction along which moments are not well concentrated) can be used to refine (remove bad points from) the dataset. One then constructs one (when $\alpha > 1/2$) or more (when $\alpha < 1/2$) subsets of the original dataset which are better behaved.\n\nIn the multi-filter approach, care must be taken to ensure that we do not create too many datasets, or, more accurately, that the sum of the number of points at any level of this iterative process remains bounded. At the conclusion of this process, the empirical means of all the refined datasets are returned as candidates in the output list. In the standard list-decoding setting, the certificates are unit vectors on the sphere and signify directions along which moments (first and second) deviate significantly from their expected behavior. 
However, in the sparse setting, it suffices to check only sparse directions -- unit vectors which are $k$-sparse. Since searching for sparse violating directions is typically a hard problem, the paper instead searches for large entries in $\hat{\Sigma} - I$ and uses the $k^2$ largest entries to construct an appropriate certificate.\n\n**** POST REBUTTAL UPDATE ****\n\nThe authors have since updated the manuscript with improved results when higher-order moments are available, at the expense of increased computational and statistical complexity. I've updated my score to reflect these changes. The design of robust algorithms for sparse estimation is an important problem, and the paper makes important progress in this direction, conceptually matching the types of improvements that were previously obtained for sparse estimation. My main concern is the lack of technical novelty in the design of the estimator. The algorithmic approach and its subsequent analysis are based on a well-established framework and do not seem significantly novel. For instance, the key lemma (Lemma 12) controlling the behavior of the polynomials used to certify violating directions draws heavily from analogous results in [DKS 18]. Furthermore, the results obtained in the paper are sub-optimal both in terms of recovery guarantees and sample complexity, even accounting for the conjectured statistical-computational gap ($k$ vs. $k^2$) for sparse estimation. Despite these drawbacks, the results in the paper are interesting and relevant and would be of interest to the theoretical machine learning and algorithms communities. Some explanation of the main technical contributions of the paper would be helpful with regard to prior work employing the multi-filter ([DKS 18] for instance). It would be great to have an intuition section detailing the key points of difference from prior work and where sparsity complicates the design of an effective multi-filter. Yes" ]
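To make the filtering recipe sketched in the reviews above concrete, here is a minimal illustration of one iteration of a filter for list-decodable mean estimation in the dense (non-sparse) setting. This is a sketch of the general idea only, not the paper's Algorithm 3: the certification threshold `slack`, the quantile-based pruning rule, and the omission of both the splitting step and the sparse-direction search are all simplifying assumptions of mine.

```python
import numpy as np

def filter_step(X, alpha, slack=4.0):
    """One simplified filtering iteration: either certify the empirical
    mean as a list candidate, or prune along the worst direction."""
    mu_hat = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    # Direction along which the samples look least like N(mu, I):
    eigvals, eigvecs = np.linalg.eigh(cov)
    v, var_v = eigvecs[:, -1], eigvals[-1]
    if var_v <= slack:
        # Second moments are consistent with an identity-covariance
        # Gaussian, so output the empirical mean as a candidate.
        return "candidate", mu_hat
    # Otherwise discard the most extreme projections along v. A real
    # multi-filter instead removes points with probability tied to an
    # outlier score (or splits the set in two), with a guarantee that
    # more outliers than inliers are removed.
    proj = np.abs((X - mu_hat) @ v)
    keep = proj <= np.quantile(proj, 1.0 - alpha / 2)
    return "filtered", X[keep]

# Toy run: a 0.3 fraction of inliers around mu = 5*e_1, the rest noise.
rng = np.random.default_rng(1)
mu = np.zeros(10); mu[0] = 5.0
X = np.vstack([rng.normal(size=(300, 10)) + mu,
               rng.normal(scale=8.0, size=(700, 10))])
status, out = filter_step(X, alpha=0.3)
print(status, out.shape if status == "filtered" else np.round(out, 2))
```

In the sparse regime the rebuttal describes, the expensive step is exactly the eigendecomposition above: restricting attention to $k$-sparse directions without enumerating all support sets is what the paper's large-entries-of-$\hat{\Sigma} - I$ search and sparse harmonic polynomials are for.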
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "3jydUowCshb", "bdY3wuzpdLg", "fyw3cJQeIAh", "K-8gCOWBdlFr", "jhCHB0SX87", "ftDFbFtKV19", "Djyzc4Bizay", "fyw3cJQeIAh", "nips_2022_PZtIiZ43E2R", "nips_2022_PZtIiZ43E2R", "nips_2022_PZtIiZ43E2R" ]
nips_2022_vK53GLZJes8
The Pitfalls of Regularization in Off-Policy TD Learning
Temporal Difference (TD) learning is ubiquitous in reinforcement learning, where it is often combined with off-policy sampling and function approximation. Unfortunately, learning with this combination (known as the deadly triad) exhibits instability and unbounded error. To account for this, modern Reinforcement Learning methods often implicitly (or sometimes explicitly) assume that regularization is sufficient to mitigate the problem in practice; indeed, the standard deadly triad examples from the literature can be ``fixed'' via proper regularization. In this paper, we introduce a series of new counterexamples to show that the instability and unbounded error of TD methods is not solved by regularization. We demonstrate that, in the off-policy setting with linear function approximation, TD methods can fail to learn a non-trivial value function under any amount of regularization; we further show that regularization can induce divergence under common conditions; and we show that one of the most promising methods to mitigate this divergence (Emphatic TD algorithms) may also diverge under regularization. We further demonstrate such divergence when using neural networks as function approximators. Thus, we argue that the role of regularization in TD methods needs to be reconsidered, given that it is insufficient to prevent divergence and may itself introduce instability. There needs to be much more care in the practical and theoretical application of regularization to Reinforcement Learning methods.
Accept
This paper presents a counterexample-driven analysis of regularization in TD learning with function approximation. Despite the paper's simplicity, the reviewers unanimously thought there was a good contribution being made here, and I agree. Highlights include a clarity of presentation and new insights into what is known as the deadly triad. The reviewers generally agreed that these results are relevant to deep RL today, but would have appreciated more forward guidance.
val
[ "FKKbT7FLuK0", "w-WAZ0KhMd", "8SA5Wlco-VD7", "KTA6iAqWO8d", "Y_6Pb03oYWx", "yhO1N-HI9p", "hY47VQ99s2j", "balME3sPHm", "5dhVt5HWfFj", "RtUBEJYenPf", "wPo-yeBbV89", "RwZjvughZMA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I have edited my review/score upward after reading your clarifications. ", " Thank you for your answers! The authors have addressed my questions and I appreciate the additional experiments the authors provided.", " I thank the authors for their additional experiments and other updates to the paper. The newly added discussions help to acknowledge and contextualize the limitations of the findings within modern RL (neural networks, adaptive regularization techniques, etc) which I think makes the message of the paper stronger and more useful to readers overall. My score remains the same, and I continue to recommend acceptance of this interesting work.", " Sorry, I just realized I forgot to actually include the reference I was talking about. It sounds like you found it already, but here's the paper I was talking about just in case:\n\nDu, S. S., Gidel, G., Jordan, M. I., & Li, C. J. (2022). Optimal Extragradient-Based Bilinearly-Coupled Saddle-Point Optimization.\n", " Thank you for the detailed review! We appreciate the attention and effort that went in to it.\n\nTo answer your questions:\n\n1. We don't have any broader theoretical results on more generic classes of scenario where these pathologies exists. The best intuition we can offer is that it tends to happen when the $A$-matrix has at least one negative eigenvalue and $\\mu$ is far from the on-policy distribution.\n\n2. The geometric intuition unfortunately doesn't extend cleanly to the NN case -- the relationship between the parameters and the output is nonlinear, and so the \"non-vacuous\" region no longer has a nice, convex shape that provides an intuitive explanation.\n \n3. We ran a few more experiments on two-layer networks with a range of parameters covering both under- and over-parameterization regimes. As expected, when under-regularized, having more parameters reduces error. We also show that the behavior around the small-$\\eta$ divergence does not change, even when the hidden layer is stretched to 64 parameters. (For comparison, the MDP only has 9 states.) We include this in the new Appendix C.1, with plots in Figure 10.\n\nIn addition to your questions, you raise an important point about regularization: its not all doom-and-gloom, and it does have an important part in training RL algorithms. Our paper was intended to point out that regularization behaves differently (and counterintuitively!) with bootstraping than in fully-supervised contexts, and to emphasize that regularization should be treated with caution. We hope to address a pattern in RL where regularization is used without special attention to the possible errors and failure modes it may introduce. We've added this to the discussion and conclusion, and also mention that trying different regularization parameters, adaptive regularization schemes, or even just changing regularization schedules can all be used to mitigate some of the problems we identify.\n\nIn Figure 5b, the increase in error is strong evidence of small-eta divergence in the NN case. To make this more apparent, we evaluated the same MDP at a different off-policy distribution, which is included in the updated appendix as Figure 9. At this other off-policy distribution, the optimal $\\eta$ is about $10^{-3}$, and divergence occurs _after_ that, which should address the concern that $\\eta \\approx 10^{-3}$ is negligibly small. 
Interestingly, these two examples are mutually incompatible (that is, there is no single $\eta$ for which both examples are simultaneously not vacuous), which further emphasizes the need for either adaptive regularization or even just trying different parameters for each trained model.\n\nTo answer the more general question about how much this applies to general off-policy RL, we don't know and are also curious about this. The offline/batch RL literature contains many examples of how on-policy RL algorithms diverge in the face of off-policy sampling, so the algorithms are vulnerable. There is some evidence to suggest that RL algorithms that maintain transition buffers (e.g. SAC/MBPO) yield better performance if out-of-distribution \"stale\" transitions are evicted from the buffer. We added this discussion to the new Appendix C.2, and are considering including it in Section 3.4.\n\n(We'll also fix the minor typos/grammatical errors you flagged. Thank you!)", " Thank you for the time and work in writing your review!\n\nWe agree that people don't always necessarily believe that regularization solves the deadly triad. Instead, our paper comes from the observation that a common pattern in this area is to use regularization without special attention to the possible errors it may introduce. In that sense, our paper points out that regularization behaves differently (and counterintuitively!) with bootstrapping than in fully-supervised contexts, emphasizes that regularization is not a costless decision, and encourages caution in using it.\n\nOur paper focuses on L2 regularization because it is by far the most widely used, and the kinds of errors we identify (vacuous models, small-eta divergence, etc.) all analogously apply to L1 regularization.\n\nTo answer your questions: A vacuous model is one that does no better than the trivial zero solution regardless of the amount of regularization. We added extra text in the introduction to make this clear. The key takeaway from Section 3.4 is that the problems we've identified so far are not mere artifacts of linear approximation or caused by a unique pathological basis. Instead, these also apply when the basis is not fixed (i.e. the NN case).\n\nThe more general question is whether these failure modes apply to modern RL algorithms on benchmark and real-world tasks. We don't know if that happens, and are also very curious about that. The various ingredients are in place: we know (thanks to the offline/batch RL literature) that the algorithms are vulnerable to this on static datasets, and there is some evidence to suggest that RL algorithms that maintain transition buffers (e.g. SAC/MBPO) suffer if the transition buffer is too large (suggesting that the presence of out-of-distribution \"stale\" transitions decreases performance). However, it's not clear to us how we can prove this is the failure mode. (We included this in the new Appendix C.2 and may later move it to Section 3.4.)\n\n($p$ in Fig. 5 is described by Eqn. 40 in the appendix; we'll clarify that and fix L230. Thank you!)\n", " We appreciate the time and effort that went into your review!\n\nThank you for pointing out Du et al. 2022 -- we'll read through the paper and update L266 to reflect this new information.\n\nThe conclusions in our Section 3.4 should apply to any Emphatic-TD-based algorithm that learns the emphasis function (1) via TD/bootstrapping, and (2) uses regularization. 
COF-PAC was chosen because it is an exemplar of these: it uses \"time-reversed\" TD to estimate the emphasis, and it (like almost every other work that offers performance bounds) uses regularization to ensure that the models converge despite slowly changing sampling distributions.\n\n(We will also correct L158, thanks!) ", " Thank you for the review! We appreciate the time and attention it took to review our paper.\n\nTo be fully transparent, as far as current deep RL algorithms are concerned, we don't fully know to what extent they suffer from this failure mode during regular training. We know that the algorithms themselves are indeed vulnerable to poor off-policy performance, but we don't fully know to what extent this particular mode is the cause of these failures vs. other factors in distribution shift. This is an interesting and relevant question, and we're also curious to see if this is the case. We've included some discussion on this topic in the new Appendix C.2.\n\nTo answer your questions regarding the trade-offs on the practical benefits of regularization and questions on overparameterization, we ran a few more experiments on two-layer networks with a range of parameters covering both under- and over-parameterization regimes. As expected, when under-regularized, having more parameters reduces error. We also show that the behavior around the small-$\eta$ divergence does not change, even when the hidden layer reaches 64 parameters. We included plots of this as Figure 10, and discussion in the new Appendix C.1.\n\nTo answer your questions: We don't know of any solution that is guaranteed to stabilize off-policy training. In the longer term, insights from the credit assignment literature may lead to Emphatic algorithms that are provably robust, or other advances (such as the work by Ray Jiang et al. \[1]) may close this gap.\n\nFor now, even though regularization can fail in the ways we illustrate, it remains a reasonable method that (usually) offers a fair tradeoff -- as long as we are careful to check that we are not running afoul of the failure modes we explain in the paper. Based on our experiments, some sort of adaptive regularization scheme or simply trying different values of $\eta$ spanning a few orders of magnitude could ameliorate some of the problems we highlight. (We've added this point to the conclusion.)\n\n\[1] Jiang, Ray, et al. (2020). Learning expected emphatic traces for deep RL. \n\n(We'll also fix the typos and clarify the unclear points in our writing. Thanks for pointing these out!)", " This submission is a theoretical paper which investigates a common belief in the community around the problem of the deadly triad: that regularization prevents instability and divergence of TD methods with function approximation in an off-policy setting. This paper provides four counter-examples showing that this is not the case and that l2 regularization can lead to instability and divergence in the context of function approximation and even with neural networks. The paper also shows that emphatic methods can suffer from these divergence issues. This paper is an invitation to reconsider the use of regularization in off-policy TD learning.\n **Contribution**\nThe paper provides interesting new counter-examples showing some issues of TD methods under regularization. I also appreciated the explanatory Figures 2, 3, 4, 5 illustrating the counter-examples from the text. 
\nAs mentioned by the submission, the fact that off-policy TD learning can be unstable or have unbounded error when it converges is already known in the literature, but the authors provide a simpler example. The other insights and examples from sections 3.2, 3.3 and 3.4 are new as far as I am aware.\n\nIt would be interesting to understand whether these counter-examples could have an impact on current RL algorithms that have been used on common benchmarks, i.e., are these counter-examples extreme situations or can they explain empirical results from past papers?\n\nIn particular, the analysis from section 3.4 is limited to two-layer neural networks and it could be interesting to see if the insights would hold for overparametrized architectures too. This shouldn't prevent acceptance of the paper though. \nOne weakness I see is that, given that the dynamics of SGD with off-policy TD are unknown, it is hard to conclude that the issues highlighted by the paper are a practical issue for current deep RL algorithms. This aspect is briefly mentioned by the authors line 246.\n\n**Organization and Clarity**\nThe paper is overall well-written and well-organized.\nL70 \"error of the zero model\": at this point in the text it is not clear what this means.\nL 122 inconsistencies \\Pi_\\mu vs \\Pi_D: does the representation error depend on the distribution \\mu? If so, this is the error in the learnt value function, and what is the distribution under which the representation error is bounded?\n\n**Related work**\nThe related work seems overall well covered.\n\n**Typos**:\nL 48: as it differently than in supervised settings,\nL107-109: inconsistencies for real numbers \\mathcal{R} , \\mathbb{R}\nL109 : [0..1]\nL253: some training methods / a training method\nL376: to fails What are promising solutions to address the issues highlighted in the paper? While regularization can be harmful as the paper shows, isn't there a tradeoff, given it can help in other examples under the same off-policy setting? There is no dedicated section on limitations. I would say this work highlights issues in the off-policy setting but is limited in terms of practical solutions.\n\n\nPost-rebuttal update:\n\nI want to thank the authors for engaging in this discussion. After carefully reading other reviews, I will update my score to 7 and recommend acceptance.", " It is well-known that TD methods diverge when used in\nconjunction with value function approximation and off-policy learning.\nSeveral recent works have proposed the use of additional regularization to\nprevent divergence in this setting. This paper provides several counterexamples\ndemonstrating that adding regularization to linear TD methods can lead to\nvacuous value estimates, and that regularization can in fact cause divergence\nin certain settings. Counterexamples are also given for a popular emphatic TD algorithm,\nand for non-linear TD. 
Taken together, the results suggest that the role of regularization\nbeing proposed by recent works needs reconsideration.\n Overall, I thought the paper was excellent --- easily the best I reviewed this round of reviews --- and recommend acceptance.\nThe results are novel, thorough, and clearly relevant to\nongoing discourse in the community; the\npaper is well-written and the explanations were clear.\nI really only have minor comments and questions, which I provide below.\n\n### Minor Comments\n- Line 266: \"A separate primal-dual saddle point method has also been adapted to $\\ell_2$ regularization,\n but error bounds at convergence are not yet known\"\n - I think this may no longer be true; the results in Du et al. (2022) capture\n the regularized MSPBE as a special case. This paper was posted after the NeurIPS deadline I believe,\n but I think it is worth revising this sentence. I don't think the existence of such bounds for\n the $\\ell_2$ regularized MSPBE takes anything away from the results presented here anyway\n- Line 158, typo: $\\xi\\in[0..1]$ should read $\\xi\\in[0,1]$\n - Emphatic approaches:\n - The conclusions in section 3.3 hold only for a specific\n algorithm, COF-PAC, which simultaneously learns both a\n value model and emphasis model, but do not necessarily extend to\n emphatic methods in general; am I understanding this correctly?\n - Do you expect similar conclusions could hold under more general assumptions\n on the emphasis model? Or is the core problem specifically the *learning* of the emphasis model\n in parallel with the value model?\n yes", " This paper proposes and analyzes a set of simple counter-examples to show that l2 regularization cannot mitigate the problems faced by value estimation in the deadly triad setting consisting of off-policy learning under function approximation with bootstrapping. The authors also analyze emphatic TD approaches and show that l2 regularization on the weight norm is insufficient.\n Strengths:\n\n* This is a well written paper with simple but non-trivial examples that demonstrate concretely that l2 regularization cannot solve the convergence issues highlighted in the \"deadly triad\" problem in off-policy learning under function approximation.\n\n* The analysis over a range of regularization strengths and comparison against \"vacuous bounds\" helps understand when prior bounds like Eq (9) are not informative.\n\n* The analysis showing how the off-policy distributions can influence the optimum regularization, specifically causing different non-overlapping ranges of $\eta$ to be necessary for non-vacuous performance, seems like a helpful qualitative insight even though it arises out of very specific counter-examples. \n\nWeaknesses:\n\n* I'm not sure that there was a widespread belief that regularization necessarily solves the deadly triad, so in that sense the demonstrated phenomenon is not necessarily surprising even if valuable\n\n* The paper is presented as a general study of regularization though the results are specific to l2 weight regularization. * What is the precise definition of a \"vacuous model\"? Anything that achieves an error worse than the trivial zero-prediction solution? An alternate threshold for vacuity could be the variance of the value function as opposed to the l2 norm. Can the authors make this a bit more clear, as this term is repeatedly used throughout the paper across multiple sections? \n\n* The takeaways from Section 3.4 are a bit unclear. 
It appears to mainly imply that there are situations where the resulting solution from TD is not useful. What could be more interesting is some evidence that these phenomena can occur with neural nets outside of the specific counter-examples considered. \n\n* What is the distribution parameter p in Fig 5(a)? \n\n* Nit: small-eta in L230. Adequate", " Off-policy RL can be divergent when function approximation and bootstrapping are utilized, and it is commonly believed that $l^2$ regularization prevents divergence in this case. The authors show that there are problems where linear models with any amount of $l^2$ regularization produce worse error than the trivial zero solution does. An analytical and geometric argument is presented, and experiments on linear and nonlinear models demonstrate specific examples in which $l^2$ regularization can increase value-estimation error or lead to divergence. **Strengths**\n- The paper addresses a fundamental issue in off-policy RL with function approximation, with the potential for significant practical implications. I expect these findings will be of interest to a wide audience of RL researchers.\n- The paper is very well written. There are lots of motivating examples presented in a logical, cohesive flow---almost like reading a chapter of a textbook.\n- The geometric interpretation in Figure 2a was helpful for developing an intuition for the problem addressed by the authors.\n\n**Weaknesses**\n- Overall, I finished reading the paper feeling a lack of clear instruction about "the pitfalls of regularization." The authors' general takeaway seems to be that regularization is bad and should be avoided. In reality, the issue seems to be far more nuanced than that. The paper would benefit from further discussion about when regularization is good, too, and what the practical trade-offs may be. For instance, it was not clear to me how pertinent the specific counterexamples in the paper are to off-policy RL in general.\n- I am not totally convinced by the neural network (NN) experiment in Example 4, Figure 5b. The authors claim that their divergence results in the linear approximation case carry over to nonlinear approximation as well, since there are some $\eta$-values that increase the error of the learned NN value function. In reality, I think this conclusion is based somewhat on the misleading x-axis log scale, and the fact that the y-axis does not start at the origin. To me, it looks like, below $\eta=10^{-3}$, there is effectively no regularization, and the error is a plateau. If the x-axis were linear, then this region would be essentially negligible. Then, at $\eta=10^{-3}$, after a short uptick in error, the error starts to decrease rapidly for about two orders of magnitude, before the regularization becomes too strong and starts to worsen again. The takeaway I ultimately got from this graph is that a moderate amount of regularization is actually quite helpful!\n\n**Minor Edits**\n- Line 48: The grammar in this sentence sounds incorrect to me.\n- Line 96: Typo in "regularization."\n- Line 129: Don't end a paper section with a colon.\n- Lines 145, 149: I would change the notation "trajectory (1)," since it looks like you are referencing an equation there.\n- Line 195: I think it should be "reweighting," not "reweighing."\n 1. In Example 1, how can you guarantee that $\hat{w}^Tw^*(\eta) \leq 0$, $\forall~\eta$? Do you have any theoretical results for when a distribution $\mu$ that causes this either exists or does not exist?\n1. 
The geometric interpretation of a vacuous model (a trajectory within the halfplane tangent to the $l^2$ ball) makes sense for linear models, but I don't see why it would necessarily extend to nonlinear (NN) models. Could you elaborate on why you think these results are transferable?\n1. In Figure 5b, what would happen if you used an NN with more than just 3 neurons in each layer? Would you still expect to see an increase in error for some $\eta$-values?\n N/A" ]
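The small-$\eta$-versus-vacuousness tension that runs through the reviews above can be reproduced in a few lines. The sketch below is my own illustration on the classic "w to 2w" off-policy example (a single observed transition whose successor feature doubles the current one), not one of the paper's counterexamples; the discount, step size, number of updates, and the three $\eta$ values are illustrative choices. With $\gamma = 0.99$, each expected regularized TD(0) update scales $w$ by $1 + \text{lr}\cdot(2\gamma - 1 - \eta)$, so any $\eta$ below $2\gamma - 1 = 0.98$ still diverges, and an $\eta$ large enough to stabilize training drives $w$ toward the trivial zero model.

```python
gamma, lr, steps = 0.99, 0.1, 200
for eta in (0.0, 0.5, 1.5):                   # l2 strengths: none, moderate, heavy
    w = 1.0
    for _ in range(steps):
        phi, phi_next, r = 1.0, 2.0, 0.0      # the single observed transition
        td_error = r + gamma * phi_next * w - phi * w
        w += lr * (td_error * phi - eta * w)  # regularized semi-gradient TD(0)
    # eta = 0.0 and 0.5 blow up; eta = 1.5 contracts w toward the zero model.
    print(f"eta={eta}: w after {steps} updates = {w:.3g}")
```

This toy only shows that small $\eta$ need not help; the paper's counterexamples go further, exhibiting cases where regularization itself induces the divergence.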
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "yhO1N-HI9p", "balME3sPHm", "Y_6Pb03oYWx", "hY47VQ99s2j", "RwZjvughZMA", "wPo-yeBbV89", "RtUBEJYenPf", "5dhVt5HWfFj", "nips_2022_vK53GLZJes8", "nips_2022_vK53GLZJes8", "nips_2022_vK53GLZJes8", "nips_2022_vK53GLZJes8" ]
nips_2022_mvbr8A_eY2n
Optimal Efficiency-Envy Trade-Off via Optimal Transport
We consider the problem of allocating a distribution of items to $n$ recipients where each recipient has to be allocated a fixed, pre-specified fraction of all items, while ensuring that each recipient does not experience too much envy. We show that this problem can be formulated as a variant of the semi-discrete optimal transport (OT) problem, whose solution structure in this case has a concise representation and a simple geometric interpretation. Unlike existing literature that treats envy-freeness as a hard constraint, our formulation allows us to \emph{optimally} trade off efficiency and envy continuously. Additionally, we study the statistical properties of the space of our OT based allocation policies by showing a polynomial bound on the number of samples needed to approximate the optimal solution from samples. Our approach is suitable for large-scale fair allocation problems such as the blood donation matching problem, and we show numerically that it performs well on a prior realistic data simulator.
Accept
Executive summary: The problem considered in this paper is as follows: There is a distribution over items X \subseteq [0,\bar{x}]^n where x_i denotes the value of the item to recipient i. There are also matching constraints {p_i}_{i \in N}, which require that each agent be matched a p_i fraction of the time. The goal is to maximize the sum of recipient utilities subject to the matching probability constraints, while also ensuring that no recipient i envies another recipient by more than a factor \gamma_i. It is shown that this problem can be solved as a semi-discrete optimal transport problem. They also give a stochastic optimization algorithm which converges at rate O(1/sqrt(T)), and a PAC-style sample complexity result (showing that with O(n/eps^2) samples an eps-approximate solution can be found with high probability). Discussion and recommendation: This paper is a bit out of my comfort zone, so I am mostly relying on the reviews, which are rather positive and supportive of the paper. The connection to optimal transport is appreciated, and the approximation results (while rather standard) seem to find their audience as well. Weak accept.
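In symbols, and paraphrasing the reviews below (which write the envy tolerances as $\lambda_i$ where the meta-review writes $\gamma_i$, and which state the matching constraint as an inequality), the program is roughly

$$\max_{\pi}\ \mathbb{E}_{X}\Big[\sum_{i=1}^{n} X_i\,\pi(y_i \mid X)\Big] \quad \text{s.t.} \quad \mathbb{E}_{X}\big[\pi(y_i \mid X)\big] \ge p_i \ \ \text{and} \ \ \max_{j}\, u_i(\pi_j) - u_i(\pi_i) \le \lambda_i \quad \forall i,$$

where $u_i(\pi_j) = \mathbb{E}_X[X_i\,\pi(y_j \mid X)]$ is recipient $i$'s expected utility for $j$'s allocation; the paper's exact normalization may differ.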
train
[ "6e5ZbDh3kB", "D4sFqM63OCF", "DrqrED4_r6", "rUAZtQzAMS_", "FqokmmbpZfe", "DZ6ISIdBLFj", "Y8EC0xph9i7", "cOyDSjzknnw", "b3KCWxGn9Oc", "gjNubjL1wj4", "kC5nMJvTed2", "T4yQSXCVJ1F", "uqsFI5ejxS0", "1o13chfEk7X" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for the response. I will keep my score unchanged. I encourage authors to do a more extensive literature review and compare the results and methods in the future version.", " Thank the authors for the comments, which confirm my original thoughts about this paper. For this reason, I will stick to my positive rating of this paper. ", " Thank you for your comments. In our revision (already uploaded to the OpenReivew submission website), we moved the more technical proofs in Section 6 to the appendix, and used the additional space for a discussion on the choice of ex-ante constraint, more discussions in the literature review section, as well as a few other things that other reviewers brought up. ", " Thank you for the suggestion. We agree that the proof details in Section 6 can be relegated the appendix. We have moved it to the appendix, and used the additional space for a) a high level summary of the proof in section 6, b) additional discussions of the relationship between the results in Section 5 and 6, which hopefully will help readers with understanding the results better. c) additional discussions around the modeling assumptions which some other reviewers had confusions about, and d) more discussions in the literature review section. ", " Thanks for the detailed response to all of the comments! Unfortunately my main point of critique is with the exposition, where any modifications are difficult to verify under the NeurIPS review format. \n\nThe writing is very technical, which for a paper who's (to my understanding) main thesis is using techniques from the optimal transport field and applying it to the fair division literature, could do a better job explaining the fair division model and contrasting it to the current literature to highlight the main ideas. \n\nWeakness 1: Agreed - I think potentially the tone of some of the writing around these facets could be reduced more to highlight it from a \"practical\" point of view on allowing practitioners to adjust these parameters, without providing a full characterization on the trade-off. \n\nWeakness 6: Right, exactly. I think that some of the exposition around this fact should be included in the paper. Especially when one considers what the \"policy\" is - a mapping from a single item to the set of recipients. Deriving ex-post fairness guarantees under this setting probably doesn't make much sense, which is why the ex-ante where the \"policy\" is evaluated several times is more intuitive. ", " Thank you for your response. Based on what I have seen, your revision doesn’t yet address what bothered me about the exposition. (What you added to the introductions of Section 5 & 6 is fine by me, but nothing I was missing as a reader.) Let me try again, in a more pointed way:\n\nFor me as a reader, it seems like your not using space effectively in Section 6. Let me demonstrate this with the second half of the proof of Thm. 2, after Lemma 1. At this point, you are using the fat-shattering dimension (not defined in the paper, only in the reference), the covering number (not defined, only in reference), Thm. 1 from [23] (not restated), and Dudley’s chaining integral (not defined, only in reference). Even though I know traditional PAC learning, I derived no insights from this half page of text, and this is just one example. 
I can’t tell whether there is some specialist audience for which this proof will be super valuable to have in the body, but I think it’s a very valid question whether you would not get more mileage out of the paper by deferring the proof for specialists to the appendix (and perhaps making it less terse there) and filling this space with anything else that you feel is important to get across.", " Thank you for your comments. We will respond to the weaknesses and the questions you raised in order.\n\nWeakness 1 (weakness of results): This is a fair point. Although our formulation allows practitioners to optimally trade off between efficiency and envy, our results do not contain theoretical predictions on how "severe" the trade-off has to be under different distributions. It would be interesting to characterize exactly which types of distributions lead to more envy, and what the shape of that trade-off curve might be. We do think, however, that the results provided are already interesting by themselves, especially for practitioners.\n\nWeakness 2 (relation to existing literature): Thank you for bringing this to our attention. Donahue & Kleinberg 2020 does indeed also study a setting where the fairness criteria can be smoothly adjusted with respect to welfare. However, their setting is quite different from ours, since they assume that there is a single type of resource, where the only things the agents care about are the amount/probability of receiving the items. In our revision, we have added a clearer discussion of ex-ante vs ex-post fairness, how it relates to Donahue & Kleinberg (and others), and why we focus on ex-ante fairness. See also our response below.\n\nWeakness 3 (Writing + practical motivation): Regarding the relation to previous literature on fair division, we have added additional discussion on that in the literature review section. \n\nWeakness 4 (Model description): The receivers here are the blood banks. The "items" being allocated in this case are the potential blood donors. We will revise the exposition to make this clearer. \n\nWeakness 5 (Target matching distribution): If we did not have a target matching distribution, and the objective were simply to maximize social welfare, then we might end up with an allocation where some donation centers receive 0 donors (for instance, those that are located in more rural areas) and others receive all the donors. Intuitively, this is not a result that we want (rural donation centers might serve a lot of patients). Introducing the target distribution allows the central planners to fix the ratios so that each blood bank receives some fraction of the donors. This is useful (separately from envy-freeness) if the platform wants to convince recipients to participate on the platform, by guaranteeing that they will be recommended a certain fraction of the time.\n\nWeakness 6 (Ex-Ante vs Ex-Post allocation): The reason why we consider ex-ante guarantees has to do with the type of application we're interested in. Imagine that you are performing blood donation recommendation on an internet platform. In such settings, we are assigning tens, and even hundreds of millions of users to blood banks. Even if we want the guarantees to hold ex-post, the large-scale nature of the problem + the law of large numbers means that in-expectation guarantees translate into something that is very close to holding ex-post. That is why for internet platform problems, requiring constraints to hold in expectation is a pretty standard setup. 
See, for example, the literature on budget constraints in ad auctions, where this is the case (e.g. Balseiro, Besbes, Weintraub 2014). The reviewer might have mistakenly understood our setup as allocating a single item. We are in fact interested in allocating all items, as represented by the distribution. \n\nQuestion 1: The reviewer is right to point out that there is some connection between the convergence of a stochastic optimization method (computational complexity) and the sample complexity bound of the same problem (learning complexity). In general, these two things are not the same. In particular, the learning complexity result from Section 6 is algorithm agnostic. In fact, it also shows uniform convergence, meaning that no matter which solution one arrives at, the empirical objective is close to the expected objective. However, the result from Section 6 does not provide a way for practitioners to actually compute the solution. The result from Section 5, on the other hand, does provide an algorithm for the practitioner to use, even though the convergence result only works for the specific optimization method that we proposed. \n\nQuestion 2: Given the envy constraint specified by the central planner, the allocation proposed by our formulation is optimal – meaning that there is no other allocation that has at most the same amount of envy, satisfies the target distribution constraints, and achieves higher social welfare. This is simply guaranteed by the constrained optimization formulation. Caragiannis et al. 2009 provide some worst-case analysis of how much welfare one has to lose to achieve envy-freeness. Our formulation allows central planners to smoothly trade off envy and efficiency, instead of having to choose between zero envy and maximum envy.\n\nMinor Comments: We added a discussion of EF1/EFX in the literature review section as well.\n", " Thank you for your comments. In our revision, we have expanded our discussion of the existing fair division literature to help readers better position our paper against other work. For Sections 4 (solution structure) and 7 (experiments), we focused much of the discussion on building intuition for how to interpret the results. For Sections 5 and 6, however, the results are more technical, which means that there are more equations. We have added more discussion at the beginning of these sections to make the results more interpretable. ", " Thank you for your comments. We agree that the results, when viewed from an algorithmic perspective, are not groundbreaking. However, we view our main contribution as providing a new framework for studying one of the most studied fairness constraints – envy-freeness – by relaxing it from a hard constraint to one that can be smoothly adjusted, along with the connection to optimal transport. In addition, we provide the analysis of a practical optimization method, as well as an analysis of the statistical properties of the solution structure, thus providing a comprehensive treatment of the problem. We believe that this is a meaningful contribution to the fair division community, and hope that it will catalyze more research effort at the intersection of fair division and optimal transport.", " Thank you for your comments. Regarding other real-world applications of our formulation: our formulation applies to any setting where the recipients have different valuations for the different items, and where we want to ensure that each recipient receives a fraction of the items. 
Other potential examples include allocating unsold groceries to food banks, and allocating ad impressions to advertisers in settings with pre-specified contracts on how many ad impressions each advertiser must receive. ", " This paper considers the resource allocation problem where each of $n$ receivers has to be allocated a fixed fraction of items. In each round, an item is drawn from a distribution $\mathcal{D}$. Each item can be represented by a vector $x$ in $\mathbb{R}^n$, where $x_i$ is the $i$-th receiver's value of this item. In this model the optimal allocation rule that maximizes social welfare can be modeled as the solution to a semi-discrete optimal transport (OT) problem. The solution structure and efficient algorithms via stochastic optimization are well studied in the OT literature. \n\nThe main contribution of this paper is that they study the tradeoff between envy and efficiency in this model. They add one envy constraint for each receiver: the envy of receiver $i$ against any other receiver should be bounded by $\lambda_i$. The goal is to find a socially optimal allocation rule under these constraints. They apply Fenchel-Rockafellar duality to characterize the optimal solution, which is in a similar form to the original OT optimization problem without envy constraints. The paper shows that with slight modification, the stochastic optimization for the original OT problem is efficient for the new problem. Finally the authors provide a PAC-like sample complexity bound using a standard Rademacher complexity argument.\n\nThe authors also provide some numerical simulation results of their framework. Strengths:\n- The paper considers an important model of resource allocation. The tradeoff between envy and efficiency is also a natural question to ask in this setting.\n\n- The paper studies this model comprehensively; it not only characterizes the optimal tradeoff between envy and efficiency but also gives a PAC-like sample complexity bound.\n\n- The paper includes some numerical simulations.\n\nWeaknesses:\n\n- The literature review is a bit problematic. As far as I am concerned, some previous works have discussed applications of optimal transport in resource allocation and mechanism design problems (see e.g. A. Galichon, Optimal Transport Methods in Economics). The authors should have discussed these results.\n\n- As for the technical parts, the techniques and proofs are not very interesting in general. Everything seems to be standard. - The paper does not provide many real-world applications of its model. The only application discussed in the paper is blood bank donation. It would be great if the authors could provide more potential real-world applications.\n The authors adequately addressed the limitations and potential negative societal impact of their work.", " The key contribution of this paper is the formulation of the resource allocation problem with envy restrictions as a variant of the semi-discrete optimal transport problem, which can thus be solved using the projected SGD algorithm. The paper also shows some statistical properties that may be helpful for its real-world applications.\n The strength (also the novelty) of this paper is that it relaxes the resource allocation problem with envy-free restrictions to one with envy tolerance. It also proposes a semi-discrete optimal transport variant formulation. 
The novel formulation may provide the possibility of applying various optimal transport methods to envy-related resource allocation problems. \nThe SGD method is not new, but serves the purpose adequately. Though the paper looks reasonable, there are still places that can be improved. \nFor example, the paper did not show clearly the relationship between conditional probability and joint probability in the optimal solution structure, which makes the problem formulation less strict. It would be more convincing if the paper can put these details into the appendix.\n The paper solves the problem using some SGD method. Thus, from an algorithmic point of view, it does not provide any new method or improvement. Although the paper presents a new way to look at the envy-tolerance resource problem, it does not show how the optimal transport properties play roles in the solution. Another limitation is that the appendix is not well written. Some deductions in the appendix \nneed to be clearer and more detailed.\n", " The paper studies a fair division problem inspired by Facebook’s blood donation program: a continuum of divisible goods must be allocated to “receivers” with additive utilities in such a way that (1) each receiver $i$ receives a target fraction $p_i$ of all goods, such that (2) each receiver $i$ has envy below some given bound $\\lambda_i$, and such that, within these constraints, (3) utilitarian welfare is maximized. The paper reduces this problem to a form of optimal transport, from which they derive results about the structure of optimal allocations, a convex optimization formulation, a stochastic gradient descent algorithm based if the distribution of goods can only be sampled rather than being explicitly given, and a PAC-style learning algorithm if a finite number of samples are available. The authors test these algorithms on synthetic and semi-synthetic data, and empirically investigate the price of tighter envy bounds in terms of welfare. I was pleasantly surprised by the results of this paper. I wasn’t familiar with optimal transport, and would have assumed a priori that the optimization problem solved in the paper would lead to quite messy solutions and approaches. But the authors show that by using the right kind of duality and (what sounds in their writing like standard) approaches from optimal transport and learning theory can make quick inroads into this problem. Given that I am not an expert, I cannot judge the novelty of the technical contribution, but they were new to me.\nI also really appreciate the empirical evaluation, which studies three well-selected questions in little space. Each of the three figures on page 8 tells an interesting story.\n\nMy main point of critique is the exposition, which is very math-heavy and not particularly accessible to an outside audience. In some sense, I think that this is the drawback of my first positive point: some of the paper’s value comes from bringing techniques from other fields into fair division, but the paper could do a better job at actually speaking to a fair-division audience where these tools aren’t yet common knowledge. I want to exclude Section 2.1 from this critique, which did a decent job at laying out the basic background of Optimal Transport. By contrast, a lot of the technical material references so many results from existing papers with so little description that I couldn’t take much intuition from it. 
I also think that the analysis of Algorithm 1 (stochastic gradient descent) should be proved in the supplementary material. More generally, part of the paper consists of more formulas than text, which makes following along much more painful than it needs to be. I would strongly recommend putting more of the technical details into the appendix and trying to convey in words more of an intuition of how the argument proceeds and why it works.\n\n## Minor comments:\n\n(1) To me, the word “receiver” sounds like radio terminology, not fair division. In case this is not already established language, how about “recipient”? At a quick glance, McElfresh et al. seem to also use this word.\n\n(2) Equations like Eq. 2 would be much easier to parse with parentheses.\n\n(3) I was going to complain about the missing derivation of the Fenchel dual, but saw it in Appendix A.1, which isn’t mentioned anywhere in the draft. I’d encourage the authors to change that! No specific questions, but please feel free to respond to anything you’d like. No complaints.", " Here the authors consider the problem of allocating a distribution of items to $n$ fixed individuals (named "receivers") subject to a fixed fraction constraint, while simultaneously ensuring minimal envy. At a high level, the authors show that this problem can be formulated as a variant of the optimal transport problem subject to a particular cost function (and the fact that "one" of the distributions in the problem is discrete). The authors then show convergence guarantees for an empirical version of stochastic gradient descent using observed samples from the underlying distribution, and complement their results with a numerical simulation on a synthetic and practically motivated dataset.\n\nTo be more concrete, the authors consider a fixed set $n$ of individuals $Y$. There is an "infinite" number of items represented by a distribution $\alpha$ over $[0,1]^n$ where each draw from this distribution $\alpha$ is a vector representing each individual $i$'s utility for the particular item. The goal is to maximize the expected utilities of the individuals, while maintaining the constraint that each individual $y_i$ is matched to at least a $p_i$ fraction of the items in expectation.\n\nThe authors consider allocation policies $\pi$ which take in a valuation vector (i.e. the $n$ dimensional vector of individual utilities for the item) and map it to one of the $n$ receivers. The basic formulation is then to solve the optimization problem to maximize $\sum_i X_i \pi(y_i | X)$ subject to the constraint that $\Pr(\pi(y_i | X)) \geq p_i$. Note that the objective function maximizes the utilitarian welfare, and the constraint enforces the target matching. However, one (potentially) additional desired property is envy-freeness. This can be incorporated by allowing a tolerance $\lambda_i$ of envy for each individual, and adding the constraint that $\max_j u_i(\pi_j) - u_i(\pi_i) \leq \lambda_i$ where we drop some notation and use $u_i(\pi_j)$ to denote the expected utility for $i$ given $j$'s allocation.\n\nThe main result of the authors is three-fold under this model:\n1. Connection to Optimal Transport: The authors show how, while the feasible solution space to this problem is large, tools from OT and Fenchel Duality can be used to establish a relationship between Optimal Transport (under a negative linear cost function) and an optimal solution. 
In particular, the authors show that the optimal solution is given as a greedy function over a partition (called \"Laguerre cells\").\n2. Practical Optimization: The authors then show that the dual of the optimization problem (in the OT formulation) can be solved either with stochastic gradient descent or from samples. With this, the authors show a high-probability convergence guarantee on the utilitarian welfare scaling as $1 / \\sqrt{m}$ where $m$ is the number of samples.\n3. Experiments: The authors complement the theoretical results with an experiment on a synthetic dataset and a practically motivated one, highlighting the empirical loss in efficiency as one modifies the fairness constraints and empirical convergence rates. ### Strengths\n1. Model + Theoretical Results: The theoretical results presented in Section 4 are novel to the fair resource allocation literature, making an explicit connection between allocation policies and the optimal transport literature. As the authors state, this avenue leads to interesting future directions both theoretically and practically as more tools from OT are used in the literature. In particular, to some extent, the model presented can be thought of as an \"infinite resource\" version of much of the existing literature, which typically assumes that the distribution governing the utility of individuals is also discrete.\n2. Quality of Results: The theoretical results presented in Sections 5 and 6 help highlight the ability to use the dual of the OT formulation practically for algorithm design.\n3. Relation to Existing Literature: The authors do a good job relating the current analysis to the optimal transport literature, especially for readers in the fairness community who would be less aware of this line of work.\n\n### Weaknesses\n1. Weakness of Results: One aspect that the authors mention frequently in their paper is their ability to understand the \"optimal tradeoff\" between envy and efficiency. While the OT formulation allows them to optimally solve for the solution given the additional envy constraints, there are no theoretical results or justification on the \"efficiency\" loss one observes when adding the additional envy constraints. There are numerical simulations highlighting this aspect, but the theoretical results are lacking in that regard. Outside of formalizing the relationship between allocation policies and optimal transport, the authors provide straightforward ML generalization guarantees and an SGD formulation.\n2. Relation to Existing Literature: The authors need to improve on situating their paper in the fair resource allocation literature. In particular, some model primitives (i.e. ex-ante vs ex-post allocations, relation to the \"price of fairness\", e.g. (Donahue + Kleinberg, 2019), etc.) need to be made clearer after describing the model (more of this in the next bullet).\n3. Writing + Practical Motivation: The authors do an amazing job highlighting the connection between the optimal transport literature and fair resource allocation. However, the practical description of their model and relation to previous literature on fair resource allocation needs to be expanded more.\n- Model Description: The paper could use a more concrete model description, in particular substantiating some of the model parameters in the running blood matching example. For example, in line 40/41 are the receivers here the blood bank vs. 
individual users of the blood bank matching platform?\n- \"Target Matching Distribution\": The target matching distribution $p_i^\\star$ is introduced with no practical motivation. It leaves open questions such as why the matching policy only needs to satisfy these constraints in expectation instead of almost surely (where then the problem is trivial), how they should be chosen, and their relationship to the envy-free guarantee which is described later.\n- Ex-Ante vs Ex-Post Allocations and the Online Structure: The matching policy solved for $\\pi$ denotes the probability of matching an item to $y$ given a specified valuation vector $X$ (denoting the utility of the $n$ individuals for the given item). Consider an application of this policy: an item is observed (hence an $X$), and the item is matched or shared at the discrete fractions $\\pi(y | X)$. However, notably this does not satisfy the target matching distribution (although...if those constraints were added almost surely the problem would be trivial). However, the hindsight solution (i.e. once $X$ is observed) is trivial: either allocate based on the fractions $p_i^\\star$, or allocate to the individual who cares the most, or subject to some envy-freeness constraints guaranteed by $\\lambda$. The motivation for considering these \"ex-ante\" allocations is never explicitly given. Moreover, the objective problem is only situated to consider a \"single\" item drawn from the infinite set of items, instead of considering potentially multiple allocations (although this is somewhat abated by the expectation). The relationship between the model considered here and the related literature (e.g. Donahue + Kleinberg highlighted earlier and the literature therein), or a discussion of the relationship to the blood bank problem, should be included. ### Questions\n1. The comparison between Sections 5 and 6 could potentially use more emphasis. In Section 5 the authors highlight how stochastic gradient descent with samples (unbiased from the true underlying distribution) achieves a particular convergence guarantee. However, in Section 6, this is then done again via a slightly different method; is the only intuitive difference obtaining high-probability versus in-expectation guarantees?\n2. Frequently in the paper the authors mention that their work addresses the welfare cost for achieving envy-free solutions. Are there any theoretical results supporting this claim, highlighting the price of fairness under this model?\n\n### Minor Comments\n- In paragraph , \"Unlike the existing resource allocation literature, where envy-free is treated as a hard constraint or not considered at all\" is not totally true given the ongoing literature studying EF1/EFX, etc. with divisible resources, (Donahue + Kleinberg 2019), etc.\n- The second bullet in line 51 could be expanded more\n- The \"proof by picture\" in Figure 1 should be expanded more\n In Section 8 the authors properly address the limitations and potential negative social impact of their work - notably being that any implementation of the strategy for designing allocation mechanisms must be used carefully in critical scenarios such as blood donation settings." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3, 3 ]
[ "gjNubjL1wj4", "b3KCWxGn9Oc", "FqokmmbpZfe", "DZ6ISIdBLFj", "Y8EC0xph9i7", "cOyDSjzknnw", "1o13chfEk7X", "uqsFI5ejxS0", "T4yQSXCVJ1F", "kC5nMJvTed2", "nips_2022_mvbr8A_eY2n", "nips_2022_mvbr8A_eY2n", "nips_2022_mvbr8A_eY2n", "nips_2022_mvbr8A_eY2n" ]
nips_2022_NN_TpS5dpo5
Physically-Based Face Rendering for NIR-VIS Face Recognition
Near infrared (NIR) to Visible (VIS) face matching is challenging due to the significant domain gaps as well as a lack of sufficient data for cross-modality model training. To overcome this problem, we propose a novel method for paired NIR-VIS facial image generation. Specifically, we reconstruct 3D face shape and reflectance from a large 2D facial dataset and introduce a novel method of transforming the VIS reflectance to NIR reflectance. We then use a physically-based renderer to generate a vast, high-resolution and photorealistic dataset consisting of various poses and identities in the NIR and VIS spectra. Moreover, to facilitate the identity feature learning, we propose an IDentity-based Maximum Mean Discrepancy (ID-MMD) loss, which not only reduces the modality gap between NIR and VIS images at the domain level but encourages the network to focus on the identity features instead of facial details, such as poses and accessories. Extensive experiments conducted on four challenging NIR-VIS face recognition benchmarks demonstrate that the proposed method can achieve comparable performance with the state-of-the-art (SOTA) methods without requiring any existing NIR-VIS face recognition datasets. With slightly fine-tuning on the target NIR-VIS face recognition datasets, our method can significantly surpass the SOTA performance. Code and pretrained models are released under the insightface GitHub.
Accept
The paper received 3 positive reviews. The reviewers all lean towards acceptance after the rebuttal. Overall, this work can be of great interest to the community working on NIR-VIS recognition. However, I hope the authors will present additional visualized results, as suggested by the reviewers.
train
[ "LAfiyyEDufW", "HtoeE3PK9-", "xxISs1sQozF0", "yvctP-Hs8gE", "rYvySiUn702", "Sj7wt6KCf94", "qQeilzKJpJm", "Pn6IESwYeA9" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your answers. They addressed my concerns pre-rebuttal.\nI decide to keep my Accept score.", " We sincerely thank all reviewers for their valuable comments and insightful advice on our paper. We are pleased to see that all reviewers give highly positive ratings (one accept and two borderline accepts). The main changes are highlighted in blue in the revision and our response to all comments can be found as follows.", " Thanks for the helpful suggestions, and we are open to further discussions.\n* Q1: The motivation for using a 3d-rendering based generating dataset is not clear, and it seems that the authors just utilise id-related loss when training. It is suggested to strengthen more use of the generation.\n\nAs stated in Section 2 in the paper, the NIR-VIS face recognition task is in the mire of the over-fitting problem due to the limited amount of image pairs in current NIR-VIS face recognition datasets. To solve the problem, previous methods generally employ the generative models, such as Generative Adversarial Networks (GAN) [1] or Variational AutoEncoders (VAE) [2], to synthesis facial images. Specifically, GAN generates NIR face images from the corresponding VIS ones. However, the “one-to-one\" face synthesis strategy (GAN) still suffers from the limited number of images in the NIR-VIS face recognition datasets. Although VAE solves the problem by synthesizing NIR-VIS face image pairs from identity representations, we notice the failure of preserving identity consistency when generating multiple NIR-VIS face image pairs from a given identity representation. To this end, we explore the generation of multiple paired facial images by photorealistically rendering in VIS and NIR 3D facial reconstructions, where the identity is perfectly preserved while changing the illumination and pose.\n\nMoreover, recent advances in facial reflectance reconstruction [3] enable us for the first time to acquire big datasets of facial shape and reflectance properties, which can be rendered in a fully controllable manner, in terms of illumination, background, pose and expression. The resulting vast datasets of labelled NIR-VIS pairs can be then used to augment the existing smaller-scale datasets used for training the model, clearly increasing the models' capabilities.\n\nAs stated in the paper (Line 255-257), the synthesized images are employed throughout the NIR-VIS face recognition network training. Additionally, the proposed ID-MMD loss is not only id-related, but aims to reduce the domain discrepancy between the generated NIR and VIS images at each identity level. \n* Q2: It is suggested to add generated data by different percentages.\n\nFollowing the suggestion, experiments of adding generated data by different percentages are conducted on the LAMP-HQ dataset. Specifically, based on the baseline model LC-29${^\\dagger}(L_{id})$, the generated data are added by the percentages of 10%, 50%, and 100%, respectively, during training. In the experiment, both identity loss and ID-MMD loss are used for network training. The comparison results are added as Table 3 in the revision. \n\nThe model performances suggest that the generated images could continuously contribute to the performance improvements. Best performance is achieved when all generated data are involved.\n\n* Q3: More comparative visualisations. 
In addition, the authors should discuss how generalizable the proposed method is in practical situations.\n\nTo better verify the effectiveness of the proposed method, we compare the mean cosine similarity between identity features of positive pairs and negative pairs on the LAMP-HQ dataset. Specifically, we randomly select 3k positive pairs (belonging to the same identity) and 3k negative pairs (belonging to different identities) from the test set. Then, identity features are extracted by model LC-29$^\\dagger$ ($L_{id}$) and model LC-29$^\\dagger$+Fake ($L_{id}$+$L_{idmmd}$) in the main paper, respectively. Here, we report the mean similarity of positive pairs and negative pairs as follows.\n\nModel | Positive | Negative\n:-: | :-: | :-: \nLC-29$^\\dagger$ ($L_{id}$) | 0.569 | 0.053 \nLC-29$^\\dagger$+Fake ($L_{id}$+$L_{idmmd}$)| **0.590** | **0.009**\n\nAs can be seen, as the generated data (Fake) and the ID-MMD loss ($L_{idmmd}$) are employed during training, the feature similarities between positive pairs increase while the similarities between negative pairs decrease. An intuitive visualization of the similarity distribution has been added as Fig. 5 in the revision. \n\nOur extensive experiments provide concrete evidence that our method can achieve performance comparable with the state-of-the-art methods without requiring any existing NIR-VIS face recognition datasets, proving the generalizability of the proposed method.\n\nReference\n\n[1] Song L, et al. Adversarial discriminative heterogeneous face recognition. AAAI 2018.\n\n[2] Fu C, et al. Dvg-face: Dual variational generation for heterogeneous face recognition. TPAMI 2021.\n\n[3] Lattas A, et al. AvatarMe++: Facial shape and BRDF inference with photorealistic rendering-aware GANs. TPAMI 2021.", " The suggestions are helpful, and we are open to further discussions.\n* Q1: The ID-MMD loss is not novel [1].\n\nSMCL [1] uses a tri-directional center-based loss ($L_{tricenter}$) to handle the distance between the syncretic modality and VIS/NIR modalities. Although we both focus on the relationship between the feature centroids, our ID-MMD loss differs from SMCL in the following aspects: \n* SMCL regularizes the feature relationship in Euclidean space while ours is in Reproducing Kernel Hilbert Space. When linear kernels are adopted, ours degenerates to a simple version of SMCL, i.e., only positive centroid pairs are involved.\n* Compared to SMCL, ours excludes the involvement of an intermediary modality.\n\nTo illustrate the differences, we replace $L_{idmmd}$ with $L_{tricenter}$ when training on LAMP-HQ. LC-29$^\\dagger$+Fake($L_{id}$) in the paper is adopted as the backbone model (B). We have the following results:\n\nModel|FAR=0.01%|Rank-1\n :-: | :-: | :-: \nB | 84.9$\\pm$1.6|98.4$\\pm$0.3\nB+$L_{tricenter}$| 90.5$\\pm$1.5|98.8$\\pm$0.3\nB+$L_{idmmd}$|**92.0$\\pm$1.5**|**98.9$\\pm$0.3**\n\nAs can be seen, $L_{tricenter}$ is inferior to $L_{idmmd}$.\n* Q2: Training differences.\n\nThe generation of NIR-VIS images and the training of the NIR-VIS face recognition network do not require any existing NIR-VIS face recognition datasets.\n* Q3: Missing works [2-4].\n\nDA-GAN [4] reveals that high-quality profile view synthesis could facilitate the face recognition task. But DA-GAN is proposed for the VIS face recognition task while ours is for NIR-VIS face recognition. DFAL [3] and OMDRA [2] focus on domain-invariant face feature extraction. 
Neither method involves any facial image generation with new identities.\n\nDiscussions about [2-4] have been added to Background and Related Work (Section 2) and performance comparisons with [2-3] have been added to Table 5 in the revision.\n* Q4: Combinations of losses.\n\nAs stated in Eq. (7) and Section 4.2 (Line 255) in the paper, we employ the combination of modality discrepancy reduction losses and id loss during training. Model performances in Table 4 prove \"the combination is better than single ones\".\n* Q5: Different metrics with DVG-Face.\n\nWe did not take the same metrics as DVG-Face due to the differences in the generation method and the training process.\n* Even though DVG-Face can generate multiple pairs of NIR-VIS images for a particular identity, it only generates one NIR-VIS pair per person. DVG-Face measures Mean Similarity (MS) between the pair to evaluate intra-identity consistency. However, we generate multiple NIR and VIS face images for a given identity. To obtain the intra-identity consistency, the feature distances (similarity) across multiple images are calculated, namely Mean Identity feature Distance (MID) in our work. In the revision, for better understanding, we compare with DVG-Face on LAMP-HQ in terms of MS between pairs and MS across multiple images, which are indicated by 1v1 and 1vN, respectively. The results have been added to Table 2 in the revision. The results show that our method outperforms DVG-Face by achieving higher MS in both settings, which proves that our generation preserves intra-identity consistency well. Additionally, the 1vN MS of our method is 0.411. Given the general identity verification threshold (around 0.3), our generation preserves face diversity.\n* DVG-Face obtains identity representations for the face generation via random noise sampling. The evaluation of inter-identity diversity via Mean Instance Similarity (MIS) proves the low overlap between generated identities. However, the identity features we used for the face generation come from a benchmark VIS face recognition dataset (CelebA). There is no overlap between identities. Thus, we did not evaluate MIS in our work. In the revision, we add the comparison results on MIS in Table 2. Following the settings in DVG-Face, the comparisons are conducted between VIS-VIS pairs and NIR-VIS pairs. The results suggest that our generation achieves a higher inter-identity diversity than DVG-Face.\n* Frechet Inception Distance (FID) is widely used in GAN-based generation, but we use physically-based rendering for generation. Following DVG-Face, we also employed LightCNN for FID evaluation in the revision. The proposed method exhibits higher feature distribution consistency with real data than the GAN-based DVG-Face. Even though our method has not rendered hair and torso, our generation is closer to the features of real data from the viewpoint of a face recognition network.\n\nReference:\n\n[1] Wei Z, et al. Syncretic modality collaborative learning for visible infrared person re-identification. ICCV 2021.\n\n[2] Hu W, et al. Orthogonal modality disentanglement and representation alignment network for NIR-VIS face recognition. TCSVT 2021.\n\n[3] Hu W, et al. Dual face alignment learning network for NIR-VIS face recognition. TCSVT 2021.\n\n[4] Zhao J, et al. Dual-agent gans for photorealistic and identity preserving profile face synthesis. NIPS 2017.", " We thank Reviewer Srya for the feedback and suggestions. 
The suggestions are helpful, and we are open to further discussions.\n\n* Q1: \"The synthesized images, particularly the VIS ones, look pretty unrealistic. The 3D models only cover the facial part, leaving some components missing, including hair, facial accessories, and background.\"\n\nThere are multiple reasons for not adopting a full-head 3D model, with hair and accessories. 1\\) First is the input pre-processing configuration [1] of the face recognition networks. Specifically, before training, face images are cropped according to 5 facial points (two eyes, nose and two mouth corners), where only the main facial components are included. In [2], most of the discriminative facial regions are around eyes, noses and mouths. 2) Apart from the generated images, we also use a benchmark VIS face recognition dataset (WebFace4M) [3] for training, which is collected from real-world scenarios with diverse hair styles, accessories, and backgrounds and thus ensures diversity and authenticity in training data samples. Given both the aforementioned points, hair and accessories are not required to be augmented during the NIR-VIS facial image generation. In terms of background, as we can see in Figures 1 and 2 in the main paper, we do vary the background in the rendered images.\n\nMoreover, despite recent approaches in photorealistic human head and body rendering, reconstructing the human face in conjunction with the hair, torso and accessories still remains an open problem. The most relevant work of human VIS-rendering [4] utilizes only a head geometry PCA model and requires manually created rendering assets (hair, facial hair, clothes) that must be procured by professional designers. Furthermore, photorealistic hair rendering is much slower, given the huge number of vertices required, and requires complex commercial rendering engines, which cannot be easily engineered to render in NIR. \n\nIn the end, a main finding of our work is that rendered NIR-VIS facial image pairs can augment the capabilities of the recognition network, even without including the hair and torso.\n\n* Q2: \"The notations such as $\\mathcal{R}^{NIR}$ should be explained earlier.\"\n\nWe introduced the concept of VIS (NIR) Renderer $\\mathcal{R}^{VIS(NIR)}$ in Section 3.1 (Line 167) in the main paper, i.e., \"We define a VIS physically-based rendering function $\\mathcal{R}^{VIS}$ …\". We apologize for the misunderstanding caused by the inconsistent terms, i.e., \"rendering function\" and \"renderer\". We have clarified this term in the revision (Line 172 and 175).\n\n* Q3: \"Are WebFace260M and WebFace4M the same dataset?\"\n\nIn [3], WebFace260M is randomly divided into 10 folds, and the first fold serves as WebFace4M. We also use this fold and it is the same subset as in [3]. We have added explanations to avoid confusion in the revision (Line 256-258).\n\n* Q4: \"No facial expression augmentations; all qualitative figures of synthetic images have neutral expressions. The authors should consider it when developing face recognizers for real-world applications.\"\n\nWe conduct a comparison on the LAMP-HQ dataset to validate the effectiveness of facial expression augmentations. NIR-VIS facial images with diverse facial expressions are generated by using blend-shapes. The comparison results of models trained with generated data without (w/o) and with (w/) Expressions (E) augmentations are illustrated as follows. 
\n\n| Setting | FAR=0.01\\% | FAR=0.1\\% | Rank-1 |\n| :-----:| :-----:| :----: | :----: |\n| w/o E | 91.75$\\pm$1.5 | 97.96$\\pm$0.3 | 98.87$\\pm$0.3 |\n| w/ E | **92.05$\\pm$1.5** | **98.02$\\pm$0.3** | **98.91$\\pm$0.3** |\n\nAs seen from the results, it is clear that the performance improvements brought by the expression augmentations are subtle, i.e., less than 0.1$\\%$ in terms of Rank-1 accuracy.\n\nReferences\n\n[1] Wang H, et al. Cosface: Large margin cosine loss for deep face recognition. CVPR 2018.\n\n[2] Wang Q, et al. Hierarchical pyramid diverse attention networks for face recognition. CVPR 2020.\n\n[3] Zhu Z, et al. WebFace260M: A Benchmark for Million-Scale Deep Face Recognition. TPAMI 2022.\n\n[4] Wood E, et al. Fake it till you make it: face analysis in the wild using synthetic data alone. ICCV 2021.\n", " This paper proposed a new face rendering technique to synthesize paired NIR-VIS images for improved face recognition in NIR space. Unlike the previous methods, which use image-to-image translation models learned from paired NIR-VIS images, this method uses physically-based 3D face rendering. It first reconstructs 3D face meshes and reflectance assets in VIS space, then infers the corresponding reflectance assets in NIR space, and finally synthesizes paired VIS-NIR images at various head poses and illuminations. It also employs a novel ID-MMD loss to close the gap between VIS and NIR features in NIR-VIS face recognition training. The proposed method helped to achieve state-of-the-art face recognition performance on four NIR-VIS benchmarks. ### Strengths\n- The proposed method allows generating infinite pairs of VIS-NIR facial images covering different head poses and illuminations without learning from any real VIS-NIR image pairs.\n- The paper proposes a sophisticated algorithm to transform a VIS reflectance asset into its NIR version.\n- The paper proposes a novel ID-MMD loss to close the gap between VIS and NIR features in NIR-VIS face recognition training. Ablation studies confirm its effectiveness.\n- By using only the synthesized VIS-NIR images, the trained face recognizers can produce competitive NIR face recognition performance compared with the baseline methods trained on real NIR-VIS datasets. After fine-tuning on real NIR-VIS images, these recognizers provide state-of-the-art and near-perfect performance on four NIR-VIS face recognition benchmarks.\n\n### Weaknesses\n- The synthesized images, particularly the VIS ones, look pretty unrealistic. The 3D models only cover the facial part, leaving some components missing, including hair, facial accessories, and background.\n- The notations such as \\mathcal{R}^{NIR} should be explained earlier, e.g., in the text or the caption of Table 1, rather than in the caption of Fig. 4.\n- There is a predefined subset of WebFace260M called WebFace4M. 
The authors said they randomly selected images to create their WebFace4M dataset. Are they the same dataset? If not, it would be better to change the dataset name to avoid confusion.\n- It seems the authors did not augment facial expressions; all qualitative figures of synthetic images have neutral expressions. I think the expression is not an important factor in the test benchmarks, but the authors should consider it when developing face recognizers for real-world applications. - Can we improve the method by using full-head 3D reconstruction, augmented expressions, and augmented accessories?\n- There is a predefined subset of WebFace260M called WebFace4M. The authors said they randomly selected images to create their WebFace4M dataset. Are they the same dataset? If not, it would be better to change the dataset name to avoid confusion. There is no discussion on limitations. The paper limits its application to avoid potential social impacts.", " The work proposes a NIR-VIS face matching dataset constructed with a physically-based renderer. An ID-MMD loss is employed to facilitate the identity feature learning as well as to reduce the modality discrepancy. The work achieves state-of-the-art performance on 4 NIR-VIS face recognition benchmarks. Strengths:\n(1)\tThe proposed method is capable of automatically generating multiple NIR-VIS image pairs with identity information preserved, which is of great significance. \n(2)\tWith the proposed training scheme, the NIR-VIS face matching dataset helps improve the NIR-VIS face recognition performance by a large margin. \n\nWeaknesses:\n(1)\tThe proposed ID-MMD loss reduces the distance between the NIR-VIS feature centroids of the same identity, which is effective yet not novel [1]. \n[1] Wei, Ziyu, et al. \"Syncretic modality collaborative learning for visible infrared person re-identification.\" ICCV. 2021.\n(2)\tThe training differences from other compared methods should be stated in more detail.\n(3) Some related works are missing, e.g., Dual face alignment learning network for NIR-VIS face recognition, Orthogonal modality disentanglement and representation alignment network for NIR-VIS face recognition, Dual-Agent GANs for Photorealistic and Identity Preserving Profile Face Synthesis\n (1)\tIt would be better if the combinations of modality discrepancy reduction losses and the id loss were evaluated, as the combination of losses can sometimes have a larger impact than single ones. \n(2)\tIn DVG-Face, the evaluation metrics of generation quality are Mean Similarity, Mean Instance Similarity and Frechet Inception Distance. Why do the authors take different metrics in this work? Table 2 shows that the proposed method holds the smallest Mean Identity feature Distance. However, it can also be interpreted as a lack of diversity. \n See Weaknesses & Questions", " In this paper, the authors propose a method to synthesize near-infrared faces by transforming them from visible faces. Based on this, the authors can conduct NIR-VIS face recognition without any existing NIR-VIS face datasets. Besides, they also propose an Identity-based Maximum Mean Discrepancy loss to facilitate identity feature learning. The good performance shows the effectiveness of their method. Strengths:\n1. The authors utilize a novel 3D-rendering-based method to generate large-scale NIR-VIS paired data.\n2. The authors propose an Identity-based Maximum Mean Discrepancy loss to facilitate identity feature learning.\n3. The descriptions of the implementation of the proposed synthesis method are detailed.\n\nWeaknesses:\n1. The motivation for using a 3D-rendering-based generated dataset is not clearly illustrated, and it seems that the authors just utilize an id-related loss when training. It is suggested to make greater use of the generated data.\n2. The comparison results are not sufficient. It is suggested to add generated data by different percentages to illustrate the effectiveness of the synthesis method.\n3. I suggest more comparative visualizations to verify the effectiveness of the proposed method. In addition, the authors should discuss how generalizable the proposed method is in practical situations.\n\n\n Please see my comments above. 
The limitations and potential negative societal impact have been adequately addressed." ]
[ -1, -1, -1, -1, -1, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "rYvySiUn702", "nips_2022_NN_TpS5dpo5", "Pn6IESwYeA9", "qQeilzKJpJm", "Sj7wt6KCf94", "nips_2022_NN_TpS5dpo5", "nips_2022_NN_TpS5dpo5", "nips_2022_NN_TpS5dpo5" ]
nips_2022_i7WqjtdD0u
Learning With an Evolving Class Ontology
Lifelong learners must recognize concept vocabularies that evolve over time. A common yet underexplored scenario is learning with class labels over time that refine/expand old classes. For example, humans learn to recognize ${\tt dog}$ before dog breeds. In practical settings, dataset $\textit{versioning}$ often introduces refinement to ontologies, such as autonomous vehicle benchmarks that refine a previous ${\tt vehicle}$ class into ${\tt school-bus}$ as autonomous operations expand to new cities. This paper formalizes a protocol for studying the problem of $\textit{Learning with Evolving Class Ontology}$ (LECO). LECO requires learning classifiers in distinct time periods (TPs); each TP introduces a new ontology of "fine" labels that refines old ontologies of "coarse" labels (e.g., dog breeds that refine the previous ${\tt dog}$). LECO explores such questions as whether to annotate new data or relabel the old, how to leverage coarse labels, and whether to finetune the previous TP's model or train from scratch. To answer these questions, we leverage insights from related problems such as class-incremental learning. We validate them under the LECO protocol through the lens of image classification (on CIFAR and iNaturalist) and semantic segmentation (on Mapillary). Extensive experiments lead to some surprising conclusions; while the current status quo in the field is to relabel existing datasets with new class ontologies (such as COCO-to-LVIS or Mapillary1.2-to-2.0), LECO demonstrates that a far better strategy is to annotate $\textit{new}$ data with the new ontology. However, this produces an aggregate dataset with inconsistent old-vs-new labels, complicating learning. To address this challenge, we adopt methods from semi-supervised and partial-label learning. We demonstrate that such strategies can surprisingly be made near-optimal, in the sense of approaching an "oracle" that learns on the aggregate dataset exhaustively labeled with the newest ontology.
Accept
The setting of evolving and refining classes over time is certainly a practical one in domains such as text classification. This paper offers some insights on questions such as whether the entire dataset should be relabeled, or whether one can achieve near-optimal performance by labeling only the new chunk. The paper concludes that joint training on old and new data, even if inconsistent, in conjunction with semi-supervised learning can be fairly effective.
train
[ "RnK-1QjrKX", "xXkA6HloE7w", "3IiD-LeZ6lX", "UWp470cV8G7", "FhzJ-CsUDSD", "ipYiMikSC52", "0JLuh8-PMc", "GPVb_QHC_FY", "PBl3sqOd5M2", "h3ETQ0iGko", "E_uwV6_trR", "nSo92ZH_wMJ", "I0D9lbZHJbo", "VXQtlKLsLF2", "h96p-BtWSuC" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you Reviewer YxYF for your interaction and upgraded rating!\n\nAgain, we appreciate your positive attitude with \"*no reservation about the quality of the submission*\" (e.g., clarity and well-organized paper structure, the sound experimental setup and proposed models, novel setting of LECO, novel approach that combines existing techniques, etc.). To strengthen our work, as promised, we will add the new experiment in the camera-ready (cf. setup below for reference). In the meantime, we are delighted to address any other concerns that hold you off upgrading further (e.g., your rating on contribution is \"1 poor\"; a typo?).\n\n*Setup for the new experiment with more TPs.* We repurpose iNaturalist to define four TPs, with each having an ontology out of four at distinct taxa levels [\"order\", \"family\", \"genus\", \"species\"]. We will use our open-source codebase (cf. supplemental material) to readily complete this experiment.", " Thank you Reviewer 6YbD for your interaction, feedback, and upgraded score!\n\nTo better understand and solve the new problem LECO, we believe it is important to rigorously leverage insights from existing approaches to related problems. We agree that doing so might overshadow the significance of our work, even though the new problem LECO formulates a broad range of applications. To strengthen our current work, as promised, we will add the new experiment in the camera-ready (cf. setup below for reference). Furthermore, we expect future work to develop novel algorithms tailored to LECO, e.g., by introducing efficient finetuning techniques across TPs, by focusing on rare fine-grained classes emerging in later TPs, etc.\n\n*Setup for the new experiment with more TPs.* We repurpose iNaturalist to define four TPs, with each having an ontology out of four at distinct taxa levels [\"order\", \"family\", \"genus\", \"species\"]. We will use our open-source codebase (cf. supplemental material) to readily complete this experiment.", " I have read the rebuttal and the reviews from other reviewers. I really appreciate the explanations and additional experimental results (Mapillary dataset) provided in the rebuttal. The authors' response has adequately addressed some of my concerns. However, I am still not convinced about the novelty and significance of the conclusions. Having said that, the new results do provide additional support for the proposed solution and the promised results with multiple TPs should also be interesting. Therefore, I have decided to upgrade my score.", " Thank you for your response! Given a promised extension of the method to multiple TPs, I have upgraded my rating to borderline accept.", " Thank you Reviewer kuwA for your interaction and feedback!\n\nAs for why our approach generalizes to more TPs, besides the explanations in our rebuttal, we agree that an experiment with more TPs will be a good empirical demonstration. Therefore, we will happily set up such an experiment with four TPs on iNaturalist, where we define the four ontologies based on four of its taxa levels (i.e., \"order\", \"family\", \"genus\", and \"species\"). Our open-source codebase (cf. supplemental material) makes completing this experiment straightforward. We will add it in the camera-ready version. We are open to further suggestions about this experiment.", " Thank you for your answers. I still find the two TPs evaluation as the main weakness of the paper. 
Although the rebuttal suggests that the approach would generalize to more TPs, there is no evidence that demonstrates this claim. Therefore, I keep my evaluation as is.\n\nIn general, I disagree with the borderline reject scores of the other reviews and suggest weak accept.", " > **Reviewer kuwA asks about the difference between LECO and learning with partial labels (LPL).**\n\nLECO studies how to learn models for new classes which are subclasses of previously learned (super)classes. The foremost question to answer is how to annotate data with new classes (Line55), i.e., relabeling the old or labeling the new. Our experiments convincingly demonstrate that labeling the new is a much better strategy, hence leading to the second question about how to leverage coarsely-labeled old data and fine-labeled new data. While LPL methods can be readily adopted, LECO offers the opportunity to study whether/how to exploit the previous TP's model. Our extensive experiments (Table 2) show finetuning significantly outperforms training from scratch, which is the setup for LPL. That said, LECO is a richer venue to explore algorithms than the setup for LPL.\n\n> **Reviewer kuwA recommended a paper that studies how to mine label relations for scene graph generation using semi-supervised learning.**\n\nThank you for the recommended paper. We happily cited this paper (reference [25]) in our updated manuscript (Line221).", " We thank Reviewer kuwA for the valuable comments and for appreciating our well-written paper, important and novel problem setup, and effective approaches. Along with the rebuttal, **we have updated our paper by including the experiments on the large-scale Mapillary dataset**, which was included in the supplement. The Mapillary dataset was collected for semantic segmentation research in the context of autonomous driving; its classes follow a long-tailed class distribution. Importantly, it reflects **a real-world case of LECO** because its ontologies were versioned from V1.2 to V2.0, formulated as TP$^0$ and TP$^1$ in our experiments. Below, we address specific issues with referred line numbers in the updated paper.\n\n> **Reviewer kuwA thinks a major weakness of the paper is that experiments set up only two time periods (TPs), and is afraid that the conclusions might not generalize to more TPs.**\n\nWe see the worry as a result of relating LECO with continual learning (CL), which is a different problem. CL emphasizes the issue of catastrophic forgetting, and proposes to exacerbate this issue by setting more TPs (and using a small buffer for storing training data). Differently, LECO emphasizes the difficulty of learning for new classes which are subclasses of the previously learned (super)classes (Line95), without necessarily restricting a buffer size. Results in Table 2 show that, even without buffer restrictions, CL methods still perform poorly (cf. FreezePrev [47] and TrainScratch [59]). Therefore, using two TPs sufficiently emphasizes the difficulty in LECO. We added an explanation for this concern in the updated paper (Line147-152).\n\nFurthermore, our explored techniques are not affected by the number of TPs. Because more TPs provide more data, our techniques will improve further by exploiting such data, e.g., using techniques of semi-supervised learning, joint training, and learning with partial labels. Moreover, we note that using two time periods is also the more common problem formulation in the context of dataset versioning (e.g., Mapillary and Argoverse, cf. 
Line36-38), implying a solution for it itself may already have widespread impact. \n\n> **Reviewer kuwA is concerned that our experiments only have two datasets.**\n\nThe updated paper contains experiments on the third dataset Mapillary (cf. Mapillary-LECO in Table 1-4), which is large-scale, originally collected for semantic segmentation in the context of autonomous driving research. These experiments were included in our supplement. The Mapillary dataset was versioned from V1.2 to V2.0, reflecting a real-world case where labels evolved from coarse to fine.\n\n> **Reviewer kuwA points out that retraining \"may be very costly from a compute standpoint\", and recommends some recent papers that propose to improve model adaptation or finetuning.**\n\nThank you for the recommended papers; we cited them in the updated paper (Line334). We agree that retraining on large-scale data can be costly and non-trivial. Perhaps fortunately, our explorations demonstrate that a better strategy is finetuning, instead of retraining (Table 2 and Line331). That said, techniques that aim for efficient adaptation/finetuning can be readily applied to LECO.\n\n> **Reviewer kuwA points out that data distribution shift is interesting to explore.**\n\nAgreed! We have noticed this in Line335 and plan to set up this scenario in future work.\n\n> **Reviewer kuwA asks about next steps based on this work, and concerned that LECO might be solved because our explored method approaches the upper-bound.**\n\nAs discussed in Line325-336, next steps include a new setup that incorporates unlabeled data, shifting/dynamic data distributions, etc. We believe such new LECO setups have better upper bounds and offer new exploration space.\n\n> **Reviewer kuwA suggests a better upper bound which is to use all levels of labels and both old and new data.**\n\nGreat suggestion! We will update the upper bounds after we finish training the upperbound models. The training is straightforward but takes about one month on all datasets (particularly the large-scale ones on iNaturalist and Mapillary).", " > **Reviewer YxYF asks if \"training an entire network from scratch with random weights\" means \"training a randomly initialised neural Network\"?**\n\nYes!\n\n> **Reviewer YxYF asks if \"training a classifier on top of a randomly initialised feature extractor\" serves as a lower-bound?**\n\nYes! We removed this lower bound in the updated paper because it does not provide much meaningful analysis.\n\n> **Reviewer YxYF asks (1) if \"filter out\" means \"rejecting data\" in the sentence \"we filter out pseudo-labels that do not align with the ground-truth coarse labels\", and (2) if the coarse label is used?**\n\nYes (Line296), and yes (Line295)!\n", " We thank Reviewer YxYF for the insightful comments and having \"no reservation\" about the paper quality (e.g., clarity and well-organized paper structure, the sound experimental setup and proposed models, etc.). Along with the rebuttal, **we have updated our paper by including the experiments on the large-scale Mapillary dataset**, which was included in the supplement. The Mapillary dataset was collected for semantic segmentation research in the context of autonomous driving; its classes follow a long-tailed class distribution. Importantly, **it reflects a real-world case of LECO** because its ontologies were versioned from V1.2 to V2.0, formulated as TP$^0$ and TP$^1$ in our experiment. 
Below, we address specific issues with referred line numbers in the updated paper.\n\n> **Reviewer YxYF is worried that the current LECO setup has two time periods (TPs).**\n\nWe see the worry as a result of relating LECO with continual learning (CL), which is a different problem. CL emphasizes the issue of catastrophic forgetting, and proposes to exacerbate this issue by setting more TPs (and using a small buffer for storing training data). Differently, LECO emphasizes the difficulty of learning for new classes which are subclasses of the previously learned (super)classes (Line95), without necessarily restricting a buffer size. Results in Table 2 show that, even without buffer restrictions, CL methods still perform poorly (cf. FreezePrev [47] and TrainScratch [59]). Therefore, using two TPs sufficiently emphasizes the difficulty in LECO. We added an explanation for this concern in the updated paper (Line147-152).\n\nFurthermore, our explored techniques are not affected by the number of TPs. Because more TPs provide more data, our techniques will improve further by exploiting such data, e.g., using techniques of semi-supervised learning, joint training, and learning with partial labels. Moreover, we note that using two time periods is also the more common problem formulation in the context of dataset versioning (e.g., Mapillary and Argoverse, cf. Line36-38), implying a solution for it itself may already have widespread impact. \n\n> **Reviewer YxYF thinks the current LECO setup makes it similar to the problem of Fine-Grained Classification with both Fine and Coarse Supervision (FGC-FCS); it misses some citations and comparisons to the state-of-the-art methods in this line of work.**\n\nThank you for recommending the papers which are among the first that studied the problem of FGC-FCS. We happily cited them (Line110). Importantly, for FGC-FCS, we cited and compared the methods introduced by Su et al. [69,70], which were published in 2021 and are the state-of-the-art (cf. Line221, 230-240). We point out that some of our most effective solutions (such as joint training) are general enough to apply for more complex ontology evolutions (such as those present in Mapillary). \n\n> **Reviewer YxYF points out typos in the referred accuracies in Introduction, and unclear statement how labeling-new-data and relabeling-the-old achieve different results in Table 2; asks whether it is because of learning a feature extractor on the old data and fine-tune it on the new data?**\n\nThank you for pointing out the typos; we apologize! We have corrected them (Line79-83). Yes, you are right! We updated the sentence to make it clear (Line79): \"finetuning the previous TP's model on newly-labeled-data outperforms that on relabeled-old-data: 73.64% vs. 71.26% in accuracy\".\n\n> **Reviewer YxYF points out that using all the labeled data is unrealistic because buffer size has to be finite in practical continual learning systems.**\n\nIt is right that the buffer size may become an issue for practical continual learning systems such as autonomous vehicles, which continuously update their machine-learned models and accumulate huge amounts of data over years. However, to the best of our knowledge, such systems do not have issues to store *labeled data*, though they may have issues for storing *unlabeled* data. For example, in [R1], interviewed auto-manufacturers argue for saving all (important) data for potential legal action. 
In fact, many other applications save historical data for privacy-related considerations (e.g., medical data records) and as forensic evidence (e.g., videos from surveillance cameras). We hope the reviewer considers these real-world applications as realistic setups that use a buffer large enough to store all *labeled* data.\n\n[R1] Amend, J. M. (2018, January 18). Storage almost full: Driverless cars create data crunch. Wards Auto. Retrieved from https://www.wardsauto.com/technology/storage-almost-full-driverless-cars-create-data-crunch\n", " > **Reviewer 6YbD comments that \"the findings are not very surprising\", and thinks \"it is expected that a recent SOTA semi-supervised learning (SSL) method will be able to reach a performance close to supervised upper-bound accuracy trained with all data\".**\n\nAmong many findings in our work, some are quite surprising. For example, Table 2 shows that finetuning the previous TP's model significantly outperforms training-from-scratch on the same data and labels (73.64% vs. 65.69% on iNat-LECO, 30.39% vs. 27.24% on Mapillary-LECO). We conjecture that the coarse-to-fine label evolution serves as a good learning curriculum that improves performance (Line332). Moreover, while prior work shows that exploiting coarse-fine label relationships helps learning, we find that almost all the improvement can be explained by simple joint training (that *doesn’t* need one to define relationships between the old and new labels, making it applicable to far more complex ontology evolutions); cf. 82.98% vs. 83.61% on iNat-LECO in Table 3. We believe our (surprising) findings are valuable assets to our community.\n\n> **Reviewer 6YbD asks how the levels were selected for the iNaturalist-LECO experimental setup.**\n\nGood question! The selection is based on two considerations, aiming for a clean and challenging setup to study LECO. First, we think it is good to have more fine-grained classes in TP1, hence we choose the most fine-grained level \"species\" in TP1. Second, we think all coarse classes should be split later to emphasize the difficulty of learning with class evolution, and TP0's classification task should be challenging as well with more classes. Therefore, we choose the \"order\" level which has 123 taxa. As a reference, the original iNat dataset has seven levels: kingdom (3 taxa), phylum (8), class (29), order (123), family (339), genus (729), and species (810).\n\n> **Reviewer 6YbD comments that using a large validation set for hyperparameter search is not realistic, and asks whether we searched hyperparameters for the upper-bound model.**\n\nLine141 explains that we sample 20% of the training data as the validation set. This 8:2 train-val ratio is a common practice in the community. In real-world applications, practitioners use a validation set large enough to reliably tune hyperparameters, and then train a final model over the combined train and val sets. Therefore, using 20% labeled data as the validation set should not be an issue in our work.\n\nYes, we search hyperparameters in the same way on the same validation set for the upper-bound methods.\n\n> **Reviewer 6YbD points out that the long-tailed class distribution might cause issues for the pseudo-labeling method, because data from tail-classes would get incorrect pseudo-labels as head-classes.**\n\nGreat point! 
We verify this issue with Table 3's results on the iNat-LECO, in which both coarse and fine classes follow long-tail distributions -- the pseudo-labeling method does not work well in this long-tailed scenario. In particular, it underperforms simple supervised learning, cf. 73.56% vs. 73.64% on iNat-LECO in Table 3. However, our solutions effectively address this issue. For example, ST-Soft (self-training with soft labels), which exploits the softmax scores of pseudo-labels, improves accuracy to 79.64%, and combining it with simple joint training boosts it to 83.61%.\n", " We thank Reviewer 6YbD for the insightful comments. Reviewer 6YbD appreciates our new problem LECO, our simple and intuitive solution, and our extensive and systematic experiments. Along with the rebuttal, **we have updated our paper by including the experiments on the large-scale Mapillary dataset**, which was included in the supplement. The Mapillary dataset was collected for semantic segmentation research in the context of autonomous driving; its classes follow a long-tailed class distribution. Importantly, it reflects a **real-world case of LECO** because its ontologies were versioned from V1.2 to V2.0, formulated as TP$^0$ and TP$^1$ in our experiment. Below, we address specific issues with referred line numbers in the updated paper.\n\n> **Reviewer 6YbD thinks that (1) the proposed problem LECO is similar to continual learning, and (2) the proposed solution is an ensemble of existing techniques for other problems, which lacks technical novelty.**\n\nLECO and continual learning are two different problems (detailed in Line39-54), although both require learning models for new classes in distinct time periods (TPs). For example, LECO emphasizes the difficulty of learning for new classes which are subclasses of previously learned (super)classes (Line42). LECO formulates an underexplored but common real-world scenario as demonstrated by contemporary dataset versioning (Line35-38). In contrast, CL emphasizes catastrophic forgetting (Line93) and assumes old and new classes have clear class boundaries (Line90). Therefore, LECO is a novel problem that is quite different from CL.\n\nAs a new problem, LECO offers a new testbed to exploit existing techniques developed for other problems (including class-incremental learning, fine-grained classification with both coarse and fine supervision, semi-supervised learning, etc.). We believe leveraging their insights is crucial to developing meaningful solutions to LECO. Our extensive exploration leads to novel technical solutions, i.e., one should always label new data (Line58-60), adopt the simple joint training which is surprisingly effective and can deal with inconsistent coarse-to-fine labels (Line62), use semi-supervised learning techniques to generate pseudo labels on old data and use old labels to reconcile inconsistent pseudo labels (Line65), and finetune the previous TP's model, which significantly outperforms training from scratch (Line69). Reviewer YxYF thinks we \"combine existing methods in a *novel* way\". \n\n> **Reviewer 6YbD thinks the problem setup \"is restricted\" because it (1) \"assumes a negligible cost for data acquisition\", (2) expects finer labels but not novel ones (which do not have a parent/coarse label), and (3) does not consider the long-tail distribution of classes.**\n\nThe problem LECO is well demonstrated by dataset versioning (Line35-38), e.g., Mapillary has updated ontologies from its V1.2 to V2.0. 
We also repurpose the Mapillary dataset to formulate a realistic setup to study LECO, i.e., using the ontologies of its two versions in two TPs. Therefore, we argue our problem setup is quite realistic rather than restricted. We answer the three points below.\n\n1. As explained in Line330, although data acquisition has a cost, many applications acquire data continuously regardless of the cost, e.g., acquiring data from autonomous-vehicle fleets, medical and bio-images from microscopic scanners, video frames from surveillance cameras, etc. Therefore, LECO does apply to a broad range of applications.\n2. Our work *does* discuss the scenario in which new classes emerge that have no parents (Line43). In this case, new classes can be thought of as fine-grained classes of a catch-all ${\tt background}$ class. In fact, Mapillary has such a ${\tt background}$ (aka ${\tt void}$) class in its V1.2 version, which is split into multiple new classes in V2.0, such as ${\tt temporary-barrier}$, ${\tt traffic-island}$ and ${\tt void-ground}$ (Line139). Our new experiments on Mapillary demonstrate that our solutions generalize to such real-world scenarios. We added a remark on this point in Line153-159.\n3. Our work *does* consider the long-tail distribution using the real-world datasets iNaturalist and Mapillary (Line129). Long-tail distributions of classes in these datasets are depicted in the supplement (Figures 3 and 4 in the supplement). Our solutions work well in the long-tailed scenario (Tables 3 and 4 in the main paper).", " This paper introduces a new problem called Learning with Evolving Class Ontology (LECO) where the objective is to learn finer classes over time. The authors have shown that labeling new data with fine-grained annotation is more valuable and proposed to use techniques from the semi-supervised learning and learning-with-partial-labels literature to utilize the old coarsely-labeled data. The authors have proposed a benchmarking protocol for LECO on top of two image classification datasets, CIFAR100 and iNaturalist, and shown promising results. Strengths:\n\n- Tackles a somewhat new and interesting problem\n\n- The proposed solution is simple and intuitive\n\n- Experiments are extensive and systematic\n\nWeaknesses:\n\n- The technical novelty of the proposed solution is low. This is not a bad thing in general. However, in this particular case, since the proposed problem (LECO) is somewhat similar to the existing continual learning problems and the proposed solution is an ensemble of multiple existing ideas, I find this to be a major weakness.\n\n- I find the current problem setup and solution a bit restrictive. It overlooks a lot of practical concerns. (a) It assumes a negligible cost for data acquisition; if new data is not available, one can only relabel old data. (b) It only expects finer labels in subsequent time periods. What happens if a new category is introduced which doesn't have a parent? The pseudo-labeling approach will mistakenly assign previous coarse-grained classes to this newly introduced category. Filtering the pseudo-labels based on class ontology might work in this case, but if the data is long-tailed (the new class being a head class) this might remove a lot of old data from training. \n\n- The findings are not very surprising. 
If data acquisition cost is ignored and a large portion of data is labeled (which is the case in the current experimental setup) it is expected that a recent SOTA semi-supervised method will be able to reach a performance closed to supervised upper-bound accuracy trained with all data.\n - How were the levels selected for the iNaturalist-LECO experimental setup?\n\n- The proposed method uses a large validation set. Is the final performance sensitive to the validation set size? Using such a large validation set to search hyperparameters is not very realistic. Were the upper-bound hyperparameters searched in a similar manner? \n\n- What happens if the fine-grained splitting of a corase-grained class is long-tailed. Most of the pseudo-labels for the coarse-grained old class will be assigned to the head fine-grained child class. \n The authors have adequately addressed the limitations and potential negative societal impact of their work.", " The paper introduces the problem of Learning with Evolving Class Ontology (LECO), a general case of Class Incremental Learning, where instead of introducing new classes at each time step, we refine existing class labels into more granular ones. The authors ask whether, given incoming data and a fixed annotation budget, it is better to annotate the new data with the new labels or re-annotate the old data. In other words, whether it is better to train a classifier on a smaller but homogenous dataset or a larger one with a mix of old (coarse) and new (fine) labels. Based on experiments on CIFAR and iNaturalist, the authors conclude that by using semi-supervised learning and incorporating the hierarchical taxonomy, it is possible to achieve excellent classification accuracy with a heterogeneous dataset. In particular, by using pseudo-labelling and Learning-with-Partial-Labels, it is possible to get close to the performance of a classifier trained on a homogenous dataset of the same size. The main strength of the paper is clarity. The structure is good, the writing is easy to understand, and the results are presented in a logical, incremental order. The authors precisely state the research questions and contributions and briefly summarise the conclusions in the introduction, which helps to guide the reader. The paper includes relevant details to reproduce the experiments.\n\nI have no reservations about the quality of the submission. The experimental setup is sound, and the proposed models seem appropriate to tackle the presented problem. The authors evaluate their methods on two different datasets, modified to match the evolving ontology scenario. As mentioned below, the evaluation could be improved by including related work.\n\nThe biggest weakness of the submitted work is originality. While the LECO setting appears novel and could be an interesting type of class-incremental learning, limiting it to two time steps and giving the model access to all the data turns it into fine classification with coarse supervision, a problem tackled by other omitted work, such as:\n\n- A Weakly Supervised Fine Label Classifier Enhanced by Coarse Supervision, Taherkhani et al. 2019\n- Weakly Supervised Image Classification with Coarse and Fine Labels, Lei et al. 2017\n- A Pseudo-Label Method for Coarse-to-Fine Multi-Label Learning with Limited Supervision, Hsieh et al. 2019\n- From Categories to Subcategories: Large-scale Image Classification with Partial Class Label Refinement\n\nThe authors combine existing methods in a novel way, which could be a valuable contribution. 
However, the lack of comparison to previous approaches makes it hard to judge whether their work advances state of the art and therefore undermines the significance of their findings. My suggestion would be to either extend the work to multiple TPs or focus on the problem of fine classification with coarse supervision: remove the LECO formulation and compare their method with existing work. I think either would make the submission much stronger. In the \"Technical Insights\" section of the introduction, we read that \"Under the LECO protocol of the iNaturalist benchmark (Section 3), (1) labelling new data outperforms relabeling the old: 73.3% vs 70.9% in accuracy;\". I couldn't find these numbers in the result tables, so it is unclear how the label mismatch is handled in this base case. Is the strategy to learn a feature extractor on the old data and fine-tune it on the new data?\n\nIn the \"Benchmarks\" section of Chapter 3, the authors claim that \"using all the labelled data is realistic because, for example, safety-critical systems (e.g., autonomous vehicles) must not trade off recognition accuracy with parsimonious memory buffer.\" While the buffer size is indeed arbitrary, for any practical continual learning system, it has to be *finite*. If we extend the LECO setting to multiple TPs, the buffer size will eventually become an important consideration.\n\nOne of the strategies in Chapter 4 is to \"train an entire network from scratch with random weights.\" The phrasing is unclear here. Does it mean \"train a randomly initialised neural network\"?\n\nAnother strategy in Chapter 4 is to train a classifier on top of a randomly initialised feature extractor. What is the purpose of this? Is it to provide a lower bound on the classifier performance?\n\nIn Section 6.2., the first strategy is described as follows: \"we filter out pseudo-labels that do not align with the ground-truth coarse labels.\" What does filter out mean in this context? Is the data point rejected? Is the coarse label used instead? The authors correctly identified an essential caveat in their research question: obtaining new data is generally equally or more costly than annotating it. Re-annotation is also usually quicker than annotating from scratch because the annotator can use existing labels to constrain the task to a handful of classes.\n\nOne significant problem mentioned but not explored enough is the assumption that the findings would generalise to multiple TPs. In particular, it would be interesting to see the performance of Learning-with-Partial-Labels with multiple levels of granularity.", " The paper introduces a new continual learning problem setup, where class vocabulary becomes more fine grained in a continual fashion. Different than classic continual learning this setup allows access to the historical examples. 
Thus it is not prone to catastrophic forgetting.\n\nThe paper explores several research questions, like when the vocabulary evolves whether to annotate new data or relabel the old data (without collecting new data), how to leverage coarse label (of old data), and whether to finetune the model trained on old data, or train from scratch.\n\nThe paper show that a semi supervised approach, that only requires labelling the new data without relabeling the old data, is almost equivalent to relabeling all the data (both old and new).\n\nThe approach uses the new data to provide pseudo labels for the old data, and use the old data coarse labels to resolve reconcile conflicts between the pseudo fine labels and the true coarse labels.\n \n### Strengths\n\n* The paper is well written, very easy to read, and provides decent baselines and upper bounds.\n\n* The problem setup is important from a practical standpoint, and novel to the best of my knowledge (I am not an expert in continual learning, or hierarchical learning).\n\n* The suggested approach saves relabeling efforts, which can become quadratic with the number of \"time periods\".\n\n\n### Weaknesses\n\n* Although the problem setup is general, and allowing multiple \"time periods\" (TP), in practice the experiment are only with a single TP. I think this is the major weakness of the paper, because it is not clear how well the approach generalizes, as the labels become more fine grained.\n\n* The approach was only evaluated on two datasets. It would be useful to provide experiments for a different data domain (not vision). \n\n* The problem setup ignores the fact that aside from storage, retraining on the old samples may be very costly from a compute standpoint. E.g. retraining a model with a scale of Giga samples like a in autonomous driving may cost millions of dollars. There are recent works like [1,2,3] that try to alleviate this problem. I suggest to discuss it in the related work.\n\n* This setup may introduce biases. There may be a distribution shift when collecting more fine grained labels. It would have been beneficial to add this type of bias to the benchmark, and demonstrate how sensitive is the approach to such a shift.\n\n* It is unclear what are the open questions and the next steps. If the approach reaches the upper bound, does it mean that this problem is solved?\n\n* Experiments: There is a better upper bound. Which is utilizing both the old and new labels for the old data \n\n\n### References\n[1] Houlsby et al. Parameter-Efficient Transfer Learning for NLP, ICML 2019\n\n[2] Li et al, Cross-domain Few-shot Learning with Task-specific Adapters, CVPR 2022\n\n[3] Cohen et al, \"This is my unicorn, Fluffy\": Personalizing frozen vision-language representations, ECCV 2022\n \n1. It is not clear in what the new problem setup is different than LPL\n\n\n2. The approach is somewhat similar to [4], which was applied to a different problem setup. I suggest citing it in the related work.\n\n###References\n[4] Goel et al, Not All Relations are Equal: Mining Informative Labels for Scene Graph Generation, CVPR 2022 Ok,\n\nSee my feedback in the weakness section\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "UWp470cV8G7", "3IiD-LeZ6lX", "E_uwV6_trR", "PBl3sqOd5M2", "ipYiMikSC52", "GPVb_QHC_FY", "GPVb_QHC_FY", "h96p-BtWSuC", "h3ETQ0iGko", "VXQtlKLsLF2", "nSo92ZH_wMJ", "I0D9lbZHJbo", "nips_2022_i7WqjtdD0u", "nips_2022_i7WqjtdD0u", "nips_2022_i7WqjtdD0u" ]
nips_2022_x2WTG5bV977
The Curse of Low Task Diversity: On the Failure of Transfer Learning to Outperform MAML and their Empirical Equivalence
Recently, it has been observed that a transfer learning solution might be all we need to solve many few-shot learning benchmarks -- thus raising important questions about when and how meta-learning algorithms should be deployed. In this paper, we seek to clarify these questions by 1. proposing a novel metric -- the {\it diversity coefficient} -- to measure the diversity of tasks in a few-shot learning benchmark and 2. by comparing MAML and transfer learning under fair conditions (same architecture, same optimizer and all models trained to convergence). Using the diversity coefficient, we show that the popular MiniImagenet and Cifar-fs few-shot learning benchmarks have low diversity. This novel insight contextualizes claims that transfer learning solutions are better than meta-learned solutions in the regime of low diversity under a fair comparison. Specifically, we empirically find that a low diversity coefficient correlates with a high similarity between transfer learning and Model-Agnostic Meta-Learning (MAML) learned solutions in terms of accuracy at meta-test time and classification layer similarity (using feature based distance metrics like SVCCA, PWCCA, CKA, and OPD). To further support our claim, we find this meta-test accuracy holds even as the model size changes. Therefore, we conclude that in the low diversity regime, MAML and transfer learning have equivalent meta-test performance when both are compared fairly. We also hope our work inspires more thoughtful constructions and quantitative evaluations of meta-learning benchmarks in the future.
Reject
The paper performs an empirical study comparing transfer learning and MAML (as a meta-learning method) through the lens of task diversity. When the task diversity is low, the authors claim that the performance of MAML and transfer learning methods is similar under a fair comparison (e.g., same architecture, optimizer, etc.). All reviewers are on the negative side for this paper due to weak experimental support, poor write-up, weak novelty, etc., and the authors also failed to convince the reviewers through their rebuttal responses. Hence, the AC cannot recommend acceptance in its current form. In particular, the AC agrees with the reviewers' remarks that "the paper looks kind of an intermediate work on the way to its finalized version" and "I couldn't understand the clear takeaway or message from the paper except for certain empirical insights". The AC thinks that this paper would become much stronger if the authors could propose new and better meta-learning benchmarks using the insights obtained through their analysis.
train
[ "ZW9MPODnxe", "W8i2v2Qt1Gh", "OiezYFIryg", "GX75BQ5JVfW", "6AafxDJfqiL", "KAfrK-mJAgp", "i6zPBBaOkKh", "HVpRm3lwQip", "4-O5PLN94Fb", "uhVB3MINJj", "W96vJbpx-Si", "jema0OvfYf" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you the authros for addressing my concerns raised in the initial review. However, I am not satisfied with the reply from the authors. Please see my comments below.\n\n> Providing different results using different probe networks is more superior than an emsemble approach\n\nThis is quite arguable and I will agree if one can show that the 4 probe networks are not converged to the same point or result in the same bias. The reason I asked for a probabilistic modelling approach is to consider the whole distribution of models, and hence, we can marginalize the effect of model biasing toward to the data, especially in case of point estimation. If the distribution of models are used, the result would help to shape a more convincing observation and conclusion.\n\n> The accuracy indeed can vary a lot, and that is why we provided 95%-confidence intervals with sufficient data points for all our experiments. Previous work used 600 [8] and we followed a similar value.\n\nI still stand my point since reporting the result on a subset of data (500 data points) with confident interval in this case does not provide a good summary on the accuracy result. In addition, previous work used 600 tasks does not mean that they did the correct thing (as shown in the two papers I refered in my initial review).\n\n> accuracy is not a valid way to compare models\n\nI did not say that accuracy is an invalid metric. Instead, looking at accuracy only is insufficient to conclude. Although the authors gave some reasons about the usage of accuracy, I disagree with them all. My main concern is from the class imbalance in machine learning and the issue of *task diversity* here is very similar/related. In the case of class imbalance, simply using accuracy to measure the performance is not the right way. That is why there are many different metrics proposed for such cases. Here, the research tried to investigate the *task diversity* by using the accuracy, and hence, it is insufficient.\n\n> informatic modelling\n\nMy apology for not making it clear. My main suggestion is to investigate the *task diversity* based on distribution or probabilistic modelling. In this case, we have a distribution of tasks where each task is a vector obtained from TASK2VEC. Could we use any tools in probabilistic modelling to conclude about such distribution?", " We’d like to convince you that our 500 samples are enough to make strong statistical inferences about the population -- even if it’s as large as 7,624,512. If we assume the distribution of the data is Gaussian, then we expect to see a single mode with an approximate bell curve. If we plot the histogram of task pair distances of the 500 tasks and see this then we can infer our Gaussian assumption is approximately correct. Given that we do see that in figure X, then we can infer our assumption is approximately correct. This implies we can make strong statistical assumptions about the population – in particular, that we have a good estimate of the diversity coefficient using 500 samples. \n\nWe update the supplementary material, section J.2 to address this and show all 15 histograms to back up our argument.", " Responses to the weaknesses:\n\nBullet point 1:\nOur diversity coefficient accurately represents diversity because Task2Vec had extensive experiments showing that their task vectors capture taxonomic and semantic properties of tasks and labels. Therefore, they concluded that Task2Vec was a good vectorial representation of tasks. 
Therefore, the diversity of tasks is computed correctly. In other words, the same strengths are transitively inherited by our method. There might be a concern for outlier tasks due to the averaging operation, but we addressed that in the supplementary section by plotting the heat maps of the individual pair-wise distances of tasks. In addition, for synthetic tasks, the *true* diversity can be computed because we have the true (Hellinger) distance between distributions. Therefore, we show in the supplementary section that the Task2Vec diversity and Hellinger diversity correlate well with an R=0.99 in figure 11.\nThe reviewer mentioned Meta-Dataset, which is an interesting point. The main issue with Meta-Dataset is that preliminary results on its derivatives show that it is a medium/high diversity data set. This is the main reason we deliberately excluded it from our analysis, since it's not the low diversity regime we are studying (see title of paper). Our focus was to 1. analyze if previous work claiming USL was better than MAML was true and 2. if not, why – which we found was due to the low diversity regime. This is why we focused on an in-depth analysis of the low diversity regime, and we decided that Meta-Dataset was out of scope since the low diversity regime is already rich in analysis & novel, as we believe our paper shows. \nIf we included Meta-Dataset, showed it was medium/high diversity, and showed the difference between MAML and USL is non-zero (which is what we would expect & observe in preliminary results), the only place I see it going is in the supplementary section. Meta-Dataset simply isn't the type of data set that explains our observations or puts previous work in context with a fair comparison. We are sincerely curious and respectfully ask: how does JSFh think it would help to include results from medium/high diversity data sets?\n\nBullet point 2:\nThe main reason we chose MAML is because of MAML's simplicity & the richness of analysis we already had. We could have provided more methods to do the analysis but we opted for other types of analysis we thought were more important. For example, Figure 3 compares the difference between MAML and USL in the low diversity regime as the size of the model increases. We initially hypothesized that for small models the meta-learned model (i.e. MAML) would make more of a difference against USL – but our work shows it did not, and they still performed similarly. Simplicity has an additional technical advantage: it makes the comparison most fair since there are fewer complicated moving parts. Initially, we tried to do the comparison by controlling model complexity, but the additional SGD step in MAML makes it unclear how to quantitatively include that into model complexity. Instead, we opted for the simplest algorithms with the same settings, and once the algorithm could not improve further (e.g., had converged or had zero train error) we proceeded to do the analysis. \nHowever, we do not disagree it would be interesting to do more analysis with other methods, but given the time window, we are unsure if we could train enough models to meet the request. If yes, by when would they be needed?\n\nBullet point 3:\nBullet point 3 is the trickiest to address, since this was done by design. 
We discovered that the low diversity regime was already rich in analysis and provided an interesting insight – since we found the setting where in fact MAML was not worse than USL – contrary to what was reported previously *in the same benchmarks*. \nHowever, we actually did provide some analysis of the high diversity regime in figure 4 – where we reproduced the overperformance of USL against MAML from a problem-centric perspective.\nWe do admit that more realistic data sets were missing, but preliminary results shows they do not exhibit low diversity regime. Therefore, we decided it was worthy of a separate in-depth analysis like the one we did here. \nWe will address the comment on “the practical standpoint” of our method. Our method’s most practical impact is directly for AI researchers. In particular, we suggest the following paradigm shift:\nProviding a new tool for the creation of data sets and moving away from only making them larger and larger – instead, we suggest analyzing intrinsic properties of the data itself.\nA novel way to report meta-learning results for researchers. We believe that previous publications were misleading – especially because methods are compared without any control for variables like neural network backbone. People believe meta-learning might not work now – do we really know this?", " Final comment on why we think accuracy is a good metric, besides all the previously mentioned arguments. In addition, we suspect the reviewer might think that, even though two models have the same accuracy performance, they might actually be quite different at test time. This does not apply for any of the benchmarks we tried because the task distribution is the same at train and at test time. For example, for MiniImagenet all tasks were sampled using images from Imagenet (natural images) and processed the same way. So the sampling of tasks at train and test time match. Same with our synthetic experiments. Comparing functions is hard and there is no computationally feasible complete solution. Evaluating accuracy differences seems sufficient given the stated goals & how it’s been done historically.", " A comment on your suggestion with respect to model complexity. One question is what model complexity use that takes into account the gradient steps in MAML? This is one of the reasons we didn’t take the model complexity route since it seems tricky to take an arbitrary adaptation meta-learner step into account. Even if we could do that I’d suggest doing the reverse of your suggestion: use the model complexity to ensure a fair comparison and then compare their test performance (which is what we approximately did). If we do what you suggest i.e. achieve a certain accuracy and then choose the model that is “simpler” according to some model complexity measure – then we are not objective in what “best” since that makes it mean “simpler”. We instead want a fair comparison and let the performance speak for itself – which is why we followed previous work with test accuracy.", " We are happy to add a conclusion that makes the connection explicit of all the points into one coherent argument. The coherent argument is stated in the contribution list but we can emphasize it in a conclusion section. In one sentence: we provide extensive and diverse evidence to back up our main claim that under low diversity, MAML and USL are similar. \n\nIt is true that the diversity coefficient depends on the probe network. However, we did address this issue by providing 4 different probe networks in Table 1. 
This approach is arguably superior to an ensemble – since it doesn't combine all diversities into a single number.\n\nWe did take precautions against outlier tasks. We reported confidence intervals and heat maps for individual task distances in Figures 13, 14, and 12. This should address this issue since we can clearly see the homogeneity of tasks.\nThe accuracy indeed can vary a lot, and that is why we provided 95%-confidence intervals with sufficient data points for all our experiments. Previous work used 600 [8] and we followed a similar value.\n\nWe want to respond to PmXX's raised point on using "only" 500 tasks. The main issue is that combinatorial arguments are insufficient to calculate the *true* number of different tasks. This is why we developed the diversity coefficient. Using the standard bound Choose(L, n) <= (eL/n)^n completely ignores the structure of the data and therefore massively overestimates the number of tasks. Therefore, our justification that 500 tasks is enough rests on the following: \n1. it's close to previous work [8] \n2. we noticed the confidence intervals shrunk sufficiently \n3. the heat maps in the supplementary section, Figures 13 and 14, showed the lack of variation and the absence of outliers.\nHowever, one alternative solution is to treat the problem compositionally. Since tasks are made using the, say, L=64 labels in the data sets, we can instead use those labels as "the core tasks". Now the run time decreases from the exponential O((eL/n)^n) to the quadratic O(L^2).\nNote that we did not do this because we believed the provided reasons were sufficient, but it's easy to compute if the reviewer requires it.\n\nWe are curious why reviewer PmXX thinks accuracy is not a valid way to compare models. The community of machine learning has focused for many years on using such metrics and thus it's unclear why a different one is better, e.g. we aren't using medical images so type errors lack context. In addition, accuracy is the stated metric in the problem statement. Accuracy is great because it is "absolute" in the sense that it's not an arbitrary number like the type of errors regression or losses give. Even normalized L2 errors like NED or R^2 aren't as easy to compare to accuracy. Overall we don't fully appreciate the issues, especially since we are responding to the current trend of claiming USL is better than MAML, and those claims are made wrt accuracy. \n\nHowever, we actually *do* have at least one metric that isn't accuracy to show the similarity of MAML and USL. This is the similarity of the classification layer using SVCCA, PWCCA, LINKCKA, and OPD in figure 2. In particular, there we show that similarity increases at the final layer. This agrees with the accuracy analysis.\n\nWe did not use information-theory-based metrics because 1. estimating entropy in our scenario is hard and open (see Wikipedia entry for entropy estimation). 2. our empirical experiments were based on the theory in the supplementary section K of the paper that depends on the distance of tasks. To expand on this final point, the hypothesis is that if there is no difference between tasks then there is no need to learn to adapt – which is what we tested inspired by the theory in section K. Finally we don't think this point is enough to reject our work because Task2Vec is grounded in extensive experiments, and in addition, we showed that our Task2Vec diversity correlates with the *true* Hellinger distance of tasks in the supplementary section H figure 11. For concreteness, we also sketch below how the diversity coefficient is estimated once the task embeddings are available. 
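The following is a minimal sketch of that estimation (illustrative, not our exact implementation): it assumes the Task2Vec embeddings of the sampled tasks are already computed and stacked into an array, and the function name and shapes are ours for exposition only; the embedding step itself (fitting a probe network and extracting Fisher-information features, as in Task2Vec) is not shown.

```python
import numpy as np

def diversity_coefficient(task_embeddings):
    """Mean pairwise cosine distance between Task2Vec task embeddings.

    task_embeddings: array of shape (num_tasks, embed_dim), one (non-zero)
    Task2Vec vector per sampled few-shot task (e.g., 500 tasks).
    """
    X = np.asarray(task_embeddings, dtype=np.float64)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)  # unit-normalize each task vector
    cos_sim = X @ X.T                                 # all pairwise cosine similarities
    i, j = np.triu_indices(len(X), k=1)               # distinct, unordered pairs only
    return float(np.mean(1.0 - cos_sim[i, j]))        # average cosine distance

# Example: 500 sampled tasks with 100-dimensional embeddings.
# div = diversity_coefficient(np.random.rand(500, 100))
```

The label-level variant mentioned above would simply pass one embedding per label instead of one per sampled task, so that only O(L^2) pairwise distances are needed.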
Therefore, the Hellinger connection is stronger, since Hellinger can be connected to KL and other divergence metrics.\n\n\nMy apologies if I don’t fully understand your final point – but we do not want to misrepresent your suggestions. We believe the main objective is that you want us to change the analysis away from accuracy. I definitely want to understand this deeper because as I expressed earlier I don’t see a serious issue with it. I do understand that two models are not the same if their output is the same, and we don’t claim that. In fact, we point out that the feature layers are different, although the accuracy is very similar, and pointed it out as a fascinating point in figure 2. The main contribution is that the two models are indistinguishable from a performance perspective, which is contrary to what other works claim – e.g. [8] . We are happy to clarify this in the paper if this is the issue. \n", " Hi 3tgb. Thanks for the direct, honest and concise review. \n\nResponse to questions:\nIn Figure 4, all 7 data sets that we trained were on a fully connected network. Thus, our experiments hold for fully connected nets for at least of 7 data sets with varying diversities in the 1D Gaussian setting. In addition, each pair of points in that figure is a data set, which means we evaluated 7 different data sets. We admit these tasks are synthetic, but their true diversity can be computed exactly using the Hellinger distance between distributions – which is impossible for any real task. We do admit it did not occur to us to do transformer-based experiments, since previous work claiming that transfer learning is better than MAML did not use transformers. Our work was mainly in response to that work (e.g. [8] Y. Tian, Y. Wang, D. Krishnan, J. B. Tenenbaum, and P. Isola, “Rethinking Few-Shot Image 325 Classification: a Good Embedding Is All You Need?,” 2020), but we think it’s an interesting idea to include transformer-based experiments in the future (e.g. ViT). But we don’t think it’s essential, since transformer based models have not unambiguously dominated the computer vision yet. We do want to express that to a point, it’s impossible to try all architectures and tried the most relevant ones for few-shot learning. However, we want to emphasize that instead we did an experiment with increasing model size given a fixed architecture (figure 3). Which adds a fascinating perspective to the problem, since currently there seems to be an emphasis on increasing model sizes. Perhaps one would expect meta-learning to help the most when the model is small, and we showed that this is not the case, even with tiny models. We argue this strengthens our hypothesis further in the most meaningful way, since it discards model size as an important factor for meta-learning – especially in the setting where one would expect it to be most influential where small models might benefit the most from meta-learning. \nWe reference the definition of a task in section 3. It is the standard n-way k-shot classification task. We are happy to further clarify if needed. The 500 tasks is a subset of the 64 choose 5 ways to choose a task. Using MiniImagenet as an example, there are 64 labels, and we usually select 5 classes to form a task. The tasks are definitively unique, but given we used 500 – it is likely that some tasks share some labels. But this is not special to our work, this is how few-shot learning has been evaluated in the past. We do think the definition of a task must affect the value of diversity. 
We can easily provide experiments to see the effects of, say, changing the shots, or the number of classes, and see how the diversity coefficient changes if we want to explore that. However, few-shot learning is focused on few shots (e.g. n=5) and that is how we computed our metric. We are unsure what insights it would provide to increase the shots since it wouldn't be few-shot anymore, but it's easy to try. Perhaps you will be interested in looking at figure 11 – where we plot the correlation of the Task2Vec diversity against the *true* Hellinger Diversity for a synthetic task. They correlated positively, and this is interesting because we can compute the true Hellinger distance between distributions in this case. A final comment on the definition of a task: the diversity coefficient computes the difference between tasks, so it attempts to compute the *true way to count* how many truly different tasks there are. Contrast this with counting tasks using 64 choose 5. The latter doesn't take the actual data into account in its counting, and thus is prone to over-counting. We see diversity analogously to how VC-dimension attempts to count the true number of different models in a hypothesis class. We are happy to make these remarks explicit in our paper & explain how we believe the current way to define a task is lacking.\nOh, that is strange. We will for sure fix it ("Line 292, 396, the section number is missing.") – it should have been section K. For that, we used the MIT license.\n", " We respectfully disagree that our results are not meaningful just because they are not mechanistic. Our argument does the following:\nIt avoids the current pattern in machine learning literature that appeals to vague intuition for definitions of "tasks" – especially without quantifiable metrics. We provide a very concrete way to measure the properties of the data and see how those affect different algorithms. \nWe provide a concrete way to go beyond collecting large data sets with vague claims of them being "diverse tasks". We believe we move the field forward and away from appealing to intuition, and instead ground data set creation in quantitative metrics.\nMost importantly, one can now relate the performance of different meta-learning algorithms to these quantitative properties of the data sets. \nTo the best of our knowledge, this goes beyond what is being attempted right now. In the most respectful manner and with a sincere effort to understand your perspective, why do you think our quantitative data-centric approach to meta-learning is not meaningful?\n\nWe do want to make a comment on the mechanistic perspective. We actually do have some results on this in section K of the supplementary section. \nThis is where it is perhaps subjective. We (perhaps incorrectly) assume you think it's important since it's the first point you brought up. But during the research & writing process, we didn't think so – to the point where we relegated it to the supplementary section. We are happy to fix that if the reviewers value it. However, we suspect that a complicated phenomenon – like separating meta-learning algorithms – is more realistically approached through an empirical study.\nHowever, our non-exhaustive theoretical analysis did inspire this work.\n\nWe'd like to address the second weakness point – the one that mentions that our use of Task2Vec is insufficiently novel. 
In addition, our use of these metrics suggests a paradigm shift in the creation of benchmarks – the foundation of machine learning research.\n\nWe committed to improving the writing and the figures for the final version. \n\nAnswers to question:\nQ1 We don’t believe it is obvious that MAML and USL should learn to produce different (or similar) representations. From section K of the theory out, it was reasonable to hypothesize MAML and USL to learn similar representations under low diversity. In short, the experiments were meant to explore a. if they learned the same function since they have very similar accuracy and b. to test the hypothesis inspired by the theory\nQ2 Happy to clarify why diversity matters. The intuition is that the more diverse sampled tasks from a benchmark are, the more the algorithm has to learn to adapt/learn. Section K in the supplementary is an attempt to make that formal. Following that idea, if a task has low diversity, then the algorithm doesn’t need to be learn to learn. This is reflected in the extreme cases in section K: a. the decision function doesn’t need to change if task diversity is zero, or b. the task can be identified so the meta-learner has the capacity of adapting to produce the perfect decision function for that task. In the case where the diversity is low, we’d expect either method to be very similar – which is what we observed. \nQ3 We want to emphasize that our paper was meant to explore the low diversity regime in depth e.g. see the title of our paper. This is because the low diversity setting is already rich in empirical and theoretical analysis. If it is needed, please let us know explicitly and ideally with a reason so that we can act accordingly. Having said that, we are happy to respond. This question is excellent and nuanced. The high diversity regime can be in some cases easier. This is unintuitive, but in the high diversity regime there is more information to discriminate classes. However, in the low diversity setting, it is harder because the model has to work with less variation. This is especially clear in our synthetic experiments, where the tasks are harder to distinguish.\nQ4: We relied implicitly on the original justifications of Task2Vec. They showed that their vectors align with qualitative properties of tasks e.g. it correlates with taxonomic distance, and vectors cluster wrt taxonomies & semantics. We did not include a thorough recapitulation, but we are happy to provide it. Final comment on the relation of Task2vec diversity and “true diversity”. To do this, one needs a way to estimate the distance between the true task distributions. This is not usually available or it’s hard to estimate for high dimensional data. But in our synthetic experiments, we know exactly which Gaussians we used to generate tasks. Therefore, in figure 11 section H we explicitly correlate the Task2Vec diversity with the true diversity. We observed that they correlate well with an R-value of 0.990.\n", " This paper revisits the agreement of recent results of transfer learning methods outperforming meta-learning algorithms in few-shot learning domains, and claims that they are in fact not much different in performance particularly for a dataset with low diversity. To quantify the diversity, a new metric, called diversity coefficient, is proposed by leveraging the vectorized representation of each task obtained by Task2Vec. 
In the empirical study, the authors' claim seems to be valid as the accuracies are pretty much similar between MAML and USL (i.e., a transfer learning method) using Cifar-FS and Mini-ImageNet, both of which have low diversity. (Strengths)\n1. The main claim of this paper sounds very important as it tries to break the common knowledge agreed upon by many recent works in few-shot learning.\n2. Experiments are well designed in a way that the corresponding results properly justify the main claims and questions.\n\n(Weaknesses)\n1. Although the paper starts with ambitious insights, it does not properly give a meaningful conclusion in the end. There is no explanation of why low diversity leads to the similar performance of MAML and USL. Furthermore, when the diversity gets higher, it seems that the common knowledge is still correct as USL starts to outperform MAML. Thus, it would be much more interesting if the paper deeply investigated the hidden relationship between diversity and performance of both methodologies in few-shot learning. In its present form, the paper looks kind of an intermediate work on the way to its finalized version.\n\n2. The proposed diversity metric is adequate, but its novelty is mostly coming from the corresponding preliminary work, i.e., Task2Vec. What is newly proposed in this work is to obtain the expected distance between tasks via their vectorized representations.\n\n3. In terms of presentation quality, this write-up should be further improved and finalized. There are some typos and missing references throughout the paper, and most of the figures are of low quality and hard to read. 1. The experimental results of Figure 2 are somewhat obvious. Are they intended to show that models trained by MAML and USL are not the same in their weights?\n\n2. Could you provide any insights on why diversity matters in the performance of MAML and USL? \n\n3. Given a high diversity dataset, USL still seems to be better than MAML. Doesn't this imply that USL deals with a more challenging learning problem setting better than MAML?\n\n4. How can we trust that the proposed metric indeed reflects the true diversity well? Any experimental justification for that? Potential negative societal impact is neither discussed nor applicable to this work.", " This paper presents a novel metric which quantifies the diversity of few-shot learning benchmarks. The authors compare MAML with transfer learning w.r.t. multiple aspects on two few-shot learning benchmarks. Several empirical discoveries are presented including that transfer learning fails to outperform MAML when the intrinsic diversity is low for the evaluation benchmark. Strengths:\n- The paper is quite easy to follow.\n- The paper compares MAML and transfer learning from an interesting perspective, the so-called \"problem centric\" approach.\n- The idea of rethinking the diversity of datasets is important for analyzing models, and this paper makes a good case for it by conducting empirical evaluations.\n- The diversity coefficient is intuitive.\n\nWeaknesses:\n- The empirical evaluation needs to be stronger. \n- Some parts are not clear.\n- Minor typos.\nPlease see questions for details. 1. In the empirical evaluation, it seems that only classification tasks and ConvNet-based models are used. For making a concrete conclusion, more tasks and diverse architectures (e.g. transformer-based backbone) would be more persuasive. Otherwise, the scope of the conclusion needs to be narrowed down.\n2. The definition of tasks is not clear in the caption of Table 1. 
Are the 500 few-shot learning tasks subsets from the original dataset? Do they have disjoint classes? Does changing the definition of these tasks result in a different diversity coefficient?\n3. Line 292, 396, the section number is missing. The limitation of the work is discussed in Section 6. The potential negative societal impact is not discussed.", " The aim of the paper is to empirically compare the performance of transfer-learning (fine-tuning the last fully connected layer in a pre-trained neural network) and meta-learning (and in particular, MAML - an instance of meta-learning). The two learning approaches of interest are investigated from different points of view: the intrinsic diversity of the dataset used, the similarity between their features at some intermediate layers of the neural networks used, and different model sizes. The empirical results show that MAML and fine-tuning a pre-trained model are equivalent in terms of performance. **Strengths**\n\nThe major strength of the paper is to study the difference between meta-learning and transfer-learning. This is an interesting research direction to understand which method performs better in which setting. Also, as stated at the end of the introduction section, it provides another point of view in terms of evaluation, not just simply making larger and larger datasets.\n\nAnother strength of the paper is to quantify the characteristics of tasks sampled according to the episode setting. The newly-proposed metric, the *diversity coefficient*, is based on the representation of tasks, known as Task2Vec, to calculate the cosine similarity between tasks belonging to the same dataset. This quantifies the \"information\" of the dataset, providing insights into the performances of the model of interest.\n\nOne more strength is that the paper goes through a thorough review of the literature, with an extended related-work section.\n\n---\n\n**Weaknesses**\n\nFirstly, the paper lacks coherence to connect all the contributions together. As I understand, the main goal is to show that meta-learning and transfer learning share similar performances whether or not the dataset used is diverse enough. However, the presentation and writing made it hard for me to get the point.\n\nSecondly, the newly-proposed metric, known as the *diversity coefficient*, is a straightforward extension of Task2Vec, which heavily relies on the *probe* network used. Thus, I believe that using a single network is not enough, but we might need an ensemble of networks to marginalize out the bias of *probe* networks. In addition, the proposed metric might not work well in the presence of outlier tasks since it is based on the expected (or average) similarity distance. Furthermore, as shown in (Dhillon et al., 2020, Figure 1) and (Nguyen et al., 2021, Figure 1), the accuracy of different tasks evaluated by the same model varies significantly. Thus, simply picking a small number of tasks, e.g. 500 tasks as mentioned in Table 1, to calculate the metric is inadequate. And since the number of tasks formed from a dataset is very large, e.g. there are $n = \binom{64}{5} = 7,624,512$ 5-way classification tasks from the training set of mini-ImageNet, the number of distances to calculate would be $\mathcal{O}(n^{2})$, making this metric intractable when there are more classes. Of course, we might not need to take all tasks into account, but only consider the set of *core* tasks. 
However, finding that set is another problem.\n\nFinally, the evaluation to compare meta-learning and transfer-learning is based on classification accuracy on two datasets only. I believe that this might be insufficient to reach a clear conclusion.\n\n**References**\n- Guneet S Dhillon, Pratik Chaudhari, Avinash Ravichandran, and Stefano Soatto. "A baseline for few-shot image classification". In *International Conference on Learning Representations*, 2020.\n- Cuong C. Nguyen, Thanh-Toan Do, and Gustavo Carneiro. "Probabilistic task modelling for meta-learning". In *Uncertainty in Artificial Intelligence*, 2021. The proposed *diversity coefficient* based on an average distance is not very convincing. I wonder if the diversity of a dataset can be calculated informatically. What I mean is that given a set of vectors representing tasks (obtained from Task2Vec), could we estimate the task distribution and explicitly calculate the entropy of that distribution, or implicitly calculate the entropy from those vectors? The reason for entropy is that the higher the entropy, the more informative that task distribution, which, to me, means the higher the diversity.\n\nI have a concern about the comparison between meta-learning and transfer-learning. Since both are using the blackbox approach, calculating the feature similarity of hidden layers might not be a good idea. The reason is that if two things are similar at the output, it does not mean that their components must be the same. Since their classification accuracy is similar, I suggest looking at the model complexity, and we can rely on Occam's razor principle to see which method is better. Or, could we use a different evaluation metric, such as calibration error, to see how different the two methods are? The current form of the paper is quite limited since it is applicable to only one instance of meta-learning (MAML).", " The paper analyses some of the earlier claims about a simple transfer learning baseline outperforming SOTA meta-learning methods through the lens of task diversity. The paper posits that, with a fair comparison (e.g., same architecture, optimizer, etc.), MAML and transfer learning methods are statistically similar when the meta-training task diversity is low. Strengths:\n\t- The paper tackles an important question on the differences between transfer learning and meta-learning methods. It's an important question for the few-shot community whose answer is still not clear except for certain empirical insights. Hence, it's a good problem to study.\n\n\nWhile the problem being studied is important, the paper is lacking in execution and does not have solid insights or takeaways. \n\n- The task diversity metric is not novel, but that is fine if more analysis is shown on why such a metric captures task diversity well. The current diversity metric is computed only on miniImagenet and CIFAR which are toy datasets for few-shot learning currently. It would be a good idea to take a wide range of datasets (e.g. MetaDataset, ORBIT, VTAB) and show how the task diversity metric plays out. I feel the current analysis and subsequent insights are incomplete.\n\n- The empirical results state that under a fair condition, when task diversity is low: MAML and Supervised learning methods are equivalent. Why is only MAML chosen for the analysis? It is quite far from the state-of-the-art meta-learning methods. It would be a good idea to choose more meta-learning methods to provide a complete picture. 
While there exists a plethora of methods, some technically simple but well-performing methods could be chosen (e.g., R2D2, MetaOptNet, FEAT).\n\n- There are no empirical results when the task diversity is high. It would be good to run such an analysis on more real-world datasets.\nI couldn't understand the clear takeaway or message from the paper except for certain empirical insights. From a practical standpoint, it would be good to link the effectiveness of task diversity to how a method might be chosen. E.g., if the task diversity has a direct correlation with certain methods, just computing the task diversity can help choose from a variety of meta-learners.\n The questions are stated in Strengths and Weaknesses. While the paper tackles an interesting and important question on when to use transfer learning vs. meta-learning, the paper falls short of execution in its current form. I would suggest the authors revisit the paper, look into the comments and make the draft stronger with (a) more experiments on the task diversity metric; (b) insights and experiments on how to link task diversity with few-shot methods; (c) extending beyond MAML and testing more meta-learners. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "KAfrK-mJAgp", "KAfrK-mJAgp", "jema0OvfYf", "KAfrK-mJAgp", "KAfrK-mJAgp", "W96vJbpx-Si", "uhVB3MINJj", "4-O5PLN94Fb", "nips_2022_x2WTG5bV977", "nips_2022_x2WTG5bV977", "nips_2022_x2WTG5bV977", "nips_2022_x2WTG5bV977" ]
nips_2022_BWEGx_GFCbL
Stability and Generalization Analysis of Gradient Methods for Shallow Neural Networks
While significant theoretical progress has been achieved, unveiling the generalization mystery of overparameterized neural networks still remains largely elusive. In this paper, we study the generalization behavior of shallow neural networks (SNNs) by leveraging the concept of algorithmic stability. We consider gradient descent (GD) and stochastic gradient descent (SGD) to train SNNs, for both of which we develop consistent excess risk bounds by balancing the optimization and generalization via early-stopping. As compared to existing analysis on GD, our new analysis requires a relaxed overparameterization assumption and also applies to SGD. The key for the improvement is a better estimation of the smallest eigenvalues of the Hessian matrices of the empirical risks and the loss function along the trajectories of GD and SGD by providing a refined estimation of their iterates.
Accept
The paper studies the generalization of a committee machine using algorithmic stability. Compared to previous works, the authors obtain a similar generalization error at smaller width for both GD and SGD. Reviewers had some conflicting opinions about this paper, with major concerns about the limited novelty compared to [46] and the limited interpretability of the generalization bound beyond NTK results. However, they valued the ability to control the bias term in a kernel-free manner, which was left open in [46], and found the stability analysis interesting and promising. I do therefore recommend acceptance of the paper.
train
[ "9b6r2YXH-m", "p6O-DOk__zp", "tX9zIgPYAgA", "i8QMqeNKGzR", "YD4v72-bbLu", "fyo4MR6Xi6K", "dL2vXim_CVg", "bvwMr7tFc9", "09ff1HjLjC", "rqkQrjcnZiW", "pa2aTKfkNEJ", "nYV1gDB6ZRq", "8Zxp6IzO-T_", "PBxpm46PYkm" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the nice suggestion. We will follow your advice and will move the proof ideas of GD and SGD to the main text in the revised version.", " Thanks for the clarification of the role of Assumption 3 particularly in Theorem 6. The added sections in the appendix during the rebuttal on the proof ideas of GD and SGD are much clearer and, in my opinion, are worth adding to the main text. ", " Thank you very much for the clarification! Your response made the role of assumption 3 clearer, especially the fact that the improvement over previous work is not coming from there but rather from the analysis. It is also very interesting that your work can also incorporate large learning rates, further moving away from the NTK setting.\n\n", " Dear AC and reviewers,\n\nWe would like to thank you for the constructive comments and suggestions. We have posted point-to-point response to your comments, and we sincerely hope it would clarify your concerns. As the Author- Reviewer Discussion phase is about to close, we are very much looking forward to hearing from you about any further feedback. We will be very pleased to clarify any further concerns (if any). Thanks.\n\nBest Regards,\nAuthors\n", " Thank you very much for your constructive comments and suggestions.\n\n**Q: With the bounded norm, one can apply norm-based bounds in uniform convergence without applying stability-based bound. Therefore, could the authors provide a specific case where $\\|\\mathbf{W}^\\*\\|$ is not bounded? a specific case where we can apply the proposed bound when $\\|\\mathbf{W}_t\\|_2$ is not bounded?**\n\n**A**: Thank you for the insightful comment. A key challenge in applying the norm-based generalization bounds is that we only get bounds of $\\mathbf{W}_T$ in expectation instead of with high-probability. As suggested by you, we can apply uniform-convergence to get generalization bounds if we can get bounds of $\\mathbf{W}_t$ with high probability. However, the high-probability analysis on the norm of $\\mathbf{W}_t$ is much more challenging as this requires several concentration inequalities on martingale sequences and empirical process. This is even more challenging if we want to derive optimistic bounds. The analysis in [46] shows that $\\|\\mathbf{W}_t\\|_2=O(\\sqrt{\\eta t})$. This bound goes to infinity as $t$ increases and holds almost surely. We are not sure whether our analysis can imply a finite bound for $\\|\\mathbf{W}_t\\|_2$ with high probability. We will add discussions regarding this in the revised version, and will consider this interesting question in the future study.\n\n**Q: The authors need to distinguish between the subscript $\\mathbf{W}_T$ and $\\mathbf{W}_\\{1/\\eta T\\}$.**\n\n**A**: Thank you for the comment. We note the similarity between these two notations. Therefore, we introduce asterisk in the notation $\\mathbf{W}^\\*_{1/\\eta T}$. We will emphasize this in the revised version.", " Thank you very much for your constructive comments and suggestions.\n\n**Q: I think it is worth to add that citation to the related works since uniform convergence-based bounds are already discussed.**\n\n**A**: Thank you for indicating the related work on the uniform convergence, which would make our stability analysis more convincing. We have added this interesting reference in our discussion of the uniform-convergence approach for deep learning in the rebuttal revision (line 102).\n\n**Q**: The authors show Assumption 3 holds if $\\mathbf{W}^\\*$ have constant norm (not completely obvious). 
Is assumption 3 met for some image classification task? If yes, with what alpha? This seems tricky to check since we need to know $\mathbf{W}^\*$. It is not obvious how the analysis profits from assumption 3, which is not present in previous works. I guess the optimization analysis is not the same as in [46] and assumption 3 hence implicitly shows up here? I would find it very helpful if the authors could clarify the role and intuition of assumption 3.\n\n**A**: Thank you for the insightful comment. Motivated by your comment, we have modified Theorem 6 on the excess risk bounds. In the rebuttal revision, we remove Assumption 3 in Theorem 6 and get\n\n $$\n \mathbb{E}[L(\mathbf{W}_T)] - L(\mathbf{W}^\*) = O\Big(\frac{\eta TL(\mathbf{W}^\*)}{n}+\Lambda_{\frac{1}{\eta T}}\Big),\n $$\n\n where\n\n $$\n \Lambda_\lambda:=\inf_{\mathbf{W}}\big(L(\mathbf{W})+\lambda\|\mathbf{W}-\mathbf{W}_0\|_2^2\big)-L(\mathbf{W}^\*).\n $$\n\n This matches the analysis in [46] but relaxes the overparameterization from $m \gtrsim (\eta T)^5$ in [46] to $m \gtrsim (\eta T)^3$. Therefore, our improvement over [46] comes from our analysis and does not rely on Assumption 3. In Corollary 7, we get explicit rates by imposing Assumption 3. Intuitively, Assumption 3 tells us how fast we can approximate the target model $\mathbf{W}^\*$ within a ball of radius $R$. For example,\n if we assume \n\n$$\n\min_{\|\mathbf{W}\|_2\leq R}L(\mathbf{W})-L(\mathbf{W}^\*)\leq c_\alpha' R^{\frac{2\alpha}{\alpha-1}},\n$$ \n\nthen Assumption 3 holds. Indeed, let \n\n$$\n\mathbf{W}'_{R}=\mbox{argmin}_{\|\mathbf{W}\|_2\leq R}L(\mathbf{W})-L(\mathbf{W}^\*).\n$$\n\nWe then have\n\n $$\n \min_{\mathbf{W}}L(\mathbf{W})-L(\mathbf{W}^\*)+\lambda\|\mathbf{W}\|_2^2 \leq L(\mathbf{W}_R')-L(\mathbf{W}^\*)+\lambda\|\mathbf{W}_R'\|_2^2\n \leq c_\alpha' R^{\frac{2\alpha}{\alpha-1}}+\lambda R^2.\n $$\n\n If we choose $R=\lambda^{\frac{\alpha-1}{2}}$ then Assumption 3 holds as follows:\n\n $$\n \min_{\mathbf{W}}L(\mathbf{W})-L(\mathbf{W}^\*)+\lambda\|\mathbf{W}\|_2^2\leq c_\alpha'\lambda^{\frac{\alpha-1}{2}\frac{2\alpha}{\alpha-1}}+\lambda\lambda^{\alpha-1}=(c_\alpha'+1)\lambda^\alpha.\n $$\n\n The parameter $\alpha$ also depends on the regularity of the unknown $\mathbf{W}^\*$, which is not easy to check in practice. However, this assumption is common in approximation error analysis. For example, in the kernel learning setting [18, 52] people often impose an assumption such as $\min_f L(f)-L(f^\*)+\lambda\|f\|_K^2=O(\lambda^\alpha)$. The parameter $\alpha$ reflects the regularity of the optimal function $f^\*$: $\alpha$ increases to $1$ if $f^\*$ becomes more regular.\n\n**Q: Readability would greatly benefit from reducing the number of Theorems, Lemmas and Corollaries in the main text. Explicitly writing out the exact width requirements as well as other bounds hinders readability as well and makes statements rather cluttered.**\n\n**A**: Thank you for the suggestion on the organization of the paper. We will put several theorems/lemmas in the appendix, and only leave the main results in the main text. We will also use big-O notation and leave the exact forms in the appendix to improve the readability of the paper.\n\n**Q: The NTK is associated with large width but it also strongly relies on small learning rates. 
How do the learning rates in this work ($\\eta T \\approx n$ in the noiseless case) compare to the minimal learning rates needed to be in the NTK regime, i.e. Theorem 2.1 in [1]? Are we also operating outside of the NTK regime in terms of learning rate size?**\n\n**A**: Thank you for the comment. For gradient descent, our excess population risk bounds hold if $\\eta=1/(2\\rho)$. Since $\\rho\\leq C_x^2\\big(B^2_{\\phi'}+B_{\\phi''}B_\\phi+B_{\\phi''}C_y\\big)$, the learning rate can be larger than $1/\\big(2C_x^2\\big(B^2_{\\phi'}+B_{\\phi''}B_\\phi+B_{\\phi''}C_y\\big)\\big)$, which is independent of $m$ and $n$ and is outside of the NTK regime. As a comparison, Theorem 2.1 in Lee et al. (2019) requires $\\eta\\leq2/\\lambda_{\\max}(\\Theta)$, where $\\Theta\\in\\mathbb{R}^{(md)\\times(md)}$ is a neural tangent kernel. Therefore, the learning rate in Lee et al. (2019) is small. We will add discussions in the revised version.\n\n[1] Wide Networks of Any Depth Evolve as Linear Models Under Gradient Descent, Jaehoon Lee et al.\n\n[2] Uniform Convergence May be Unable to Explain Generalization in Deep Learning, Vaishnavh Nagarajan, Zico Kolter", " Thank you very much for your constructive comments and suggestions.\n\n**Q: It also suffers from the same limitations as the original paper, where the number of parameters still depends on $T$ and early stopping is required even for the noiseless or low-noise case. As these questions still remain, it is hard to evaluate the impact of reducing the scale of width from $(\\eta T)^5$ to $(\\eta T)^3$.**\n\n**A**: Thank you for the comment. According to Corollary 7, we should set $\\eta T\\asymp n^{\\frac{1}{\\alpha+1}}$. Then we get an improvement over [46] by a factor of $(\\eta T)^2\\asymp n^{\\frac{2}{1+\\alpha}}\\geq n$, which is significant if $n$ is large. It would be very interesting to develop risk bounds in a low-noise setting without early stopping. A starting point would be the recent work \"Stability vs Implicit Bias of Gradient Methods on Separable Data and Beyond\" by M. Schliserman and T. Koren, where it was shown that SGD/GD can run for a larger number of iterations without overfitting on separable data. We will leave it as future work.\n\nMoreover, we provide a stability and generalization analysis for SGD, while the techniques in [46] cannot apply, e.g., the critical estimation $ \\|\\mathbf{W}_t - \\mathbf{W}_0\\|_2 \\le \\sqrt{2\\eta t L_S(\\mathbf{W}_0)}$ used in [46]. Indeed, it was mentioned as Remark 1 in the paper [47] ([48] of the rebuttal revision), by the same author as [46], as a challenging open question to derive the stability and generalization of SGD.\n\n**Q: The justification for Assumption 3 is not very convincing as $\\|\\mathbf{W}^\\*\\|_2=O(1)$ is a strong requirement, since $\\mathbf{W}^\\*$ is a $d\\times m$-matrix. Can the authors provide more justification for this as this seems central to the analysis, e.g. Theorem 6?**\n\n**A**: Thank you for the comment. We have modified Theorem 6 in the rebuttal revision. In this version, we remove Assumption 3 in Theorem 6 and get the following bound\n\n $$\n \\mathbb{E}[L(\\mathbf{W}_T)] - L(\\mathbf{W}^\\*) = O\\Big(\\frac{\\eta T L(\\mathbf{W}^\\*)}{n}+\\Lambda_\\{\\frac{1}{\\eta T}\\}\\Big),\n $$\n\n where\n\n $$\n \\Lambda_\\lambda:=\\inf_{\\mathbf{W}}\\big(L(\\mathbf{W})+\\lambda\\|\\mathbf{W}-\\mathbf{W}_0\\|_2^2\\big)-L(\\mathbf{W}^\\*).\n $$\n\n This matches the bounds in [46] but is derived under a relaxed overparameterization $m \\gtrsim (\\eta T)^3$. 
This shows that our improvement over [46] does not come from Assumption 3 but from our analysis. Furthermore, if the optimal $\\mathbf{W}^\\*$ is sparse, then $\\mathbf{W}^\\*$ has a finite norm. Assumption 3 amounts to saying that there exists a model with a controlled norm and accuracy comparable to $L(\\mathbf{W}^\\*)$. For example,\n if we assume \n\n$$\n\\min_\\{\\|\\mathbf{W}\\|_2\\leq R\\}L(\\mathbf{W})-L(\\mathbf{W}^\\*)\\leq c_\\alpha' R^{\\frac{2\\alpha}{\\alpha-1}},\n$$ \n\nthen Assumption 3 holds. Indeed, let \n\n$$\\mathbf{W}'_{R}=\\mbox{argmin}_\\{\\|\\mathbf{W}\\|_2\\leq R\\}\\big(L(\\mathbf{W})-L(\\mathbf{W}^\\*)\\big).$$\n\nWe then have\n\n $$\n \\min_{\\mathbf{W}}L(\\mathbf{W})-L(\\mathbf{W}^\\*)+\\lambda\\|\\mathbf{W}\\|_2^2 \\leq L(\\mathbf{W}_R')-L(\\mathbf{W}^\\*)+\\lambda\\|\\mathbf{W}_R'\\|_2^2 \\leq c_\\alpha' R^{\\frac{2\\alpha}{\\alpha-1}}+\\lambda R^2.\n $$\n\n If we choose $R=\\lambda^{\\frac{\\alpha-1}{2}}$, then Assumption 3 holds as follows:\n\n $$\n \\min_{\\mathbf{W}}L(\\mathbf{W})-L(\\mathbf{W}^\\*)+\\lambda\\|\\mathbf{W}\\|_2^2\\leq c_\\alpha'\\lambda^{\\frac{\\alpha-1}{2}\\cdot\\frac{2\\alpha}{\\alpha-1}}+\\lambda\\cdot\\lambda^{\\alpha-1}=(c_\\alpha'+1)\\lambda^\\alpha.\n $$", " Thank you very much for your constructive comments and suggestions.\n\n**Q: Comparison to the literature is somewhat lacking, in particular in terms of rates (e.g. to [46] and/or to [1\\*]), or at least a discussion why it is challenging is missing.**\n\n**A:** Thank you for the comment. We have modified Theorem 6 and Theorem 13 to make our results comparable to the results in [46]. In the current rebuttal revision, we remove Assumption 3 in Theorem 6 and get\n\n $$\n \\mathbb{E}[L(\\mathbf{W}_T)] - L(\\mathbf{W}^\\*) = O\\Big(\\frac{\\eta T L(\\mathbf{W}^\\*)}{n}+\\Lambda_\\{\\frac{1}{\\eta T}\\}\\Big),\n $$\n\n where\n\n $$\n \\Lambda_\\lambda:=\\inf_{\\mathbf{W}} \\big(L(\\mathbf{W})+\\lambda\\|\\mathbf{W}-\\mathbf{W}_0\\|_2^2\\big)-L(\\mathbf{W}^\\*).\n $$\n\n This matches the bound in [46], and, if we impose Assumption 3, the analysis in [46] implies rates similar to Corollary 7. We would like to mention that the key improvement in our work is that we relax the assumption $m \\gtrsim (\\eta T)^5$ in [46] to $m \\gtrsim (\\eta T)^3$, which leads to a much better relaxation of the overparametrization condition (the relation between $m$ and $n$), as summarized in Table 1 in the revised version. In particular, if $\\alpha=1$, our results indicate that both GD and SGD for 2-layer SNNs with subquadratic overparametrization $m \\gtrsim n^{3/2}$ can lead to the optimal risk rate $O(n^{-1/2})$, while the results in [46] always need superquadratic overparametrization $m\\gtrsim n^{5/2}$.\n\n**Q: Sometimes the narrative is unclear, bits of the proof ideas are introduced here and there. Perhaps it would be beneficial to have a separate proof idea section.**\n\n**A**: Thank you for the nice suggestion. We agree and have added sections on the proof ideas to clarify them. Please see Sections B.1 and C.1 in the rebuttal revision.\n\n**Q: Finally $\\alpha$ makes its way into the exponent of the excess risk rate (as in the ridge regression case). At this point one would expect some comparison of rates: for instance [46] showed some rates when the target is in an RKHS, or GD in the nonparametric setting [1\\*].**\n\n**A**: Thank you for pointing out the very interesting work [1*], which we were not aware of. We will cite this work and discuss the related results. [1*] established the generalization of GD on overparameterized neural networks. 
This work imposes an assumption that the optimal model $f^\\*$ lies in the RKHS of an NTK, which amounts to saying that the approximation error satisfies $\\min L(f)-L(f^\\*)+\\lambda\\|f\\|_K^2=O(\\lambda)$, where $\\|\\cdot\\|_K$ denotes the norm in the RKHS. The paper [1*] studies GD for a one-hidden-layer ReLU network with $L_2$ regularization from the NTK perspective and derives the appealing minimax optimal rate under the assumption that the width $m$ of the network is sufficiently large (e.g., $m$ is at least larger than $O(n^8)$, as we can see from the proofs of Theorems 5.1 and 5.2 there). However, it is hard to derive a direct comparison since we study GD and SGD for a one-hidden-layer network with a smooth activation function. We will add discussions in the revised version.\n\n**Q: How large should the width be in terms of the sample size when $\\eta T$ is set w.r.t. the sample size? Is overparameterization mild (i.e. subquadratic)? Actually, Table 1 could also include the order of the width in terms of the sample size.**\n\n**A**: Thank you for the suggestion. According to Corollary 7, the width should be of the order of $n^{\\frac{3}{\\alpha+1}}$ in the general case. In particular, if $\\alpha=1$ we get $m\\asymp n^{\\frac{3}{2}}$, which indicates subquadratic overparameterization. In the low-noise case, we need to set $m\\asymp n^3$ to get the improved rate. We have added this important information on the width to Table 1 of the rebuttal revision.\n\n[1*] Hu, T., Wang, W., Lin, C., \\& Cheng, G. (2021, March). Regularization matters: A nonparametric perspective on overparametrized neural network. In International Conference on Artificial Intelligence and Statistics (pp. 829-837). PMLR.", " Thank you very much for your constructive comments and suggestions.\n\n**Q: The authors note on Line 182 the bound in [46] is $m > (\\eta T)^3$; this is very close to the bound $m > (\\eta T)^3 n^{-2 \\alpha/(1+\\alpha)}$. These are minor technical improvements and one wonders whether we are learning anything new that gives a unique insight. The proof of Lemma 3, the comments on Lines 196-200 and Remark 3 suggest $W^\\*_{1/{\\eta T}}$ or $W^\\*$ are close to the initialization $W_0$ and thereby close to $W_t$ as well. I have similar reservations about Theorem 5.**\n\n**A:** Thank you for the comment. If $\\alpha=1$, our requirement $m \\gtrsim (\\eta T)^3 n^{-\\frac{2\\alpha}{1+\\alpha}} = (\\eta T)^3 n^{-1}$ in the stability analysis is sharper than the one in [46] by a factor of $n$, which is significant if $n$ is large. The intuitive insight behind this improvement is that the smallest eigenvalue of the Hessian matrix of the empirical risk between $\\mathbf{W}_t$ and $\\mathbf{W}_t^{(i)}$ scales as $-\\frac{1}{\\sqrt{m}}(\\|\\mathbf{W}_t-\\mathbf{W}_t^{(i)}\\|_2+1)$. The analysis in [46] uses the crude bound $\\|\\mathbf{W}_t-\\mathbf{W}_t^{(i)}\\|_2=O(\\sqrt{\\eta t})$ based on the observation $ \\|\\mathbf{W}_t - \\mathbf{W}_0\\|_2 \\le \\sqrt{2\\eta t L_S(\\mathbf{W}_0)}$ for GD, while we use a better bound $\\|\\mathbf{W}_t-\\mathbf{W}_t^{(i)}\\|_2=O(n^{-1}(\\eta t)^{\\frac{3}{2}})$ based on the observation that $\\mathbf{W}_t$ and $\\mathbf{W}_t^{(i)}$ are produced by SGD on neighboring datasets.\n\n To control optimization errors, the analysis in [46] uses the crude bound $\\|\\mathbf{W}_t-\\mathbf{W}_0\\|_2=O(\\sqrt{\\eta t})$. 
As a comparison, we show that the expectation of the norm of $\\mathbf{W}_t$ is uniformly bounded, i.e., $\\mathbb{E}[\\|\\mathbf{W}_t\\|_2]=O(1)$, under some conditions which suffice to derive risk bounds. This key new estimation allows us to relax the assumption $m \\gtrsim (\\eta T)^5$ in [46] to $m \\gtrsim (\\eta T)^3$. Note that $\\eta T\\asymp n^{\\frac{1}{\\alpha+1}}$, so our overparameterization requirement is sharper than the one in [46] by a factor of $(\\eta T)^2\\asymp n^{\\frac{2}{\\alpha+1}}\\geq n$.\n\n\n Moreover, we provide stability and generalization for SGD, while the techniques in [46] cannot apply, e.g., the critical estimation $ \\|\\mathbf{W}_t - \\mathbf{W}_0\\|_2 \\le \\sqrt{2\\eta t L_S(\\mathbf{W}_0)}$ used in [46]. Indeed, it was mentioned as Remark 1 in the paper [47] ([48] of the rebuttal revision), by the same author as [46], as a challenging open question for deriving the stability and generalization of SGD.\n\n**Q: There is a typo on Line 120, the empirical risk should be divided by n.**\n\n**A:** Thank you for the careful reading. We have corrected it in the rebuttal revision.\n\n**Q: The relevance of the results rests crucially on alpha. Can you give an intuitive explanation of what the parameter is?**\n\n**A:** Thank you for the comment. Intuitively, the parameter $\\alpha$ tells us how fast we can approximate the target model $\\mathbf{W}^\\*$ within a ball of radius $R$. For example, if we assume $\\min_{\\|\\mathbf{W}\\|_2\\leq R}L(\\mathbf{W})-L(\\mathbf{W}^\\*)\\leq c_\\alpha' R^{\\frac{2\\alpha}{\\alpha-1}}$, then Assumption 3 holds. Indeed, let $\\mathbf{W}'_R=\\mbox{argmin}_\\{\\|\\mathbf{W}\\|_2\\leq R\\}\\big(L(\\mathbf{W})-L(\\mathbf{W}^\\*)\\big)$. We then have\n\n $$\n \\min_{\\mathbf{W}}L(\\mathbf{W})-L(\\mathbf{W}^\\*)+\\lambda\\|\\mathbf{W}\\|_2^2 \\leq L(\\mathbf{W}_R')-L(\\mathbf{W}^\\*)+\\lambda\\|\\mathbf{W}_R'\\|_2^2\n \\leq c_\\alpha' R^{\\frac{2\\alpha}{\\alpha-1}}+\\lambda R^2.\n $$\n\n If we choose $R=\\lambda^{\\frac{\\alpha-1}{2}}$, then Assumption 3 holds as follows:\n $$\n \\min_{\\mathbf{W}}L(\\mathbf{W})-L(\\mathbf{W}^\\*)+\\lambda\\|\\mathbf{W}\\|_2^2\\leq c_\\alpha'\\lambda^{\\frac{\\alpha-1}{2}\\cdot\\frac{2\\alpha}{\\alpha-1}}+\\lambda\\cdot\\lambda^{\\alpha-1}=(c_\\alpha'+1)\\lambda^\\alpha.\n $$\n Assumption 3 is motivated by the approximation analysis in kernel learning. Let $K$ be a Mercer kernel and $H_K$ be the associated reproducing kernel Hilbert space with the norm $\\|\\cdot\\|_K$. In kernel learning, we often impose an assumption on the decay of the approximation error as follows [18, 52]:\n\n $$\n \\min_{f\\in H_K}L(f)-L(f^\\*)+\\lambda\\|f\\|_K^2\\leq c_\\alpha\\lambda^\\alpha.\n $$\n This assumption is related to the regularity of the target function $f^\\*$. For example, if $f^\\*$ lies in the range of a fractional power of an integral operator, then the above assumption holds.\n We adapt this assumption to learning with shallow neural networks.\n\n For the case of $\\alpha=1$, one can explain this by noting that the least population risk can be achieved by some function from the class of 2-layer neural networks, i.e., there exists $\\mathbf{W}^\\*$ such that $L(\\mathbf{W}^\\*) = \\inf_\\mathbf{W} L(\\mathbf{W})$, as we argued in (3.3).\n\n Furthermore, we have modified Theorem 6 by deriving risk bounds without Assumption 3. These bounds are similar to those in [46] but require a relaxed overparameterization. 
This shows that our improvement over [46] does not come from Assumption 3 but from our analysis.", " This paper calculates a bound on the generalization error of a committee machine (i.e., a two layer neural network with weights of the top layer fixed to all 1s). It shows an O(1/sqrt(n)) generalization gap if the number of hidden neurons is m > (eta T)^3, where eta is the learning rate of gradient descent/stochastic gradient descent and T is the number of gradient updates. This improves a previous result (m > (eta T)^5) slightly. The analysis relies on using algorithmic stability and constructing a lower bound on the smallest eigenvalue of the Hessian. + The analysis is sound as far as I could check.\n\n+ The analysis of SGD follows along very similar lines to [46] and the analysis of GD, but I believe it is novel, in principle.\n\n- I believe this work makes very minor improvements, both technical and methodological ones, on top of existing work, in particular [46]. I will give an example below.\n\nThe authors note on Line 182 the bound in [46] is m > (eta T)^3; this is very close to the bound in this paper of m > (eta T)^3 n^{-2 alpha/(1+alpha)}. This comment also applies to the comment on Line 189 where the bound on the eigenvalue of the Hessian is improved from O(sqrt(eta T)) to O(n^{-1} eta T). These are minor technical improvements and one wonders whether we are learning anything new about the problem that gives a unique insight. The proof of Lemma 3, the comments on Lines 196-200 and Remark 3 suggest that $W^*_{1/{\\eta T}}$ or $W^*$ are close to the initialization $W_0$ and thereby close to $W_t$ as well. I have similar reservations about Theorem 5. 1. There is a typo on Line 120, the empirical risk should be divided by n.\n\n2. The relevance of the results rests crucially on alpha. Can you give an intuitive explanation of what the parameter is? N/A", " The paper improves the algorithmic stability analysis of GD-trained shallow neural networks of [46]. In particular, [46] required overparameterization of order $\\text{width} \\geq (\\text{step-size} \\cdot \\text{GD-steps})^3$ for the stability/generalization bound, whereas in the current paper this is improved to $\\text{width} \\geq (\\text{step-size} \\cdot \\text{GD-steps})^2$ when the problem is 'easy'. Moreover, [46] showed an excess risk bound which required overparameterization of order $\\text{width} \\geq (\\text{step-size} \\cdot \\text{GD-steps})^5$, whereas in the current paper this is improved to an exponent of 3. In this paper (as in [46]) the theory works for an early-stopped GD (i.e. $\\text{step-size} \\cdot \\text{GD-steps}$ is taken to be a sublinear function of the sample size), which makes sense since otherwise consistency is not achievable. The excess risk bound scales with the `niceness' exponent of the problem. The analysis is also extended to SGD. Strengths:\n* Improves overparameterization rates compared to [46] (see details below).\n* The excess risk analysis avoids oracle-type arguments of [46] and makes the bound more specialized (see details below).\n* Extends the analysis to SGD.\n\nWeaknesses:\n* Comparison to the literature is somewhat lacking, in particular in terms of rates (e.g. to [46] and/or to [1*]), or at least a discussion why it is challenging is missing.\n* Sometimes the narrative is unclear, bits of the proof ideas are introduced here and there. Perhaps it would be beneficial to have a separate proof idea section.\n\nRate improvements achieved in this paper are rather technical. 
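(To fix ideas before the details below, here is a schematic, back-of-the-envelope reading of how the early-stopping choice propagates through the stated bounds; constants and the $L(\\mathbf{W}^\\*)$ factor are suppressed, the balancing choice is the one the authors describe in their responses, and the paper's Corollary 7 has the precise statement:

$$
\\Lambda_{1/\\eta T}=O\\big((\\eta T)^{-\\alpha}\\big),\\qquad \\frac{\\eta T}{n}\\asymp(\\eta T)^{-\\alpha}\\;\\Rightarrow\\;\\eta T\\asymp n^{\\frac{1}{\\alpha+1}},\\quad m\\gtrsim(\\eta T)^3\\asymp n^{\\frac{3}{\\alpha+1}},\\quad \\text{excess risk}=O\\big(n^{-\\frac{\\alpha}{\\alpha+1}}\\big),
$$

so $\\alpha=1$ gives subquadratic width $m\\asymp n^{3/2}$ and the rate $O(n^{-1/2})$.)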
The key observation is that the smallest eigenvalue of the Hessian matrix of the empirical risk between two parameters $\\mathbf{W}$ and $\\mathbf{\\tilde{W}}$ scales as $-\\frac{1}{\\sqrt{\\text{width}}} (\\|\\mathbf{W} - \\mathbf{\\tilde{W}}\\| + 1)$. When these parameters are taken to be iterates of GD with an intact and a perturbed training sample, [46] controlled $\\|\\mathbf{W} - \\mathbf{\\tilde{W}}\\|$ in a pessimistic way through a descent-lemma-type argument. The current paper delves into a more elaborate argument and shows that both iterates converge to the regularized solution + some offset which is expected to be much smaller than in the pessimistic case.\n\nThe second contribution of the paper is control of the excess risk, which is based on a certain regularity of the problem. [46] showed an oracle-type bound where the excess risk scales with the $\\ell 2$-norm of a minimal-norm interpolating network (here the norm is understood as relative to initialization, i.e. always involves $\\cdot -\\mathbf{W}_0$). In the current paper, instead, the control is done w.r.t. the minimizer of the $\\ell 2$-penalized risk, and then the paper assumes that the approximation error of the true risk behaves nicely, i.e. as $\\lambda^{\\alpha}$ where $\\lambda$ is a regularization parameter and where $\\alpha$ is a niceness exponent. This is a common technique in the analysis of ridge regression. Finally $\\alpha$ makes its way into the exponent of the excess risk rate (as in the ridge regression case).\nAt this point one would expect some comparison of rates: for instance [46] showed some rates when the target is in an RKHS, or GD in the nonparametric setting [1*].\n\n[1*] Hu, T., Wang, W., Lin, C., & Cheng, G. (2021, March). Regularization matters: A nonparametric perspective on overparametrized neural network. In International Conference on Artificial Intelligence and Statistics (pp. 829-837). PMLR. How large should the width be in terms of the sample size when $\\text{step-size} \\cdot \\text{GD-steps}$ is set w.r.t. the sample size? Is overparameterization mild (i.e. subquadratic)?\nActually, Table 1 could also include the order of the width in terms of the sample size. Yes Yes", " The paper studies the generalisation of shallow neural networks using algorithmic stability. The work provides a tighter analysis of [1], obtaining a similar generalization for smaller width $ m \\sim O ((\\eta T) ^ {3}) $ in comparison to $ O ((\\eta T) ^ {5}) $ in [1]. The paper also extends the results to SGD with a similar over-parameterization requirement. Strengths:\n\n1) This work identifies crucial quantities in the same analysis framework as [1], majorly $ \\| W_{t} - W_{1/\\eta T}^{*} \\| $ and $R_{T}$. Using these quantities, the paper produces a finer analysis reducing the overparameterisation required to obtain similar generalisation guarantees. \n\n2) It also extends the stability analyses to SGD. \n\nWeakness:\n1) The paper clearly extends the results of [1] and mostly follows a similar framework. Hence, it also suffers from the same limitations as the original paper, where the number of parameters still depends on $T$ and early stopping is required even for the noiseless or low-noise case. As these questions still remain, it is hard to evaluate the impact of reducing the scale of width from $O((\\eta T)^5)$ to $O((\\eta T)^3)$.\n\n\n[1] Stability & generalisation of gradient descent for shallow neural networks without the neural tangent kernel, NeurIPS 2021. One potential concern is Assumption 3, which is crucial to control $R_{T}$. 
The justification for Assumption 3 is not very convincing as $ \\| W^* \\| = O(1) $ is a strong requirement, since $ \\| W^{*} \\| $ is the norm of a $ d \\times m $-matrix. Can the authors provide more justification for this as this seems central to the analysis, e.g. Theorem 6? The limitations are adequately addressed. ", " The authors study excess risk bounds for one-hidden-layer neural networks where the last layer is fixed to the initialization and only the first layer is trained by gradient descent. The resulting bounds significantly improve the overparametrization requirements of previous work while preserving the same rates. They achieve this by splitting the excess risk into a generalization part, an optimization part and a left-over part, and controlling each with separate techniques. More precisely, the generalization part is analysed through the notion of on-average stability, and it is here where a more careful control over the smallest Hessian eigenvalue leads to improvements over previous works. The authors further show that the optimization can be similarly controlled with weaker overparametrization than previously, showing that under some assumptions they can recover the same results as [1] under smaller widths. The final part is largely controlled by a regularity assumption not made in previous works. Moreover, they extend their results also to the setting of stochastic gradient descent, something which was not possible in previous work. \n\n[1] *Stability & generalisation of gradient descent for shallow neural networks without the neural tangent kernel, D. Richards and I. Kuzborskij* **Strengths:**\n1. In my opinion the paper pushes several directions that are worth pursuing to improve over current state-of-the-art generalization bounds. First, this work aims to move away from the so-called NTK regime by not relying on the heavy overparametrization usually required in these works, similarly to [1]. This is important since in practice, neural networks have been observed to outperform their kernel counterparts in realistic settings. Second, analyzing stability bounds moves away from the paradigm of uniform convergence and the implied capacity bounds relying on notions such as Rademacher complexities and VC-dimensions. Not relying on uniform convergence may be crucial to achieve non-trivial progress in the field of generalization, as the work [2] has shown. I think it is worth adding that citation to the related works since uniform convergence-based bounds are already discussed.\n2. The technical contributions seem highly non-trivial improvements over previous works and the width requirement is strongly reduced without incurring too much loss in terms of the achieved rates, both in noisy and noiseless settings. Moreover, the extension to SGD also seems technically very involved and is something that could not be achieved in [1]. \n\n**Weaknesses:**\n1. The role of assumption 3 is not clear to me on several levels. What does it encapsulate on an intuitive level? Mathematically, it is the difference between the optimal regularized and the optimal non-regularised generalization loss. The assumption imposes a polynomial upper bound in the regularization parameter lambda. The authors show that this assumption is satisfied if the optimal weights W* have constant (i.e., not growing) norm, something also not completely obvious. For instance, if I think of some image classification task such as MNIST, is assumption 3 met? If yes, with what alpha? 
This seems tricky to check unfortunately since we need to know the optimal model W^*. On the other hand, it's also not obvious how the analysis in this paper profits from assumption 3, which is not present in previous works. Theorem 2 and Theorem 5 don't seem to explicitly need assumption 3 but I guess the optimization part analyzed in this paper is not the same as in [1] and assumption 3 hence implicitly shows up here too? I would find it very helpful if the authors could clarify the role and intuition of assumption 3 and whether we can gain any numerical insights into it.\n2. While the paper is well-written, I think its readability would greatly benefit from reducing the number of Theorems, Lemmas and Corollaries in the main text (there are 14!). Restricting this to the main results (Generalization gap, Optimization error, excess bound, novel key lemma etc.) would already make the read more enjoyable, without losing too much of the content and story-line. While I appreciate that the main text is extremely precise about all constants, I also think that explicitly writing out the exact width requirements as well as other bounds hinders readability as well and makes statements rather cluttered. Listing the main dependencies in big-O fashion in the main text would make it simpler to get an understanding of the terms. The exact forms of the terms could for instance be listed in the appendix.\n\n[1] *Stability & generalisation of gradient descent for shallow neural networks without the neural tangent kernel, D. Richards and I. Kuzborskij*\n\n[2] *Uniform Convergence May be Unable to Explain Generalization in Deep Learning, Vaishnavh Nagarajan, Zico Kolter* The NTK is mainly associated with large width but it also strongly relies on small learning rates. How do the learning rates in this work (\\eta T \\approx n in the noiseless case) compare to the minimal learning rates needed to be in the NTK regime (provided we have a wide enough network), i.e. Theorem 2.1 in [1]? Are we also operating outside of the NTK regime in terms of learning rate size? \n\n[1] *Wide Networks of Any Depth Evolve as Linear Models Under Gradient Descent, Jaehoon Lee et al.* The authors discuss limitations, some more insight into Assumption 3 would be helpful to the reader however.", " This paper focuses on deriving stability-based generalization bounds for shallow neural networks. Specifically, the authors improve the previous bounds by relaxing the requirement for the width from (\\eta T)^5 to (\\eta T)^3, where \\eta denotes the step size and T is the number of training iterations. The key technical difference from the previous works is a more fine-grained estimation of the smallest eigenvalue of the Hessian matrix. The authors also apply their methods to the SGD regime. \n\nI think this paper has the potential to be accepted. However, there are still some questions to be answered. \nMy major concern lies in comparing this paper's bound with the norm-based generalization bound (uniform convergence).\nIn Lines 194-195, the authors show E\\|W_t\\| = O(1) in their analysis.\nHowever, this case might be solved by a norm-based bound trivially. \nFor more details, see the question part. \n Contributions.\n1. The authors improve the previous stability-based bound on shallow neural networks. Specifically, the authors improve the requirement for the width from (\\eta T)^5 to (\\eta T)^3.\n2. The authors also extend their analysis to SGD, which is more challenging. \n3. 
To reach the bound, the authors provide a more fine-grained analysis of the smallest eigenvalue of the Hessian matrix.\n4. This paper is clearly written, and the authors provide many insights. \n\nFlaws:\nAs said before, in Lines 194-195, the authors show that if \\eta T = O(\\sqrt n) and |W^* - W_0| = O(1), we can show that |W_t - W*| = O(1).\nThe first choice of \\eta T is used later, and the second condition on |W^* - W_0| is just the special case proposed in Assumption~3. \nTherefore, it seems that $|W_t| = O(1)$ easily holds in practice. \nHowever, with the bounded norm, one can apply norm-based bounds in uniform convergence without applying a stability-based bound.\nTherefore, could the authors provide a specific case where $|W_t|$ is not bounded? \n\nNot important:\nThe authors need to distinguish between the subscripts in W_T and W_{1/\\eta T}.\n As said above, could the authors provide a specific case where we can apply the proposed bound when $|W_t|$ is not bounded?\nThis may also require another example for Assumption~3. No potential negative societal impact." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 3, 2 ]
[ "p6O-DOk__zp", "dL2vXim_CVg", "fyo4MR6Xi6K", "nips_2022_BWEGx_GFCbL", "PBxpm46PYkm", "8Zxp6IzO-T_", "nYV1gDB6ZRq", "pa2aTKfkNEJ", "rqkQrjcnZiW", "nips_2022_BWEGx_GFCbL", "nips_2022_BWEGx_GFCbL", "nips_2022_BWEGx_GFCbL", "nips_2022_BWEGx_GFCbL", "nips_2022_BWEGx_GFCbL" ]
nips_2022_V88BafmH9Pj
A Contrastive Framework for Neural Text Generation
Text generation is of great importance to many natural language processing applications. However, maximization-based decoding methods (e.g., beam search) of neural language models often lead to degenerate solutions---the generated text is unnatural and contains undesirable repetitions. Existing approaches introduce stochasticity via sampling or modify training objectives to decrease the probabilities of certain tokens (e.g., unlikelihood training). However, they often lead to solutions that lack coherence. In this work, we show that an underlying reason for model degeneration is the anisotropic distribution of token representations. We present a contrastive solution: (i) SimCTG, a contrastive training objective to calibrate the model's representation space, and (ii) a decoding method---contrastive search---to encourage diversity while maintaining coherence in the generated text. Extensive experiments and analyses on three benchmarks from two languages demonstrate that our proposed approach outperforms state-of-the-art text generation methods as evaluated by both human and automatic metrics.
Accept
All four reviewers sided with accepting the paper, as the proposed contrastive search approach to mitigating the text degeneration problem is simple and effective and has applications to a variety of NLG tasks. Its evaluation is quite comprehensive and includes competitive baselines, human evaluation, and evaluation of both LM/generation quality on Wikitext-103 and effect on a downstream task (dialog). Two of the reviewers were more hesitant (borderline accept), but one of them was quite satisfied with the author response and the other reviewer didn't raise any major issue. The one remaining concern is that experiments with GPT-2 were based on the "small" model, but the rebuttal shows that the findings of the paper mostly hold with bigger language models (medium and large), though the gains become relatively small with XL. We suggest including these additional experiments in the next version of the paper, along with further discussions of these smaller differences.
val
[ "vhGXz0_eBAx", "1brnWHQRnZ3", "geUDHasYjwb", "vpNH_k-4He", "yVgpdBO1sT4X", "59k677nSXUq", "_a58NwIHW4xV", "bPwRE2FlxNd", "TEl5cd0GTRt", "8o_drRyrpT", "zVIOpvI5KN6", "3YR6VMoUPq5", "2mHZdd0oQ3Z", "T56hkp-qWo", "kdhFQZtIHzs" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for reading our response!", " The response has addressed my major concerns. However, contrastive learning and its findings for NLP are not new. \n\nI decide to raise my rating to 5 -- borderline accept.\n\n", " Thank you for reading our response!", " Thank you for your comprehensive reply and addressing the weakness I pointed out with experiments on larger variants of GPT2. I've raised my score to 7.", " Thank you for your questions.\n\n### Weakness 1: Lack of novelty:\nTo the best of our knowledge, our work is the first effort on applying (token-level) contrastive learning approach to improve open-ended text generation models. The novelty and originality of our work are universally acknowledged by all other reviewers (Reviewer R4YS, a2Vh and v9y3).\n\n### Weakness 2: Reason for degeneration:\nOur work is motivated by the anisotropic nature of language models. We demonstrated that the anisotropy of language models is one of the underlying factors for model degeneration. Conversely, by maintaining an isotropic distribution of token representations, the model degeneration problem can be successfully addressed with our proposed decoding method, i.e. contrastive search.\n\n### Weakness 3: More experiments:\nOpen-ended text generation by itself is a core task in the NLP community and it is different in nature with respect to other NLG tasks, such as machine translation and document summarization, that have a low degree of freedom. In this work, our approach was specifically designed for the task of open-ended text generation. We have demonstrated the effectiveness of our approach through comprehensive experiments and analysis as acknowledged by Reviewers R4YS, a2Vh and v9y3. \n\nIt is interesting to investigate how well our approach performs on other NLG tasks like machine translation. We will leave it to our future work as described in our limitation section (Appendix A).\n\n### Question 1: Definition of anisotropy:\nThe anisotropic nature of language models was first investigated by [1]. The authors' original definition of anisotropic token distribution was based on token-level cosine similarity measurement [1]. In our study, we follow the same method as [1] and illustrate the language model's anisotropy from token-level measurement as demonstrated in Figure 1. Please refer to the original paper [1] for more details.\n\n### Question 2: How the language modelling quality is evaluated:\nDecoding algorithms are not required and only human-written texts are needed for the evaluation of language modelling quality. Please refer to Lines 140-148 of our paper and [2,3,4,5] for the definition of evaluation metrics on language modelling quality.\n\n### Limitations:\nWe never limit the model to only ***\"generate tokens that have not appeared in the previous context\"***. Instead, the proposed contrastive search is able to generate sequences containing a reasonable amount of repetitions, that are comparable to human-written texts, for high-frequency tokens as demonstrated in Table 1. \n\nAdditionally, as per our response to weakness \\#3, we focus on the task of open-ended text generation which has a high degree of freedom by its nature. 
Accordingly, our approach was specifically designed for this task and we have demonstrated the effectiveness of our method through comprehensive experiments and analysis as acknowledged by Reviewers R4YS, a2Vh and v9y3.\n\n\n[1] - [https://aclanthology.org/D19-1006.pdf](https://aclanthology.org/D19-1006.pdf)\n\n[2] - [https://aclanthology.org/P19-1285.pdf](https://aclanthology.org/P19-1285.pdf)\n\n[3] - [https://arxiv.org/pdf/1908.04319.pdf](https://arxiv.org/pdf/1908.04319.pdf)\n\n[4] - [https://arxiv.org/pdf/1803.10049.pdf](https://arxiv.org/pdf/1803.10049.pdf)\n\n[5] - [https://openreview.net/references/pdf?id=HJ7I_nV5g](https://openreview.net/references/pdf?id=HJ7I_nV5g)", " Thank you for your thoughtful reviews and constructive suggestions!\n\n### 1. How the proposed approach generalizes to larger language models:\nThank you very much for your suggestion on investigating our approach with larger language models. In the following, we provide experimental results to analyze this problem from two aspects.\n\n#### 1.1. How anisotropic larger language models are:\n|Model|Model Size|Training Objective|perplexity$\\\downarrow$|conicity$\\\downarrow$|self-similarity$\\\downarrow$|\n|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|\n|**Vanilla Transformer**|117M|MLE|26.60|0.50|0.22|\n|||SimCTG|**26.55**|**0.47**|**0.19**|\n|||||||\n|**GPT-2-small**|117M|MLE|24.32|0.90|0.86|\n|||SimCTG|**23.82**|**0.43**|**0.18**|\n|||||||\n|**GPT-2-medium**|345M|MLE|17.26|0.75|0.63|\n|||SimCTG|**17.10**|**0.44**|**0.18**|\n|||||||\n|**GPT-2-large**|774M|MLE|16.57|0.46|0.20|\n|||SimCTG|**16.53**|**0.42**|**0.17**|\n|||||||\n|**GPT-2-xl**|1.6B|MLE|16.10|0.45|0.20|\n|||SimCTG|**16.08**|**0.43**|**0.18**|\n\nFirst, we evaluate the anisotropy of language models of different sizes. To this end, we conduct experiments on Wikitext-103 by varying the size of the language model from 117M (i.e. GPT-2-small) up to 1.6B (i.e. GPT-2-xl). In addition, as suggested by Reviewer a2Vh, we also include a non-pre-trained model (i.e. vanilla transformer) with the same size as GPT-2-small and the conicity metric [1] to measure the anisotropy of language models.\n\nThe experimental results are presented in the Table above, from which we can draw several conclusions:\n * (1) For all models, SimCTG helps to improve the perplexity as well as the language model's isotropy. \n * (2) With the same number of model parameters (i.e. 117M), the non-pre-trained model (i.e. vanilla transformer) does not suffer from the anisotropy problem as the pre-trained GPT-2 does.\n * (3) For pre-trained language models, as the model size increases, the anisotropy problem becomes less severe when training with the vanilla MLE objective. Specifically, when the underlying language model is large enough (i.e. GPT-2-large and GPT-2-xl), the performances of SimCTG and MLE are comparable with each other.\n\nTo conclude, the anisotropy of language models relates to two factors: (i) whether the language model is pre-trained or not; and (ii) the size of the language model. Nonetheless, SimCTG always helps under all circumstances. In the camera-ready version, we will add more discussion with respect to this aspect. We leave the full-scope and rigorous investigations on the anisotropy of larger language models to our future work.
How larger language models perform on open-ended text generation with the proposed approach:\n|Model|Model Size|Objective|ppl$\\\downarrow$|acc$\\\uparrow$|conicity$\\\downarrow$|self-similarity$\\\downarrow$|Method|diversity$\\\uparrow$|MAUVE$\\\uparrow$|coherence$\\\uparrow$|\n|:-------------:|:-------------:|:-------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|\n|**Vanilla Transformer**|117M|MLE|26.60|35.62|0.50|0.22|nucleus|0.89|0.81|0.541|\n||||||||contrastive|0.90|0.83|0.561|\n|||SimCTG|**26.55**|**36.03**|**0.47**|**0.19**|nucleus|0.89|0.82|0.543|\n||||||||contrastive|**0.91**|**0.85**|**0.566**|\n||||||||||||\n|**GPT-2-small**|117M|MLE|24.32|39.63|0.90|0.86|nucleus|0.94|0.90|0.577|\n||||||||contrastive|0.24|0.18|0.599|\n|||SimCTG|**23.82**|**40.91**|**0.43**|**0.18**|nucleus|0.94|0.92|0.584|\n||||||||contrastive|**0.95**|**0.94**|**0.610**|\n||||||||||||\n|**GPT-2-large**|774M|MLE|16.57|43.34|0.46|0.20|nucleus|0.94|0.91|0.583|\n||||||||contrastive|**0.95**|**0.96**|0.623|\n|||SimCTG|**16.53**|**43.47**|**0.42**|**0.17**|nucleus|**0.95**|0.93|0.591|\n||||||||contrastive|**0.95**|**0.96**|**0.626**|\n||||||||||||\n|**Human**|-|-|-|-|-|-|-|0.95|1.00|0.644|\n\nNext, we conduct experiments on Wikitext-103 and evaluate the performance of larger language models on open-ended text generation. Specifically, we use three different underlying language models, including the vanilla transformer (117M), GPT-2-small (117M), and GPT-2-large (774M). The experimental results are shown in the Table above, where the results of GPT-2-small are copied from Table 1 of our paper.\n\nFrom the results, we have the following findings:\n * (1) SimCTG + contrastive search always outperforms the strongest baseline (MLE + nucleus sampling) under all model configurations.\n * (2) When the language model gets large enough, contrastive search can achieve superior performance even **without** SimCTG. This is due to the fact that larger language models can better learn an isotropic representation space, as we demonstrated in our answer \\#1.1. \n * (3) The performance of GPT-2-large with SimCTG + contrastive search is also in line with our human evaluation results provided in the main paper. In Table 2 of our paper, we show that GPT-2-large performs better than GPT-2-small when using SimCTG + contrastive search (Line 211). These results validate the clear generalization ability of our approach to larger language models.\n\nTo summarize, \n * (i) When the size of the language model gets large enough (e.g. 774M parameters for GPT-2-large), contrastive search can be directly applied to **off-the-shelf** language models and can yield the best generation performances **without** any additional training. \n * (ii) Under the circumstances where the computational overhead and inference latency are the primary concerns, smaller language models are always preferred. In such cases, SimCTG is a simple yet effective solution to boost the performance of smaller language models.\n\nIn the camera-ready version, we will add some discussions on the performance of our approach on larger language models. We will save the full-scope and rigorous investigations for our future work. Thank you again for suggesting this interesting and definitely valuable research direction!\n\n### 2. 
Suggested modifications:\n * (1) Related references: Thank you for sharing these interesting and related references with us. We will add them in our camera-ready version.\n * (2) Introduction: We will adjust our writing in the introduction and explicitly mention that the anisotropic representation issue was first noticed by [2].\n * (3) Aggregated human evaluation score: Thank you for your suggestion. In the camera-ready version, we will add another column with an aggregated human evaluation score to better present our results.\n * (4) Typo: We will fix the typo in our next version. Thank you for pointing it out!\n\n\n[1] - [https://aclanthology.org/P18-1012/](https://aclanthology.org/P18-1012/)\n\n[2] - [https://aclanthology.org/D19-1006.pdf](https://aclanthology.org/D19-1006.pdf)", " Thank you for your thoughtful reviews and valuable suggestions!\n\n### 1. How to adapt SimCTG to encoder-decoder models:\nFor encoder-decoder models, we should apply SimCTG on the decoder side to let the decoder learn an isotropic representation space, and let the encoder remain unchanged. Accordingly, when we use contrastive search, we can simply modify the degeneration penalty in Eq. (5) to let it only consider the hidden representations of tokens previously generated by the decoder. We will leave testing our approach on encoder-decoder models for future work.\n\n### 2. Elaboration on the informativeness metric:\nIn the following, we provide some concrete examples to illustrate the definition of the informativeness metric.\n\nTaking the sentence ___\"Kujira developed from a broad area of disturbed weather as a tropical depression on April 9 well removed from any landmasses. Shortly after development, Kujira\"___ as the prefix text, generated texts with different informativeness scores are:\n\n * **5-score example:** _\"quickly intensified in its early stages, and was upgraded to a tropical storm just two days after cyclogenesis. Strengthening slowed afterwards, though the storm attained typhoon intensity on April 14. Intensification continued and late on April 15, Kujira reached its peak intensity with winds of 165 km / h ( 105 mph ) and a minimum barometric pressure of 930 mbar ( hPa ; 27.46 inHg ). Following peak intensity, Kujira would begin to track northwest and oscillate in strength, cresting an additional two times in intensity.\"_\n\n **[Discussion]** We can see that a 5-score generated text is both logically and grammatically fluent, and contains interesting and novel content.\n\n * **3-score example:** _\"became extratropical, which caused the storm to turn to the northeast-northwest on April 10. By the next day, Kujira had weakened to tropical storm status and was located in the Gulf of Mexico at an elevation of 1,200 m ( 3,600 ft ). It made landfall near Corpus Christi, Texas, on April 11 near the mouth of the Brazos River. The cyclone moved ashore in southeastern Texas, on April 11 near the mouth of the Brazos River.\"_\n\n **[Discussion]** We can see that the last two sentences in this example are partially repeated.\n\n * **1-score example:** _\"was inundated with tropical cyclones, which caused it to become a tropical depression. The storm was accompanied by a tropical cyclone named the Tropical Storm of the Year, which caused it to become a tropical depression. Kujira was one of the most severe storms to hit the United States in the past decade. The storm was accompanied by a tropical cyclone named the Tropical Storm of the Year, which caused it to become a tropical depression. 
The storm was accompanied by a tropical cyclone named the Tropical Storm of the Year, which caused it to become a tropical depression.\"_\n\n **[Discussion]** Obviously, in the 1-score generated text, there is little useful information or novel content, and most of its content is already present in the prefix text.
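As a side note on how the degenerate repetition seen in the 1-score example can be caught automatically: below is a minimal, self-contained sketch of the n-gram repetition statistics behind the diversity numbers reported in our tables. The rep-n / diversity definitions follow the standard formulation in prior work, and the exact normalization here is stated as an assumption for illustration, not copied from our released code:

```python
# Minimal sketch: quantifying the repetition visible in the 1-score example.
# Assumption: rep-n is the fraction of duplicated n-grams, and
# diversity = product over n in {2, 3, 4} of (1 - rep-n), as in prior work.

def rep_n(tokens, n):
    """Fraction of n-grams in `tokens` that duplicate an earlier n-gram."""
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    return 1.0 - len(set(ngrams)) / len(ngrams)

def diversity(tokens):
    """Close to 1.0 for human-written text; drops sharply under degeneration."""
    score = 1.0
    for n in (2, 3, 4):
        score *= 1.0 - rep_n(tokens, n)
    return score

# The repeated sentence from the 1-score example above, three times over:
text = ("The storm was accompanied by a tropical cyclone named the Tropical "
        "Storm of the Year, which caused it to become a tropical depression. ") * 3
print(round(diversity(text.split()), 3))  # a low value, flagging degeneration
```

A text like the 5-score example keeps this score close to the human reference, while the 1-score example is flagged immediately; this is the same signal summarized by the diversity column of our experimental tables.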
We will add it to in our next version.\n* (3) **Conicity metric:** Thank you for suggesting the conicity metric [(Chandrahas et al., 2018)](https://aclanthology.org/P18-1012.pdf) for measuring the isotropy of language models. We have reported its results in our answer \\#3 and we certainly think it helps to further strengthen the arguments of our paper. In the camera-ready version, we will include the results of the conicity metric in our experimental tables. \n\n", " Thank you for your valuable suggestions and questions!\n\n### 1. Inner connection between SimCTG and contrastive search:\nActually, we can draw a nice connection between SimCTG and contrastive search.\n* (1) The goal of SimCTG is to let the language model learn an isotropic representation space from human-written texts. To put it in another way, given a **human-written text**, SimCTG encourages the language model to obtain an isotropic and discriminative token similarity matrix in which only the diagonal entries have high similarities as shown in Figure 1(b). \n* (2) On the other hand, as mentioned in Lines 36-37, contrastive search aims to keep the isotropic property (i.e., spareness) of the token similarity matrix of the **generated text**. Therefore, the text generated by contrastive search is more similar to human-written text.\n\nTo conclude, SimCTG enables the language model to obtain isotropic representations with human-written texts, while contrastive search maintains the isotropic property of the text generated by the language model.\n\n**[Visual Demonstration]** We also provide a visual demonstration of the connection between SimCTG and contrastive search at Figure 3 in Appendix I. From which we see that, only by combining SimCTG and contrastive search, we can obatin a nice and isotropic token similarity matrix for both the human-written text (i.e., the prefix text) as shown in the red box and the generated text as shown in the yellow box.\n\n### 2. Justification for isotropic representation and contrastive search could be more solid:\nThank you for your suggestion. In the camera-ready version, we will include the details from our answer \\#1 to further emphasize the connection between isotropic representation and contrastive search. \n\n### 3. Effectiveness of the metric on coherence:\nWe have considered all the baseline automatic metrics we could find in the recent literature. However, for open-ended text generation tasks, we are not able to find an existing metric that automatically measures the semantic coherence between the prefix text and the generated text. To this end, we propose to use a strong sentence embedding method, SimCSE, to automatically measure the coherence between the prefix text and the generated text. More importantly, we conduct extensive human evaluations to further assure the advantage of our approach in terms of the coherence aspect. The human evaluation results validate that our method indeed generates significantly more coherent text compared to other baseline methods, which is in line with the results acquired by our proposed coherence metric.\n\n### 4. Why contrastive search works much better on SimCTG:\nThe reason why contrastive search works best on SimCTG is that MLE and unlikelihood cannot obtain an isotropic representation space. As we describe in Lines 186-187, when the language model's representation space is anisotropic, the degeneration penalty $\\\\max\\\\{s(h_v, h_{x_j}):1\\leq j \\leq t-1\\\\}$ in Eq. 
(5) becomes indistinguishable across different token candidates $v$, making contrastive search less effective.\n\nLet's consider a simple example. Suppose the representation space of the language model is extremely anisotropic such that the representations of all tokens are identical. Therefore, the cosine similarity between the representations of any two tokens is always 1.0. In this case, when we apply contrastive search, the degeneration penalties for all candidate tokens would be 1.0, i.e., all identical. Therefore, the selection of the output token will only depend on the model confidence term in Eq. (5), making contrastive search degenerate to vanilla greedy search, which further leads to unsatisfactory performance.\n\nAs we demonstrate in Section 6.1, the representation space of SimCTG is much more isotropic than that of MLE and Unlikelihood. As a result, contrastive search works best on SimCTG.", " Thank you for the comprehensive reviews and thoughtful comments. We are delighted that reviewers appreciated the novelty and originality of the paper.\n\nWe are excited by the recognition that \"The proposed methods are relatively simple and general\" and that our approach is \"Applicable to a wide variety of NLG tasks\". We are thrilled by the reviewers' acknowledgment that \"The main experiments are complemented with good analysis experiments\" and of the \"Comprehensive strategy analysis and generation analysis\". We are also pleased that the reviewers found that \"This paper has good originality, clarity, quality\" and \"The paper is well-written and easy to understand\".\n\nBelow, we respond to each reviewer separately. Please let us know if you have additional questions or comments!", " This paper aims to solve the degeneration problem using a contrastive training objective and a contrastive search. The proposed contrastive training objective encourages the model to learn isotropic representations for the tokens. In addition, the proposed contrastive search algorithm scores hypotheses by discriminating between the candidate tokens and previous context tokens. Experiments show improved ppl and acc on a language model trained with the contrastive objective and enhanced generation quality with contrastive search.\n Strength: 1) The proposed methods are relatively simple and general, which can potentially be applied to any text generation model. 2) The results strongly support the methods. 3) Well written.\n\nWeakness: 1) The proposed methods - contrastive training objective and contrastive search - are two independent methods that have little inner connection on both the intuition and the algorithm. 2) The justification for isotropic representation and contrastive search could be more solid.\n LN158: Is there any study demonstrating the effectiveness of the metric on coherence?\n\nTable 1: Why does contrastive search work so much better for SimCTG than for MLE and Unlike?\n\n NA", " NLG models rely on the maximum likelihood objective during training and a related decoding strategy. In the most vanilla formulation, this combination of training objective + inference strategy results in degenerate text. One reason is the anisotropic distribution of the token embeddings learned by the underlying models. Inspired by contrastive representation learning, this paper proposes SimCTG, a new approach that incorporates an additional term in the training formulation and a modified inference-time strategy during decoding. 
The combination induces diversity (reducing degeneration) in the generated text while maintaining relevance to the input. Strengths: \n1. The paper is well-written and easy to understand.\n2. Applicable to a wide variety of NLG tasks.\n3. Good performance on Wikitext-103, LCCC, and DailyDialog.\n4. Comprehensive strategy analysis and generation analysis (section 6).\n5. Fair comparison against competitive baselines and readable code in supplementary.\n\nWeaknesses: \n1. [Very Minor] While the contributions of this work are certainly novel, the formulation is a simple variation of previously known schemes, one of which had been explored for encoder-only models. The training strategy is a variation of TaCL [19] + Self-similarity [6]. The decoding objective is a variation of the unlikelihood objective.\n2. Human Evaluation for informativeness is vague.\n3. Evaluation of non-pre-trained models is missing. This would help in understanding if the performance gains are only restricted to pre-trained models or not.\n 1. The paper mentions in Line 112 that the formulation is architecture agnostic. Would the same strategy work in the case of encoder-decoder models, where the decoder might learn the isotropic embeddings while the encoder might remain more or less unaffected? Or should the objective also involve constraining the encoder representations?\n2. Can you please elaborate on the Informativeness evaluation in human studies? The description given in the appendix is also difficult to parse.\n3. How does the formulation perform on a vanilla transformer network / standard LSTM model? The authors have addressed the limitations in the appendix; however, they should discuss the potential downsides of using the GPT2 model - offensive content generation. Since the decoding process involves the generation of words that are different from the words appearing before them (contrastive search), it might help to show some cases where the model diverges too much from the content and starts rewarding offensive content (or at least mention it).\n\nMissing citation:\n\"Towards Transparent and Explainable Attention Models\" (Mohankumar et al., ACL 2020)\n\nOne evaluation metric for checking token dissimilarity is proposed in \"Towards Understanding the Geometry of Knowledge Graph Embeddings\" (Chandrahas et al., ACL 2018). Kindly evaluate that since self-similarity is very similar to the objective that is being optimized in the SimCTG method.", " In this work, the authors investigate the model degeneration problem in neural text generation; they show that the main reason for model degeneration is the anisotropic distribution of token representations. Hence, they use a contrastive training objective to enlarge the distance between tokens and propose a new decoding method, contrastive search, to keep the coherence in the generated text. This paper is well-written and it shows its advantages compared to other methods. Strengths: \n1. This paper is well-written, methods are clearly explained. \n2. The given storyline and corresponding experiments are logically consistent.\n---------\nWeakness: \n1. The contrastive learning method has been widely used to solve the representation degeneration problem, so the method is not novel. \n2. The proof is not convincing enough; it is not enough to use the token cosine similarity to show that the model degeneration problem is caused by token representation degeneration. \n3. 
It needs more experiments to illustrate the effectiveness of the methods; it needs to add some representative NLG experiments, e.g. machine translation. 1. The anisotropic distribution of token embeddings is considered from a global perspective; why can Figure 1 reveal the phenomenon? I think the phenomenon cannot be explained by anisotropy. \n2. In Table 1, for the metric of language modeling quality, what decoding strategies are used for different models? The method proposed by the authors prefers to generate tokens that have not appeared in the previous context, which will influence the performance of high-frequency tokens. At the same time, this method is more suitable for tasks with high degrees of freedom.", " Current language models suffer from the issue of degeneration, which makes their outputs dull and repetitive. This paper argues that this degeneration in text generation is due to the anisotropic representation distribution of LMs. Prior work [1] has shown that LM representations are anisotropic --- the cosine similarity between the representation vectors for *different* tokens in a sentence is very high, up to 0.95. This paper designs training + inference time algorithms to fix this issue, and reports improvements in text generation quality.\n\nThis paper proposes the SimCTG algorithm, which helps make representations more isotropic during training. The key idea is to use an extra loss function which pushes away representations of different tokens. During inference, the paper proposes the use of contrastive search, a decoding objective which encourages the generation of tokens whose representations are dissimilar to one another. Contrastive search can be applied without the contrastive training on any existing LM.\n\nThe paper evaluates their approach on open-ended dialogue and document generation, covering both English and Chinese datasets. Extensive automatic and human evaluations confirm that the proposed approach beats baselines like nucleus sampling & unlikelihood training.\n\n[1] - https://aclanthology.org/D19-1006.pdf **Strengths**\n\n1. The paper works towards fixing an important issue in current text generation systems --- their tendency to produce degenerate text. The paper draws a nice connection to prior work [1] which has found LM representations are anisotropic.\n\n2. The paper presents simple training and inference time algorithms to make language model representations more isotropic. The inference time algorithm (contrastive search) can be flexibly applied to any existing LM without further training, and seems to improve over nucleus sampling for dialogue generation even without contrastive training.\n\n3. The paper evaluates their algorithm on three datasets spanning two tasks and two languages. Extensive automatic evaluation (using effective metrics like MAUVE) and human evaluation is conducted to confirm the efficacy of the method.\n\n4. The main experiments are complemented with good analysis experiments discussing qualitative aspects of improvements with model generations, timing analysis, effect of varying hyperparameters, isotropy of SimCTG representations.\n\n**Weaknesses**\n\nI have one major concern with the paper. All experiments have been conducted on a very small language model, GPT2-small, which has 117M parameters. However, GPT2-small is a weak language model compared to much larger open-source alternatives like GPT2-large / XL [6], T5 variants fine-tuned for causal language modeling [2], OPT [5] etc. 
For dialogue generation, you could simply fine-tune larger T5 variants or BART.\n\nTesting out novel text generation ideas at a larger scale is important --- as the LMs get bigger, generative quality of MLE baselines significantly improves. It's unclear how anisotropic larger LMs are, and if so, whether this anisotropy affects generation quality. A related informal tweet - [7].\n\n**Minor**\n\nIt will be good to discuss the relations to [3, 4] in the related work since they use contrastive learning for text generation as well. These two papers have come out in the last month (contemporary to this paper), so no direct comparison is necessary.\n\nIn the introduction, explicitly mention that the anisotropic representation issue was first noticed by [1].\n\nIn Table 2 / 3, keep a column which aggregates scores and mentions the fraction of data points where all three metrics simultaneously score a 4 or 5. This will give a good idea about the overall performance of methods.\n\nline 173: confusing --> confused\n\n**Overall** --- This paper has good originality, clarity, quality, but moderate significance. My main concern is that experiments were only conducted on GPT2-small, a very small language model compared to the state-of-the-art. Nevertheless, I'm leaning accept due to the interesting idea as well as the good automatic & human evaluation conducted in the paper.\n\n---\n\n**After Rebuttal** - Thank you for your comprehensive reply and addressing the weakness I pointed out. I've raised my score to 7.\n\n[1] - https://aclanthology.org/D19-1006.pdf \n[2] - https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k \n[3] - https://arxiv.org/abs/2205.09726 \n[4] - https://arxiv.org/abs/2205.14690 \n[5] - https://arxiv.org/pdf/2205.01068.pdf \n[6] - https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf \n[7] - https://mobile.twitter.com/_jasonwei/status/1526589104758042624 None Good limitations section in Appendix" ]
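For concreteness, the decoding rule debated throughout the SimCTG reviews and rebuttals above (model confidence vs. degeneration penalty, Eq. (5)) can be sketched in a few lines. This is a minimal sketch of the published contrastive search rule, not code from the paper; `lm` is an assumed interface that returns next-token logits and per-token hidden states, and all names here are ours.

```python
import torch
import torch.nn.functional as F

def contrastive_search_step(lm, prefix_ids, k=8, alpha=0.6):
    """One step of contrastive search:
    score(v) = (1 - alpha) * p(v | x_<t)          # model confidence
             - alpha * max_j cos(h_v, h_{x_j})    # degeneration penalty
    """
    logits, hidden = lm(prefix_ids)               # logits: (T, V), hidden: (T, d)
    probs = F.softmax(logits[-1], dim=-1)         # next-token distribution
    top_p, top_ids = probs.topk(k)                # top-k candidate tokens

    context = F.normalize(hidden, dim=-1)         # representations of x_<t
    scores = []
    for p_v, v in zip(top_p, top_ids):
        # Append candidate v and re-encode to obtain its representation h_v.
        _, hid_v = lm(torch.cat([prefix_ids, v.view(1)]))
        h_v = F.normalize(hid_v[-1], dim=-1)
        penalty = (context @ h_v).max()           # max cosine similarity to context
        scores.append((1 - alpha) * p_v - alpha * penalty)
    return top_ids[torch.stack(scores).argmax()]
```

When the representation space is fully anisotropic, `penalty` is (near-)constant across candidates and the rule collapses to greedy search, which is exactly the degenerate case the first rebuttal above walks through; with `alpha = 0` it is greedy search by construction.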
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "1brnWHQRnZ3", "yVgpdBO1sT4X", "vpNH_k-4He", "_a58NwIHW4xV", "T56hkp-qWo", "kdhFQZtIHzs", "kdhFQZtIHzs", "2mHZdd0oQ3Z", "2mHZdd0oQ3Z", "3YR6VMoUPq5", "nips_2022_V88BafmH9Pj", "nips_2022_V88BafmH9Pj", "nips_2022_V88BafmH9Pj", "nips_2022_V88BafmH9Pj", "nips_2022_V88BafmH9Pj" ]
nips_2022_Gsbnnc--bnw
Generative Visual Prompt: Unifying Distributional Control of Pre-Trained Generative Models
Generative models (e.g., GANs, diffusion models) learn the underlying data distribution in an unsupervised manner. However, many applications of interest require sampling from a particular region of the output space or sampling evenly over a range of characteristics. For efficient sampling in these scenarios, we propose Generative Visual Prompt (PromptGen), a framework for distributional control over pre-trained generative models by incorporating knowledge of other off-the-shelf models. PromptGen defines control as energy-based models (EBMs) and samples images in a feed-forward manner by approximating the EBM with invertible neural networks, avoiding optimization at inference. Our experiments demonstrate how PromptGen can efficiently sample from several unconditional generative models (e.g., StyleGAN2, StyleNeRF, diffusion autoencoder, NVAE) in a controlled and/or de-biased manner using various off-the-shelf models: (1) with the CLIP model as control, PromptGen can sample images guided by text, (2) with image classifiers as control, PromptGen can de-bias generative models across a set of attributes or attribute combinations, and (3) with inverse graphics models as control, PromptGen can sample images of the same identity in different poses. (4) Finally, PromptGen reveals that the CLIP model shows a "reporting bias" when used as control, and PromptGen can further de-bias this controlled distribution in an iterative manner. The code is available at https://github.com/ChenWu98/Generative-Visual-Prompt.
Accept
This work concerns a unifying method for repurposing "off the shelf" conditional models in order to define an energy-based model of vectors in the latent space of a pre-trained generative model, for the purpose of controlling synthesis, and a feed-forward approximation using invertible neural networks. The authors present several use cases and experiments on each across a range of different model types. Reviewers were positive on the presentation, originality and usefulness, and generally felt the experiments were well chosen. There were some concerns regarding discussion of societal impact (gbfq), the fact that most results involved faces and those that didn't were less compelling (5eQN), and clarity around the derived energy function and positioning relative to prior work (byc9). Most concerns were addressed in rebuttal, however QXTM felt quantitative results evaluating controllability, specifically, left much to be desired, and lowered their score following a rebuttal that they felt failed to address this issue. Based upon the discussion and my own reading of the paper, the AC views this work in an overall positive light, the valid concerns of QXTM notwithstanding. With some reservations, I recommend acceptance.
train
[ "hJymahhlX6v", "IwrGC_DBXUt", "rPzdRASZ-dh", "mOb2uXRwbIb", "TTePoXoVJRi", "MAqlyZDBMJg", "RyNqXOwil61", "HL6kcFW84G-", "BR0cJsdNuH8", "F3zpUR7kGCw", "_bhYvOU8pN8", "q0cBDN592SC", "X_hfVOZcQe", "_GN8gxqmk-" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Approximation error**\n\nThanks for pointing this out! The approximate error can be measured by $D_{\\text{KL}}(p_{\\theta}(\\boldsymbol{z}) || p(\\boldsymbol{z} | \\mathcal{C}))$. This KL divergence is defined in Eq. (9). However, it is worth noting that $\\log Z$ is expensive to estimate in practice (recall that this partition $\\log Z$ does not depend on $\\theta$ so we can safely discard it in the training objective). We will discuss this approximation error in more detail in the final version. \n\n**Text-conditioned models**\n\nWe should definitely elaborate on this! \n\n> One can definitely train a text-conditioned model given a list of text descriptions, but it can limit the generalization to novel text descriptions.\n\nBy “text-conditioned model” we mean a text-conditioned PromptGen. To train it, one needs a list of domain-specific (faces, buildings, etc.) descriptions; however, given a description not seen during training, this text-conditioned PromptGen may fail to generalize. \n\n> After all, there now exist very good generative models of images given text.\n\nBy “very good generative models of images given text” we believe you mean models like DALL-E 2, Imagen, and Parti. Our observation working with some of these models is that the quality of these models in highly specialized domains (e.g., faces) still lags behind domain experts such as StyleGAN. For example, one may try the following prompts using DALL-E 2 / DALLE-mini API: “a photo of a baby’s face” and “a photo of an Asian female”. \n", " **Is CLIP energy an appropriate evaluation metric?**\n\nThanks for raising this question! We would like to argue that CLIP energy is an adequate evaluation metric; using the same off-the-shelf model (or models trained on the same dataset) for optimization and evaluation has been adopted in previous works. In PPGM [47], Table S3 used the same image classifier for optimization and evaluation. In LACE [50], Tables 1 and 2 used a latent-space classifier for optimization and an image-space classifier for evaluation (the two classifiers are trained on the same dataset). Our usage of CLIP energy shares the same spirit as their usage of classifier-based metrics. \n\n\n**Complex controls do not always work**\n\nWe provided a comprehensive error analysis in Appendix F. We agree that complex CLIP sentences cause undesired effects in the generated images, but we have analyzed that many failure cases are caused by the low density in the training data of generative models. For instance: “Photo of a happy Asian person with a hat and glasses” performs worse than “Photo of a happy man with a hat and glasses” where “man” replaces “Asian person”. The complexity of these sentences in terms of the number of attributes they specify is the same however since there are more men than Asian people in the training set, the results are better. We have been preparing a more comprehensive set of experiments around this which we plan to include in the appendix. Regarding the hyperparameter tuning of the energy functions, we allow tuning the importance of decomposed descriptions to overcome the language modeling limitation of CLIP by adjusting only a weight parameter. \n\nMoreover, we would like to point out that, compared with previous papers on controlling generative models with latent-space EBMs (e.g., PPGM [47], LACE [50]) – which only consider classifier control – we experiment with more complex controls. 
Specifically, we have experiments on (1) CLIP guidance, (2) inverse graphics guidance, and (3) moment constraints. For (1), we re-implemented PPGM for comparison, while we did not re-implement LACE because we do not have CLIP models in the w-space of GANs (note that LACE used classifiers trained in the w-space instead of the image space). We did not re-implement PPGM and LACE for other guidance because (2) needs to train a network conditioned on poses, which we do not know how to apply to PPGM and LACE, and (3) is a core part of our methodology. \n\n\n**Diversity of PromptGen in the class-embedding space**\n\nWe agree with the limitations of PromptGen in the *PromptGen in the class-embedding space* setting. We will clarify and emphasize this limitation in the final version of the paper.\n\n\n\n[47] Anh Nguyen, Jeff Clune, Yoshua Bengio, Alexey Dosovitskiy, and Jason Yosinski. Plug & play generative networks: Conditional iterative generation of images in latent space. CVPR, 2017. \n\n[50] Weili Nie, Arash Vahdat, and Anima Anandkumar. Controllable and compositional generation with latent-space energy-based models. NeurIPS, 2021.\n", " Thank you for your detailed response. \n\n> This approximation error can be investigated by comparing PromptGen with MCMC-based PPGM, whose error diminishes given enough optimization steps. In Section 4.6 (Section 4.5 in the original version), we reported a quantitative comparison between PromptGen and PPGM, which shows that PromptGen and PPGM have similar controllability (when the control is successful). To verify the failure cases also happen to PPGM, we ran an additional experiment of “photo of a man without beard” for PPGM. Results are reported in Figure 20 in the updated version.\n\nThis is an empirical argument: in the cases you tried, PromptGen achieves similar controllability to PPGM. My original question was whether PromptGen can be relied on to diagnose limitations of another model (e.g. CLIP). This experiment supports that claim, but can we be sure that we will achieve low approximation error in general, as we can with PPGM? \n\n> One can definitely train a text-conditioned model given a list of text descriptions, but it can limit the generalization to novel text descriptions\n\nWhy would this be the case? After all, there now exist very good generative models of images given text.", " I thank the authors for their response. However, (1) my major concerns about “this work didn’t provide sufficient quantitative results of evaluating the controllability” have not been addressed. The authors mentioned that the CLIP energy has been used for measuring the controlling performance of text-guided generation. But the CLIP energy is the training or inference objective for the PPGM and PromptGen methods, which, I think, should not be an appropriate evaluation metric anymore. (2) Also, it seems that the proposed method does not work well with complex text sentences in the text control case, but needs many hand-crafted heuristics to compose energy functions with careful hyperparameter tuning. This makes me less convinced of its effectiveness in text-guided generation. (3) Since the setting of “PromptGen in the class-embedding space of a class-conditional GAN” suffers from a severe diversity issue, it seems to be better to present it as a failure case.\n\nThus, I lowered my score to “borderline accept”. ", " We would like to thank the reviewers for pointing out the issue of societal impacts. 
In the updated version, we included the following statement in Appendix G.\n\nWith the improvements in generative models for images, DeepFake technology has become more accessible. Like any new technology, it is a double-edged sword, and it is crucial to research and comprehend the possible advantages and disadvantages of generative models for society. \n\nOn the positive side, we show that PromptGen can be used to de-bias pre-trained generative models and to reveal biases learned by text-image models (e.g., CLIP), indicating that PromptGen might be a useful tool for fair AI if used appropriately. The efficient inference provided by PromptGen also helps reduce the computational expense, which has a positive impact on the environment. Better controllability, however, unavoidably makes it simpler to synthesize targeted pictures, which might have detrimental social effects in creating deceptive media (e.g., DeepFakes) or privacy leaks (e.g., identity-conditioned human face synthesis). To combat these risks, there are existing technologies that can detect fake media effectively, and we expect companies and users to use these technologies to distinguish what is real from fake. We encourage practitioners to consider these risks when using PromptGen to develop systems. \n", " **Quantitative analysis for text and pose controls**\n\nIn Section 4.6 (Section 4.5 in the original version), we reported quantitative results for controllability and inference speed in the text control setting. For the pose control, we visually verified that the pose is consistent across different people. \n\n**Lack of diversity on ImageNet in Figure 5(e) [previously 5(d)]**\n\nFirst, we refer to the **Clarification: PromptGen in the class-embedding space** section in the **General response to all reviewers**. The ImageNet experiment is mainly designed to show that PromptGen can model distributions not only in the $\boldsymbol{z}$-space but also in the class-embedding space, an extension of our main method. However, we agree that this extension suffers from low diversity; in the updated version, we mentioned this limitation.\n\n\n**More discussion on compositionality and complex control**\n\nThanks for pointing it out! First, we refer to the **Clarification: energy composition vs. functional composition** section in the **General response to all reviewers**. What you mentioned is what we call energy composition, and we added a new section (Section 4.5) to discuss this issue. Specifically, we show that control with a complex sentence (i.e., *a photo of a bald black man with beard*) is not very successful. We then decompose the complex sentence into three simpler ones (i.e., *a photo of a black man*, *a photo of a bald man*, and *a photo of a man with beard*), and we find that by tuning the weight $\lambda_i$ for each of them (e.g., we need larger $\lambda_i$ for *a photo of a black man*), we have better control performance. \n\n\n**PromptGen for generative models with a class embedding space**\n\nFirst, we refer to the **Clarification: PromptGen in the class-embedding space** section in the **General response to all reviewers**. The setting you mention in question (4) is what we refer to as *PromptGen in the $\boldsymbol{z}$-space of a class-conditional GAN*. As you mentioned, this setting is the same as the unconditional setting discussed throughout this paper, given the fact that $G(\cdot, \text{Emb}(c))$ has the same form as an unconditional generative model. 
\n\nOn the other hand, *PromptGen in the class-embedding space of a class-conditional GAN* aims at learning a distribution in the class-embedding space. This setting exploits the observation that a class-conditional GAN has a certain ability to generate out-of-domain samples when the provided class embedding is not one of the classes in ImageNet. Under this setting, PromptGen aims to find a distribution over class embeddings $y$ such that $G(z, y)$’s are “photos of a glow and light dog”. However, results in Figure 5(e) [previously 5(d)] show that the diversity of images sampled from such control is limited; in the updated version, we mentioned this limitation. \n", " **Quality and coverage of pre-trained generators are a bottleneck**\n\nWe agree. The main focus of this work is to sample from a specific output region (or to reweight the output distribution for de-biasing) of a pre-trained network. This limitation is also related to the last of your questions (mode-seeking and domain adaptation). \n\n\n**Trade-off between inference-time optimization and amortization (training-time optimization)**\n\nThanks for pointing it out! Amortization is helpful when one wants to reuse a controlled distribution many times, which we believe is the case for (1) creating controllable training data (for recognition tasks) from generative models, (2) debiasing generative models, and (3) pose-conditioned face modeling. Moreover, we recall that training the INN for text-control experiments only takes hundreds of steps to converge. In contrast, inference-time optimization methods (e.g., the MCMC-based PPGM) take 50 optimization steps to generate one sample. In the updated version, we discuss this issue in Section 4.6. \n\n\n**Approximation error of PromptGen should be considered in the error breakdown.**\n\nWe agree. This approximation error can be investigated by comparing PromptGen with MCMC-based PPGM, whose error diminishes given enough optimization steps. In Section 4.6 (Section 4.5 in the original version), we reported a quantitative comparison between PromptGen and PPGM, which shows that PromptGen and PPGM have similar controllability (when the control is successful). To verify the failure cases also happen to PPGM, we ran an additional experiment of “photo of a man without beard” for PPGM. Results are reported in Figure 20 in the updated version. \n\n\n**Why not train a text-conditioned model for many possible text descriptions?**\n\nOne can definitely train a text-conditioned model given a list of text descriptions, but it can limit the generalization to novel text descriptions. One related experiment is the pose-conditioned experiment, in which PromptGen is conditioned on the pose parameter; in this experiment, generalization is enabled by conditioning since we can easily sample all possible pose parameters from its domain during training. \n\n\n**Applicability of PromptGen to diffusion models**\n\nThanks for pointing it out! We added one experiment on Diffusion Autoencoder in Figure 5(d). It is a hybrid model of diffusion models (specifically DDIM) and autoencoders. However, we did not run an experiment on the original DDPM since it does not have a typical latent code. \n\n\n**Mode-seeking vs. domain adaptation**\n\n*Mode-seeking* means *controllability* in this paper, a term sometimes used in the literature to refer to sampling from a particular mode learned by the generative model. *Domain adaptation* means finetuning the generative model to generate samples from a domain not seen during training. 
For example, StyleGAN-NADA, a domain adaptation method, can finetune a StyleGAN trained on FFHQ (real faces) to generate Pixar-like faces. \n\nThe main difference between *mode-seeking* and *domain adaptation* is the support set of the output distribution. In *mode-seeking*, the support set is the same as (or is a subset of) the original distribution’s support set. In contrast, in *domain adaptation* the support set can be drastically different from the original distribution’s support set. In Figure 4(b), we showed that *domain adaptation* fails when *mode-seeking* should be used (i.e., the set of baby faces is a subset of the set of human faces). On the other hand, *mode-seeking* can also fail when we want to generate something that the generative model never sees (e.g., Pixar-like faces). In some cases, maybe we need both *mode-seeking* and *domain adaptation* (e.g., Pixar-like baby faces). PromptGen allows for this combination: the functional composition $G \circ f_\theta$ is a mapping from $\mathbb{R}^{d}$ to $\mathcal{X}$, i.e., a generative model (we refer to the **Clarification: energy composition vs. functional composition** section in the **General response to all reviewers**); therefore, we can directly apply StyleGAN-NADA to $G \circ f_\theta$. We will clarify this in the final version of the paper. \n", " **What does model-dependent mean? Distinctions from previous approaches on latent (or style) code editing/interpolation**\n\nWe agree that “model-dependent” needs further clarification. We described previous methods on latent (or style) code editing/interpolation as “model-dependent” because they put *non-trivial assumptions of locality and interpolation* on the latent (or style) space. Specifically, latent (or style) code editing assumes that image semantics can be guided by locally modifying the latent (or style) code of each image; latent (or style) code interpolation (e.g., the Slerp interpolation) assumes that any point on the interpolation of two latent (or style) codes corresponds to an image whose semantics is also interpolated. These assumptions do not always hold for every control. As we showed in Figure 4(a), the local editing-based StyleCLIP cannot model “a photo of a baby” well, and we attributed it to the fact that not all images’ latent codes can be locally edited into a baby. Moreover, since PromptGen’s $f_\theta$ is an invertible mapping from $\mathbb{R}^{d}$ to $\mathbb{R}^{d}$, local editing is a special case of $f_\theta$. \n\nIn the original version, we cited works on latent (or style) code editing in Line 62: “local editing of the learned representation, e.g., “style” codes [1, 54, 44, 64, 74]”. We are updating the above arguments in Sections 1, 2, and 4. \n\nFinally, given one specific control, one never knows which method works the best. We hope our method will serve as an effective and efficient tool in downstream applications. \n\n\n**Where do the energy functions come from?**\n\nThe classifier energy is based on the Bayes rule and temperature-adjusted distributions, which are so commonly used in previous works that we cannot attribute them to a single appropriate reference. The CLIP energy (especially the differentiable augmentation part) is inspired by FuseDream [44], which we cited properly, while they did not use it for energy-based modeling. We designed the inverse graphics energy, whose interpretation (e.g., what kind of distribution it models) is provided in Appendix B.1. 
We note that the energy functions are not unique and can take other forms, and should be flexibly adjusted based on the application. We will further clarify this in the final version. \n\n\n**Connection to prompt tuning**\n\nThanks for pointing this out! The major connection is that PromptGen learns a distribution over a pre-trained generative model’s input space, so arbitrary controls can be achieved without finetuning the pre-trained model. We agree that “Prompt Generation” is ambiguous, as it sounds like a task that aims at generating text prompts. \n", " **Most results are on faces**\n\nWe used faces (real faces or faces in Met Art collections) as the main data for two reasons: (1) humans are more sensitive to artifacts in the generated faces (than generated cars, cats, churches, etc.), and (2) there are well-studied/used face repositories that are good for comparison with previous approaches. In addition to faces, in Figure 5(c), Figure 12 (Appendix), and Figure 13 (Appendix), we provided results on more datasets such as Landscape, LSUN Church, and AFHQ Cats. In Appendix E, we also provided some preliminary results on 3D human faces. \n\n\n**Visualizations are too small for high-res images**\n\nPlease zoom in within the PDF for better visualization (although they are already downsampled). Since we want to keep the PDF reasonable in size, we will include full-res samples (mostly 1024 x 1024 and 512 x 512) on our accompanying website. We also had some high-res samples in Appendix D and F. \n\n\n**Lack of diversity on ImageNet**\n\nFirst, we refer to the **Clarification: PromptGen in the class-embedding space** section in the **General response to all reviewers**. The ImageNet experiment is mainly designed to show that PromptGen can model distributions not only in the $\boldsymbol{z}$-space but also in the class-embedding space, an extension of our main method. However, we agree that this extension suffers from low diversity; in the updated version, we mentioned this limitation. Thanks for pointing this out. \n\n\n**How to do error breakdown in cases where CLIP control doesn’t work well?**\n\nThat is a good question. We provided a preliminary answer in Appendix F. Specifically, we used CLIP to retrieve images from a large set of 400M images; if the retrieved images are faithful to the text description, then CLIP should not be blamed. Using this idea, we showed in Appendix F that CLIP could not model “without beard” and is gender-biased when modeling “a person without makeup”. \n\nRegarding your question on “a cat with closed eyes”, we find that almost all retrieved cats from the 400M images have closed eyes; therefore, the failure probably comes from the low density in the training data. Moreover, the energy-based model reweighs the distribution instead of enforcing constraints, but picking useful images from the reweighted distribution will be more efficient than picking from the original one. We will describe our findings in this regard in the final version of the paper. \n\n\n**More analysis on functional composition**\n\nFirst, we refer to the **Clarification: energy composition vs. functional composition** section in the **General response to all reviewers**. Decomposing a complex description into two simpler ones is interesting and falls into what we mean by “energy composition”. We added a new section (Section 4.5) to discuss this issue. Specifically, we show that control with a complex sentence (i.e., *a photo of a bald black man with beard*) is not very successful. 
We then decompose the complex sentence into three simpler ones (i.e., *a photo of a black man*, *a photo of a bald man*, and *a photo of a man with beard*), and we find that by tuning the weight $\lambda_i$ for each of them (e.g., we need larger $\lambda_i$ for *a photo of a black man*), we have better control performance. \n\n\n**Additional experiments for Table 2**\n\nIn the updated version, we reported the hair color de-biasing results in Table 2, and PromptGen achieves nearly perfect de-biasing performance (we will also report baseline results if time permits). Moreover, all attributes and attribute combinations reported in Table 2 come from Table 1 of FairStyle [30].\n\n\n**Latent vectors learned by the GAN look fairly well separated on synthetic data (Fig. 9)**\n\nThe motivation of this synthetic experiment is to provide visual intuition about what PromptGen learns. We agree that this is not representative of real data, and for this reason, we put it in the appendix. The paper provides various real data experiments to show PromptGen’s ability to cope with less structured latent spaces. \n\n\n**Does Eq. (8) hold for all INNs?**\n\nThis is correct; Eq. (8) holds for all INNs. \n", " Thank you for your valuable feedback! Here, we would like to address some questions raised by more than one reviewer. \n\n**Clarification: energy composition vs. functional composition**\n\nBy “energy composition” we mean $E_{\mathcal{C}}(\boldsymbol{x}) = \sum_{i=1}^{M} \lambda_i E_i(\boldsymbol{x}, \boldsymbol{y}_i)$ in Eq. (1) and Eq. (2), where the control $\mathcal{C}$ is composed of $M$ independent properties $\{\boldsymbol{y}_1, \ldots, \boldsymbol{y}_M\}$, e.g., $\boldsymbol{y}_1$ can be a text description and $\boldsymbol{y}_2$ can be an attribute. This energy composition is useful when multiple controls **can** be specified simultaneously. In the updated version, we also added a section (Section 4.5) to discuss how energy composition helps us decompose complex controls into simpler ones.\n\nBy “functional composition” we mean the iterative control described in Figure 2, where we treat a PromptGen-controlled generative model $G \circ f_\theta$ as a new generative model. This is based on the fact that $f_\theta$ is a mapping from $\mathbb{R}^{d}$ to $\mathbb{R}^{d}$ and $G$ is a mapping from $\mathbb{R}^{d}$ to $\mathcal{X}$; therefore $G \circ f_\theta$ is a mapping from $\mathbb{R}^{d}$ to $\mathcal{X}$, i.e., a generative model. Functional composition (or iterative control) is useful when multiple controls **cannot** be specified simultaneously. For example, in Section 4.4, we provided a case where we want to generate a gender-debiased image distribution over people without makeup. In this case, we need to *first* learn the distribution over people without makeup and *then* de-bias this distribution, since debiasing a distribution requires knowing what the distribution looks like. \n\nThanks for pointing this out. In the updated version, we experiment with both energy composition and functional composition in Section 4.5 and Section 4.6. We will further clarify these terms in the final version of the paper. \n\n**Clarification: PromptGen in the class-embedding space**\n\nWe would like to distinguish two settings when using a class-conditional GAN as the generative model: (1) PromptGen in the $\boldsymbol{z}$-space of a class-conditional GAN and (2) PromptGen in the class-embedding space of a class-conditional GAN. 
The first setting is useful for finding modes in a particular class, while the second helps generate novel objects that are not one of the 1000 classes in ImageNet. \n\n*PromptGen in the $\\boldsymbol{z}$-space of a class-conditional GAN*: one first specifies a class $c$ in ImageNet (e.g., $c=$ cat) and one control (e.g., “a photo of a cat sitting on a mat”), and PromptGen learns a distribution in the $\\boldsymbol{z}$-space. Under this setting, the goal of PromptGen is to find a distribution over latent codes $\\boldsymbol{z}$ such that $G(z, \\text{Emb}(c))$’s are “sitting cats”. This setting is the same as the unconditional setting discussed throughout this paper, given the fact that $G(\\cdot, \\text{Emb}(c))$ has the same form as an unconditional generative model. \n\n*PromptGen in the class-embedding space of a class-conditional GAN*: one specifies an out-of-domain control (e.g., “a photo of a glow and light dog”), and PromptGen learns a distribution in the class-embedding space. This setting exploits the observation that a class-conditional GAN can generate out-of-domain samples when the provided class embedding is not one of the classes in ImageNet. Under this setting, PromptGen aims to find a distribution over the class embeddings $y$ such that $G(z, y)$’s are “photos of a glow and light dog”. However, results in Figure 5(e) [previously 5(d)] show that the diversity of images sampled from such control is limited; in the updated version, we mentioned this limitation. \n", " This work proposes a method to control pre-trained generative models (e.g. to condition samples on a text prompt, or to control the value of an attribute, or to debias samples). The control can be specified using a different model (e.g. CLIP or inverse graphics or a classifier) via an energy-based formulation that outputs a controlled-version of the pre-trained generative model. **Strengths**\n- Compatibility with a wide range of control models (classifiers, inverse graphics, embedding)\n- Compatibility with a wide range of generative models (results include StyleNerf, NVAE, StyleGAN2, and BigGan)\n- Compositionality of controls\n- No optimization at inference\n- The writing is easy to follow. (But here's a specific point. The following motif is repeated at several points in the paper: PromptGen leverages “the knowledge of various off-the-shelf models” for distributional control of pre-trained generative models. It is quite unclear especially in the abstract what “off-the-shelf models” you are referring to, and whether you are still talking about generative models. Perhaps it would be clearer to state that you are trying to control pre-trained generative models using knowledge from “other models” rather than “off-the-shelf models”?)\n\n**Weaknesses/concerns**:\n- Most results are on faces. The authors could have evaluated on different data or chosen a wider variety of prompts. So far it is only clear that the model can control/yield pictures of babies, cats, and persons (especially controlling for age, make-up, or race).\n- When evaluated on non-facial data, the results lack diversity (PromptGen on ImageNet 512 in Figure 5(d)). “A photo of a glow and light dog” seems to yield a single specimen. The melancholy robot always has the same helmet and the background is a consistent color. There’s no further results from ImageNet in the supplementary. Also the size of the generated samples is too small given they were 512x512.\n- CLIP control doesn’t always seem to work well e.g. 
in Fig 12(a), at least a third of the images have cats with open eyes. It is unclear whether CLIP is to blame, or a bias in the image dataset.\n- Functional composition is an important property that needs further evidence. Consider providing results that show how well CLIP captions can be composed (e.g. C_1=“photo of an Asian man” and C_2=“photo of a person with eyeglasses” and C_3=“photo of a bald person”). If you could expound on the performance difference between using a multi-component control C (as suggested in Figure 2’s caption) or using two separate controls C_1 and C_2, that would be super helpful.\n- Table 2: could you please add at least one more column here so we can ensure the attributes aren’t cherry-picked? How about hairColor (say “black” if you need a binary attribute)?\n- Fig 9: the latents learnt by the GAN look fairly well separated between the two clusters. How well is PromptGen able to cope with less structured latent spaces?\n- Could you clarify whether equation 8 holds for all invertible NNs, or only the architecture you've chosen? Section F in the appendix covers some interesting failure modes. But it places the burden of failure mostly on the pre-trained generative model or control model. It would be nice to see where the overall idea/method might fail.", " This paper introduces an approach to use supervision from off-the-shelf pre-trained models, including classifiers, scoring functions, and multi-modal encoders, as controls for the generation of images with custom properties. In particular, the proposed approach transforms random noise, using an invertible neural network, before it is fed to a pre-trained generative model. +This paper includes a thorough set of experiments that showcase the effectiveness of the proposed approach across several pre-trained generative models (e.g. StyleNeRF, NVAE, StyleGAN2, and BigGAN) and different types of controls (binary properties, continuous values, text descriptions). \n\n+A unified approach to custom controls for successfully steering pre-trained generative models can facilitate several downstream applications and has high significance. In particular, success in compositional controls makes this work even more interesting.\n\n-Both the positioning of the work and the explanation of the method can be further clarified. See below for more details.\n * The authors' attempt at differentiating this paper from prior work suggests that [prior methods] “are either model-dependent (i.e., requiring a well-structured style space) or label-intensive (i.e., requiring all training samples to be labeled for explicit conditions), limiting their generality and practical use.” However, there are several prior works that enable sampling from a specific region of the latent space in a frozen generative model without doing any optimization at inference time (e.g. [1]) (not needing labels at training time, using a frozen generative model, using interpolation of previously learnt vectors at inference time, applicable to both GAN and VAE models). Additionally, it is unclear what being “model-dependent” means here. Does it mean learning to navigate the latent space that is specific to a single pre-trained generative model and may not generalize to other pretrained generative models? If so, the parameters $\theta$ in PromptGen also rely on the pretrained generative model at hand (In Algorithm 2, the gradients pass through the frozen $G_{\phi}$.) 
Conceptually, both PromptGen and the aforementioned approaches apply to different pre-trained generative models, though the parameters are going to be tuned for each frozen model. The authors might have missed such prior research in their literature review, or the writeup does not properly communicate that. Please clarify the distinctions and positioning of this work.\n\n* Method clarification: It is unclear where the different formulations of energy for the classifiers, the CLIP-type models, and the scoring functions (i.e. the inverse graphics case) come from. Is this well-defined in an EBM? Is this inspired by prior work? Is this a design choice? \n\n* Prompt tuning uses a (small) dataset for tuning additional parameters that are (usually) prepended to a meaningful (textual) input to steer the model’s outputs in a direction of interest. Some variants do it in the discrete natural language format and some in continuous embedding space. Some form of concatenation to an existing meaningful input is the fingerprint of what is usually dubbed “prompt tuning”. So in my opinion, drawing a parallel between “learning transformations over random noise” and “prompt tuning” seems a bit tangential and potentially distracting at first. Perhaps the authors meant other connections? If so, I would appreciate clarifications.\n\n[1] Shen, Yujun, et al. "Interpreting the latent space of GANs for semantic face editing." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.\n The authors have briefly described the societal impact of their work. Additionally, one can argue that more granular controls for generative models might make it easier to generate targeted Deepfakes, with the resulting potential negative impacts.", " This paper proposes PromptGen, a method for sampling pre-trained generator networks (e.g. StyleGAN) conditioned on a prompt (label, text, etc.) using off-the-shelf discriminative models (classifiers, CLIP, etc.). The proposed method uses an invertible neural network (INN) to express a distribution p(z ; C) over generator latents z given constraint C. A prompted sample is obtained by sampling p(z ; C) and then passing through the generator to obtain a data-space sample (all experiments are conducted with images but in principle the method could be applied to other modalities). INNs are trained either for a specific instance of the prompt C, or in a prompt-conditional setting.\n\nThe method is demonstrated on a range of tasks, including de-biasing StyleGAN, and generating conditioned on a text prompt (with CLIP).\n\n\n\n Strengths\n* Unlike iterative inference time methods like PPGM, PromptGen can be applied (after training) to sample latents in a single forward pass, resulting in significantly improved inference speeds. This is particularly useful in cases where the INN can effectively represent a constraint that can be applied to all samples, e.g. StyleGAN race de-biasing. \n* The compositionality of the proposed method, where prompting mechanisms can be applied in sequence, is a desirable property. \n* The experiments: race / gender de-biasing, text-conditioning are well chosen for their real-world importance.\n\nWeaknesses\n* The broad usefulness of the method depends on the quality and coverage of the pre-trained generator network. GAN generators in general are prone to mode-dropping, especially those trained on complex natural image distributions (BigGAN). 
This is addressed in the limitations section.\n* For complex conditional tasks, like text-conditioned generation, the INN will have to learn a very complex mapping (e.g. from text -> latents). This will require lots of data / compute / optimization etc., and as such might be prohibitive in practice. \n* In general there is a balance between the benefits of amortization: fast inference speeds, and the costs: training time and compute. And which is preferable depends on a user's constraints. I think the paper would benefit from a discussion of these issues.\n* The prospect of using such models to reveal the limitations of pre-trained models (e.g. CLIP and the makeup example) is potentially very useful. But how do we know if the limitation belongs to the prompting model, or if it is a limitation of the INN-based p(z ; C)? For some prompts, p(z ; C) might be very complex, and therefore challenging to learn. So how do we know that we haven't just failed to (fully) learn the mapping when training the INN? If there isn't a good way of knowing, then this is a limitation, as in general it won't be possible to confidently use the proposed method to diagnose other models.\n* For the text conditional experiments, is the INN trained just for a single prompt? Why not train a text-conditional model? It is impractical to train separate models for each text input, and it is a limitation if this is not possible with the proposed approach.\n* Can PromptGen be applied to diffusion models? \n* Could you expand on this comment in the limitations: "Except for the case of generative models with a class-embedding space, PromptGen focuses on mode-seeking and mode-reweighting instead of domain adaptation" - I didn't really understand what was meant by domain adaptation vs mode seeking in this context.\n* The authors adequately addressed the fact that the method is dependent on the data coverage of the generator network used. E.g. if you're using StyleGAN you're going to get faces and not spaceships. \n* The need to train a potentially very powerful conditional INN for complex tasks such as text-conditional generation wasn't really addressed (as mentioned in the weaknesses section). For this use case the burden of training such a network could easily be so high that a practitioner may prefer an iterative inference time method (like PPGN), even with its higher inference costs.\n* Potential negative societal impacts were not really discussed; however, the experimental work on de-biasing pre-trained models (StyleGAN, CLIP) communicates to the reader that bias is an issue with such models that should be addressed.", " This work proposes a unified method called PromptGen to control the image synthesis of pre-trained generative models, such as StyleGAN, NVAE and StyleNeRF. The basic idea is to 1) first formulate the latent variable z distribution conditioned on the control $\mathcal{C}$, i.e., $p(z|\mathcal{C})$, as a latent-space EBM, where the latent-space EBM can use pre-trained image classifiers, a CLIP model, and an inverse graphics model to specify the control; and 2) train an invertible neural network (INN) to approximate the latent-space EBM using a KL divergence as the training objective. 
In experiments across different image datasets, this work shows the efficacy and efficiency of PromptGen in the tasks of image synthesis based on text descriptions, de-biasing generative models, pose-guided face synthesis, and iterative control via functional composition.\n Strengths:\n\n(1) Controlling generative models using EBMs with MCMC Langevin dynamics is effective but slow (it needs multiple inference/optimization iterations). Thus, this work proposes to train another latent generator $z=f_\theta(\epsilon)$ that approximates the latent-space EBM, so the sampling can be performed with one forward pass, which I think is the main originality and significance of this work.\n\n(2) For the second contribution, with the latent generator $z=f_\theta(\epsilon)$, this work also demonstrates 1) the generality for different controllable generation tasks, and 2) the iterative controls by composing the latent generators and the pre-trained generative model. \n\n(3) The paper is well-written and easy to read. Extensive experiments were performed to show the wide applicability, effectiveness and efficiency of the proposed method.\n\nWeaknesses:\n\nMy main concern is that in experiments, this work didn’t provide sufficient quantitative results of evaluating the controllability. I like the quantitative results of evaluating the de-biasing performance, but regarding other controllable generation tasks, such as text-guided generation and pose-guided generation, I didn’t see the quantitative results of how the proposed method controls the generation to satisfy the specified attribute or text. \n\nFor other concerns, please see the questions below.\n (1) Can more quantitative results on the controllability be added to show the performance of PromptGen? \n\n(2) In Figure 5(d), I think we see a severe mode collapse issue, as the background colors and object patterns are very similar across different generated images. How do you explain this phenomenon?\n\n(3) For the text-guided generation, can PromptGen perform well with more complex sentences, where we have a composition of multiple attributes, such as “a photo of a smiling baby with glasses”? These kinds of results will better demonstrate the compositionality of the proposed method. \n\n(4) For the generative models with a class-embedding space, I wonder why we need to train another latent-space generator $y=h_\theta(\xi)$? I feel like we can also directly train a single latent-space generator $z=f_\theta(\epsilon, y)$ to approximate the EBM. Any justification?\n This work has addressed its limitations well. But I didn’t see many discussions about its negative societal impact. I think this work shares with other image synthesis tools similar potential benefits and risks, which have been discussed extensively in [1]. I suggest adding more discussions about the risks of controllable image synthesis. \n\n[1] Vaccari, C. and Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media+ Society. \n" ]
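The recurring technical core of the PromptGen record above is training an INN $f_\theta$ so that $z = f_\theta(\epsilon)$ approximates the latent-space EBM $p(z \mid \mathcal{C}) \propto p(z)\exp(-E_{\mathcal{C}}(G(z)))$ by minimizing $D_{\mathrm{KL}}(p_\theta(z) \| p(z \mid \mathcal{C}))$, with $\log Z$ dropped as $\theta$-independent (the rebuttal's Eq. (9)). Below is a minimal sketch of that reverse-KL objective, using a toy diagonal affine flow in place of the paper's actual INN; `generator` and `energy_fn` are stand-ins for the frozen generative model and the off-the-shelf control, and all names are ours.

```python
import torch

d = 512                                      # latent dimensionality (assumed)
s = torch.zeros(d, requires_grad=True)       # toy INN: z = eps * exp(s) + b
b = torch.zeros(d, requires_grad=True)       # (a real INN stacks coupling layers)
opt = torch.optim.Adam([s, b], lr=1e-3)

def flow(eps):
    z = eps * torch.exp(s) + b
    log_det = s.sum().expand(eps.shape[0])   # log|det df/d_eps| of the affine map
    return z, log_det

def log_prior(z):                            # log N(z; 0, I) up to a constant
    return -0.5 * (z ** 2).sum(dim=-1)

def reverse_kl_loss(generator, energy_fn, batch=64):
    eps = torch.randn(batch, d)
    z, log_det = flow(eps)
    # E_C(G(z)) - log p(z) - log|det J|; log Z and log p(eps) are constant in theta.
    return (energy_fn(generator(z)) - log_prior(z) - log_det).mean()

# One optimization step, given some frozen `generator` and `energy_fn`:
# loss = reverse_kl_loss(generator, energy_fn); opt.zero_grad(); loss.backward(); opt.step()
```

Once trained, sampling is a single forward pass through `flow` and `generator`, which is the amortization-vs.-MCMC trade-off the PPGM comparisons above revolve around; how small the residual KL can be made in general is precisely the approximation-error question the reviewer presses on.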
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5, 4 ]
[ "rPzdRASZ-dh", "mOb2uXRwbIb", "RyNqXOwil61", "MAqlyZDBMJg", "nips_2022_Gsbnnc--bnw", "_GN8gxqmk-", "X_hfVOZcQe", "q0cBDN592SC", "_bhYvOU8pN8", "nips_2022_Gsbnnc--bnw", "nips_2022_Gsbnnc--bnw", "nips_2022_Gsbnnc--bnw", "nips_2022_Gsbnnc--bnw", "nips_2022_Gsbnnc--bnw" ]
nips_2022_LCWQ8OYsf-O
Polyhistor: Parameter-Efficient Multi-Task Adaptation for Dense Vision Tasks
Adapting large-scale pretrained models to various downstream tasks via fine-tuning is a standard method in machine learning. Recently, parameter-efficient fine-tuning methods have shown promise in adapting a pretrained model to different tasks while training only a few parameters. Despite their success, most existing methods are proposed in Natural Language Processing tasks with language Transformers, and adaptation to Computer Vision tasks with Vision Transformers remains under-explored, especially for dense vision tasks. Further, in multi-task settings, individually fine-tuning and storing separate models for different tasks is inefficient. In this work, we provide an extensive single- and multi-task parameter-efficient benchmark and examine existing parameter-efficient fine-tuning NLP methods for vision tasks. Our results on four different dense vision tasks showed that existing methods cannot be efficiently integrated due to the hierarchical nature of the Hierarchical Vision Transformers. To overcome this issue, we propose Polyhistor and Polyhistor-Lite, consisting of Decomposed HyperNetworks and Layer-wise Scaling Kernels, to share information across different tasks with a few trainable parameters. This leads to favorable performance improvements against existing parameter-efficient methods while using fewer trainable parameters. Specifically, Polyhistor achieves competitive accuracy compared to the state-of-the-art while only using less than 10% of their trainable parameters. Furthermore, our methods show larger performance gains when large networks and more pretraining data are used.
Accept
The proposed Polyhistor and Polyhistor-Lite for parameter-efficient multi-task adaptation achieve competitive performance gains on dense vision datasets. All reviewers give consistent positive scores. The requested experiments for more backbones, self-supervised backbones, and additional analyses were accordingly added during the discussion phase. Reviewer Gyt3 is concerned about the unclear explanation of the framework, and why the HyperNetwork and scalable kernels could help. The authors addressed the issues and modified the paper. The meta-reviewers thus recommend accepting this paper, and encourage the authors to add all new experiments and make the presentation clearer in the camera-ready version.
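The meta-review above flags "why HyperNetwork and scalable kernels could help" as a point needing clearer explanation. The shared mechanism in Hyperformer-style methods and the Polyhistor variants is that per-task adapter weights are *generated* from small task embeddings by one shared network, so tasks share capacity through the generator rather than through separate adapter copies. A generic sketch of that idea, in our own simplified form (Polyhistor's decomposed hypernetworks and layer-wise scaling kernels add structure omitted here, and all names are ours):

```python
import torch
import torch.nn as nn

class HyperAdapter(nn.Module):
    """One shared hypernetwork emits bottleneck-adapter weights per task."""
    def __init__(self, num_tasks, emb_dim=64, feat_dim=768, bottleneck=32):
        super().__init__()
        self.task_emb = nn.Embedding(num_tasks, emb_dim)
        # Emits the down- and up-projection matrices of a bottleneck adapter.
        self.gen = nn.Linear(emb_dim, 2 * feat_dim * bottleneck)
        self.feat_dim, self.bottleneck = feat_dim, bottleneck

    def forward(self, x, task_id):
        w = self.gen(self.task_emb(task_id))           # all adapter params for this task
        w_down, w_up = w.split(self.feat_dim * self.bottleneck)
        w_down = w_down.view(self.bottleneck, self.feat_dim)
        w_up = w_up.view(self.feat_dim, self.bottleneck)
        h = torch.relu(x @ w_down.T)                   # down-project
        return x + h @ w_up.T                          # up-project + residual

adapter = HyperAdapter(num_tasks=4)
out = adapter(torch.randn(8, 196, 768), torch.tensor(2))  # features for task 2
```

Only the hypernetwork and task embeddings train; the backbone stays frozen, and the generated weights can be cached offline per task, which is why the sequential-training rebuttal below reports no forgetting on previously stored adapters.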
train
[ "4q58QxRTGT", "0Zc2CCLW99fO", "VqN8Z0RMQ6a", "pFuS339hGX6", "Mv5uZT4Esjf", "_rAU6gSeBy3", "xMsjF5Jfm7v", "h2uf4ti5i6v", "TKMQ4dd9mtk", "hAYmh8ei0x0", "uNbSqOdBwcY", "l2YQMJZbt90", "xyjO6EvinD7" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. I will keep my score unchanged.", " Thank the authors for the response. \nI have no further questions. After reading the rebuttal and other reviewers' comments, I would like to keep my score at 7. ", " We thank all reviewers for providing constructive thoughtful feedback!\n\nWe are deeply encouraged by the reviewers’ positive comments such as *“conducts a thorough study”* (R #Qanq, R #eYWB), *“novel method”* (R #Qanq), *“reasonable method”* (R #eYWB, R #zAUZ), *“achieves a competitive performance gain”* (R #Qanq), *“interesting to explore the parameter-efficient methods for dense vision tasks”* (R #eYWB), *“presentation is clear”* (R #eYWB, R #zAUZ), and *“benchmark is helpful to the community”* (R #zAUZ). \n\n---\n\nWe appreciate the above positive comments and would like to provide more experiments and analyses to further improve our paper. We summarize the additional experiments/analyses we made in the rebuttal per the suggestions from the reviewers.\n\n- **[Experiment added]** We show our proposed Polyhistor and Polyhistor-Lite can be applied to other backbone architectures (e.g., Pyramid Vision Transformer [1]), and our proposed methods can achieve comparable results to the SoTA method (i.e., Hyperformer) by using significantly fewer trainable parameters.\n- **[Experiment added]** We show our proposed Polyhistor and Polyhistor-Lite can be applied to self-supervised models (e.g., MoBY [2]; self-supervised SwinTransformer), and our proposed methods can achieve competitive or even better results against the other methods with fewer trainable parameters.\n- **[Experiment added]** We examine our proposed method under sequential multi-tasking learning. \n- **[Analysis added]** We vary different down-projection ratios of adapters and report their results on multiple vision tasks.\n\n---\n\nIn addition, we revised our paper as listed in the following points, and we colorize them blue in the revised version. \n\n- We modified Figure 2 and its caption for a better understanding of our framework.\n- We made the comparison to VPT more concisely and put an in-depth discussion in the appendix.\n- We added the above new experiments and their discussion in the appendix. \n\nThank you again for your time and effort in reviewing our paper!\n\n\n---\n\nReference:\n\n[1] “Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions”, Wang et al., ICCV’ 2021\n\n[2] “Self-Supervised Learning with Swin Transformers”, Xie et al., arXiv 2021\n\n", " ***Q2: What if the tasks come one by one?***\n\n**A:** We thank the reviewer for asking this question, and this is an interesting question worth exploring. Therefore, we consider the sequential tasks scenario, where each task is available at a time and four tasks sequentially appear.\n\n- **[Comparison between joint training and sequential training]** Since the joint training allows one to use the training data from all tasks to jointly train the hypernetwork, the hypernetwork can share information across similar tasks and thus achieve slightly higher results compared to the sequential training. However, the sequential training does not have the constraint of accessing all training data from all tasks at the same time. \n\n- **[Benefit of our method in sequential training scenario]** Usually, the sequential training or continual learning scenario has the forgetting issue [3, 4], where the model learned on new tasks degrades its accuracy of previous tasks. 
Interestingly, we find our framework does not suffer from such an issue. \n\n It is because our adapter weights are generated from the hypernetwork and can be stored ***offline***, and we can easily insert the stored adapters into the pretrained feature backbone to perform the learned task (note that the pretrained model is frozen all the time). Therefore, even when the hypernetwork is trained to learn new tasks and generates adapter weights for the new tasks, the learning of new tasks will not affect the model inference of the previous tasks. \n\n To be more specific, once we learned the hyper-network to generate the adapters for a specific task, we can store the adapters offline and use the trained hypernetwork to continually learn new tasks. Although the forgetting issue might appear in the hypernetwork, the previously stored adapters will not be affected and can be inserted into the pretrained feature backbone to perform previously learned tasks. \n\n\n| | Time | Seg.↑ (mIoU) | H.Seg.↑ (mIoU) | Sal.↑ (mIoU) | Normals↓ (mErr) |\n|:-------------:|:---:|:------------:|:--------------:|:------------:|:---------------:|\n| Sequential | T=0 | 69.79 | - | - | - |\n| | T=1 | 69.79 | 63.99 | - | - |\n| | T=2 | 69.79 | 63.99 | 58.59 | - |\n| | T=3 | 69.79 | 63.99 | 58.59 | 17.87 |\n| | | | | | |\n| Joint 4-tasks | - | 70.24 | 64.75 | 59.12 | 17.40 |\n\n\n---\n\n**Reference:**\n\n[1] “Self-Supervised Learning with Swin Transformers”, Xie et al., arXiv 2021\n\n[2] “Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks”, Mahabadi et al., ACL 2021\n\n[3] “Overcoming catastrophic forgetting in neural networks”, Kirkpatrick et al., arXiv 2016\n\n[4] “Learning without Forgetting”, Li et al., PAMI 2018\n", " ***Q1: Could this method be applied to self-supervised backbones?***\n\n**A:** Thanks for the suggestion. We are also curious about this interesting question and conducted an experiment using the self-supervised Swim Transformer-Tiny (MoBY-Tiny [1]). For a fair comparison, we also run all baselines with MoBY-Tiny and report the results in the following Table.\n\n| | ▼ Results of **MoBY-Tiny [1]** | | | | | |\n|:----------------------------:|:------------------------------------------------:|:----------------:|:------------------:|:----------------:|:-------------------:|:-----------------:|\n| **Methods** | **Trainable Parameters (Encoder/All; Millions)** | **Seg.↑ (mIoU)** | **H.Seg.↑ (mIoU)** | **Sal.↑ (mIoU)** | **Normals↓ (mErr)** | **Avg. 
Improve.** |\n| Single-task full fine-tuning | 110.07/112.62 | 65.52 | 61.78 | 62.05 | 18.14 | 0.00% |\n| Fine-tuning Decoders | 0.00/2.55 | 59.64 | 52.97 | 59.60 | 19.88 | -9.21% |\n| Bitfit | 0.30/2.85 | 63.43 | 54.90 | 59.50 | 19.80 | -6.90% |\n| VPT-shallow | 0.02/2.57 | 59.50 | 52.84 | 59.48 | 19.88 | -9.36% |\n| VPT-deep | 0.88/3.43 | 56.15 | 50.30 | 57.22 | 20.71 | -13.72% |\n| Adapter | 8.69/11.24 | 65.00 | 56.66 | 60.84 | 18.64 | -3.45% |\n| LoRA | 0.32/2.87 | 65.64 | 57.66 | 62.29 | 18.47 | -1.99% |\n| Low-rank adapter | 0.34/2.89 | 63.30 | 55.24 | 59.72 | 19.14 | -5.82% |\n| PHM layer | 0.59/3.14 | 63.21 | 54.99 | 59.70 | 19.13 | -5.95% |\n| Compacter++ | 0.11/2.66 | 62.31 | 54.69 | 59.43 | 19.58 | -7.14% |\n| Hyperformer | 19.29/44.25 | 66.50 | 58.97 | 66.02 | 17.61 | 1.56% |\n| **Polyhistor** | **6.41/8.96** | **67.69** | **59.32** | **65.15** | **17.43** | **2.05%** |\n| **Polyhistor-Lite** | **0.41/2.96** | **67.23** | **58.90** | **64.62** | **17.72** | **1.09%** |\n\n---", " ***Q1: Would the proposed method also work well with other hierarchical vision transformers?***\n\n**A:**\nYes, our method can be applied to other backbones. In addition to applying it to the Swin Transformer in the paper, for the rebuttal we further apply our method and other baseline methods to the **Pyramid Vision Transformer** [1] as shown in the Table. The conclusions are consistent with our original experiments. We find our Polyhistor can achieve comparable results to Hyperformer while using much fewer trainable parameters. Polyhistor-lite can further reduce trainable parameters and achieve higher accuracy than all other methods using a similar amount of trainable parameters (e.g., BitFit, PHM layer, Compacter, LoRA, and Low-rank Adapter). This trend is aligned with what we found in the original experiments when using Swin Transformer. With these new experiments, we show that our method generalizes to different backbones. \n\n| | ▼ Results of **PVT** | | | | | |\n|:----------------------------:|:--------------------------------------------:|:------------:|:--------------:|:------------:|:---------------:|:-------------:|\n| **Method** | **Trainable Parameters (Encoder/All; Millions)** | **Seg.↑ (mIoU)** | **H.Seg.↑ (mIoU)** | **Sal.↑ (mIoU)** | **Normals↓ (mErr)** | **Avg. Improve.** |\n| Single-task full fine-tuning | 0.00/97.99 | 68.81 | 61.27 | 62.67 | 17.55 | 0.00% |\n| Fine-tuning Decoders | 0.00/2.11 | 64.86 | 51.18 | 61.54 | 19.55 | -8.85% |\n| Bitfit | 0.22/2.34 | 71.41 | 55.71 | 64.08 | 18.69 | -2.38% |\n| Adapter | 0.79/2.90 | 71.94 | 56.38 | 64.16 | 18.75 | -1.97% |\n| LoRA | 0.30/2.41 | 71.89 | 56.90 | 64.27 | 18.48 | -1.35% |\n| Low-Rank adapter | 0.25/2.36 | 70.72 | 55.34 | 63.39 | 18.70 | -3.08% |\n| PHM layer | 0.42/2.53 | 70.81 | 55.02 | 63.51 | 18.75 | -3.20% |\n| Compacter++ | 0.09/2.20 | 70.29 | 54.80 | 63.16 | 18.82 | -3.71% |\n| Hyperformer | 14.03/16.14 | 70.81 | 57.76 | 65.49 | 17.75 | 0.14% |\n| **Polyhistor** | **5.21/7.32** | **71.00** | **57.52** | **65.83** | **17.83** | **0.13%** |\n| **Polyhistor-Lite** | **0.29/2.40** | **70.93** | **56.71** | **65.00** | **17.95** | **-0.73%** |\n\n---\n\n***Q2: In Figure 2(b), the Transformer in the lower part has no direct connection with the upper part and is meaningless.***\n\n**A:** Thanks for pointing this out, and we modified this figure for better understanding. 
We intend to show that channel sizes and adapter weight sizes are different in different blocks of the hierarchical vision transformers.\n\n---\n\n**Reference:**\n\n[1] “Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions”, Wang et al., ICCV’ 2021\n", " ***Q1: The framework design of the proposed Polyhistor is not explained clearly. The usage of schematic diagram makes the idea easier to follow and understand.***\n\n**A:** Thanks for the suggestion. We will make some changes in the revised paper to help make it easier to understand.\n- We made the description of Visual prompt tuning more concisely in the related works.\n- We modified the figure of our main framework (Figure 2b) and made the framework clear. \n- We modified the caption of Figure 2 to clarify the framework design of Polyhistor and Polyhistor-Lite.\n\n---\n\n***Q2: The difference between Visual Prompt Tuning and this work could be compared more concisely.***\n\n**A:** Thanks for the suggestion. We made the comparison between Visual Prompt Tuning (VPT) and our work more concisely in the main paper (line 125-126). We also provided the empirical comparison to VPT in all our experimental cases. In addition, we would be happy to provide a more in-depth comparison to VPT in the following paragraphs, and we put these additional discussions in the appendix (Section 1.7 of the revised appendix). \n\n\n- **[Different Problem Settings]** VPT focuses on single-task parameter-efficient adaptation, while our proposed method focuses on multi-task parameter-efficient adaptation. Our goal is to perform a parameter-efficient adaptation for multiple tasks and share the beneficial information across multiple vision tasks. \n- **[Different types of parameter-efficient methods]** VPT adds learnable parameters along with the visual embeddings, while our proposed method utilizes a shared hyper-network to produce the adapter weights for different tasks. Also, the insertion locations of learnable parameters are different (VPT: input space, Ours: parallel to fully-connected layers). \n\n---\n\n***Q3: Introducing more about HyperNetwork and the proposed Scalable Kernel could help.***\n\n\n**A:** Thanks for the suggestion. We added more descriptions of Hypernetwork and Scalable Kernel. \n- **[HyperNetwork]** To learn the jointly beneficial information across different visual tasks, we introduce a pair of hyper-networks, which are learnable individual modules, to produce the weights of the adapters inserted in the dense prediction model. Different from the prior work [1], we decompose the adapter weight into two low-rank matrices and thus significantly reduce the parameters used in the hypernetworks (as shown in Section 4.1).\n\n- **[Scaling Kernels]** Scaling Kernels are proposed to address the quadratically growing parameters issue for hierarchical vision transformers, and these layer-wise Scaling Kernels are then combined with the Template Kernels (produced by the hypernetworks) by using Kronecker Produce. In this way, we can efficiently scale up the Template Kernels and fit them into transformer layers with different scales (as shown in Section 4.2).\n\n---\n\n***Q4: Typos.***\n\n**A:** Thanks for pointing this out. We have corrected this accordingly. ", " ***Q2: Can the proposed method be applied to other backbones?***\n\n**A:** Yes, our method can be applied to other backbones. 
In addition to applying it to the Swin Transformer in the paper, for the rebuttal we further apply our method and other baseline methods to the **Pyramid Vision Transformer** [1] as shown in the Table. The conclusions are consistent with our original experiments. We find our Polyhistor can achieve comparable results to Hyperformer while using much fewer trainable parameters. Polyhistor-lite can further reduce trainable parameters and achieve higher accuracy than all other methods using a similar amount of trainable parameters (e.g., BitFit, PHM layer, Compacter, LoRA, and Low-rank Adapter). This trend is aligned with what we found in the original experiments when using Swin Transformer. With these new experiments, we show that our method generalizes to different backbones. \n\n| | ▼ Results of **PVT** | | | | | |\n|:----------------------------:|:--------------------------------------------:|:------------:|:--------------:|:------------:|:---------------:|:-------------:|\n| **Method** | **Trainable Parameters (Encoder/All; Millions)** | **Seg.↑ (mIoU)** | **H.Seg.↑ (mIoU)** | **Sal.↑ (mIoU)** | **Normals↓ (mErr)** | **Avg. Improve.** |\n| Single-task full fine-tuning | 0.00/97.99 | 68.81 | 61.27 | 62.67 | 17.55 | 0.00% |\n| Fine-tuning Decoders | 0.00/2.11 | 64.86 | 51.18 | 61.54 | 19.55 | -8.85% |\n| Bitfit | 0.22/2.34 | 71.41 | 55.71 | 64.08 | 18.69 | -2.38% |\n| Adapter | 0.79/2.90 | 71.94 | 56.38 | 64.16 | 18.75 | -1.97% |\n| LoRA | 0.30/2.41 | 71.89 | 56.90 | 64.27 | 18.48 | -1.35% |\n| Low-Rank adapter | 0.25/2.36 | 70.72 | 55.34 | 63.39 | 18.70 | -3.08% |\n| PHM layer | 0.42/2.53 | 70.81 | 55.02 | 63.51 | 18.75 | -3.20% |\n| Compacter++ | 0.09/2.20 | 70.29 | 54.80 | 63.16 | 18.82 | -3.71% |\n| Hyperformer | 14.03/16.14 | 70.81 | 57.76 | 65.49 | 17.75 | 0.14% |\n| **Polyhistor** | **5.21/7.32** | **71.00** | **57.52** | **65.83** | **17.83** | **0.13%** |\n| **Polyhistor-Lite** | **0.29/2.40** | **70.93** | **56.71** | **65.00** | **17.95** | **-0.73%** |\n\n\n\n\n**Reference:**\n\n[1] “Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions”, Wang et al., ICCV’ 2021", " ***Q1: Results of different down-project ratios of adapters?***\n\n**A:** We vary the down-projection ratios (ρ) of the adapters and report the results in the Table. We find that the semantic segmentation reaches the near-optimal performance when the small adapters are used (ρ = 32). However, for other dense prediction tasks, there exist obvious gaps when the smaller adapters are used, and averaged relative improvement shrinks when the adapter sizes are smaller.\nThis suggests that the required network capacity for semantic segmentation is sufficient when small adapters are used, while other dense prediction tasks require more trainable parameters. \n\nSuch a trend potentially comes from the usage of a backbone pretrained on image classification tasks with overlapping object categories (ImageNet). Such a backbone is expected to contain similar semantic information required by semantic segmentation, so that using a limited amount of trainable parameters can achieve near-optimal results. \n\n| Down-Proj. Ratio | Methods | Trainable Parameters (Encoder/ All; Millions) | Seg.↑ (mIoU) | H.Seg.↑ (mIoU) | Sal.↑ (mIoU) | Normals↓ (mErr) | Avg. Improve. 
|\n|:-----------------:|:----:|:---------------------------------------------:|:------------:|:--------------:|:------------:|:---------------:|:--------------:|\n| 1 | Ours | 1.291/1.3353 | 73.7 | 63.32 | 66.5 | 16.93 | 6.38% |\n| 2 | Ours | 0.6213/3.6656 | 73.69 | 63.04 | 66.56 | 17.301 | 5.80% |\n| 4 | Ours | 0.3862/3.4305 | 73.57 | 62.04 | 65.84 | 17.7 | 4.55% |\n| 8 | Ours | 0.2937/3.338 | 73.92 | 62.15 | 65.37 | 17.7 | 4.53% |\n| 32 | Ours | 0.2352/3.2795 | 73.8 | 61.32 | 64.64 | 17.92 | 3.57% |\n\n", " This paper proposes Polyhistor and Polyhistor-Lite, consisting of Decomposed HyperNetworks and Layer-wise Scaling Kernels, to share information across different tasks with a few trainable parameters and address parameter-efficient multi-task adaptation for vision tasks. The authors construct a unified framework with the same implementation details and provide a comprehensive and fair comparison between existing parameter-efficient adaptation works in NLP on multi-tasking dense vision problems. Compared with the state-of-the-art multi-tasking parameter-efficient adaptation method, the method achieves competitive performance improvement with ∼ 90% reduction in the trainable parameters . Strengths:\n1. This paper conducts a thorough study on how the existing successful parameter efficient methods on NLP tasks perform on vision tasks, i.e., semantic segmentation, human part segmentation, saliency detection, and surface normals estimation.\n2. The authors design a novel parameter-efficient method for adaptation to dense vision tasks. Specifically, the hyper-networks take input task embeddings and layer embeddings to produce low-rank matrices and further obtain the adapter weights.\n3. Experimental results show that Polyhistor-Lite can achieve a competitive performance gain compared with the state-of-the-art method and only use a very limited amount of tunable parameters.\n\nWeaknesses:\n1. There are many hyper-parameters in this method, e.g., task embedding size, dimension of ranks, and the down-projection ratio of adapters. Searching for suitable hyper-parameters costs a lot of resources.\n2. The method is implemented based on SwinTransformer backbone. It would be better to conduct experiments on more other backbones. 1. In Table 1 in Appendix, why the performance of Polyhistor-Lite (ρ = 32) in semantic segmentation is slightly better than Polyhistor-Lite (ρ = 1). It would be better to show more results of different down-projection ratio of adapters.\n2. Can this method be applied to other backbones? The method is novel and solve multiple tasks with limited tunable parameters. However, there are several hyper-parameters needed to be tuned in this method. Besides, The method only focuses on dense vision tasks. It would be better to include more common vision tasks like object detection.", " The manuscript provides an extensive multi-task parameter-efficient benchmark and examines existing parameter-efficient fine-tuning NLP methods for vision tasks. The main contribution is that this work is the first to address parameter-efficient multi-task adaptation for vision tasks, and developed a unified framework to benchmark several parameter-efficient fine-tuning NLP methods on dense vision tasks. The paper is mostly written in a good manner, and the idea is straight yet effective. The paper seems to combine several existing methods, and extend their applicable scene, making the contribution of this work less of a strength. 
However, they did several modifications to the existing methodologies, which should be detailed and illustrated. Major:\n1.\tThe framework design of proposed Polyhistor is not explained clearly. Illustration on main steps of parameters choosing as well as the formulations are encouraged, and the usage of schematic diagram makes the idea easier to follow and understand.\n2.\tThe difference between Visual Prompt Tuning and this work could be compared more concisely. \n3.\tThe novelty of this work mainly focused on the improving HyperNetwork and proposed scalable kernel, thus introduce more about these ideas would be of help.\n\nMinor:\n1.\tA small grammar mistake: in line 82, it should be ‘a unified framework’, not ‘an’.\n The true novelty of the work should be further justified.", " This paper proposes a parameter-efficient multi-task adaptation method for dense vision tasks, called Polyhistor-lite. It is used to adapt a pre-trained hierarchical vision transformer for solving multiple dense vision tasks. The proposed method consists of two aspects, the Decomposed Hyper-networks and the Layer-Wise Scaling Kernels. Models are evaluated on PASCAL-Context datasets for semantic segmentation, human part segmentation, surface normals estimation, and saliency detection and are shown to be effective. \n**Strengths**\n1. It is interesting to explore the parameter-efficient adaptation techniques for dense vision tasks. It has not been investigated in this area. \n2. The presentation of this work is clear. This paper also provides a detailed discussion of the differences and relations with parameter-efficient multi-task adaptation methods in NLP tasks. \n3. The proposed parameter-sharing method is reasonable and is shown to be helpful in reducing the learning parameters but also keeping relatively high performance. \n4. Comparisons with existing approaches are thorough and significant.\n\n**Weaknesses** \n1. In Figure 2(b), the Transformer in the lower part has no direct connection with the upper part and is meaningless. \n2. The proposed method is only evaluated with Swin-Transformer. As claimed by this paper, it is designed for hierarchical vision transformers. Would it also work well with other hierarchical vision transformers?\n \nSee Weaknesses point 2 for the question. Overall, this proposed method is novel and effective with well-presentation. \n Limitations are discussed in the paper. ", " The paper proposes Polyhistor, a parameter efficient tuning method for jointly tuned dense vision tasks. Previous adapter-like methods in NLP are benchmarked in detail for dense vision tasks. The proposed method is proved to give better parameter-performance trade-off on the studied tasks. Strengths:\n\n1. The paper is neatly written. It is easy to follow and understand.\n2. I haven't seen a lot of paper to study parameter-efficient tuning specifically for dense vision tasks, so the benchmark for adapter-like methods in this paper should be helpful for the community. The authors also promised to release the code. \n3. The idea makes sense to me, and the performance is well-supported by the experiments. Low-rank decomposition and dynamic weight are nothing new, but they are properly applied. The Layer-wise Scaling Kernel is novel and effective.\n\nWeakness:\n\nThe comparison w.r.t single task baselines doesn't seem fair to me. A model jointly trained on similar tasks is expected to outperform models trained on single tasks respectively. I feel results on single-task should be reported. 
It's ok to perform worse on a few tasks, since this paper is mainly comparing with multitask methods, but the results should be presented for completeness.\n I am just curious about why the four tasks are jointly tuned. The reason we need parameter-efficient tuning is that we cannot afford to replicate a model with 500B parameters on 1000 downstream tasks, especially when the tasks come one by one, unexpectedly. If we jointly tune the model for a few tasks, what do we do with the upcoming tasks?\n\nI do not mean to criticize the paper on this point. It's a good paper, and the world needs diversity. I just have this confusion, and some other readers may also have it. So, maybe this is a problem worth addressing in the paper. 1. Only the pretrained Swin Transformer is studied in this paper, but I feel this method can be easily extended to ConvNets.\n2. More experiments on models pretrained on SSL tasks would also make this paper stronger, since SSL models are competitive against IN-pretrained models, and the features are very different. The behavior of a method can vary considerably depending on the pretraining task." ]
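The rebuttal thread above repeatedly invokes two mechanisms: a shared hypernetwork that emits low-rank factors per (task, layer) whose product forms a small "template" adapter kernel, and layer-wise scaling kernels combined with that template via a Kronecker product. To make this concrete, here is a minimal PyTorch-style sketch of the general idea; it is an illustration under assumptions, not the authors' released implementation, and the module name `TaskHyperNetwork`, the dimensions, and the embedding interface are all hypothetical.

```python
import torch
import torch.nn as nn

class TaskHyperNetwork(nn.Module):
    """Hypothetical hypernetwork: maps a (task, layer) embedding to two
    low-rank factors whose product forms a small template adapter kernel."""
    def __init__(self, emb_dim: int, template_dim: int, rank: int):
        super().__init__()
        self.factor_a = nn.Linear(emb_dim, template_dim * rank)
        self.factor_b = nn.Linear(emb_dim, rank * template_dim)
        self.template_dim, self.rank = template_dim, rank

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        a = self.factor_a(emb).view(self.template_dim, self.rank)
        b = self.factor_b(emb).view(self.rank, self.template_dim)
        return a @ b  # low-rank template kernel, shape (t, t)

def scale_template(scaling: torch.Tensor, template: torch.Tensor) -> torch.Tensor:
    """Expand the small template to a layer's full adapter size with a
    learnable per-layer scaling kernel, via the Kronecker product."""
    return torch.kron(scaling, template)

hyper = TaskHyperNetwork(emb_dim=64, template_dim=8, rank=4)
task_layer_emb = torch.randn(64)                    # hypothetical task+layer embedding
template = hyper(task_layer_emb)                    # (8, 8) template kernel
scaling = nn.Parameter(torch.randn(12, 12))         # one small kernel per layer scale
adapter_weight = scale_template(scaling, template)  # (96, 96) full adapter weight
```

The design point the sketch reflects: the hypernetwork's output size depends only on the small template dimension and rank, while each transformer stage contributes just one small scaling kernel, so trainable parameters do not grow quadratically with layer width in hierarchical backbones.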
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "pFuS339hGX6", "_rAU6gSeBy3", "nips_2022_LCWQ8OYsf-O", "Mv5uZT4Esjf", "xyjO6EvinD7", "l2YQMJZbt90", "uNbSqOdBwcY", "TKMQ4dd9mtk", "hAYmh8ei0x0", "nips_2022_LCWQ8OYsf-O", "nips_2022_LCWQ8OYsf-O", "nips_2022_LCWQ8OYsf-O", "nips_2022_LCWQ8OYsf-O" ]
nips_2022_hYa_lseXK8
Model-based Safe Deep Reinforcement Learning via a Constrained Proximal Policy Optimization Algorithm
During initial iterations of training in most Reinforcement Learning (RL) algorithms, agents perform a significant number of random exploratory steps. In the real world, this can limit the practicality of these algorithms as it can lead to potentially dangerous behavior. Hence safe exploration is a critical issue in applying RL algorithms in the real world. This problem has been recently well studied under the Constrained Markov Decision Process (CMDP) Framework, where in addition to single-stage rewards, an agent receives single-stage costs or penalties as well depending on the state transitions. The prescribed cost functions are responsible for mapping undesirable behavior at any given time-step to a scalar value. The goal then is to find a feasible policy that maximizes reward returns while constraining the cost returns to be below a prescribed threshold during training as well as deployment. We propose an On-policy Model-based Safe Deep RL algorithm in which we learn the transition dynamics of the environment in an online manner as well as find a feasible optimal policy using the Lagrangian Relaxation-based Proximal Policy Optimization. We use an ensemble of neural networks with different initializations to tackle epistemic and aleatoric uncertainty issues faced during environment model learning. We compare our approach with relevant model-free and model-based approaches in Constrained RL using the challenging Safe Reinforcement Learning benchmark - the Open AI Safety Gym. We demonstrate that our algorithm is more sample efficient and results in lower cumulative hazard violations as compared to constrained model-free approaches. Further, our approach shows better reward performance than other constrained model-based approaches in the literature.
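To make the abstract's optimization scheme concrete: the CMDP problem (maximize reward returns subject to cost returns staying below a threshold) is relaxed into an unconstrained min-max objective with a Lagrange multiplier, alternating PPO policy updates with dual ascent on the multiplier. Below is a minimal sketch of that alternation; the constants and the normalization by (1 + lambda) are illustrative assumptions, not the paper's exact update rules.

```python
lam = 0.0                         # Lagrange multiplier, kept non-negative
LAM_LR, COST_LIMIT = 0.05, 25.0   # illustrative values, not from the paper

def lagrangian_objective(reward_surrogate: float, cost_surrogate: float) -> float:
    """Penalized PPO objective: trade reward against the lam-weighted cost.
    Dividing by (1 + lam) keeps the effective step size bounded as lam grows."""
    return (reward_surrogate - lam * cost_surrogate) / (1.0 + lam)

def dual_ascent(mean_cost_return: float) -> None:
    """Gradient ascent on lam: tighten the penalty while the estimated cost
    return exceeds the limit, relax it (down to zero) otherwise."""
    global lam
    lam = max(0.0, lam + LAM_LR * (mean_cost_return - COST_LIMIT))
```

Each training iteration would maximize the first objective over policy parameters (via the usual clipped PPO surrogates) and then call the dual update with cost returns estimated from rollouts in the learned ensemble of dynamics models.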
Accept
This paper presents the Model-based PPO-Lagrangian (MBPPO-Lagrangian) algorithm for safe RL, which reduces epistemic and aleatoric uncertainty with an ensemble of neural networks. The authors evaluated the proposed algorithm on safety benchmarks such as Safety Gym (PointGoal and CarGoal), where MBPPO-Lagrangian showed better performance and stronger safety guarantees than other model-free and model-based safe RL baseline algorithms. This paper presents a model-based safe RL algorithm with broad applications to safety-critical problems. The paper is generally well written and intuitive, with most concepts clearly explained. The safety results demonstrated in the experiments are convincing, and the algorithms are easy enough to implement for most practical applications. Therefore, the review committee reached a consensus to recommend acceptance of this work to NeurIPS 2022.
train
[ "uezg7zjUD5", "OPLPi24xKoP", "7lIs5rejBL", "qNsyWuKTHO6", "VsTvJSQChey", "tZ0PHYBZs6h", "H7O4upmToN4", "X827usIJD8X", "RnWxJbUHe65", "Uhqu0DGsDrn", "D0Jy53AC0Uf", "MIaULM1ywB-", "5URMZ7Chmqa" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank you for your comments. Your suggested changes will be incorporated.", " Thanks for addressing all my concerns regarding the paper! \n\nI have two more recommendations to improve the paper. I wonder whether showing the standard deviations of graphs in Figure 1 and in Appendix D is possible. Also, it would be better to size up figures and the fonts in the figures. I saw the revised version, but the readability is still not improved. \nTo sum up, I will keep my score same, but I recommend adding more experiments as other reviewers also mentioned.\n\n", " Thanks to the authors for their feedback. Combined with other reviewers' comments, I think the current experiment evaluation is still limited, thus I will keep the same score.\n", " Wrong reference number, safe-LOOP [3] is actually safe-LOOP[1].", " We summarise below the broad changes made based on the reviewer comments.\n\n1) We have included an explanation of the environments in appendix A.\n2) We have included the variance of PPO-Lagrangian performance at convergence in plots of the main paper in Figures 2 and 3.\n3) We have added an additional experiment which shows perfomance variation of the agent with respect to $/beta$ in Appendix D on the CarGoal and RC-Car environments. \n4) As suggested by reviewers, we have unified the names PointGoal1 as PointGoal and CarGoal1 as CarGoal.\n5) We have used a more general keyword \"PR Threshold\" instead of 70% in Algorithm 1 earlier and included comments on PR threshold in Appendix A.\n6) We have included a comment on the usage of real and imaginary data in the Algorithm 1 (after L6) and explained the same in Appendix A as well.\n7) We have corrected the typos in L163 and eq-17,18.", " We thank the reviewer for taking the time to review our work! We address the concerns raised by the reviewer as follows:\n\n\n**\"About the experiment part, why does PPO-Lagrangian perform worse than MBPPO-Lagrangian asymptomatically?\"**\n\nThanks for pointing this out, we have now updated the plots with variance of asymptotic performance of the PPO-Lagrangian (shaded grey area in both plots in Figure 2) to represent the $\\pm 1$ standard deviation of the final policy performance of PPO-Lagrangian. We hadn't shown the standard deviation in the previous plots for the case of the PPO-Lagrangian. Also, in figure 4, we can observe that the 95% confidence bands of the reward performance of PPO-Lagrangian and MBPPO-Lagrangian are highly overlapping.", " We thank the reviewer for taking the time to review our work! We address the concerns raised by the reviewer as follows:\n\n**\"How was the $PR > 70$ % choice made? Did you run ablation analysis on this parameter?\"**\nThe concept of Performance ratio was introduced in Kurutach et al.[1] where they used 70% as the threshold. In our experiments, we use PR of 66% as mentioned in our supplementary material. We have updated line 5 in Algorithm 1 as \"Performance ratio > PR Threshold\" for the general use case. Thanks for pointing this out! The logic behind this number is that our agent should perform better in more than 50% of the models in the ensemble. In our case we train an ensemble of 8 models out of which we use the best 6 models with minimum validation losses to calculate PR. 
We want our agent to perform better in at least 4 out of the 6 models that gives us the value as $\\sim66 $%.\n\n\n\n**\"Can you please run Figure 1 for more environments?\"**\n\nIn the case of Safety Gym, even in model-free baselines such as PPO-Lagrangian, CPO, we observe that lower cost violations lead to lower reward performance since the agent explores pessimistically. With a lower value of beta (stricter cost threshold), we observe a lower reward performance in the case of the CarGoal environment as well. We have added a similar plot for the CarGoal environment in Appendix D of supplementary material. Other than safety gym, we have included a similar plot for the RC-Car [2] environment in Appendix D, where a car has to rotate within a circle with a certain target velocity for earning rewards. If the car goes out of the circle, it however incurs a cost. We observe a similar trend here, where lower beta, i.e., stricter threshold, leads to lower reward returns. The difference in the cost returns is also visible. \n\n**Presentation** : We have increased the font size of the text used in the figures.\n\n[1] Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement\nlearning in a handful of trials using probabilistic dynamics models. Advances in neural information processing systems, 31, 2018.\n[2] E. Ahn. Towards safe reinforcement learning in the real world. PhD thesis, 2019.", " We thank the reviewer for taking the time to review our work! We address the concerns raised by the reviewer as follows:\n\n**\"A static choice of the safety threshold seems very conservative. As training progresses I could imagine that the estimation of cost returns becomes less uncertain and thus relaxing the safety thresholds could allow for better performance in terms of rewards. How would you incorporate model uncertainty to set the safety threshold dynamically?\"**\n\nWe assume that the safety threshold is known and available (Refer L-145 of main paper) and hence we keep it static. For instance, think of a self-driving vehicle navigating in the face of uncertain traffic conditions. A safety threshold in this case could amount to having a safety bubble of say one metre around the car and if any vehicle comes within that distance, the car would then need to perform an action to save itself from a possible collision. Certainly a dynamically changing threshold would not make much sense in this case.\nFurther, in our opinion, since agent explores pessimistically it is not guaranteed that as training progress, estimation of cost returns becomes less uncertain because due to its limited exploration agent might encounter states that it hasn't seen before. This phenomena should be more pronounced in an agent which explores pessimistically as opposed to an unconstrained agent. \n\n\n**\"Is there any particular reason why the method trains the value and cost function using purely imaginary rollouts? When new data is collected to learn new dynamics models, would there be any benefits of using such data to also improve the learned value and cost functions?\"**\nThanks for pointing this out! For the first pass through the while loop (Algorithm 1, Line 5), we do mix real environment interactions with imaginary rollouts after which the policy gets updated. Then for further passes until the PR doesn't degrade we use imaginary rollouts to update policy because real environment rollouts were collected from a different policy. 
We have now included a comment on this after line 6 of Algorithm 1 in the paper.\n\n**Experiments** : Based on the reviewer's comments, we have also run additional experiments in the limited time we had on the RC-Car [2] environment for different values of $\\beta$. \nIn this task, a car has to rotate within a circle with a certain target velocity for earning rewards. If the car goes out of the circle, it however incurs a cost.\nWe have obtained initial results and have included these for now in Appendix D so that the reviewers can have a look at these additional results.\nThe model-based baseline safe-LOOP [3] however has a high running time as mentioned in Appendix C and also required code modification before we could implement the same.\n\n[1] Harshit Sikchi, Wenxuan Zhou, and David Held. Learning off-policy with online planning. CoRL, 2021. \n\n[2] E. Ahn. Towards safe reinforcement learning in the real world. PhD thesis, 2019.\n", " We thank the reviewer for taking the time to review our work! We address the concerns raised by the reviewer as follows:\n\n**\"Please explain the environments you used. Also, please explain why you used PointGoal1 and CarGoal1 with more hazards, not PointGoal2 and CarGoal2\"** : \n\nSafety Gym consists of several environments with choice of robot and difficulty of tasks. We test all baseline algorithms and\nour work on our modified version of PointGoal and CarGoal environments where robots are 'Point' and 'Car' and task correspond to 'Goal'. Please refer to the screenshot of PointGoal environment with labeled robot, hazard and goal as shown in Figure 1 of the supplementary material (SM). In Goal-based environments the aim is to reach the goal position (shown in green in Figure 1 of SM) with as few collisions as possible with hazards. If robot (shown in red in Figure 1 of SM) accesses 'hazard' positions (in blue in Figure 1 of SM), the agent incurs a cost = 1. We modify the original environment to remove vases (a fragile box-type obstacle) because cost of touching a vase is function of velocity of the vase after collision (Reference : https://github.com/openai/safety-gym/blob/master/safety_gym/envs/engine.py) \nwhich is not part of the robot's state vector. The difference between PointGoal1 and PointGoal2 is more number of vases, we compensate that with increasing number of hazards.\nWe apologise for the confusion regarding names and will unify the names \"PointGoal1\" and \"PointGoal\" and similarly for CarGoal environments.\n\n**\"Figure 4 is hard to interpret. I cannot understand what you want to say from Figure 4. Is there any reference that made you decide to show the data in this format?\"** : \nWe plot figure 4 using 'rliable' library by Agarwal et al. [2] which uses statistical techniques to provide a more robust way of comparing RL algorithms in order to deal with statistical uncertainty. We normalise scores in the respective tasks by dividing the reward performance of the final policy by the reward performance of the final policy of unconstrained PPO in that task so that the performance can be compared across tasks as well. Similarly we do this for cumulative cost violations as well. Then we plot 95% confidence intervals of mean, median, inter-quartile mean estimates computed across different runs (or seeds) and tasks. Please refer to Agarwal et al. [2] for more details of their approach. From Figure 4, we would like to convey that across all estimates (mean/median/IQM), our approach is better than other model based approaches by Sikchi et al. 
[3] in terms of reward performance (left) while it is competitive in terms of cumulative cost violations (right) since 95% confidence intervals are overlapping.\n\n\n**\"I think the comparison between yours and Liu et al.[1] should be done ... model-based.\"**: We do not benchmark Liu et al. [1] for two main reasons -- first, their optimisation problem structure consists of single-stage cost constraint while ours is across a finite horizon (See eq 1 of Liu et al. [1]) and second, they initially train their environment model by collecting random episodes for 50,000 steps as can be seen from their code (Refer line 62,63 of https://github.com/liuzuxin/safe-mbrl/blob/master/run.py) and their implementation doesn't account for hazards violations caused in that phase. Also their implementation counts three cost violations as 1 which underestimates risk of their approach (See line 117-135 of https://github.com/liuzuxin/safe-mbrl/blob/master/utils/env_utils.py, see also line 16 of https://github.com/liuzuxin/safe-mbrl/blob/master/run.py).\nThanks for pointing out typos in L163 and Eq 17,18, we have corrected them in the paper.\n\n**Experiments** : Based on the reviewer's comments, we have also run additional experiments in the limited time we had on the RC-Car [2] environment for different values of $\\beta$. \nIn this task, a car has to rotate within a circle with a certain target velocity for earning rewards. If the car goes out of the circle, it however incurs a cost.\nWe have obtained initial results and have included these for now in Appendix D so that the reviewers can have a look at these additional results.\nThe model-based baseline safe-LOOP [3] however has a high running time as mentioned in Appendix C and also required code modification before we could implement the same.\n\n[1] Zuxin Liu, Hongyi Zhou, Baiming Chen, Sicheng Zhong, Martial Hebert, and Ding Zhao. Safe model-based reinforcement learning with robust cross-entropy method, 2020.\n\n[2] Rishabh Agarwal, Max Schwarzer, Pablo Samuel Castro, Aaron C Courville, and Marc Bellemare. Deep reinforcement learning at the edge of the statistical precipice. Advances in Neural Information 326 Processing Systems, 34, 2021.\n\n[3] Harshit Sikchi, Wenxuan Zhou, and David Held. Learning off-policy with online planning. CoRL, 2021.\n\n[4] E. Ahn. Towards safe reinforcement learning in the real world. PhD thesis, 2019.", " This paper presents Model-based PPO-Lagrangian (MBPPO-Lagrangian) algorithm for safe RL, which reduces epistemic and aleatoric uncertainty with an ensemble of neural networks and solves the underestimation problem of cost returns in a truncated horizon with the stricter threshold using a hyperparameter. The authors compared the algorithm in Safety Gym: PointGoal1 and CarGoal1. MBPPO-Lagrangian showed higher cumulative reward and lower cumulative cost performance than baseline methods. The authors successfully applied a model-based approach to the PPO-Lagrangian method by reducing the uncertainty of state transition using an ensemble of neural networks. Also, they suggested a simple solution that uses a hyperparameter to fix an innate underestimation-of-cost problem in the model-based approach that assumes a truncated horizon.\n\nHowever, I suggest the authors strengthen the Experiments part. The environments used in experiments are not sufficiently explained and I feel the paper needs more baseline experiments. The more detailed suggestions will be described in the next section. 
\n <Explanation of experiments>\n\n- Please explain the environments you used. Also, please explain why you used PointGoal1 and CarGoal1 with more hazards, not PointGoal2 and CarGoal2. \n- The names ``PointGoal`` and ``PointGoal1`` both seem to be used to name the same environment (e.g. Figure 2), so please unify the name. The same goes for the CarGoal1 environment too. \n- Figure 4 is hard to interpret. I cannot understand what you want to say from Figure 4. Is there any reference that made you decide to show the data in this format? \n\n<More baselines>\n\nI think the comparison between yours and Liu et al.[1] should be done since you mentioned the common points in L264. Also, Liu’s paper dealt with the same environment as yours and they are also model-based. \n\n<Details>\n\n- L163: is $\\rightarrow$ in \n- I think $t+1$ should be $t+l$ in Eq. 17 and Eq. 18.\n\n[1] Zuxin Liu, Hongyi Zhou, Baiming Chen, Sicheng Zhong, Martial Hebert, and Ding Zhao. Safe model-based reinforcement learning with robust cross-entropy method, 2020.\n Yes, the authors mentioned the limitation at the end of the conclusion.", " The paper presents a model-based method for safe RL that learns; a dynamics model of the environment, a reward value function, a cost value function, and a policy. The method leverages the learned dynamic models to generate imaginary rollouts to learn the reward and cost value functions, which are then used by a lagrangian relaxation of PPO in the setting of constrained Markov decision processes. \n\nThe proposed method is evaluated in the Open AI safety Gym, where it achieved better rewards than constrained model-based baselines and it also obtained better sample efficiency with lower constraints violation than other constrained model-free approaches. Strengths:\n- The presented method is simple and builds on well-established methods. The results obtained a balance of the benefits of constrained model-based and model-free baselines, showcasing better sample efficiency with low constraint violation wrt to constrained model-free approaches and better rewards wrt model-based methods.\n\n- The paper is clear and well written. It presents a comprehensive related work and introduces a detailed background. \n\nWeaknesses:\n- The approach is evaluated in only two environments (PointGoal, CarGoal). Although the environments were modified to make them more challenging, the work would benefit from evaluating the method in more setups like the Doggo environment (which is partially reported in the appendix), or other variations of the Point and Car setting like the Button or Push setups from the Open AI Safety Gym.\n\n- The method seems to struggle with longer time horizon tasks, due to the compounding errors of the learned dynamics, reward value, and cost functions. A conservative safety margin was introduced to deal especially with the underestimation of cost returns. However, such a safety margin results in a strong tradeoff wrt the reward performance. \n - A static choice of the safety threshold seems very conservative. As training progresses I could imagine that the estimation of cost returns becomes less uncertain and thus relaxing the safety thresholds could allow for better performance in terms of rewards. How would you incorporate model uncertainty to set the safety threshold dynamically?\n\n- Is there any particular reason why the method trains the value and cost function using purely imaginary rollouts? 
When new data is collected to learn new dynamics models, would there be any benefits of using such data to also improve the learned value and cost functions? Yes. Limitations of the method are mentioned in the conclusions. The potential societal impact was qualified as Not Applicable by the authors. \n", " The authors propose a new algorithm for a model-based safe RL (under the framework of CMDP). The proposed variant is modification of the PPO algorithm, namely by learning a model of the environment and introducing lagrangian relaxation to make sure the policy satisfies the safety constraints.\n The paper is well written and easy to understand. The main algorithm - Algorithm 1 - is also very helpful.\n\nThe motivation is sound, the results are interesting, and the algorithm itself seems easy to implement.\n\nWhat I find as a satisfying sanity check is the fact that unconstrained PPO does reach the same or better performance as MBPPO-Lagrangian. I understand that as the graphs (Figure 2 and 3) use as x-axis the number of environment interactions, the model-based approach is likely to do better initially. But as we keep going, PPO really needs to get there, especially as it is unconstrained and it clearly heavily violates the constraints.\n\n\nWeaknesses\n\n> [266] CarGoal1 by increasing the number of hazards from 10 to 15.\n\n I am not a fan of this.. It’s fine if that’s another environment in the experiments, but the original unmodified environment should also be included.\n\nPR > 70% is a bit of an arbitrary choice, and I don’t see this being discussed anywhere. \n\nDue to the truncated horizon, the introduced \\beta parameter and associated Equation (26) is something that is likely to be domain-specific. I would appreciate it if the authors could run Figure1 for multiple environments to see how the effect of beta varies across domains.\n\nAll the Figures, especially Figure 1 and Figure 4 are very hard to read, please increase the font size.\n How was the PR > 70% choice made? Did you run ablation analysis on this parameter?\n\nCan you please run Figure 1 for more environments? Adressed.", " This paper proposes a safe reinforcement learning method in a model-based manner. The method uses the Lagrangian relaxation of the original constrained optimization problem and then uses dual gradient descent to find the saddle point. The method also improves the sample efficiency by learning an ensemble of dynamics models. Experiment results on Safety Gym environments demonstrate its effectiveness over model-free and model-based safe RL baselines. ### Strength\n- Proposed method uses an ensemble of dynamics models to improve sample efficiency and address the problem of epistemic uncertainties.\n- Experiment results show that MBPPO-Lagrangian significantly outperformstones in Safety Gym environments.\n- It found that the safety criteria should be tighter in order to achieve better performance when using truncated model rollouts.\n\n### Weakness\n- It might be better if the evaluation part could be more comprehensive since only the results of two tasks are reported.\n- The authors use dual gradient descent to find the local saddle point of the Lagrangian relaxation of the original constrained problem. It would be better if they can provide some mathematical analysis of the effectiveness of this approach. - About the experiment part, why does PPO-Lagrangian perform worse than MBPPO-Lagrangian asymptomatically? 
Since PPO-Lagrangian uses rollouts from the real environment while MBPPO-Lagrangian learns from model rollouts, the latter can have better sample efficiency but suffer from biases. So I was wondering why MBPPO-Lagrangian can outperform PPO-Lagrangian in terms of asymptotic performance. NA" ]
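One detail from the exchange above that is easy to state in code is the performance-ratio stopping rule: train 8 dynamics models, keep the 6 with lowest validation loss, and continue policy updates only while the policy improves under at least 4 of those 6, i.e., a PR threshold of 4/6 (roughly 66%). A toy sketch follows; the randomly generated returns are stand-ins for real imagined rollout returns.

```python
import random

KEPT_MODELS, REQUIRED_WINS = 6, 4   # best 6 of an 8-model ensemble; need 4 wins

def performance_ratio(new_returns, old_returns) -> float:
    """Fraction of retained ensemble models in which the updated policy's
    imagined return improved over the previous iteration."""
    wins = sum(n > o for n, o in zip(new_returns, old_returns))
    return wins / len(new_returns)

# Toy stand-ins: one imagined return per retained dynamics model.
random.seed(0)
old = [random.uniform(0.0, 100.0) for _ in range(KEPT_MODELS)]
new = [r + random.uniform(-5.0, 10.0) for r in old]

pr = performance_ratio(new, old)
print(f"PR = {pr:.2f}, continue updates: {pr > REQUIRED_WINS / KEPT_MODELS}")
```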
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "OPLPi24xKoP", "RnWxJbUHe65", "tZ0PHYBZs6h", "X827usIJD8X", "nips_2022_hYa_lseXK8", "5URMZ7Chmqa", "MIaULM1ywB-", "D0Jy53AC0Uf", "Uhqu0DGsDrn", "nips_2022_hYa_lseXK8", "nips_2022_hYa_lseXK8", "nips_2022_hYa_lseXK8", "nips_2022_hYa_lseXK8" ]
nips_2022_wO53HILzu65
On the Generalizability and Predictability of Recommender Systems
While other areas of machine learning have seen more and more automation, designing a high-performing recommender system still requires a high level of human effort. Furthermore, recent work has shown that modern recommender system algorithms do not always improve over well-tuned baselines. A natural follow-up question is, "how do we choose the right algorithm for a new dataset and performance metric?" In this work, we start by giving the first large-scale study of recommender system approaches by comparing 24 algorithms and 100 sets of hyperparameters across 85 datasets and 315 metrics. We find that the best algorithms and hyperparameters are highly dependent on the dataset and performance metric. However, there is also a strong correlation between the performance of each algorithm and various meta-features of the datasets. Motivated by these findings, we create RecZilla, a meta-learning approach to recommender systems that uses a model to predict the best algorithm and hyperparameters for new, unseen datasets. By using far more meta-training data than prior work, RecZilla is able to substantially reduce the level of human involvement when faced with a new recommender system application. We not only release our code and pretrained RecZilla models, but also all of our raw experimental results, so that practitioners can train a RecZilla model for their desired performance metric: https://github.com/naszilla/reczilla.
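As a concrete reading of the RecZilla recipe sketched in the abstract (featurize each dataset, learn a mapping from meta-features plus an algorithm identifier to the target metric, then rank algorithms on a new dataset by predicted metric), here is a toy end-to-end sketch. Everything in it is an assumption for illustration: the meta-features, the random-forest meta-model, and the synthetic metric targets are stand-ins, not the released RecZilla pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def meta_features(ratings: np.ndarray) -> np.ndarray:
    """A few simple dataset descriptors; stand-ins for real meta-features."""
    n_users, n_items = ratings.shape
    density = np.count_nonzero(ratings) / ratings.size
    return np.array([n_users, n_items, density, float(ratings.std())])

rng = np.random.default_rng(0)
N_ALGOS = 5
X, y = [], []
for _ in range(200):  # synthetic meta-dataset of (dataset, algorithm) pairs
    R = rng.random((50, 80)) * (rng.random((50, 80)) < 0.1)
    for algo_id in range(N_ALGOS):
        X.append(np.concatenate([meta_features(R), [algo_id]]))
        y.append(rng.random())  # placeholder for an observed metric, e.g. PREC@10

meta_model = RandomForestRegressor(random_state=0).fit(np.array(X), np.array(y))

# New, unseen dataset: score each candidate algorithm and take the argmax.
R_new = rng.random((60, 90)) * (rng.random((60, 90)) < 0.05)
scores = [
    meta_model.predict(np.concatenate([meta_features(R_new), [a]])[None, :])[0]
    for a in range(N_ALGOS)
]
print("predicted best algorithm id:", int(np.argmax(scores)))
```

In the real pipeline the targets would come from the released raw experimental results, which is what allows the same recipe to be retrained for any of the 315 metrics mentioned in the abstract.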
Accept
The core idea is to specialize meta-learning approaches to recommender systems. The specialization is done using features of the datasets themselves, so it differs from the usual AutoML approaches. Code is provided that allows easy comparison to many well-tuned baselines in the domain. Beyond being easily reusable, it also demonstrates that several papers accepted in the past few years were overclaiming because of lazy comparisons. It further formalizes the experience many practitioners have about which algorithms work well depending on the metric and data for recommender systems. Reviewers significantly updated their scores during the discussion phase as the authors ran a new set of comparisons and clarified some sections. Since the work is readily reusable, I recommend acceptance.
train
[ "nwh4otVU_Tb", "3Rb0NxMwFbp", "KVBFGo1Oo9", "e3DXg3wzJ_L", "it-WS0JUk5e", "UYXDGaVQ6YJ", "LGR-xo2P8Uv", "6KZnF42ZDYI", "Bxmi2cLJW72", "JRpUdnQCZKc", "V0dDaPfvgbt", "ppycnMAdaBp", "uS_9dY4zB4a" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your additional feedback. We agree with your minor comment about giving more details for practitioners leveraging Section 2. We have now updated Section C.2 with concrete examples and more details.\n\nNote that the particular use cases and goals of a practitioner may be very specific (they may be concerned with one particular dataset, or many datasets; they may be very concerned with the training time and/or latency of the model; they may need to use a specific algorithm for various reasons). We give three examples: (1) computing the “hardness” of a dataset in advance, with concrete numbers, and what to do next, (2) predicting the training time and/or latency of a model, and (3) gaining concrete insights for a particular algorithm. \n", " Thanks for the additional explanation. My concern regarding the algorithm selection has been resolved. I am also somewhat more convinced to adopt the practical value of this work, so I will raise my score from 4 to 5.", " Thanks for addressing the feedback. I have a look at the latest version on 03 August 2022, and it looks good to me. Thus, I increase my score from 4 to 7. Well done!\n\n[Minor] I have one minor comment for Section C.2 from Line 950-959: the authors should give further details of key insights for practitioners as the current version seems too general in my opinion. For example, after practitioners calculate the entropy of the rating matrix following Table 8, do we have a range of values that we consider as 'low'? What would happen **next** if their dataset is considered as 'hard'? Should they change their algorithms (if yes, what's best practice)? I believe give as much details as possible for C.2 would further strengthen the paper, but it's just a minor comment. \n\n", " What do you think of the clarifications brought by the authors? In particular s1pX and 3P6Z do you have changed your mind ?", " Thank you for your excellent review. We are pleased to hear that you are impressed by the extensive experiments. Overall, your suggestions helped us to improve our paper, for example by adding six new deep learning algorithms which also further improves our meta-learner, running Section 3 with other objectives, and including a guide to practitioners. Thank you for your suggestions, and we give the details below.\n\n**\"1a. The contributions are not strong enough.\"**\n\nWe respectfully remind the reviewer that our work introduces the largest public repository (and analysis) of recommender systems datasets and algorithms, which in itself is a sizeable contribution. We have now added a guide to practitioners, so that users can get the most out of our work (Section C.2).\n\n**\"1.b not novel enough\"**\n\nWe respectfully remind the reviewer that experimental survey-type papers have been recently accepted to ICML [6], ICLR [7,8], and NeurIPS [9]. Just in case the reviewer would like a refresher on these discussions, please see [here](https://openreview.net/forum?id=SJgIPJBFvH&noteId=8EatWJ_2_U), [here](https://openreview.net/forum?id=HygrdpVKvr&noteId=C5rPJ2sLzP), and [here](https://openreview.net/forum?id=6RB77-6-_oI&noteId=l5H13jujQKo).\n\n- [6] Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers, ICML 2021.\n- [7] Fantastic Generalization Measures and Where to Find Them, ICLR 2020.\n- [8] NAS evaluation is frustratingly hard, ICLR 2021.\n- [9] How Powerful are Performance Predictors in Neural Architecture Search, NeurIPS 2021.\n\n**\"2. 
Similarities to [4].\"**\n\nWe agree that Section 3 shares similarities with [4], however, the main focus of our work is studying the generalizability and predictability of a large variety of recommender system algorithms and datasets. Unlike [4], we open-source our code, which includes a common framework for all 85 datasets.\n\n [4] Deep Learning based Recommender System: A Survey and New Perspectives. ACM Computing Surveys 2019.\n\n**\"3. No deep learning based algorithms.\"**\n\nWe agree with your suggestion, and so we have added six deep learning based algorithms. We have now updated the results of Sections 2 and 3 with these new algorithms. We find that the deep learning algorithms also further improve our meta-learner compared to prior work [4], which helps to address your question 2 above, as well. In particular, the average %Diff (percent difference of predicted performance from best performance) improves from 35.1 to 33.2. while [4] achieved 52.9 and [5] achieved 43.5.\n\n- [4] Deep Learning based Recommender System: A Survey and New Perspectives. ACM Computing Surveys 2019. \n- [5] CF4CF: Recommending Collaborative Filtering algorithms using Collaborative Filtering. RecSys 2018.\n\n**\"4. Only focus on PREC@10 in Section 3.\"**\n\nWe thank you for pointing this out. We already released pretrained models on three different objectives (mentioned at the end of Section 3.2). Following your suggestion, we also re-ran the experiments in Section 3 with COVERAGE@50 and HIT-RATE@5 (chosen to give a good variety) in addition to PREC@10. See the new results in Section C.3.\n\n**\"5. Minor comments.\"**\n\nWe have fixed the typo and included that reference. Thank you for pointing these out.\n\n**\"Overall emphasis on practitioners\"**\n\nWe thank you for this suggestion. We have now added a guide to practitioners (Section C.2) which explains the key takeaways and insights from our analysis, as well as how to use our pre-trained models, so that practitioners can get the largest value from our work.\n\nWe once again thank you for these excellent suggestions (especially adding deep learning algorithms, adding experiments with more objectives, and adding a guide to practitioners). We respectfully ask that you please consider increasing your score if you find that our responses help to address your questions. We are also happy to continue answering follow-up questions. Thank you!\n", " Thank you for your insightful review. We are glad to see that you list our thorough comparison, open-source code, and pretrained models as strengths of our work. We address your concern below.\n\n**\"The models are traditional and simplistic\"**\n\nWe agree, and so we have now updated our work to include six more sophisticated (deep learning based) algorithms: DELF_EF, DELF_MLP, INeuRec, MultiVAE, SpectralCF, and UNeuRec. See the updated list of algorithms [here](https://anonymous.4open.science/r/anon-reczilla-51FC/RecSys2019_DeepLearning_Evaluation/algorithm_handler.py), and see our updated paper for the new results.\n\nWe find that indeed some of the deep learning approaches are less susceptible to issues with generalizability. However, we still do see issues with generalizability across all algorithms (In Table 11, the best average rank achieved by the best deep learning approach, Mult-VAE, is still only 8.6) which is consistent with prior work [1]. 
However, the addition of the deep learning algorithms **do** improve the overall performance of RecZilla: %Diff (percent difference of predicted performance from best performance) improves from 35.1 to 33.2.\n\nWe thank the reviewer once again for their suggestion, since we agree that it improves the impact of our work. We respectfully ask that you please consider increasing your score, if you find the additions satisfactory. Otherwise, we are happy to answer additional questions or concerns.\n\n[1] [A Troubling Analysis of Reproducibility and Progress in Recommender Systems Research](https://arxiv.org/abs/1911.07698)\n", " Thank you for your thoughtful review. We are glad to see that you find our work addresses an important question of current research, and that you believe the large-scale study is valuable to the field. We reply to your questions below.\n \n**\"W1. Unclear if hyper parameters are optimized on train or test\"**\n\nWe use the validation set to optimize hyperparameters, and then we report the results on the test set. Thank you for clarifying; we have now made it more clear in the paper. Please see this excerpt from our [README](https://anonymous.4open.science/r/anon-reczilla-51FC/README.md)\nas an example output from our DataSplitter function.\n\n```\nDataSplitter_global_timestamp: DataReader: Movielens100K\n Num items: 1682\n Num users: 751\n Train interactions 79999, density 6.33E-02\n Validation interactions 1535, density 1.22E-03\n Test interactions 1418, density 1.12E-03\n```\n \n**”W2. No neural networks.\"**\n\nWe agree that this is an important aspect, and therefore we have added six neural network approaches to our analysis. We have updated the experiments from Sections 2 and 3 to include these new algorithms. Please see more details in our [general comment](https://openreview.net/forum?id=wO53HILzu65&noteId=Bxmi2cLJW72), and in the updated paper.\n \n**\"W3. Comparisons to other models are in the appendix.\"**\n\nWe agree, and so we have now moved this to Section 3. Thank you for the suggestion.\n \n**\"Minor comments.\"**\n\nWe thank the reviewer for their close reading of our paper - we have fixed all the typos and confusing points.\n \nWe thank the reviewer once again for their comments and suggestions. If you have any follow-up questions, please let us know.", " Thank you for your thoughtful review. We are glad to hear that you feel we target an important problem and that we validate our claims with a large number of experiments. We reply to your comments below.\n\n**\"1. Most algorithms are older methods.\"**\n\nWe agree with this concern, and so we have added six deep learning algorithms to our experiment codebase. We have incorporated these algorithms into our results in Sections 2 and 3 (see the details in the [general comment](https://openreview.net/forum?id=wO53HILzu65&noteId=Bxmi2cLJW72), and in the updated paper). We agree with you that this update gives our work more practical significance.\n\n**\" 2. User’s behavior is not consistent.\"**\n\nWe are not sure we understand the reviewer’s question (please let us know, thank you). All 85 datasets used in our work **do** come from real-world settings, from different platforms and user interfaces, and therefore we have a diverse distribution of datasets. Since our work considers by far the largest number of datasets out of any recommender system paper, this is our biggest strength compared to prior published work. 
See all datasets [here](https://anonymous.4open.science/r/anon-reczilla-51FC/RecSys2019_DeepLearning_Evaluation/dataset_handler.py).\n\nIf the reviewer meant that in real-world settings, the user's preferences are constantly changing over time, then we completely agree, and including approaches that specifically focus on dynamically changing preferences is an exciting avenue for future work. \n\n**\"3. Cost of adding new datasets is large.\"**\n\nWe agree that, ironically, since our experiments were so thorough, adding one new dataset across **all** settings would require running it on 20 algorithms and 100 hyperparameters each. However, even evaluating on a subset of these settings would give a very useful addition to our meta-dataset. For example, due to the limitations mentioned in Section 4 and Appendix A.1, our current results do not include all possible combinations of datasets, algorithms, and hyperparameters (but they still include 84,769 successful experiments).\n\n**\"Justify the value in practice, such as the datasets are larger and the algorithms are more complex.\"**\n\nTo help answer your overall concern about the value in practice, we added a guide for practitioners in Section C.2, including key takeaways from our analysis, as well as how to use our pre-trained RecZilla models. Regarding datasets, as shown in Table 6 of Section A.3, the sizes of our datasets range from <100 interactions to more than 77 million interactions (Netflix, Yahoo, Amazon), which are among the *largest that we can include while keeping our repository public* (i.e., the largest public recommender system datasets). To address your concern about algorithms, we incorporated six new deep learning-based algorithms, as described above. \n\n**\"Limitations\"**\n\nAs requested, we have extended Section 4 to address the limitations you pointed out.\n\nWe thank you once again for raising these important points. We kindly ask that you consider updating your score if you think the additions we made improve our work. We are also happy to answer any follow-up questions or new comments. Thank you!\n", " Dear reviewers and AC, we have now addressed all of the suggestions and concerns mentioned by the reviewers. We thank the reviewers very much for these comments, which we agree have improved our work.\n\nThe primary weakness noted by all four reviewers was the lack of implementation of deep learning based recommender system algorithms. We are pleased to report that we have **now included six deep learning algorithms** in our experiments: [DELF_EF](https://www.ijcai.org/proceedings/2018/0462.pdf), \n[DELF_MLP](https://www.ijcai.org/proceedings/2018/0462.pdf), \n[INeuRec](https://arxiv.org/abs/1805.03002), \n[Mult-VAE](https://arxiv.org/abs/1802.05814), \n[SpectralCF](https://arxiv.org/abs/1808.10523), \nand [UNeuRec](https://arxiv.org/abs/1805.03002). \nSee our [algorithm handler](https://anonymous.4open.science/r/anon-reczilla-51FC/RecSys2019_DeepLearning_Evaluation/algorithm_handler.py) and see our updated paper for the new results in Sections 2 and 3.\n\nThe full list of changes is as follows:\n - Section 2 is updated with the six deep learning algorithms. Two of the algorithms (Mult-VAE and U-NeuRec) are comparable in performance to many of the 18 non-neural algorithms. However, the best algorithm across all settings is still Item-KNN.\n - Section 3 is updated with the deep learning algorithms. 
RecZilla now outperforms prior approaches by an even larger margin: average %Diff (percent difference of predicted performance from best performance) improves from 35.1 to 33.2 (the next-best, cf4cf-meta, is 43.5).\n - New section, Appendix C.2, **“A Guide for Practitioners”**, which describes the key takeaways for practitioners (including insights from our analysis, and how to use our pre-trained models) so that practitioners can use our work most effectively.\n - New section, Appendix C.3, which re-runs the Section 3 experiments (which used PREC@10) with the base metric set to COVERAGE@50 and HITRATE@5.\n - The table comparing RecZilla with prior work is moved to Section 3.\n - Fixed other minor typos and clarifications.\n\nWe thank all reviewers once again for these suggestions. We are happy to address any new follow-ups or concerns.\n", " This paper studies the impact of different datasets, algorithms, and hyperparameters on the performance of recommender systems through a large number of experiments. It shows that different datasets and algorithms significantly impact recommender system performance, and that the best-performing algorithms are not general but are predictable. Further, this paper proposes RecZilla, a meta-learning method that predicts the best-performing algorithm and hyperparameters on new datasets by inputting meta-features. The authors show that RecZilla quickly finds high-performing algorithms on datasets it has never seen before. Strengths\n1. Since choosing algorithms and hyperparameters for recommender systems has always depended on human experience, this paper targets an important problem in applying ML in industrial practice.\n2. The claims of this paper have been validated by a large number of experiments. To verify the generality and predictability of the recommender system, the authors have selected a large number of datasets, algorithms, and hyperparameters to conduct experiments.\n3. The authors have open-sourced the experimental code, which is essential for an empirical study.\n\nWeakness\n\nDespite the extensive experiments by the authors, I remain skeptical about the generalizability of this method. \n1. First of all, in the selection of algorithms, most of the algorithms selected by the authors were published earlier, such as SVD and MF, and many of the papers were published before 2015. These methods have rarely been used in industrial recommendation systems nowadays, as neural network-based methods have become popular in recent years. \n2. Secondly, most of the datasets used by the authors come from public datasets, which makes this paper highly credible and reproducible. However, in real-world recommendation systems, the user's behavior is not consistent in different scenarios, such as different platforms, user interfaces, etc., which may affect the distribution of datasets. Although the authors provide a pre-trained model, it is still doubtful that this model can work across different empirical datasets.\n3. If new datasets are added, the cost of model training and updating appears to be very large because of the large number of new data points that need to be collected.\n\nSeveral typos:\n line 172: 'aare'\n line 182: 'leave-one-out'\n 1. In terms of algorithm selection, why are most of the selected methods earlier works? Although it might be because their computational complexity is low and it is easy to conduct experiments, I still think that more advanced algorithms should be considered and evaluated.\n2. 
How do the authors justify the value of the proposed method in practical recommender systems, given that the datasets are larger and the algorithms are more complex? Yes, but not good enough. There is still much room for discussing the remaining gap between the presented work and realistic recommendation scenarios.", " The paper addresses the problem of model selection\nfor recommender systems. The authors investigate the \nperformances of 18 different recommender models on \n19 datasets and 23 metrics and find that item-based \nnearest neighbor performs best on average, but each \nmodel performs best for one dataset/metric pair and\nvery badly for another one. In a second step they transfer\nthe SatZilla algorithm selector [76] to selecting recommender\nsystem models and show that it outperforms two \nexisting such model selectors from the literature.\n strong points:\ns1. large-scale meta study for recommender systems.\ns2. transfer of the SatZilla algorithm selector to recommender\n systems.\n\nweak points:\nw1. it is not clear if the hyperparameters of the methods\n have been optimized on train or test.\nw2. no neural networks are among the tested models.\nw3. The comparison against other model selectors for recommender\n systems is only in the appendix.\n The paper addresses an important question of current research\nin recommender systems: can some existing models be fitted\nuniversally to any dataset/metric pair? And if not, can we predict\nwhich model will have the best performance for a specific \ndataset/metric pair?\n\nThe authors conduct a large-scale study that alone will be valuable\nfor the field in my opinion. Also their meta learning method \n\"RecZilla\" sets a good basis for future research. \n\nGiven all the positive aspects, I see nevertheless some weak points:\nw1. it is not clear if the hyperparameters of the methods\n have been optimized on train or test.\n On p.4 you just write \"First we identify the best-performing \n hyperparameter set for each (algorithm,dataset) pair\". \n But do you use the performance on train or on test? If on test,\n you systematically overestimate the performance of the algorithms.\n And can you clarify: you do this separately for each metric? \n\nw2. The authors use 18 different models from the literature,\n but there is no neural network among those models.\n I understand the motivation: that the chosen models likely\n are way faster to train. And that Dacrema et al. [26] found\n them not to perform competitively. But nevertheless, it feels\n like the study misses an important aspect this way.\n \nw3. The comparison against other model selectors for recommender\n systems is only in the appendix B.4.\n To me this looks like a crucial aspect of your paper and \n I wondered why it has been banished to the appendix. \n For example, are the sensitivity analyses in fig. 4 really\n more important?\n\nSmaller points:\n- p. 4 \"for each dataset, we compute a train and test split\". \n Given your earlier definition '\"dataset\" refers to a single \n train-test split' (p.3), this is a little bit confusing.\n- several of your references miss their venue, e.g., [6] is a \n RecSys paper.\n\ntypos:\n- p.5 \"aare\"\n- p.6 \"leave-one-out leave-one-out\"\n Yes.", " The paper presents a large-scale analysis of the performance of existing recommender system algorithms on multiple (85) datasets on 315 performance metrics. The study reveals that an algorithm’s performance is highly susceptible to the performance metric and the dataset’s meta features. 
Thus, they propose an automated algorithm selection approach that chooses the best algorithm for a new, unseen dataset based on these findings. Strengths\n•\tTheirs is the first large-scale study that compares recommender system algorithms across many metrics and datasets. The study is thorough, and the authors have covered almost all metrics, datasets, and meta-features of the datasets.\n•\tThey open-source their training data as well as pretrained models, which can be beneficial for practitioners to choose the best-performing model for their problem specifications.\n\nWeakness\n•\tThe models used in the study are traditional and simplistic in the recommender literature. For instance, they have compared clustering models, matrix factorization models, and linear models. The literature has evolved to use more sophisticated models that might be less susceptible to the observed performance variations. Thus, the best algorithm selected by their proposed model might be best suited as a reliable baseline. Do the authors have plans to extend it to more sophisticated (non-linear, graph-based) models in the future? What will be the challenges of using such models in the current framework?\n See above. The authors have adequately discussed limitations of the work and their broader impacts on society. ", " This paper conducts extensive experiments on 18 algorithms and 85 datasets in the recommendation domain. Specifically, the authors found that “the best algorithms and hyperparameters are highly dependent on the dataset and performance metric”. As a result, the authors proposed RecZilla, which uses a model to predict the best recommenders with hyperparameters for new, unseen datasets. Some strengths:\n\n1. The experiments of the paper are very extensive and great. \n2. The paper considers 19 algorithms with 85 datasets, which cover the majority of recsys datasets. \n3. The Appendix shows good results with supplementary materials.\n4. It’s also good that the authors are going to release the source code for reproducibility. \n\nSome of my concerns/suggestions for improvements: \n\n1. Although the experiments are great, the contributions are not strong enough in my opinion. The conclusion of the first contribution is already stated in various previous works [1, 2, 3]. Thus, in my opinion, this work is great as a survey paper, but the contributions are incremental and not novel enough. \n2. Moreover, [4] also shows the idea of selecting algorithms for recsys, and I don’t see much difference in the performance of [4] compared to RecZilla according to Table 3.\n3. Why didn’t the authors consider deep learning based algorithms? The 18 proposed algorithms are already introduced in [1, 2]. In fact, the authors also used the implementations from [1, 2]. Any specific reasons to discard deep learning approaches such as GRU4Rec, etc.?\n4. Why only focus on Prec@10 in the Experimental Setup section? Doesn’t it yield ‘bias’ as well, since we only focus on one single metric and optimize for that metric?\n5. [Minor] Typo in line 172: “are” instead of “aare”; missing related work [3].\n\n[1] Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches. RecSys 2019.\n[2] A Troubling Analysis of Reproducibility and Progress in Recommender Systems Research. TOIS 2021.\n[3] The Datasets Dilemma: How Much Do We Really Know About Recommendation Datasets? WSDM 2022.\n[4] Deep Learning based Recommender System: A Survey and New Perspectives. 
ACM Computing Surveys 2019.\n[5] CF4CF: Recommending Collaborative Filtering algorithms using Collaborative Filtering. RecSys 2018.\n\nOverall, I’m impressed by the extensive experiments, but I’m not convinced of the novelty of the contributions. The contributions seem merely incremental to me. Since the authors mentioned a lot about “practice” and “practitioners”, I believe the authors could place more emphasis on this perspective in subsequent versions to make the contributions more compelling. For example, how do researchers and practitioners benefit from this work? Any insights from a practical perspective? \n The authors did address the limitations and potential negative social impact very well." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "KVBFGo1Oo9", "6KZnF42ZDYI", "it-WS0JUk5e", "Bxmi2cLJW72", "uS_9dY4zB4a", "ppycnMAdaBp", "V0dDaPfvgbt", "JRpUdnQCZKc", "nips_2022_wO53HILzu65", "nips_2022_wO53HILzu65", "nips_2022_wO53HILzu65", "nips_2022_wO53HILzu65", "nips_2022_wO53HILzu65" ]
nips_2022_mMT8bhVBoUa
Generalized Variational Inference in Function Spaces: Gaussian Measures meet Bayesian Deep Learning
We develop a framework for generalized variational inference in infinite-dimensional function spaces and use it to construct a method termed Gaussian Wasserstein inference (GWI). GWI leverages the Wasserstein distance between Gaussian measures on the Hilbert space of square-integrable functions in order to determine a variational posterior using a tractable optimization criterion. It avoids pathologies arising in standard variational function space inference. An exciting application of GWI is the ability to use deep neural networks in the variational parametrization of GWI, combining their superior predictive performance with the principled uncertainty quantification analogous to that of Gaussian processes. The proposed method obtains state-of-the-art performance on several benchmark datasets.
Accept
Technically solid paper that introduces and benchmarks a novel inference framework, with application to inference in GPs. All reviewers recommend to accept, after a decent amount of discussion in which reviewers raised their scores in response to a fairly significant round of updates to the manuscript itself. Recommend to accept, despite some questions regarding overall impact.
test
[ "Nm8NRsL2A9v", "1O8Nr77q73-", "N5NVHOc0eyJ", "NhBLJ7z0tg7", "-1kh2llWtte", "flhtJXI4NEN", "_14PmfOKPYE", "d5sPuG_X0k", "59QwKQ5fne", "b74DgukvHAA" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the rebuttal, for answering my questions and for the additional figure. I stand with my previous score, that is I would like to see this paper accepted. ", " I thank the authors for their detailed response. Changes made to address the motivation and model comparisons will substantially improve the manuscript.", " I would like to thank the authors for the detailed rebuttal. It has mostly addressed my concerns and I have thus increased my score.", " We thank the reviewers for their time and effort. We believe their comments and suggestions have significantly improved the work. We are happy to find that all reviewers find our work rigorous, well-written, and significant. We respond to each reviewer individually below.\nThe main changes to the manuscript will be as follows:\n\n1. The introduction will mention the interpretations as a regularised optimisation problem over the space of Gaussian measures. We will include references to the connections with Wasserstein flows and Bayesian inference with early stopping.\n\n2. We will discuss the approximation quality of the Wasserstein distance and how it relates to the spectrum of the kernel operators. We have included some plots to illustrate the quick decay of eigenvalues that we observed for the SE kernel.\n\n3. We will describe and contextualize the experiments in more detail. We will mention in the main body that we matched neural network architectures and inference procedures to have a fair comparison between the different models. \n\n4. We will include a comparison to standard SVGP.\n\n5. We will add a section that discusses the limitations of our method. This will include model misspecification, algorithmic aspects, and the limiting nature of the Gaussianity assumption.", " The authors are very grateful for the careful reading of our manuscript. The comments helped to improve the work significantly.\n\nWeaknesses:\n\na.) The main goal of the work of Pleiss and Cunningham (2021) appears to be a comparison of the role that depth plays in the BNNs/DGPs. They compare BNNs with increasing width of hidden layers where in the infinite width limit the BNN converges to a GP. Their results seem to suggest that finite width or even the small width regime is beneficial for modeling uncertainty in BNNs/DGPs. It is hard to draw conclusions for our methodology for several reasons. First, we use a neural network architecture for the posterior mean (for a fair comparison) where the width of the hidden layers is given as 10 (cp. l.678-680 in the supplementary material). This means we are for the mean function in the regime described as beneficial in Pleiss and Cunningham (2021). The uncertainty quantification is similar to that of a sparse GP posterior method, which is a consequence of the use of the SVGP kernel. Here the comparison to the uncertainty quantification of a BNN posterior is even more difficult. Generally speaking, our assumption that the posterior is described by a GP will mostly have two consequences: for a fixed $x$ the posterior distribution $F(x)$ will be unimodal and concentrated around the posterior mean. This unimodal behavior is arguably less problematic in the function space than in the parameter space, but one might still argue that heavier tails than Gaussian could be appropriate for some applications.\n\nb.) The fair comparison aspect is something that we have considered thoroughly. We use the same neural network architecture for all models and similar training procedures. 
The details are described in the supplementary material (l. 678-680, l. 686-690). We will make this point more explicit in the main body of the paper.\n\nThe minor points will be corrected. Thank you for pointing out these flaws in the citations and terminology.\n\nQuestions:\n\na.) An SVGP method will be added to the plots in Figure 1. Thank you for this great idea. We have added such a plot as well in the rebuttal revision in Appendix A.12, Figure 3.\n\nb.) The prior hyperparameters are chosen by maximizing the marginal log-likelihood on a random subset of $X$ of size $\sqrt N$. Due to the page limit, we had to move these details to the supplementary materials. The details can be found in l.675-l.678. The inducing points are indeed just subsampled randomly from the data points (cf. l.672-l.674). \n\nLimitations:\n\nThanks for pointing this out. Indeed, after reading the reviews, we decided to include a limitations section in the final manuscript. Here we will discuss limiting assumptions such as Gaussianity, as raised by the reviewer, and some implementation aspects.\n", " We thank the reviewer for pointing out some interesting observations that have helped improve the paper. We will respond below point by point. \n\nMajor Questions:\n\n1. The points $X_S$ are chosen to approximate the spectrum of the appearing covariance operators. In essence, we can approximate the spectrum of a covariance operator by the spectrum of the kernel matrix $k(X,X)$, where $X$ contains all $N$ data points. However, all covariance operators are trace-class operators, which means that the infinitely many eigenvalues have to be summable. This means the sequence of ordered eigenvalues has to converge to zero. Typically, this convergence is very fast for kernels such as the SE or Matern kernel. \nIntuition, therefore, suggests that for kernels with quickly decaying eigenvalues, a smaller subset of eigenvalues, for example, the $N_S = 100$ largest ones, may already provide a good approximation to the spectrum of the operator. The $N_S = 100$ largest eigenvalues can be approximated by $k(X_S,X_S)$, where $X_S$ is a randomly chosen subset of $X$ of size $N_S = 100$. We have tried larger sizes of $N_S$, for example $N_S=500$, and seen little change in our results. We have also provided some experimental plots to further support this point. However, we must admit that we do not have theoretical results to quantify the loss in accuracy. Still, the experimental results seem to suggest that we can approximate the spectrum quite well by just choosing a random subset of the data points.\n\n2. We view our method as providing a novel way of combining the strength of neural networks as function approximators with GP uncertainty quantification. It sits therefore somewhere in between deterministic neural networks and GP inference. The reviewer is right in pointing out that our method does not allow for the standard Bayesian neural network inference, i.e., stochastic weights in a BNN. Rather, our method operates on a function space directly, and the posterior is fully specified in terms of its covariance and mean functions. Since our method outperforms several variational inference methods that work explicitly with BNNs (cp. Table 1), it may be fair to ask precisely what aspect of BNNs causes their empirical success. The favourable performance of our method suggests that the function approximation properties of neural networks may be responsible for most of it. 
The function-space uncertainty quantification, akin to that of GPs, wrapped around the powerful predictive abilities of the NN as a parametrisation of the posterior mean, gives results competitive with BNNs, at least when the BNNs are trained with variational inference.\n\n3. The authors apologize for the confusion. The results in Table 2 are meant to compare different inference procedures for a given architecture, i.e., GVI in function space vs. VI in function and weight space. In order to ensure meaningful comparisons with prior related work, we follow the experimental set-up of Immer et al. (2021), where the same CNN architecture is chosen for all BNNs. Our results are state-of-the-art in the sense that, for a given NN architecture, our inference method performs the best. It would indeed be interesting to consider other more sophisticated NN architectures and explore if GVI objectives lead to further performance and uncertainty quantification gains. However, we considered this to be beyond the scope of our paper. We will explain this in more detail in the paper to avoid this confusion.\n\nMinor:\n1. We will look into these papers and cite them in the related work section.\n2. Thanks for pointing this out. The comparison with standard SVGP inference is indeed interesting and will be included.\n3. We give the runtime in l. 258 as $\mathcal{O}(N_S^2 N_B + N_S^3)$. However, as pointed out in the manuscript, the full costs are determined by the computations that occur in $r$. Since the evaluation of $r$ is dominated by the inversion of an $M \times M$ matrix, where $M = \sqrt N$, our final GWI-net method scales as $N \sqrt N$ (per inference step). This is the same complexity that is typically achieved in the sparse GP literature. We roughly observed that training on a dataset with 40000 datapoints (protein dataset) took 30 minutes on an Nvidia GTX1080 card. Regarding the baselines: this is a difficult task, since most of the authors did not provide a runtime complexity calculation for their method. We hoped that showing that our method is scalable to datasets of the above size without any further problems would provide enough justification for its usefulness. The computational costs of the GWI-net will be explicitly stated in the final version of the manuscript.\n", " We thank the first reviewer for the careful reading of our manuscript. The review provided several insights and new perspectives for our paper. A detailed explanation follows below. We respond to the weaknesses in the numerical order provided by the reviewer.\n\n1. The motivation for our work indeed relies heavily on the rule of three presented in Knoblauch et al. (2019). The fact that our objective function can be interpreted as a regularised measure-valued optimisation problem (as mentioned in 1a by the reviewer) is discussed in Knoblauch et al. (2019). We agree with the reviewer that mentioning this explicitly will improve the manuscript. The fact that our objective function can also be interpreted as a Bayesian method with early stopping, where the distance is measured by the Wasserstein distance, is interesting. We will provide some references in the manuscript to point readers to this connection. It may even prove useful in improving algorithmic designs for Generalised Variational Inference in function space. We will also provide a reference to Lambert (2022) and discuss the limitations of restricting the approximation family to be Gaussian.\n\n2. 
Regarding suggestions for additional experiments:\na) Using neural networks for both $m_Q$ and $r$ is something that we actually tried, but it was unsuccessful. In essence, the neural network typically does not increase the uncertainty quickly enough once we move away from the observation points. We experimented with several changes in the architecture of the neural network but were not able to obtain satisfying results. This can be easily illustrated with the toy examples, but we also verified it on the UCI regression data set. This discussion will be included in the paper. \nb) A sparse GP and a plain NN have been added in the revised version.\n\nMinor Comments will be addressed.\n\nRegarding the Questions:\n1. Yes, we have indeed taken care to match neural network architectures and training procedures. The details are discussed in the supplementary materials A.7 and A.8.\n\n2. The approximation quality of the 2-Wasserstein distance is determined by the approximation quality of the spectrum of the corresponding covariance operators. For most kernels used in practice, like the SE or Matern kernel, the spectrum decays very quickly, which is why using the first 100 eigenvalues often empirically seems to be sufficient to approximate the spectrum and therefore the 2-Wasserstein distance. We did not manage to derive a theoretical result that gives further insights, but our empirical results suggest a high approximation quality. We have provided some additional plots to substantiate this point in the revision.\n\n3. The suggestion to use a neural network to output pseudo-observations indeed sounds intriguing. In the process of writing the paper, we considered several ideas along those lines but did not obtain meaningful results. We therefore decided to reserve exploration of this topic for future research.\n\nWe would like to thank the reviewer again for their time and insightful comments. We believe they have significantly improved the manuscript.", " The paper proposes a new probabilistic regression and classification framework called \"generalized variational inference in function space\" (GVI-FS), which involves taking the standard function-space ELBO formulation and regularizing $\mathbb{E}_\mathbb{Q} [\log p(y|F)]$ with the Wasserstein-2 metric, $\mathcal{W}_2(\mathbb{Q}^F, \mathbb{P}^F)$, instead of the standard KL divergence, $KL(\mathbb{Q}^F \| \mathbb{P}^F)$. This substitution is motivated by the generalized variational inference framework of Knoblauch et al. (2019) and the difficulties of working with KL divergences. The paper proposes to use Gaussian measures (GMs) to parameterize the predictive functions, although GPs with additional assumptions provide an equivalent approach. The paper proposes two specific variants of this model, a model in which both the predictive mean and the predictive covariance are parameterized by a sparse variational GP and another in which the predictive mean is parameterized by a neural network and the predictive covariance is parameterized by a sparse variational GP. The paper then presents results from regression, classification, and predictive variance-based out-of-distribution detection, comparing to existing probabilistic methods. Strengths\n\n1. The work is technically sound.\n\n2. The work tackles problems of broad interest: classification, regression, and OOD detection using function-space methods. \n\n3. The work demonstrates substantially improved performance compared to the baseline models tested.\n\n\nWeaknesses\n\n1. 
The motivation for the proposed approach in terms of \"generalized\" variational inference is very weak. Specifically, I find the generalized variational inference framework of Knoblauch et al. (2019) to be so general as to be mostly divorced from the intentions and benefits of Bayesian inference, other than being an explicitly probabilistic method. Here are two alternative interpretations/motivations I find more convincing.\n\n1a) The proposed method is GP regression/classification with Wasserstein-2 regularization. This is very simple and, at least to a first approximation, accurate.\n\n1b) Another connection relates to Bayesian inference more directly. Here is a sketch. Consider the objective\n\n$$\max_Q \mathbb{E}_{F \sim Q} [\log p(y|F) - \log q(F)] - \lambda \mathcal{W}_2(Q,P)$$\n\nwhere $\lambda \in \mathbb{R}^+$. This is a close variant of Eq. 12, only differing in the $\mathbb{E} [- \log q(F)]$ entropy term and the $\lambda$ multiplier. With an unrestricted variational family, the optimal approximate posterior in the unregularized $\lambda=0$ case is $q(F) = p(F|y)$, the true posterior.\n\nConsider Langevin diffusion for this unregularized case. We start with a single particle $F_0$ and update in continuous time $t \geq 0$ according to\n\n$$\text{d}F_t = -\nabla V(F_t) \text{d}t + \sqrt{2} \text{d}B_t $$\n\nwhere $V(\cdot) = - \log p(y|\cdot)$ and $B_t$ denotes Brownian motion. The stationary distribution is $\propto p(F|y) \propto \exp(-V)$, the true posterior. The work of Jordan, Kinderlehrer, & Otto (1998) shows that the marginal law of this diffusion process obeys gradient flow of the functional $KL(\cdot,\pi)$ with respect to the Wasserstein-2 metric.\n\nThis implies the resulting marginal law of $F_t$ follows gradient flow of $KL(q(F), p(F|y)) = \mathbb{E}_{F \sim Q} [\log p(y|F) - \log q(F)]$, the first term in the original objective.\n\nNow, back to the original objective, we can interpret $\lambda$ as a Lagrange multiplier, giving the dual formulation\n\n$$\max_Q \mathbb{E}_{F \sim Q} [\log p(y|F) - \log q(F)] \quad \text{s.t. } \mathcal{W}_2(Q,P) \leq C$$\n\nfor some $C = C(\lambda) > 0$. This gives the original objective a nice interpretation. If we start the Langevin diffusion process with our variational posterior equal to the prior, $Q_0(F) = P(F)$, and stop the process once the diffusion process has covered a distance of $C$ w.r.t. the Wasserstein-2 metric, then we will have approximately optimized our objective. As $C \to \infty$ (corresponding to $\lambda \to 0$), the approximate posterior $Q(F)$ approaches $p(F|y)$, the solution to the full Bayesian inference problem. This objective can then be interpreted as Bayesian inference with early stopping.\n\nOne wrinkle is that $Q(F)$ is restricted to be a GP. See the recent work of Lambert et al. (2022), \"Variational inference via Wasserstein gradient flows\", for a good discussion of this issue.\n\n2. The experiments section should contain an additional comparison with a DNN with the same architecture and objective as DNN-SVGP, but with both predictive mean and uncertainty output by the network. This will give the reader a sense of how much the uncertainty handling by the sparse GP is helping performance. An additional comparison without the Wasserstein-2 regularization would also be informative. Additionally, a sparse GP method in addition to the last column in Table 1 (Exact GP) would be helpful for interpreting how good these results are.\n\n3. 
(minor) l.296-8, VAEs use neural networks to parameterize variational posteriors. I would not consider this approach fundamentally different.\n\n4. (minor) I don't believe $\Sigma$ in Eq. 18 is defined.\n\n5. (minor) The supplied code is almost completely uncommented, making it impractical to read.\n 1. In the experiments section, the performance numbers for baseline models are taken from previous work. Are the model architectures and training procedures for the proposed methods comparable? I didn't notice any notes on this issue.\n\n2. An approximate Wasserstein-2 metric is given (Eq. 15). How good is this approximation in practice? In what situations is it more or less accurate? Some content along these lines would help readers understand the pros and cons of the method.\n\n3. (minor) It would be interesting to rely on the GP to influence both the predictive mean and uncertainty in the DNN-SVGP case. This could be achieved by using the NN to output \"pseudo-observations\" with uncertainty, which are then conditioned using the GP mean and covariance to produce a predictive mean. Yes", " The authors propose to use the Wasserstein distance between Gaussian measures on infinite-dimensional function spaces to perform generalized variational inference over distributions over functions, with applications to Gaussian processes and Bayesian neural networks. Strengths:\n\n- The paper is well written and is mathematically rigorous.\n- The idea of using Wasserstein inference in function spaces in this way seems novel.\n- It is plausible that the proposed method might have advantages over standard variational GPs.\n\nWeaknesses:\n\n- The performance in the experiments seems suspiciously low and a few ablation studies seem to be missing.\n- Given that the title advertises \"Bayesian Deep Learning\", it is unclear how one would exactly apply this to Bayesian neural networks. Major:\n\n- If we can think of the $X_S$ like inducing points in sparse GPs, it seems like their selection should impact the performance and the estimated distances in the proposed method. Could you comment on how they are chosen? Is there any theoretical guidance on what would be a good choice?\n- In the proposed GWInet, it seems that the model ultimately just uses a deterministic NN for the mean function and then wraps a normal GP around that. This seems like a rather complicated method to ultimately \"just\" perform GP inference. How would one use the proposed method to perform actual BNN inference in function space (that is, using a neural network with stochastic weights), so that one could actually use the learned features to capture the uncertainties instead of having to rely on a GP kernel? This seems crucial, since BNNs outperform GPs in many practical applications.\n- State-of-the-art performance on FashionMNIST is 97% and on CIFAR10 around 95%, so the reported performances in Tab. 2 all seem suspiciously low. Could you comment on why that is? I think using baselines that get closer to the performances we actually see in practice would make the comparison stronger and more trustworthy.\n\nMinor:\n\n- l. 75: Regarding variational BNNs in function space, functional SVGD [1] and functional repulsive ensembles [2] seem to be relevant methods to mention.\n- Tab. 1: I understand that both SVGP and DNN-SVGP are fit using GWI, right? If so, it seems like a comparison against a standard SVGP implementation would be interesting to see.\n- Overall, it's unclear to me whether the proposed method is faster or slower than the baselines. 
Could you report runtimes (at least rough estimates)?\n\n\n[1] https://arxiv.org/abs/2106.10760\n\n[2] https://proceedings.neurips.cc/paper/2021/hash/1c63926ebcabda26b5cdb31b5cc91efb-Abstract.html There seems to be no negative societal impact, but the authors could do a slightly better job at addressing the technical limitations, especially regarding runtimes, performance, and how to integrate BNNs (see above).", " The paper presents a new methodology to apply variational inference in function space. Starting from the classic ELBO, the authors replace the KL divergence for measures with the Wasserstein distance and, by choosing Gaussian measures for the prior and the approximate posterior, the authors show how to approximate this quantity to have a tractable loss to optimize. Also, the authors present two ways to parameterize the variational posterior, one related to sparse Gaussian processes and the second based on deep neural networks. Finally, the paper is concluded with some experiments, including 1D toy regression, the classic UCI regression benchmark and some image classification problems (with OOD detection). ## Strengths\n\n- The paper is technically very sound and promises to improve upon standard methods for weight-space and function-space inference of neural networks (and not only).\n- The paper is generally easy to follow and it is sufficiently self-contained \n\n## Weaknesses\n\n- The original assumption of working with Gaussian posteriors could be a limiting factor, even if we are working in function-space. For example, a recent work [Pleiss and Cunningham, 2021] on the behaviour of wide parametric (and non) models seems to suggest that the limiting Gaussian posterior performs worse than the BNN/DGP posterior. This is common for all variational inference methods in function space with Gaussian posterior, but I would still appreciate a comment from the Authors on this. \n- The experimental evaluation is a bit limited (with nonetheless interesting results). My only concern is on the comparison with other methods, given that those numbers have been copied from various papers. Did you use the same models, same architectures, and same setup?\n\n\n### Minor\n- The reference section can be improved (e.g. miss-capitalizations and arxiv citations when proceedings are available).\n- In the experiment, the second parameterization is sometimes referred to as GWI-DNN-SVGP and other times as GWI-Net. Maybe a uniform notation is easier to follow.\n\nGeoff Pleiss, John Patrick Cunningham. The Limitations of Large Width in Neural Networks: A Deep Gaussian Process Perspective. NeurIPS 2021 - For me, the example shown in Figure 1 is not very illustrative. Fig 1 shows only one of the proposed models with 3 different function estimation problems. For example, it would be interesting to see the comparison between GWI-SVGP and GWI-DNN-SVGP and SVGP alone (SVGP is particularly interesting to see, given that it should be the same model as GWI-SVGP trained on a different loss)\n- How are you treating the prior parameters? Are they fixed, cross-validated or optimized? And what about the inducing points? Are these just a sub-sample of the training set or are they optimized? Some additional comments on the limitations would have been welcomed (especially regarding the practical implementation). \nNo comments on potential negative societal impact, as expected for this kind of work." ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "-1kh2llWtte", "_14PmfOKPYE", "flhtJXI4NEN", "nips_2022_mMT8bhVBoUa", "b74DgukvHAA", "59QwKQ5fne", "d5sPuG_X0k", "nips_2022_mMT8bhVBoUa", "nips_2022_mMT8bhVBoUa", "nips_2022_mMT8bhVBoUa" ]
nips_2022_zuL5OYIBgcV
Non-deep Networks
Latency is of utmost importance in safety-critical systems. In neural networks, lowest theoretical latency is dependent on the depth of the network. This begs the question -- is it possible to build high-performing ``non-deep" neural networks? We show that it is. To do so, we use parallel subnetworks instead of stacking one layer after another. This helps effectively reduce depth while maintaining high performance. By utilizing parallel substructures, we show, for the first time, that a network with a depth of just 12 can achieve top-1 accuracy over 80% on ImageNet, 96% on CIFAR10, and 81% on CIFAR100. We also show that a network with a low-depth (12) backbone can achieve an AP of 48% on MS-COCO. We analyze the scaling rules for our design and show how to increase performance without changing the network's depth. Finally, we provide a proof of concept for how non-deep networks could be used to build low-latency recognition systems. Code is available at https://github.com/imankgoyal/NonDeepNetworks.
Accept
This work considers the task of training state-of-the-art CNNs with limited depth. The benefits considered in this work are related to the potential parallelization which is induced by depth reduction. This paper generated a fair bit of discussion with the reviewers about the motivations and the basic thesis. The authors do a good job of representing their viewpoint, and adding a version of this discussion to the final manuscript will undoubtedly be needed. The empirical results look quite promising, but the authors are also encouraged to further discuss adding an additional motivation to reducing depth (e.g., a theoretical reason, as proposed by reviewer zBDf, which is currently only one paragraph long in the related section) and/or performing a deeper study of hyper-parameters affecting accuracy/latency. With proper framing of the question studied here, the scope of evaluation and the assumptions on the hardware, this will be an interesting contribution to NeurIPS
val
[ "q7W43nrhK0t", "OBBlOppRHD8", "5VLOyDT8buX", "w9GscWn15mJ", "Uky-tUI00K", "IK_rou7anBQ", "NYDQaSIHDd9", "AiZrWvEpgrOT", "2AuPYWuWOEM", "eaJMbkiEalA", "D3qntmYPHplx", "a61N5woR6NZ", "BQxZA8Ul4b1", "Z8TYkIwQoB2", "uZ7Bkbtdf_7", "P3d_IJLpbLD", "43AI4ktSBQU", "MWlH6RGrDKf", "p6oduJ1LbbH", "2JNlxK-qRYw" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We believe that the reviewer is missing the point that as we have hardware with more cores, depth will increasingly become an important and limiting factor as there is no way to circumvent depth (or number of sequential steps). \n\nAlso, we pointed out that the differences caused in latency by large and small depth might become a factor as we move towards applications with stringent latency requirements.\n\nIn our opinion, it would be myopic to ignore the importance of depth in the mentioned example [1]. Saying that depth is important does not mean everything else, including optimized software, hardware and architecture is not important. We have many papers exploring the other factors and fewer focusing on depth, and hence our work is valuable, to point out this important dimension.", " **As suggested, we will also add result at the same resolution for clarity.**\n\nThank you; could you share the result here so that I may put the current result into context? Do you also have results for YOLOv4-CSP at the higher resolution for a complete comparison? What are the two resolutions in question?\n\n**The table leads to the same conclusion with or without the formatting.**\n\nThe formatting in Table 6 leads me to believe that the largest ParNet model is the third-best performer among the models on CIFAR10, since it is underlined and in the largest (implicitly best-performing) group. This is not an accurate reflection of the data in the full table. Really, though, this is a small issue in the larger scheme of things.\n\n\n## My current summary\n\nAs I've maintained, high network performance (accuracy, not latency/throughput) with so few layers is new and interesting. However, the motivation is poor, results are confusingly-presented, and the overall importance is questionable:\n\n- The motivation for low-depth networks seems to be minimizing latency, which is theoretically bounded by the longest sequential path through a network's operations. In practice, though, depth is only one of dozens of metrics that dictate a network's latency, on any processor. **It remains unclear to me why low network depth is inherently Good.**\n\n- ImageNet results\n\n - ParNet-L's accuracy (77.66/93.60) is roughly the same as ResNet-50 (77.53/93.87), but it has twice as many parameters (54.9M vs. 25.6M) and three times as many FLOPs (26.7B vs. 8.2B).\n\n - As such, it takes 3x the compute resources to roughly match the speed (249 vs. 
222 samp/sec).\n\n - The claim in the caption is \"In spite of communication overhead, ParNet is faster than similar-performing ResNets.\" Another way to position this result would be to say \"thanks to the extra computing resources given to ParNet, it is faster than similar-performing ResNets.\" **In this light, is the result really surprising or impressive?**\n\n- CIFAR10/100 results\n\n - The claim in Table 6 is \"ParNet performs competitively with deep state-of-the-art architectures while having a much smaller depth.\" If we state it another way: \"ParNet requires more than twice the number of parameters as SOTA architectures in order to approach their performance.\" **Without a good reason to prefer lower network depth, is this result compelling?**\n\n- MSCOCO results\n\n - The authors report being both faster and more accurate than the baseline YOLOv4-CSP.\n\n - However, this comparison is between the baseline at a low resolution and the ParNet at a higher resolution.\n\n - **Without the full results of both networks at both resolutions, it cannot be concluded that ParNets are superior.** (Does the baseline network's mAP surpass that of ParNet at the higher resolution? Did ParNet require higher resolution to meet the baseline mAP?)\n\n - The authors' claim is simply that \"non-deep networks can be used as backbones for fast and accurate object detection systems,\" which seems to be reasonable (pending the actual resolution used). The **emphasized** note directly above is important in the context of my overall conclusion about the submission, below.\n\n- Other parallelization techniques\n\n - ParNets are a natural way to parallelize a network across multiple processors in order to improve accuracy with a given latency budget.\n\n - Ensembling is an existing way to parallelize a network across multiple processors in order to improve accuracy. **The experiment performed does not provide information about how ensembling a non-ParNet architecture compares with the ParNet approach.**\n\n - **Similarly, spatially parallel convolutions have not been considered as a way to parallelize (for throughput/latency) non-ParNet architectures.**\n\n - As such, the reader cannot conclude anything about the relative quality of the ParNet approach to parallelization.\n\nThe authors have found a way to pack complexity into each logical layer in order to reduce network depth and to structure the work in each layer in a way that makes it simple to parallelize across multiple processors. Despite the authors' motivation and claims, it has not been shown that this leads to a demonstrably superior model. Making a better case for the importance of low network depth, or showing that this method of parallelizing a network (of any depth) is superior to other approaches would make the submission much stronger and improve the importance of its findings.\n", " Thank you for your continued discussion. However, I believe you are missing my point. Network depth alone is not directly responsible for a network's latency in any real scenario. Thus, it is not reasonable to focus solely on network depth in order to optimize a network's latency or performance.\n\n**There are some issues with this calculation. It assumes that each layer is executed in one GPU cycle.**\n\nI have adopted this precise assumption from your introduction: \"the lowest achievable latency is *d/f*, where *d* is depth of the network and *f* is the processor frequency.\" (Whether or not that processor is a GPU is immaterial here.) 
In what I agree is the more realistic case that each layer is not executed in one cycle, this is because the processor is limited by hardware FLOPs, or available bandwidth, or some other resource. In this case, depth is not the sole determining factor of network latency.\n\nPut another way - either \n1. The processor has enough hardware FLOPs and bandwidth to achieve a 1-cycle-per-layer latency, and the network's depth is on the critical path, or\n2. The processor must re-use its hardware FLOPs multiple times per layer or they sit idle waiting for data to operate on, at which point the network's latency is equally dependent on the number of FLOPs to be performed or the amount of data to be moved.\n\nI assumed the former case when concluding the difference in latencies is negligible for reasonable network depths and clock speeds, since, as you agreed, focusing on network depth as the driver of low latency only makes sense when the hardware is unencumbered by FLOPs or bandwidth. If I've made the wrong assumption, and the second case is the one you care about (which you suggest may be the case with your example of a GPU having x/10 cores), then the network's FLOPs and memory requirements are equally (or more) important to the overall latency. In your example, reducing the operations in each layer and reducing the number of times each core is used will give just as much benefit as reducing the depth. As such, there is no \"depth and latency trade off\" without also considering the number of FLOPs to be performed, bytes to be moved, etc. Reducing depth but increasing width (in channels, independent streams, etc.) is not inherently beneficial.\n\n**For example, the control system for Fusion Reactors operates at 10KHz, or 0.1ms [1]. Such constraints restrict the neural network to be small and only of depth 4.**\n\nThese constraints restrict the network's *complexity*, not strictly its depth. The authors note that, to achieve a runtime in this tight latency target, they \"remove superfluous weights and computations … [and] tailored the neural network structure to optimize the use of the processor's cache and enable vectorized instructions for optimal performance.\" This supports my position that other factors, not just depth, are crucial to good performance.\n
\"Spatially Parallel Convolutions,\" Jin et al., ICLR 2018 Workshops, or ensembling [more below]). Claiming speed superiority when using different computing resources is not fair. **\n\nResponse: Thanks for the suggestion. We will update the claim as suggested for clarity.\n\n\n\n**Reviewer momd: I agree - this organization is commonly-used. Further, there's certainly no optimal organization for all situations. There may not be an optimal organization even for just this table! However, the formatting added to identify the second and third best performing models in each category makes it seem that, for a given parameter limit, ParNet is in the top three networks for CIFAR10 at the upper end of accuracy. This conclusion would be incorrect, however, when considering all the models that fit into the same parameter constraint. Either the table and its formatting should not lead a casual reader to an incorrect conclusion, or the proper conclusion should be explicitly mentioned in the text. In this case, removing the formatting would be sufficient to avoid the implicit suggestion that ParNet is in the top-3 for high-accuracy CIFAR10 models, but I'm curious about what conclusions would fall out if the table were ordered by accuracy, instead. (Either data set; hopefully the other data set's accuracies will be close to ordered.)**\n\nResponse: As suggested by the reviewer, we will remove the formatting. But we do not believe it is an issue as the table leads to the same conclusion with or without the formatting. On CIFAR-10 and CIFAR-100, ParNet performs better or as well as ResNet and Wide-ResNet; similar to vanilla DenseNet; and worse than DenseNet with compression and bottleneck. ParNets are within the top 3, considering all model classes, which are ResNets and its variants, Wide-ResNets, vanilla DenseNets, DenseNets with compression and bottleneck and ParNets.\n\n**Reviewer momd: Thank you for clarifying my misunderstanding with the ensembled model. That is not my concern with this experiment, though. The section's title and text suggest the comparison is between \"ParNet vs. Ensembles,\" but a more accurate description of the experiment might be \"Ensembled single-stream ParNet vs. multi-branch ParNet.\" I'd much prefer to see the experiment suggested by the original text!\n\nL308 further qualifies your above statement \"... non-deep networks are not a replacement for their deep counterparts [in] low-compute settings requiring small number of parameter and flops.\" Ensembling something like RN50 would address a different setting, in which large amounts of parameters and flops are allowed. If they are not a replacement there, either, then where are they best used?**\n\nResponse: Thanks. We will update the title to \"Ensembled single-stream ParNet vs. multi-branch ParNet.\" as suggested for clarity. \n\nParNets would be best used with hardware with more parallelization and memory. We will clarify the same in the paper.", " We thank the reviewer for the constructive suggestions. We appreciate that the reviewer has reiterated that the performance of ParNet is impressive considering their depth. Following we have tried to address their concerns:\n\n**Reviewer momd: “This is a better phrasing, but I would add that this focus on depth as the limiter of latency is dependent on future hardware which is unencumbered by FLOPs or bandwidth.”**\n\nResponse: Thanks for the suggestion! 
We will add it for clarification.\n\n**Reviewer momd: \"the idea is that there will be a material difference between a latency of O(10) cycles and a latency of O(50) cycles at a reasonable clock speed? Even at just 10MHz, the difference between the theoretical lowest latencies is 4us, which is a drop in the bucket of full system latencies which are typically three (or more) orders of magnitude larger, even for low-latency applications. While I agree with the statement that latency is theoretically bounded by the longest sequential path through a network, I remain unconvinced that this matters in practice. This is the source of my view that the motivation was a weakness of the submission.\"**\n\nResponse: There are some issues with this calculation. It assumes that each layer is executed in one GPU cycle. However, each layer may take multiple instruction cycles for memory access, writing results, synchronization, etc. Hence, if each layer takes 10 cycles, the theoretical lowest latency difference for the above case would be 40 us. We will clarify it in the paper. \n\nAlso, let's say the number of cores required for perfect parallelism is x. But when a GPU has x/10 cores, then the theoretical maximum latency would be 400 us, or 0.4 ms. So the theoretical latency analysis could still be useful with GPUs with relatively larger numbers of cores.\n\nFurther, many applications like robotics and self-driving cars might require inference from multiple networks in real-time. Hence, the latency requirement for each network becomes stringent.\n\nFor some applications, these latency differences might be tolerable. However, they quickly become a limiting factor in some other applications. For example, the control system for Fusion Reactors operates at 10KHz, or 0.1ms [1]. Such constraints restrict the neural network to be small and only of depth 4. Hence, we believe that with future applications and hardware, the depth and latency trade-off will become more important.\n\n[1] Magnetic control of tokamak plasmas through deep reinforcement learning, Nature 2021\n\n**Reviewer momd: The conclusion from the observations in this section is not that shallow networks are preferable; it's that parallelization matters. This provides the motivation to parallelize the structure of ParNets - with some math to perform, the way to improve performance when the clock speed cannot be increased is to do the operations at the same time using more resources. This technique applies equally to shallow and deep networks. Suggesting that non-deep networks are advantageous at the end of this section relies on the same assertion of theoretical minimum latency as discussed directly above in our responses, and claiming it here conflates low depth and high parallelism.**\n\nResponse: In the previous response, we have tried to clarify the importance of depth in latency. We will clarify it in the paper further to not conflate low depth and high parallelism.\n\n**Reviewer momd: Table 6 suggests that ResNet may not be SOTA, so I would omit that qualifier. The best-performing networks in Table 6 are DenseNets; even if they are SOTA, I don't think that ParNet works \"as well\" given the differences in accuracies.**\n\nResponse: We will omit the SOTA label for ResNet. \n\nRegarding \"as well\", we want to clarify a potential misunderstanding. 
As stated in Line 257, ParNet does perform \"as well\" as vanilla DenseNet on CIFAR100 (24.62 vs 24.42 for 1.3 vs 1 M parameters; 20.02 vs 20.20 for 15.5 vs 7 M parameters; and 18.65 vs 19.25 for 35 vs 27.2 M parameters).\n\nParNets do not outperform DenseNet with bottleneck and compression, as stated in Line 261-263.", " The authors addressed my concerns, so I decided to keep my rating.", " I still find that the experiments and results require attention in order to support the conclusions as they are presented, or need clarification to avoid incorrect conclusions.\n\n**[Table 4] We would like to clarify that there is no contradiction between the two statements. ... We show that within the latency budget, ParNet can use higher resolution and perform better than baseline.**\n\nMy concern here is that your first figure sets the tone that all the results use the exact same configuration for fair accuracy comparisons. Table 4 presents results that do *not* use the same configuration (the ParNet results use a higher resolution), and it is only with careful reading of the text that this becomes clear. The baseline results' accuracies could be higher under the same experimental configuration. I understand that their latencies would also increase, and that your results are already better in both metrics (accuracy and latency). An easy solution is to expand the table to include the ParNet results at the same resolution - this makes the configurations crystal clear and gives the reader some extra information about how ParNets scale.\n\n**We will revise the table caption to make it clear that ParNet uses 3 GPUs.**\n\nIn addition to clarifying that three GPUs were used, the claim should be tempered with \"… faster than similar-performing ResNets when they are not parallelized beyond a single GPU.\" The structure of ResNets may not be as trivially parallelizable as ParNets, but it is not impossible, especially for large input sizes typically used for detection tasks (see e.g. \"Spatially Parallel Convolutions,\" Jin et al., ICLR 2018 Workshops, or ensembling [more below]). Claiming speed superiority when using different computing resources is not fair.\n\n**Organizing models by size is a well-adopted and clear way to present such results. We are open to suggestions that the reviewer might have. In our opinion, the organization shows how ParNet performs similarly to DenseNet and worse than DenseNet (Bottleneck + Compression) in terms of number of parameters vs accuracy. We also state the same in L257-263.**\n\nI agree - this organization is commonly-used. Further, there's certainly no optimal organization for all situations. There may not be an optimal organization even for just this table! However, the formatting added to identify the second and third best performing models in each category makes it seem that, for a given parameter limit, ParNet is in the top three networks for CIFAR10 at the upper end of accuracy. This conclusion would be incorrect, however, when considering all the models that fit into the same parameter constraint. Either the table and its formatting should not lead a casual reader to an incorrect conclusion, or the proper conclusion should be explicitly mentioned in the text. In this case, removing the formatting would be sufficient to avoid the implicit suggestion that ParNet is in the top-3 for high-accuracy CIFAR10 models, but I'm curious about what conclusions would fall out if the table were ordered by accuracy, instead. 
(Either data set; hopefully the other data set's accuracies will be close to ordered.)\n\n**We want to clarify a potential misunderstanding. The model used for the ensemble is not ParNet-M but a single-stream ParNet model (Table 9). ... Regarding comparison with ensembles of deeper networks such as RN50, we agree that ParNets currently are not a replacement for them (L308). As pointed out by the reviewer, our objective is to show that it is possible to build high-performing non-deep networks.**\n\nThank you for clarifying my misunderstanding with the ensembled model. That is not my concern with this experiment, though. The section's title and text suggest the comparison is between \"ParNet vs. Ensembles,\" but a more accurate description of the experiment might be \"Ensembled single-stream ParNet vs. multi-branch ParNet.\" I'd much prefer to see the experiment suggested by the original text!\n\nL308 further qualifies your above statement \"... non-deep networks are not a replacement for their deep counterparts *[in] low-compute settings requiring small number of parameter and flops.*\" Ensembling something like RN50 would address a different setting, in which large amounts of parameters and flops are allowed. If they are not a replacement there, either, then where are they best used?", " I truly appreciate the authors' thoughtful responses to all the reviews. I'll respond to the first half of their response to my review here, and to the second half under that response.\n\nTo reiterate: such shallow networks performing as well as they do is impressive. I do not know if the motivation of reducing theoretical latency is particularly compelling, though. Let me explain by clarifying some of my earlier points in the context of your response.\n\n\n**For better clarity, we will rephrase to \"Lowest theoretical latency is dependent on depth\". We are open to suggestions from the reviewer.**\n\nThis is a better phrasing, but I would add that this focus on depth as the limiter of latency is dependent on future hardware which is unencumbered by FLOPs or bandwidth. (If the hardware *could* be limited by these other factors, then disregarding them presents an incomplete view of the lowest theoretical latency.)\n\n**In hindsight, we feel a theoretical limitation might be a better way to say it. We are open to suggestions from the reviewer.**\n\nTo make sure I understand the focus on depth and latency: the idea is that there will be a material difference between a latency of O(10) cycles and a latency of O(50) cycles at a reasonable clock speed? Even at just 10MHz, the difference between the theoretical lowest latencies is 4us, which is a drop in the bucket of full system latencies which are typically three (or more) orders of magnitude larger, even for low-latency applications. While I agree with the statement that latency is theoretically bounded by the longest sequential path through a network, I remain unconvinced that this matters in practice. This is the source of my view that the motivation was a weakness of the submission.\n\n**Depth matters in this section because we discuss here that one cannot increase the processor frequency because of physical limitations. Hence, networks with fewer sequential operations (like non-deep networks) are advantageous. We will clarify this further in the paper.**\n\nThe conclusion from the observations in this section is *not* that shallow networks are preferable; it's that parallelization matters. 
This provides the motivation to parallelize the structure of ParNets - with some math to perform, the way to improve performance when the clock speed cannot be increased is to do the operations at the same time using more resources. This technique applies equally to shallow and deep networks. Suggesting that non-deep networks are advantageous at the end of this section relies on the same assertion of theoretical minimum latency as discussed directly above in our responses, and claiming it here conflates low depth and high parallelism.\n\n**We will revise the statement to the following for clarity: \"This further indicates that a non-deep network such as ParNet can work as well as deeper SOTA networks such as ResNet\"**\n\nTable 6 suggests that ResNet may not be SOTA, so I would omit that qualifier. The best-performing networks in Table 6 are DenseNets; even if they are SOTA, I don't think that ParNet works \"as well\" given the differences in accuracies.\n\n**Overall, we find the depth of 12 to be better for ParNet when considering both the latency and accuracy.**\n\nThis is interesting data! Please include it in the paper; I believe readers will appreciate it.", " We thank the reviewer for positive comments and helpful feedback on our work. We address the concerns below:\n\n**Concern: Provide an example where authors can lower-bound the latency of ResNet-like or VGG-like architecture (theoretically and empirically), and show to what extent parallelism gives a crucial advantage? Table 3 compares ParNet with and without parallelism but does not compare the speed with other architectures.**\n\nThanks for the interesting question. A rough numerical sketch of the cycle-level latency model in question is given below, followed by an empirical scenario. 
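The following is a minimal, back-of-the-envelope sketch of the cycle-level model discussed in this thread. The function name, the 10-cycles-per-layer cost, and the 1 GHz clock are illustrative assumptions for this discussion, not measurements from the paper.

```python
# Minimal model: with perfect within-layer parallelism, only the sequential
# depth contributes to the latency floor. All constants are illustrative.

def latency_floor_us(depth: int, cycles_per_layer: int = 10,
                     clock_hz: float = 1e9) -> float:
    """Lowest achievable latency, in microseconds, for a `depth`-layer network."""
    return depth * cycles_per_layer / clock_hz * 1e6

# A 100-layer sequential network vs. a depth-10 parallel network at 1 GHz:
print(latency_floor_us(depth=100))  # 1.0 (microseconds)
print(latency_floor_us(depth=10))   # 0.1 (microseconds)
```

Under this model, the floor scales linearly with depth; as the reviewer notes, FLOP and bandwidth limits are not captured here.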
\n\nFollowing is an empirical scenario for autonomous vehicles. Ideally, the latency of the autonomy system should be less than or equal to the latency of the sensor capture system. The autonomy system includes image preprocessing, network prediction and control. Hence, the latency requirement for the network prediction is more stringent. To cater to this requirement, autonomy systems have proposed to use two different detectors operating at different frequencies: a less accurate detector with high speed and a more accurate detector at low speed [1]. This is because the current detectors based on ResNet-like or VGG-like architectures might be insufficient for accurate prediction at high speed. We believe that parallelism could provide a crucial advantage here by allowing us to operate accurate and fast detectors. \n\nFor a theoretical lower bound, if we assume a clock speed of 1000 MHz (typical for GPUs), and that each layer can be executed in 10 clock cycles, then a network with 100 layers can be executed, at best, in 1 us. With parallelism and non-deep networks, one can theoretically do the same operation in 0.1 us, assuming a depth of 10. \n\nWe provide a comparison of speed with other architectures in Table 2.\n\n[1] Shen, A., Tesla Inc, 2020. Machine learning models operating at different frequencies for autonomous vehicles.\n\n**Concern: Why is model parallelism not used for object detection in MSCOCO?**\n\nWe found in our experiments that parallelization does not offer much advantage in the object detection case due to the high overhead of transferring the large features of higher resolution. In this case, communication becomes a major bottleneck. We believe that parallelism will be more relevant for this purpose when that issue is reduced, i.e., when the relative data transfer latency is less compared to the execution time of model layers. Developments like multi-die GPUs are steps in this direction. \n\n**Concern: From a theoretical side, are there classes of functions actually requiring a certain depth to be effectively approximated by a neural network? This could help identify the use cases where non-deep networks will never be an acceptable replacement for their deep counterparts.**\n\nThanks for the interesting question! In fact, the classic work by Cybenko [1] shows that even a single-layer neural network, when sufficiently wide, can approximate any continuous function with arbitrarily small error. However, such a network might be impractical because it may need a large number of parameters to satisfy the width requirement. In our work, we show how one can use a non-deep network with a reasonable parameter count for computer vision applications. We hope our work can inspire theoretical investigation in this direction.\n\n[1] Approximation by Superpositions of a Sigmoidal Function", " We thank the reviewer for positive comments and helpful feedback on our work. We address the concerns below:\n\n**Concern: What is the effect of changing depth on ParNet? Would scaling ParNet to greater depths further increase its performance?**\n\nWe thank the reviewer for the suggestion. In the following, we show results with ParNet variants with depth 9, 12 and 15. M variants have 128, 256 and 512 channels; L have 160, 320 and 640 channels; XL have 200, 400 and 800 channels in the three branches (Fig. 1). 
\n\n| Size | Depth | Latency (in ms) | ImageNet Top-1 Acc |\n|--:|--:|--:|--:|\n| M | 9 | 3.1 | 73.9 |\n| L | 9 | 3.2 | 75.1 |\n| XL | 9 | 3.7 | 76.1 |\n| M | 12 | 3.8 | 76.6 |\n| L | 12 | 4.0 | 77.7 |\n| XL | 12 | 4.4 | 78.6 |\n| M | 15 | 4.8 | 77.0 |\n| L | 15 | 4.9 | 78.4 |\n| XL | 15 | 5.4 | 79.4 |\n\n\nWe find that decreasing the depth of ParNet from 12 to 9 reduces latency but also reduces performance. ParNet-XL with depth 9 and ParNet-M with depth 12 have similar latency (3.7 vs 3.8 ms), but the former has slightly worse performance (76.1 vs 76.6).\n\nSimilarly, increasing the depth of ParNet from 12 to 15 increases performance but also increases latency. ParNet-M with depth 15 is both slower and less accurate than ParNet-L with depth 12.\n\n**Concern: Why/How did you choose the depth of 12?**\n\nAs shown in the previous table, we find the depth of 12 to be near optimal for ParNet when considering the trade-off between latency and accuracy. Hence, we chose a depth of 12.\n", " We thank the reviewer for positive comments and helpful feedback on our work. We address the concerns below:\n\n**Concern: Figure 1 mentions that results with longer training, higher resolution, or multi-crop testing are excluded for fairness. However, the authors also report ParNet’s accuracy with higher resolution in Table 4's results (L226).**\n\nWe would like to clarify that there is no contradiction between the two statements. The two statements are being made in different contexts. Figure 1 reports depth vs accuracy on ImageNet. Auxiliary factors like multi-crop testing and higher resolution can affect the performance of all networks, including ParNet. Hence, for a fair comparison, we report numbers for all networks (including ParNet) without these auxiliary factors. In Table 5, we separately show the effect of auxiliary factors like longer training, higher resolution and multi-crop testing for ParNet on ImageNet. \n\nOn the other hand, L226-L240 describes latency vs performance on MSCOCO object detection. We show that within the latency budget, ParNet can use higher resolution and perform better than baseline.\n\n\n**Concern: Table 2 presents latency results with the conclusion that, \"In spite of communication overhead, ParNet is faster than similar-performing ResNets.\" However, ParNets are using 3x as many compute resources (L210-211). This detail should be reflected in the table.**\n\nThanks for the suggestion. We will revise the table caption to make it clear that ParNet uses 3 GPUs.\n\n**Concern: Table 6 is organized such that models with similar sizes are grouped together, and then ParNets are compared only within those groups. This disguises other potential comparisons, such as DenseNet (Bottleneck+Compression) with a depth of 250, with the final group. If it were included, it would show that a network with less than half as many parameters has higher accuracy than the largest ParNet model.**\n\nOrganizing models by size is a well-adopted and clear way to present such results. We are open to suggestions that the reviewer might have. In our opinion, the organization shows how ParNet performs similarly to DenseNet and worse than DenseNet (Bottleneck + Compression) in terms of number of parameters vs accuracy. We also state the same in L257-263. \n\n**Concern: Ensemble is an obvious cousin of ParNet. The only comparison is with ensembling ParNet-M models. A better choice might be ResNet-50; as presented in Table 2, it has higher accuracy, fewer parameters, and fewer FLOPs than ParNet-M. 
If the latency is roughly the same for improved accuracy, then what advantage do ParNets have?**\n\nWe want to clarify a potential misunderstanding. The model used for the ensemble is not ParNet-M but a single-stream ParNet model (Table 9). Ensembling a single-stream ParNet model allows us to compare the advantage of ensembling vs. multiple branches while controlling for other factors like the block structure and depth. Hence, we show the advantage of multiple branches vs. ensembles only for ParNet.\n\nRegarding comparison with ensembles of deeper networks such as RN50, we agree that ParNets currently are not a replacement for them (L308). As pointed out by the reviewer, our objective is to show that it is possible to build high-performing non-deep networks.\n\n**Concern: The conclusion from Fig. 3 (L299-301) is that by increasing compute, one could achieve even higher performance with ParNet while maintaining low depth. This is in contrast to Table 10, though, which shows that scaling from 3 to 4 branches reduces Top-1 accuracy. Figure 3 does not scale far enough to see the saturation shown in Table 10.**\n\nThanks for the suggestion. We want to clarify a point of confusion. In Table 10, we increase the number of streams while keeping the number of parameters the same. For Figure 3, we mean scaling via increasing compute, which can involve increasing the number of parameters and flops. That being said, we agree that there could be saturation beyond the range we tested, so we will revise to clarify it.", " We thank the reviewer for positive comments and helpful feedback on our work. We address the concerns below:\n\n**Concern: The focus on depth is myopic. It is asserted that “latency is fundamentally dependent on the depth of the network.\" This is also true of other factors such as the number of parameters, the input resolution, choice of activation function, and choice of layer types. Am I missing a fundamental relationship between depth and latency that doesn't exist between other axes, like layer width, and latency?**\n\nThanks for pointing out the confusion. We would like to clarify a potential misunderstanding. It is correct that, for a particular piece of general-purpose hardware like a GPU, latency is dependent on all the factors mentioned by the reviewer, such as the number of parameters, layer width, etc. This is because the current GPUs have limited memory and number of cores. Hence, the extent of parallelization is not perfect. \n\nBut in the context of our work (L20 - L22), we are referring to the lowest achievable latency for a network with optimal hardware for parallelization. One way to achieve this latency would be to print the entire network on a chip. Further, future hardware with more memory and cores would also facilitate such parallelization. For better clarity, we will rephrase to “Lowest theoretical latency is dependent on depth”. We are open to suggestions from the reviewer. \n\n\n**Concern: The focus on depth gets stranger with line 20's \"the lowest achievable latency is d/f\" - layer-seconds per cycle is an odd unit of latency. I understand the intent, but feel it could be stated less sensationally.**\n\nOur intention was not to sensationalize but to make a point about a theoretical bound on latency. In hindsight, we feel a theoretical limitation might be a better way to say it. We are open to suggestions from the reviewer. \n\n**Concern: The focus on depth is out of place in Section 3.5's Line 177-178: \"All these factors make non-deep parallel structures advantageous…\". 
Nothing in this section was related to network depth, just parallel operations.**\n\nWe regret the confusion and would like to clarify the misunderstanding. Depth matters in this section because we discuss here that one cannot increase the processor frequency because of physical limitations. Hence, networks with fewer sequential operations (like non-deep networks) are advantageous. We will clarify this further in the paper.\n\n**Concern: L264 suggests that \"it is surprising that a mere depth-12 network could achieve…. This further indicates that non-deep networks can work as well as deeper counterparts.\" Strictly speaking, any surprise at the results does not indicate non-deep networks' relative performance to their deeper counterparts. What it shows is that ParNet's structure allows for shallower networks than the other network structures.**\n\nThanks for the great suggestion! We will revise the statement to the following for clarity: “This further indicates that a non-deep network such as ParNet can work as well as deeper SOTA networks such as ResNet”\n\n**Concern: What happens if the streams of ParNet are made deeper? Does performance not improve, supporting this assertion? Conversely, what if depth is decreased - is there a minimum necessary depth for ParNet? Any results would not diminish the contributions, but help put them into perspective.**\n\n\nWe thank the reviewer for the suggestion. Below we show results with ParNet variants with depth 9, 12 and 15. Note that networks with size M have 128, 256 and 512 channels; size L have 160, 320 and 640 channels; size XL have 200, 400 and 800 channels in the three branches (Fig. 1). \n\nWe find that decreasing the depth of ParNet from 12 to 9 reduces latency but also reduces performance. ParNet-XL with depth 9 and ParNet-M with depth 12 have similar latency (3.7 vs 3.8 ms), but the former has slightly worse performance (76.1 vs 76.6). \n\nSimilarly, increasing the depth of ParNet from 12 to 15 increases performance but also increases latency. ParNet-M with depth 15 is both slower and less accurate than ParNet-L with depth 12.\n\nOverall, we find the depth of 12 to be better for ParNet when considering both the latency and accuracy.\n\n| Size | Depth | Latency (in ms) | ImageNet Top-1 Acc |\n|--:|--:|--:|--:|\n| M | 9 | 3.1 | 73.9 |\n| L | 9 | 3.2 | 75.1 |\n| XL | 9 | 3.7 | 76.1 |\n| M | 12 | 3.8 | 76.6 |\n| L | 12 | 4.0 | 77.7 |\n| XL | 12 | 4.4 | 78.6 |\n| M | 15 | 4.8 | 77.0 |\n| L | 15 | 4.9 | 78.4 |\n| XL | 15 | 5.4 | 79.4 |", " We thank the reviewer for positive comments and helpful feedback on our work. We address the concerns below:\n\n**Concern: The technical contributions are not significant, as most of the crucial components of ParNet are present in the literature**\n\nWhile components of ParNet have appeared in prior literature as stated in L44, our significant technical contribution is not in proposing new components, but in using them properly for building non-deep networks. For example, parallel branches have been used in HRNet, but unlike us, HRNet introduces many interconnections between branches, which reduces the degree of parallelization. Similarly, the ParNet block combines ideas from multiple sources, including Inception, RepVGG, and Squeeze and Excitation. \n\n**Concern: ParNet introduces some element-wise operations that are not parallelism-friendly and hard to optimize**\n\nWe agree that ParNet introduces some operations which are currently not as optimized for parallelism as 3x3 convolution or ReLU. 
However, this is not a theoretical limitation, and with better software implementation the gap could be reduced. For example, global average pooling can be implemented as a parallel global mean reduce operation. Also, support for activations like SiLU and sigmoid has been improving, and they are becoming as fast as ReLU [1]. We believe that if these layers are shown to be useful, better software and hardware to support them will follow. \n\n[1] https://benjaminwarner.dev/2021/07/19/benchmarking-pytorch-native-mish\n\n**Concern: To reduce the network depth, ParNet sacrifices too much on the number of parameters and FLOPs.**\n\nParNet uses more parameters and FLOPs than deeper networks such as ResNet. However, this tradeoff may be worthwhile if the application requires low latency. Hence, depending on the latency requirements, one might prefer a model with more parameters and FLOPs. We agree that for some applications one might wish to minimize parameters and FLOPs in particular, because of memory or energy constraints, but in this paper we explore a different objective in neural network design.\n\n**Concern: ParNet is compared to models with a high compute budget like ResNet and not compact models like EfficientNet. Compact models are more widely used for low latency, and some of their design philosophy might help to reduce the compute cost of ParNet.**\n\nThank you so much for the great suggestion! Our design philosophy is complementary to those of compact models such as EfficientNet. Combining the two might help in building low-latency and compact networks. The focus of the presented work is in showing how to build non-deep networks that achieve surprisingly high performance on vision benchmarks. Extending non-deep networks to compact design is a great direction for future work.", " We would like to thank the reviewers for their feedback and help in improving our work. We are excited by the support of the reviewers! We are happy that they found our work novel (Zr8N), intriguing (momd), boldly challenging conventional wisdom (momd) and well motivated (t5as, zBDf); our results impressive (Zr8N), valuable for the literature (cvGo); our experiments being numerous (t5as) and covering a wide breadth (momd); and our paper well-written and clearly presented (Zr8N, momd, t5as). Below we have addressed the individual concerns of the reviewers.", " In this paper, the authors explore how to reduce the depth of neural networks while keeping the model capacity. They propose ParNet, which has a depth of only 12 and can achieve comparable performance with deep neural networks. The authors also show some potential advantages of using non-deep networks for smaller latency on future hardware. Strengths:\n- The whole idea of non-deep networks is novel and not well-explored in previous works, as increasing the depth is usually a common practice for most neural network architectures to scale themselves up.\n- Achieving similar performance as 'deep networks' with a depth of 12 is pretty impressive.\n- The authors also discuss the potential advantages of using shallow networks to fully utilize the parallelism of hardware (e.g. GPUs).\n- Some previous works are reproduced under the same setting for fair comparison.\n- The key limitations are well discussed.\n- The writing is good and all details are presented very clearly.\n\nWeaknesses:\n- The technical contributions are not significant, as most of the crucial components of ParNet were presented in other papers. 
For example, the overall ParNet structure looks very similar to HRNet [1] and the ParNet block looks like a variant of Inception [2].\n- Although the authors intend to improve the inference speed by reducing the network depth, ParNet still introduces some element-wise operations that are not parallelism-friendly and hard to optimize (e.g. global average pooling and sigmoid in the SSE layer, and the SiLU activation).\n- To reduce the network depth, ParNet sacrifices too much on the number of parameters and FLOPs (as well as inference memory usage, even though it's not shown in the paper).\n\n[1] Hu, Jie, Li Shen, and Gang Sun. \"Squeeze-and-excitation networks.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.\n[2] Szegedy, Christian, et al. \"Going deeper with convolutions.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2015. - From my perspective, ParNet is generally compared to models with a large computing budget like ResNet. It is interesting to see whether the ParNet architecture can also achieve comparable performance with modern compact models (for example EfficientNet). Compact models are more widely used in environments that require low latency, and some of their design philosophy might help to reduce the high computation cost of ParNet. The main limitation of this work (inference speed on current hardware) is well-discussed in the paper. In my opinion, figuring out a way to avoid the use of global average pooling and sigmoid while keeping the same depth/performance would be a huge technical contribution, as they are indeed slow on GPUs and hard to parallelize or optimize.
I imagine many of the NeurIPS attendees and readership will find the topic and approach compelling, at least from a theoretical standpoint.\n\nThe text is not hard to follow; I rarely found myself needing to re-read a sentence due to having trouble parsing it or placing it into context with the preceding text.\n\nThe breadth of experiments is appreciated: not only are there accuracy results, but we also see latency for some experiments, comparisons with different baselines, ablation studies of all types, and different tasks are represented.\n\nFinally, the ultimate goal of showing that a shallow network can compete with deep networks is successful: the accuracy of the 12-layer networks, regardless of the number of streams, stream widths, or input size, is sufficient to make the field take notice.\n\n== Weaknesses ==\n\nFor the successes I describe above, I think the exposition and successive analysis are in need of attention before I can recommend the submission for acceptance.\n\nI found the motivation to be lacking nuance and the focus on network depth to be myopic. \n- In the abstract, it is asserted that \"latency is fundamentally dependent on the depth of the network.\" This is also true of the number of parameters and width of each layer - wider layers require more FLOPs, increasing latency on a given piece of hardware. Similarly, latency is fundamentally dependent on the input resolution, choice of activation function (is it a simple clamp() function, like ReLU, or does it involve a transcendental function?), and choice of layer types (3x3 convolution vs. 1x1 convolution vs. fully-connected vs. depthwise vs. …).\n- This focus on depth gets stranger with line 20's \"the lowest achievable latency is d/f\" - layer-seconds per cycle is an odd unit of latency. I think I understand the intent, but I can't help but feel it could be stated less sensationally.\n- The focus on depth is similarly out of place in Section 3.5's Line 177-178: \"All these factors make non-deep parallel structures advantageous…\", but nothing in this section was related to network depth, just parallel operations. \n- Line 264 suggests that \"it is surprising that a mere depth-12 network could achieve…. This further indicates that non-deep networks can work as well as deeper counterparts.\" Strictly speaking, any surprise at the results does not indicate anything about non-deep networks' relative performance to their deeper counterparts. What's really shown is that ParNet's structure allows for shallower networks than the other network structures studied.\n\nAn obvious experiment was left un-performed, in light of the claim that non-deep networks can work as well as deeper counterparts: what happens if the streams of ParNet are made deeper? Does performance not improve, supporting this assertion? Conversely, what if depth is decreased - is there a minimum necessary depth for ParNet? 
Any results here would not diminish the contributions, but they would help put them into perspective.\n\nI noted several unfair or misleading comparisons in the results:\n- Figure 1 mentions that \"for fairness,\" the authors \"exclude results with longer training, higher resolution, or multi-crop testing.\" However, the authors also seem to report their own networks' accuracy results with higher resolution in Table 4's results, as described in Line 237: \"We use this higher image resolution for ParNext-XL and ParNet-XL-CSP.\"\n- Table 2 presents latency results for several baseline networks and several ParNets, with the stated conclusion that, \"In spite of communication overhead, ParNet is faster than similar-performing ResNets.\" This is not a fair comparison, as the ParNets are using 3x as many compute resources, detailed in Line 210-211: \"… for the multi-GPU version, we use 3 GPUs.\" This detail is not reflected in the table, where most readers would notice.\n- Table 6 is organized such that models with similar sizes are grouped together, and then ParNets are compared only within those groups - this shows that ParNets are within the top three most accurate networks in each group. However, this disguises other potential comparisons, such as DenseNet (Bottleneck+Compression) with a depth of 250, with the final group. If it were included, it would show that a network with less than half as many parameters has higher accuracy than the largest ParNet model.\n\nI was waiting for a comparison with ensembling, which is an obvious cousin to the ParNet structure: send an input to separate networks, collect the results, and reduce to a single output. However, the only comparison is with ensembling ParNet-M models. A better choice might be ResNet-50; as presented in Table 2, it has higher accuracy, fewer parameters, and fewer FLOPs than ParNet-M. Ensembling two RN50 models would have roughly the same number of parameters as a single ParNet-L. Three RN50s would make use of the three GPUs afforded to ParNet-L in Table 2, so latency for either ensemble would be roughly identical. If the latency is roughly the same for improved accuracy (pending the experiment!), then why would ParNets have an advantage?\n\nThe conclusion from Figure 3 on Lines 299-301 is that \"Based on these charts, we see no saturation in performance while scaling ParNets. This indicates that by increasing compute, one could achieve even higher performance with ParNet while maintaining low depth.\" This is in contrast to Table 10, though, which shows explicitly that scaling from a third to a fourth branch reduces Top-1 accuracy. It seems that Figure 3 simply does not scale far enough to see the saturation shown in Table 10.\n What happens if the streams of ParNet are made deeper?\nConversely, what if depth is decreased - is there a minimum necessary depth for ParNet?\n\nAm I missing a fundamental relationship between depth and latency that doesn't exist between other axes, like layer width, and latency? 
(Formulating an equation for minimum latency seems like it should be based on FLOP count and bandwidth requirements rather than something more abstract like \"depth.\")\n Yes, the limitations have been discussed adequately.", " This paper presents a new CNN architecture designed for the purpose of having a relatively low depth (12 layers) while still providing results on par with (deeper) classic ResNets on various vision benchmarks, demonstrating that there may be other mechanisms than depth at play to obtain state-of-the-art results with CNNs. The strong expressive power of the architecture is obtained thanks to parallel substructures in the network, each operating at a different scale, then fusing together their information content before the final layers. In-depth ablation studies are done to demonstrate the relevance of many components of the design.\nThis work is motivated by the goal of decreasing latency, which can be impacted by depth. In order to improve this metric, it is shown that, once trained, the network can be expressed equivalently as a classic single-stream CNN, increasing its speed at inference. $\\textbf{Strengths}$\n\nThe paper is clearly motivated and very well written. The experiments are numerous, thoroughly comparing the newly introduced CNN architecture to deeper ones on several classic vision datasets. An in-depth ablation study quantifies the impact of most parts of the new design, reporting impressive results for a 12-layer-deep network. A comparison between this new design and other standard architectures in terms of latency is done, showing promising results for ParNet.\n\n$\\textbf{Weaknesses}$\n\nWhile the effect of increasing the number of parameters and the number of streams has been studied on this new architecture, the effect of the depth is surprisingly not shown, despite being the highlighted metric of this paper. How do ParNets with a number of layers of a similar order of magnitude (e.g., 10, 11, 13 or 14) perform? If depth/latency was not an issue, would scaling ParNet to greater depths further increase its performance on standard benchmarks? * Why/How did you choose the depth of 12?\n* What is the effect of the depth in your new architecture? Yes.", " The presented paper questions the importance of depth in neural architectures for providing state-of-the-art results. There are two main motivations for this paper, the first being that very deep networks are inherently ill-suited for real-time systems, since input data needs to be processed sequentially through each layer. The second motivation is that computations within shallow networks using parallel substructures can be parallelized, hence (theoretically) reducing the effective computation time. However, depth being a central feature allowing neural architectures to perform complex classification tasks, the main challenge addressed by this paper is to derive a class of shallow architectures whose performances remain competitive against state-of-the-art deep architectures. The resulting architecture, called ParNet, is rather flexible in the sense that one can scale its representational power by increasing the number of parallel substructures. The authors specifically discuss the importance of certain modules with respect to the final performance reported in the experiment section. The paper motivates its research effort to derive efficient shallow architectures with several arguments, the first one being that shallow networks are highly desirable in real-time applications. 
The authors also motivate their effort with two \"wall-clock\" computation arguments, the first one being that deep architectures are hardly biologically plausible given the speed of human reaction with respect to certain stimuli. The authors also try to make reasonable assumptions on the improvement that is likely to be provided by the next generation of hardware, pointing out that neural architectures enabling model parallelism (different from data parallelism) should benefit from such improvements. Three experiments are conducted to assess the relevance of the architecture:\n1) experiments show that ParNet, an architecture of depth only 12, provides performance on par with ResNet-like and VGG-like architectures on CIFAR10, CIFAR100, ImageNet and MS-COCO. The authors also carefully discuss speed versus accuracy in the case of ImageNet.\n2) the impact of specific block designs and engineering tricks found in the literature with respect to the final performance is discussed to show how they can be employed to reduce depth while maintaining high performance. The claims are backed by an ablation study.\n3) rules to increase the statistical capacity are presented, and the increased statistical capacity results in increased performance for ParNet.\n\nSome weaknesses are exposed by the authors themselves. In Table 2, RepVGG was found to be faster than ParNet; however, the authors point out that RepVGG benefits from a highly optimized design, which is coherent given that this architecture has been tested and improved on many times over. Another key weakness in the presented paper (not the contribution itself) is the lack of a real-world scenario where parallelism is actively employed to reduce computation time, hence showcasing the advantage of ParNet in such a setting. Table 3 attempts to show that parallelizing the streams on different GPUs does reduce the effective computation time, but it is likely that neither the software nor the hardware are optimized for the ParNet paradigm, since they were initially developed to tackle data parallelism. Can the authors provide a setting example where they can lower-bound the time it would take for a ResNet-like or VGG-like architecture to process an input, both theoretically and empirically, and show to what extent parallelism gives a crucial advantage?\nThroughout the paper, the authors mention situations where multi-stream shallow networks should provide much faster computations.\nFor example, Table 3 compares ParNet with and without parallelism but does not compare the speed with other architectures.\nFurthermore, the authors promote the need for robust shallow architectures to reduce latency in real-time systems such as autonomous vehicles. The closest experiment addressing this issue is the MS-COCO challenge, as it is a detection rather than a classification challenge, but parallelism is not employed here, thus resulting in less than 25% improvement in latency while the depth is reduced by 75% compared to the baseline. Can the authors explain why they did not include more experiments on model parallelism while strongly motivating their paper for this particular purpose? One limitation of the paper, actually discussed by the authors, is that non-deep networks are not a replacement for their deep counterparts. They motivate such a claim by saying that non-deep networks still require a high number of parameters and FLOPs. 
From a more theoretical side, however, the reviewer strongly wonders if there are classes of functions actually requiring a certain depth to be effectively approximated by a neural network. This last point could help identify the use cases where non-deep networks will never be an acceptable replacement for their deep counterparts.", " This paper proposes a non-deep network for low latency. The key designs and techniques include multi-stream topology, RepVGG-style structural re-parameterization and the SSE block. Reasonable results on ImageNet, CIFAR and COCO are reported. The results show that shallow models can perform competitively with deep ones. Strengths:\nS1. This paper shows that a 12-layer model can achieve an accuracy of over 80% on ImageNet and an AP of 48% on MS-COCO. The results show that CNNs do not need to be very deep and research on shallow models is still useful, which is valuable to the literature. This is the primary reason I recommend accepting this paper.\nS2. The key techniques include RepVGG-style structural re-parameterization, multi-stream design and the SSE block. The motivations are well explained and the effects are verified.\nS3. The results with custom hardware settings are particularly interesting. (For the multi-GPU version, each stream is launched on a separate GPU.) The results and discussions of the communication overhead are useful.\n\nWeaknesses\nW1. The reported results are all for small- and middle-sized models. It is not clear whether heavy-weight models show different depth vs. performance trade-offs. Of course, I understand the authors may have limited computing resources and did not lower my rating for the lack of such large-scale experiments.\nW2. Though the discussions of the future hardware are interesting, this paper offers no clue as to how to realize it. Maybe the authors can show a potential direction.\nW3. The writing can be improved. Please see the suggestions below. Suggestions:\n1. A discussion with RepVGG may highlight the contributions of this paper as well as show the differences. For example, \"this paper focuses on the depth while RepVGG was more about the simplicity, .........., this paper is related to RepVGG because a RepVGG-style overall architecture is suitable for shallow model design\".\n\n2. L103: \"VGG-style block\" seems confusing. A reader may question why a single 3x3 conv is called a block. I would suggest the authors use \"VGG-style architecture\" or \"RepVGG-style design\".\n\n3. L133: missing reference. L143: missing reference.\n\n4. Caption of Table 2: I would suggest the authors use 3$\\times$3 instead of 3X3.\n\n5. L311: \"consists of consists of\"\n Have the authors adequately addressed the limitations and potential negative societal impact of their work? Yes. I appreciate the discussions.
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4, 5 ]
[ "5VLOyDT8buX", "w9GscWn15mJ", "Uky-tUI00K", "NYDQaSIHDd9", "AiZrWvEpgrOT", "2JNlxK-qRYw", "a61N5woR6NZ", "BQxZA8Ul4b1", "2JNlxK-qRYw", "p6oduJ1LbbH", "MWlH6RGrDKf", "BQxZA8Ul4b1", "43AI4ktSBQU", "P3d_IJLpbLD", "nips_2022_zuL5OYIBgcV", "nips_2022_zuL5OYIBgcV", "nips_2022_zuL5OYIBgcV", "nips_2022_zuL5OYIBgcV", "nips_2022_zuL5OYIBgcV", "nips_2022_zuL5OYIBgcV" ]
nips_2022_YpyGV_i8Z_J
Private Estimation with Public Data
We initiate the study of differentially private (DP) estimation with access to a small amount of public data. For private estimation of $d$-dimensional Gaussians, we assume that the public data comes from a Gaussian that may have vanishing similarity in total variation distance with the underlying Gaussian of the private data. We show that under the constraints of pure or concentrated DP, $d+1$ public data samples are sufficient to remove any dependence on the range parameters of the private data distribution from the private sample complexity, which is known to be otherwise necessary without public data. For separated Gaussian mixtures, we assume that the underlying public and private distributions are the same, and we consider two settings: (1) when given a dimension-independent amount of public data, the private sample complexity can be improved polynomially in terms of the number of mixture components, and any dependence on the range parameters of the distribution can be removed in the approximate DP case; (2) when given an amount of public data linear in the dimension, the private sample complexity can be made independent of range parameters even under concentrated DP, and additional improvements can be made to the overall sample complexity.
Accept
This paper studies private estimation with a small amount of public data. The idea is that the small public dataset may allow for significantly stronger positive results (e.g., in terms of sample complexity of private data). The authors study two fundamental settings in this direction -- estimating a Gaussian and a Gaussian mixture -- and provide interesting and technically non-trivial positive results. The consensus from the reviews and subsequent discussion is that this work is both conceptually and technically interesting.
train
[ "zkhfDLCiy9N", "Rx_OO_Vxe28P", "XhNmPwNF4Nb", "jJsn9o6-OMo", "cY4G7gWM2Qk", "h8QHIRStJ4j", "anp0uJjZZ2O", "whVRWHRWn06", "_JQKTqJsO6U", "WTalcKPar82", "XSKys1JaYGQ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for the thoughtful response. The example you gave helps to clarify my question about the relation to $(\\epsilon, \\delta)$-DP, which is my main concern. Therefore, I'm willing to raise my score.", " **A proof-of-concept numerical result:** \nAlthough we position our work as theoretical, since our approach is relatively simple to implement on top of an existing private algorithm, we offer some proof-of-concept simulations that demonstrate the effectiveness of public data in private statistical estimation. Below, we show some plots that evaluate **$1$ public sample private mean estimation** (the algorithm described in Section 2.1 and in more detail in Appendix B.1.1).\n\nWe examine the effect of 1 public sample on the performance of CoinPress [8] with its best parameter setting ($t=2$), in a case where the initial a priori bounds on the mean are weak. Concretely, we draw $n$ samples from a $d=50$ dimensional Gaussian, $\\mathcal N (\\mu , I_d)$, where $\\mu = 1000 \\cdot [1,1,..,1]^T$, and set our a priori bound to be $R = 1000\\sqrt d$ for CoinPress.\n\nPlease find the image of the two plots via the following link: https://i.imgur.com/duYteRQ.png\n\nWe follow the evaluation protocol from [8]: we target zCDP with $\\rho = 0.5$, and at each sample size $n$ we run the estimator 100 times and report the 10% trimmed mean of error from the ground truth. (The second plot is the first one zoomed in.)\n\nThe numerical result demonstrates the promise of utilizing public data for private data analysis, and confirms the takeaway that very little public data can help greatly when a priori knowledge of the private data is weak. As is visible from these plots, the error of our public-private algorithm nearly matches the error of the non-private algorithm.\n\nNote that these results are very preliminary – thorough tuning and evaluation of these algorithms (which is necessary to bring these algorithms to practice) is an important direction for future work.", " We thank the reviewer for their thoughtful feedback and questions. We are glad to hear that the reviewer finds the problem we study – understanding how to leverage public data in private data analysis – timely and significant, and our technical contribution solid. We agree with the reviewer that adding theory to this empirically promising approach is an important direction, and hope that our work helps refine our understanding of the role of public data in private data analysis.\n\nIn the following, we address the specific concerns and questions raised by the reviewer, in order of appearance:\n\n**On the practical implication of our Gaussian estimation results:**\n\n> *In setting 1) it was already known how to get results of a similar flavor without public data in the setting of approximate DP, so the advance lies only in obtaining pure DP, which is mainly of theoretical interest*\n\nFor estimating Gaussians: our advance is indeed theoretical, but we argue that there are reasons why our results might be interesting to a practitioner as well. \nSeveral recent works have focused on this problem of removing range bounds, developing significant technical machinery to do so via approximate DP. However, just because relaxing to approximate DP offers a solution, does not mean approximate DP is the right privacy guarantee we should adopt for every given scenario. Ideally, the decision on what privacy guarantee to go with in practice should depend on the situation itself, rather than the available tools. 
For example, we note that **the largest scale practical deployment to date of differential privacy, the 2020 US Census, decided to use zCDP** [1]. They make the following assessment of the two definitions:\n\n*zCDP provides privacy protection that weakens as $\\delta$ approaches $0$, but never results in catastrophic failure, which some approximate differential privacy mechanisms do permit.*\n\nAlong these lines, our work shows that in the situation where public data is available, there are alternative solutions that enjoy the benefits of stronger privacy definitions. Some examples of these benefits: both zCDP and pure DP are stronger guarantees that protect against the chance of ``catastrophic failure'' as specified by the delta parameter. Furthermore, even in the case where the analyst is comfortable with approx DP guarantees, zCDP offers composition benefits over approximate DP. Since gaussian estimation is a fundamental task, it is easy to see how it may be a part of a larger analysis. In that case, the much tighter composition guarantees of zCDP vs approximate DP (see the plot under “Composition under variants of differential privacy” in [2]) provide another practical use for our results.\n\n**On the same-distribution assumptions for our Gaussian mixture estimation results:**\n\n>*In setting 2) the public data must come from the same distribution as the private data, which is arguably a strong assumption. (I would guess it can be relaxed along the lines of case 1), but this is not worked out in the paper.)*\n\nWe agree that the assumption is strong. We do believe it is possible to relax the assumption to a certain degree, and that similar algorithms and analyses would work with some modifications. We think relaxing these assumptions in the analysis would be valuable, and leave it to future work, seeing as our current manuscript introduces novel algorithmic ideas and is already long and technical.\n\n**Replies to specific questions:**\n\n>*Given the prior work, what do you mean more precisely by \"we initiate the study\" in the abstract?*\n\nAlthough public data has been used in private data analysis, we are unaware of work where this is done specifically for private statistical estimation tasks.\n\n>*In Theorem 1.2, does $\\gamma$ need to be known to the algorithm, or can the sample size somehow be adapted to how well the public distribution approximates the private one?*\n\nWe do require an upper bound on $\\gamma$ to be known by the algorithm. However, we can be inaccurate by large polynomial factors of $1/(1- \\gamma)$ in the coarse estimation step, and not pay significantly (only polylogarithmically in the above expression) in our private sample complexity. We agree that removing the need for the bound to be pre-specified would be an interesting direction.\n\n**Final comments:** \nIf the reviewer has additional concerns and questions that they feel have not been sufficiently addressed in our reply, we would be happy to address them in the discussion phase.\n\n**References:** \n[1] John M. Abowd, Robert Ashmead, Ryan Cumings-Menon, Simson Garfinkel, Micah Heineck, Christine Heiss, Robert Johns, et al. “The 2020 Census Disclosure Avoidance System TopDown Algorithm”. Harvard Data Science Review. \n[2] Joseph P. Near and Chiké Abuah. Programming Differential Privacy. 
https://programming-dp.com/notebooks/ch8.html.", " **On the connection between approx DP and pure DP with public data:** \n\n> *I wonder if at least some of the results can be implied by the range-independent results for approximate DP. For $(\\varepsilon,\\delta)$-DP, in the worst case roughly $\\delta$ fraction of the dataset could be released as-is, which might be viewed equivalently as \"public dataset\". It would be great if the authors could discuss the connections.* \n\nThe reviewer points out an interesting possible connection between approximate DP and public-private pure DP. After thinking it over, we make the following remark: \n\n*Algorithms satisfying $(\\varepsilon,\\delta)$-DP do not necessarily satisfy public-private $\\varepsilon’$-DP for any $\\varepsilon’\\geq 0$, and for any designation of the rows as ``private data''.*\n\nTo see why, consider the following mechanism that, with probability $\\delta$, outputs the entire dataset. This is $(0,\\delta)$-DP. However, no matter what non-empty subset of dataset rows we designate as our ”private dataset”, individuals in that “private dataset” have a non-zero probability of being released (which would, otherwise, have been $0$ had they not participated). Therefore, they do not enjoy pure DP guarantees for any $\\varepsilon'\\geq 0$. \n\nIn other words, $\\delta$ could be interpreted as the probability of catastrophic failure -- but that failure could take any form (releasing one private row or the entire private dataset to the public, or any other leaks about the private data itself). If we simply choose to release $\\delta$ fraction of our private dataset (and label it \"public\"), the catastrophic failure probability $\\delta$ would essentially become $1$ because our algorithm is deterministically releasing private data with complete certainty, and this would annihilate any non-trivial privacy guarantees that we wanted to achieve. As such, there is no immediate way to translate $(\\varepsilon,\\delta)$-DP results to our setting.\n\n**Suggestions on writing:** \nWe thank the reviewer for suggestions on improving our writing! We plan to take these suggestions into account for the final version of the manuscript. We will expand our section on Gaussian mixtures to highlight more precisely how we use public data. We will also add the problem formulations of estimating Gaussians and Gaussian mixtures before describing our results.\n\n**Final comments:** \nIf the reviewer has additional concerns and questions that they feel have not been sufficiently addressed in our response, we would be happy to address them in the discussion phase.\n\n**References:** \n[1] Nicolas Papernot, Thomas Steinke. “Hyperparameter Tuning with Renyi Differential Privacy”. ICLR 2022. \n[2] Ishaq Aden-Ali, Hassan Ashtiani, Gautam Kamath. “On the Sample Complexity of Privately Learning Unbounded High-Dimensional Gaussians”. ALT 2021. \n[3] Hassan Ashtiani, Christopher Liaw. “Private and polynomial time algorithms for learning Gaussians and beyond”. COLT 2022. \n[4] Gautam Kamath, Argyris Mouzakis, Vikrant Singhal, Thomas Steinke, Jonathan Ullman. “A Private and Computationally-Efficient Estimator for Unbounded Gaussians”. COLT 2022. \n[5] John M. Abowd, Robert Ashmead, Ryan Cumings-Menon, Simson Garfinkel, Micah Heineck, Christine Heiss, Robert Johns, et al. “The 2020 Census Disclosure Avoidance System TopDown Algorithm”. Harvard Data Science Review. \n[6] Joseph P. Near and Chiké Abuah. Programming Differential Privacy. 
https://programming-dp.com/notebooks/ch8.html.", " We thank the reviewer for their thoughtful feedback and questions. We are glad the reviewer finds the problem we study – understanding the role of public data in private estimation – important to both theory and practice, and furthermore finds our results meaningful and comprehensive. We are also glad the reviewer finds our motivating Gaussian mean estimation example illuminating. In the following, we address specific concerns and questions raised by the reviewer, in order of appearance.\n\n**On the importance and practical implication of our Gaussian estimation results:** \n> *For Gaussian estimation: considering that the dependence on dimensionality remains the same, removing the logarithmic dependence on range parameters might seem a weak improvement. The improvement can also be achieved by using approximate DP, which is often used in practice, so the practical implication of the result seems limited.*\n\nFor Gaussian estimation, we argue that even logarithmic dependence on the unknown distribution parameters poses a significant problem. Having algorithm parameters and guarantees that depend on a priori knowledge of the solution is always not ideal, and we argue that in the case of privacy, it is especially undesirable. Since setting these parameters a priori is difficult (perhaps easier for the mean, but certainly difficult for spectral bounds on the covariance), an analyst is unfortunately incentivized to look at the private data in order to set these parameters when faced with low utility. This practice undermines and can even completely invalidate privacy guarantees (e.g. see this case study [1] on hyperparameter tuning). \nIt is precisely this desire to eliminate solution-dependent algorithm parameters which has spurred a string of recent works ([2,3,4]), each developing significant technical machinery to address the problem via relaxing to approximate DP. \nHowever, just because relaxing to approximate DP offers a solution, it does not mean that approximate DP is the right privacy guarantee we should adopt for every given scenario. Ideally, the decision on what privacy guarantee to go with, in practice, should depend on the situation itself, rather than the available tools. For example, we note that **the largest scale practical deployment to date of differential privacy, the 2020 US Census, decided to use zCDP** [5]. They make the following assessment of the two definitions:\n\n*zCDP provides privacy protection that weakens as $\\delta$ approaches $0$, but never results in catastrophic failure, which some approximate differential privacy mechanisms do permit.*\n \nAlong these lines, our work shows that in the situation where public data is available, there are alternative solutions that enjoy the benefits of stronger privacy definitions. Some examples of these benefits: both zCDP and pure DP are stronger guarantees that protect against the chance of ``catastrophic failure'' as specified by the $\\delta$ parameter. Furthermore, even in the case where the analyst is comfortable with approximate DP guarantees, zCDP offers composition benefits over approximate DP. Since Gaussian estimation is a fundamental task, it is easy to see how it may be a part of a larger analysis. 
In that case, the much tighter composition guarantees of zCDP vs approximate DP (see the plot under “Composition under variants of differential privacy” in [6]) provide another practical use for our results.", " **References:** \n[1] Yongqiang Wang, Abdelrahman Mohamed, Due Le, Chunxi Liu, Alex Xiao, Jay Mahadeokar, Hongzhao Huang et al. \"Transformer-based acoustic modeling for hybrid speech recognition.\" ICASSP 2020. \n[2] Gautam Kamath, Or Sheffet, Vikrant Singhal, Jonathan Ullman. “Differentially Private Algorithms for Learning Mixtures of Separated Gaussians.” NeurIPS 2019. \n[3] Ishaq Aden-Ali, Hassan Ashtiani, Christopher Liaw. “Privately Learning Mixtures of Axis-Aligned Gaussians.” NeurIPS 2021. \n[4] Hassan Ashtiani, Shai Ben-David, Nicholas Harvey, Christopher Liaw, Abbas Mehrabian, Yaniv Plan. “Nearly tight sample complexity bounds for learning mixtures of Gaussians via sample compression schemes.” NeurIPS 2018. \n[5] Gautam Kamath, Jerry Li, Vikrant Singhal, Jonathan Ullman. “Privately Learning High-Dimensional Distributions”. COLT 2019. \n[6] Gautam Kamath, Vikrant Singhal, Jonathan Ullman. “Private Mean Estimation of Heavy-Tailed Distributions”. COLT 2020. \n[7] Vikrant Singhal, Thomas Steinke. “Privately Learning Subspaces”. NeurIPS 2021. \n[8] Sourav Biswas, Yihe Dong, Gautam Kamath, Jonathan Ullman. “CoinPress: Practical Private Mean and Covariance Estimation”. NeurIPS 2020.", " **Clarifications to technical concerns:** \n> *In Algorithm 1, using only $d+1$ samples $\\widetilde X_i$ to obtain the covariance $\\widehat Σ$ leads to inconsistent estimation, especially when the dimension $d$ is large. Would this make the output inefficient in a certain case?*\n \nAs the reviewer points out, using $d+1$ public samples in Algorithm 1 results in weak accuracy guarantees on our initial ‘coarse estimate’ computed from public data only. Our goal here is to employ a minimal amount of public data (far less than the $\\Omega(d^2/\\alpha^2)$ samples required non-privately for accurate estimation). We are not focused on the statistical consistency, or other asymptotic behaviours of this initial coarse public-data estimate since our focus is on how well we can perform private estimation with a very small amount of public data. The goal of this coarse estimate is just to precondition the distribution in order to make it friendly for the next estimation step using the private data, which is assumed to be available in much larger quantities. \nIn the second step of our Gaussian estimator, we refine the coarse estimate with judicious use of private data. As $d$ increases, the error in the coarse estimate translates to a $\\log(d/\\beta)$ factor increase in the second and third terms of the private sample complexity, compared to the bounded case. This increase is overshadowed by $d^2$ factors that would exist, even if our coarse estimate were correct up to a constant. \nThe full estimator using both public and private data achieves consistency as well as finite sample guarantees with respect to the number of private samples. We do not claim statistical efficiency, which is not commonly studied in the CS literature on DP estimation.\n\n> *Similarly, in Lemma 2.5, how could you make sure the matrix $\\Sigma_Y = \\frac 1 L \\hat{\\Sigma}^{-1/2}\\Sigma\\hat{\\Sigma}^{-1/2}$ is positive definite?*\n\nWe know that with probability 1 over samples from a $d$-dimensional Gaussian, $\\hat{\\Sigma}$ is positive definite (PD). Matrix inverses and square-roots preserve PD-ness. 
The result of pre-multiplying and post-multiplying a PD matrix with a PD matrix remains PD.\n\n> *In Lemma 2.5, is the probability at least $1-\\beta$ going to $1$ with the increasing of the dimension $d$? In the current presentation, it seems $\\beta$ is an arbitrary constant greater than $0$.*\n\n$\\beta$ is a parameter of the problem: the algorithm designer can choose it to be any value $>0$ they wish (even possibly dependent on $d$, if need be). When one targets a lower failure probability $\\beta$ for the algorithm, Algorithm 1 (public data preconditioning) is adjusted to be more conservative, to account for more low probability events where the public data covariance does not transform the data to the desired range.\n\n**On experiments:** \n> *No numerical performance is conducted to demonstrate the effectiveness of the proposed method.*\n\nWe follow a long line of work on understanding the theoretical sample complexity of private estimation (e.g. [2, 3, 7]). Following these theoretical works and inspired by their techniques, subsequent works have explored practical tools for private estimation [8]. We position our work into the former line, and believe that building practical tools for these problems is a very interesting direction for follow-up work.\n\n**Final comments:** \nWe hope our responses to the reviewer’s questions, including the ones about the technical details, have helped address the concerns over the soundness and the significance of our work. If these concerns persist, or if the reviewer has any additional comments or questions, we would be happy to continue the conversation during the discussion phase to reach a resolution.", " We would like to thank the reviewer for their thoughtful feedback and questions. We are glad to hear that the reviewer finds the problem we study interesting, and shares our belief that taking advantage of public data is a promising approach toward addressing shortcomings in private data analysis. We hope that our study offers insights and inspires future work on understanding the role of public data in private data analysis.\n\nIn the following, we address specific questions and concerns raised by the reviewer, in order of appearance.\n\n**Organization and appendix length:** \n\n> *The organization of this paper should be improved. There are more than thirty pages appendix. The appendix is too long.*\n\nParts of our argument are inherently technical, and thus unfortunately, require many pages to present a complete, precise, and correct argument. As is standard for theory papers in NeurIPS/ICML, we elected to give a broad overview of the approach in the body (highlighting key technical components when possible), and eschewing the details to the appendix/supplement. If the reviewer has any specific suggestions on how to improve presentation in the body, we would be happy to incorporate them.\n\n**On Gaussianity assumptions:** \n\n> *This paper simply assumes the public comes from a Gaussian distribution. This assumption is too strong in realism. For example, the data in the economy and biology domain can usually be heavy-tailed. Is that possible to extend the current assumption to be a general one?*\n\nFirst, we would also like to point out to the reviewer that our results also address Gaussian mixtures – a less restrictive modeling assumption which is employed in practice (e.g. even for complex speech data, GMMs are a component in deep-learning-based speech recognition systems [1]). 
\nWe agree that our study of Gaussians and Gaussian mixtures does not cover all realistic use cases. Still, we believe it is a fundamental problem that is of relevance to the NeurIPS community (see papers [2, 3], and [4] – note that [4] won a NeurIPS 2018 best paper award). Our study of this relatively ‘simple’ setting already uncovers some conceptual takeaways: a small amount of public data can remove necessary boundedness requirements even when the public data's distribution differs significantly from the private data's distribution; and the private sample complexity may be improved in terms of other parameters (e.g. the number of mixture components in our results about Gaussian mixtures) in other cases using trace amounts of public data. \nWith regards to extensions to other cases: it is common to study the Gaussian case first, and then the heavy-tailed case afterwards. See for example, in the case of private estimation without public data, [5] and then [6]. We suspect that similar techniques to ours could also be applied to the heavy-tailed setting, after modifying the appropriate concentration and spectral inequalities. However, as the reviewer comments, the paper is already long and technical, so these details could be worked out in further work.", " This paper studies differentially private estimation by taking advantage of some public data. It has been shown that some improvements in the sample complexity can be made when given a certain amount of public data. Strengths: The targeted problem is interesting. The idea of taking advantage of public data is good. This paper is easy to follow. \n\nWeakness: \n(1) The organization of this paper should be improved. The appendix runs to more than thirty pages, which is too long.\n\n(2) This paper simply assumes the public data comes from a Gaussian distribution. This assumption is too strong for realistic settings. For example, data in the economics and biology domains can often be heavy-tailed. Is it possible to relax the current assumption to a more general one?\n\n(3) In Algorithm 1, using only d+1 samples $\\tilde{X}_i$ to obtain the covariance $\\widehat{\\Sigma}$ leads to inconsistent estimation, especially when the dimension d is large. Would this make the output inefficient in a certain case?\n\n(4) Similarly, in Lemma 2.5, how could you make sure the matrix $\\Sigma_Y=\\frac{1}{L}\\widehat{\\Sigma}^{-1/2}\\Sigma\\widehat{\\Sigma}^{-1/2}$ is positive definite?\n\n(5) In Lemma 2.5, is the probability of at least 1-$\\beta$ going to 1 as the dimension d increases? In the current presentation, it seems $\\beta$ is an arbitrary constant greater than 0.\n\n(6) No numerical experiments are conducted to demonstrate the effectiveness of the proposed method.\n\nQuestions:\n\n(1) This paper simply assumes the public data comes from a Gaussian distribution. This assumption is too strong for realistic settings. For example, data in the economics and biology domains can often be heavy-tailed. Is it possible to relax the current assumption to a more general one?\n\n(2) In Algorithm 1, using only d+1 samples $\\tilde{X}_i$ to obtain the covariance $\\widehat{\\Sigma}$ leads to inconsistent estimation, especially when the dimension d is large. Would this make the output inefficient in a certain case?\n\n(3) Similarly, in Lemma 2.5, how could you make sure the matrix $\\Sigma_Y=\\frac{1}{L}\\widehat{\\Sigma}^{-1/2}\\Sigma\\widehat{\\Sigma}^{-1/2}$ is positive definite?\n\n(4) In Lemma 2.5, is the probability of at least 1-$\\beta$ going to 1 as the dimension d increases? 
In the current presentation, it seems $\\beta$ is an arbitrary constant greater than 0. Yes", " This work studies differentially private estimation of Gaussian distributions and mixtures of Gaussians with additional public non-private data. The authors demonstrate that with a small amount of public data, the number of private samples required no longer depends on the bound on the $\\ell_2$ norm of the mean and the condition number of the covariance matrix, and instead depends logarithmically on the discrepancy between the public and private data distributions. For Gaussian mixtures, similar results are shown when the public and private distributions are the same. Strengths:\n- Understanding the performance of DP algorithms with public data is an important problem both in theory and in practice. The authors proved meaningful results in the context of estimating Gaussian distributions, which is a fundamental statistical estimation problem.\n- For Gaussian mixtures, the improvement on $k$ is significant.\n- The results are comprehensive. The paper studies both standard Gaussian estimation and mixtures of Gaussians. Algorithms are designed for various scenarios.\n- Overall, the writing is clear and well structured. The authors did a good job of motivating the algorithm ideas with a simple Gaussian mean estimation problem.\n\nWeakness:\n- For Gaussian estimation: considering that the dependence on dimensionality remains the same, removing the logarithmic dependence on range parameters might seem a weak improvement. The improvement can also be achieved by using approximate DP, which is often used in practice, so the practical implication of the result seems limited.\n- I wonder if at least some of the results can be implied by the range-independent results for approximate DP. For $(\\epsilon, \\delta)$-DP, in the worst case roughly a $\\delta$ fraction of the dataset could be released as-is, which might be viewed equivalently as a \"public dataset\". It would be great if the authors could discuss the connections.\n\n---\nUpdate: raised my score to 6 because the authors addressed the concern about the relation to $(\\epsilon, \\delta)$-DP.\n Questions regarding the results have already been stated in the Weakness section.\n\nSuggestion on writing:\n- The algorithm description for Gaussian mixtures only consists of high-level ideas. Perhaps the technical contribution could stand out more if the authors could highlight a few important intermediate results or claims?\n- Add a brief description of the problem formulation of estimating high-dimensional Gaussians and Gaussian mixtures. Limitations and negative impacts are adequately addressed.", " The paper studies estimation in a setting in which there are two sets of samples available: a \"public\" dataset, and a \"private\" dataset whose elements must be protected with differential privacy. This is studied for two classes of distributions: 1) d-dimensional Gaussians, and 2) mixtures of d-dimensional Gaussians. In case 2) the public dataset must come from the same distribution as the private one, but for 1) it suffices that it is a Gaussian that is \"not too far\" from the same distribution. 
In both settings it is shown that a small amount of public data (much less than what is needed for estimation) can be used to improve the utility of the private sample, lowering the sample complexity.\nIn terms of techniques, the algorithm for 1) uses the public data to do a parameter range estimation and then apply a private estimator that needs these parameters, a fairly natural (and generally applicable) idea. For 2) the public data is used to improve various steps in an existing algorithm [KSSU19], which is more involved. Strengths:\n- Addresses a timely and significant question (how to leverage public data) in two classical settings\n- Adds to the theory of an approach that has shown great promise empirically, but where there is still only a limited theoretical understanding\n- The fact that public samples from an \"almost entirely dissimilar\" distribution can help is intriguing and worth investigating in other settings (could lead to follow-up work)\n- As far as I can tell, the technical contribution is solid (I am not familiar with all the past work on which this builds)\n- The writing is clear and of high quality\n\nWeaknesses:\n- In setting 1) it was already known how to get results of a similar flavor without public data in the setting of *approximate* DP, so the advance lies only in obtaining pure DP, which is mainly of theoretical interest\n- In setting 2) the public data must come from the same distribution as the private data, which is arguably a strong assumption. (I would guess it can be relaxed along the lines of case 1), but this is not worked out in the paper.)\n - Given the prior work, what do you mean more precisely by \"we initiate the study\" in the abstract?\n- In Theorem 1.2, does $\\gamma$ need to be known to the algorithm, or can the sample size somehow be adapted to how well the public distribution approximates the private one? There is an adequate discussion of limitations. Potential negative societal impacts are unlikely, so not discussed." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "cY4G7gWM2Qk", "h8QHIRStJ4j", "XSKys1JaYGQ", "cY4G7gWM2Qk", "WTalcKPar82", "anp0uJjZZ2O", "whVRWHRWn06", "_JQKTqJsO6U", "nips_2022_YpyGV_i8Z_J", "nips_2022_YpyGV_i8Z_J", "nips_2022_YpyGV_i8Z_J" ]
nips_2022_ptUZl8xDMMN
Graph Scattering beyond Wavelet Shackles
This work develops a flexible and mathematically sound framework for the design and analysis of graph scattering networks with variable branching ratios and generic functional calculus filters. Spectrally-agnostic stability guarantees for node- and graph-level perturbations are derived; the vertex-set non-preserving case is treated by utilizing recently developed mathematical-physics based tools. Energy propagation through the network layers is investigated and related to truncation stability. New methods of graph-level feature aggregation are introduced and stability of the resulting composite scattering architectures is established. Finally, scattering transforms are extended to edge- and higher order tensorial input. Theoretical results are complemented by numerical investigations: Suitably chosen scattering networks conforming to the developed theory perform better than traditional graph-wavelet based scattering approaches in social network graph classification tasks and significantly outperform other graph-based learning approaches to regression of quantum-chemical energies on QM$7$.
Accept
In the discussion, we reached a clear consensus that this paper is interesting for the NeurIPS community and should be accepted. The author's rebuttal and subsequent discussion were very useful and we are looking forward to the final version of the paper with the promised improvements implemented.
train
[ "H5sJBYUSuTq", "xlMie1xPK3H", "kQMVxsctndp2", "my1BXEbD3Dl", "Q0E4qKk6V6d", "1ruWgbUT4SU", "QcRPprNnWKK", "sDyns08qrUA", "KVs3xoipRh9", "k6Mqa_nwFS-K", "xplFjj40VcD", "m0PxVrT73Ok", "OM4UnIjUQIM", "7AAdlmclehi" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " It was a pleasure implementing suggestions and providing answers and explanations for questions!", " I thank the authors for their very detailed rebuttal.\n\nI am not going to go over each bullet point again, but I am quite satisfied with the changes provided; especially in section 3 and after each theorem, the paper is much clearer that way.\n\nI have no real other concern, I increased my initial score.", " References:\n\n[12] Feng Gao, Guy Wolf, and Matthew J. Hirn. Geometric scattering for graph data analysis. In\n Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International\n Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA,\n volume 97 of Proceedings of Machine Learning Research, pages 2122–2131. PMLR, 2019.\n\n[10] Fernando Gama, Alejandro Ribeiro, and Joan Bruna. Diffusion scattering transforms on graphs.\n In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA,\n USA, May 6-9, 2019. OpenReview.net, 2019.\n\n[A] Wiatowski, Thomas, Harmonic Analysis of Deep Convolutional Neural Networks, 2018, Doctoral Thesis, ETH Research Collection\n\n", " 9) \"In the expressivity/energy section, the analysis was conducted in the limit of infinite network depth. The idea is quite neat, but it raises several important questions that the authors have to address:\"\n\n•\t\"the authors make the assumption that one can always choose an eigenvector of strictly positive entries. This of course follows from results in spectral graph theory. However, for the connected graph case, which I argue is probably the most important case, it is my understanding that the only eigenvector that satisfies this will be the eigenvector corresponding to the smallest eigenvalue, in which case the eigenvector has constant entries (they are all the same), and the corresponding eigenvalue is just 0, in which case the notation of defining m_n as the minimum and lambda_n etc becomes somewhat redundant. \"\n\nOn a connected graph, it is indeed correct that for the graph Laplacian $ L = D – W$ the lowest lying eigenvalue is zero and that the only eigenvector that can be chosen to have purely positive entries is the eigenvector corresponding to the eigenvalue zero.\nIn that case, this eigenvector has indeed constant entries.\nHowever, one might also consider the normalized graph Laplacian $ Id – D^{-\\frac12}W D^{-\\frac12}$ as e.g. [10] does in a wavelet-scattering setting. In this case, the entries of the lowest lying eigenvector (of the normalized graph Laplacian) are given by the square-roots of the degrees of the corresponding nodes. Thus we believe it has merit to keep $m_n$ as the minimum entry of such a vector as a variable in the formulation of our theorem.\nWe introduced $\\lambda_n$ as the corresponding eigenvalue and did not fix it to equal zero, to emphasize that nothing in particular is dependent on the eigenvalue under consideration being zero. We might equally well base our architecture on $ 3Id – D^{-\\frac12}W D^{-\\frac12}$ instead of $ Id – D^{-\\frac12}W D^{-\\frac12}$, in which case this lowest lying eigenvalue would be equal to $2$.\n\n•\t\"On the other hand, if the graph is disconnected, then the eigenvector that you pick will, to the best of my knowledge, have positive entries in one component and 0's in some other component, which would violate the strictly positive entry assumption. \"\n\nLet us assume that the graph has K disconnected components. 
Let us further make use of the un-normalized graph Laplacian $L = D - W$.\nThen the lowest-lying eigenspace (corresponding to eigenvalue 0) is $K$-dimensional. An orthogonal basis of this space is given by the $K$ vectors whose entries are equal to one on a specific connected component and zero on all others. \nAny linear combination of these vectors with positive coefficients will yield a vector with only positive entries.\nAs these K vectors form a basis of the lowest-lying eigenspace, any linear combination of them will still lie in this eigenspace and hence will be an eigenvector for the eigenvalue zero.\nThus a vector as desired by our theorem also exists in the disconnected case. In fact, we can simply choose it to be the normalized constant vector with positive entries again.\nFollowing this question by the reviewer, we have included a comment emphasizing this in our revised manuscript.\n\n\n\n•\t\"While I think the energy bounds are interesting, it is unclear to me how useful it is to link this to the expressivity of the network. The fact that the mapping only maps 0 to 0, when N goes to infinity, seems like a property that is only marginally related to expressivity in some bare-minimum way. One can probably come up with some kind of invertible linear-type transformation that also only maps 0 to 0. This property to me at first sight seems to just mean that the mapping is not contracting, but going from \"no contraction\" to \"expressivity\" seems a bit of a stretch to me. I think the authors could modify the wording of their conclusion here.\"\n\nWe do agree that it is a stretch to go from a trivial ‘kernel’ to talking about expressivity. The term ‘expressivity’ and the surrounding discussion stem from the analysis of Euclidean scattering networks [A]. We derived the corresponding results for the graph setting in our paper.\nTo decrease the emphasis on expressivity, we struck this word from the Abstract and Introduction and demoted the discussion of the trivial ‘kernel’ property to a side-note in the appendix. In our revised manuscript, we now focus much more on the relation between energy decay and truncation stability. As this discussion pertains to stability, we have now incorporated it into ‘Section 4: Stability Results’.\n\n6) \"As a result, only those that already have very substantial backgrounds in graph wavelets/graph networks and spectral graph theory will be able to understand it. \"\n\nWe would like to respectfully state that all necessary definitions and notions are contained in the main body of the paper, alongside a description of the necessary intuition of the field and our novel conceptual ideas and results. In fact, we start our discussion by reviewing the very basics of the signal processing framework and only afterwards build up steam and introduce our new scattering transforms, while always making sure to properly introduce new or maybe not too well-known concepts. For readers unfamiliar with the field or even parts of it, we have written a comprehensive appendix reviewing concepts from fundamental topics in linear algebra to tensorial inputs and functional calculus. 
Along the way, we provide ample resources and references containing even more detailed explanations of concepts and topics touched upon.\nIt would help us immensely if the reviewer could make clear what she/he feels is missing from our introductions, or where precisely we could extend our writing or make it more precise.\n\n\n\n7) \"The Lipschitz-type bounds in theorems 4.1 and 4.2 appear to be just iterative applications of a layer-wise Lipschitz type condition.\" \n\nWhile Lipschitz continuity plays an important part in deriving the stability bounds, equally important parts are played by applications of the Cauchy-Schwarz and Cauchy-Young inequalities and, maybe most importantly, the generalized frame condition in various guises, as well as careful applications of combinations of these inequalities. The importance of the generalized frame property above all else is elucidated further in our next comment below.\n \n8) \"While this is certainly a valid bound, the constants in the upper-bound involve a product of N terms/terms raised to the Nth power (this exponential term also appears in theorem 4.5). I would argue that on a practical level, these exponential terms render the bounds rather impractical, unless matching lower bounds/some sharpness result can be shown. In particular, say N is 10. Then if I perturb say the input by some small constant, the output could have an order C^10 change to it, which can be astronomical. \"\n\nWe empathise with the reviewer's initial reaction to the dependence of the bounds on layer depth; we had the same initial reaction when we first derived these results. However, the situation is much better than it would initially seem.\nA first mitigating factor is that in real-world applications, scattering networks are rarely deeper than N = 5, which already controls any exponential behaviour well.\n\nMore specifically, let us now address the stability constants of the two theorems individually:\n\nFor Theorem 4.1 we may note that in front of the product of n terms, there is a factor that is zero if $B_n \\leq 1$ and the product $B_n(L_n^+R_n^+)^2 \\leq 1$. \nIf this demand is met, the product of n terms disappears and no longer contributes to the stability constant. What is more, since filters, connecting operators and non-linearities are static parts of the architecture, one can always meet the demand $B_n \\leq 1$ and $B_n(L_n^+R_n^+)^2 \\leq 1$ by a simple rescaling operation.\nIf the demand is met in each layer of the scattering architecture, the resulting scattering transform is 1-Lipschitz irrespective of depth. \nWe have emphasized this in our discussion immediately after Theorem 4.1 (cf. also equation (2)).\n\nAs for Theorem 4.2 as it stood in our original submission, we note that even upon choosing N = 10, as the reviewer suggested, the contribution to the stability bound by the exponential term will only be $ \\sqrt{2(2^{10} - 1)} \\approx 45$, which is far from astronomical.\nHowever, there is even more that can be said. In our original submission, we had fixed the upper frame bound to be equal to one for simplicity of presentation. Following the reviewer's concerns about the exponential increase of stability constants with the depth N, we have now kept the upper frame constant B variable (a small numerical illustration follows below). 
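(The numerical illustration promised above, checking the size of the exponential factor sqrt(2(2^N - 1)) for realistic depths under the original B = 1 normalization; a minimal sketch:)

```python
import numpy as np

for N in (1, 2, 5, 10):
    # exponential term of the stability bound for depth N with B = 1
    print(N, round(float(np.sqrt(2 * (2 ** N - 1))), 2))
# N = 10 yields ~45.23, the figure quoted above; as discussed next,
# rescaling each layer so that B <= 1/2 removes this growth entirely.
```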
The reason for this is that there is a sort of phase transition going on: for $B\\leq\\frac12$ in each layer, we can prove that the exponential increase with the depth does not persist, and the stability constant can be chosen as $2\\cdot D$ independently of network depth. Again, this can always be achieved through a rescaling operation. Here D accounts for the Lipschitz constants of the individual filters. More details are provided in our updated Theorem 4.2.\nThe discussion for Theorem 4.5 proceeds analogously to that of Theorem 4.2.\n\n", " 4) \"In particular, the experimental results for the regression application are great, but for the classification they are not very good. I wonder if the performance on the classification task could be improved with an alternative instantiation of their model, such as using other functions than sines and cosines, or changing the layer parameter, or using a different operator than the Laplacian etc.\"\n\nWe have run the classification experiments with different choices of filters (polynomials of varying degrees, exponentials), different network depths, different aggregation methods (low-pass vs. general non-linear), different branching ratios (2 vs. the presented 4) and different non-linearities; albeit not on all datasets but only on the IMDB datasets. The architecture presented in the paper provided the best results. Additionally, [12] ran this very classification experiment with geometric wavelets (based on a different operator and different functional calculus filters), as can be read from the row entitled GS-SVM in Table 1 of our paper. The results of [12] were not better than ours. We believe that the presented architecture is at the apex of scattering transforms applied to classification objectives.\nWe believe that the much better performance of our scattering architecture (when compared to other leading approaches) on the regression task can be explained by the fact that inputs can heuristically be considered discrete in the classification setting (a node has a property or it does not; an edge exists or it does not), while inputs can heuristically be considered continuous in the regression setting (e.g. interatomic distances can be varied continuously). 
Scattering is particularly adapted to this continuous setting, as the results of ‘Section 4: Stability Results’ and Figure 5 illustrate.\n\n\n\n\n5a) \"There is too much material for a 9 page conference paper.\"\n\nWhile we do agree that our paper contains many new, and – we believe – interesting as well as widely applicable results, we would argue that we do not present too much material: it is our firm belief that a conference paper should be self-contained and should provide the reader with deep insight into the topic of the work as well as detailed explanations of any novel material.\nWe firmly believe that presenting experiments that display the superior numerical performance of non-wavelet filters, together with a general theory that also allows the application of scattering transforms to new domains within graph signal processing (inaccessible to standard wavelet-based scattering transforms), is the best way to convince the community to transcend the graph-wavelet setting and embrace our newly developed general scattering transforms.\nWe have made sure to only include the minimal necessary information to develop this topic, scrapping many additional results we would have ideally liked to present. We do, however, welcome further advice from the reviewers on how to streamline our paper even more, should they deem it necessary. Should we be given information on which parts seem irrelevant, we would be more than happy to follow such recommendations to scrap!\n\n\n\n\n\n\n5b) \"Important aspects of the paper are delegated to the appendix, and there is not enough room for the authors to give the necessary treatment for background knowledge and definitions. \"\n\nWe are sorry to hear that the reviewer feels this way.\nTo ease reading and facilitate the uptake of our ideas by the community, we took great care to organize the material in a manner most accessible to the reader. In fact, our goal is to keep the main aspects and novel conceptual ideas within the main body of the paper and to outsource, for instance, the proofs, which are not key for understanding the novel ideas, to the appendix. \nWe agree that we wrote a very comprehensive appendix, but this was mainly done for the sake of completeness, in the interest of being self-contained, and to aid readers who are not completely familiar with the subject matter.\nWe would like to stress that it is by no means necessary to work through the entire appendix to appreciate the main points of the paper, and we sincerely hope that we did not give this impression to the reviewer.\nShould the reviewer still feel that important aspects are missing from the main body of the work, we would be curious to know to which aspects she or he is referring. We would be happy to do our best to transport them from the appendix to the main body of the paper.\n\n", " We immensely thank the reviewer for the careful read of our paper. We are especially happy that in her or his opinion the novelty/originality of the paper definitely meets the bar for publication at NeurIPS. We were also delighted to read that in her or his opinion the topic of this paper is of sufficient significance for NeurIPS. \n\nWe are also very grateful for the detailed comments and advice we received, which we have followed, as we detail below point for point:\n\n\n\n1) \"The authors have the style of defining things in the broadest, most abstract and general version first,\"\n\nWe thank the reviewer for this critical observation. 
Following this comment, we now present a discussion of the operators we are utilizing in numerical experiments, together with a discussion of why we elected to utilize precisely these operators, already in ‘Section 2: Graph Signal Processing’. \nThe topic is picked up again in ‘Section 3: The Generalized Graph Scattering Transform’:\nIn this section we now also already describe the filters that we utilize in our numerical experiments and how they harmonize well with our choices of normal operators. \nThis section now also describes the scattering transforms corresponding to these filter- and operator choices, so that readers have handy examples at hand while reading through the theoretical results and have ample time to familiarize themselves with the specific architectures we utilize in computations, before encountering the corresponding numerical results.\nAdditionally, in ‘Section 4: Stability Results’, we pick these example architectures up again and explain after each theorem how the requirements of the theorem are fulfilled by our two numerically tested architectures.\n\n\n2) \"and then in the experimental section just make some very specific choices in their model that conform to their general theoretical results, but without justifying those experimental choices at all. \"\n\nWe have now explained our parameter choices much earlier and far more clearly: \n‘Section 2: Graph Signal Processing’ now includes a discussion of why we pick the normal operators that we utilize for our numerical investigations. In short, the reason is that the spectrum of these operators is contained in the interval [0,1], with the values 0 and 1 being attained. This control over the spectrum aids greatly in selecting filters.\nIn particular, as ‘Section 3: The Generalized Graph Scattering Transform’ now explains in more detail, our experimentally utilized filters essentially provide high- and low-pass filters on this spectrum contained in [0,1]. Thus input signals are dissected according to their high- and low-frequency components.\nConnecting operators and non-linearities are chosen in the standard way to facilitate comparability with existing wavelet-scattering architectures.\nThe depth of our generalized scattering transform, as well as the parameter ‘p’ for the non-linear graph-level aggregation method, are chosen to avoid overfitting and keep computation time palatable. \nThe split size for cross-validation was chosen to follow the standards on the respective datasets.\n\n\n3) \"I understand that this is a theoretical paper, but I think having one or two tables of pseudocodes on particular instantiations of your architecture, and providing more justification for why certain parameter or modeling choices are made (such as cross-validation etc) would help the users understand and adopt their method much more readily. \"\n\nAs discussed in our point above, we completely agree with this comment and have now diligently provided justifications and heuristics for our parameter choices. We have opted for a graphical representation of our specific architectures (cf. Fig. 2) utilized in experiments, and have included this graphical representation, together with a description of these architectures, already in ‘Section 3: The Generalized Graph Scattering Transform’.\n\n\n", " Having explained how we implemented the received feedback, we now answer the raised questions:\n\n1) Is the higher-order architecture used in the regression experiment? 
\n\nAs detailed above, we have now highlighted that second-order scattering is utilized in our regression experiment in our revised manuscript at various points in the paper.\nWe also rewrote ‘Section 6: Higher order scattering’ to focus solely on second-order scattering, as this is the architecture we analyse experimentally with our regression experiment.\n The discussion of scattering transforms applied to input beyond binary relations is now deferred to the appendix, which -- we hope -- also aids clarity and flow of the paper.\n\n\n2) Is the constant \" 2 \" in definition 4.3 arbitrary ?\n\nAny constant larger than 1 would work as well (see Chapter IV of [30] for a very detailed discussion). We have chosen 2 for simplicity. \nFollowing this question, our revised Manuscript, now contains a comment addressing this point, to eliminate any confusion.\n\n3) In theorem 4.2 (and related results), could the proximity of the normal operators be expressed in terms of spectral norm rather than Frobenius?\n\nUnfortunately, this is impossible without making the constant in the inequality of Theorem 4.2 dependent on the cardinality of the vertex sets of the utilized graphs (for more details see e.g. [B] or [38]).\nIf a cardinality-dependent constant is acceptable in an application, one might use the inequality \n$||\\cdot||_F \\leq ||\\cdot||_s \\leq \\sqrt{| G|} \\cdot ||\\cdot||_F$ to facilitate contact between the two norms (with $|G|$ the corresponding vertex set cardinality and $||\\cdot||_s$ denoting spectral norm).\n\nReferences:\n\n[A] Rethinking pooling in graph neural networks, Proceedings of the 34th International Conference on Neural Information Processing Systems, December 2020 Article No.: 187Pages 2220–2231\n\n[1] Uri Alon and Eran Yahav. On the bottleneck of graph neural networks and its practical\n implications. In International Conference on Learning Representations, 2021.\n\n[37] Jake Topping, Francesco Di Giovanni, Benjamin Paul Chamberlain, Xiaowen Dong, and\n Michael M. Bronstein. Understanding over-squashing and bottlenecks on graphs via curvature,\n 2021\n\n[10] Fernando Gama, Alejandro Ribeiro, and Joan Bruna. Diffusion scattering transforms on graphs.\n In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA,\n USA, May 6-9, 2019. OpenReview.net, 2019.\n\n[30] Olaf. Post. Spectral Analysis on Graph-like Spaces / by Olaf Post. Lecture Notes inMathematics,\n 2039. Springer Berlin Heidelberg, Berlin, Heidelberg, 1st ed. 2012. edition, 2012.\n\n[B] Operator Lipschitz functions, Aleksandrov, Alexei and Peller, Vladimir, \nRussian Mathematical Surveys, Volume 71, Number 4, https://arxiv.org/abs/1611.01593 \n\n[38] Wihler T.P. On the Hölder continuity of matrix functions for normal matrices. Journal of\n inequalities in pure and applied mathematics, 10(4), Dec 2009.\n\n", " [[continuation]]\n\nAs far as changing input graphs are concerned, this is the main focus of our numerical experiments.\n\nBeyond that, we did indeed not change graphs as we progress through the layers of our scattering transform in experiments. Changing graphs within an architecture in the sense of graph pooling – while for a time being a persistent approach in the community-- has been shown to be not particularly helpful in many settings [A], while decoupling the input graph from the graph on which message-passing/graph-convolution is performed is a fairly new idea that currently continues to be investigated [1, 37]. 
While we wanted to state that the scattering setting can indeed accommodate such approaches as well, we thought it best not to widen our focus too much and to disentangle the effects of going beyond the wavelet setting from any added benefits of performing convolutions on graphs other than the input graph. We do, however, plan to investigate this in future work. \nConsidering flexibility in the choices of operators, we would like to point out that while our described theory provides a framework incorporating all previously investigated scattering transforms (based on various choices of graph shift operators), the shift operators we utilize are different from all previously utilized ones; this showcases that the previous reliance on specific choices of Laplacians (e.g. related to diffusion processes [10]) is not necessary.\nBoth newly introduced methods of feature aggregation are tested experimentally; one in the classification experiment and one in the regression experiment. We have now made this point clearer by highlighting which aggregation method is utilized in the respective experiments.\nWe believe that introducing more variation in the experimentally utilized architectures would yield diminishing returns as it pertains to the readability of the paper. \nHowever, should the reviewer disagree, we would of course be happy to oblige any further requests!\n\n\n5) \"the theorems are valid under many assumptions, but a minimal examples satisfying all of them is not given\nThe approach is interesting, but I would suggest either to […] [focus] far more the description of the architecture on graphs by giving minimal examples satifying all the formulated hypothesis along the way.\"\n\nWe thank the reviewer for this great idea. In fact, we have chosen the experimental setup for our classification experiment precisely so that it provides a minimal example satisfying the conditions for (almost) all stated theoretical results. We have now made this a lot clearer, explaining after each theorem in ‘Section 4: Stability Results’ how the requirements of the respective theorems are fulfilled by our two numerically tested architectures.\n\n\n", " We would like to thank the reviewer for the careful review and the poignant comments on our paper. We were especially happy to read that the approach was considered interesting, that the experiments (especially for regression) were able to convince the reviewer, and that the work that went into presenting the mathematical background as well as detailed explanations in the Appendix/Supplementary material was appreciated by her or him.\n\n\nWe followed the advice that was given to us and incorporated the suggested changes in our revised manuscript, as detailed point by point below:\n\n1) \"The authors define very abstract operators and elements, […], the actual choice of the filters, […] is quite hidden within the experiment section […]\"\n\nWe thank the reviewer for this important feedback. \nA discussion of the operators we are utilizing, together with a discussion of why we elected to utilize precisely these operators, is now already presented in ‘Section 2: Graph Signal Processing’. \nThe topic is picked up again in ‘Section 3: The Generalized Graph Scattering Transform’:\nIn this section we now also already describe the filters that we utilize in our numerical experiments and how they harmonize well with our choices of normal operators (a toy illustration of such functional-calculus filtering is sketched below). 
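(The toy illustration promised above. The specific filter functions below, sines and cosines on the spectrum, are our own illustrative stand-ins inspired by the discussion in the reviews; the paper's actual operator and filter choices are described in its Sections 2 and 3. Note that g_low(lam)^2 + g_high(lam)^2 = 1 on [0,1], so this toy pair satisfies a tight (Parseval) frame condition of the kind appearing in the stability results.)

```python
import numpy as np

def apply_filter(T, g):
    # functional calculus: apply the scalar function g to the spectrum
    # of a symmetric (hence normal) operator T
    vals, vecs = np.linalg.eigh(T)
    return vecs @ np.diag(g(vals)) @ vecs.T

# toy normal operator with spectrum in [0, 1]: a rescaled normalized
# Laplacian of the triangle graph (our own illustrative choice)
W = np.ones((3, 3)) - np.eye(3)
deg = W.sum(axis=1)
A = np.diag(deg ** -0.5) @ W @ np.diag(deg ** -0.5)
T = (np.eye(3) - A) / 2.0                      # spectrum contained in [0, 1]

g_low = lambda lam: np.cos(np.pi * lam / 2)    # keeps the smooth (lam ~ 0) end
g_high = lambda lam: np.sin(np.pi * lam / 2)   # keeps the oscillatory end

x = np.random.randn(3)                          # node-level input signal
u = np.abs(apply_filter(T, g_high) @ x)         # one scattering step: filter, then | . |
```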
\nThis section now also describes the scattering transforms corresponding to these filter- and operator choices, so that readers have handy examples at hand while reading through the theoretical results and have ample time to familiarize themselves with the specific architectures we utilize in computations, before encountering the corresponding numerical results.\n\n2) \"[…] [the choice of filters] may seem a tad arbitrary. […]\"\n\nTogether with our newly written discussion of filter choices in ‘Section 3: The Generalized Graph Scattering Transform’, we have included a discussion of why we chose precisely these filters. In short, two of these filters provide high- and low-pass filters on the spectrum of the operators we are utilizing, while the other two are spectral refinements of the former two.\n\n3) \"Examples of implementation on graphs along the abstract description could really help the understanding of the approach.\"\n\nWe thank the reviewer for this observation! We now introduce our choices of parameters, functions, operators, etc. utilized in experiments already in ‘Section 3: The Generalized Graph Scattering Transform’ as particular examples of the general theory. Theoretical results are then illuminated using this example as we progress through the theoretical sections up to the experimental results.\n\n4) \"many variants are described but, it seems, not tested in experiments (changing graphs, higher-order tensors...)\"\n\nWe thank the reviewer for this feedback; however, we have to slightly and respectfully disagree:\nThe higher-order architecture is in fact heavily utilized in the regression experiment. To draw attention to this fact, we had already written ‘In order to showcase the prowess of both our higher order scattering scheme and […] we combine these building blocks into a hybrid architecture’ and subsequently described the higher-order scattering architecture, for which, as written in our paper, ‘we consider a Coulomb matrix as an edge-level input signal on a given graph’.\nThe generated features, corresponding to second order (i.e. edge-level information/binary relations between nodes), were then ‘combined with node level scattering features (based on atomic charge) into composite feature vectors; plotted in Figure 4.’ \nTo make this point even clearer, we now already mention in ‘Section 6: Higher order scattering’ that we test second-order scattering experimentally in our regression experiment.\nBeyond that, we would be remiss not to mention that the supplementary material submitted with our original manuscript contains a section comparing the results of regression on first- and second-order scattering vectors (as described in Section 7: Experimental Results; Regression of Quantum Chemical Energies) against regression on scattering vectors obtained solely from first-order scattering. We obtain the result that the inclusion of second-order scattering vectors significantly improves performance.\nFollowing the received feedback, we have now included the results from solely utilizing first-order scattering features in our Table 2.\nWe now also discuss the effect of including higher-order scattering feature vectors in the main text of ‘Section 7: Experimental Results’.\n\n[[continued below]]\n\n", " We thank the reviewer for her or his careful evaluation, appreciation of the paper, and kind comments. 
We were very happy to read that the paper was considered to be well organized and that theoretical- as well as experimental results were thought to be interesting.\n\nLet us address the raised questions and given advice individually:\n\n1) \"The authors present several upper bounds in Section 4 for stability guarantees. However, it is unknown the optimality of these bounds.\"\n\nThe reviewer raised the question of the optimality of bounds in Section 4: \nTo obtain these bounds, we have developed a proof-framework that combines the triangle inequality with the Cauchy Young inequality and the generalized frame condition in various disguises. \nThis allowed us to achieve significantly better and more general bounds than previous works focusing on graph scattering ( e.g. [11]) and recover bounds of the Euclidean setting (see e.g. [40]).\nGiven the desired generality (in terms of utilized filters, connecting operators, non-linearity, graph-sizes,…) in the statement of the respective inequalities; we are unaware of approaches that lead to even better bounds. However, since we cannot rule out their existence, we have added a comment, stating that the derived bounds are not necessarily optimal at the beginning of Section 4.\n\n2) \"More background and details are needed for the section on higher order scattering.\"\n\nFollowing this feedback, we have significantly expanded the section titled ‘Details on Higher Order Scattering’; i.e. Appendix J. It now includes a recap of the notion of higher order (tensorial) input, a precise and very detailed formulation of higher order scattering transforms and a discussion of the corresponding feature aggregation map. \nWe have also reduced the scope of ‘Section 6: Higher Order Scattering’ which now focuses solely on the familiar case of edge-inputs. We made this choice since edge inputs (i.e. binary relations or equivalently 2-tensors) constitute the higher-order input that is actually utilized in our regression experiment in ‘Section 7: Experimental Results’.\nWe also believe that focusing on the familiar Edge level setting first and deferring a full discussion of higher order scattering to the appendix, helps build intuition first and prevents the reader from being overwhelmed from the theory of higher order inputs in full generality before developing an appreciation of the topic.\n\n\nReferences:\n\n[11] Fernando Gama, Alejandro Ribeiro, and Joan Bruna. Stability of graph scattering transforms. In\n Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox,\n and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual\n Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14,\n 2019, Vancouver, BC, Canada, pages 8036–8046, 2019.\n\n[40] Thomas Wiatowski and Helmut Bölcskei. A mathematical theory of deep convolutional neural\n networks for feature extraction. IEEE Transactions on Information Theory, 64:1845–1866,\n 2018.\n", " In this paper, the authors focus on the design and analysis of graph scattering networks with variable branching ratios and generic functional calculus filters. Spectrally-agnostic stability guarantees for node- and graph-level perturbations are established.\n\n Strengths: \nThis paper is well-organized. The theoretical results on stability guarantees and the experimental results on quantum chemical energies are interesting.\n\nWeaknesses:\n1. The authors present several upper bounds in Section 4 for stability guarantees. 
However, the optimality of these bounds is unknown.\n\n2. The section on higher order scattering requires more background knowledge for readers to follow. 1. The authors should add comments on the optimality of the upper bounds established in Section 4.\n\n2. More background and details are needed for the section on higher order scattering. Yes.", " The paper proposes a generalization of graph scattering networks that goes beyond the graph wavelets setting. The authors provide stability guarantees for their generalized scattering transform, as well as layer-wise energy decay bounds. The authors propose a simple feature aggregation method to transform graphs into Euclidean space and briefly discuss taking higher-order scattering into account. The authors conducted experiments of their methods on a graph classification task and a graph regression task. - Originality: the idea, methods etc are original. The novelty/originality of the paper definitely meets the bar for publication at NeurIPS.\n- Quality: the background and tools used are somewhat technical and motivated quite abstractly. The mathematics/derivations, even though I didn't check proofs line by line, are, to the best of my knowledge, sound. \n- Clarity: the organization and exposition of the article leave much room for improvement. Details are given below. \n- Significance: Nowadays it is certainly interesting and significant to consider problems involving GCNs and graph networks. The topic of this paper is of sufficient significance to meet the bar for publication at NeurIPS. \n\nOverall, I think the originality, significance and mathematical soundness of the paper are fine, whereas the clarity of the paper leaves room for improvement, which I will detail in sections below. Questions on the quality front: \n\n1. The Lipschitz-type bounds in theorems 4.1 and 4.2 appear to be just iterative applications of a layer-wise Lipschitz-type condition. While this is certainly a valid bound, the constants in the upper bound involve a product of N terms/terms raised to the Nth power (this exponential term also appears in theorem 4.5). I would argue that on a practical level, these exponential terms render the bounds rather impractical, unless matching lower bounds/some sharpness result can be shown. In particular, say N is 10. Then if I perturb say the input by some small constant, the output could have an order C^10 change to it, which can be astronomical. \n\n2. In the expressivity/energy section, the analysis was conducted in the limit of infinite network depth. The idea is quite neat, but it raises several important questions that the authors have to address:\n\n- the authors make the assumption that one can always choose an eigenvector of strictly positive entries. This of course follows from results in spectral graph theory. However, for the connected graph case, which I argue is probably the most important case, it is my understanding that the only eigenvector that satisfies this will be the eigenvector corresponding to the smallest eigenvalue, in which case the eigenvector has constant entries (they are all the same), and the corresponding eigenvalue is just 0, in which case the notion of defining m_n as the minimum and lambda_n etc becomes somewhat redundant. On the other hand, if the graph is disconnected, then the eigenvector that you pick will, to the best of my knowledge, have positive entries in one component and 0's in some other component, which would violate the strictly positive entry assumption. 
In either case, I think some change/clarification has to be made here. \n\n- While I think the energy bounds are interesting, it is unclear to me how useful/related this is as a link to the expressivity of the network. The fact that the mapping only maps 0 to 0, when N goes to infinity, seems like a property that is only marginally related to expressivity in some bare-minimum way. One can probably come up with some kind of invertible linear-type transformation that also only maps 0 to 0. This property to me at first sight seems to just mean that the mapping is not contracting, but going from \"no contraction\" to \"expressivity\" seems a bit of a stretch to me. I think the authors could modify the wording of their conclusion here. \n\nIn terms of the clarity of exposition, I think the general ML audience will find this paper difficult to read and apply, for the following reasons: \n\n1. There is too much material for a 9 page conference paper. Important aspects of the paper are delegated to the appendix, and there is not enough room for the authors to give the necessary treatment for background knowledge and definitions. As a result, only those that already have very substantial backgrounds in graph wavelets/graph networks and spectral graph theory will be able to understand it. \n\n2. The authors have the style of defining things in the broadest, most abstract and general version first, and then in the experimental section just make some very specific choices in their model that conform to their general theoretical results, but without justifying those experimental choices at all. I understand that this is a theoretical paper, but I think having one or two tables of pseudocodes on particular instantiations of your architecture, and providing more justification for why certain parameter or modeling choices are made (such as cross-validation etc) would help the users understand and adopt their method much more readily. In particular, the experimental results for the regression application are great, but for the classification are not very good. I wonder if the performance on the classification task could be improved with an alternative instantiation of their model, such as using other functions than sines and cosines, or changing the layer parameter, or using a different operator than the Laplacian etc. \n\n Main limitations are outlined in the section above. \nNo negative societal impact. \n", " In this paper, the authors define a very general notion of scattering transform, with an application on graphs. They (re)define each building block of the scattering transform: spectral filters, output (low-pass) functions, non-linearities, etc., as well as proper projection operators when the domain changes between layers. Under appropriate assumptions of Lipschitzness of each of these elements, they show stability of the resulting transform, as well as energy preservation, generalizing the classical Euclidean results. Some variants on graphs are presented: an aggregation strategy for graph classification and higher-order scattering. Experiments on real data show the effectiveness of the approach. Strengths:\n\n- a general approach, that can take into account many variants, domain changes, etc.\n- all classical theoretical results on scattering hold\n- a very complete supplementary material\n- the experiments are convincing, especially for graph regression\n\nWeaknesses:\n\n- a bit paradoxically, the approach suffers from too much generality. 
The authors define very abstract operators and elements, and in fact nothing in particular is about graphs at all, until the discussions of sections 6 and 7, which are not the core of the approach. Furthermore, the actual choice of the filters, some combination of sin and cos, is quite hidden within the experiment section, and may seem a tad arbitrary. As a result, the reader is somewhat left wondering all along the paper what the actual architecture is, if this is just an abstract formulation of previous architectures or if there is something fundamentally new here. Examples of implementation on graphs along the abstract description could really help the understanding of the approach.\n- many variants are described but, it seems, not tested in experiments (changing graphs, higher-order tensors...)\n- the theorems are valid under many assumptions, but a minimal example satisfying all of them is not given - The approach is interesting, but I would suggest either to reformulate the title and/or abstract to be less specialized on graphs, or on the contrary focusing the description of the architecture far more on graphs by giving minimal examples satisfying all the formulated hypotheses along the way.\n\n- is the higher-order architecture used in the regression experiment? It is not clear.\n\n- is the constant \"$2$\" in definition 4.3 arbitrary?\n\n- in theorem 4.2 (and related results), could the proximity of the normal operators be expressed in terms of spectral norm rather than Frobenius? This is more often satisfied (e.g. by random graphs, or spectral graph coarsening, etc.) The authors are quite honest about the limitations in their first experiment, where they are often not state-of-the-art." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "xlMie1xPK3H", "sDyns08qrUA", "my1BXEbD3Dl", "Q0E4qKk6V6d", "1ruWgbUT4SU", "QcRPprNnWKK", "OM4UnIjUQIM", "KVs3xoipRh9", "k6Mqa_nwFS-K", "7AAdlmclehi", "m0PxVrT73Ok", "nips_2022_ptUZl8xDMMN", "nips_2022_ptUZl8xDMMN", "nips_2022_ptUZl8xDMMN" ]
nips_2022__r8pCrHwq39
PointTAD: Multi-Label Temporal Action Detection with Learnable Query Points
Traditional temporal action detection (TAD) usually handles untrimmed videos with small number of action instances from a single label (e.g., ActivityNet, THUMOS). However, this setting might be unrealistic as different classes of actions often co-occur in practice. In this paper, we focus on the task of multi-label temporal action detection that aims to localize all action instances from a multi-label untrimmed video. Multi-label TAD is more challenging as it requires for fine-grained class discrimination within a single video and precise localization of the co-occurring instances. To mitigate this issue, we extend the sparse query-based detection paradigm from the traditional TAD and propose the multi-label TAD framework of PointTAD. Specifically, our PointTAD introduces a small set of learnable query points to represent the important frames of each action instance. This point-based representation provides a flexible mechanism to localize the discriminative frames at boundaries and as well the important frames inside the action. Moreover, we perform the action decoding process with the Multi-level Interactive Module to capture both point-level and instance-level action semantics. Finally, our PointTAD employs an end-to-end trainable framework simply based on RGB input for easy deployment. We evaluate our proposed method on two popular benchmarks and introduce the new metric of detection-mAP for multi-label TAD. Our model outperforms all previous methods by a large margin under the detection-mAP metric, and also achieves promising results under the segmentation-mAP metric.
Accept
This paper considers the problem of detecting temporal activities in videos which contain multiple co-occurring activities of different labels. It is an important problem that arises in many computer vision tasks. The paper is generally well written. Specifically, using learnable query points to select representative frames for segment-level video representation seems to be a novel idea. The experiment results also show promises of the proposed method. Nevertheless, a number of comments and questions were raised by the reviewers. We thank the authors for responding to them in detail and even revising their paper accordingly, which includes providing more experiment results to support their claims. The authors are recommended to further revise their paper by addressing the remaining comments raised.
train
[ "GwFlpRTtCAa", "sLxxsjWiggZ", "95r86oB2TS", "BDByOkCCLZq", "qSRhg8oT7Wj", "g9IpSWVoX7", "SNZ34LpXgU8", "e-pmQrvBxaJ", "NjDMGGS43Kx", "sT52aQ_t4QM", "ki1tlv5P7l", "BzOSwbiteML", "qwDbSbC0buz", "Zf9ih2zCrer", "o8vOZSqT5D" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > What is the difference between the method proposed in this paper and [2,4], which the author does not seem to mention?\n\nAs we stated in Line 93 of the revised paper and in the first response, [2] and [4] all use points to represent object tracks or spatiotemporal action tracks, with a focus on representing the **spatial** location of objects or actions and **do not adopt keyframes in temporal aspect**. In contract, PointTAD represents action as a set of **temporal** points (keyframes) and **does not directly interact with the spatial content of video frames** (the spatial resolution is compressed into 1024 channels by backbone network before PointTAD head). \n\n> Although these two methods [2, 4] do not deal with the same task, they should be related to the method proposed in this paper, and in my opinion, they can be compared. Moreover, the author said in the newly added related work that the proposed method was improved based on RepPoints, but the author did not compare it with RepPoints in the experimental part including Table 1.\n\nAs stated above, [2] and [4] are very different from PointTAD for they use spatial centerpoints to represent objects/humans and we use temporal keyframes to represent actions. \n\nThe reason that we cannot directly compare these methods with PointTAD is that : (a) [2] tackles tracking and [4] tackles spatiotemporal action detection, none of the two methods generates temporal action proposals; (b) [2] and [4] both require spatial bounding box supervision for training and TAD benchmarks do not have these annotations, therefore it's not feasible to even re-implement these models on TAD benchmarks.\n\nRepPoints tackles object detection, which is also a very different task from TAD. We follow the idea of representative point representation from RepPoints and adapt it in TAD to address the non-uniform temporal structure in videos. Sorry for the confusion from the wording of \"improved over\", we have revised this word choice in the paper. \n\n", " >Q.1 The idea of using points to represent keyframes or objects is not very new. Discussion of related research is suggested to add into related work.\n\nWhat is the difference between the method proposed in this paper and [2,4], which the author does not seem to mention? \n\n[2]Zhou, Xingyi, Vladlen Koltun, and Philipp Krähenbühl. \"Tracking objects as points.\" In European Conference on Computer Vision, pp. 474-490. Springer, Cham, 2020.\n[4]Li, Yixuan, Zixu Wang, Limin Wang, and Gangshan Wu. \"Actions as moving points.\" In European Conference on Computer Vision, pp. 68-84. Springer, Cham, 2020.\n\n>Q.3 Related methods such as [1,2,3,4] should be compared and discussed in experiments.\n\nAlthough these two methods [2, 4] do not deal with the same task, they should be related to the method proposed in this paper, and in my opinion, they can be compared. Moreover, the author said in the newly added related work that the proposed method was improved based on RepPoints, but the author did not compare it with RepPoints in the experimental part including Table 1.\n\n``Since the author did not address my concerns well, I keep the original score``", " Dear Reviewer tdcx: \n\nThank you again for the constructive suggestions on our paper. There’s less than 24 hours till the discussion deadline, we would like to know if there’s any unresolved questions that we can help with. 
Please feel free to comment and we would try our best to address your concerns :) \n\nHave a nice day, \nAuthors of Paper 1881", " Dear Reviewer ZWpN: \n\nThank you again for your thoughtful feedback on our paper. There’s less than 24 hours till the discussion deadline, we would like to know if there’s any unresolved questions that we can help with. Please feel free to comment and we would try our best to address your concerns :) \n\nHave a nice day, \nAuthors of Paper 1881 ", " Dear all reviewers: \n\nThanks for your suggestions on our paper. As the reviewer-author discussion deadline is approaching, we would like to know whether our reply has addressed your concerns. If you have any questions, please feel free to let us know, and we will try our best to address your concerns. \n\nBest, \nAuthors of Paper 1881", " We sincerely appreciate all reviewers' efforts in reviewing our paper and giving insightful comments as well as valuable suggestions. We are glad to find that the reviewers generally acknowledge the following novelty and contributions of our work.\n* **Framework.** Using learnable query points to select representative frames for instance-level action representation is novel [tdcx,QSXY,eH4Z] and is more effective over uniform sampling or temporal RoI Align [QSXY]. We hope our work will inspire general video understanding [QSXY] to opt for the more effective temporal action detection with point representation. \n* **Experiments.** Experiments show improved detection-mAP performance on the two popular multi-label TAD benchmarks [tdcx,QSXY,eH4Z]. The improvement of query point representation, point-level locality preservation and instance-level parallel mixing are supported by ablations [tdcx,QSXY,eH4Z]. \n\nAs suggested by the reviewers, we include the following contents in the revised manuscript to further strengthen our paper. The major revision is summarized as follows. Our detailed responses can be found in each response section to the reviewers. \n* **Extended experiments** including evaluation on THUMOS14, ablation on result fusion parameter $\\beta$ and offset scaling factor $s$, comparison to DETR-based baseline are added to the revised appendix [tdcx, QSXY].\n* **Relation to point-based representation literature.** We have added an independent subsection in the revised related work to discuss our differences with keyframe selection literature and point-based detectors [ZWpN, tdcx].\n* **Updates to comparison table.** In the revised paper, we have updated the detection-mAP of segmentation methods [tdcx], included more methods with optical flow input [tdcx] and added detection-mAP result without NoHuman class for MultiTHUMOS results [QSXY]. \n* **Clarifications on equations and statements.** We have clarified all the ambiguities mentioned by the reviewers in the revised manuscript [tdcx,QSXY,eH4Z].", " We thank the reviewer for the feedback. Below is our response to the comment.\n\n**Q.1** *The idea of using points to represent keyframes or objects is not very new. Discussion of related research is suggested to add into related work.* \n**R.1** Thanks for your comment. Your mentioned works are different from our method in many aspects. Our PointTAD tackles multi-label temporal action detection by treating action as a set of temporal points (keyframes), while these mentioned papers all deal with different problems other than TAD and with different techniques. 
\n[1] uses local and spatial keypoints (SIFT) to extract frame-level features and proposes a greedy algorithm to choose keyframes. [3] selects keyframes based on low-level features and generate video feature for gesture recognition in a bottom-up manner. Instead, our PointTAD presents a top-down method to direct regress the temporal location of keyframes. [2] and [4] all use points to represent object tracks or action tracks, with a focus on representing the spatial location of objects or actions. We have added a subsection in related work to discuss our work with these point-based representations, please check our revised paper. \n\n> [1]Guan, Genliang, Zhiyong Wang, Shiyang Lu, Jeremiah Da Deng, and David Dagan Feng. \"Keypoint-based keyframe selection.\" IEEE Transactions on circuits and systems for video technology 23, no. 4 (2012): 729-734. \n> [2]Zhou, Xingyi, Vladlen Koltun, and Philipp Krähenbühl. \"Tracking objects as points.\" In European Conference on Computer Vision, pp. 474-490. Springer, Cham, 2020. \n> [3]Tang, Hao, Hong Liu, Wei Xiao, and Nicu Sebe. \"Fast and robust dynamic hand gesture recognition via key frames extraction and feature fusion.\" Neurocomputing 331 (2019): 424-433. \n> [4]Li, Yixuan, Zixu Wang, Limin Wang, and Gangshan Wu. \"Actions as moving points.\" In European Conference on Computer Vision, pp. 68-84. Springer, Cham, 2020.\n\n\n\n**Q.2** *No significant improvement over SOTA.* \n**R.2** Thanks for your comment. The main contribution of this paper is to focus on a new setting (multi-label TAD) in temporal action detection and also to introduce a new metric (detection-mAP) for this challenging setting. This contribution is acknowledged by the other reviewers. In Table 1, we show that PointTAD achieves the state-of-the-art performance under detection-mAP by large margin (also recognized by Reviewer tdcx, QSXY and eH4Z). In the rebuttal, we further add other experiments to illustrate the effectiveness of our PointTAD over DETR-alike baselines and on the standard TAD benchmark of THUMOS14.\n\n\n**Q.3** *Related methods such as [1,2,3,4] should be compared and discussed in experiments.* \n**R.3** Thanks for your comment. As we discussed in Q1, [1,2,3,4] tackle very different tasks from TAD and none of them generates temporal action proposals for evaluation. As much as we would love to, it is not feasible to directly compare these methods in experiments.", " ### Other Weakness\n**Q.5** *Relation with RepPoints and deformable DETR is not explicitly described.* \n**R.5** We have added an independent subsection to related work to discuss our relations with point-based detectors, please check the revised paper. The relation to deformable DETR is also added in Line 56 of the revised paper.\n\n\n**Q.6** *Equations / Statements need clarification.* \n**R.6** a) The $N_s$ query point offsets are predicted by Linear layer (input dimension is D, output dimension is $N_s$) from query vectors. We have clarified this in section 3.2 of the revised paper.\n\nb) **q** refers to the $N_q$ query vectors. We have revised section 3.3 for notation consistency in the revised version, thanks for pointing it out :)\n\nc) The linear layer in Eq (3)(4)(6)(7) are all implemented with fully connected layers. The input and output dimension of linear layers in Eq (3)(4)(6)(7) are reported in the table below. 
We also clarified the input and output dimensions in the revised paper.\n\n| Linear | Eq (3) | Eq (4) | Eq (6) | Eq (7) |\n|------------|--------|--------|--------|--------|\n| Input Dim | D | D | D | D |\n| Output Dim | 4 | $N_s$ | D′ | D′ |\n\n\n### Suggestions\n**Q.7** *Motivation behind the number of deformable sub-points.* \n**R.7** We set this hyperparameter to 4 according to the number of sampling points in TadTR [21] for temporal deformable attention. We have tried 2, as the temporal coordinate has only two directions, but this setting achieves slightly weaker performance: avg-mAP = 21.4\\% on MultiTHUMOS.\n\n**Q.8** *Missing some state-of-the-art methods in comparison table.* \n**R.8** We have added more methods to the comparison table and included methods with Optical Flow input; please check Table 1 in the revised paper. \n\n**Q.9** *How to obtain the segments in Table 2a? Is it from the query points (partial min-max)?* \n**R.9** The segments are NOT pseudo segments converted from query points. In fact, this segment-based baseline is similar to Sparse R-CNN (but with parallel mixing for ablation purposes), where actions are represented as segments by paired start-end positions.\n\n**Q.10** *How to choose the hyperparameters?* \n**R.10** We determine most of the hyper-parameters, such as loss weighting, by empirical results. Some parameters, such as the number of deformable sub-points, the input temporal resolution and input frames for each sample, are decided based on the experience from previous works ([20], [21]).", " We thank the reviewer for the detailed comments and constructive suggestions for improvement. Our response to the reviewer's comments is as follows.\n\n\n### Questions for Rebuttal\n**Q.1** *How was the threshold chosen for the post-processing of frame-based methods’ results in Table 1?* \n**R.1** Thanks for your comment. We set the threshold to 0.5 to produce binary predictions according to MLAD [32] and MS-TCT [7]. As the reviewer points out, this post-processing could be sensitive to thresholds, so we also experiment with different thresholds on the previous detection-mAP SOTA PDAN [8] during rebuttal. It turns out the threshold indeed affects the performance. We have updated the detection-mAP of frame-based methods under the optimal threshold in the revised paper. Nevertheless, PointTAD still surpasses the best detection-mAP among all thresholds (MultiTHUMOS: **21.5** vs 17.3; Charades: **11.1** vs 8.5), consistently demonstrating the effectiveness of our model. \n\n| Threshold | 0.1 | 0.3 | 0.5 | 0.7 | 0.9 |\n|-------------|------|------|------|------|------|\n| MultiTHUMOS | 15.6 | 17.3 | 17.1 | 15.3 | 11.8 |\n| Charades | 8.5 | 6.8 | 5.0 | 3.2 | 1.4 |\n\n**Q.2** *Evaluation on single-label TAD datasets.* \n**R.2** Thanks for your comment. Following RTD (a query-based TAD method), we use the same feature representation and place our PointTAD head on top to build a direct TAD detector. Note that our TAD detector does not rely on the video-level classifier for action recognition and directly produces the action labels with our own PointTAD head. The result on the THUMOS14 dataset is reported in the table below. 
We obtain better performance on this single-label TAD dataset, demonstrating the generalization ability of PointTAD to various TAD datasets.\n| THUMOS14 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | avg |\n|------------|------|------|------|------|------|------|\n| RTD + UNet | 58.5 | 53.1 | 45.1 | 36.4 | 25.0 | 43.6 |\n| PointTAD | 62.6 | 55.9 | 46.2 | 35.3 | 22.8 | 44.6 |\n\n**Q.3** *Ablations for fusion parameter $\\beta$ and scaling parameter $s$.* \n**R.3** 1) $\\beta$: Combining sparse detection results with dense segmentation scores provides smoother frame-level scores for segmentation-mAP. We ablate with choices of $\\beta$ on both datasets in the table below. $\\beta$ is set to 0.2 for MultiTHUMOS and 0.96 for Charades based on empirical results.\n\n| $\\beta$ | 0 | 0.2 | 0.4 | 0.6 | 0.8 | 0.96 | 1 |\n|-------------|------|----------|------|------|------|----------|------|\n| MultiTHUMOS | 33.0 | **39.8** | 39.2 | 38.1 | 37.3 | 36.8 | 35.9 |\n| Charades | 13.8 | 14.3 | 15.1 | 16.6 | 19.2 | **21.0** | 18.7 |\n\n\n2) $s$: This scaling parameter is conventional in box object detectors (Faster R-CNN, Cascade R-CNN and Sparse R-CNN [30]), where regression offsets are scaled with respect to the box size instead of the image size. We extend this design to our point detector. In the table below, we compare the regression offsets predicted with respect to action duration (offset scaled by duration) and with respect to window size (offset without scaling) on MultiTHUMOS.\n\n| $s$ | 0.1 | 0.2 | 0.3 | Avg |\n|-----------------------|------|------|------|------|\n| scale window size | 38.8 | 36.4 | 32.6 | 21.0 |\n| scale action duration | 39.1 | 36.6 | 33.0 | 21.5 |\n\n\n**Q.4** *Comparison with DETR-based baseline.* \n**R.4** Thanks for your suggestion. In Table 2a of the submission, we have shown the comparison between PointTAD and a Sparse R-CNN-based baseline (segment-based variant), which proves the effectiveness of point representation. According to your suggestion, we have implemented another DETR-based baseline on the MultiTHUMOS dataset. The performance comparison is reported in the table below, and our PointTAD obtains better results thanks to our more flexible point-based representation.\n| Methods | 0.1 | 0.2 | 0.3 | Avg |\n|-----------------------------|------|------|------|------|\n| DETR-alike baseline | 28.2 | 26.1 | 23.6 | 15.5 |\n| Sparse R-CNN alike baseline | 35.4 | 33.1 | 30.0 | 19.4 |\n| PointTAD | 39.1 | 36.6 | 33.0 | 21.5 |\n", " Thank you for your positive feedback and constructive suggestions regarding our work. We address the comments below:\n\n**Q.1** *How does \"partial Min-Max\" select the subset query points?* \n**R.1** Sorry for the confusion. For each query, we randomly take $\\frac{2}{3}N_s$ query points from the point set of size $N_s$ to form $\\mathcal{P}_{local}$. We have added this detail in the revised paper in section 3.2.\n\n**Q.2** *What's the choice of the query numbers?* \n**R.2** Thanks for your comment. The query number $N_q$ is set to 48 for both benchmarks. We have added it to section 4.1 in the revised paper. \n\n**Q.3** *Eq (9) and Eq (10) look similar. Do you first do label assignment (Eq. 9) off the shelf and then optimize the network based on Eq. 10, or do you do them simultaneously?* \n**R.3** Sorry for the confusion. We made a typo in Eq (10): the $\\sigma(n)$ should be $\\sigma_*(n)$, as it indicates the desired permutation calculated by Eq (9); a generic sketch of this matching step is given below. 
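As a generic illustration, one-to-one label assignment of this kind is typically computed with the Hungarian algorithm over a query-to-ground-truth cost matrix. The sketch below uses simple DETR-style classification and L1 localization costs as a stand-in — it does not reproduce the exact cost terms of our Eq (9), and all function and variable names are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_queries_to_actions(cls_prob, pred_segments, gt_labels, gt_segments):
    # cls_prob: (N_q, C) class probabilities per query.
    # pred_segments: (N_q, 2) and gt_segments: (N, 2) normalized (start, end).
    # Classification cost: negative probability of each ground-truth class.
    cost_cls = -cls_prob[:, gt_labels]                                  # (N_q, N)
    # Localization cost: L1 distance between predicted and gt segments.
    cost_loc = np.abs(pred_segments[:, None, :] - gt_segments[None, :, :]).sum(-1)
    # Hungarian matching returns the cost-minimizing assignment (our sigma_*).
    query_idx, gt_idx = linear_sum_assignment(cost_cls + cost_loc)
    return query_idx, gt_idx  # matched (query, ground-truth) index pairs

# Toy usage: 4 queries, 2 ground-truth actions, 3 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=4)
preds = np.sort(rng.random((4, 2)), axis=1)
print(match_queries_to_actions(probs, preds, np.array([0, 2]),
                               np.array([[0.1, 0.3], [0.5, 0.9]])))
```

Queries left unmatched by this step are treated as background in the subsequent loss, as is standard in DETR-style training.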
The label assignment is done off the shelf at each iteration and the network is then optimized based on Eq (10). We have corrected this in the revised paper. \n\n**Q.4** *Detection-mAP without NoHuman class is suggested.* \n**R.4** Thanks for the advice. According to your suggestion, we add the results of detection-mAP without NoHuman class to Table 1 in the revised paper. \n\n**Q.5** *How is the inter-proposal level modeled?* \n**R.5** Thanks for your comment. Each action decoder includes the Multi-level Interactive Module and an MHSA for action queries (as illustrated in Fig.2 and Line 123-126 of the submission). Each query represents an action proposal and so the inter-proposal modeling is conducted via the MHSA. ", " We thank the reviewer for the positive and detailed feedback. Our response is summarized as follows:\n\n**Q.1** *Refine SOTA segmentation results with PointTAD sparse detections.* \n**R.1** The segmentation-mAP of fusing PointTAD predictions with MS-TCT segmentation results based on Eq (13) is 46.9\\% for MultiTHUMOS and 26.8\\% for Charades. \n\n**Q.2** *Do the $N_q$ action queries share the same set of query points?* \n**R.2** The learned embedding of query points are shared across samples, but different within the $N_q$ action queries.\n\n**Q.3** *Qualitative comparison with segment-based variant is suggested.* \n**R.3** We added the qualitative comparison with segment-based baseline to Fig. 4 of the revised paper, please check it out.\n\n**Q.4** *The wording and captions in introduction needs revision.* \n**R.4** We have revised the argument as well as the figure, please check the updated paper.", " This paper focuses on the complex multi-label temporal action detection that aims to localize all action instances from a multi-label untrimmed video. Existing query-based action detectors employ a segment to represent an action instance, which is insufficient to handle the concurrent instances and their richer relations. To mitigate this issue, this paper introduces a small set of learnable query points to represent important frames of each action instance. PointTAD provides a flexible mechanism to localize the discriminative frames at boundaries and as well the important frames inside the action. Strengths: This paper is well written and easy to understand. \n\nWeaknesses: \n1. The idea of using points to represent keyframes or objects is not very new, as it was already mentioned in [1,2,3,4]. The authors should add a subsection to the related work section summarizing the current related work and explaining the differences from these approaches.\n2. From Table 1, the method proposed in this paper does not have a significant performance improvement over the SOTA methods such as MS-TCT.\n3. More related methods such as [1,2,3,4] should be compared and discussed in the experimental part.\n\n[1]Guan, Genliang, Zhiyong Wang, Shiyang Lu, Jeremiah Da Deng, and David Dagan Feng. \"Keypoint-based keyframe selection.\" IEEE Transactions on circuits and systems for video technology 23, no. 4 (2012): 729-734.\n[2]Zhou, Xingyi, Vladlen Koltun, and Philipp Krähenbühl. \"Tracking objects as points.\" In European Conference on Computer Vision, pp. 474-490. Springer, Cham, 2020.\n[3]Tang, Hao, Hong Liu, Wei Xiao, and Nicu Sebe. \"Fast and robust dynamic hand gesture recognition via key frames extraction and feature fusion.\" Neurocomputing 331 (2019): 424-433.\n[4]Li, Yixuan, Zixu Wang, Limin Wang, and Gangshan Wu. 
\"Actions as moving points.\" In European Conference on Computer Vision, pp. 68-84. Springer, Cham, 2020. See Weaknesses. The author did not provide the limitations and potential negative societal impact of their work.", " \nThis paper introduces PointTAD, an architecture for temporally detecting activities in videos that contain multiple co-occurring activities of different labels. \nThe key idea is to use a sequence of query vectors, where each vector aims to predict an activity instance (or background) (similar to DETR). A key contribution of this work is that it associates a set of learnable points to each query vector, which points try to cover the duration of the activity instance and are iteratively refined. It also models intra-proposal relationships among the query-points as well as inter-instance relationships among the query vectors. The method is evaluated on MultiTHUMOS and Charades, where it leads to improved performance under the newly proposed detection-mAP metric, and competitive performance under the classic per frame mAP metric.\n\n Strengths\n===========\n1. This paper addresses an important problem, i.e. temporally detecting activities in videos that contain multiple co-occurring activities of different label. It also introduces a metric for evaluating the segment-level prediction of activity instances (instead of frame-based).\n2. Ablations show that the proposed deformable convolution and the mixing strategy improve performance.\n3. The method leads to improved detection-mAP on Charades and MultiThumos.\n4. The idea of using query points alongside the query vectors is interesting and seems to be working nicely for pooling features to describe temporal segments.\n\nWeaknesses\n===========\n1. Unfair comparison with existing per-frame labeling approaches with the detection-mAP metric: Although the addition of the detection-mAP metric is an important contribution of this work, comparison with prior work under this metric seems not to be fair for the compared approaches, since they do not directly predict segments. This paper post-processes the per-frame action predictions of these works based on thresholding in order to generate segments (from consecutive frames with predictions above a threshold). However, this is a very naive post-processing approach, which is also very sensitive to the choice of threshold (other options would be to detect peaks/blobs in the score time-series, which also involve hyperparams). Therefore, the big improvements of the proposed approach under this metric could be because of the sub-optimal choice of post-processing of the results of competing methods. Under the segmentation-mAP metric, the proposed approach is lagging behind SOTA. (Also, state-of-the-art numbers are missing. Even if other methods use optical flow, results should be reported and discussed).\n2. Missing baseline: it seems like a baseline would be to use just query vectors, the Multi-Head Self-Attention head and predict proposals and class. This would be very similar to DETR and improvements over it would motivate the need for query points/for the multi-level interactive module.\n3. Missing important ablations: What is the benefit of combining sparse predictions with dense scores instead of using just the dense scores (\\beta=0)? What is the importance of the scaling parameter $s$ which differentiates the current approach from RepPoints[38]? \n4. 
Choice of datasets: ActivityNet and THUMOS indeed don’t have many co-occurring activities like Charades/MultiThumos, but they are the standard benchmarks for activity detection methods that predict instances (with start/end times) instead of per-frame activity predictions. Evaluating on either one of them would allow comparison with stronger methods for detection (instead of applying a naive post-processing step on frame labeling methods as done now on Charades/MultiThumos).\n5. Relation with existing modules is not explicitly described: The query points module adapts RepPoints[8] to the activity detection task, the Point-level Locality Preservation uses ideas from deformable DETR, etc.\n6. Some equations/statements need clarification: For example, a) how does the query vector predict $N_s$ offsets (ln 140?), b) $\mathbf{q}$ is defined to be $N_q \times D$ in ln 119, but in later equations it seems to refer to one of the $N_q$ vectors, c) The Linear() functions in eq 3, 4, 6, 7 could be explained in more detail.\n Questions for rebuttal\n==========\n1. How was the threshold chosen for the post-processing of other methods’ results in Table 1? \n2. Ideally it would be good to evaluate the method on Thumos or ActivityNet or explain why this is not possible.\n3. Adding ablations for $\beta$ and $s$, as well as a DETR-based baseline would strengthen the paper.\n\nSuggestions\n1. What is the motivation of 4 deformable sub-points? (instead of 2 etc)\n2. State-of-the-art table is missing a lot of methods; I would suggest taking a look at the tables from [Nagwal, Activity Graph Transformer for Temporal Action Localization, arxiv21] and [Dai, CTRN: Class Temporal Relational Network for Action Detection, BMVC 2021].\n3. How did you obtain the segments in the ablation study (Table 2a)? Is it from the query points (partial min-max)?\n4. How did you choose the hyperparameters, e.g., for the loss weighting?\n Yes", " The paper proposes a learnable query points-based method for multi-label temporal action detection, a more challenging and realistic task compared to single-label temporal action detection, aka. temporal action localization. Instead of uniform sampling on pre-defined segments, PointTAD uses a set of learnable query points to indicate the frames to attend to for each segment. Further, it applies frame mixing and channel mixing on the segment-level feature to integrate instance-level semantics. The entire model takes RGB frames only as input and achieves significant improvement on detection-level mAP on MultiTHUMOS and Charades datasets. *** Strengths\n\n+ Using learnable query points to select representative frames for segment-level video representation is a novel idea, and proves to be more effective than uniform sampling + Temporal RoIAlign (Table 2a). This might potentially bring some insights to the broader community of general video understanding since the video clips/segments are non-uniform by nature, especially in untrimmed videos.\n\n+ The method achieves significant improvement on detection-level mAP on two datasets. Particularly, the detection mAP is more than doubled on Charades.\n\n+ The paper conducts extensive ablations on the design choices. Most of the improvements can be clearly explained by (1) query point-based representation vs. segment-based representation and (2) the parallel application of frame mixing and channel mixing.\n\n\n*** Weaknesses\n\n- Some technical details are unclear. 
Some questions may appear in the next section.\n - How does \"partial Min-Max\" select the subset query points?\n - What's the choice of the query numbers?\n - Eq (9) and Eq (10) look similar. Do you first do label assignment (Eq. 9) off the shelf and then optimize the network based on Eq. 10, or do you do them simultaneously?\n\n- In L271-272, the authors argue that \"*NoHuman* class is not a well-defined action category that has paired action boundaries\". You can also report the detection-mAP by excluding NoHuman class to show it quantitatively.\n\n- From the introduction, \"the action decoder accomplishes context modeling at **point-wise**, **intra-proposal** and **inter-proposal** levels\". How is the inter-proposal level modeled? The Multi-level Interactive Module is operated within individual proposals, isn't it? Any further operation on top? 1. In L153-154, \"*partial Min-Max* function is to select a subset of query points\". How is the subset of local points determined?\n\n\n==== Post-rebuttal Revision\nMy questions have been well addressed by the authors. My previous concerns are mostly on the presentation and I've lifted the quality of the presentation from \"2 fair\" to \"3 good\" and the overall rating from \"6 weak accept\" to \"7 accept\". The authors haven't discussed the limitations and potential negative societal impact of their work.\nSome potential ones might include:\n+ High computation cost of training video models end-to-end.\n+ Potential biases in existing video datasets.", " This paper identifies the impractical setup of classic single-label TAD and solves a more complex problem of multi-label TAD. It presents a novel query-based action detector with action point representation and multi-level interactive module to handle the co-occurring actions and fine-grained discrimination between categories. Extensive experiments on MultiTHUMOS and Charades demonstrate the effectiveness of the proposed method under detection metrics. Strengths: \n+ This paper is the first to discuss the reason behind the wide usage of video-level classifiers in traditional TAD, which it attributes to the label-deficient traditional TAD benchmarks. It provides the first instance detection baseline for the more complex multi-label TAD with detection metrics and is very likely to encourage future TAD works to validate on these challenging multi-label benchmarks.\n+ The proposed method tackles concurrent instances and fine-grained classification by introducing query points to replace action segments, for flexible capture of boundary and semantic information at the same time. To my knowledge, PointTAD is the first to integrate a point detector with a query-based detector in the field of temporal action detection. \n+ The proposed method improves action decoding by designing a comprehensive local-to-global temporal modeling module at pointwise, intra-proposal (Instance-level) and inter-proposal (MHSA).\n+ Experimental results under detection-mAP are good and surpass previous single-label and multi-label methods. \n+ Ablations are solid. The performance improvements by the newly designed modules, i.e., query points and the multi-level interactive module, are verified in ablations. Other important parameter choices are also discussed in ablations.\n+ Detailed visualizations (fig. 5, fig. A) demonstrate the ability of query points to capture different essential motion cues for fine-grained actions. \n+ The paper is well written and easy to follow, with motivation highlighted and major contributions well organized. 
The difference of this work and other query-based detectors is clearly addressed in the related work. \n \nWeaknesses:\n- The segmentation-mAP is weaker than previous SOTA. However, it’s understandable as instance detectors commonly get outperformed by segmentation methods under segmentation metrics. I would suggest the authors to try using sparse detections to refine SOTA segmentation results to improve segmentation-mAP (could use the practice in [M.1] but reversed).\n- Do the Nq action queries share the same set of query points, or each query has its own query points?\n- Qualitative results are suggested to add the comparison with segment-based variant to show the improvement of query points over action segments. \n- The wording in Line 42 is inaccurate. Segment-based detectors can at least detect some of the ground truth actions, it should be “segment-based action detectors mainly predict two kinds of error predictions”.\n- The captions of S1 and S2 in fig1 should specify the basic unit for the timestamps, e.g., second. \n\n[M.1] \"Cdc: Convolutional-de-convolutional networks for precise temporal action localization in untrimmed videos.\", Zheng et al., ICCV 2017.\n Please refer to the 'Weaknesses' for the detailed questions. Yes, discussed in supplements." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 4 ]
[ "sLxxsjWiggZ", "SNZ34LpXgU8", "qwDbSbC0buz", "BzOSwbiteML", "nips_2022__r8pCrHwq39", "nips_2022__r8pCrHwq39", "BzOSwbiteML", "qwDbSbC0buz", "qwDbSbC0buz", "Zf9ih2zCrer", "o8vOZSqT5D", "nips_2022__r8pCrHwq39", "nips_2022__r8pCrHwq39", "nips_2022__r8pCrHwq39", "nips_2022__r8pCrHwq39" ]
nips_2022_c63eTNYh9Y
New Lower Bounds for Private Estimation and a Generalized Fingerprinting Lemma
We prove new lower bounds for statistical estimation tasks under the constraint of $(\varepsilon,\delta)$-differential privacy. First, we provide tight lower bounds for private covariance estimation of Gaussian distributions. We show that estimating the covariance matrix in Frobenius norm requires $\Omega(d^2)$ samples, and in spectral norm requires $\Omega(d^{3/2})$ samples, both matching upper bounds up to logarithmic factors. We prove these bounds via our main technical contribution, a broad generalization of the fingerprinting method to exponential families. Additionally, using the private Assouad method of Acharya, Sun, and Zhang, we show a tight $\Omega(d/(\alpha^2 \varepsilon))$ lower bound for estimating the mean of a distribution with bounded covariance to $\alpha$-error in $\ell_2$-distance. Prior known lower bounds for all these problems were either polynomially weaker or held under the stricter condition of $(\varepsilon,0)$-differential privacy.
Accept
This paper establishes improved and near-optimal lower bounds for private statistical estimation, specifically for private covariance estimation of a Gaussian and heavy-tailed mean estimation. The first result leverages a novel technical result, proved in this paper: a generalization of the fingerprint lemma (Bun, Steinke, Ullman' 17) to exponential families. The second result relies on a private version of Assouad's lemma (developed in recent work). The reviewers agreed that this is a technically novel and interesting work that clearly merits acceptance.
train
[ "sGjjfm3_Ioi", "UFrIN6aTcVNy", "95BLqa0FgLe", "Sy3jcu-g_h1a", "hHDlRD3wACM", "WDKyCJoDL-", "xHVThIbryo4", "uPt88MTYOrZ", "fVJQQYxkV2N1", "WtwOtppAaJT", "w7nsNcnudux", "rDomioTuPax" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for addressing my questions. I have no other questions or concerns for now.", " >I understand the challenges you mentioned, and I have no doubt that your extension of the FP lemma to exponential families is completely non-trivial. Yet, you cannot ignore the fact that the structure of the proof of your Lemma 2.1 is very similar to the structure of the proof of Lemma 3.6 in BSU17 https://arxiv.org/pdf/1604.04618 (Appendix A). For instance, your $g_j(\\eta)$ is analog to their $g(p)$, and your $Z^j$ is analog to their $f(x)\\sum_i(x_i-p)$. This similarity, which in my opinion, is important for understanding the proof of your fingerprinting lemma, is currently not mentioned at all. As a reader, it helped me a lot to understand your proof steps because I also read the proof steps of BSU17 in their much simpler setting. You have a good result, and there is no reason to hide this similarity unless you think I'm wrong. The prior work you mentioned in lines 215-218 is about Lemma 2.3 which is something else.\n\nWe thank the reviewer for their positive and constructive comments about our result. The reviewer is correct in the comparison they are drawing between the proof of Lemma 2.1 and the corresponding lemma from [BSU17]. We would like to note that [BSU17] is not the only paper to feature a statement and proof of this kind (e.g., see Lemmata 6.3 and 6.8 from [1], with a more detailed list of the main works on fingerprinting being in the introduction). Having stated these prior works in the introduction, along with the technical differences in Section 2.1, for brevity, we initially chose to not include a pointer to [BSU17] (or to any other work featuring a fingerprinting lemma) in the technical sections. We will add references to these works in the technical sections in the final version to enable better understanding for the readers.\n\n>Regarding your lower bound for averaging, thanks for clarifying it to me, but I am still missing something. You say that your lower (Thm 4.1) applies for any distribution with second moment bounded by 1, but in your proof you only focus on a specific distribution family {$D_v$}. Why is it ok? (i.e., why does this imply a lower bound for any distribution with second moment bounded by 1?)\n\nThe reason why we focus on a specific hard instance is that the lower bounds we prove in the paper are those in the *minimax sense*. In particular, when estimating a parameter $\\theta$ for a class of distributions $\\mathcal{P}$ with respect to a loss function $\\ell$ (here, the error of the estimator), we define the minimax risk of the problem as the expected value of $\\ell$ when the \"best\" estimator \"competes'' with the \"hardest\" distribution in the class. Thus, if there exists a subset of the family of distributions considered which is “hard to estimate”, then the lower bound holds. The formal definition for this is given in Lines 535-540 of our submission (also see Chapter 7 of [2] for more information). Thus, the correct way to interpret the phrasing of Theorem 4.1 is \"there exist hard distributions in the class of distributions with bounded second moments, such that no matter how good an $(\\varepsilon, \\delta)$-DP estimator is, it will need at least $\\Omega(d/(\\alpha^2 \\varepsilon))$ samples to have MSE less than $\\alpha^2$\".\n\n**References:** \n[1] Gautam Kamath, Jerry Li, Vikrant Singhal, and Jonathan Ullman. Privately learning high dimensional distributions. 
In Proceedings of the 32nd Annual Conference on Learning Theory, COLT ’19, pages 1853–1902, 2019. \n[2] John Duchi. Lecture Notes for Statistics 311/Electrical Engineering 377. https://web.stanford.edu/class/stats311/lecture-notes.pdf", " Thanks for the detailed explanation.", " I understand the challenges you mentioned, and I have no doubt that your extension of the FP lemma to exponential families is completely non-trivial. Yet, you cannot ignore the fact that the structure of the proof of your Lemma 2.1 is very similar to the structure of the proof of Lemma 3.6 in BSU17 https://arxiv.org/pdf/1604.04618 (Appendix A). For instance, your $g_j(\\eta)$ is analog to their $g(p)$, and your $Z^j$ is analog to their $f(x) \\sum_i (x_i-p)$. This similarity, which in my opinion, is important for understanding the proof of your fingerprinting lemma, is currently not mentioned at all. As a reader, it helped me a lot to understand your proof steps because I also read the proof steps of BSU17 in their much simpler setting. You have a good result, and there is no reason to hide this similarity unless you think I'm wrong. The prior work you mentioned in lines 215-218 is about Lemma 2.3 which is something else.\n\nRegarding your lower bound for averaging, thanks for clarifying it to me, but I am still missing something.\nYou say that your lower (Thm 4.1) applies for any distribution with second moment bounded by 1, but in your proof you only focus on a specific distribution family $\\set{D_v}$. Why is it ok? (i.e., why does this imply a lower bound for any distribution with second moment bounded by 1?)", " >Minor: Notation with $\\eta^i$ is a bit confusing at first glance because it looks like you're raising a vector to the power $i$. Subscript or parenthetical superscript would be clearer.\n\nWe will switch to parenthetical superscripts for $\\eta^i$ to make the notation clearer.\n\n>Heavy-tailed mean estimation: -Can you clarify what you mean by second moment bounded by 1? (e.g. coordinate-wise? central??) Also, the work https://arxiv.org/pdf/2106.01336.pdf gave a lower bound for strongly convex DP SCO (under zCDP -- not quite approximate DP) that seems to essentially be a reduction to mean estimation. So is your main contribution on this problem to be a strengthening of their construction from zCDP to approximate DP? Or am I misunderstanding something here?\n\nRegarding [10], the result given in that paper assumes that coordinate-wise moments are upper-bounded by $1$. On the other hand, our work assumes that the stronger moment assumption that the second moment of *any* single-dimensional projection is upper-bounded by $1$, under which it is more challenging to prove lower bounds. This is clarified in lines 506-507 (in the appendix about the preliminaries). We remark that this is the same moment assumption used in the works cited in line 79, all of which involve results that hold under pure differential privacy.\n\n>Limitations - yes Societal impacts - not addressed\n\nFinally, regarding the fact that the reviewer claimed that we didn't address the societal implications of our work, we would like to point out that our submission is about lower bounds and is purely theoretical. For that reason, we do not expect it to have any obvious societal impact.\n\n**References:** \n[1] Wikipedia, Exponential Family, Table of Distributions https://en.wikipedia.org/wiki/Exponential_family#Table_of_distributions \n[2] Mark Bun, Jonathan Ullman, and Salil Vadhan. 
Fingerprinting codes and the price of approximate differential privacy. In Proceedings of the 46th Annual ACM Symposium on the Theory of Computing, STOC ’14, pages 1–10, New York, NY, USA, 2014. ACM. \n[3] Raef Bassily, Adam Smith, and Abhradeep Thakurta. Private empirical risk minimization: Efficient algorithms and tight error bounds. In Proceedings of the 55th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’14, pages 464–473, Washington, DC, USA, 2014. IEEE 363 Computer Society. \n[4] Thomas Steinke and Jonathan Ullman. Interactive fingerprinting codes and the hardness of preventing false discovery. In Proceedings of the 28th Annual Conference on Learning Theory, COLT ’15, pages 1588–1628, 2015. \n[5] Thomas Steinke and Jonathan Ullman. Between pure and approximate differential privacy. The Journal of Privacy and Confidentiality, 7(2):3–22, 2017. \n[6] Thomas Steinke and Jonathan Ullman. Tight lower bounds for differentially private selection. In Proceedings of the 58th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’17, pages 552–563, Washington, DC, USA, 2017. IEEE Computer Society. \n[7] Cynthia Dwork, Adam Smith, Thomas Steinke, Jonathan Ullman, and Salil Vadhan. Robust traceability from trace amounts. In Proceedings of the 56th Annual IEEE Symposium on Foundations of Computer Science, FOCS ’15, pages 650–669, Washington, DC, USA, 2015. IEEE Computer 394 Society. \n[8] Gautam Kamath, Jerry Li, Vikrant Singhal, and Jonathan Ullman. Privately learning high dimensional distributions. In Proceedings of the 32nd Annual Conference on Learning Theory, COLT ’19, pages 1853–1902, 2019. \n[9] T. Tony Cai, Yichen Wang, and Linjun Zhang. The cost of privacy: Optimal rates of convergence for parameter estimation with differential privacy. arXiv preprint arXiv:1902.04495, 2019. \n[10] Gautam Kamath, Xingtu Liu, and Huanyu Zhang. Improved rates for differentially private stochastic convex optimization with heavy-tailed data. arXiv preprint arXiv:2106.01336, 2021.", " We thank the reviewer for their time and thoughtful comments. We address their questions here.\n\n>Insufficient clarity on the distinction between the proposed method and existing methods. It is explained at a high level in the introduction, but still not as clear or precise as I would like. For example: I don't think it's made clear enough in what sense you generalize the fingerprinting method. Your method applies to exponential families, but what class did the method of prior works apply to? Also, a comparison/discussion of your lemmas and theorems with corresponding results in prior work would be very helpful.\n\nWhen we say that we generalize the fingerprinting technique, we mean that we formulate and prove a fingerprinting-style result for a very large and useful family of distributions (see the table in [1] for a list of the basic exponential families), which covers prior results (see Appendix G), and provides a way to prove lower bounds for DP estimation of different parameters of many other useful distributions. Older fingerprinting-style arguments constructed hard instances for problems (e.g. query release) by resorting either to binary product distributions or Gaussians with independent marginals. 
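For readers less familiar with the terminology, the standard exponential-family form we have in mind is the following (textbook notation; the paper's own notation may differ slightly):

$$ p_{\eta}(x) \;=\; h(x)\,\exp\big(\langle \eta, T(x)\rangle - Z(\eta)\big), $$

with natural parameter $\eta$, sufficient statistic $T$, and log-partition function $Z(\eta) = \log \int h(x)\, e^{\langle \eta, T(x)\rangle}\, dx$; for instance, the Bernoulli case has $T(x) = x$ and $Z(\eta) = \log(1 + e^{\eta})$.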
Both of these distributions are instances of exponential families, but they constitute a very small subset of what can be expressed as exponential families, with the need for independent marginals being especially restrictive (it was the main obstacle to obtaining a lower bound for Gaussian covariance estimation under $(\\varepsilon,\\delta)$-DP, which was an open question going back to [9]). Thus, we believe that we significantly contributed to strengthening the toolbox of lower bounds for differential privacy. Indeed, the majority of lower bounds under approx-DP were applications fingerprinting. The main papers that used fingerprinting include [2-6] (which use binary product distributions) and [7-9] (which consider both binary product distributions and Gaussians with independent marginals).\n\n>Related to the above, I'm not sure I understand the main challenge in proving your results and how your proof approach/techniques differs from prior work.\n\nThere were various challenges involved in proving our results. The first has to do with the fact that it was not clear a-priori that using fingerprinting was the ``right’’ way to get a lower bound for covariance estimation. \nThe second has to do with the fact that, even after identifying the appropriate approach, coming up with the correct formulation for our fingerprinting result was not obvious. All previous results focus on the problem of mean estimation, whereas we argue that focus should be moved from mean estimation and towards estimating the parameter vector of exponential families. Additionally, in fingerprinting-style proofs, prior works considered the trade-off between accuracy and privacy by directly comparing the output of the estimator with the input samples, while in our lemma the samples are replaced by the corresponding sufficient statistics. Moreover, our choice to have the estimator estimate the deviation of the parameter vector from a point rather than the vector itself was not obvious. All the aforementioned issues are not reflected when looking at the final result itself, but they were crucial and non-trivial during the process of formulating and proving the statement. \nThe last challenge involves appropriately leveraging the properties of exponential families at crucial points when proving the intermediate lemmas that build towards our main result. In conclusion, while we agree that the basic steps of the proof follow a structure that is similar to that in prior fingerprinting-style results, we argue that there were a number of obstacles which necessitated thinking out of the box at some important stages.", " >In lines 56-59, you mention that the lower bound in Frobenius norm (Thm 1.1) matches the non-private sample complexity, while the lower bound in Spectral norm (Thm 1.2) does not (there is a $\\sqrt{d}$ gap). First, can you explain (or provide reference) why the sample complexity of estimating the covariance matrix in spectral norm without privacy is $O(d)$? Second, how can it be that Thm 1.1 is tight (w.r.t. d) also for non-private algorithms while Thm 1.2 is not? I’m asking because Thm 1.2 is proven directly from Thm 1.1, so why the same arguments do not follow in the non-private case (providing a non-private analog of Thm 1.2)? What am I missing?>\n\nFirst, regarding the reference about the sample complexity of non-private spectral estimation of Gaussian covariances, we refer the reviewer to Example 6.3 in [1]. 
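To summarize the relevant bound there (our paraphrase of the standard textbook result, not a quotation from [1]): for $n \gtrsim d$ i.i.d. samples from $N(0, \Sigma)$, the empirical covariance $\hat{\Sigma}_n$ satisfies, with high probability,

$$ \frac{\lVert \hat{\Sigma}_n - \Sigma \rVert_2}{\lVert \Sigma \rVert_2} \;\lesssim\; \sqrt{\frac{d}{n}} + \frac{d}{n}, $$

so $n = O(d/\alpha^2)$ samples already suffice for spectral error $\alpha$ in the absence of privacy constraints.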
However, under privacy constraints, all known algorithms required $\\Omega(d^{3/2})$ (see the paragraph \"The Covariance Estimation Bottleneck\" in [2] for a discussion about this). A matching lower bound was known only for worst-case data and it was an open problem to show this for a natural class of distributions (like Gaussians). Our work is the first to show this lower bound, thus verifying the existence of a gap between the non-private and private sample complexities for this problem. Conversely, such a gap does not exist when comparing the private and non-private sample complexities of Frobenius/Mahalanobis estimation of Gaussian covariances. Indeed, the non-private sample complexity for that problem is $\\widetilde{\\Theta}(d^2)$, which is also the sample complexity attained by the algorithm in [3] (which, in turn, nearly matches our LB in Theorem 3.1). Consequently, one can get privacy ``for free” for Frobenius/Mahalanobis estimation, but not for spectral estimation where, for $\\varepsilon \\le 1, \\alpha = \\mathcal{O}(1/\\sqrt{d})$, the cost increases by a dimension-dependent factor. We stress that the assumption $\\varepsilon \\le 1$ is crucial, since it appears in line 595. Having a larger $\\varepsilon$ would imply that we are in the low privacy regime, in which case the sample complexities end up being dominated by the non-private terms ($d^2/\\alpha^2$ and $d/\\alpha^2$ for Frobenius and spectral estimation, respectively), which correspond to the case where $\\varepsilon = \\infty$. \nFinally, we point out that our reduction for spectral estimation would give the appropriate lower bound for the non-private setting, as well ($\\Omega(d/\\alpha^2)$). The lower bound for Frobenius estimation without privacy is $\\Omega(d^2/\\alpha^2)$. Therefore, the same step (as in our reduction for DP spectral estimation) of setting $\\alpha$ to $\\alpha \\sqrt{d}$ would yield the aforementioned lower bound.\n\n>Minor issues/typos\n\nThe reviewer is correct regarding all the three typos they have identified. We will update the manuscript accordingly.\n\n**References:** \n[1] Martin J. Wainwright. High-Dimensional Statistics: A Non-Asymptotic Viewpoint. Cambridge University Press, 2019. \n[2] Gavin Brown, Marco Gaboardi, Adam Smith, Jonathan Ullman, and Lydia Zakynthinou. Covariance-aware private mean estimation without private covariance estimation. arXiv preprint arXiv:2106.13329, 2021. \n[3] Hassan Ashtiani and Christopher Liaw. Private and polynomial time algorithms for learning Gaussians and beyond. arXiv preprint arXiv:2111.11320, 2021. \n[4] Gautam Kamath, Jerry Li, Vikrant Singhal, and Jonathan Ullman. Privately learning high dimensional distributions. In Proceedings of the 32nd Annual Conference on Learning Theory, COLT ’19, pages 1853–1902, 2019. \n[5] Rina Foygel Barber and John C Duchi. Privacy and statistical risk: Formalisms and minimax bounds. arXiv preprint arXiv:1412.4451, 2014. \n[6] Gautam Kamath, Vikrant Singhal, and Jonathan Ullman. Private mean estimation of heavy-tailed distributions. In Proceedings of the 33rd Annual Conference on Learning Theory, COLT ’20, pages 2204–2235, 2020.", " We thank the reviewer for their time and helpful feedback. We address their comments here.\n\n>The proof of the FP lemma for exponential families, which seems like the main technical contribution of this paper, is achieved using a natural extension of the proof of the original FP lemma [Bun, Steinke, Ullman, SODA 17]. 
This extension is definitely not straightforward and requires some technical work, but I don’t see something conceptually new in the proof technique. This similarity is currently not mentioned at all. It should be emphasized in the paper for making the presentation fairer, and for helping the reader to understand all the long calculations (which look much nicer in the simpler setting of [BSU17]).\n\nThere were various challenges involved in proving our results. The first one has to do with the fact that it was not clear a-priori that using fingerprinting was the ``right’’ way to get a lower bound for covariance estimation. \nThe second has to do with the fact that, even after identifying the appropriate approach, coming up with the correct formulation for our fingerprinting result was not obvious. All previous results focus on the problem of mean estimation, whereas we argue that focus should be moved from mean estimation and towards estimating the parameter vector of exponential families. Additionally, in fingerprinting-style proofs, prior works considered the trade-off between accuracy and privacy by directly comparing the output of the estimator with the input samples, while in our lemma the samples are replaced by the corresponding sufficient statistics. Moreover, our choice to have the estimator estimate the deviation of the parameter vector from a point rather than the vector itself was not obvious. All the aforementioned issues are not reflected when looking at the final result itself, but they were crucial and non-trivial during the process of formulating and proving the statement. \nThe last challenge involves appropriately leveraging the properties of exponential families at crucial points when proving the intermediate lemmas that build towards our main result. In conclusion, while we agree that the basic steps of the proof follow a structure that is similar to that in prior fingerprinting-style results, we argue that there were a number of obstacles which necessitated thinking out of the box at some important stages. \nThat said, we would like to emphasize that we did acknowledge the similarities with the prior work in our manuscript, and tried our best to highlight the differences between the prior work and ours, while staying within the page limit. For example, in lines 215-218, we acknowledge the similarities between the proof of lemma 2.3 and similar results in prior work. More broadly, we believe that section 2.1 also identifies the conceptual similarities between our technique and previous results, while also pointing out the differences. For the final version, we could add a more detailed discussion about the ideas that were similar to or different from the prior work.\n\n>A comparison of the mean estimation lower bound (Thm 1.3) with previous lower bounds is missing. It is not clear to me if the statement itself is new or only the proof technique (Assouad’s method rather than FP lemma). See my second question in the next part.\n\nHere, we address both Weakness 2 and Question 2. The novelty in the lower bound of Theorem 1.3 lies primarily in the result/statement. We compare with two prior lower bounds: \n1) [4] prove an $(\\varepsilon,\\delta)$-DP lower bound for estimating the mean of a Gaussian distribution $N(\\mu,\\mathbb{I})$. They show a sample complexity lower bound of $\\Omega(d/\\varepsilon\\alpha)$, which is specific to Gaussians. Our lower bound applies to a broader class of distributions – those with bounded second moment. 
We show that this broader class is harder to estimate, requiring $\\Omega(d/\\varepsilon\\alpha^2)$ samples.\n2) [5, 6] prove an $(\\varepsilon,0)$-DP lower bound for estimating the mean of any distribution with bounded second moments. They show a sample complexity lower bound of $\\Omega(d/\\varepsilon\\alpha^2)$. We show the same sample complexity lower bound for the same problem under the weaker constraint of $(\\varepsilon,\\delta)$-DP.\n\nWe included discussion of 2) in lines 78-83, and will add a comparison there with 1) as well.", " We thank the reviewer for their time and positive evaluation of our work. Regarding the questions raised, we have the following comments.\n\n>The paragraph around line 146 and line 164 has some repetition w.r.t. discussion about T. Also T is not defined here and it is only defined in the appendix.\n\n$T$ is a function associated with exponential families, known as the ``sufficient statistics’’ of the family. We omitted its definition (as well as the definition of exponential families) from the main body due to space limitations. In a future update to the manuscript, we will try our best to move some of the material from the appendix to the main body and improve the clarity of the presentation.\n\n>Line 154: The second line is confusing to me: the authors say they assume the existence of vectors eta^1, eta^2 but for what purpose? What properties do these vectors need to satisfy? Is the eta drawn from some distribution that is supported between these two vectors?\n\nThe reviewer is correct: $\\eta$ is drawn uniformly from the box defined by opposite corners $\\eta^1$ and $\\eta^2$. In more detail: most arguments for proving lower bounds in these settings take a common form. The algorithm is uncertain about the value of some parameter of interest, and is given data from a distribution where the parameter is selected randomly from this uncertainty set. It is safe to assume that the algorithm is aware of this uncertainty set, and what distribution the natural parameter follows over this set. The vectors $\\eta^1$ and $\\eta^2$ define this uncertainty set: independently for each coordinate $j$, we let the $j$-th coordinate of the natural parameter vector $\\eta$ be drawn uniformly at random from the interval $[\\eta^1_j, \\eta^2_j]$.\nThe only necessary condition is that the Cartesian product of the previous intervals must be a subset of the range of natural parameters of the exponential family (again, defined in the appendix). However, in order to obtain strong lower bounds, one must choose $\\eta^1$ and $\\eta^2$ appropriately on a problem-by-problem basis.", " This paper generalizes the fingerprinting method of BUV14 to exponential families, and uses this result to prove lower bounds on approximate DP Gaussian covariance estimation and heavy-tailed mean estimation. Strengths: \n\n-The results are very interesting and seem to expand the DP lower bound toolbox. \n\nWeaknesses: \n\n-Insufficient clarity on the distinction between the proposed method and existing methods. It is explained at a high level in the introduction, but still not as clear or precise as I would like. For example: I don't think it's made clear enough in what sense you generalize the fingerprinting method. Your method applies to exponential families, but what class did the method of prior works apply to? Also, a comparison/discussion of your lemmas and theorems with corresponding results in prior work would be very helpful. 
\n\n-Related to the above, I'm not sure I understand the main challenge in proving your results and how your proof approach/techniques differs from prior work. \n\n-Minor: Notation with $\\eta^i$ is a bit confusing at first glance because it looks like you're raising a vector to the power $i$. Subscript or parenthetical superscript would be clearer. \n\n Heavy-tailed mean estimation: \n-Can you clarify what you mean by second moment bounded by 1? (e.g. coordinate-wise? central??) Also, the work https://arxiv.org/pdf/2106.01336.pdf gave a lower bound for strongly convex DP SCO (under zCDP -- not quite approximate DP) that seems to essentially be a reduction to mean estimation. So is your main contribution on this problem to be a strengthening of their construction from zCDP to approximate DP? Or am I misunderstanding something here? \n Limitations - yes \nSocietal impacts - not addressed ", " This paper provides new tight lower bounds for private estimating the covariance matrix of a Gaussian and private mean estimation of heavy-tailed distributions over $\\mathbb{R}^d$.\nPrevious results gave similar lower bounds under simplified assumptions (e.g., pure DP or product distributions). This paper fills a number of gaps by presenting lower bounds for non-product distributions under approximate DP.\nMost of the presentation is devoted to the former lower bound (covariance matrix estimation), which is achieved using a new generalized “Fingerprinting Lemma” for exponential families.\n\nThe fingerprinting lemma [Bun, Steinke, Ullman, SODA 17] gives a way to sample n codewords in dimension $d=O(n^2)$ such that every algorithm that estimates the average of the points “too well”, must be “too correlated” with one of the codewords, and thus cannot be private.\nThe main bottleneck of the above approach is that the codewords are sampled from product distribution, and therefore provide lower bounds only for that cases.\n\nThis paper provides a new FP lemma (Lemma 2.2) that extends the original one to exponential families which capture a wider range of distributions. Moreover, it does not focus on estimating the average of the points, but on estimating the “natural parameter vector” $\\eta$ of the family, which captures the parameters of a specific family (E.g., if the family is all the Gaussians of the form $N(0,\\Sigma)$, then $\\eta$ is essentially a vector representation of the covariance matrix $\\Sigma$). \n\nThe new FP lemma is used to prove a new lower bound for privately estimating the natural parameter vector of a distribution from a general exponential family (Thm 2.5). \nApplying Thm 2.5 to the problem of estimating the covariance matrix yields (with some additional work) a tight $\\tilde{\\Omega}(d^2)$ lower bound for estimating the covariance matrix in Frobenius norm (Thm 3.1). 
The latter is then used for proving a tight lower bound of $\\tilde{\\Omega}(d^{1.5})$ for estimating the covariance matrix in the Spectral norm.\n\nFinally, in section 4 (page 9), the $\\tilde{\\Omega}(d)$ lower bound for private mean estimation of heavy-tailed distributions is given, which is based on a different known method of Assouad (rather than fingerprinting).\n Strengths:\n1) First tight lower bounds for the important problem of estimating the covariance matrix of a Gaussian under approximate DP.\n2) Interesting FP lemma for exponential families that can find other applications.\n3) New lower bound for mean estimation using Assouad’s method.\n4) Overall, good writing quality, well-organized, and looks sound (all missing proofs are given in the appendix).\n\nWeaknesses:\n1) The proof of the FP lemma for exponential families, which seems like the main technical contribution of this paper, is achieved using a natural extension of the proof of the original FP lemma [Bun, Steinke, Ullman, SODA 17]. This extension is definitely not straightforward and requires some technical work, but I don’t see something conceptually new in the proof technique. This similarity is currently not mentioned at all. It should be emphasized in the paper for making the presentation fairer, and for helping the reader to understand all the long calculations (which look much nicer in the simpler setting of [BSU17]).\n2) A comparison of the mean estimation lower bound (Thm 1.3) with previous lower bounds is missing. It is not clear to me if the statement itself is new or only the proof technique (Assouad’s method rather than FP lemma). See my second question in the next part.\n\nOverall, the strengths (especially the end results for covariance matrix estimation) seem to outweigh the weaknesses, and therefore I lean towards accepting this paper.\n 1) In lines 56-59, you mention that the lower bound in Frobenius norm (Thm 1.1) matches the non-private sample complexity, while the lower bound in Spectral norm (Thm 1.2) does not (there is a $\\sqrt{d}$ gap). First, can you explain (or provide reference) why the sample complexity of estimating the covariance matrix in spectral norm without privacy is $O(d)$? \nSecond, how can it be that Thm 1.1 is tight (w.r.t. d) also for non-private algorithms while Thm 1.2 is not? I’m asking because Thm 1.2 is proven directly from Thm 1.1, so why do the same arguments not follow in the non-private case (providing a non-private analog of Thm 1.2)? What am I missing?\n\n2) Can you please explain how Thm 1.3 is different from other known lower bounds for DP mean estimation? For example, [Kamath, Li, Singhal, Ullman, COLT 19] proved that an approximate DP algorithm that accurately estimates the mean of a Gaussian $N(\\mu,I_{d\\times d})$ must incur an additive error of $\\tilde{\\Omega}(d)$. Since $N(\\mu,I_{d\\times d})$ has a second moment bounded by 1, it seems stronger than your statement. Is that correct?\n\nMinor issues/typos:\n1) In the second equality after line 192: should $\\eta_j$ be t?\n2) In line 203: change Lemma 2.2 to Lemma 2.1. \n3) In Lemma 2.3: I think that $s$ is not defined at this point (only later).\n In the “Conclusion and Open Problems” section (section 5), the authors address the limitations of their results.", " The goal of this paper is to prove lower bounds on the sample complexity needed for private statistical estimation. In the case of pure eps-DP, one can often use packing arguments to prove lower bounds. 
Lower bounds for approximate (eps, delta)-DP are often much more involved with one well-developed technique being the fingerprinting lemma. However, past works using the fingerprinting lemma required that underlying distribution be a product distribution. This left open lower bounds for some fundamental distributions such as Gaussian distributions.\n\nThe main technical contribution of this paper is a generalized fingerprinting lemma for exponential families. As an important application, they apply this new tool to derive Omega(d^2/alpha * eps) lower bound on the sample complexity needed to learn a single d-dimensional Gaussian, thus closing the gap for this distribution. As a second application, they provide a lower bound for mean-estimation of distributions with bounded second moments. **Strengths.**\nThis is a very fundamental problem and the literature was clearly missing a key tool that would allow us to prove optimal results in this area. This paper provided an important contribution in this area by giving us a new tool (based on fingerprinting) which allowed us to answer some fundamental questions in this area. Personally, I would like to dive further and learn more about the techniques in this paper.\n\n**Weaknesses.**\nNo concrete weaknesses. I only have a couple of rather minor comments.\n- The paragraph around line 146 and line 164 has some repetition w.r.t. discussion about T. Also T is not defined here and it is only defined in the appendix.\n- Line 154: The second line is confusing to me: the authors say they assume the existence of vectors eta^1, eta^2 but for what purpose? What properties do these vectors need to satisfy? Is the eta drawn from some distribution that is supported between these two vectors? Limitations discussion is fine." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "fVJQQYxkV2N1", "Sy3jcu-g_h1a", "xHVThIbryo4", "uPt88MTYOrZ", "WDKyCJoDL-", "WtwOtppAaJT", "uPt88MTYOrZ", "w7nsNcnudux", "rDomioTuPax", "nips_2022_c63eTNYh9Y", "nips_2022_c63eTNYh9Y", "nips_2022_c63eTNYh9Y" ]
nips_2022_AezHeiz7eF5
Theory and Approximate Solvers for Branched Optimal Transport with Multiple Sources
Branched Optimal Transport (BOT) is a generalization of optimal transport in which transportation costs along an edge are subadditive. This subadditivity models an increase in transport efficiency when shipping mass along the same route, favoring branched transportation networks. We here study the NP-hard optimization of BOT networks connecting a finite number of sources and sinks in $\mathbb{R}^2$. First, we show how to efficiently find the best geometry of a BOT network for many sources and sinks, given a topology. Second, we argue that a topology with more than three edges meeting at a branching point is never optimal. Third, we show that the results obtained for the Euclidean plane generalize directly to optimal transportation networks on two-dimensional Riemannian manifolds. Finally, we present a simple but effective approximate BOT solver combining geometric optimization with a combinatorial optimization of the network topology.
Accept
The paper presents novel structural and algorithmic results for solving the branched optimal transport problem. In the problem, flow is to be routed from sources to sinks (terminals) in the plane with the possibility of adding non-terminal intermediate nodes. The flow cost on each edge is proportional to the distance between endpoints and subadditive in flow amount; this encourages solutions with “branching”, where flow is routed along common paths. The problem is to select the topology of the graph, location of branching points, and flow amounts. The paper presents structural results about the optimal solution: it is always a tree, which also determines the optimal flow amounts. Results are also given about the branching factor and angles in an optimal solution, which are used in a heuristic algorithm for placing the branching points. Reviewers unanimously found the results to be novel and interesting, and the paper of high quality. They appreciated the theoretical and algorithmic work. Reviewers questioned what applications this work might have (especially to ML) — no concrete applications were given in the paper, but the authors speculated on some in the rebuttal. Given this, two reviewers questioned whether the scope of the paper was a good match for NeurIPS. The meta-reviewer finds the match appropriate, given the interest in optimization and OT within the NeurIPS community, but these reviewer comments indicate that the audience is likely narrow, and the paper could be strengthened by connecting it to concrete applications.
train
[ "3pMPlvkRFwe", "ylPh2gi6xn", "KC-TzCy628c", "GZ5pJnsRepz", "nKXWP8jVdojf", "enb4P-hXN5k6", "rFa9Tc-3Nl3", "2tb0IH8aOqT", "5rUZtcxpnD", "s4mT_ASaojb", "SG5LvrbfGKi", "ONC5PZ5g9L", "XMmavrpH0Hu", "JmRxS9U7AcL", "eGSHJG5gccD", "mpUUZQmbDx", "jQqo_E7N6go", "pLi6wTF0b0S" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I am satisfied with the response to my review. So I can keep the score as it is.", " I see, thank you for the quick reply.", " Thank you for acknowledging our rebuttal. \nWLOG, the OT solution can be assumed to be acyclic [2]. As such, it provides a valid input topology for our greedy optimization (Alg. 3), even though it does not contain any branching points. Like in the case starting from a star graph input (see Fig. 24), the algorithm itself inserts (additional) branching points where beneficial/needed. Clearly, for $\\alpha \\approx 1$, the OT solution provides a reasonable starting point with relatively low cost (the objective function is almost the one of OT). From there, our algorithm can only improve the solution further by introducing branching points.", " Thank you very much for the extended responses. They've certainly been very informative, and I've also appreciated the improvements made in the updates. It was especially heartening to understand the authors' efforts in comparison attempts. \n\nI would like to keep my score as is however, as I still feel that the paper would be best split into a theoretical portion and a numerical/applied portion. The detailed arguments of the theoretical portion would be of little interest, I imagine, to potential users with the suggested applications in mind. Likewise, those interested in proving structural results about BOT may not care to know about numerical details or algorithms, or potential applications. A separate theoretical paper could be written more cohesively, without the need to push arguments to the appendix and to squeeze things into 9 pages. A numerical/applied paper could expand upon potential applications and provide more extensive experimentation and results within one or multiple use cases.\n\nUltimately, I do want to note that my estimation of the work has increased, and I would not be unhappy if it is accepted as is.", " Thank you for addressing my comments and questions. I am satisfied with the answers and so I would like to keep the score as it is.", " Thank you for clarifying the running time.\nThe authors mentioned that \"For instance, for BOT problems with $\\alpha \\approx 1$, the OT solution provides a strong initial guess for the greedy topology optimization.\" \nWould you please elaborate a bit on how this works? I.e., how does one use the OT solution to help with the selection of BPs for BOT?\n", " Thank you for addressing my concern on motivation and practicality, and for clarifying the workflow of the topology selection.", " Thank you very much for your positive review and the detailed feedback! We will address the comments not covered in the overall response here.\n\nRegarding answers to the following topics, please have a look at the corresponding part of our overall response:\n+ Runtime of practical algorithms\n+ Comparing our solver against existing approaches\n\n**1 A real-world example for BOT**\n\nMany practically relevant real-world problems can be formulated as instances of BOT, such as water supply, mail delivery and other problems related to resource allocation. Let us also give a concrete example to illustrate how one may choose the $\\alpha$-parameter. For instance, when designing a water distribution network, the construction cost of pipes may be proportional to the amount of material used, i.e. proportional to their length and diameter. Since the flow through a pipe is proportional to the diameter squared, the cost is, conversely, proportional to the square root of the flow, i.e. $\\alpha = 1/2$. 
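To make the role of the subadditive exponent concrete, here is a minimal numeric sketch (our own illustration, not code from the paper) comparing direct shipping against a branched plan under the BOT edge cost $m^{\alpha} \cdot \mathrm{length}$ of Eq. (1). The branching-point position is a hypothetical, unoptimized choice:

```python
import math

def edge_cost(mass, p, q, alpha=0.5):
    # BOT edge cost: (flow)^alpha times Euclidean length (cf. Eq. (1)).
    return mass ** alpha * math.dist(p, q)

# Two unit-supply sources shipping to one sink with demand 2.
s1, s2, sink = (0.0, 1.0), (0.0, -1.0), (4.0, 0.0)

# Plan A: two direct edges, each carrying mass 1.
direct = edge_cost(1, s1, sink) + edge_cost(1, s2, sink)

# Plan B: merge at a branching point b, then ship mass 2 jointly.
b = (1.0, 0.0)  # hypothetical branching point, not optimized here
branched = edge_cost(1, s1, b) + edge_cost(1, s2, b) + edge_cost(2, b, sink)

print(f"direct: {direct:.3f}, branched: {branched:.3f}")  # 8.246 vs 7.071
```

For $\alpha = 1/2$ the branched plan is already cheaper with this rough choice of $b$, since shipping mass 2 jointly costs only $\sqrt{2}$ per unit length instead of 2.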
The choice of $\\alpha$ in general relies on a careful inspection of the problem at hand.\n\n**2 Relation between topology and edge flows**\n\nAs explained in Sect. 2, WLOG, the topology is chosen to be a full tree topology that has $n-2$ branching points (BPs), each of degree 3. Each of the $n$ terminals must then have degree 1. This does not limit the generality of our explored solutions as higher-degree BPs can form effectively if two or more BPs are located at the same position (as illustrated in Fig. 2).\nFurthermore, for _any_ tree topology the edge flows are already uniquely determined by the flow constraints in Eq.(1) (i.e. supply, demand and flow conservation). The constraints form a linear system of equations which has a unique solution. In Sect. 2 (around l. 67), we state that this system can be solved in linear time, by an approach called ``elimination on leaves of a tree'' in [23]. \nLet us for a full tree topology explain very briefly how this elimination works (a code sketch is given at the end of this reply): Given a full tree topology, choose any branching point that is connected to two terminals. The flows through the two terminal edges are known from their respective demand or supply. The flow through the third edge incident to this branching point is thus immediately obtained from flow conservation. On the tree defined by the remaining edges, this procedure is repeated until all flows are determined.\n\n**3 Clarification on coupled branching points**\n\nAs in the previous answer, WLOG, the topology of a BOT solution can be represented as a full tree topology, in which every branching point (BP) has degree 3. Indeed, a degree-4 branching may be realized by two branching points coupling in the same location (see Def. of coupled BPs in Sect. 2). Regarding a final BOT solution, a BP that truly has degree 4 (on the graph level) and a coupled BP with effective degree 4 are fully equivalent. Therefore, by showing that coupled BPs located away from terminals cannot be globally optimal, we show the same for solutions which include degree-4 BPs on the graph level. Regarding the interpretation of our result, we do not have a satisfactory answer why diversifying a flow in 3 directions at the same time is never optimal. One can only conjecture that the subadditivity of BOT forces flow to accumulate, which penalizes branching, and that therefore bifurcations are delayed as far as possible. \n\n**4 Distance between edge and \"connecting\" node**\n\nIn the greedy heuristic, after an edge has been cut, one of the nodes incident to the cut edge is connected to a new edge. This new edge needs to be chosen from the opposite connected component, such that afterwards the topology is connected again. We should clarify that the node which forms this new connection to an existing edge is denoted by $c$ (for \"connecting node\"); $c$ does not refer to the whole component. The distance from the node $c$ to an edge $e$ is simply given by the distance between a point and a line segment (see Eq. (33) in appendix G.3). This will be clarified in the final version of our manuscript.\n\n**5 Are optimal BOT solutions unique?**\n\nIn general, the globally optimal solution to a BOT problem is not unique. This is most easily illustrated with a highly symmetric counterexample. Picture a BOT problem with two sinks of equal demand and two sources of equal supply positioned on the corners of a perfect square. The sources should be on opposite corners (similar to the setup in Fig. 6(a)). In this case two parallel horizontal paths from sources to sinks are an equally good solution as two vertical ones. In problems with little symmetry, it is however expected to be rather rare that two different transport plans have exactly the same cost.
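As promised above, a minimal sketch of the leaf elimination for computing edge flows (function and variable names are our own; the paper's actual implementation may differ):

```python
def tree_edge_flows(adj, supply):
    """Unique edge flows on a tree via leaf elimination.

    adj:    dict node -> set of neighbor nodes (must form a tree)
    supply: dict node -> net supply (>0 for sources, <0 for sinks,
            0 for branching points); values must sum to zero.
    Returns a dict (i, j) -> signed flow, one entry per edge.
    """
    adj = {v: set(nb) for v, nb in adj.items()}  # defensive copy
    net = dict(supply)
    flows = {}
    leaves = [v for v in adj if len(adj[v]) == 1]
    while leaves:
        v = leaves.pop()
        if not adj[v]:  # the final node of the tree, nothing left
            continue
        (u,) = adj[v]              # the single remaining edge (v, u)
        flows[(v, u)] = net[v]     # it must carry v's net mass
        net[u] += net[v]           # flow conservation at u
        adj[u].discard(v)
        adj[v].clear()
        if len(adj[u]) == 1:
            leaves.append(u)
    return flows

# Sources a, b (supply 1 each), sink t (demand 2), one branching point.
adj = {"a": {"bp"}, "b": {"bp"}, "t": {"bp"}, "bp": {"a", "b", "t"}}
print(tree_edge_flows(adj, {"a": 1, "b": 1, "t": -2, "bp": 0}))
# {('t', 'bp'): -2, ('b', 'bp'): 1, ('bp', 'a'): -1}
```

Each edge is visited exactly once, so the elimination indeed runs in time linear in the number of nodes.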
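As an aside on the branching angles mentioned above: at a degree-3 branching point where flows $m_1$ and $m_2$ merge into $m_1 + m_2$, the first-order optimality conditions fix the angles between the incident edges (cf. Eq. (3)). Below is a hedged sketch of the classical formulas in our own notation; the paper's exact parametrization may differ:

```python
import math

def branching_angles(m1, m2, alpha):
    """Angles that the two child edges make with the parent edge
    direction at an optimal degree-3 branching point."""
    k1, k2, k = m1 ** alpha, m2 ** alpha, (m1 + m2) ** alpha
    # Law-of-cosines balance of the edge "tensions" m^alpha.
    theta1 = math.acos((k ** 2 + k1 ** 2 - k2 ** 2) / (2 * k * k1))
    theta2 = math.acos((k ** 2 + k2 ** 2 - k1 ** 2) / (2 * k * k2))
    return theta1, theta2

t1, t2 = branching_angles(1.0, 1.0, alpha=0.0)
print(math.degrees(t1 + t2))  # 120.0: the Steiner-tree case
t1, t2 = branching_angles(1.0, 1.0, alpha=1.0)
print(math.degrees(t1 + t2))  # 0.0: no branching in the OT limit
```

Note how the formulas only involve the three flow values, which is one way to see why the angle conditions carry over to $\mathcal R^d$: the three incident edges always span a plane.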
\n\nWe strongly agree with the reviewer that the generalization to higher dimensions is of great interest for various applications of BOT. We will make sure to put more emphasis on the mentioned aspects in the revised version of our paper.\n\n\n**2 Most arguments and constructions are generalizations**\n\nScience is incremental and therefore naturally builds on existing, approved methods. Our work on the generalization of many existing approaches is an important contribution to unify them in a larger framework, which we see as a strength of our work, not as a weakness.\n\n\n**3 Is our theoretically oriented work a good fit for NeurIPS?** \nWe agree with the reviewer that the emphasis of our work lies on the theory. However, amongst the most influential NeurIPS papers of the past years [I], we find several works which are largely theoretical in thrust and were appreciated by the community. \n\n[I] https://www.paperdigest.org/2021/02/most-influential-nips-papers/", " Thank you very much for your positive review and your helpful feedback! In the following, we provide responses to your comments.\n\nRegarding answers to the following topics, please have a look at the corresponding part of our overall response:\n+ Structuring our theoretical and practical contributions\n+ Comparing our solver against existing approaches\n+ The practical side of BOT on manifolds\n\n**1 Details on the greedy topology optimization**\n\nIn our greedy topology optimization (see Sect. 6.2), at each iteration the current best tree topology is modified by deleting an edge and replacing it with a new one. There are a number of design choices involved in this process. We obtained excellent results with choices that we considered intuitive. But surely this field is wide open for additional experimentation, which we consider beyond the scope of this already substantial work. Still, we report a few additional experiments here: \n\n**Influence of the edge sampling kernel.**\nIndeed, one hyperparameter of particular interest is the kernel (chosen to be Gaussian) and its width (chosen to be $d\\_{min}$), which together define the replacement probability of the edges. To study its influence on the performance we varied the width of the Gaussian kernel $\\exp(-d^2/(\\omega \\, d\\_{min})^2)$ by tuning the parameter $\\omega$. Based on 150 random problems of various sizes ($n \\in \\\\{10, 20, 30, 50, 70, 100, 150, 200\\\\}$), we calculated the average cost ratio of the heuristic with given $\\omega$ to the default of $\\omega=1$:\n\n|**kernel width $\\omega$**| 0.1 | 0.5 | 1 | 2 | 3 | 4 | 5 | 10 |\n|--|--|--|--|--|--|--|--|--|\n|**average cost ratio** | 1.002 | 0.999 | 1.0 | 1.006 | 1.012 | 1.018 | 1.023 | 1.041 |\n\nIn fact, the default choice of $\\omega = 1$ is quite strong and relatively robust given that $\\omega = 0.5$ or $\\omega = 0.1$ work similarly well. Clearly, wider kernels lead to larger (i.e. less local) changes of the topology. At later stages of the algorithm most of these topology changes will be unfavorable, explaining why the algorithm terminates with comparatively less optimal solutions. \n\n**Scaling of the greedy algorithm.**\nIn Section 6.2 we report that the average number of iterations until convergence roughly scales as $O(n^{1.4})$, where $n$ is the number of terminals in the BOT problem. This is merely an empirical observation, not a theoretical guarantee. The number of iterations depends on the stopping criterion, the initial topology guess, the kernel defining the sampling of replacement edges and other design choices. Within the experiment described in the above paragraph, we could confirm that wider kernels, which encourage exploration, required more iterations. Consequently, for wider kernels the number of iterations scales roughly as $O(n^{1.7})$ rather than $O(n^{1.4})$.
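For concreteness, here is a simplified sketch of the kernel-based edge sampling discussed in this section (our own function names; the admissibility filtering of candidate edges is omitted):

```python
import math
import random

def point_segment_distance(c, p, q):
    # Distance from point c to the line segment [p, q] (cf. Eq. (33)).
    dx, dy = q[0] - p[0], q[1] - p[1]
    denom = dx * dx + dy * dy
    t = 0.0 if denom == 0 else max(0.0, min(1.0,
        ((c[0] - p[0]) * dx + (c[1] - p[1]) * dy) / denom))
    return math.dist(c, (p[0] + t * dx, p[1] + t * dy))

def sample_replacement_edge(c, edges, pos, omega=1.0):
    """Sample the edge that node c reconnects to, with probability
    given by a Gaussian kernel of the node-edge distance."""
    dists = [point_segment_distance(pos[c], pos[i], pos[j])
             for i, j in edges]
    d_min = max(min(dists), 1e-12)  # guard against c lying on an edge
    weights = [math.exp(-(d / (omega * d_min)) ** 2) for d in dists]
    return random.choices(edges, weights=weights, k=1)[0]
```

With the default $\omega = 1$, an edge twice as far away as the nearest candidate is roughly $e^{-3} \approx 0.05$ times as likely to be drawn, which keeps the topology changes local.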
\n\n\n**2 The difficulty of the analytical proof**\n\nWe cannot rule out the existence of a fully analytical proof of the inequalities listed in Prop. 4.2. However, the main obstacle in the analytical treatment arises from the nature of the $\\arccos$-function (see Eq. (3)) and its derivatives, which are hard to treat analytically. Fortunately, the analytical results obtained by our careful reasoning were just enough to formulate a suitably tight lower bound (Eq. (23)) for $\\Gamma_{2,1}$ that enables the use of a feasible numerical scheme.\n\n**3 Clarification on Figure 3(c)**\n\nFigure 3(c) shows a stand-alone illustration of the central angle property used in the recursive construction (cf. Sect. 3.2). The central angle theorem requires only two points. The two nodes were denoted $a\\_1$ and $a\\_2$ to emphasize the usage of the central angle theorem in the construction.\n\n**4 Small remark on algorithms**\n\nAll algorithms in the appendix have an empty line prior to the return statement. We thank the reviewer for the keen eye and will remove these empty lines in the final version.", " 
Rather we can benefit from the existing fast solvers and build on them. For instance, for BOT problems with $\\\\alpha \\\\approx 1$, the OT solution provides a strong initial guess for the greedy topology optimization.\n\n- **Comparison against other BOT solvers (CZsf, ooE5)**: Unfortunately, the BOT solvers that have been proposed in the literature so far do not have code publicly available. We contacted the corresponding authors, but either did not get an answer or the code had not been maintained to be shared, with one notable exception: The author of [21] generously did share his code with us, but as mentioned in the beginning of Sect. 6, the algorithm requires some user supervision. Due to the large set of hand-crafted rules, this algorithm also requires a certain level of expertise. Still, we conducted experiments to the best of our ability, and png files with the results of the algorithm can be found in the supplemental material. Visually, these look worse than ours. Indeed, the algorithm generates unnecessary extra branching points with degree 2. Moreover, when $\\\\alpha=1$, i.e. the OT case, we observe that the algorithm still generates branches, while OT is characterized by the absence of branching. Finally, we stress the fact that for such a small example (6 terminals) the algorithm needs around 30 seconds, without counting the time invested in the supervision, while ours just takes a fraction of a second. \nThe authors of [D] approach the BOT problem by phrasing it as a limit of functional minimization problems. Their algorithm discretizes the plane and the continuous cost function in order to approximate the optimal function solution. Thus, a fair comparison with our algorithm is not straightforward, since on the one hand our output is a graph while theirs is a discretized function. On the other hand, the cost they minimize is an approximation of the actual BOT cost. We have tried to reimplement their approach, but could not make the optimization converge. \nConsequently, comparison against other algorithms forces researchers to reimplement algorithms described in the literature, which may have partially hindered the more practical development of BOT. By making our code available we hope to aid the evolution of the field.\n\n_References_ \n[C] Bonneel, N., Van De Panne, M., Paris, S., & Heidrich, W. Displacement interpolation using Lagrangian mass transport. (2011) \n[D] Oudet, Edouard and Santambrogio, Filippo. A Modica-Mortola approximation for branched transport and applications. (2011)", " **4 BOT on Riemannian manifolds (CZsf, ooE5)**\n\nWe find that the generalization of BOT to curved surfaces is not extensively studied yet, despite being interesting both from a theoretical and practical perspective. Our results go beyond existing work in the field of the Steiner tree problem [E] and our approach can be used to further generalize optimality criteria from Euclidean space to manifolds (see appendix F.1). Regarding example use-cases of BOT on manifolds, the most important example is surely BOT on the sphere to model transportation on planet earth. Furthermore, for potential applications of BOT in data science, it is a very desirable property to be able to restrict transportation plans to a given data manifold.\n\n**The practical side of BOT on manifolds.** Although much of the theory generalizes nicely to Riemannian manifolds, the generalization of the practical algorithms is highly non-trivial. 
Unlike in Euclidean space, realizing the optimal branching angles is a necessary but no longer sufficient condition for relatively optimal solutions. For instance, the meridians of three terminals located in the southern hemisphere at longitudes $0^\\circ$, $120^\\circ$, $240^\\circ$, will intersect at both poles at angles of 120$^\\circ$, which is the optimal angle for $\\alpha=0$. Nonetheless, only the south pole is the optimal branching point. In essence, the geometry optimization aims to assign simultaneously to each branching point the coordinates of the weighted $L^1$ center of mass of its neighbors, a problem that is considered in [F, G]. The topology optimization step of our algorithm could be easily generalized if the geodesic distance can be computed. Due to these obstacles, previous works of the Steiner Tree problem ($\\alpha=0$) have focused mostly on the sphere as important special case [H].\n\n**5 Structuring our theoretical and practical contributions (ooE5, CZsf)**\n\nOur paper comprises both theoretical and practical contributions which, in our mind, intertwine. For instance, the algorithm presented in Sect. 3, whose application is constrained to BOT in the two dimensional plane, establishes the theory for Sect. 4 (BOT properties) and 5 (BOT on manifolds). On the other hand, the studied theoretical BOT properties aid us in the construction of an efficient heuristic algorithm to approximate the optimal BOT solution (Sect. 6). We have considered splitting our paper (as suggested by reviewer ooE5) but we believe that the results presented in our work can be better understood together. Therefore, we have advocated for a self-contained manuscript which walks through the main results in the main paper, and provides the more technical details in the appendix. For the final version, we will try and make some of the technical results more accessible, and we will add more concrete references where needed. \n\n(Last not least we try our best to stem the bias towards smallest publishable units that can be observed occasionally.)\n\n_References_ \n[E] Xin-yao, Jiang. The Steiner problem on a surface. (1987) \n[F] Afsari, Bijan. Riemannian $L^{p}$ center of mass: existence, uniqueness, and convexity. (2011) \n[G] Afsari, Bijan, Roberto Tron, and René Vidal. On the convergence of gradient descent for finding the Riemannian center of mass. (2013) \n[H] Dolan, John, Richard Weiss, and J. MacGregor Smith. Minimal length tree networks on the unit sphere. (1991) ", " We thank all reviewers cordially for the time invested in reviewing our paper and appreciate their helpful comments. We are very happy to read that the reviewers find that the topic of our paper is important (CZsf, ooE5), that our contributions are novel, sound and impactful (ooE5, mpG7, C23R) and that our paper is clearly written (CZsf, mpG7). We will address questions raised by multiple reviewers here and answer to individual concerns in separate replies. Numbered citations refer to the references of the paper, while alphabetical citations refer to new references.\n\n\n**1 Relevance of BOT to ML community and concrete applications (ooE5, C23R)**\n\nWe all agree that \"standard\" optimal transport (OT) is by now an important tool in ML [1, 4, 20]. At the same time, routing problems have become a popular problem to challenge machine learning and amortized optimization algorithms with difficult optimization problems [3, 15]. 
In our eyes, branched optimal transport (BOT) provides the ML community with an interesting model, a very difficult challenge and an important additional tool: \n\n- **Optimizing BOT engenders non-trivial structure**: Many machine learning problems such as tracking of divisible targets (computer vision), skeletonization (image analysis), trajectory inference (bioinformatics) come with input that is essentially continuous (images, distributions) and require structured output that is discrete (lineage trees, cyclic and acyclic graphs). To our mind, the very transition from continuous to discrete is one of the most interesting aspects (and an unsolved problem) in current machine learning research. It is also a problem that cannot be solved by a mere upscaling of standard deep learning architectures. Now, BOT offers a mathematical formalism that is deceivingly simple (a one-liner objective function, plus constraints on mass conservation, see Eq. 1) and yet engenders non-trivial structure as soon as the exponent $\\alpha$ becomes smaller than one. We believe that BOT can be a highly instructive toy problem for machine learning, and maybe more: a future tool in the community's box. \n\nBelow we speculate on future use-cases, the first being one that we are actively looking into: \n\n- **ML for Science: single cell transcriptomics**: \nA challenging task in single-cell transcriptomics is to study the temporal evolution of cells developing from stem cells into highly specialized cells. Already popular approaches exist based on standard OT, such as [A] which states that cells behave like \"trains moving along branching railroad tracks\", but which yields only diffuse assignment fields. Biologists are interested in sparser abstractions of the raw data, which we believe BOT might provide. \n\n- **ML for Science: evolutionary optimization**: \nUnsurprisingly, a great variety of biological systems practically solve BOT in ambient space such as slime mold (Physarum polycephalum) [B], neurons, vasculature or ant colonies. It is interesting to see i) how well BOT approximates these systems, ii) how well these very different systems solve BOT, and iii) to estimate the systems' respective $\\alpha$-exponent which might afford new insights into the cost tradeoffs these systems have to implement. \n\n- **ML for Science: robotics**:\nFor path planning in adverse conditions, swarms of robots or drones may choose to travel together. This line of work has nontrivial ethical concerns, which is why we abstain from it. \n\n- **BOT for hierarchical assignment between two spatially ordered sets**: One of the most popular applications of standard OT is that of measuring the dissimilarity between two sets of points/distributions. While OT only studies the individual assignment of each element, BOT agglomerates the trajectories of different elements. Thus, BOT, in addition to the transportation cost, yields natural clusters on both sets (at no extra cost), revealing a more structured relation between them. The parameter $\\alpha$ regulates the level of coarseness of the correspondence (see Fig. 1).\n \nOn the question of NeurIPS scope: Optimization (convex, non-convex) is one of NeurIPS areas, according to the call for papers. We introduce an interesting problem that requires both, convex and non-convex optimization, along with its analysis and an optimization heuristic. 
More importantly, we expect BOT to be of potential interest not only in operations research but also in other fields including biology, environmental science and social sciences. We hope that our illustrations tellingly convey that BOT is of interest to the ML community. In the final version of our manuscript we will emphasize further the relation of BOT with ML by stressing the aforementioned points.\n\n_References_ \n[A] Schiebinger, Geoffrey, et al. Optimal-transport analysis of single-cell gene expression identifies developmental trajectories in reprogramming. (2019) \n[B] Kramar, M., Alim, K. Encoding memory in tube diameter hierarchy of living flow network. (2021)", " The authors consider the branched optimal transport (BOT) problem with multiple sources. There is a finite number of sources with supplies and sinks (terminals) with demands located at fixed positions. We optimize the transportation between sinks and demands with respect to a directed edge-weighted graph and positions of nodes. The edges interconnect the terminals with the help of additional nodes (branching points). The edge direction indicates the direction of mass flow. The edge weight specifies the absolute flows.\n\nFirst, the authors investigate the topology and geometry of BOT solutions. They prove sufficient and necessary conditions for a BOT solution to be optimal for a chosen topology (lemma 2.1). Then the authors proposed a geometric approach to optimize BOT solutions, a generalization of the previously proposed approach, developed for BOT problems with a single source.\n\nSecond, the authors consider conditions under which coupled branching points (BPs) are not optimal, which can be used to improve the transportation cost of a BOT solution. In particular, they proved that topology could be improved if its relatively optimal solution (ROS) contains coupled BPs. \n\nThird, they consider the case of two-dimensional Riemannian manifolds embedded into 3-dimensional Euclidean space. The authors proved a necessary condition for a ROS.\n\nFinally, they propose heuristics and numerical optimization, which contains an effective algorithm for geometry optimization, followed by a heuristic for topology optimization. The authors claim that the paper is the first to propose heuristics for multiple sources, which do not require user supervision. Strengths\n- The authors consider a problem statement of a branched optimal transport (BOT) problem with multiple sources. Although the problem statement is purely theoretical, many real-world problems can be formulated in this way. So I consider the topic of the paper important.\n- The results seem novel: the paper contains many new results about the properties of ROS for the BOT problem and new algorithms.\n- The text of the paper is clearly written.\n\nWeaknesses\n- The paper is a bit theoretical. In particular, the authors did not consider solutions to any applied problems based on the proposed methods.\n- I would propose to add some additional ablation studies; see comments below. - The paper (if we also consider the results in the Appendix) is very long. Still, I think it could be interesting to see some corner cases:\n1) The authors mentioned methods from [26, 28] for the single source case. What is about comparing their performance to the proposed approach? Also, a comparison with the method from [21] can be made.\n2) For alpha = 1, we get an OT problem. 
What is about comparing the proposed approach to some standard OT solvers?\n3) Performance of the simulated annealing procedure significantly depends on how the probability of sampling a particular edge is defined and how the normalizing factor d_min is defined. Any empirical results on how this influences the performance?\n\n- Description of Algorithm 2 in Appendix: step 7 is missing?\n\n- On the one hand, the theoretical part of the paper is well written; on the other hand, I think some additional structuring of the text is needed. The paper proposes a novel algorithm. Different parts of the algorithm are scattered across the main text, while their detailed description is in the Appendix. I would propose to include some description of the main steps of the algorithm and references to the corresponding sections of the main text and the Appendix where the detailed description is provided.\n\n- Figure 3: it seems that in Figure 3(c) a_0 is missing\n\n- Section E.1.4 about the “numerical” proof for the remaining parameter space. Any comments on specific technical difficulties why it is impossible to provide analytical proof?\n\n- Any fundamental qualitative explanation of the magic scaling n^{1.4} in line 1089 in the Appendix? What is the reason for such scaling?\n\n- Any experiments for the Riemannian case? The paper does not have any negative societal impact. It can have even positive societal impact since some resource allocation problems can be solved using the proposed approach, which is essential from ESG perspective.\n\nI think that the limitations of the proposed approach are also adequately addressed.", " The authors perform a theoretical study of the branched optimal transport (BOT) in the plane, and also suggest a heuristic optimization strategy for the problem. On the theoretical side, they give a geometric construction algorithm for plane BOT solutions, argue for degree 3 branching of the optimal networks via a generalized argument of Bernot et al., and generalize this statement to 2D Riemannian manifolds embedded in $R^3$. \n\nOn the numerics and optimization side, their algorithm has two subalgorithms: one aimed at optimization of branching points given a tree topology, and another aimed at optimizing the tree topology. The first is inspired and drawn from a method of Smith for finding Steiner trees, while the second follows an informed heuristic based on node-edge distance to delete random edges and replace them with improved topologies. The method is tested on small numbers of terminals and is shown to achieve optimality quickly in many instances compared to the brute-force approach. Strengths:\n\n- The theoretical arguments are rather extensive and seem valid, as far as I have checked them.\n- The problem tackled is an interesting one, from a theoretical perspective, and does not seem to be extensively studied. \n- The result on limited degree of branching points is a nice, easy to understand, and impactful result.\n\nWeaknesses: \n- The paper does not cover or recommend any concrete ML applications of the plane BOT problem.\n- Nearly all of the theoretical arguments and constructions, as well as the optimization procedure for fixed topology are generalizations of ideas presented in other approaches.\n- The result on generalization to 2D embedded surfaces seems relatively straightforward and not all that significant perhaps. 
Especially without algorithms adapted to that scenario, or example uses.\n- The exposition is challenging to follow, as it generally provides brief glimpses into various theoretical constructions, and then directs the reader to extensive appendices.\n- The numerical section is rather brief, given the applied nature of NeurIPS. Only one simple experiment is referenced (though more are in the appendix.\n\nThis paper leaves me wandering if it would be better split into a theoretical and numerical investigation. The theoretical portion could be submitted to a different venue, while the numerical portion could be more extensively explored and detailed. As it stands, the paper is sort of an odd omnibus. 1. Do you have any specific machine learning applications in mind for this problem? Many examples are given or suggested for OT with convex costs, but not for OT with concave costs. In particular, I imagine the plane restriction might be rather limiting, since most data in ML is high-dimensional.\n2. Do you have more detailed information on runtime that could be placed in the main text and/or summarized there? I think many researchers looking to apply your method would be interested to have them there.\n3. Do you have any insights on whether the techniques or heuristics would be valid for the problem in higher dimensions?\n4. Why didn't you compare to the algorithm of Oudet and Santambrogio (A Modica-Mortola approximation for\nbranched transport and applications)? The authors have acknowledged the limitations of their method, which mainly center around limited scope, but have not motivated why this limited scope is still interesting.", " The paper studies Branched Optimal Transport (BOT). The problem consists of finding the optimal topology, and achieving the optimal geometry of the BOT network of multiple sources and sinks, given the optimal topology. While it is known that the optimal topologies for such embeddings are trees, the authors make a further step by show that, in the optimal topology, every branching point has at most 3 neighbors. Given the optimal topology, this work also proposes a new geometric optimization strategy for the case of multiple sources, which is generalized from an approach in the literature. The results also generalize to BOT on Riemannian manifolds. Finally, based on the developed results, they propose heuristic algorithms to solve the BOT problem.\n My main concern is that the paper's scope might not be much relevant to the ML community, while certain technical contribution can be interesting. The theoretical results can fit the mathematics community more for its combinatorial and geometrical nature, or fit the OR community for its relevance to routing and/or transportation systems. Yet, I cannot see how this paper can much benefit the ML community. Note that though OT is a simpler formulation than that considered in this work, it has applications in GAN vastly, domain adaptation, and color transfer, etc. so research aiming to solve OT competitively or study OT has been helpful. \nThe authors may consider well motivating the paper by some concrete applications in ML, whereby BOT is part of the problem formulation or at least conveys some idea/intuition. N.A. As above. ", " The paper under review studied the problem of branched optimal transport with multiple sources.\nAs a generalization of OT, BOT optimizes the transportation by allowing extra branching points. 
\nA hyperparameter $\\alpha$ is used to control the efficiency of transporting mass together.\nThe transportation flow is captured as a graph, whose vertex set contains the sources, sinks and BPs.\nIn order to obtain an optimal solution, \none needs to find both how vertices are connected (topology) and where the BPs are located (geometry).\n\nUnder the above framework, the authors proved that for a graph to be optimal, \nthe degree of a branch point has to be 3. \nMoreover, the authors provided an analytic approach and a numerical algorithm to find the optimal locations of BPs.\nFurthermore, the authors presented an algorithm to approximate the optimal topology of BOT.\nIn addition, the authors considered generalization to 2-D Riemannian manifolds.\n Strengths:\nThe paper is well-written and easy to follow.\nThe idea of allowing multiple sources for BOT is natural, \nand the interpolation between OT and the Euclidean Steiner tree problem is interesting. \nThe results are novel and sound.\n\n\nWeaknesses:\n\nA little concern regarding the motivation and practicality.\n\nMotivation: it would be nice if the authors could provide some further explanation of why BOT would be useful.\nFor example, describe a practical example where combined transportation is feasible,\nthen in that example, how does one pick $\\alpha$ to model the case (illustrate that OT is not optimal)?\nAnd further, how would one use the algorithms (or theoretical results) developed in this paper to obtain an optimal plan?\n\nPracticality: the running time is not shown in the paper; how fast is the algorithm compared to Sinkhorn for OT?\nI.e., for a given problem, pick $\\alpha = 1$, $\\alpha = 0.5$, run Sinkhorn for $\\alpha = 1$, \nrun Alg 1 and Alg 3 (from the supp) for $\\alpha = 0.5$, and compare the running times. \n Q1: Regarding the topology of $T$: it is unclear what information is enclosed in the topology of $T$.\nFrom the description in the paper, it seems only the graph structure is considered as the topology of a tree.\nHowever, from the setup (eq 1), the number of branch points ($|B|$) and the weights on edges ($m_{ij}$) are all variables.\nHow does one optimize these? In particular, for instance, if two edges go out of a source, how does one \ndetermine how much flow each edge shall carry? \n\n\n\nQ2: (continuation of Q1) In line 106, the authors stated \"first, we determine all edge flows (see Sect. 2)\".\nHow are the edge flows chosen? I could not find this in Section 2. Are the edge flows randomly initialized, \nthen optimized somehow?\n\n\nQ3: Line 289: \"Then, one calculates the distance $d(e, c)$\nbetween $c$ and every edge $e = (i, j)$ in the larger component\". How is the distance $d(e, c)$ defined here?\n\nQ4: Some interpretation of \"no degree greater than 3 branch points \nare possible for optimal solution\" would be nice.\nIf it were just a graph, one could easily switch between\na degree-4 branch point and two degree-3 branch points.\nWhy is this not feasible when the costs are taken into account?\n\nQ5: Is the optimal solution for BOT unique? \n\n\n\nMinor points: \n- the change of disk sizes in Fig 1 is not that visible; it could be scaled a bit more.\n\n- it is probably better to move the algorithms to the main text if space allows.\n\n- line 293, \"node c\" --> \"component c\".\n The authors mentioned that it is difficult \nto verify the efficiency of the proposed algorithm for large BOT problems. \nMy guess is that the running time will be long for the proposed BOT problem compared to OT.\nConsidering that BOT is an NP-hard problem, this is expected.
It would be nice if the authors could say a bit \nmore along this line." ]
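For context on the questions above about eq (1), the edge flows $m_{ij}$, and the role of $\alpha$: the cost being optimized is, in the standard Gilbert-Steiner form that these reviews describe, the following (the notation is reconstructed from the reviews themselves, not copied from the submission):

```latex
% Branched-transport cost over a tree T with node positions x_i (sources,
% sinks, and free branching points B) and edge flows m_{ij} that satisfy
% flow conservation at every node:
\min_{T,\;\{x_i\}}\;\sum_{(i,j)\in E(T)} m_{ij}^{\,\alpha}\,\lVert x_i - x_j\rVert,
\qquad \alpha \in [0,1].
```

Setting $\alpha = 1$ recovers classical OT (merging flows brings no saving), while $\alpha = 0$ recovers the Euclidean Steiner tree problem, which is the interpolation that the reviewers note.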
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "SG5LvrbfGKi", "KC-TzCy628c", "enb4P-hXN5k6", "s4mT_ASaojb", "XMmavrpH0Hu", "ONC5PZ5g9L", "2tb0IH8aOqT", "pLi6wTF0b0S", "jQqo_E7N6go", "mpUUZQmbDx", "eGSHJG5gccD", "JmRxS9U7AcL", "JmRxS9U7AcL", "nips_2022_AezHeiz7eF5", "nips_2022_AezHeiz7eF5", "nips_2022_AezHeiz7eF5", "nips_2022_AezHeiz7eF5", "nips_2022_AezHeiz7eF5" ]
nips_2022_cy1TKLRAEML
Is $L^2$ Physics Informed Loss Always Suitable for Training Physics Informed Neural Network?
The Physics-Informed Neural Network (PINN) approach is a new and promising way to solve partial differential equations using deep learning. The $L^2$ Physics-Informed Loss is the de facto standard in training Physics-Informed Neural Networks. In this paper, we challenge this common practice by investigating the relationship between the loss function and the approximation quality of the learned solution. In particular, we leverage the concept of stability in the literature of partial differential equations to study the asymptotic behavior of the learned solution as the loss approaches zero. With this concept, we study an important class of high-dimensional non-linear PDEs in optimal control, the Hamilton-Jacobi-Bellman (HJB) Equation, and prove that for general $L^p$ Physics-Informed Loss, a wide class of HJB equations is stable only if $p$ is sufficiently large. Therefore, the commonly used $L^2$ loss is not suitable for training PINN on those equations, while $L^{\infty}$ loss is a better choice. Based on the theoretical insight, we develop a novel PINN training algorithm to minimize the $L^{\infty}$ loss for HJB equations, which is in a similar spirit to adversarial training. The effectiveness of the proposed algorithm is empirically demonstrated through experiments. Our code is released at https://github.com/LithiumDA/L_inf-PINN.
Accept
The reviewers reached a consensus that this paper meets the bar for being accepted at NeurIPS, and therefore the AC recommends acceptance. Please refer to the reviews and the authors' responses for the reviewers' opinions on the strengths and weaknesses of the paper.
train
[ "lzKQ7ALCOtT", "qklb0Gkp8xD", "xfRTtnQ-uCp", "Nm1GEpESFoF", "kcZC1KDo9yI", "ekVW-eJqusS", "wvT6LIwTqTT8", "Zkqe2E3wDw_", "4kfqLYrhs6-", "5rkZpSq4tFA", "gDdT72sqm7p", "-v0u8h9LLck", "CAEN1eG69eu", "yIgmk8k0DJ", "1WLC2q3EYV_", "w5Wcc7psQBQ", "DTrMs5iM7Mq" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your re-evaluation and rating update. We will include the relevant discussions on Sobolev norms and $L^p$ norms in the next version of our paper.", " Thanks for the explanation, I suggest making some of these more concrete in the paper. Meanwhile, I have updated my score.", " Thanks for your feedback! We notice that your concern are two-fold: whether the negative results of PINN holds when the error is measured with the $L^p$ norm and why we investigate the setting under the Sobolev norm.\n\n**Regarding the first concern.**\nTheorem 4.4 has already indicated that vanilla PINN will fail when the error is measured in $L^p$ norm. To be specific, since $L^p$ norm is a special case of Sobolev norm ($L^p=W^{0,p}$ by definition), we can set $m=0$ in Theorem 4.4 and obtain that for any $p\\geq 1$, $\\\\|u^*-u_\\theta\\\\|_p$ could be arbitrarily large if the approximator $u_\\theta$ is learned with $L^2$ loss.\n\n**Regarding the second concern.**\nIn many practical PDE problems, people not only care about the approximation error of $u$ but also $\\nabla u$. Therefore, the Sobolev norm is a more proper metric than the $L^p$ norm in PDE literature [1-3] since it captures the properties of both the value and the derivatives of a function.\n\nTo be concrete, for our study on HJB equations, as is shown in Remark B.2 in Appendix B, $\\nabla u^*$ ($u^*$ denotes the exact solution) is of great significance in application since it's closely related to the optimal control function. Therefore, it is essential to obtain an accurate approximator for both $u^*$ and $\\nabla u^*$, which the Sobolev norm can precisely characterize.\n\nWe hope our explanations can address your concern and you can re-evaluate our work based on that.\n\n[1] Evans L C. Partial differential equations[M]. American Mathematical Soc., 2010.\n\n[2] Gilbarg D, Trudinger N S, Gilbarg D, et al. Elliptic partial differential equations of second order[M]. Berlin: springer, 1977.\n\n[3] Lieberman G M. Second order parabolic differential equations[M]. World scientific, 1996.", " Thanks, for addressing my concerns! One follow-up on question 2: My question was more pointed toward whether the negative results on regular PINNs also hold when targeting accuracy in $L^p$ spaces instead of Sobolev spaces or why one should care about the accuracy in Sobolev norms instead of p-norms. Put another way: Why should one care about your theory on accuracy in Sobolev norms, given that most often, accuracy in p-norms is reported?", " We have finished the paper revision and uploaded the updated version of our paper. The amendments are:\n\n+ We rewrite the derivation of equation, taking state-dependent control functions into consideration (Appendix B). \n+ We add performance comparisons between our model and baselines under more evaluation metrics, including $L^2$ norm and $W^{1,1}$ norm (Appendix G.2).\n+ We add comparisons with baselines on more equations (Appendix G.3).\n+ We show the time series of training loss and test error of original PINN and our method (Appendix G.4).\n+ We make detailed discussions on the failures of $L^p$ loss with large but finite $p$ (Appendix H). \n+ We add discussions on limitations and future directions (Section 7).\n\nThe author-reviewer discussion deadline is approaching. We sincerely hope that the reviewers can re-evaluate the quality of our work based on our responses and revision. 
If you have any further questions, please feel free to discuss them with us in the coming days, and we are willing to address your concerns.\n\nRegards,\n\nPaper 1876 Authors", " We are delighted to see that our response has addressed your concerns. We will follow your suggestions to add more derivations and discussions in the appendix to illustrate the generality of the equation we consider.", " Thanks to the authors for the detailed response. Most of my concerns have been addressed. I'll update my rating. As for reformulating the HJB equation with a state-dependent control function into a PDE with a state-independent control function, I would suggest the authors include the derivation and discussion in the appendix because it does not seem obvious. ", " Thanks for your careful review! We respond to your questions below.\n\n**Response to Question 1&2.**\nIt is a good suggestion. We have examined the quality of the gradient of the learned solution in Figure 3, Appendix G. Following your advice, we further conduct experiments to compute the $L^2$ and $W^{1,1}$ relative errors of our model and the baseline methods on the 100-dimensional HJB Equation (Eq.(12)). The results are shown below: \n\n| | $L^1$ | $L^2$ | $W^{1,1}$ |\n| ---- | ---- | ---- | ---- |\n| Original PINN | 3.47% | 4.25% | 11.31% |\n| Adaptive time sampling | 3.05% | 3.67% | 13.63% |\n| Learning rate annealing | 11.09% | 11.82% | 33.61% |\n| Curriculum regularization | 3.40% | 3.91% | 9.53% |\n| Ours | **0.27**% | **0.33**% | **2.22**% |\n\nClearly, our approach significantly outperforms the baselines in terms of both $L^p$ norms and Sobolev norms. Indeed, the Sobolev norm is stronger in the sense that if a PDE is $(L^p, L^q, W^{1,r})$-stable, then it must be $(L^p, L^q, L^{r})$-stable by definition. We will include these results in the paper revision to make our claims more convincing.\n\n**Response to Question 3.**\nWe follow the standard practice in other fields such as NLP and CV [BERT, RoBERTa and Vision Transformers] to use linear learning rate decay (i.e., decrease the learning rate linearly to 0 during training) for all experiments, including baselines and our models (see Appendix F). This strategy has been shown to lead to more effective optimization than using a constant learning rate. \n\n**Response to Question 4.**\nThanks for the suggestion! We will include error/loss-vs-time plots and add some corresponding discussions in the paper revision.
Since it's hard to include figures in the response, we present our error/loss-vs-time results in the following tables: \n\nError/loss-vs-time result of the original PINN\n| iteration | 1000 | 2000 | 3000 | 4000 | 5000 |\n|--------------------------|--------|--------|--------|--------|--------|\n| $L^2$ loss | 0.098 | 0.088 | 0.070 | 0.584 | 0.041 |\n| $L^1$ relative error | 6.18% | 5.36% | 3.86% | 3.94% | 3.47% |\n| $W^{1,1}$ relative error | 17.53% | 17.67% | 14.83% | 14.40% | 11.31% |\n\n\nError/loss-vs-time result of our method\n| iteration | 1000 | 2000 | 3000 | 4000 | 5000 |\n|--------------------------|--------|--------|--------|--------|--------|\n| $L^{\\infty}$ loss | 11.841 | 9.352 | 2.404 | 1.605 | 0.711 |\n| $L^1$ relative error | 15.22% | 4.26% | 0.97% | 1.10% | 0.27% |\n| $W^{1,1}$ relative error | 21.91% | 18.62% | 5.14% | 4.96% | 2.22% |\n\nIt's clear that for the original PINN approach, the $L^2$ loss drops very quickly during training, while its $W^{1,1}$ relative error remains high. This result indicates the optimization is successful in this experiment, and that the stability property of the PDE leads to the high test error. By contrast, our proposed training approach enables the test error to go down steadily during training, which aligns with the theoretical claims.\n\nWe sincerely hope that our responses address your concerns and that you can re-evaluate the quality of our submission. We are also willing to discuss with you if you have any further questions.", " Thank you very much for supporting our work. We respond to each of your concerns below. \n\n**Regarding the focus of our work.**\nThis work is the first to rigorously demonstrate that choosing a proper loss function is critical for some practical (and important) PDEs. We believe the loss function design in PINN is under-explored and agree with the reviewer that the current theoretical analysis can be extended in many aspects, such as relaxing the assumptions and studying other PDEs. We will leave these as future work to provide a deeper understanding of PINN losses and differential equations.\n\n**Regarding the contradiction between the theorem and large-$p$ training.**\nThere is no contradiction between our theorems and empirical results. Theorem 4.3 focuses on the approximation ability, which indicates that if we have a model whose $L^p$ loss is small, it will approximate the true solution well. The empirical results in Table 2 demonstrate the optimization difficulty of learning such a model. \nIntuitively, we randomly sample points in the domain/boundary in each training iteration to calculate the loss. When $p$ is large, most sampled points will hardly contribute to the loss, which leads to inefficiency and makes the training hard to converge. In Algorithm 1, we adversarially learn the points with large loss values, making all of them contribute to the model update (Step 8), significantly improving the model training. \n\nTechnically, directly applying Monte Carlo to compute the $L^p$ loss in experiments will lead to estimates with large variance. For a function $f$,\n$$\n \\int |f|^p \\mathrm{d}x=\\frac 1 N \\sum_{i=1}^N |f(X_i)|^p+O\\left(\\sqrt{\\frac{\\mathrm{Var} |f(X)|^p}{N}}\\right).\n$$\n\nThus, $||f||_p$ suffers from an $O((\\mathrm{Var} |f(X)|^p/N)^{1/2p})$ error.\n\nAs $p\\to\\infty, \\mathrm{Var} |f(X)|^p\\sim ||f||_{\\infty}^{2p}$.
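This blow-up is easy to see numerically. The sketch below uses a sharply peaked toy residual of our own choosing, not a function from the paper: it estimates the minibatch $L^p$ loss by Monte Carlo and reports its relative standard deviation across independent batches, which grows steadily with $p$.

```python
# Toy check of the variance argument above: as p grows, only the few
# samples near the residual peak contribute to the Monte Carlo estimate
# of the L^p loss, so the estimate becomes increasingly noisy.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(-((x - 0.5) ** 2) / 1e-4)  # toy residual, sharp peak

for p in [2, 8, 32, 128]:
    # 200 independent minibatch estimates of E|f(X)|^p with N = 1024 each.
    batches = rng.uniform(size=(200, 1024))
    est = np.mean(np.abs(f(batches)) ** p, axis=1)
    print(f"p={p:4d}  relative std of the minibatch L^p loss: "
          f"{est.std() / est.mean():.2f}")
```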
Therefore, the errors for estimating both Eq.(2,3) and the $L^p$ norm of the residual are very large when $p$ is large.\n\nWe appreciate your question, which helped us realize the problem in the presentation. We will revise the paper accordingly.\n\nIf you have any further questions, please let us know, and we will be happy to address them.", " Thanks for your careful review! We respond to your concerns below.\n\n**Response to Weakness 1.**\nThanks for the question. In this work, we focus on learning an accurate solution. We prove that for some PDEs, accurate solutions cannot be learned by minimizing the $L^2$ loss, although the training is faster. We agree that adversarial training needs more computation and introduces two additional hyper-parameters. This problem has already been tackled by several recent works in the field of adversarial robustness [1-2]. We will try those efficient and robust adversarial training methods on our problem and explore more in this direction.\n\n[1] Wong, Eric, Leslie Rice, and J. Zico Kolter. \"Fast is better than free: Revisiting adversarial training.\" International Conference on Learning Representations. 2019.\n\n[2] Zhang, Dinghuai, et al. \"You only propagate once: Accelerating adversarial training via maximal principle.\" Advances in Neural Information Processing Systems 32 (2019).\n\n**Response to Weakness 2.**\nThanks for pointing out the problem and for the suggestion. Note that not all HJB equations have analytical solutions, with which we can compare the approximation quality of different algorithms. Therefore, we follow previous works and select several equations with closed-form solutions for evaluation. As for the lack of comparison with other baselines, we follow your advice and further conduct experiments on the equations in the appendix using other baseline methods. The relative errors are shown below: \n\n| | $c=1.25$ | $c=1.5$ | $c=1.75$ |\n| ---- | ---- | ---- | ---- |\n| Original PINN | 1.11% | 3.82% | 2.73% |\n| Adaptive time sampling | 1.18% | 2.34% | 7.94% |\n| Learning rate annealing | 0.98% | 1.13% | 1.06% |\n| Curriculum regularization | 6.27% | 0.37% | 3.51% |\n| Ours | **0.61**% | **0.15**% | **0.29**% |\n\nIt's clear that our approach outperforms the other baselines on all these equations. We will include these results in the paper revision.\n\n**Response to Weakness 3 and Question 2.**\nThere is no contradiction between our theorems and empirical results. Theorem 4.3 focuses on the approximation ability, which indicates that if we have a model whose $L^p$ loss is small, it will approximate the true solution well. The empirical results in Table 2 demonstrate the optimization difficulty of learning such a model. \nIntuitively, we randomly sample points in the domain/boundary in each training iteration to calculate the loss. When $p$ is large, most sampled points will hardly contribute to the loss, which leads to inefficiency and makes the training hard to converge. In Algorithm 1, we adversarially learn the points with large loss values, making all of them contribute to the model update (Step 8), significantly improving the model training. \n\nTechnically, directly applying Monte Carlo to compute the $L^p$ loss in experiments will lead to estimates with large variance.
For a function $f$,\n$$\n \\int |f|^p \\mathrm{d}x=\\frac 1 N \\sum_{i=1}^N |f(X_i)|^p+O\\left(\\sqrt{\\frac{\\mathrm{Var} |f(X)|^p}{N}}\\right).\n$$\n\nThus, $||f||_p$ suffers from an $O((\\mathrm{Var} |f(X)|^p/N)^{1/2p})$ error.\n\nAs $p\\to\\infty, \\mathrm{Var} |f(X)|^p\\sim ||f||_{\\infty}^{2p}$. Therefore, the errors for estimating both Eq.(2,3) and the $L^p$ norm of the residual are very large when $p$ is large.\n\nWe appreciate your question, which helped us realize the problem in the presentation. We will revise the paper accordingly.\n\n**Response to Weakness 4 and Question 1.**\nThanks for the question. We kindly point out that the equation we target in the paper is general and can cover settings with state/control dimension mismatch and state-dependent control functions via some simple reformulations. For example, the dimension mismatch issue can be solved by simply padding the vector with smaller dimensionality with zeros, e.g., changing the 3-dimensional control $(0.1,0.2,-0.1)$ to $(0.1,0.2,-0.1,0,0)$, which is a 5-dimensional vector. Following the derivation in Chapter 3 of reference [33], we could also reformulate the HJB equation of an optimal control problem involving state-dependent control functions into a (same) PDE only involving state-independent control functions.\n\n**Response to Question 3.**\nWe thank the reviewer for pointing this out. The absolute value should be a better surrogate for the loss in Algorithm 1. But practically, the $L^2$ loss leads to a smoother gradient, and further experiments verify that this choice has little impact on the performance, with the $L^2$ loss being slightly better.\n\nIf you have any further questions, please let us know, and we will be happy to address them.", " Thanks for your careful review! We respond to each question below.\n\n**Response to Question 1.**\nThanks for the suggestion! We further conduct experiments to compute the $L^2$ and $W^{1,1}$ relative errors of our model and the baseline PINN methods on the 100-dimensional HJB Equation (Eq.(12)). The results are shown below: \n\n| | $L^1$ | $L^2$ | $W^{1,1}$ |\n| ---- | ---- | ---- | ---- |\n| Original PINN | 3.47% | 4.25% | 11.31% |\n| Adaptive time sampling | 3.05% | 3.67% | 13.63% |\n| Learning rate annealing | 11.09% | 11.82% | 33.61% |\n| Curriculum regularization | 3.40% | 3.91% | 9.53% |\n| Ours | **0.27**% | **0.33**% | **2.22**% |\n\nClearly, our approach significantly outperforms the baselines by a large margin under all these evaluation metrics. We will include these results in our paper in the revision.\n\n**Response to Question 2.**\nIn the paper, we are careful not to make a general statement covering all high-dimensional PDE problems, as the properties of different PDEs can vary significantly. However, we believe some general mathematical tools should be developed to analyze Physics-Informed Loss rigorously. This work takes the first step toward tackling the problem, and we will investigate more in the future.\n\n**Response to Question 3.**\nThanks for your reference! Due to space limitations, we selected representative papers in the related work section. Han et al. (reference [11] in our paper) cast the problem into a backward stochastic differential equation (BSDE), which is further modeled by neural networks. The work you mentioned by Nüsken and Richter, along with many others [1-6], is based on this framework. As discussed in Lines 83-87, this approach only learns the solution on a pre-defined time frame, while our PINN-based approach can learn the solution for any time frame.
To the best of our knowledge, few prior works have applied PINN to learn an any-time-frame solution for high-dimensional HJB equations, possibly due to the misuse of the $L^2$ loss. We will add those references to the paper to make the related work section more comprehensive.\n\n[1] Pereira, Marcus, et al. \"Learning deep stochastic optimal control policies using forward-backward SDEs.\" Robotics: Science and Systems (2019).\n\n[2] Yu, Yajie, Bernhard Hientzsch, and Narayan Ganesan. \"Backward deep BSDE methods and applications to nonlinear problems.\" arXiv preprint arXiv:2006.07635 (2020).\n\n[3] Pereira, Marcus, et al. \"Feynman-Kac neural network architectures for stochastic control using second-order FBSDE theory.\" Learning for Dynamics and Control. PMLR, 2020.\n\n[4] Beck, Christian, et al. \"Deep splitting method for parabolic PDEs.\" SIAM Journal on Scientific Computing 43.5 (2021): A3135-A3154.\n\n[5] Pham, Huyen, Xavier Warin, and Maximilien Germain. \"Neural networks-based backward scheme for fully nonlinear PDEs.\" SN Partial Differential Equations and Applications 2.1 (2021): 1-24.\n\n[6] Davey, Ashley, and Harry Zheng. \"Deep learning for constrained utility maximisation.\" Methodology and Computing in Applied Probability 24.2 (2022): 661-692.", " \n**Response to Question 4.**\nThere are a few works that use a GAN loss in PINN training [7-9], mainly in a heuristic way. In contrast, we build a mathematical framework to study the relationship between the PDE and the PINN loss, which can guide practitioners to choose loss functions (e.g., $L^\\infty$) in a principled way.\nAnother disadvantage of GAN training is the well-known hyper-parameter sensitivity and optimization instability, while our algorithm doesn't have such issues.\n\n[7] Yang, Yibo, and Paris Perdikaris. \"Adversarial uncertainty quantification in physics-informed neural networks.\" Journal of Computational Physics 394 (2019): 136-152.\n\n[8] Daw, Arka, M. Maruf, and Anuj Karpatne. \"PID-GAN: A GAN Framework based on a Physics-informed Discriminator for Uncertainty Quantification with Physics.\" Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 2021.\n\n[9] Bullwinkel, Blake, et al. \"DEQGAN: Learning the Loss Function for PINNs with Generative Adversarial Networks.\" ICML 2022 2nd AI for Science Workshop.\n\n\n**Response to Question 5.**\nThe concept of $(Z_1, Z_2, Z_3)$-stability is first proposed in this paper, inspired by the stability theory in the PDE literature. \n\nStability is an important concept in analyzing PDEs, one with a much longer history than PINNs. Roughly speaking, a PDE is stable if small perturbations in the equation can only lead to a slight change in its solution. \n\nFor PINNs, we found the stability theory is essential in understanding the plausibility of the loss function. However, we need to customize the concepts to capture some unique characteristics of PINN, such as the PDE residual and boundary residual in the loss terms. Therefore, we propose the notion of $(Z_1, Z_2, Z_3)$-stable, which can help better characterize the asymptotic behavior of the learned models.\n\n**Response to Question 6.**\nThanks for the question. The question helped us realize that there are some confusing parts in the paper when introducing the concept of stability.
Here we give a more precise explanation and will revise the paper accordingly.\n\nIntuitively, for any function $u_{\\theta}$ with a small loss, it can be imagined that there is another PDE, slightly different from the original PDE, whose solution is $u_{\\theta}$. Here the term \"slightly different\" is defined by the loss. Then the central question becomes whether two such \"slightly different\" PDEs will always have slightly different solutions, which relates to the stability of the PDE. If the answer is yes (i.e., the PDE is stable), a small-loss solution will always approximate the true solution well. \n\nTo be specific, suppose we obtain an approximate solution $u_\\theta$ whose loss is small. \nThen $u_\\theta$ corresponds to the solution to a perturbed PDE,\n$$\n \\begin{cases}\n \\mathcal{L}u(x)=\\varphi(x)+R_1(x)& \\quad x\\in\\Omega\\\\\\\\\n \\mathcal{B}u(x)=\\psi(x)+R_2(x)& \\quad x\\in\\partial\\Omega,\n \\end{cases}\n$$\nwhere $R_1:=\\mathcal{L}u_\\theta -\\varphi,\\ R_2:=\\mathcal{B}u_\\theta-\\psi$ can be seen as perturbations. Ideally, one would hope the approximate solution can be close to the exact solution when the perturbation is sufficiently small, and this is what \"stable\" refers to.\n\n**Response to limitations.**\nThanks for the comments. These comments clearly point out the potential of this research direction. We have responded to some of the points in the list of limitations above and will respond to the other points here. Our definition of $(Z_1,Z_2,Z_3)$-stable is general and could be adapted directly to other PDEs. But determining whether a certain PDE is stable, and which loss is suitable to solve it, may require problem-dependent analysis. This would be a promising future direction. HJB is a large class of PDEs that consists of instances with various properties. In the paper, we are careful not to make a general statement covering all HJB equations. But we believe our technique can be used to analyze any given HJB class.\n\nIf you have any further questions, please let us know, and we will be happy to address them.", " The paper questions the status quo of using L2-loss for optimizing PINNs. Theoretical results show that a learned solution to HJB is (Z1, Z2, Z3)-stable iff the physics-informed loss uses Lp loss with sufficiently large p. Empirical results partially support the theory and show that optimizing L-inf loss will result in lower L1 error in the case of a linear LQG problem. As L-inf loss is challenging to optimize with Adam, the paper proposes an adversarial training algorithm that indeed outperforms optimization of L-inf with Adam. \n Strengths:\n- The work is significant. The broader field of learning surrogate models of PDEs can have wide-reaching impacts in physics, chemistry, fluid dynamics, biology, climate modeling and more. Within the field of physics-informed machine learning, the authors have identified and addressed a very important research question: Do there exist theorems that connect properties of PDEs to the optimal choice of PINN loss function? The choice of HJB equations is important, and developing the theory on HJB equations only should be sufficient for acceptance. \n- The authors provide rigorous and sound theory to prove that a learned solution to HJB is (Z1, Z2, Z3)-stable iff the physics-informed loss uses Lp loss with sufficiently large p.\n- The authors clearly state the research question [L34-35] and include necessary background math, e.g., Def.
4.2.\n\nWeaknesses: \n- One of the authors' main contributions seems to be the notion of (Z1, Z2, Z3)-stable. However, it is still unclear to me 1) if (Z1, Z2, Z3)-stable actually is a novel concept that the authors came up with, 2) what the concept can and cannot be used for, and 3) how the concept relates to the 'stability' in PDEs (see Questions 5-6). \n- It is challenging to fully evaluate the originality of the work, because the related works section is very sparse. I have included some questions to evaluate originality and am willing to raise my score.\n- The empirical evaluation does not fully support the theorems, nor does it fully answer the research question. I have included some questions to evaluate the quality of the empirical section.\n Results and Impact:\n1) The authors evaluate the approximation quality of their method on L1 error (Table 1). What is the justification for only using L1 to empirically define a \"good approximator of the exact solution\" (L34)? Indeed, the proposed L-inf loss results in a lower L1 error than L2 loss on a high-dim. linear LQG problem. But, it is unclear to me if L-inf loss will also result in a lower L2 error. I would be willing to raise my score with a comparison of L2 error and a discussion of it. \n2) Can a general statement be made that L-inf loss will be a more appropriate loss choice for high-dimensional problems?\n\nRelated Works:\n\n3) What is the broader novelty wrt. prior works that use deep learning for HJB? The related works state that there exist \"several works\" that solve HJB with deep learning, yet the authors only mention two works [L82-90]. A quick Google shows that there are other works by, e.g., Nüsken and Richter, 2021. \n4) There exist works that use GANs instead of L2 loss. What is the advantage of using L-inf over the GAN loss?\n5) Is (Z1, Z2, Z3)-stable a new concept developed in this paper, or does it already exist in the literature? If so, what is the citation for (Z1, Z2, Z3)-stable?\n6) It is confusing to me that the authors use the word 'stable' to talk about approximation quality in Sec 4. More specifically, how does (Z1, Z2, Z3)-stable relate to the general definition of stability in PDEs [L93-95]? An \"equation is stable if the solution of the perturbed PDE converges to the exact solution as the perturbations approach zero [6]\"\n - There is no discussion of limitations nor of negative societal impacts.\n- It would be very helpful if the limitations section could list and analyze the assumptions in Definitions/Theorems 4.1-4.4 and discuss what the extra steps would be to adapt the Definitions to other classes of PDEs beyond HJB.\n- Are the theorems truly applicable to *all* variations of the HJB equations?\n- What does (Z1,Z2,Z3)-stable mean and what does it *not* mean?\n- It would be helpful if the authors mention that, while the work is very theoretical, it could be used to improve numerical modeling of applications that violate the NeurIPS ethical guidelines. \n", " This paper studies the objective function in physics-informed learning for HJB equations. The authors propose the concept of stability to analyze what is a good choice of loss function. The theoretical result challenges the common practice of $L^2$ and suggests that $L^p$ with large $p$ gives a stability guarantee. The authors propose an adversarial training algorithm to minimize the $L^{\\infty}$ loss based on their theoretical finding. The proposed method demonstrates superior empirical performance on the simulated high-dimensional problem. Strengths:\n1.
This paper is well-motivated and well-written. Stability analysis is important for using physics-informed learning to solve PDEs. \n2. The theoretical results advance the understanding of loss design in physics-informed learning. \n3. The proposed method demonstrates superior empirical performance compared to other baselines on the simulated high-dimensional problem. \n\nWeaknesses:\n1. Adversarial training is time-consuming, and it introduces more hyperparameters to tune. \n2. The experiment only considers two special cases (including the one in the appendix) of HJB equations which have closed-form solutions. The one in the appendix doesn't compare with other baselines. The empirical evidence is not strong enough for me. \n3. The theoretical results suggest that Lp loss with a large p guarantees better stability than L2. However, the results in Table 2 turn out to be the opposite. The explanation in Section 6.2 seems vague. \n4. The setting seems restricted. More details are provided in the questions. 1. In the stochastic differential equation given by Eq (4), the output of the control function has the same dimension as the state. Also, the control function only depends on time. But I think the control variable usually has a different dimension from the state variable, and the control function should be a function of both time and state. The setting described in Eq (4) seems too simple compared to what I found in the literature, e.g., equation 2.10 in Section 2.4 of [1]. Is this a standard setting in optimal control? \n2. Given the results in Table 2 and what the authors said on page 5 - "$L^p$ and $L^{\\infty}$-norm behave similarly when $p$ is large", it seems that minimizing the exact $L^{\\infty}$ loss may have poor performance. The reason why the adversarial training works better than $L^2$ seems to be that adversarial training fails to approximate the $L^{\\infty}$-norm well. Is it possible to check this? \n3. Given the training objective in Eq (8), shouldn't the objective functions in Algorithm lines 5, 7, and 8 use the absolute value instead of $L^2$? \n\n[1] Lu, Q., and Xu Zhang. "A mini-course on stochastic control." _Control and inverse problems for partial differential equations_ 22 (2016): 171-254. Listed in the weakness and question sections. ", " The paper studies the choice of loss functions when using neural networks to solve partial differential equations. The authors establish positive and negative results showing the conditions under which the problem of solving a Hamilton-Jacobi-Bellman equation is stable/unstable. Based on this, an $L^\\infty$ training approach is proposed. Experiments demonstrate that the proposed method has better performance than existing methods.\n Strengths: \n\nThe results of this paper are solid and comprehensive. \n\nThe paper is well-organized and easy to follow. \n\nThe experiments clearly demonstrate the advantage of the proposed $L^\\infty$ loss over the original PINN methods.\n\n\n\nWeaknesses: \n\nThis work focuses on a quite specific class of HJB equations and requires $\\bar{c} \\leq 2$ for the stability result to hold, which seems to require $\\alpha_i \\geq 2$ in the cost rate function.\n\nTable 2 in the ablation studies seems to contradict the theorem results. Some more discussion would be quite helpful here to explain why finite but larger values of p lead to worse performance.\n I suggest that the authors address the weaknesses mentioned in the strengths and weaknesses section.
This paper does not have any potential negative societal impact.", " The present work is concerned with the application of physics-informed neural networks (PINNs) to the solution of high-dimensional Hamilton-Jacobi-Bellman (HJB) equations. They show that in a wide range of cases, directly applying the standard squared penalty to the HJB and boundary conditions does not result in a stable problem, meaning that the PINN loss can go to zero without the PINN solution approximating the true solution. In order to overcome this problem, the authors propose to use higher-order $L^p$ penalties. They show that higher-order $L^p$ penalties achieve stability under a wider range of conditions, with $L^\\infty$ being the extreme case. They then design an adversarial training procedure to train with the $L^\\infty$ loss in practice and show that it improves the relative accuracy of the solution compared to existing PINN variants. \n\n### After rebuttal:\nThe authors have addressed my concerns appropriately. I now recommend acceptance. Strengths: The method is well-motivated and seems to perform well in practice on the problem of interest.\n\nWeakness: As detailed under "questions", the present version of the paper leaves doubts regarding some of the claims. The experimental validation seems to be in a setting not covered by the theory, and the most common diagnostics, such as error vs. time, are missing.\n\nI want to emphasize that I think that this has the potential to be a solid paper suitable for acceptance. I am giving a somewhat lower score right now but hope to be able to raise it if the authors can clarify my concerns. 1. In Theorems 4.3 and 4.4, the target accuracy is measured in the $W^{1,r}$ norm, meaning in terms of the $L^r$ norm of both the solution and its gradient. However, in the experiments, the error is measured in terms of just the $L^p$ norms of the solution. Thus, the experiments do not actually seem to illustrate the theoretical results as advertised.\n\n2. Following up on 1, are similar results true when measuring solution accuracy in $L^p$ spaces (instead of Sobolev spaces)? If not, the authors should provide more motivation for why accuracy in Sobolev spaces is important. \n\n3. Did you find that different learning rate schedules had an effect on the accuracy of Adam? \n\n4. The error plots of Figure 4 are not very meaningful; instead of, or complementing, them, I think it would be important to show time series of the training and test error. This would allow one to distinguish the difficulty of the finite-batch setting in accurately approximating the infinite-batch training loss, from difficulties in optimizing the finite-batch loss, and from the problem of stability that is motivating the authors' technique. \n As discussed above under "questions", there presently seem to be gaps between theory and experiments that are not commented on in the text." ]
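The rebuttals above describe the adversarial $L^\infty$ training loop only in words: an inner maximization moves collocation points toward large-residual regions, and an outer step minimizes the (squared, per the Question 3 exchange) residual at those worst-case points. A minimal sketch of such a loop is given below; the 1-D Poisson toy problem, the step sizes, and the omission of boundary terms are our own simplifications, not the paper's Algorithm 1:

```python
# Sketch of an adversarial L^inf-style PINN training step on a toy
# 1-D problem u''(x) = -sin(x); boundary-condition loss omitted for brevity.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def residual(x):
    # PDE residual r(x) = u''(x) + sin(x) for the toy equation.
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u + torch.sin(x)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x = (torch.rand(256, 1) * 2 * torch.pi).requires_grad_(True)
    for _ in range(5):
        # Inner ascent: move collocation points toward large residuals.
        r2 = residual(x).pow(2).sum()
        g = torch.autograd.grad(r2, x)[0]
        x = (x + 0.05 * g.sign()).clamp(0.0, 2 * torch.pi)
        x = x.detach().requires_grad_(True)
    # Outer descent on the worst point; the squared residual acts as the
    # smooth surrogate for |r| discussed in the Question 3 response.
    loss = residual(x).pow(2).max()
    opt.zero_grad()
    loss.backward()
    opt.step()
```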
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "qklb0Gkp8xD", "xfRTtnQ-uCp", "Nm1GEpESFoF", "4kfqLYrhs6-", "nips_2022_cy1TKLRAEML", "wvT6LIwTqTT8", "gDdT72sqm7p", "nips_2022_cy1TKLRAEML", "DTrMs5iM7Mq", "w5Wcc7psQBQ", "1WLC2q3EYV_", "yIgmk8k0DJ", "yIgmk8k0DJ", "nips_2022_cy1TKLRAEML", "nips_2022_cy1TKLRAEML", "nips_2022_cy1TKLRAEML", "nips_2022_cy1TKLRAEML" ]
nips_2022_3v44ls_4dbg
Learning Infinite-Horizon Average-Reward Restless Multi-Action Bandits via Index Awareness
We consider online restless bandits with average reward and multiple actions, where the state of each arm evolves according to a Markov decision process (MDP), and the reward of pulling an arm depends on both the current state of the corresponding MDP and the action taken. Since finding the optimal control is typically intractable for restless bandits, existing learning algorithms are often computationally expensive or have a regret bound that is exponential in the number of arms and states. In this paper, we advocate \textit{index-aware reinforcement learning} (RL) solutions to design RL algorithms operating on a much smaller dimensional subspace by exploiting the inherent structure in restless bandits. Specifically, we first propose novel index policies to address dimensionality concerns, which are provably optimal. We then leverage the indices to develop two low-complexity index-aware RL algorithms, namely, (i) GM-R2MAB, which has access to a generative model; and (ii) UC-R2MAB, which learns the model using an upper confidence style online exploitation method. We prove that both algorithms achieve a sub-linear regret that is only polynomial in the number of arms and states. A key differentiator between our algorithms and existing ones stems from the fact that our RL algorithms contain a novel exploitation that leverages our proposed provably optimal index policies for decision-making.
Accept
The paper tackles the challenging problem of online learning of restless multi-armed bandit (RMAB) policies. Among its contributions are the introduction of a new tractable class of RMAB policies to learn over, and tractable learning algorithms, with regret guarantees, along the lines of statistical upper confidence bounds. These could serve as useful building blocks for theoreticians and practitioners in the area alike. The contributions of the paper are unanimously acknowledged to be positive by the reviewers, after their initial reviews were responded to in detail by the paper's author(s), leading to helpful clarifications. In view of this, I recommend acceptance of the paper.
train
[ "fLVM0zkGHnM", "3Sx6kpsHG3u", "qZf1A87mjqZ", "RX3NqraIWzl", "DA7-oQ-LdFB", "nZIMtYPfoxY", "nS2k9E43qMN", "XtPI5mRP-_", "TZZMxJpIM1r", "6WXLvMqQJuI", "zxIbltrT2SD", "qZD7GWv8EYb", "QsKfAKUhsap", "JWbjjLO-z3C", "s1ptbfK0Jig", "Ap2gFcISm9c" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer again for clarifying this problem. In the following, we further discuss the reward function in the context of restless multi-armed bandits (RMAB). Note that we consider RMAB, more precisely, R2MAB in this paper, rather than the classical MAB (which is stateless in general), while each arm is stateful (via a MDP) in RMAB. \n\nIn general, there are two reward models considered in the RMAB literature: \n\nModel 1: All arms yield reward no matter the arms are activated or not;\n\nModel 2: Only activated arms yield rewards. \n\nBoth models have been widely used and have arisen in different applications. For example, model 1 is widely adopted for queuing problems, e.g., References [15] [16] [26] [30] in our main paper, where all queues incur holding reward (or cost), along with others e.g., References [5] [14] [24] [36] in our main paper. Model 2 is widely adopted for cognitive radios, e.g., References [9] [19] [20] [42] in our main paper, where rewards are generated only on the state of selected channels, along with many other learning augmented RMAB literatures, e.g., References [2] [56] [57] [59] [62] in our main paper. These two models are similar without fundamental differences as discussed in [2], and they are exactly the same under the assumption that r(s,0)=0. \n \nOur design of index policy and then index-aware RL algorithms are general and hold for both models with minor differences in the performance guarantees. First, the designed ERC index policy works on both reward models. As given in equation (6), the reward model only affects the absolute values of the ERC indices, while it has no effect on the implementation of the proposed index policy. Second, the regret bound only has a marginal difference in the multiplicative prefactor that goes with the time-dependent function in the regret bound under these two models, where the difference lies between the number of arms N and the number of maximum activated arms, which is upper bounded by B. In the current setting (i.e. Model 2), the reward generated by activated arms will not exceed B due to the fact that r\\in [0,1]. This gives the bound in Lemma 4, Lemma 6, Lemma 7, Lemma 10, and Lemma 15 (see supplementary materials), where B only shows in the prefactor that goes with the time-dependent function in the regret bound. If we switch to Model 1, where passive arms also generate reward, then the reward generated will not exceed N due to the fact that r\\in [0,1], which changes slightly in the prefactor of regret bound. It will not affect the sublinear regret order under the square root. Similar arguments were discussed in Reference [2] for learning the classical Whittle index policy. \n \nWe thank the reviewer's valuable comment and provide us this opportunity to clarify this issue. We believe this clarification will further improve the quality of the paper. We will remove the w.lo.g., and instead to discuss the aforementioned difference of these two models, as well as the generalization of our proposed policies for these two models in the camera-ready version of the paper, e..g, adding a remark. \n", " I have read through the authors' rebuttal as well as the other reviews and have found the responses mostly satisfactory. I'd like to keep my score unchanged.\n\nHowever, wrt the comment on line 114 \"w.l.og...\", I think in general, there can be a monumental difference caused by a reward function that accrues reward from passive arms vs one that doesn't. 
There are also several papers that consider reward from all arms (not just active). In light of this, clarifying that the model is r(s,0)=0 (not without loss of generality) would be valuable in my view. \n\n", " Since the reviewer-author discussion period is ending soon, we just wanted to check in and ask if our rebuttal clarified and answered your questions. We would be very happy to engage further if there are additional questions. \n\nAlso, we wanted to check if our additional clarifications regarding the merits of the paper would convince the reviewer to raise the score. Thank you! ", " Since the reviewer-author discussion period is ending soon, we just wanted to check in and ask if our rebuttal clarified and answered your questions. We would be very happy to engage further if there are additional questions. \n\nAlso, we wanted to check if our additional clarifications regarding the merits of the paper would convince the reviewer to raise the score. Thank you! \n", " **Your comment:** “Limitations: While the proposed approach is directed at multi-action RMABs and not the binary action case, the paper does not discuss the two algorithms' performance when the restless arms are indexable. A brief discussion of how indexability affects regret upper bounds would be interesting.\"\n\n**Our response:** As we are considering general RMAB with multiple actions, there is hardly an argument related to indexability. The indexability condition is defined for the binary-action setting, where a=0 represents the passive action and a=1 the active action, and thus Whittle's index is defined for binary-action RMAB. To the authors' best knowledge, indexability conditions related to multiple actions have not been well defined yet. In this sense, it is difficult to claim any connection between Whittle's index and our proposed indices for the general RMAB setting. \n\nNevertheless, when considering the binary-action RMAB setting where the indexability condition is satisfied, the Whittle index is also asymptotically optimal and thus it belongs to the category of the proposed index policies. \n\n\n**Your comment:** “Societal impact: The paper presents two algorithms for RMABs in the average rewards' setting, including one medical trial case in India. Hence, an application-specific analysis should be performed by those wishing to use either algorithm (especially in medical trials' cases).\"\n\n**Our response:** Our research shows how the proposed two low-complexity index-aware RL algorithms, GM-R2MAB and UC-R2MAB, perform for online infinite-horizon average-reward restless multi-action bandits, theoretically and numerically. For the sake of exposition and reproducibility, we have used the public dataset of TB care in India [37], which is interpreted and leveraged without specialist medical-care domain knowledge, and without private human information (patients are divided into four types with a ratio, and other parameters are synthetic). However, the proposed methods are potentially relevant to any scientific application that can be formulated as an R2MAB framework. As for the societal impact of our work, we highlight the need for specific information about involved individuals, or network metadata, which may lead to privacy issues, and we hope to raise awareness of these potential issues of privacy.\n\n", " **Your comment:** “Both GM-R2MAB and UC-R2MAB are able to learn index policies without the indexability condition (needed for the Whittle index).
However, if the considered RMAB is indexable, how would that affect the two algorithms' regret bound? Would indexability give a tighter regret bound?”\n\n**Our response:** Thank you for your insightful comments and for pointing out this clarity issue. Indexability (or non-indexability) is a property that conventional approaches need for designing index policies. For example, if a problem is non-indexable, then the Whittle index policy is not feasible. Also, indexability was defined by Whittle in the seminal paper [61] for the classical RMAB (i.e., two actions). To the best of our knowledge, there is no rigorous definition of indexability in the multi-action setting. To circumvent this limitation/difficulty, we propose a more general linear programming approach to design index policies without the indexability requirement (also see Remark 1, Lines 168-170). In other words, if we consider a 2-action setting, where R2MAB reduces to RMAB, then our ERC index policy is always feasible, no matter whether the underlying RMAB is indexable or non-indexable. However, the Whittle index policy is only feasible if the RMAB is indexable. \n\nWe designed GM-R2MAB and UC-R2MAB for online R2MAB, which is quite challenging, and we listed three challenges/limitations of state-of-the-art methods in the introduction (Lines 41-61). Our key contribution is that we advocate index-aware reinforcement learning (RL) solutions to design RL algorithms operating on a much smaller dimensional subspace by exploiting the inherent structure in restless bandits. To achieve this, we first need to design the ERC index policy to exploit the inherent structure in R2MAB and address the dimensionality concerns; the two learning algorithms built on top of it, i.e., GM-R2MAB and UC-R2MAB, then leverage the ERC index policy to make decisions, rather than contending directly with the extremely high-dimensional state space for decision making (lines 48-53), e.g., via repeatedly solving complicated Bellman equations as in existing approaches. To this end, indexability does not affect our learning algorithms, since our index policy is well-defined regardless of indexability. \n\n**Your comment:** “Assuming the RMAB is indexable, is there a characterization of the two algorithms' learned indices and the Whittle index if it exists? How would the learned index relate to the Whittle index?”\n\n**Our Response:** As we are considering general RMAB with multiple actions, there is hardly an argument related to indexability. The indexability condition is defined for the binary-action setting, where a=0 represents the passive action and a=1 the active action, and thus Whittle's index is defined for binary-action RMAB. To the authors' best knowledge, indexability conditions related to multiple actions have not been well defined yet. In this sense, it is difficult to claim any connection between Whittle's index and our proposed indices for the general RMAB setting. \n\nNevertheless, when considering the binary-action RMAB setting where the indexability condition is satisfied, the Whittle index is also asymptotically optimal and thus it belongs to the category of the proposed index policies. For example, we numerically verify the asymptotic optimality of these index policies in Figure 1 with 2 actions, i.e., the classical RMAB setting.
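Since the exchanges above repeatedly lean on Whittle's binary-action notion without stating it, a compact statement may help (standard formulation from the RMAB literature; the notation $\lambda$ and $D(\lambda)$ is ours, not the paper's):

```latex
% Attach a subsidy \lambda to the passive action a = 0, and let
% D(\lambda) = \{ s \in \mathcal{S} : \text{the passive action is optimal in } s \}.
% The arm is indexable if D(\lambda) increases monotonically from \emptyset
% to \mathcal{S} as \lambda grows; the Whittle index of state s is then
W(s) \;=\; \inf\{\,\lambda \in \mathbb{R} : s \in D(\lambda)\,\}.
```

Under this definition the index, and hence the policy, simply does not exist for non-indexable arms, which is exactly the feasibility gap that the LP-based ERC construction avoids.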
\n\n**Your comment:** “In the checklist under 4.d, I couldn't locate in the main text or the appendix where consent was obtained for the used data, which should be explicitly mentioned in the revised manuscript.”\n\n**Our response:** Thank you for the comment. The data set we used is a public data set as mentioned in [37], which is interpreted and leveraged without specialist medical-care domain knowledge, and without private human information (patients are divided into four types with a ratio, and other parameters are synthetic). We follow the same setting as [37] for ease of exposition and reproducibility. \n", " **Your comment:** “In the experiments' section, I understand that the page limit prevents a longer description of the results. In the revised manuscript, the author(s) should dedicate more explanation as to why the two algorithms perform better than the baselines. Currently the discussion rightly mentions that the two algorithms solve an ELP, but it would help if the author(s) discuss UC-R2MAB and GM-R2MAB limitations compared to the baselines. In addition, it would be nice to aggregate all algorithms' results into one plot rather than splitting them between two plots (one in the main text and the other in the appendix). I currently need to look at two graphs that consider a single case study to understand how the two algorithms perform.”\n\n**Our response:** Thank you for your insightful comments and suggestions. First, as we discussed in the introduction (lines 44-47), though the Whittle index policy is a celebrated heuristic for RMAB, finding it is typically intractable, since the Whittle index policy is “well-defined” or “feasible” only if a so-called “indexability” condition is satisfied, which is hard to establish. In contrast, we circumvent this limitation by developing a more general linear programming approach, and hence our ERC index policy is well defined with a low complexity. In addition, we focus on a multi-action setting, i.e., R2MAB, while the Whittle index policy is defined for conventional RMAB with two actions. \n\nSecond, we mainly discussed the limitations of existing learning-based algorithms for RMAB in the introduction section (lines 39-59). The main contribution of this paper is then to develop low-complexity learning algorithms for R2MAB (a generalization of RMAB) with order-of-optimal regret. For example, our UC-R2MAB only needs to solve an LP, compared to the state-of-the-art colored-UCRL2, and is thus much more computationally efficient. These are the merits of our proposed algorithms compared to these baselines. To the best of our knowledge, some baselines lack finite-time performance analysis. \n\nWe fully understand the reviewer's concern, and we are sorry that we had to relegate some experimental results to the supplementary material due to space constraints. Here we would like to make a clarification: As we study the general restless multi-action multi-armed bandits problem (R2MAB), we consider the performance impact of the number of actions. This is different from the classical RMAB, which only has 2 actions. To this end, we consider 2, 3, 5, and 10 actions. For each performance metric, we consider these 4 cases. \n\nFor example, Figure 1 (2 actions) and Figure 2 (10 actions), as well as Figure 1 (3 actions) and Figure 2 (5 actions) in the supplementary materials, are all for the evaluation of “asymptotic optimality”, and they all deliver the same message that the index policies are asymptotically optimal.
We decided to choose two cases in the main paper: 2 actions, since this is the conventional setting for RMAB, and 10 actions, as one example of the multi-action setting. Similar observations and conclusions can be made for the cases with 3 and 5 actions, which are hence relegated to the supplementary materials. These plots are parallel to each other and hence cannot be aggregated into one plot. Similar reasons hold for Figure 3 (2 actions) and Figure 4 (10 actions), as well as Figure 5 (3 actions) and Figure 6 (5 actions) in the supplementary materials for the regret comparison. \n\nLastly, as mentioned above, in the classical RMAB there are just 2 actions, and there are several baselines designed to learn the Whittle index policy (2 actions); hence we compare against all of them in Figures 7 and 8 in the supplementary material (see Lines 244-263). Since these two plots are in the same setting, they can be aggregated into one plot. With the reviewer's approval, we can move some of these results to the main paper since we will have an additional content page for the camera-ready version. \n\n**Your comment:** “While the paper's technical analysis and sections are well-written, the paper's organization\\spelling accuracy can be significantly improved as I highlight below:” \n\n**Our response:** We thank the reviewer very much for your patience in reading the paper and for pointing out these typos. We have addressed these typos in the paper and will carefully go through the paper again to further improve its quality in the camera-ready version. ", " Thank you very much for your review and constructive comments, as well as for giving a positive rating to our work. Here we would like to address the reviewer's concerns and hope that our responses can help raise the rating of our paper. The detailed responses are as follows:\n\n**Your comment:** “I find lemma 1 statement to be vague and the proof a bit general. The lemma statement depends on the value definition V that is only mentioned in the appendix with the scaling parameter ρ (for theorem 1). I find having an explicit definition of the average reward value necessary before going into the proofs. For lemma 1 proof, the wording should be modified to indicate that the original problem's feasible region Γ is reachable for all points of the relaxed problem's region Γ/.”\n\n**Our response:** Thank you for this insightful comment and for pointing out this clarity issue. The logic of Lemma 1 goes as follows. We have the original R2MAB in (1) (between lines 120-121 in the main paper). Then, we relax the “hard” constraint in the original problem to an averaged constraint, and formulate a relaxed problem as in A.1 in the supplementary material (between lines 8-9 in the supplementary material). Therefore, the optimal value achieved by this relaxed problem is an upper bound of that of the R2MAB (1), since the constraint of this relaxed problem expands the feasible region of the original R2MAB (1), i.e., the original problem (1)’s feasible region is a subset of the above relaxed problem’s feasible region due to the relaxation. \n\nIn addition, as shown in [3], the above relaxed problem can be equivalently reformulated as the LP in (2)-(5) using the definition of occupancy measures \{\omega_n(s,a)\} (an illustrative sketch of such an occupancy-measure LP is given below). 
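To make the occupancy-measure reformulation concrete, here is a minimal sketch of such a relaxed LP in Python with SciPy. This is a generic reconstruction from the description above, not the paper's exact LP (2)-(5): the array shapes, the linear per-action cost, the single time-averaged budget constraint, and all names are our own illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def relaxed_lp_upper_bound(P, r, cost, budget):
    """Upper bound on the average reward via the relaxed LP over occupancy
    measures w[n, s, a]. P: (N, S, A, S) kernels, r: (N, S, A) rewards,
    cost: (A,) per-action cost, budget: time-averaged activation budget."""
    N, S, A, _ = P.shape
    nvar = N * S * A

    def idx(n, s, a):
        return (n * S + s) * A + a  # flatten (n, s, a) -> variable index

    c = -r.reshape(nvar)  # linprog minimizes, so negate the reward

    A_eq, b_eq = [], []
    for n in range(N):
        for s2 in range(S):  # stationarity (flow balance) at each state s2
            row = np.zeros(nvar)
            for a in range(A):
                row[idx(n, s2, a)] += 1.0
            for s in range(S):
                for a in range(A):
                    row[idx(n, s, a)] -= P[n, s, a, s2]
            A_eq.append(row)
            b_eq.append(0.0)
        row = np.zeros(nvar)  # each arm's occupancy measure sums to one
        row[n * S * A:(n + 1) * S * A] = 1.0
        A_eq.append(row)
        b_eq.append(1.0)

    # the "hard" per-step budget is relaxed to hold only on average
    A_ub = np.tile(cost, N * S)[None, :]

    res = linprog(c, A_ub=A_ub, b_ub=[budget], A_eq=np.array(A_eq),
                  b_eq=b_eq, bounds=(0, None), method="highs")
    return -res.fun, res.x.reshape(N, S, A)
```

The optimal value gives the upper bound used in Lemma 1, and an index policy of the ERC flavor can then be read off from the optimal occupancy measures rather than from per-arm Bellman equations — which is the spirit, though not the letter, of the construction discussed here.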
Due to the equivalence between the LP (2)-(5) and the above relaxed problem, and the fact that the optimal value achieved by the relaxed problem is an upper bound of that of the R2MAB (1), we reach the conclusion in Lemma 1 that the optimal value achieved by the LP (2)-(5) is an upper bound of that of the R2MAB (1). We are sorry that this was not clear, partly due to the space constraints. We can move the discussions in A.1 in the supplementary material to the main paper since we will have one additional content page for the camera-ready version. \n\n", " Thank you very much for your review and constructive comments, as well as for giving a positive rating to our work. ", " **Your comment:** “The authors mark N/A in their checklist for both limitations and social impacts ....”\n\n**Our response:** Our research shows how our proposed two low-complexity index-aware RL algorithms, GM-R2MAB and UC-R2MAB, perform in the setting of online infinite-horizon average-reward restless multi-action multi-armed bandits. Our main contributions are primarily analytic in nature, i.e., mainly in the theory part. The evaluation of our algorithms is conducted through a combination of mathematical analysis (e.g., finite-time analysis) and simulations. For the sake of exposition and reproducibility, we used a public dataset on TB care in India [37], which is interpreted and leveraged without specialist medical-care domain knowledge and contains no private human information (patients are divided into four types according to a given ratio, and the other parameters are synthetic). However, the proposed methods are potentially relevant to any scientific application that can be formulated in an R2MAB framework. As for the societal impact of our work, we highlight the need for specific information about involved individuals, or network metadata, which may lead to privacy issues, and we hope to raise awareness of these potential privacy concerns.\n\nOne limitation of the method may come from the above discussions regarding the technical assumption of a “global attractor” used to prove the asymptotic optimality of the ERC index policy. Though this is a standard assumption widely used in the literature, it is hard to establish analytically. A possible direction, or an open problem, is to establish a sufficient condition that rigorously guarantees the global attractor property. \n\nWe will add these statements in the camera-ready version with the reviewer’s approval. \n\n**Your comment:** “More evaluation would be helpful. Specifically, authors vary the number of actions, but not the number of arms, which is the key scaling parameter in RMAB. Authors should provide some experiments to show both regret and runtime scaling as N increases for at least one domain (Fig 13-16, Fig 17-18). ...”\n\n**Our response:** Thank you for your insightful comments and suggestions. We have added additional numerical results to the supplemental materials. We place them there for the sake of exposition, and we can add or replace some results in the main paper with the reviewer’s approval since we will have one additional content page for the camera-ready version. \n\nAsymptotic optimality: We evaluate the asymptotic optimality of the index policies with respect to the number of arms. Similarly, we consider four cases: 2, 3, 5, and 10 actions. In each case, we vary the number of arms up to 20,000. From Figures 9-12 in the supplementary material (on page 12), we again observe that all policies are asymptotically optimal. 
Though the optimality is established in the asymptotic regime (i.e., a large number of arms), we observe that the optimality gap of each policy quickly decreases and gets close to zero. \n\nRegret and running time: We consider 2 and 10 actions, with 200 and 2,000 arms. The corresponding accumulated regrets are shown in Figures 13-16. We also compare our algorithms with a state-of-the-art method named LPQL [36]. Again, we observe that UC-R2MAB achieves the lowest cumulative regret. We also compare the average running time of these algorithms, shown below for the two-action setting with the 10-action setting in parentheses:\n\n| Arms | GM-R2MAB | UC-R2MAB | MAIQL | LPQL | TS |\n|---|---|---|---|---|---|\n| 200 | 86s (144s) | 308s (607s) | 348s (702s) | 314s (623s) | 359s (681s) |\n| 1,000 | 114s (188s) | 443s (813s) | 512s (912s) | 470s (947s) | 560s (901s) |\n| 2,000 | 179s (261s) | 703s (1187s) | 810s (1354s) | 724s (1247s) | 823s (1340s) |\n\nIt is clear that our GM-R2MAB and UC-R2MAB are more efficient in running time. As pointed out by the reviewer, LPQL (multi-action setting) and NeurWIN (conventional 2-action setting) were designed for discounted rewards, rather than the infinite-horizon average-reward setting considered in this paper. To make a relatively fair comparison, we implemented them with a discount factor of 0.99, i.e., close to 1. \n\nThe regret comparison using the two real settings is also presented in Figures 17 and 18. We again observe that UC-R2MAB achieves a sub-linear regret and outperforms all baselines. \n", " Thank you very much for your review and constructive comments. Here we would like to address the reviewer's concerns and hope that our responses can help raise the rating of our paper. The detailed responses are as follows:\n\n**Your comment:** “My understanding is that the global attractor property of a policy class is data dependent, and thus needs to be verified for each new dataset to which a policy class is applied. ...\"\n\n**Our response:** Thank you for this important comment. We agree with the reviewer’s suggestions. We would like to first clarify two important concepts in the restless multi-armed bandits (RMAB) literature: (1) indexability, and (2) the global attractor. Then we discuss how to improve our paper based on the reviewer’s suggestions. \n\n(1) “Indexability”: As is known, Whittle [61] proposed the celebrated heuristic called the Whittle index policy for addressing the hardness of RMAB. However, the “feasibility” or the definition of the Whittle index policy is based on the condition that a so-called “indexability” property must be satisfied. In other words, if a problem is not indexable, then the Whittle index policy is not feasible, i.e., it cannot be defined and hence cannot be applied to solve that problem (i.e., applied to the dataset). Exacerbating this challenge is the fact that establishing the indexability of RMABs is typically intractable [48], and hence Whittle indices of many practical problems remain unknown except for a few special cases. 
This is due to the fact that many practical problems are naturally not indexable [56], and hence many efforts have been focused on designing index policies without the indexability requirement, i.e., making index policies feasible even if the problem is non-indexable, e.g., [31,64,65,62]. However, these cannot be applied to our problem. See Remark 1, particularly lines 165-170, for discussions. \n\nIn summary, “indexability” is a condition that must be satisfied in order to define or make Whittle-like policies **feasible**. We focus on designing index policies that are always feasible without this condition, and hence can always be defined for problems that can be formulated as RMABs (or, more precisely, R2MABs), no matter whether the problem is indexable or not. \n\n(2) “Global attractor”: The asymptotic optimality of index policies is often shown using a fluid-limit analysis by considering the regime of a large-scale system. For instance, the seminal work [60] established the asymptotic optimality of the Whittle index policy by showing that the gap between the state distribution under the Whittle index policy and the steady-state distribution under the optimal policy for the corresponding relaxed problem diminishes to zero in the asymptotic regime. To this end, [60] defined a technical condition, the “global attractor”, to prove the asymptotic optimality of the Whittle index policy, which is feasible conditioned on the indexability condition being satisfied, as discussed above. Following [60], most existing literature on proving asymptotic optimality, e.g., [29,58,67,23], focuses on such a fluid limit and often makes the **technical assumption** that a fixed point of the proposed index policies satisfies the global attractor condition. \n\nIn summary, the “global attractor” is a technical assumption that is made to prove the performance (i.e., asymptotic optimality) of index policies. As we stated in Remark 2 (lines 182-185), though it is hard to analytically establish that a fixed point is a global attractor, we numerically show that the fixed point of our process indeed satisfies this assumption, and hence the assumption is indeed valid. (Since this is a technical assumption, many works did not even numerically verify it.) \n\nWe thank the reviewer again for this important suggestion. When we characterize the regret, we leveraged the asymptotic optimality of the ERC index policy, and the proof of asymptotic optimality is based on the global attractor property. The regret in [2] is defined in a similar manner (with respect to the Whittle index policy, which is asymptotically optimal). Part of the reason is that finding the offline optimal policy for RMAB or R2MAB is typically intractable. We propose two ways to make this clear in the paper since we will have one additional content page for the camera-ready version. One is to update Definition 2 of the regret (Lines 192-197); the other is to add an additional remark to illustrate how the regret is computed, similar to [2]. With the reviewer’s approval, we will include them in the camera-ready version. \n", " Thank you very much for your review and constructive comments, as well as for giving a positive rating to our work. Here we would like to address the reviewer's concerns and hope that our responses can help raise the rating of our paper. The detailed responses are as follows:\n\n**Your comment:** “The writing/exposition is not clear at some places: For instance, the abstract says each arm evolves according to a Markov chain. 
However, later in section 2, each arm is said to evolve according to an MDP.”\n\n**Our response:** Thanks for pointing out this clarity issue. An MDP is a controlled Markov chain whose state transitions depend on the chosen action. To be consistent, we will change the “Markov chain” in the abstract to “an MDP”.\n\n**Your comment:** “grammatical issues/typos involving usage of plural/singular quantity at few places”\n\n**Our response:** We are sorry for these typos. We have addressed some of them and will carefully go through the paper again to address all typos in the final version. \n\n**Your comment:** “Line 114 says w.l.o.g r(s,0)=0. However, how is this general given that r() could potentially depend on s, i.e. r(s,0) =s?” \n\n**Our response:** Thank you for your insightful comments and for pointing out this clarity issue. In the RMAB setting, it is widely assumed that passive arms (i.e., with action a=0) yield no reward, i.e., r(s,0)=0. However, this reward model does not affect the policy design, and its impact on bounding the regret is minor. It only affects the multiplicative “pre-factor\" that goes with the time-horizon-dependent function in the regret, i.e., we still achieve a sub-linear regret with a polynomial pre-factor. In our simulations, positive rewards are generated only for active arms. \n\n**Your comment:** “Lines (70-74) say that the paper takes a more general approach, hence is more efficient. However, this seems a bit counterintuitive; shouldn’t there be a trade-off? Is there something else that is being given up?”\n\n**Our response:** Thank you for this insightful comment. We respond to this question from two perspectives: (1) the feasibility of index policies (i.e., indexability or not); and (2) the performance of index policies (i.e., asymptotic optimality). \n\n---“Feasibility”: As we discussed in the introduction (lines 44-47), though the Whittle index policy is a celebrated heuristic for RMAB, finding it is typically intractable since the Whittle index policy is “well-defined” or “feasible” only if a so-called “indexability” condition is satisfied, which is hard to establish. As a result, Whittle indices of many practical problems remain unknown except for a few special cases. Exacerbating this challenge is the fact that the Whittle index policy is defined for the conventional RMAB, i.e., only two actions (passive or active), while we consider a general RMAB with multiple actions, i.e., R2MAB. In this paper, we bypass this issue and design an index policy via a general LP approach. Our proposed framework for designing index policies does not require the indexability condition, and our index policy is well defined and feasible for both indexable and non-indexable problems, with the latter existing extensively in practice (e.g., [58]; see also lines 165-170). \n\n---“Asymptotic optimality”: The asymptotic optimality of index policies is often shown using a fluid-limit analysis by considering the regime of a large-scale system. For instance, the seminal work [60] established the asymptotic optimality of the Whittle index policy by showing that the gap between the state distribution under the Whittle index policy and the steady-state distribution under the optimal policy for the corresponding relaxed problem diminishes to zero in the above asymptotic regime. [60] defined a technical condition, the “global attractor”, to prove the asymptotic optimality of the Whittle index policy, which is feasible conditioned on the indexability condition being satisfied, as discussed above. 
We also focus on such a fluid limit in this paper to show the asymptotic optimality of our ERC index policy under the technical “global attractor” condition. This technical condition is difficult to establish analytically and is only verified numerically, as in many prior works [60, 29, 58, 67, 23]. See Remark 2 (lines 182-185). \n", " The paper considers the online RMAB problem, with multiple available actions and when the MDP on each arm is unknown. The paper derives an index policy for the online RMAB problem, different from the Whittle index policy, and then proposes two index-aware RL algorithms. The paper shows that both algorithms achieve sublinear regret. **Strengths**\n\n- The approach seems principled and theoretically solid. The paper proves that their proposed index-based approach is asymptotically optimal. \n - Empirical analysis: The experiments seem to be comprehensive; they cover a wide range of settings and test against most of the relevant baselines, including fairly recent baselines. \n- The context is well set up: explaining where the sota is and what gap needs to be filled.\n\n**Weaknesses**\n\n- The writing/exposition is not clear at some places: For instance, the abstract says each arm evolves according to a Markov chain. However, later in section 2, each arm is said to evolve according to an MDP. \n\n- Minor issues: grammatical issues/typos involving usage of plural/singular quantity at a few places\n - Line 114 says w.l.o.g r(s,0)=0. However, how is this general given that r() could potentially depend on s, i.e. r(s,0) =s? \n- Lines (70-74) say that the paper takes a more general approach, hence is more efficient. However, this seems a bit counterintuitive; shouldn’t there be a trade-off? Is there something else that is being given up?\n Yes ", " Authors provide two learning algorithms for the multi-action restless bandit problem (R2MAB), with new regret bounds that advance the state of the art. Their algorithm and bounds rely on a new index policy which is asymptotically optimal under a global attractor condition. Authors provide numerical experiments on three domains that support their claims. Strengths:\n - The draft is mostly well-written, making the contributions clear.\n - Authors provide experimental results on three domains showing wins on each (though modest compared to MAIQL on scheduling and TB)\n - The authors provide attractive regret bounds for the average-reward, R2MAB case, advancing current state of the art.\n - Authors introduce a new index policy class for R2MAB, ERC, which seems to have good performance and will be of general interest to the RMAB community, though it seems to rely on a numerically verifiable global attractor condition.\n - Authors provide two learning algorithms, GM-R2MAB and UC-R2MAB, where the former is \"offline\" in that it takes time to collect enough samples to build a confident world model, and the latter is \"online\" in that it follows more closely an upper confidence bound approach, exploring and exploiting in tandem. UC-R2MAB seems to perform better than all baselines in experiments.\n\nWeaknesses:\n - My understanding is that the global attractor property of a policy class is data dependent, and thus needs to be verified for each new dataset to which a policy class is applied. All of the authors' core results seem to depend on the global attractor condition, i.e., optimality of the ERC index policy they propose, and thus also the regret bounds (which rely on the ERC being asymptotically optimal). 
So, unless I am missing something, all of the results here depend on the users' ability to numerically verify the global attractor property for the data to which they hope to apply the authors' algorithms. This may be ok, but the authors need to make it much more clear throughout the draft if this is the case. For instance, in Remark 1, authors claim that \"ERC does not require the indexability condition, which is often hard to establish especially when the transition kernel of the underlying MDP is convoluted\" -- while true, one seems to have to verify a global attractor condition instead to use the authors' policy, which is ultimately very similar, since the indexability condition is a necessary condition for the global attractor property of the Whittle index policy, and itself is often verified numerically for new MDP classes. So authors should clearly state the tradeoff.\n - The authors mark N/A in their checklist for both limitations and social impacts -- this is not acceptable in the current iteration of NeurIPS. Authors need to engage in some amount of discussion of the limitations of their methodology. Also, since the authors present experimental results on a tuberculosis care domain, it is reasonable to ask that they discuss how their algorithm might have adverse social impact, and so they should engage in this part of the discussion as well.\n - More evaluation would be helpful. Specifically, authors vary the number of actions, but not the number of arms, which is the key scaling parameter in RMAB. Authors should provide some experiments to show both regret and runtime scaling as N increases for at least one domain. Additionally, the authors compare against MAIQL from Killian et al. 2021, but not LPQL, which seems to be the preferred R2MAB learning algorithm from that paper. Presumably they did not compare against LPQL because it handles the discounted reward case, rather than average. However, in the appendix, authors compare against NeurWIN which handles the discounted reward case. So authors should also compare against LPQL for completeness.\n - Please respond to the weaknesses above.\n\n\nNote: the paper potentially has enough merit to be considered for acceptance in my view, but the points listed in \"weaknesses\" need to be clarified.\n\n-------\nI have read the other reviews and author responses and am satisfied with their answers to my questions, and have increased my score. Authors should add more clarifying language about the global attractor property's reliance on data distributions as is done in Verloop 2016 section 6. - Please see weaknesses, second comment", " The authors address infinite horizon average reward restless multi-action bandits. They propose a new type of index, and two algorithms that calculate this index when the transitions and rewards are unknown - one offline and one online. The authors provide regret bounds for their algorithms and compare them empirically to other methods. Originality\nThe paper is novel to the best of my knowledge. The authors emphasize that their regret bounds are highly novel, and I have no knowledge to contradict that fact.\n\nQuality\nThe paper is of very high quality - the presentation of the problem is clear and well written, the regret bounds are impressive and strong, and the experiments form a convincing argument. \n\nClarity\nDespite the results being mostly theoretical, I found the paper to be very clear and very well written. 
\n\nSignificance\nRestless multi-action bandits are a bit of a niche, so the significance of the paper is somewhat limited. The proposed algorithms and simulations correspond to discrete MDPs whose model is learned, which further hampers the significance of the work for applications. Inside this niche, the results do seem convincing and interesting, and they fill in missing knowledge.\n\n The paper is very clear and I have no questions or suggestions. Not relevant for this paper.", " The paper proposes two reinforcement learning algorithms for the multi-action multi-armed restless bandits' framework: GM-R2MAB (offline learning) and UC-R2MAB (online learning). An index-based policy is advocated for to avoid the dimensionality issue with increasing arms' count. A linear program (LP) is described to find a value upper bound for the R2MAB, and to characterize the asymptotic optimality of the ERC index-based approach. When the transition kernel and reward function are unknown, regret bounds were proven for both RL algorithms, and a comparison against other RMAB learning algorithms is provided on two experiment cases. The online learning algorithm (UC-R2MAB) was shown to give the lowest empirical regret in all cases. ### Strengths:\n\nThe paper contributes well to the finite-time analysis of R2MABs in the average reward case, and offers detailed theoretical regret analysis for both the offline (GM-R2MAB) and online (UC-R2MAB) algorithms. The introduction also explains in detail the lack of understanding of R2MABs' regret performance. The experiments include results from recent RMAB learning algorithms, and the experiment cases cover both the two-action and multiple-action cases. The necessary regret proofs (for theorems 2 and 3) provided in the appendices form the bulk of the paper's contribution and are well articulated.\n\n\n### Weaknesses:\n\nI find lemma 1 statement to be vague and the proof a bit general. The lemma statement depends on the value definition $V$ that is only mentioned in the appendix with the scaling parameter $\rho$ (for theorem 1). I find having an explicit definition of the average reward value necessary before going into the proofs. For lemma 1 proof, the wording should be modified to indicate that the original problem's feasible region $\Gamma$ is reachable for all points of the relaxed problem's region $\Gamma^/$.\n\nIn the experiments' section, I understand that the page limit prevents a longer description of the results. In the revised manuscript, the author(s) should dedicate more explanation as to why the two algorithms perform better than the baselines. Currently the discussion rightly mentions that the two algorithms solve an ELP, but it would help if the author(s) discuss UC-R2MAB and GM-R2MAB limitations compared to the baselines. In addition, it would be nice to aggregate all algorithms' results into one plot rather than splitting them between two plots (one in the main text and the other in the appendix). 
I currently need to look at two graphs that consider a single case study to understand how the two algorithms perform.\n\nWhile the paper's technical analysis and sections are well-written, the paper's organization\\spelling accuracy can be significantly improved as I highlight below:\n\nStarting in line 19 and throughout the paper: *Restless multi-armed bandits (RMAB)...*\nshould be RMABs throughout the paper when the plural bandits is used.\n\nLine 29 *\"This is restrictive since the decision maker in many applications often has access to multiple actions for each arm.\"*\nThe author(s) should provide examples or cite work for this claim in the introduction.\n\nLine 31 *\"which we call the restless multi-action bandits, dubbed as R2MAB\"*\nthe full name would be better here: multi-action multi-armed bandits. \n\nLine 46 *Second, existing RL algorithms with theoretical guarantee of a sub-linear...*\nalgorithms with **a** theoretical guarantee.\n\nLine 58 *[57] achieved low-complexity... and not easy to be directly generalized*\n**a** low-complexity policy... and **is** not easy.\n\nLine 123 *In the remaining of the paper,*\nIn the remainder of the paper.\n\nBoth remarks 3 (line 237) and 4 (line 298) can be written as normal non-italicized paragraphs for easier reading. \n\nLine 288 at the end of equation 12 (theorem 3), it should be a period and not a comma.\n\nLine 301 *and the regret is exponentially in the number of arms and states*\nand the regret is **exponential** in the number...\n\nLine 320 *as a specific birth-and-death process*\nbetter to call it **birth-death** process for consistency.\n\nLine 324 *For multi-action setting*\nFor **the** multi-action setting\n\nLine 337 *between this award difference and the number of arms as optimality gap.*\nbetween this *reward* difference and the number of arms as **the** optimality gap.\n\nLine 351 *It is clear that our GM-R2MAB and UC-R2MAB is more efficient in running time.*\nIt is clear that our ... **are** more efficient...\n\nLine 372 *Each action has varying cost and effective*\nshould be *effectiveness*.\n\nAs a minor note, the author(s) should also cite the paper in the introduction section:\nFrancisco Robledo, Vivek Borkar, Urtzi Ayesta, and Konstantin Avrachenkov. 2022. QWI: Q-learning with Whittle Index. SIGMETRICS Perform. Eval. Rev. 49, 2 (September 2021), 47–50. https://doi.org/10.1145/3512798.3512816\n\n\n\n---------------------------------------------\n### After rebuttal:\n\nI thank the author(s) for responding to all of my comments. I read their responses and I find it to answer my concerns. In the updated paper, the author(s) should include the revised description of lemma 1 and the TB care dataset usage. \n\nI have updated the soundness score from 3 to 4 to reflect the updates the author(s) provided. I will keep my overall score at 7.\n\n \n1. Both GM-R2MAB and UC-R2MAB are able to learn index policies without the indexability condition (needed for the Whittle index). However, if the considered RMAB is indexable, how would that affect the two algorithms' regret bound? would indexability give a tighter regret bound?\n\n2. Assuming the RMAB is indexable, is there a characterization of the two algorithms' learned indices and the Whittle index if it exists? How would the learned index relate to the Whittle index? \n\n3. In the checklist under 4.d, I couldn't locate in the main text or the appendix where consent was obtained for the used data, which should be explicitly mentioned in the revised manuscript. 
**Limitations:** While the proposed approach is direct at multi-action RMABs and not the binary action case, the paper does not discuss the two algorithms' performance when the restless arms are indexable. A brief discussion of how indexability affects regret upper bounds would be interesting. \n\n**Societal impact:** The paper presents two algorithms for RMABs in the average rewards' setting, including one medical trial case in India. Hence, an application-specific analysis should be performed by those wishing to use either algorithm (especially in medical trials' cases)." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 4 ]
[ "3Sx6kpsHG3u", "qZD7GWv8EYb", "6WXLvMqQJuI", "qZD7GWv8EYb", "nZIMtYPfoxY", "nS2k9E43qMN", "XtPI5mRP-_", "Ap2gFcISm9c", "s1ptbfK0Jig", "zxIbltrT2SD", "JWbjjLO-z3C", "QsKfAKUhsap", "nips_2022_3v44ls_4dbg", "nips_2022_3v44ls_4dbg", "nips_2022_3v44ls_4dbg", "nips_2022_3v44ls_4dbg" ]
nips_2022_SbAaNa97bzp
Understanding Robust Learning through the Lens of Representation Similarities
Representation learning, \textit{i.e.} the generation of representations useful for downstream applications, is a task of fundamental importance that underlies much of the success of deep neural networks (DNNs). Recently, \emph{robustness to adversarial examples} has emerged as a desirable property for DNNs, spurring the development of robust training methods that account for adversarial examples. In this paper, we aim to understand how the properties of representations learned by robust training differ from those obtained from standard, non-robust training. This is critical to diagnosing numerous salient pitfalls in robust networks, such as, degradation of performance on benign inputs, poor generalization of robustness, and increase in over-fitting. We utilize a powerful set of tools known as representation similarity metrics, across 3 vision datasets, to obtain layer-wise comparisons between robust and non-robust DNNs with different architectures, training procedures and adversarial constraints. Our experiments highlight hitherto unseen properties of robust representations that we posit underlie the behavioral differences of robust networks. We discover a lack of specialization in robust networks' representations along with a disappearance of `block structure'. We also find overfitting during robust training largely impacts deeper layers. These, along with other findings, suggest ways forward for the design and training of better robust networks.
Accept
The authors study representations obtained from image classifiers and contrast classic training with adversarial training, yielding so-called non-robust and robust networks, respectively. The authors primarily use the CKA metric on CIFAR10 and subsets of ImageNet2012 to provide several novel insights on "salient pitfalls" in robust networks, which suggest that robust representations are less specialized with a weaker block structure, that early layers in robust networks are largely unaffected by adversarial examples as the representations seem similar for benign vs. perturbed inputs, that deeper layers overfit during robust learning, and that models trained to be robust to different threat models have similar representations. The reviewers agreed that these contributions are interesting to the larger community and that the presentation of the results is clear and straightforward. The main issues raised by the reviewers were carefully addressed in the rebuttal. Please update the manuscript as discussed.
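Since CKA is the central tool discussed throughout the reviews and rebuttals below, a minimal reference sketch of its linear variant may help: this follows the formula from Kornblith et al. (2019) but is our own generic reconstruction, not the authors' released code.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices, each with one row per
    example (X: n x d1, Y: n x d2), following Kornblith et al. (2019)."""
    X = X - X.mean(axis=0, keepdims=True)  # center every feature column
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2       # HSIC-style numerator
    norm = (np.linalg.norm(X.T @ X, "fro") *
            np.linalg.norm(Y.T @ Y, "fro"))           # self-similarity terms
    return cross / norm
```

A layer-wise similarity plot of the kind the reviews discuss is then just this score evaluated for every pair of layers' activations on a common batch of inputs.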
train
[ "7HpjJ94RyAW", "s1oLM5BKF3", "tRrEM0kYLbr", "_tjIwXoMgsB", "Kh6CK-frVT-", "ZA4rYsKE9Pe", "gGvNlbV4AVR", "CMF8HzGYsUtX", "2BBuLo8GFI3", "nF_rZCebApx", "vklEEhwbbZPp", "lckD6fDNP9F", "PBnLSj1IjTm", "DO13pufvfsy", "3jVYrkflzXU", "zHE4_I-J1kO", "YMUytX7qYxe", "zr6vVvkyLP", "YriM7CZhnoC", "TOno0nETe5x" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " In light of the authors' willingness to sufficiently guard the reach of the claims made and clarify the wording, I've updated my score.", " We thank the reviewer for engaging with our rebuttal. \n\nIn the literature, the notion of adversarial examples is commonly associated with pixel-wise perturbation-based adversarial attacks [1], hence our lack of distinction between adversarial examples generated using perturbations and other types. However, we do agree with reviewer that adversarial examples can be generated with numerous other methods, such as changes in color [2], manipulation of semantic attributes [3], patches [4], and natural adversarial examples [5]. Given the reviewer's concern on this matter, we are willing to emphasize this distinction in detail in the title, introduction, and experimental setup of the camera-ready, if accepted. We will also modify the broad term 'robust model' to 'model robust to adversarial perturbations' wherever appropriate in the text. Given the paper's experimental nature, we hoped that readers would be cognizant of the fact that the conclusions are limited to the models, datasets and threat models used, which while comprehensive, are not exhaustive. We will nonetheless articulate this point better throughout, using the extra page in the camera-ready.\n\nWe hope that our rebuttal addresses most the concerns raised by the reviewer. If you find our response satisfactory, we would greatly appreciate a reconsideration of your score.\n\n1. Akhtar, Naveed, et al. \"Advances in adversarial attacks and defenses in computer vision: A survey.\" IEEE Access 9 (2021): 155161-155196.\n2. Shamsabadi, Ali Shahin, Ricardo Sanchez-Matilla, and Andrea Cavallaro. \"Colorfool: Semantic adversarial colorization.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.\n3. Hosseini, Hossein, and Radha Poovendran. \"Semantic adversarial examples.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 2018.\n4. Brown, Tom B., et al. \"Adversarial patch.\" arXiv preprint arXiv:1712.09665 (2017).\n5. Hendrycks, Dan, et al. \"Natural adversarial examples.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.", " We hope that our new experimental results and detailed comments address all of your concerns. If you find our rebuttal and revision satisfactory, we would greatly appreciate a reconsideration of your score for the paper.", " Thank you for updating the draft and the clarifying comments. My primary concern is regarding the scope of the claims made as well as the robustness definition.\n\nI don't believe robustness to adversarial examples is the correct term for the robustness studied here. My initial comment was intended to point out adversarial examples would encompass those in a dataset such as ImageNet-A. The variety of robustness studied in this work is in my opinion better described as robustness to adversarial perturbations (a term the authors do use in the text). The term *adversarial examples* used in the title, introduction, and text could be misleading to readers. I suggest the authors amend the use of the term to incorporate the notion of perturbations both the title, abstract, and introduction to avoid confusion in addition to clarifying the definition in the text. 
Claims such as \"robust models exhibit X\" found throughout the text are well beyond what the experiments show—they need to be appropriately phrase to account for the particular type of robustness, architecture, and limited datasets studied.\n\nOverall, I still find the insights gained from the authors’ analysis interesting—provided it's sufficiently clear the settings within which those claims extend.", " Thanks for the detailed response and updates! Please remember to update the camera-ready with relevant items (discussion and references for representational dimensionality and necessity of late-layer overfitting).\n\nI have updated my scores as follows:\nSoundness: 2 -> 3\nContribution: 2 -> 3\nRating: 4 -> 7", " Thank the authors for their detailed reply. The involved results and discussions make the picture clear for me. I tend to keep my score and vote for acceptance. ", " Dear authors,\n\nThanks for your detailed response and for integrating some of the suggested changes in your manunscript! I know believe that your paper is a solid contribution to the field and fits into NeurIPS. However, as the immediate impact of this paper is a bit unclear to me, I tend to keep my current overall rating as its description (\"Technically solid, moderate-to-high impact paper [...]\") best describes your work.", " We thank the reviewer for their positive appraisal of our paper and insightful comments for improving it. We appreciate the detailed line-by-line review of the paper and have revised the paper to address the issues raised by the reviewer.\n\nWe address their specific questions and concerns below. The paper and supplementary have also been revised to account for all the reviewers’ feedback (see Summary of Revisions).\n\n**Supporting references, typos, overclaims, language issues and further interpretation:** We thank the reviewer for their careful reading of the paper. Since there are a large number of small changes, we have addressed them directly in the revision.\n\n**‘critical discussion of the possible shortcomings of these metrics’:** We have added a detailed discussion of these metrics in Section A.1. of the Supplementary and updated Section 2.2 to justify the choice of CKA better. We provide more details below (as in the discussion with Reviewer 4F8X):\n* CCA and variants have some undesirable properties: The original CKA paper (Kornblith et al., 2019) points out that Canonical Correlation Analysis (CCA) and its variants are invariant to invertible linear transformations, while neural network training is not. This makes CCA fail basic sanity checks on the layer-wise similarity of networks with different random initializations (Section 6.1 of Kornblith et al.).\n* CKA is much faster: We find CKA to be 10x faster than the Procrustes metric and up to 30x faster than CCA and its variants. This speed-up allows us to get results for much larger architectures. In addition, as shown in Appendix B.1., both the CKA and Procrustes metric show a similar increase in similarity among layers for a robustly trained model, with CKA maintaining a more distinct visual structure. While it is clear that different metrics will lead to somewhat different similarity numbers, we believe our high-level conclusions will hold across valid metrics.\n\n**Impact of accuracy on similarity plots:** We carefully design our experiments to include networks that cover a wide accuracy range. For example, we consider Wide-ResNets with width 1 to 10 (fig 3 in main paper, fig 8,9,11,12,13,15 in appendix). 
The size of the network increases near-quadratically with width and impacts subsequent accuracy in robust training. For example, WRN-28-1 with width=1 is two orders of magnitude smaller than the largest WRN-28-10 networks and achieves much lower clean and robust accuracy. By ablating across a large range of network widths, we ensured that our conclusions are agnostic to the accuracy/performance of the network. In the rebuttal, we’ve also added another ablation using models from RobustBench [2] (fig 27 in appendix) where we show that our observations are agnostic to various factors in the robust training setup. \n\n**Figure 2 and 3 for adversarially perturbed data:** From Figures 8 and 10 in the Appendix, we establish that for the robust networks we train, adversarial and benign representations are essentially identical. Thus, for space and clarity considerations, we omitted copies of Figures 2 and 3 with adversarially perturbed data. If the reviewer thinks this is a critical addition, we can add this in a straightforward manner to a future revision and the camera-ready. Further, comparing adversarially perturbed representations for benign networks is not particularly meaningful since they are not classifying the inputs correctly.\n\n**Size of Figure 4:** We will enlarge Figure 4 to reside on two rows in the camera-ready, utilizing the extra page. We cannot currently enlarge it without going over the page limit.\n\n**L185f:** Yes, we could indeed reproduce observations similar to those from Nguyen et al. for non-robust networks by varying the width of the network and observing the change in block structure. These results are in Column 1 of Figure 15 in the Appendix. \n\n**L252f:** We apologize for the omission of the training loss in Figure 6. We logged the training accuracy and found that it increased steadily, but the loss was unfortunately not logged. We have the checkpoints and will add the training loss values to the plot in a further revision later this week.\n\n**‘different adversarial attack (with the same threat model)’:** We have included experiments with different attacks for the same threat model in Section F.3 of the Appendix, observing similar results. \n\n**‘JPEG compression doesn't seem like an additive perturbation’:** We use ‘JPEG’ to refer to the JPEG attack developed in [1], not standard JPEG compression. The attack involves using JPEG compression to compress an image and then applying an adversarial perturbation to the image in the JPEG encoding space. Details of the attacks we used are included in Section A.2 of the Appendix.\n\n[1] Kang, Daniel, Yi Sun, Dan Hendrycks, Tom Brown, and Jacob Steinhardt. \"Testing robustness against unforeseen adversaries.\" arXiv preprint arXiv:1908.08016 (2019).\n\n[2] https://robustbench.github.io/\n", " **“[I]t’s important to have a negative baseline—how low is “low”?”**:\n\nWe agree that context on the expected range of CKA is necessary for its use as a meaningful tool. In our experience, we have found that CKA is appropriately low when fed data that represents a random baseline. 
When creating a CKA similarity plot between all of the layers in a single network in which one of the activations in each comparison is the output of shuffled data, the CKA similarities for all of the comparisons are quite low (less than 0.1) and the plot appears entirely black (when visualized with the same color scale as the other plots in our paper).\n\n**“I would suggest a follow-up experiment to more thoroughly test this claim: freeze early layers (blocks 1 and/or 2) early in training. If early layers aren’t involved in the overfitting and training instability, then the freezing should have minimal effect on these phenomena”**:\n\nWe thank the reviewer for suggesting this intriguing experiment, as we feel that it helps to provide a conceptual reference for the utility of our work and a practical application of the techniques we present. We have conducted experiments to test the impact of layer freezing according to the epochs indicated by our analysis. Our results indicate that a model’s accuracy increases the fastest when its internal representations are converging the fastest towards the final learned representations. To test this, we froze the first block of a WRN-28-5 at different points in the first 40 epochs of training and recorded the maximum adversarial validation accuracy achieved over 100 epochs of training. As shown in the table below, as the epoch at which freezing occurs increases, accuracy starts off low (below 43%) and steadily increases until epoch 20, after which it levels off around 46-47%. This matches the trend of the CKA similarity convergence of convolutional layers within block 1 of a WRN-28-5 network, as shown in Section F.1 of the Appendix. These results suggest that early layers are indeed not subject to the same degree of overfitting that is experienced in later layers.\n\n| Freezing Epoch | Max Adv. Accuracy |\n|---|---|\n| 2 | 42.73 |\n| 5 | 44.21 |\n| 8 | 44.76 |\n| 10 | 44.89 |\n| 12 | 45.55 |\n| 15 | 45.43 |\n| 18 | 45.86 |\n| 20 | 46.15 |\n| 22 | 46.11 |\n| 25 | 46.55 |\n| 28 | 46.45 |\n| 30 | 46.25 |\n| 32 | 46.23 |\n| 35 | 47.05 |\n| 38 | 46.51 |\n\n**“This result seems very specific and not particularly useful. I would encourage the authors to find a more general result”**:\n\nWhile we agree that this correspondence isn’t very impactful in and of itself, we believe that the primary contribution of this result is as an example of a novel discovery that can be made using CKA that is obscured when using other coarse-grained metrics like loss and accuracy. This result hints that representation similarity can be used to guide joint robust training (to multiple classes of adversarial attacks), by determining which threat models can learn similar representations and at what layer. Thus, joint robust training can be guided in a more careful manner, leveraging, for example, layer freezing and weight-based regularization. We have also since conducted preliminary experiments with common corruptions (Figure 28 in Section F.5 of the Appendix), and are happy to add more results comparing representations from average- and worst-case robust models to the camera-ready, if accepted.\n\n‘**insufficient description of Nguyen et al.**’ and ‘**limitations.. nuanced**’: We have updated the paper to address both of these issues.\n\n[1] Rice, Leslie, Eric Wong, and Zico Kolter. \"Overfitting in adversarially robust deep learning.\" International Conference on Machine Learning. 
PMLR, 2020.\n[2] https://advml-workshop.github.io/icml2021/\n", " We thank the reviewer for their detailed and constructive critique of our work and address their concerns below. The paper and supplementary have also been revised to account for all the reviewers’ feedback (see Summary of Revisions). \n\n**“It’s not clear how important of a problem adversarial robustness is … I would encourage the researchers to more systematically examine how their findings extend to other types of robustness”**:\nThe answer to this concern is a nuanced one, and we appreciate the opportunity to discuss it here. We believe that there is considerable merit in the study of adversarial robustness (see the overview of a recent workshop [2] for a nice summary), given the fundamental gaps it exposes in our understanding of the workings of complex machine learning models and the fact that it represents a theoretical paradigm shift when considering issues of convergence and generalization. However, it is true that focusing too narrowly on Lp threat models, as argued by Gilmer et al., is a concern for the field. Hence, in Section 6, we have carried out experiments with other threat models as well, and we agree that including other types of corruption in our analysis would strengthen our arguments. Given the flexibility of our codebase, we were able to run experiments looking at the cross-layer similarity of models robust to different types of common corruptions (Figure 28 in Section F.5 of the Appendix). We find that the effects of increased local similarity and differences between the layer-wise similarity plots for robust vs. non-robust networks are not as pronounced as they are in networks robust to worst-case adversarial attacks, implying that adversarial examples do have a particularly strong effect on representation similarity. \n\n**“Are the results from a single instance of each model? If so, I would strongly encourage the authors to repeat their analyses in triplicate at a minimum”**:\nWhile the reported results are from single instances, we trained multiple copies of models in most of our experiments to verify the validity of our conclusions. We have found that CKA exhibits very low standard deviation when comparing models trained with the same parameters. In our revision, we have included a new Figure 29 in Section F.6 in the Appendix that displays the standard deviation of a CKA computation (for both robust and non-robust networks). The results show a very close alignment between the three computations. In the camera-ready, if accepted, we will report standard deviations along with CKA values for our important results throughout.\n\n**“Previous work examining the relationship between representational dimensionality and adversarial robustness … How does adversarial training reduce representational dimensionality?”**\nWe thank the reviewer for making a connection between the current work and previous work on the representation dimensionality of robust networks. Our observation complements the previous work in showing that representations in adversarially robust networks do exhibit a lack of differentiation among layers, which we hypothesize is linked to greater usage of model capacity (e.g., disappearance of block structure in cross-layer similarity - Fig. 2). 
However, we believe that examining a causal effect of adversarial robustness on intrinsic representation dimensionality deserves an independent and rigorous evaluation of its own, including careful experimental design, which falls outside the scope of current work. We’ll be sure to discuss this interesting connection in the camera-ready version, if accepted.\n\n**“The claim that “Deeper layers overfit during robust learning” seems … Is this overfitting a necessary component of robust learning?”**\nWe clarify that our objective is to understand why overfitting happens in adversarial training. Overall, it is well-known that adversarial loss at the output layer overfits [1]. Our contribution is to demonstrate that the level of overfitting differs across layers, as deeper layers overfit more during robust learning. We have also conducted experiments with early stopping, and found a large difference in the representations learned at later layers with and without early stopping, in certain cases. We can add a further detailed discussion in the camera-ready, if accepted. If the reviewer believes that our “Deeper layers overfit during robust learning” phrase is still causing a misunderstanding, we are happy to update it. \n\n**“I think to make this claim the authors need to directly compare CKA(Benign, Perturbed) to CKA(Benign, Benign) and CKA(Perturbed, Perturbed).”**\nWe have included plots that display CKA (Benign, Perturbed) for multiple threat models in Figures 8, 9, and 10 in the Appendix. Plots showing CKA (Perturbed, Perturbed) for robust models were omitted due to their high visual similarity to the CKA (Benign, Benign) plots already included in the paper. If necessary, we can include a figure that directly compares the three types of plots.\n", " We thank the reviewer for their positive appraisal of our paper and insightful comments for improving it. We address their specific questions and concerns below. The paper and supplementary have also been revised to account for all the reviewers’ feedback (see Summary of Revisions).\n\n**‘claims should be sufficiently couched within the experimental settings studied’:** We acknowledge that most of our experiments are on ResNet based models but this is largely due to their prominence in SOTA benchmarks (see top-performing models on CIFAR-10 at https://robustbench.github.io/). For the datasets we consider, MLP-based architectures do not achieve good performance, as observed with previous works that find adversarial training vision transformers challenging [1, 2]. In the meantime, we will update the text to acknowledge these limitations. \nRegardless, in the updated Section B of the Supplementary material, we have added layer-wise similarity plots for 7 additional robust training methods from the RobustBench [3] benchmark, as well as plots that use 3 additional adversarial example generation methods to generate perturbed representations. These are still ResNets, but of different widths and depths, and cover a wide range of training methods.\n\n**‘Robust representations is a broad term’:** We have edited the text where necessary to make it clear that most of our results concern robustness to adversarial examples. We would like to clarify that starting in Line 27 of the Introduction, we make it clear that the ‘robustness’ we discuss throughout the paper is worst-case robustness with respect to adversarial examples, and not random noise. 
In addition, the revised version of the paper now also contains baseline experiments with models robust to common corruptions (taken from the Robustbench model zoo) in Section F.5. Thus, our codebase is easily extensible to other types of corruptions and we will add further experiments to the camera-ready, if accepted.\n\n**‘later layers overfit…findings are confined to the L-infinity definition of robustness’:** We are glad the reviewer found our experiments on learning dynamics interesting and appreciate the reference to “Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations”. We will add a discussion relating our results to this paper in the camera-ready, if accepted. We have also now added results on the learning dynamics of models trained to be robust to the other threat models we consider in Section F.2 of the Supplementary. These confirm our findings from the L-infinity threat model that overfitting largely happens in later layers.\n\n\n[1] Shao, Rulin, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, and Cho-Jui Hsieh. \"On the adversarial robustness of vision transformers.\" arXiv preprint arXiv:2103.15670 (2021).\n\n[2] Edoardo Debenedetti, “Adversarially Robust Vision Transformers”, Masters Thesis, EPFL (2022)\n\n[3] https://robustbench.github.io/ ", " We thank the reviewers for their thoughtful and constructive engagement with the paper. As reviewers ourselves, we greatly appreciate the reviewers’ efforts at providing thorough and insightful commentary on the paper. \n\nWe are glad that multiple reviewers found the paper to be well-grounded and motivated (**EXva, 9y9Q**), with an interesting approach (**HbNq**). We appreciate the acknowledgement of systematic and well-formulated experiments (**mj83, 9y9Q**) leading to interesting results (**mj83, EXva, 9y9Q**) and well-described insights (**9y9Q**). Keeping in mind the importance of open-source code for reproducible research, we are happy that our code was deemed to be ‘clean and well-documented’ (**HbNq**).\n\nWe have revised both the main body and supplementary material in accordance with suggestions from the reviewers. Due to space limitations on the main body for the revision, all **new experiments and results are in the self-contained Section F in the Supplementary Material for ease of reading**. We will move results and explanations to the main body for the camera-ready (utilizing the extra page) as appropriate, if accepted. We list the key revisions to the paper and additional experiments in the Supplementary (tagged by reviewer id) below:\n1. Corrected typos and tightened captions throughout\n2. Added clarifications to the Introduction regarding the scope of architectures and types of robustness considered (**EXVa**, **9y9Q**)\n3. Justification of CKA in Section 2.2 (**4F8X**, **HbNq**)\n4. Updated limitations in Section 7 (**4F8X**)\n5. Layer freezing experiment details and results in Section F.1 of the Supplementary (**mj83**, **4F8X**, **EXVa**)\n6. Threat model training plots in Section F.2 of the Supplementary (**9y9Q**)\n7. Layerwise similarity plots for further attack types in Section F.3 of the Supplementary (**HbNq**)\n8. Layerwise similarity plots for more training methods and architectures in Section F.4 of the Supplementary (**4F8X**,**9y9Q**)\n9. Layerwise similarity plots for common corruptions in Section F.5 of the Supplementary (**9y9Q**)\n10. 
Baseline experiments in triplicate (**EXva**)\n", " ‘**analysis … actually be used in practice**’: This is a great question (also asked by Reviewer mj83) and one which we hoped would arise from our analysis of robust representations. We will just note that the actual training of more robust networks is somewhat tangential to the goals of this paper, which were mainly to explore properties of robust representations from current training methods. \n\nNevertheless, we envision **3 key ways** in which our results could be used to train better robust networks in a more efficient manner:\n- *Staggered freezing of layers during training*: Our results from Section 5 indicate that early layers do not need to be updated post a few epochs of training, since their learned representations do not change much during training. As pointed out by Reviewer EXva, we have conducted experiments to test the impact of layer freezing according to the epochs indicated by our analysis. Our results indicate that a model’s accuracy increases the fastest when its internal representations are converging the fastest towards the final learned representations. To test this, we froze the first block of a WRN-28-5 at different points in the first 40 epochs of training and recorded the maximum adversarial validation accuracy achieved over 100 epochs of training. As shown in the data below, when increasing the epoch freezing occurs at, accuracy starts off low (below 43%) and steadily increases until epoch 20, after which it levels off around 46-47%. This matches the trend of the CKA similarity convergence of convolutional layers within block 1 of a WRN-28-5 network, as shown in Section F.1 of the Appendix. These results suggest that knowledge of a network’s training dynamics derived from CKA analysis can be used to increase the efficiency of training through the freezing of early layers.\n\nFreezing Epoch | Max Adv. Accuracy\n---------------- -------------------\n 2 42.73\n 5 44.21\n 8 44.76\n 10 44.89\n 12 45.55\n 15 45.43\n 18 45.86\n 20 46.15\n 22 46.11\n 25 46.55\n 28 46.45\n 30 46.25\n 32 46.23\n 35 47.05\n 38 46.51\n\n*Increasing layer-wise differentiation during training*: Our results show there is a much greater degree of local similarity among learned representations for robust networks when compared to benign ones. This similarity also increases when the training budget is increased. We suspect this lack of layer-wise differentiation may be part of the reason why robust networks do not achieve high accuracy on clean data. Using regularization methods that promote increased layer-wise differentiation during robust training may alleviate this issue, and is a compelling and immediate experiment for future work. \n\n*Choosing threat models for joint robust training*: Past work on the training of models jointly robust to multiple types of attacks has largely focused on different types of Lp perturbations. We posit that our analysis of the similarity of representations obtained from different threat models can be utilized to determine against which sets of threat models joint robustness is possible, and if the model has sufficient capacity for that purpose. Further, the layer-wise analysis can be used to add appropriate regularizers to ensure convergence, an issue exacerbated by the presence of multiple types of adversarial examples.\nWe will add a summary of this discussion to the camera-ready as well, if accepted. 
The results on layer freezing will be added to Section 5, and the other future work to Section 7.\n\n‘**Limitations are not properly explained**’: We apologize for the lack of a more detailed limitations section and have revised the paper to clarify its limitations. In particular, we acknowledge limitations with respect to the dependence of the results on the particular metric used, the sometimes tenuous link between properties such as accuracy and layer-wise structure, and that detailed experiments on improved robust training needed to be pushed to future work for space and time considerations.", " We thank the reviewer for their considered critique of the paper and address their concerns and comments below. The paper and supplementary have also been revised to account for all the reviewers’ feedback (see Summary of Revisions).\n\n‘**analyze whether two inputs from the same class have similar representations**’: This is a very interesting question that we have already investigated in the paper. In Section B.5. of the Supplementary, we derive a small extension of the CKA metric needed to answer this question. We split up the CKA computation into an intra-class and inter-class component. We summarize our findings here and request the reviewer to go through our detailed results in B.5. We find that while there are far fewer intra-class terms in the CKA computation (for balanced label sets), they have a far higher contribution than the inter-class terms. Thus, we do observe that for both robust and non-robust networks, representations from the same class have a high degree of similarity, while those from different classes are far less similar. In addition, we tested whether inter-class similarities were lower for robust networks than non-robust ones, to understand whether this could be contributing to their lower accuracy. This was true for the CIFAR-10 and Imagenette datasets, but not for Imagewoof. \n\n‘**only one version of robust network**’: We respectfully disagree with this comment from the reviewer, since the epoch-wise layer similarity plots in Figure 6 of the main paper and Figure 5 in the supplementary both consider another state-of-the-art training method, TRADES. This method uses a weighted sum of losses on benign and robust data to train robust models. Nevertheless, in the updated Section F of the Supplementary material, we have added layer-wise similarity plots for 7 additional robust training methods from the RobustBench benchmark, as well as plots that use 2 additional adversarial example generation methods to generate perturbed representations. \n\n‘**rationale of using CKA**’: We thank the reviewer for this clarifying question. We have added further details justifying our choice of CKA in Section 2.2, which we summarize here:\nCCA and variants have some undesirable properties: The original CKA paper (Kornblith et al., 2019) points out that Canonical Correlation Analysis (CCA) and its variants are invariant to invertible linear transformations, while neural network training is not. This makes CCA fail basic sanity checks on the layer-wise similarity of networks with different random initializations (Section 6.1 of Kornblith et al.).\nCKA is much faster: We find CKA to be 10x faster than the Procrustes metric and up to 30x faster than CCA and its variants. This speed-up allows us to get results for much larger architectures. 
In addition, as shown in Appendix B.1., both the CKA and Procrustes metric show a similar increase in similarity among layers for a robustly trained model, with CKA maintaining a more distinct visual structure. While it is clear that different metrics will lead to somewhat different similarity numbers, we believe our high-level conclusions will hold across valid metrics.", " We thank the reviewer for their positive appraisal of our paper and interesting questions. We address their specific concerns and comments below. The paper and supplementary have also been revised to account for all the reviewers’ feedback (see Summary of Revisions).\n\n‘**practical effects of the proposed analysis**': This is a great question (also asked by Reviewer 4F8X) and one which we hoped would arise from our analysis of robust representations. We will just note that the actual training of more robust networks is somewhat tangential to the goals of this paper, which were mainly to explore properties of robust representations from current training methods.\n \nNevertheless, we envision **3 key ways** in which our results could be used to train better robust networks in a more efficient manner:\n- *Staggered freezing of layers during training*: Our results from Section 5 indicate that early layers do not need to be updated post a few epochs of training, since their learned representations do not change much during training. As pointed out by Reviewer EXva, we have conducted experiments to test the impact of layer freezing according to the epochs indicated by our analysis. Our results indicate that a model’s accuracy increases the fastest when its internal representations are converging the fastest towards the final learned representations. To test this, we froze the first block of a WRN-28-5 at different points in the first 40 epochs of training and recorded the maximum adversarial validation accuracy achieved over 100 epochs of training. As shown in the data below, when increasing the epoch freezing occurs at, accuracy starts off low (below 43%) and steadily increases until epoch 20, after which it levels off around 46-47%. This matches the trend of the CKA similarity convergence of convolutional layers within block 1 of a WRN-28-5 network, as shown in Section F.1 of the Appendix. These results suggest that knowledge of a network’s training dynamics derived from CKA analysis can be used to increase the efficiency of training through the freezing of early layers.\n\nFreezing Epoch | Max Adv. Accuracy\n---------------- -------------------\n 2 42.73\n 5 44.21\n 8 44.76\n 10 44.89\n 12 45.55\n 15 45.43\n 18 45.86\n 20 46.15\n 22 46.11\n 25 46.55\n 28 46.45\n 30 46.25\n 32 46.23\n 35 47.05\n 38 46.51\n- *Increasing layer-wise differentiation during training*: Our results show there is a much greater degree of local similarity among learned representations for robust networks when compared to benign ones. This similarity also increases when the training budget is increased. We suspect this lack of layer-wise differentiation may be part of the reason why robust networks do not achieve high accuracy on clean data. Using regularization methods that promote increased layer-wise differentiation during robust training may alleviate this issue, and is a compelling and immediate experiment for future work. \n- *Choosing threat models for joint robust training*: Past work on the training of models jointly robust to multiple types of attacks has largely focused on different types of Lp perturbations. 
We posit that our analysis of the similarity of representations obtained from different threat models can be utilized to determine against which sets of threat models joint robustness is possible, and if the model has sufficient capacity for that purpose. Further, the layer-wise analysis can be used to add appropriate regularizers to ensure convergence, an issue exacerbated by the presence of multiple types of adversarial examples.\n\nWe will add a summary of this discussion to the camera-ready as well, if accepted. The results on layer freezing will be added to Section 5, and the other future work to Section 7.\n", " This paper studies the robustness of deep neural networks based on the perspective of representation similarity. Such a perspective provides an interesting direction to delve deeper into the properties of robust representation learning. The authors make several novel discoveries on \"salient pitfalls\" in robust networks. According to the observations, the author introduces several ways to design and train better robust networks. Strength:\n1. Analyzing the properties of robustness from the representation similarities is intuitive and has been explored by some previous works. However, the paper provided a very systematic study and provided lots of interesting discoveries. \n\n2. The paper is well written. The analysis and discussion are conducted in a very logical way. \n\n3. There are extensive experiments conducted to demonstrate the conclusion and observation. I believe the results are solid. \n\n\nWeakness:\n1. I appreciate the efforts in providing different views/frameworks for an important research problem. While I would also like to find more practical effects of the proposed analysis and observations. \n 1. Is there possible to involve more experimental results on how the proposed ways for better robust learning could improve upon the previous methods? yes. ", " This paper presents a probing analysis on clean (non-robust) vs. adversarially trained robust models. The paper's novelty is questionable and the insights gained are also sort of obvious, e.g., that the representational differences between inputs increase as one goes deeper down the layers. While this is true (and has been observed by the authors in their experiments), the authors didn't make an attempt to analyze whether two inputs from the same class are do have similar representations, which is a good thing. How does this compare between robust and non-robust models?\n\nThe paper introduces a lot of defence mechanisms (mainly centred around the min-max idea). However, it only employs PGD asdversarial training for obtaining a \"robust\" model. What about other approaches that provide defence mechanisms against potential attacks, e.g. the references 23, 32, 43 etc. that the authors themselves cite?\n\n Strengths:\n\n1. Good analysis work on robust vs. non-robust networks.\n2. Uses CKA to measure representational similarities.\n\nWeaknesses:\n\n1. Only one version of robust network considered - those trained with PGD based adversarial examples.\n2. The rational of using CKA is not appropriately justified.\n3. Some analysis could/should have been at a more detailed level, like one would still want the differences between different classes to be high and similarties between identical classes to be high. What observations can we make regarding this expected behavior in a robust and a non-robust network? 
\n How do the analysis from the observational differences between robust and non-robust networks can actually be used in practice? Can we use these insights to guide us towards constructing more robust models? Limitations are not properly explained. The expression \"fundamental disjunct between aggregate properties and layer-wise representation similarity metrics...\" is rather vague.", " This paper uses an existing method for comparisons of intermediate neural network activations (CKA) for a comparison of robust and non-robust networks. The authors analyze the similarities in different aspects and try to deduce insights in adversarial training. - The paper uses an interesting & potentially insightful approach to better understand how adversarially trained networks process information and how they differ from non-robust networks.\n- The attached code for this submission is well documented and looks clean, making the results more trustworthy.\n- It feels like the paper spends too much time describing plots/results compared to interpreting the results and presenting hypotheses for what results mean. - Missing related work: There has been previous work on comparing the features of robust vs. non-robust neural networks that should be cited, e.g. using feature visualizations [1].\n- L25: Reference missing.\n- L45f: Reference [18] should be placed behind Imagewoof and not behind Imagenet; also, a reference for ImageNet is missing. \n- L93: E.g., JPEG compression doesn't seem like an additive perturbation.\n- L108: If this is well known, I encourage the authors to support this sentence with (multiple) references.\n- L118: A more detailed critical discussion of the possible shortcomings of these metrics and why they don't impact the results of this paper would be good.\n- L123f: Why doesn't this impact your results? This definitely needs to be supported by a strong argument. \n- Figure 2/3: On which datasets were these metrics calculated? On the vanilla test data or on adversarially perturbed versions of the test data? What does the figure look like for the other data type (either benign or adversarial)?\n- Figure 2/3: Given that the clean accuracy goes down for adversarial training, I'm wondering whether the change in the similarity plots is really due to a special aspect of adversarial training or just due to the drop in performance, i.e. how do these plots look like for non-robust networks that are not trained until convergence but until they reach a similar test accuracy as the robust networks?\n- L163f: That seems a bit like an overstatement: The block structure is still clearly visible for 2 out of the 3 datasets - it just gets weaker.\n- Multiple typos/grammar mistakes that make it sometimes break the flow of the text, e.g. L172, L187, L201, L216, L221, L235.\n- Figure 4: These plots should be larger - the text is hard to read.\n- Figure 4: The first sentence of the caption doesn't read right/is difficult to parse.\n- L185f: Did the authors observe the same behavior as described by Nguyen et al. when they tried to reproduce their observations for non-robust networks? At the moment it is difficult to confidently say that the observation reported here is because of the robust network or because of the experimental setup used by the authors.\n- L213: If this is well known, I again encourage the authors to support this sentence with (multiple) references. 
Furthermore, this is actually not such a clear property, and there are specific attacks that just aim to create adversarial examples that transfer well between different models.\n- Minor comment: The paper uses inconsistent notation - sometimes the authors say \"Figure\", sometimes \"figure\"; sometimes \"Appendix\", sometimes \"App.\".\n- L252f: Where can we see both validation and training loss? The figure only shows one \"loss\" but doesn't say which one it is.\n- L265: I wouldn't call these \"adversarial perturbations\" but rather use the more commonly used expression common corruptions. Especially, since at least for JPEG compression there is nothing you can optimize - so that doesn't really fit the overall adversarial framework.\n- Section 6: What are the specific parameters of the JPEG, snow, and Gabor corruptions?\n- L289: What is the conclusion/interpretation of the results?\n\n[1] Leveraging Sparse Linear Layers for Debuggable Deep Networks. Eric Wong, Shibani Santurkar, Aleksander Mądry. 2021 - While the authors mentioned criticism on the similarity metric they use (CKA), they don't really explain why this doesn't apply to their analysis. This should be properly addressed.\n- It is also unclear whether the results differ if a different adversarial attack (with the same thread model) was used for generating the adversarial perturbations - this should ideally also be addressed.", " The authors examine the effects of adversarial robustness training on representation. Specifically, they use CKA to compare robust vs non-robust networks, benign vs. adversarial inputs, and how these comparisons change over the course of learning. They find the following results:\n- Robust representations are less specialized; distant layers are more similar in robust networks, and block structure is weaker\n- Early layers in robust networks are largely unaffected by adversarial examples; representations are similar for benign vs. perturbed inputs\n- Deeper layers overfit during robust learning\n- Models trained to be robust to different threat models have similar representations Strengths: The paper follows in a well-established tradition of using representational similarity analysis to understand neural network behavior and training interventions. The analyses are well-motivated and straightforward. The results are generally presented clearly and easy to understand. The results are interesting.\n\nWeaknesses: The paper seems somewhat limited in scope: It primarily addresses adversarial (worst-case) robustness, which is only one type of robustness. I'm not totally convinced of the utility of this work; it could at the very least do a better job situating itself within existing robustness research. It's unclear if any of the experiments were run in replicate. The figures captions could be more informative and self-contained.\n\nOverall, I think this work could be suitable for publication if it is sufficiently revised. This work primarily examines adversarial (i.e. worst-case) robustness, which is one of many types of robustness, among others including average-case robustness (Hendrycks and Dietterich, Benchmarking Neural Network Robustness to Common Corruptions and Perturbations) and natural adversarial examples (Hendrycks et al.). It’s not clear how important of a problem adversarial robustness is (see Gilmer et al.’s Motivating the Rules of the Game for Adversarial Example Research). The authors examine robustness to Gabor, snow, and jpeg attacks, but the analysis and results are limited. 
I would encourage the researchers to more systematically examine how their findings extend to other types of robustness.\n\nAre the results from a single instance of each model? If so, I would strongly encourage the authors to repeat their analyses in triplicate at a minimum. If this is computationally cost-prohibitive, then perhaps they could focus on the most important results.\n\nThere is relevant previous work examining the relationship between representational dimensionality and adversarial robustness (Leavitt and Morcos, Linking average- and worst-case perturbation robustness via class selectivity and dimensionality; Sanyal et al., Robustness via Deep Low-rank Representations; Nayebi and Ganguli, Biologically inspired protection of deep networks from adversarial attacks). These papers show that low rank representations confer adversarial robustness. Can you reconcile these results with your findings that adversarial robustness is associated with greater utilization of model capacity? Additionally, most of these works (certainly Leavitt and Morcos and Sanyal et al.) regularized representational dimensionality and found that it improved robustness; it would be quite interesting to investigate whether the inverse holds: does adversarial training reduce representational dimensionality?\n\nThe claim that “Deeper layers overfit during robust learning” seems like it could be leveraged to improve the generalization gap caused by robust learning. The simplest approach would be to simply stop training when the adversarial loss begins to rise (as in Figure 6). I assume this has drawbacks, but the authors don’t present the data (e.g. accuracy curves over training). Is this overfitting a necessary component of robust learning?\n\nLines 59-61: “On the other hand, the representations of benign and perturbed inputs from robust networks are indistinguishable from one another with regards to representation similarity metrics.” I think to make this claim the authors need to directly compare CKA(Benign, Perturbed) to CKA(Benign, Benign) and CKA(Perturbed, Perturbed).\n\nThe caption for Figure 2 should describe the model architecture(s) used to generate the results, as well as whether the data are benign or perturbed. There are other Figures which lack important experimental details, such as whether the data are benign or perturbed.\n\nComparing the CKA results in Figure 2d to the results in Figures 2a-c makes it clear that the similarity between robust and non-robust networks is lower than the similarity between networks of the same type, but I think it’s important to have a negative baseline—how low is “low”? Accordingly, I think the authors should repeat the analysis (CKA between robust and non-robust networks) using sample-shuffled data (and/or some other suitable random baseline).\n\nShowing that the block structure effect varies with the strength of adversarial training is a nice experiment and result.\n\nThe results presented in lines 174-188 (“Impact of robust training strength” and “Impact of architecture”) would be easier to interpret (and their motivation clearer) if you introduced them with Nguyen et al.’s finding that “block structure in the internal representations arises in models that are heavily overparameterized relative to the training dataset.” While you do cite their work (“Previously Nguyen et al. 
[28] observed that increasing network width, thus capacity, leads to emergence of block-structure in non-robust networks”), I think this is an insufficient description of their results and should be presented earlier.\n\n“Overfitting is predominantly visible in later layers”: I would suggest a follow-up experiment to more thoroughly test this claim: freeze early layers (blocks 1 and/or 2) early in training. If early layers aren’t involved in the overfitting and training instability, then the freezing should have minimal effect on these phenomena.\n\nLines 278-280: “When using CKA to compare against the Snow threat model, we observe that the highest average similarity is achieved with Gabor. This represents a novel insight into these threat classes, as correspondence between the two was not previously known.” This result seems very specific and not particularly useful. I would encourage the authors to find a more general result (see my earlier comment about examining different attack types).\n\nminor comments:\n\nLines 21-22: “...such as images, speech or text”. It’s a matter of personal taste, but I am a proponent of the Oxford comma: “...such as images, speech, or text”\n\nLine 22: What does “meaningful” mean in this context? Meaningful with regards to what?\n\nLine 187: “These results suggest that while increasing width in robust networks doesn’t lead a drastic shift in similarity of internal layer representations.” Typo?\n\nTypo in caption of Figure 5 (...to understand similairty)\n The authors devote a paragraph at the end of the discussion to the limitations of their study, which seems appropriate given the space limitations. I do, however, think the authors could be more careful and nuanced about some of their claims.", " The paper contrasts representation similarities of networks trained to perform image classification with and without adversarial noise. To do so, the authors measure the similarity of representations using the Centered Kernel Alignment (CKA) metric (as well two other similarity metrics in appendix) for CIFAR-10 and two subsets of ImageNet. The authors highlight 1) networks trained with adversarial noise have layers similar to one another compared to those of standard trained networks (which have a block structure) 2) representations in early layers are unaffected by adversarial perturbations both for standard and adversarially trained networks 3) networks trained with and without adversarial perturbations have similar representations until the last 10 or so layers 4) early layers converge faster and later layers overfit to local minima. The authors also analyze the similarity of representations when other perturbations are applied JPEG, Gabor, and Snow, finding This work investigates representations learned for image classification by contrasting the representation similarities of networks trained with and without adversarial noise—leading to several insights into properties of the learned representations as well as learning dynamics of modern networks. Such properties as the authors point out are not captured by aggregate performance metrics such as loss or accuracy, leading to insights about the learned representations. The paper is well-written, experiments clearly described, and motivation is well-grounded.\n\nThe experiments conducted are convincing and well-formulated. For example, multiple similarity metrics are compared, variants of the adversarial perturbations are explored, and experimental claims are well-founded. 
\nHowever, the authors' claims should be sufficiently couched within the experimental settings studied: supervised image classification for ResNet-based models on CIFAR-10 and two subsets of ImageNet. For example, the work only studies ResNet-based architectures yet claims to cover “DNNs with different architectures” (line 12) and the “impact of choice of architecture” (line 37). I would expect a comparison of “different architectures” to encompass for example transformer-based architectures, MLP-architectures, etc. I suggest the authors more explicitly couch claims in the introduction, abstract, title, and conclusions within the confines of the experimental settings studied. For example:\nThe comparison of representations for \"threat models\" (JPEG, Gabor, and Snow) in Section 6 was not particularly informative and seemed removed from the primary findings of the remainder of the paper. \n\nRobust representations is a broad term. This work studies robustness to random noise in the input. Yet, the authors intermix “robustness to adversarial examples” and often plainly use the term “robust network” to describe a particular type: robustness to noise (defined via L-p bounds on the input) throughout the work. For example, robust in the context of image classification can just as well refer to robustness with respect to rendering method ImageNet-Sketch, artifacts such as blurring (ImageNet-C), adversarial examples (ImageNet-A), or even robustness to natural transformations such as pose (Alcorn et al.). I suggest the authors clarify the wording to only include the specific definitions of robustness studied here.\n\nThe insights gained from the authors’ analysis are interesting and well-described. The finding of most value, in my opinion, is that later layers overfit (matching existing work relating overfitting in later layers to spurious correlations [1]). While the analysis sheds light on differences between networks trained with and without adversarial noise as well as their learning dynamics, the findings are confined to the L-infinity definition of robustness studied for supervised image classification using ResNet-based architectures (for the main claims in section 4 and 5).\n\n[1] “Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations” [https://arxiv.org/abs/2204.02937](https://arxiv.org/abs/2204.02937)\n included above Yes." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 4, 4 ]
[ "s1oLM5BKF3", "_tjIwXoMgsB", "YMUytX7qYxe", "vklEEhwbbZPp", "2BBuLo8GFI3", "3jVYrkflzXU", "CMF8HzGYsUtX", "zr6vVvkyLP", "nF_rZCebApx", "YriM7CZhnoC", "TOno0nETe5x", "nips_2022_SbAaNa97bzp", "YMUytX7qYxe", "YMUytX7qYxe", "zHE4_I-J1kO", "nips_2022_SbAaNa97bzp", "nips_2022_SbAaNa97bzp", "nips_2022_SbAaNa97bzp", "nips_2022_SbAaNa97bzp", "nips_2022_SbAaNa97bzp" ]
nips_2022_W-Z8n9HrWn0
Why Do Artificially Generated Data Help Adversarial Robustness
In the adversarial training framework of \cite{carmon2019unlabeled,gowal2021improving}, people use generated/real unlabeled data with pseudolabels to improve adversarial robustness. We provide statistical insights to explain why the artificially generated data improve adversarial training. In particular, we study how the attack strength and the quality of the unlabeled data affect adversarial robustness in this framework. Our results show that with a high-quality unlabeled data generator, adversarial training can benefit greatly from this framework under large attack strength, while a poor generator can still help to some extent. To adapt to the quality of the generated data, we propose an algorithm that performs online adjustment to the weight between the labeled real data and the generated data, aiming to optimize the adversarial risk. Numerical studies are conducted to verify our theories and show the effectiveness of the proposed algorithm.
Accept
The recommendation is based on the reviewers' comments, the area chair's personal evaluation, and the post-rebuttal discussion. This paper studies how synthetic data can be useful for improving adversarial robustness. All reviewers find the results convincing and valuable. The authors' rebuttal has successfully addressed the reviewers' concerns. Given the unanimous agreement, I am recommending acceptance.
train
[ "BS-0NgHtECu", "4HYQyArU4S5", "sECGNQXlE5L", "ECE-pvfKVMb", "H1h5v2CZyUO", "kEqC9TzkKL", "6BiWr1bWh5o", "Ps8T0bJkom1", "2aqzdEwE73R", "DtOwPKU_nlC", "Ms1YIziHEQF", "keBElZRFlzg", "oMS-Gnnv8n" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank you again for providing us with such a constructive and encouraging review! We will try to polish our paper to fully emphasize the motivation and make the mathematical formulas easier to understand in the camera-ready version.", " I thank the authors for answering the questions. One of my main concerns is that the motivation and reasons for giving the theorem are missing. However, I appreciate that the authors have adequately addressed most of my other questions. Therefore, I increase my score to 6.", " Thank you so much for your response! We updated the full paper (in supplementary material) with the new section \"Limitations of This Work\" (new Section A in the appendix) and the proof for Proposition 3 in Section G.3 in the appendix. As mentioned in the common response before, we will modify the main content to add essential connections to the new things in the appendix in the camera-ready version.\n\nPlease let us know if you have further suggestions or questions.", " 1. missing proofs: I can see how the proof for the Proposition 3 works. However, however simple the proof is, I still believe that you should have a proof in the appendix or at least a short line about why the risk becomes smaller.\n2. Thanks for correcting the typos.\n3. I can understand that it is hard to make a big change.\n4. I can understand that it is hard to make a big change.\n5. I think that you are encouraged to create a separate \"Limitations\" section \"in your paper\" (see https://neurips.cc/public/guides/PaperChecklist) and Page 12 is not a part of the main body in your paper.", " We appreciate your effort in reviewing our paper! There are some common updates in the paper that mentioned at the top comment, and below are answers for your questions:\n\n1. Weakness, how to train a good data generator: We appreciate you sharing this question with us! We agree that studying how to improve the generator quality is vital.\n \n * In our Example 2, we are showing how to better estimate the data distribution if $X\\sim (\\textbf{0},\\Sigma)$ for some unknown $\\Sigma$. With proper model assumptions, it is possible to improve the generator quality for simple models.\n \n * In terms of real practice, it is still an open question of how to generate better synthetic data. In [1], they try diffusion models and GAN models, and DDPM shows the best performance among all the models in their paper.\n \n In [1], besides comparing different generation models, they use different evaluation criteria to evaluate the performance of different models and how they affect the final adversarial training performance. For example, they consider coverage and complementarity and show that these criteria are related to adversarial training performance. Besides the metrics in [1], there are some other criteria, e.g., in [2].\n \n Once the relationship between the evaluation metrics for generators and adversarial training performance is well established, one can strive for better data synthesis under those metrics. This will be helpful to improve adversarial performance and, more importantly, can apply to many other applications for different robustness needs.\n\n2. Q1, $n_2$ in Figure 2: Thank you for your useful suggestion! Figure 2 aims to explain the most important observations in the label cost and the generator cost, which are most clear when $n_2\\rightarrow\\infty$. 
In our revision, we provide two additional figures similar to Figure 2 to show how $n_2/n_1$ affects the performance under ideal/poor (independent of $S_1$) generators in Section E. In general, \n\n * When $n_2/n_1\\rightarrow\\infty$, the label cost is minimized, and the generator cost only depends on the generator quality.\n\n * When $n_2/n_1\\rightarrow 0$, the sum of the label cost and the generator cost gets slightly decreased due to a bias-variance trade-off compared to $n_2=0$.\n\n * When $n_2/n_1$ is finite and away from zero, because the generator cost and the label cost are both related to $n_2$ and are in the same order, it is hard to describe exactly how the sum of the costs is changed. But in general, with an ideal generator, a larger $n_2$ could lead to better adversarial robustness. Our simulations also verify this.\n\n3. Q1, the small order term $o$: We use many $o$ in our derivations and formulas such that the representation of our result focuses on the most important term. In general, for consistent $\\widetilde{\\theta}(\\epsilon)$ and $\\widehat\\theta(\\epsilon)$, the $o$ term is always a negligible term.\n\n4. Q2, \"unbiased\": In line 127, the word ``unbiased\" refers to the scenario in Assumption A2, i.e., an asymptotically \"unbiased\" generator satisfies that $\\|\\mathbb{E}_{\\mathcal{P}_a\\otimes\\mathcal{P}_y} \\partial/\\partial\\theta_\\epsilon l_\\epsilon(X,Y,\\theta_\\epsilon)\\|=o(1)$. Thanks for pointing out this issue. We updated the statement in our revision.\n\nReferences:\n\n[1] Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, Dan Andrei Calian, and Timothy A Mann. Improving robustness using generated data. Advances in Neural Information Processing Systems, 34, 2021.\n\n[2] Ahmed Alaa, Boris Van Breugel, Evgeny S Saveliev, and Mihaela van der Schaar. How faithful is your synthetic data? sample-level metrics for evaluating and auditing generative models. In International Conference on Machine Learning, pages 290–306. PMLR, 2022.", " Thank you for your constructive comments for our paper! We have some common response to all reviewers about our updates in the paper, and below are some answers for your specific questions:\n\n1. Weakness, theory when $p=\\infty$: Thanks for pointing out this! Our general results of Theorem 1 and the intuitions for the label cost (Section 4.2) and generator cost (Section 4.3) all apply to $L_\\infty$ attack. In our revision, in the appendix, we provide simulation experiments in $L_{\\infty}$ attack. All the empirical behavior observed in $L_{\\infty}$ attack are the same as in $L_2$ attack. \n\n2. Weakness, generalize to multi-class classification: As mentioned in our reply to Reviewer RigD (Weakness, scenarios where assumptions do not hold), the actual assumptions required by Theorem 1 are not as tight as the assumptions that appear in the paper. Therefore, Theorem 1 may hold for multi-class classification as long as the loss function has a good shape and the gradient and Hessian matrix are well-behaved. Besides, our intuitions in the label cost and generator cost are also applicable to multi-class classifications.\n\n3. Weakness, experiments are in the appendix: We appreciate your suggestion. Because of the nine-page limit, we did not have enough space to put more experiment results in the main content. As mentioned in the common response, we added one extra page in the appendix (to be integrated into the main content in the camera-ready version). 
On this extra page, we added some experiment result summaries.", " We appreciate your effort in reviewing our paper! We have some updates and common response to all reviews at the top. Below is a list to answer your questions:\n\n1. Weakness, proofs in the appendix: Thank you very much for this useful suggestion! We did not provide this in the main content because of the page limit. As mentioned in the common reply, we added a proof sketch on the extra page in the full paper (in the supplementary material) and will move it to the main content in the camera-ready version.\n\n2. Weakness, scenarios where assumptions do not hold: Conceptually, we expect to derive important theoretical insights under relatively simple model and these insights generalizes well under complex model (via empirical justifications). On the other hand, although we need strong assumptions regarding the shape of the loss function, moments of the loss gradient, and the moments and eigenvalues of the Hessian matrix, they can be relaxed to certain degree. To avoid explaining mathematical conditions too much and losing the focus on our main insights, we choose to keep our assumptions in our representation. Instead, we added a small section (Section D) in the appendix to explain this. We also explained the possible outcomes when the assumptions do not hold. \n\n3. Weakness, equations are hard to follow: We appreciate you sharing your feelings about reading our paper. We would try to improve the readability of the formulas.\n\n4. Weakness, lack of baselines: Our paper focuses on theoretical investigation of some phenomenons in adversarial training, so it is not a methodology development paper that aims to beat SOTA results. Therefore, most of our simulations are used to justify theoretical finding, rather than competing performance with other methods. There is one exception where we propose algorithm 1 to determine proper weights.\n \n For the real-data experiments of this algorithm, based on RobustBench (https://robustbench.github.io/), the best robust accuracy of CIFAR-10 under $\\mathcal{L}_{\\infty}$ attack for WideResNet28-10 and WideResNet34-10 are around 62\\% to 63\\%, which is similar to the result in our paper (Table 3 in the appendix).\n\n5. Q1, Taylor expansion in (4): The Taylor expansion in $R(\\widetilde{\\theta}(\\epsilon),\\epsilon)-R(\\theta_\\epsilon,\\epsilon)$ aims to transform the difference in the loss into the distance between $\\widetilde{\\theta}(\\epsilon)$ and $\\theta_\\epsilon$. This is a standard step to linearize complicated representations.\n\n6. Q2, simulations v.s. assumptions: In our simulation studies, we use $X\\sim N(\\textbf{0},\\Sigma)$ and $Y$ is designed based on Assumption A1. Therefore, the simulation scenario satisfies the assumptions.\n\n7. Q4, when assumptions do not hold: As mentioned before, we added discussions on the some possible outcomes if the assumptions do not hold. \n \n In addition, some of our numerical experiments consider scenarios where the assumptions do not hold. For example, in the right panel of Figure 4, the poor generator (correlated) is a scenario that the generator is related to the labeled data set $S_1$ but not good enough (i.e., also violates Example 2). In this case, there is a correlation between the generated data $S_2$ and $S_1$, and the green curve in Figure 4 is quite different from the red curve (a poor but independent generator).", " Thank you very much for reviewing our paper! 
Besides the common response to all reviewers, below is a list of answers for your specific questions:\n\n1. missing proofs: We did not provide the proof of Proposition 2 and 3 in our submission. For Proposition 2, it is only a simple extension of Theorem 1. In terms of Proposition 3, its statement already implies how we prove it. The last sentence, \"a larger $n_2$ always gives a better $\\widetilde{\\theta}$\", is the key conclusion for Proposition 3, and the other sentences are logic derivations leading to this conclusion. We added a short proof for Proposition 2 in the appendix Section F.3.\n\n2. typos: Thank you very much for figuring out our typos, and we have corrected them in the main content (the full version in the supplementary material).\n\n3. paper structure: We appreciate your suggestion on our paper structure. Due to the current nine-page limit, we cannot make big changes to the main content now. We will consider this in the camera-ready version.\n\n4. Q4, notation $\\otimes$: Thank you for pointing out this issue! We will take this into account in the camera-ready version.\n\n5. Limitations, negative social impacts: Our paper mainly focuses on the theories and uses public data sets in our real-data experiments. We are not aware of any direct negative social impact by our results. We mentioned this in the checklist on Page 12.", " We greatly appreciate the reviewers reviewing our paper and providing many insightful suggestions.\n \nWe update the full paper (with the appendix) in the supplementary material to fix the minor issues and address reviewers' concerns. Below is a summary of important updates:\n\n1. We update a new Section A.2 to do simulation studies under $\\mathcal{L}_{\\infty}$ attack as a part to address the weakness mentioned by Reviewer bXme besides the theory part. Briefly speaking, all the observations are the same as for $\\mathcal{L}_2$ attack.\n\n2. A new Appendix Section D is updated to explain how Assumption A1 can be relaxed and the possible outcomes when the assumptions do not hold. This new section aims to answer the questions of Reviewer RigD and Reviewer bXme. In short, when the loss has a good shape and the data distribution is well-behaved, our results can be applied to other scenarios, e.g., other loss function, or multi-class classification.\n\n3. A short proof of Proposition 2 is added in Section F.3 to address the concern of Reviewer Htxg.\n\n4. We add in Appendix Section E some new figures similar to Figure 2 but with changing $n_2$ based on the comment of Reviewer VSRZ. Briefly speaking, with $n_2/n_1$ increases, for an ideal data generator, the label cost gets larger and the generator cost is reduced. For poor data generators, it is hard to effectively reduce the generator cost.\n\n5. Due to the nine-page limit in the revision stage, we add one extra page of content at the beginning of the appendix to include the proof sketch (Reviewer RigD) and some summary of numerical experiments (Reviewer bXme). We will integrate this page to the main text in the camera-ready version.\n\n * To summarize the numerical results, we conduct various simulations and real-data experiments to verify the correctness of our theory. 
\n \n * For simulation, we verify: (1) given the ideal data generator, the performance of $\\widetilde\\theta(\\epsilon)$ is better than $\\widehat{\\theta}(\\epsilon)$ when $\\epsilon$ deviates from zero; (2) the better quality of the data generator implies the better performance of $\\widetilde\\theta(\\epsilon)$; and (3) balancing the weight between $S_1$ and $S_2$ improves the performance.\n \n * For real-data experiments, we verify that the label cost and the generator cost are important factors in deep learning. We show (1) adding more unlabeled samples from the ideal generator will improve adversarial robustness, and (2) adding unlabeled samples from a poor generator with a small $n_2$ will slightly improve the performance.\n\nCurrently, these major changes are not reflected in the main text due to the strict page limit. In the camera-ready version, we will update the main text accordingly, e.g., add related discussion or remarks. All the revision changes in the supplementary material are highlighted in blue color.", " This paper includes several theoretical analyses about introducing additional artificial data in adversarial training: its benefits and the relationship between the generator performance and training performance. The analysis starts from their main theorem that decomposes the access risk into two parts: label cost (from mislabeling the artificial data) and generator cost (from the poor performance of the generative model).\n\nThen, the authors use this decomposition to investigate their two research questions further. The answer to the first question is that, assuming an ideal generator (with no generator cost), the excess risk after introducing artificial data is smaller than the excess risk of vanilla adversarial training. This result shows the benefits of introducing artificial data in adversarial training. Also, the author decomposed the generator cost further to bias and variance terms and showed that the bias term is upper bounded by the dissimilarity between the data distribution and the distribution of generated data. Because the variance term converges to 0 as the number of generated samples grows, the result shows that we can reduce the generator cost by having a better-quality generative model. This paper's last contribution is the strategy of weighting the usual training samples and the introduced artificial samples.\n Originality: To the best of my knowledge, the paper contains novel ideas.\n\nQuality: \n\n[[Strength]]\n1. Assuming the correctness of lemmas, the proof of Theorem 1 seems correct.\n\n[[Weakness]]\n1. The proofs for Proposition 2 and Proposition 3 are missing. I don’t think they are trivial statements, but the proofs are neither in the main part nor the Appendix.\n\nClarity: There are a few typos and grammatical errors. See Questions for more details.\n\nSignificance:\n\n[[Strength]]\n1. This paper is dense with theoretical discussions on adversarial training. Considering the lack of theoretical understanding of adversarial training in adversarial machine learning research, I believe that this paper provides valuable insights into the field. 1. Where are the missing proofs for Proposition 2 and Proposition 3? If they do not need proof, please explain. I consider those proofs missing quite seriously, so if the proofs are in the paper but I missed them somehow, please let me know so that I can adjust the rating.\n2. I recommend the authors proofread the writings once again. 
Some typos and grammatical errors that I spotted are as follows.\n\n - Line 83: “$\\theta_\\epsilon = \\min R(\\theta, \\epsilon)$” -> “$\\theta_\\epsilon = \\arg\\min_\\theta R(\\theta, \\epsilon)$” ($\\min R(\\theta, \\epsilon)$ is the value of minimum risk, but $\\theta_\\epsilon$ must be the model parameter minimizing the risk.)\n\n - Line 175: “The simulations results in” -> “The simulations result in” (Grammar)\n\n - Line 480: “Figure B.1” -> “Figure B.2” (It looks like that Figure B.1 is the result for Section B.2)\n\n - Line 516: “Lemma 1” -> “Theorem 1 (According to the structure of this section, the proof for Theorem 1 comes last.)\n\n - Equation after Line 547: “$\\mathcal P_a \\otimes \\mathcal P_\\epsilon$” -> “$\\mathcal P_a \\otimes \\mathcal P_y$” (I don’t think that $\\mathcal P_\\epsilon$ is defined in the article.)\n\n3. I don’t understand why the authors separated Section 5 from subsection 4.4, whereas it can be just a continuation of subsection 4.4. Also, it would be better to separate subsection 4.1 and the other parts, because subsection 4.1 looks to be the main insight and the other parts are analysis/design from the main insight.\n4. In my opinion, you use the notation \"$\\cdot\\otimes\\mathcal P_y$\" multiple times, consuming too much space in the paper. Defining a shorter notation for \"$\\cdot\\otimes\\mathcal P_y$\" and changing all the occurrences would save some space.\n This paper does not have a part assigned to address the limitations and potential negative societal impact. I understand the hardship of putting many results in the page limit, but it must be possible to condense the contents further to ensure space for this. (I don’t know whether moving some contents to the Appendix at this stage is allowed, but I recommend it if it is allowed.)", " This paper investigated the phenomena when using simulated data to improve adversarial robustness and provided the theoretical analysis. Specifically, the author decomposed the adversarial risk used in adversarial training to explain why and how unlabeled data can help and how its quality affects the resulting robustness. Strengths:\nThe authors clearly shape the research questions and provide the theorem and experiments to verify their points of view. The theoretical analysis for to answer the interesting questions that artificially data helps robustness is the main contribution of this paper. This paper is also generally well-written, delivering the main message clearly. \n\nWeaknesses:\nThe proofs for the proposed theorems/lemmas/propositions are all in the appendix, I would suggest including more details, especially the motivation and reasons when giving the theorem. \n\nIt is intuitive to ask if there have some scenarios that the theorems cannot explain; the authors use too many assumptions in giving the derivations, however, if the assumptions should be reasonable not for convenience. The discussion on counterexamples where the theorems might not hold would shed light on the paper.\n\nSecondly, in my opinion, most of the equations are hard to understand and follow, this might be due to the loss of the explanation before giving the statement. \n\nThird, the experiments seem to lose some baselines for comparison, it would be great if compare with more others.\n\nLast, the paper requires careful proofreading, e.g., \"minimiax\" in line 185, “… is the current model” in line 183, etc.\n 1. Why perform Taylor expansion for Eq. 4?\n2. 
When doing simulation, how do authors make sure they are aligned with your assumptions? For example, the datasets used are in sub-Gaussian distribution.\n3. When do the assumptions A1 and A2 hold?\n4. In what situation, the assumption would not hold. In this case, how would the robustness performance be affected?\n Please refer to my review.", " This paper provides statistical insights to explain why the artificially generated data improve adversarial training. In particular, it studies how the attack strength and the quality of the unlabeled data affect adversarial robustness in the adversarial training framework of Carmon et al. (2019); Gowal et al. (2021). The results show that with a high-quality unlabeled data generator, adversarial training can benefit greatly from this framework under large attack strength, while a poor generator can still help to some extent. It then proposes an algorithm that performs online adjustment to the weight between the labeled real data and the generated data, aiming to optimize the adversarial risk. Numerical studies are conducted to verify the theories and show the effectiveness of the proposed algorithm. I think this paper has the following strengths: \n\n1. The theoretical analysis is novel and the results seem sound, though I don't check the proof of the theories carefully. It is important to understand why and how using unlabeled data can benefit adversarial training. It is also important to understand how the quality of the unlabeled data generator affects the adversarial robustness. \n\n2. It proposes an algorithm that dynamically adapts the weight during the training of neural networks and shows its promising performance empirically. \n\n3. It is well-written and the ideas are clearly presented. The related works are properly discussed. \n\nHowever, this paper has the following weaknesses: \n\n1. There are some mismatches between the theory and the actual experiments. For example, it only considers $p=2$ in the theorems while in the experiments, it sets $p=\\infty$. Do the theoretical results still hold for $p=\\infty$? \n\n2. It only considers binary classification for the theoretical analysis. Could the results generalize to multi-class classification? \n\n3. It puts simulations and most real experiments in Appendix. I think it should at least summarize the results and findings in the main body of the paper. \n\n 1. Do the theoretical results still hold for $p=\\infty$? \n\n2. Could the theoretical results generalize to multi-class classification? The limitations and potential negative societal impact of the paper are properly addressed. ", " In this paper, the authors provide a statistical insight to explain the reason that using unlabeled generated data can improve model's robustness. The experiments verify that the theories in this paper are correct and can enhance the model's robustness. Strengths:\n\n1. The writing is good. This paper is very easy to follow.\n\n2. This paper gives detailed theoretical analysis of the reason that using unlabeled data can improve model's robustness. \n\n3. The experimental results strongly verify the analysis is correct.\n\nWeaknesses:\n\n1. As the quality of unlabeled data matters, how to choose a proper generator for adversarial training is vital. However, the theory in this paper cannot directly guide people to train or choose such a generator. On the other hand, it seems like judging a poor generator is much easier. Could the authors attempt to solve this challenge?\n 1. 
In Figure 2, the authors suppose that $n_2 \to \infty$. Later, the authors analyze the case where $n_2$ is finite. I think there may exist a gap. Could the authors add more details for different ratios of $\frac{n_1}{n_2}$, and analyze the remainder term $o$?\n\n2. What does ``unbiased'' mean in Line 127? I guess the authors judge a generator based on the gap between the real data distribution and a data distribution generated by a generator. So, an ideal generator means it can generate the real data distribution. But, how do we define a good one and a poor one without any threshold? I do not see any limitations." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2, 4 ]
[ "4HYQyArU4S5", "6BiWr1bWh5o", "ECE-pvfKVMb", "Ps8T0bJkom1", "oMS-Gnnv8n", "keBElZRFlzg", "Ms1YIziHEQF", "DtOwPKU_nlC", "nips_2022_W-Z8n9HrWn0", "nips_2022_W-Z8n9HrWn0", "nips_2022_W-Z8n9HrWn0", "nips_2022_W-Z8n9HrWn0", "nips_2022_W-Z8n9HrWn0" ]
nips_2022_t4vTbQnhM8
A Kernelised Stein Statistic for Assessing Implicit Generative Models
Synthetic data generation has become a key ingredient for training machine learning procedures, addressing tasks such as data augmentation, analysing privacy-sensitive data, or visualising representative samples. The quality of such synthetic data generators hence has to be assessed. As (deep) generative models for synthetic data often do not admit explicit probability distributions, classical statistical procedures for assessing model goodness-of-fit may not be applicable. In this paper, we propose a principled procedure to assess the quality of a synthetic data generator. The procedure is a Kernelised Stein Discrepancy-type test which is based on a non-parametric Stein operator for the synthetic data generator of interest. This operator is estimated from samples which are obtained from the synthetic data generator and hence can be applied even when the model is only implicit. In contrast to classical testing, the sample size from the synthetic data generator can be as large as desired, while the size of the observed data that the generator aims to emulate is fixed. Experimental results on synthetic distributions and trained generative models on synthetic and real datasets illustrate that the method shows improved power performance compared to existing approaches.
Accept
Decision: Accept This paper introduces a non-parametric (NP) Stein operator to allow implicit models to be used in KSD. This enables the use of KSD for evaluating the performance of implicit models, and the new test statistic shows better test power compared to the MMD test. Reviewers commended the clarity of the writing and found the contribution solid and novel. There were a few technical concerns regarding the proposed KSD as well as comparisons to MMD, which were mostly addressed in author-reviewer discussions. In revision for camera ready, I'd encourage the authors to include the additional experiments & discussions provided in the author feedback. Perhaps adding more MMD-based test baselines would strengthen the paper even further.
train
[ "4L5ab32O07v", "uaOTOZ-YAUo", "NzqLVYEGtjd", "UGBL24xUGvK", "gF_o3sNb5ko", "_bkAVmya47X", "74weE6g9PDN", "GExauiBuoKb", "3Mypwq9zmz9U", "ZlNmJ1zqs5", "GQGhqWhRfc_", "eRdRs3IXR5A", "rCZy6idlYox", "82E1jiFV4b" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the update. We are very pleased that we have addressed most of your concerns and you now support accepting it!\n", " I appreciate the detailed response from the author. It addresses most of my concerns. I will raise my rating.", " Many thanks for your suggestions. We have amended the text to further include the discussion in the revised version. In the main text,\nLine 249, we now explicitly point out why MMD is not compared against and include the pointer to the Appendix F.1 where detailed illustrations and experiments are given. The expanded text reads:\n\n\n``We note that MMD tests do not have controlled type-I error when $n \\ll N$, thus are not suitable in this setting. However, MMD-based methods to compare and criticise two generative models have been explored [Lloyd and Gharahramani, 2015, Sutherland et al. 2017]. Detailed discussions and illustrations of why MMD tests are not included in the comparison list are found in Appendix F.1.''\n\nAppendix F.1 also shows that MMD is not consistent in this highly imbalanced situation; Table 5 shows that its type-1 error does not reach the correct level. Instead, we compare to MMDAgg because it is a non-asymptotic test with controlled type-1 error even when $n \\ll N$, see Table 5 in Appendix F. Fig.1b assesses the effect of the number $N$ of generated samples (on the x-axis). As discussed in lines 260-265, the power of NP-KSD increases faster with sample size than that of MMDAgg.", " The clarification on the sample size and more elaborated discussion on MMD vs NP-KSD clears my original concerns.\nI suggest you to blend your discussion here into the main paper, which would help the presentation.\n", " Thank you for your question. As your question almost implies, the answer lies in the conditioning on the summary statistic $t$. Without conditioning on a summary statistic, our Stein operator is the same as the Langevin Stein operator (up to scaling), which we show in Proposition D.1 in Appendix D. \n\nThen observing that we can decompose the Langevin Stein operator into operators which characterise univariate conditional distributions, we estimate the score function of the univariate conditional distributions, which is much easier than estimating the multivariate score function (which is what one would naturally do for the Langevin operator) in particular when the data are in high dimensions. \n\nThe main use of this decomposition however and the key novelty of the paper lies in the conditioning on the summary statistic $t$. Each individual Stein operator now characterises a different conditional distribution. Looking at Eq.(29) in Appendix D, we use \n\n$$\\partial_i \\log (q ( x^{(i)} | t(x^{(-i)})) $$\nwhereas if in the Langevin operator we condition on $t(x)$ we would obtain \n\n$$\n\\partial_i \\log (q (x | t(x))) =\n \\partial_i \\{ \\log (q (x^{(i)} | t(x), x^{(j)}, j \\ne i )) q( x^{(j)}, j \\ne i | t(x)) \\}\n$$\n$$ = \\partial_i \\log( q (x^{(i)} |x^{(j)}, j \\ne i, t(x)) + \\partial_i \\log (q( x^{(j)}, j \\ne i | t(x)) .\n $$\n\n\nThus the sum of the component-wise conditional Stein operators given $t(x^{(-i)})$ is not the same as the conditional Langevin operator given $t(x)$. \n\n\nTo better understand the effect of the choice of summary statistic, it may be good to first not consider the summary statistic but just the conditional Stein operator. 
If we were to write our Stein operator as a second-order operator then it is the generator of a Markov process which picks an index $I$ from $\\{1, ..., m\\}$ at random, and if $I=i$, replaces the observation $x^{(i)}$ by an observation $x^{(i)'}$ which is drawn from the conditional distribution of $x^{(i)}$, given $x^{(j)}, j \\ne i$. This procedure is described for example in Reinert (2005). Our conditional Stein operator again picks an index $I$ from $\\{1, ..., m\\}$ at random, and if $I=i$, replaces the observation $x^{(i)}$ by an observation $x^{(i)'}$ which is now drawn from the conditional distribution of $x^{(i)}$, given $t(x^{(j)}, j \\ne i)$. In general this is no longer the generator of a Markov process, but we show in the paper that we can still give theoretical guarantees for its behaviour, and it is useful as an ingredient for our NP-KSD test statistic. \n\nRegarding the limitation phrasing, we appreciate that a better understanding of the choice of summary statistic would be very useful. In Section 5 of the paper we already discussed that the choice of summary statistic may have a large effect. We have now added a sentence, to read\n\n\n``Future work will devote more attention to analysing the choice of summary statistic.''\n \n\nWe hope that this explanation and addition alleviate your concerns.", " I greatly appreciate the detailed response from the author. It addresses some of my concerns. I still want to ask the following:\n\n1. In the limitations, I was asking more for a detailed analysis on a simple distribution with simple summary statistics. Your Poisson example is a good example. \n\n2. If a Langevin-Stein operator is applied to each of the conditional distributions, the resulting Stein operator is equivalent to applying the Langevin-Stein operator to the joint distribution, no? If so, this alone is not a novel Stein operator. \n\n", " Many thanks for your update! Thank you also for your follow-up question. To clarify the disadvantage of using just samples from the generator, as a simple example we take the mean value as the test statistic. Instead of an NP-KSD test, we now use a simple Monte Carlo test, with the MNIST data as an example. We calculate the mean value over each sample image (over all 28x28=784 pixel values), and we compare the mean of generated sample images with that of the real sample images.\nFor each generator, we generate 100 images and calculate the mean for each image. We order the means of the sampled images and reject the null hypothesis if the mean of the real data is too large or too small, compared to the sampled images, choosing as significance level $\\alpha=0.05$. For each data generator, we carry out 100 such tests. \nThe proportion of rejected tests at significance level $\\alpha=0.05$ is:\n\n\n| MNIST dataset | GAN\\_MLP | DCGAN | VAE | NCSN | Real samples |\n| ---- | ----------- | ----------- |----------- |----------- | ----------- |\n| mean statistic | 0.08| 0.03| 0.06| 0.05| 0.02|\n\n\nFor all generating methods, the proportion of rejected tests is close to the significance level, although the different generators produce samples with considerable differences from the real data, which can easily be spotted visually. This example illustrates that using the mean value in a Monte Carlo test, instead of as a test statistic in NP-KSD, is not powerful enough to distinguish the generators from the real sample. 
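For concreteness, a minimal sketch of this Monte Carlo mean test (in Python; the function and variable names are illustrative and not taken from our implementation):

```python
import numpy as np

def mc_mean_test(real_image, sample_images, alpha=0.05):
    """Two-sided Monte Carlo test based on the per-image pixel mean.

    real_image: one observed image, shape (28, 28).
    sample_images: images drawn from the generator, shape (n, 28, 28).
    Returns True if the null hypothesis is rejected at level alpha.
    """
    gen_means = sample_images.reshape(len(sample_images), -1).mean(axis=1)
    real_mean = real_image.mean()
    # Position of the real mean among the ordered generated means.
    frac_below = np.mean(gen_means < real_mean)
    p_value = 2.0 * min(frac_below, 1.0 - frac_below)
    return p_value < alpha

# Toy sanity check: for a generator that matches the data-generating
# process, the rejection rate should stay close to alpha.
rng = np.random.default_rng(0)
draw = lambda: rng.normal(0.13, 0.3, size=(28, 28))
rate = np.mean([mc_mean_test(draw(), np.stack([draw() for _ in range(100)]))
                for _ in range(100)])
```

As the sketch makes explicit, such a test only sees the scalar mean of each image, which is why it cannot separate the generators from the real sample. 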
This finding is in contrast to Table 1 in the paper, which shows that NP-KSD and the mean-conditioned variant NP-KSD\\_m reject almost all tests for the synthetic data generators. Hopefully, this answers your question; if we misunderstood your question, please let us know. \n", " Many thanks to the authors for their detailed reply. I appreciate the explanation of why the MMD-based methods I pointed out are unsuitable for the given context. This in fact addressed my major concern regarding the usefulness in practice. The only caveat I have here is the question of what would happen if we only use samples from the generator? Is the performance of such a test so much worse? A simple example to illustrate that would be great in my opinion. Also thank you for your thoughts on using goodness-of-fit tests to evaluate black-box generative models in meaningful ways; I agree with most. I have increased my initial score.", " You ask why we do not use MMD. MMD tests can be very useful in a two-sample problem for two sets of samples of comparable size, but in our setting the sample sizes can be very different. The sample size $n$ of the observed data is fixed and can be small, whereas the size $N$ of the sample generated by the synthetic data generator can be as large as desired. Prop.3.1 and Th.3.2, justifying NP-KSD, hold when $N$ tends to infinity with $n$ fixed. Our setting $n \\ll N$ makes MMD unsuitable as a comparison method. The same argument applies to the optimised MMD test [1], the linear time version from Jitkrittum et al. (2016), and your suggested references [1] and [2]; for completeness, these are included in the new version. Lines 232-234 allude to issues arising in MMD when the sample size is small. Line 249, pointing to why MMD is not compared against, is now expanded:\n\n``We note that MMD tests do not have controlled type-I error when $n\\ll N$, thus are not suitable in this setting. However, MMD-based methods to compare and criticise two generative models have been explored [Lloyd and Ghahramani, 2015, Sutherland et al. 2017]. Detailed discussions and illustrations of why MMD tests are not included in the comparison list are found in Appendix F.1.''\n\nAppendix F.1 also shows that MMD is not consistent in this highly imbalanced situation; Table 5 shows that its type-1 error does not reach the correct level. \nInstead, we compare to MMDAgg because it is a non-asymptotic test with controlled type-1 error even when $n \\ll N$, see Table 5 in Appendix F (not because of its kernel selection feature without data splitting). Fig.1b assesses the effect of the number $N$ of generated samples (on the x-axis). As discussed in lines 260-265, the power of NP-KSD increases faster with sample size than that of MMDAgg.\n\nRegarding learning the score using a neural network instead of the conditional distribution: we use the conditional scores because they are fast to estimate and lend themselves to a theoretical analysis. For sliced score matching, including one-dimensional score matching, Song et al. (2020) give conditions for the assumptions of Prop.3.1 and Th.3.2 to hold. To assess the effect of score matching estimators, we learned both the score function directly and the conditional score with the mean as the summary statistic, using NP-KSD and NP-KSD\\_mean; see Fig. 1 and Tables 1 and 2. 
NP-KSD learns the full distribution with the score function parameterised by a particular deep neural network which corresponds to the conditional marginal distributions, as \n$$ \\frac{\\partial}{\\partial x^{(i)}}\\log q(x) = \\frac{\\partial}{\\partial x^{(i)}}\\log q(x^{(i)}, x^{(-i)}) = \\frac{\\partial}{\\partial x^{(i)}}\\log q(x^{(i)}|x^{(-i)}) + \\underset{=0}{\\underbrace{\\frac{\\partial}{\\partial x^{(i)}}\\log q(x^{(-i)}) }} . $$\nDeriving further alternative score estimators and their theoretical behaviour, and assessing their performance in NP-KSD, will be part of future work. \n\nConcerning the minor issues which you raised, we have amended the KSD reference in the new version; thank you for pointing this out. While the wild bootstrap process in Chwialkowski et al. (2014) can deal with non-independent data, we used it because it is part of the standard KSD procedure in Chwialkowski et al. (2016). However, estimating the scores violates the assumptions of the wild bootstrap; we show in Appendix F.2 that it can lead to erroneous results. A simple permutation test cannot be applied when only one sample set is available.\n\nFinally, you raise the issue of the general value of a goodness-of-fit test. Comparing different generative models can be carried out via measuring sampling quality without a significance level, e.g. in the setting of [1], or via a relative testing procedure, e.g. [3]. In contrast, when only one particular generative model is of interest, a goodness-of-fit testing procedure can be very useful as the model assessment can be performed just by comparing the p-value with the significance level. \nIf the test does not reject a particular generator, then the test can aid the selection of reliable sample batches based on p-values, for example when small batches of high-quality samples are required, again without reference to any external models. \nThrough inspecting accepted and rejected samples, guidance can be obtained for the development of alternative synthetic data generators. Exploring this further is part of our future work. \n\nWe hope that these explanations have addressed your concerns. \n\nAdditional references: \n\n[1] Sutherland, D. J. et al. (2017) Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy. In ICLR (Poster).\n\n[2] Lloyd, J. R., and Ghahramani, Z. (2015) Statistical model criticism using kernel two-sample tests. Advances in Neural Information Processing Systems 28.\n\n[3] Kanagawa, H. et al. (2019). A kernel Stein test for comparing latent variable models. arXiv preprint arXiv:1907.00586.\n", " Thank you for your review. We are pleased that you appreciate that we tackle an important problem. However, what we propose is more than several tricks; it is a fundamentally different view of obtaining Stein operators and results in novel procedures. \n \n Addressing your major concerns, indeed the paper focuses on GAN-type data generators, as indicated by the first three words in the abstract --- synthetic data generators are to be assessed. In standard statistical problems, usually the distribution underlying the observed data is the target; not so here. Here our novel viewpoint comes into play: the ``target'' distribution is now the distribution from which the data generator generates samples. The test problem is to assess whether the observed sample could be viewed as coming from this target distribution.\n This viewpoint is explained in lines 52-56. 
\n \n Regarding theoretical guarantees, due to estimation error, it is not the case that NP-KSD $=0$ even when $p=q$. However, Proposition 3.1 and Theorem 3.2 give consistency guarantees. If $p=q$ then KSD $=0$, and consequently, for $N$, the number of generated samples (and resamples $B$), tending to infinity, NP-KSD would approach 0 in probability, under the conditions stated in the theoretical results, as long as the score function is estimated consistently. \n \n When we use the conditional score function, the Stein operator will characterise the conditional distribution, now playing the role of $p$. Hence we can only hope for NP-KSD tending to 0 when $p$ and $q$ have the same conditional distributions given the summary statistic on which we condition. Of course, one can easily construct two distributions which are different but have the same conditional distribution (for example, a Poisson distribution conditioned on being at most 1 is a Bernoulli distribution, but in general Poisson and Bernoulli distributions are different). The effect of conditioning is illustrated for example in Figure 1, where NP-KSD and NP-KSD\\_mean are compared: the first is unconditional, the second conditions on the mean. [You request such a comparison in your paragraph on limitations; it was already there.]\n \n We note that as there is no parametric model available for the distribution which generates the data, KSD cannot be used, and this has been a main motivation for the development of NP-KSD. A key novelty is that we estimate the KSD for the generator $G$, which can be carried out to arbitrary precision as we can generate as many samples from $G$ as desired; Proposition 3.1 gives the theoretical justification and Theorem 3.2 extends it to the sampling scenario. Naively estimating this score function would be very difficult in high dimensions. Instead, we estimate univariate conditional score functions and combine them in Equation (7) to yield a Stein operator, which is the basis of the KSD test. When it is possible to sample from the conditional distributions (which we do not assume to be the case), a similar idea has been carried out in Singhal, R. et al. (2019), as you kindly indicated.\nNot citing this paper was an oversight which has now been amended. However, the idea of representing the Stein operator as a sum of Stein operators actually goes back to Reinert (2005), a paper which is cited in the references. \n\nTo answer your question whether $\\mathbb R^m$ in line 86 should be $\\Omega_q$: it is intended as stated; as $\\Omega_q \\subseteq \\mathbb R^m $, the function is well defined. The requirement of the function belonging to the canonical Stein class ensures that the Stein identity holds. \n \n We would like to emphasise that NP-KSD is developed for a very unbalanced situation in which standard two-sample tests such as MMD tests and permutation tests fail. NP-KSD can treat the situation in which only a small number $n$ of observations are available (say, one image, so that $n=1$), whereas we can generate as many samples $N$ as desired, using the synthetic data generator. Typically $n \\ll N$; the asymptotic results hold in the regime that $N$ tends to infinity, with $n$ fixed. \n\nThe NP-KSD approach required novel theoretical underpinnings as well as thoughtful construction of test statistics. 
Moreover, it has a novel viewpoint, taking as the target distribution the unknown distribution which underlies the synthetic data generator, rather than the distribution from which the observed sample comes. Creating an empirical Stein operator and assessing its properties is also a novel addition to the literature which in our view goes far beyond ``tricks''. In summary, NP-KSD is able to solve an important problem using novel ideas and theoretical justifications. \n\nWe hope that these explanations have clarified the contents of the paper and have addressed your concerns. ", " \nThank you for the comments and suggestions. We are pleased that you appreciate the importance of the problem as well as our contribution to its solution. \n\nThe main issue arising seems to be the discussion of the comparison with MMD.\nIndeed, similarly to MMD, we also consider a two-sample problem. MMD can be very useful when two sets of samples are of comparable size. \nA key difference from MMD is that in our setting one of the sample sizes ($n$, the observed data) is usually quite small and cannot easily be increased. In an extreme case, only one sample (such as one image) may be available. \nIn contrast, the other sample size, $N$, can be chosen as large as desired, as this set of samples is generated by the synthetic data generator $G$. Thus, the test situation is very imbalanced in sample size. In Algorithm 2, we may use as many samples from $G$ as is desired; our theoretical results Proposition 3.1 and Theorem 3.2 underpinning the procedure are valid in the regime $N\\rightarrow \\infty$ with $n$ fixed, and Theorem 3.2 gives a bound on the rate of convergence. \n\nThis imbalance makes MMD unsuitable as a comparison method. \nThe text in lines 232-234 alludes to issues arising in MMD when the sample size is small. \nIn particular, in Appendix F.1 we illustrate that the MMD is not consistent in this highly imbalanced situation; Table 5 shows that the type-1 error in an MMD test does not reach the correct level. The somewhat terse sentence in line 249 points to the reasons why MMD is not compared against. We have now expanded this sentence, to read\n``We note that MMD tests do not have controlled type-I error when $n\\ll N$, thus are not suitable in this setting. However, MMD-based methods to compare and criticise two generative models have been explored [Lloyd and Ghahramani, 2015, Sutherland et al. 2017].\nDetailed discussions and illustrations of why MMD tests are \nnot included in the comparison list are found in Appendix F.1.''\n\nInstead, we compare with MMDAgg, a non-asymptotic test which is consistent even when $n\\ll N$, see for example Table 5 in Appendix F. For this comparison, Algorithm 2 uses the same number of samples from $G$ as MMDAgg, to ensure a fair comparison. In our experiments shown in Figure 1, NP-KSD based tests (in red with dots in Figure 1) outperform MMDAgg (in orange with triangles in Figure 1) in terms of test power. \n\nRegarding your question whether in Algorithm 1 we effectively learn the score function so as to estimate KSD: this is indeed a high-level summary. A key novelty is that we estimate the KSD for the generator $G$, which can be carried out to arbitrary precision as we can generate as many samples from $G$ as desired; Proposition 3.1 gives the theoretical justification. Naively estimating this score function would be very difficult in high dimensions. 
Instead, we estimate univariate conditional score functions and combine them in Equation (7) to yield a Stein operator, which is the basis of the KSD test. When it is possible to sample from the conditional distributions (which we do not assume to be the case), a similar idea has been carried out in Singhal, R., Han, X., Lahlou, S., and Ranganath, R. (2019). Kernelized complete conditional Stein discrepancy. arXiv preprint arXiv:1904.04478.\nNot citing this paper was an oversight which has now been amended. \n\nWe hope that this response addresses your questions and alleviates your concerns.", " The paper presents a kernel Stein discrepancy based test for black-box generative models.\nSince the normal KSD cannot be applied, as the true score function of the black-box model is unknown, the authors proposed a KSD variant where this unknown score function is estimated from samples from the model.\nThe resulting hypothesis test then is whether a given dataset that was used to fit the generative model is distributed according to the estimated score function of that model.\nThe score estimation is achieved via estimating a component-wise distribution conditioned on all other components, through summary statistics.\nSome consistency results are provided.\nAn experimental evaluation on toy data and simple real datasets shows some benefits against the original KSD test where applicable, and one MMD-based baseline otherwise. A well-written paper that addresses an important issue.\nThe proposed methodology is based around a few nice tricks on estimating the conditional distributions and carried out in a thorough manner.\nIn the current form of the paper, however, it is not clear whether the proposed methodology is really as useful as the authors claim. The experimental evaluation is not thorough, and lacks baselines and comparisons to alternative strategies.\n\nA few concrete points:\n* There is no like-for-like comparison with an MMD-based method. 
Only MMDAgg as a very specific case (the point of MMDAgg is that it can select a kernel without having to sacrifice samples, but in the case of a generative model we can generate many samples, so it doesn't seem to be a well-fitting baseline). What about e.g. a plain MMD test with the median heuristic, or with a learnt optimal kernel? What about e.g. the linear time tests that learn feature locations (e.g. https://arxiv.org/abs/1605.06796)?\n* It is actually not clear what is gained compared to an MMD-based test. This surely depends on the number of samples drawn from the generator, as well as computational costs. See also the question below. An empirical exploration of the performance of the various approaches as a function of drawn samples and compute would be very helpful. The authors only make some vague comments on this.\n* It is not clear why we could not just use score matching to learn the score using a powerful neural network instead of the conditional distribution approach? This would allow using many samples from the generator as well.\n* There are no empirical explorations of how the proposed score estimation approach scales in any direction. As that is crucial for this method to work, and since it is also a kind of harsh approximation, that weakens the paper. Related: what is the impact of the NN architecture for the score estimation in this case?\n\nMissing paper references that are highly relevant, as the problem of assessing the performance of generative models was first discussed here (among others):\n* Statistical Model Criticism using Kernel Two Sample Tests by Lloyd et al\n* Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy by Sutherland et al\n\nMinor:\n* l97 The KSD was originally proposed in Chwialkowski et al 2016 and Liu et al 2016, not Gorham Mackey 2017\n* l109 The wild bootstrap is only really used if there is correlation to break (e.g. for time series data). Otherwise a plain permutation test is sufficient and has no free parameters\n* l198 the normality result of Song is in a different context. Could you at least empirically explore this? A high-level question is: Does it make sense to assess the quality of generative models via a hypothesis test? What do we really learn if we e.g. fail to reject the null hypothesis? What do we learn if we can reject it? It shouldn't actually be a surprise that it is easy for most statistical tests to reject the null. Some of the references I put above go further and explain what features in the data lead to the rejection. Something like this could be used to improve a generative model, but that is actually not done in this paper.\n ", " This paper proposes an adaptation of kernelized Stein discrepancy, called non-parametric KSD (NP-KSD), for assessing implicit generative models. In particular, the author considered a scenario where (1) we do not have access to the (unnormalized) densities of the generative model; (2) the data dimension is large; (3) the number of true observations is fixed. To enable the assessment of this type of model, the author proposed a non-parametric Stein operator based on (1) conditional distributions with summary statistics; (2) resampling the conditional dimensions; (3) score matching for gradient estimation. Together with the kernel trick and a Monte Carlo based goodness-of-fit test, it results in the NP-KSD method. **Strength**:\n\nThe paper is clearly written and easy to follow, although the motivation of the proposed method can be improved. 
\nThis paper aims to tackle an important problem: the evaluation of implicit models, which can have significant impacts on current deep generative models. To sidestep the computational challenges, the author also provides several tricks to enable the NP-KSD. Theoretically, the author shows the consistency and convergence results of NP-KSD. \n\n**Weakness**:\n\nFirst, the motivation of the proposed method can be improved. For example, the two-sample test is a typical method for evaluating implicit models. Why do we need a goodness-of-fit test instead? A goodness-of-fit test is typically used when we have access to the density function, which is where its name comes from. Why do we want to use this for the implicit model with the gradient estimation? To me, it seems that we achieve what the two-sample test is designed for but with an alternative method. So a better motivation for the goodness-of-fit test is needed. \n\nThere are several tricks used in deriving NP-KSD. The author has shown that NP-KSD satisfies the Stein identity. However, a more interesting (or more important) aspect is to show that NP-KSD $=0$ implies $p=q$. Currently, I don't think this is true, especially with summary statistics and score matching. If not, then the goodness-of-fit test procedure will not be valid. The author mentioned equivalence classes, but it is unclear how this will impact the evaluation of the implicit model. E.g., with a summary statistic, do the distributions inside an equivalence class have similar visual quality?\n\nAnother concern is about its novelty. To me, the Stein operator \\Tau is defined similarly to the Langevin-Stein operator but with conditional distributions. This is equivalent to the original Langevin-Stein discrepancy. This idea has also been explored in paper [1] and the author should consider citing it. If so, the main novelty of NP-KSD is score matching, resampling and summary statistics. However, they seem to be more like tricks rather than novel methodologies. Although I agree the theoretical consistency and convergence results give some guarantees for using the resampling and score matching, which seems to be novel. \n\n[1] Singhal, R., Han, X., Lahlou, S., & Ranganath, R. (2019). Kernelized complete conditional Stein discrepancy. arXiv preprint arXiv:1904.04478.\n The questions are mainly a summary of what I have described in the Weakness section.\n\n**Major**:\n1. The motivation for using the goodness-of-fit test should be clearer. E.g., what are the advantages compared to the two-sample test? Also, the implicit model has two application scenarios: (1) sampler, where we want to generate samples from a target distribution, and (2) data synthesis, where we want to generate observations that mimic the real data samples, like GAN. From the context of this paper, it seems that it focuses on (2) instead of (1). It would be better to clarify this. \n2. Do we have theoretical guarantees that NP-KSD $=0$ $\\Rightarrow$ $p=q$? Does it only hold for equivalence classes? What is the impact of the equivalence class on the evaluation of the implicit model?\n3. The concern about the novelty, as detailed in the weakness section. Particularly, if I understood correctly, the main framework of the proposed method is based on a goodness-of-fit test but with estimated gradients from score matching. Those ideas are not new. So maybe consider elaborating more on the novel contribution?\n4. For the synthetic distribution experiments, why is the proposed method better than KSD? From my understanding, NP-KSD is an approximation of KSD. \n\n**Minor**:\n1. 
$\\mathbb{R}^m$ in line 86 should be $\\Omega_q$?\n2. In the related work section, maybe consider adding citations regarding various types of Stein discrepancies and their usage in generative models. \n3. I wonder why the proposed method is better than two-sample tests. Since the gradient estimation used in NP-KSD is based on the generated samples, it means NP-KSD does not have access to more information compared to the two-sample test. Where does this performance increase come from? The author mentioned NP-KSD can distinguish distributions in equivalence classes. But the potential impact of the equivalence class on model evaluation could be discussed in more detail. It would be great to consider some simple cases with simple summary statistics like the mean, and show some properties of the equivalence class. 
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "uaOTOZ-YAUo", "gF_o3sNb5ko", "UGBL24xUGvK", "GQGhqWhRfc_", "_bkAVmya47X", "ZlNmJ1zqs5", "GExauiBuoKb", "3Mypwq9zmz9U", "rCZy6idlYox", "82E1jiFV4b", "eRdRs3IXR5A", "nips_2022_t4vTbQnhM8", "nips_2022_t4vTbQnhM8", "nips_2022_t4vTbQnhM8" ]
nips_2022_KqI-bX-TfT
Learning Consistency-Aware Unsigned Distance Functions Progressively from Raw Point Clouds
Surface reconstruction for point clouds is an important task in 3D computer vision. Most of the latest methods resolve this problem by learning signed distance functions (SDF) from point clouds, which are limited to reconstructing shapes or scenes with closed surfaces. Some other methods tried to represent shapes or scenes with open surfaces using unsigned distance functions (UDF) which are learned from large-scale ground truth unsigned distances. However, it is hard for the learned UDF to provide smooth distance fields near the surface due to the discrete character of point clouds. In this paper, we propose a novel method to learn consistency-aware unsigned distance functions directly from raw point clouds. We achieve this by learning to move 3D queries to reach the surface with a field consistency constraint, where we also progressively estimate a more accurate surface. Specifically, we train a neural network to gradually infer the relationship between 3D queries and the approximated surface by searching for the moving target of queries in a dynamic way, which results in a consistent field around the surface. Meanwhile, we introduce a polygonization algorithm to extract surfaces directly from the gradient field of the learned UDF. The experimental results in surface reconstruction for synthetic and real scan data show significant improvements over the state-of-the-art under the widely used benchmarks.
Accept
All reviewers were clearly in favor of accepting the paper pre-rebuttal. There was limited discussion post-rebuttal. The AC examined the paper, the reviews, and the authors' response and is inclined to accept the paper. The AC encourages the authors to use their extra page to incorporate their responses to the reviewers into the final version of the paper. In particular, the AC would encourage carefully considering the feedback on presentation from 1bdf.
train
[ "HZaG6h1VuPD", "KDWg4f28MgV", "uFSjNbGUTDw", "-aQDpKoJpKY", "id-iRuH1Xx", "YaaYuYVw7XZ", "BU2uxgnOUkc", "HUNByEGv-q", "jWjC69phTF9", "M1MBLwT1_Q0", "AH6hmdOPeRA", "IiSUPkOhvYU", "RNPJu-qVdjz" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer Aixv,\n\nFollowing your questions, we will expand figure captions with detailed descriptions in revision. We would like to know whether you believe we have addressed your concerns, and please let us know if you have any other questions.\n\nThanks for your time,\n\nThe Authors\n\n", " Dear Reviewer tXas,\n\nFollowing your questions, we provided additional explanations on scaling our method to large-scale scenes and reported the computational cost compared to the state-of-the-arts to illustrate the scalability and efficiency of our approach. And we are willing to release source code and pretrained models within 2 weeks after acceptance. \n\nWe would like to know whether you believe we have addressed your concerns, and please let us know if you have any other questions.\n\nThanks for your time,\n\nThe Authors\n", " Dear Reviewer 1bdf,\n\nFollowing your questions, we provided additional explanations and reported the normal consistency score compared to the state-of-the-arts to demonstrate the quality of our reconstructions. We further detailed each of our technical contributions to clarify our ideas and designs, and explained how we will modify the paper as suggested. \n\nWe would like to discuss with you to further clarify our paper and answer your questions. And if you believe we have addressed your concerns, we hope that you would be willing to increase your score.\n\nThanks for your time,\n\nThe Authors\n", " Dear Reviewer 3mEs,\n\nFollowing your questions, we provided additional explanations and reported the computational cost compared to the state-of-the-arts to illustrate the effectiveness and efficiency of our approach. We further demonstrate our potentials to handle noises by the provided results on the real scanned shapes and scenes. \n\nWe would like to discuss with you to further clarify our paper and answer your questions. And if you believe we have addressed your concerns, we hope that you would be willing to increase your score.\n\nThanks for your time,\n\nThe Authors\n", " Hi Reviewers,\n\nThe discussion period is closing soon. Please take a look at the responses from the authors. If you have further questions, please ask them now, since the authors will be unable to respond soon. It's substantially more productive, effective, and reasonable to have a quick back-and-forth with authors now than to raise additional questions or concerns post-discussion period that the authors are unable to address. \n\nThanks,\n\nAC", " We appreciate that the reviewer finds our paper promising, novel, and well-written. We address additional comments below.\n\n**Q1: Are there plans to release source code or pretrained models to the community?**\n\nYes. We will make source code and pretrained models public within 2 weeks after acceptance.\n\n**Q2: The paper did not talk much about the scalability of the proposed method.**\n\nAs suggested, we will add more discussions of the scalability of our method in the conclusion.\n\n**Q3: Can the proposed method handle millions of points, city-scale LiDAR scans, etc?**\n\nWe believe the answer is yes if we adopt the sliding window strategy to reconstruct surfaces part by part. Due to the catastrophic forgetting problem of the neural networks, it is extremely difficult to represent large-scale scenes within a single network. To solve this issue, recent works (e.g. DeepLS [ECCV 2020] and BlockNeRF [CVPR 2022]) propose to use the sliding window strategy to represent large scale scenes using separate parts and have shown promising results. 
We also consider transferring the sliding window strategy to our method for representing large-scale data to be interesting future work; thanks for pointing it out! We will add it to the future work.\n\n**Q4: How much computation time/computation resources does the proposed method need?**\n\nThanks for the question. We make a comparison with Neural-Pull, IGR, Point2mesh on the computational cost of optimizing for a single point cloud in the following table:\n\n|methods|Neural-Pull|IGR|Point2mesh|Ours|\n|:-:|:-:|:-:|:-:|:-:|\n|Time (s)|1150|1212|4028|**667**|\n|Memory (GB)|2.2|6.1|5.2|**2.0**|\n\nThe optimization time is evaluated on a single GTX 3090 GPU. It shows that our method converges faster than all the baselines. We will include the table in the supplementary, and we also provided the efficiency comparison of surface generation in Table 2 of the supplementary.\n", " We thank the reviewer for his constructive feedback. In the following, we address the main concerns and explain how we will modify the paper.\n\n**Q1: Sec.3.1 and 3.2 are written only for people who are very familiar with the 3 most related papers and have far too many forward and backward pointers.**\n\nWe agree that more descriptions of the background should be provided, but due to the page limitation, we have to simplify the introduction of the background methods. We will add more explanations to guide readers' understanding within the space allowed, and will add additional content with detailed descriptions to the supplementary. As suggested, we will also detail the overall pipeline first and clarify our designs before providing results or comparisons.\n\n**Q2: ‘consistency aware' / ‘field consistency loss' and ‘adversarial optimization' are not explained.**\n\nIndeed, learning ‘consistency' is a frequently discussed topic in Neural Implicit Functions (e.g. NeRF and SDF). For example, DietNeRF [ICCV 2021] proposed a semantic consistency loss for few-shot view synthesis and SparseNeuS [ECCV 2022] proposed a consistency-aware learning scheme for improving reconstruction quality. However, due to the different characteristics of implicit fields, there are great differences in learning consistency. In this paper, we focus on learning a consistent UDF with a carefully designed loss. ‘Adversarial optimization' refers to having opposite optimization directions, which has a highly negative effect on the accuracy and continuity of fields; we will revise our terminology in revision.\n\n**Q3: The real technical contribution. Is the first contribution a very small modification of Neural-Pull?**\n\nOur novelty lies in the analysis of implicit fields, which is seldom discussed in previous works. We did get inspiration from Neural-Pull on how to learn distance fields by moving queries. However, the nature of SDF prevents Neural-Pull from representing most real-world objects with open surfaces or geometries with inner structures, and the direct extension of Neural-Pull to UDF fails drastically as shown in Table 5. This observation drives us to design a consistency-aware learning scheme with a carefully designed loss as described in Sec.3.2, which leads to an accurate and continuous field as shown in Fig 1 of the supplementary. In Sec.3.3, we proposed to progressively estimate the mapping relationship between 3D queries and the approximated surface by updating the raw point cloud with well-moved queries as additional priors for promoting further convergence. 
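For intuition, a minimal sketch of this query-moving step (illustrative only; a toy analytic UDF stands in for the learned network, and the function names are not from our code):

```python
import numpy as np

def move_queries(queries, udf, grad):
    """Move each query against the UDF gradient onto the nearby surface."""
    g = grad(queries)
    g = g / (np.linalg.norm(g, axis=-1, keepdims=True) + 1e-12)
    return queries - udf(queries)[:, None] * g

# Toy stand-in for the learned UDF: unsigned distance to the unit sphere.
norm = lambda x: np.linalg.norm(x, axis=-1)
udf = lambda x: np.abs(norm(x) - 1.0)
grad = lambda x: np.sign(norm(x) - 1.0)[:, None] * x / (norm(x)[:, None] + 1e-12)

rng = np.random.default_rng(0)
queries = rng.normal(size=(1024, 3))
moved = move_queries(queries, udf, grad)
# Well-moved queries (tiny residual distance) can then join the point
# set as additional priors for the next optimization stage.
priors = moved[udf(moved) < 1e-3]
```
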
Finally, previous UDF approaches fail to extract surfaces directly, which greatly limits their practicability. We resolve this problem by introducing an algorithm for directly extracting surfaces with arbitrary topology from the gradient vector field of UDF as described in Sec.3.4.\n\n**Q4: The progressive surface approximation seems novel but not claimed clearly.**\n\nWe claimed the progressive surface approximation in the introduction (l.48 - l.51) and we will state it more clearly in the revision.\n\n**Q5: The surface extraction seems to be a relatively simple adaptation of marching cube.**\n\nWe believe that one of the most important factors preventing the development of UDF approaches is the inability to extract surfaces directly. By observing the gradient vector field of UDF, we propose to classify whether two points are on the same or opposite sides using the dot product of the gradients from the learned UDF, and extract surfaces based on this relationship. Our proposed surface extraction algorithm is an efficient method to mesh the UDF, which is not only designed for our method but also suitable for other UDF approaches like NDF [NeurIPS 2020]. As shown in Table 1, adopting our surface extraction algorithm for NDF ($NDF_{gradRA}$) yields a significant improvement over reconstructing the surface from generated dense point clouds using BPA as proposed by NDF ($NDF_{BPA}$).\n\n**Q6: Metrics related to meshes.**\n\nWe further report the normal consistency score proposed in OccNet to evaluate the accuracy of meshes on the MGD dataset.\n|methods|Neural-Pull|NDF| Ours|\n|:-:|:-:|:-:|:-:|\n|Normal Consistency|91.83|92.84|**97.80**|\n\nWe will add the results in revision.\n\n**Q7: Confusion of the ‘low confidence range' experiment (Table 7).**\n\nThe ‘low confidence range' is the standard deviation of the Gaussian function for sampling auxiliary points. Specifically, as mentioned in l.269 – l.271, a Gaussian function $\\mathcal{N}(\\mu, \\sigma^2)$ with $\\mu=p_i$ and $\\sigma$ as the distance between $p_i$ and its 50-th nearest point on $P$ is adopted to sample query points for $p_i$ (high confidence range). After the convergence of the first stage, we sample auxiliary points using a Gaussian function with $\\sigma^{'} = 1.1\\sigma$ (or $0.9\\sigma$, $1.0\\sigma$ and $1.2\\sigma$, as listed in Table 7) for aggressive surface approximation. We will describe the settings and explanations more clearly in revision.\n
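For concreteness, a minimal sketch of this per-point Gaussian sampling scheme (Python with SciPy; simplified and with illustrative names, not our exact training code):

```python
import numpy as np
from scipy.spatial import cKDTree

def sample_queries(points, k=50, scale=1.0, per_point=1, rng=None):
    """Draw query points around each p_i from N(p_i, (scale*sigma_i)^2 I),
    where sigma_i is the distance from p_i to its k-th nearest neighbour.
    scale=1.0 gives the high confidence range; scale=1.1 gives the wider
    low confidence range used for aggressive surface approximation."""
    rng = rng or np.random.default_rng()
    dists, _ = cKDTree(points).query(points, k=k + 1)  # k+1: the query point itself is returned
    sigma = scale * dists[:, -1]
    noise = rng.normal(size=(len(points), per_point, points.shape[1]))
    return (points[:, None, :] + sigma[:, None, None] * noise).reshape(-1, points.shape[1])

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(2048, 3))
stage1_queries = sample_queries(pts, scale=1.0, rng=rng)
stage2_queries = sample_queries(pts, scale=1.1, rng=rng)  # auxiliary points
```
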
We also consider this as an interesting future work to reconstruct surfaces from noisy point clouds in an unsupervised way, which will be added in revision.\n\n**Q2: How to guarantee that the gradient is always accurate for surface extraction?**\n\nIndeed, it is extremely difficult to learn a perfect unsigned distance field where the gradient values are guaranteed exactly accurate. However, our proposed surface extraction algorithm only focuses on the direction of gradient which is easy to guarantee since our optimization is conducted by moving queries against the direction of gradient to the approximated surface. Hence, the gradients are highly correlated to the moving direction in the optimization. Eventually, the direction of the gradient can be guaranteed to be broadly correct. Besides, to extract surfaces correctly, we only need to determine whether the gradients at two queries are approximately in the same direction (inner product is positive) or the reverse direction (inner product is negative), which is highly robust.\n\n**Q3: Will the optimization fall into local minimum with the Chamfer distance Loss?**\n\nOur method does not guarantee the global minimum strictly in theory. Actually, since the point cloud is only a discrete representation of the surface, and the topology of the point cloud is ambiguous, it is impossible to converge to an actual global minimum in a strict sense in theory with only raw point clouds as input. What our method guarantees is the consistency of the learned unsigned distance field in contrast to Neural-Pull loss in Eq.2 which will form a distorted field as demonstrated in Fig 3 and Fig 4.\n \n**Q4: What is the performance of directly extending Neural-Pull to unsigned distance field?**\n\nThe quantitative results obtained by directly extending Neural-Pull to UDF have been shown in ‘NP loss' of Table 5, and the simulation experiment of this extension has been shown in Fig 4. Furthermore, the visualization of the unsigned distance field learned by Neural-Pull and our method has been shown in Fig 1 in the supplementary. Note that all the designs and experimental settings are kept the same as ours except for the loss. Besides, the quantity and visualization comparisons with the original Neural-Pull which learns SDF were given in Table 2, Table 4, Fig 8 and Fig 9, respectively.\n\n**Q5: The ablation study on the design of Progressive Surface Approximation.**\n\nIn Table 6 and Table 7, we have provided the ablation studies for the design of Progressive Surface Approximation. Specifically, we explore the effect of step numbers in Progressive Surface Approximation in Table 6, where we reported the performance of training our network with different numbers of steps St=[1,2,3,4]. And we further explore the range of low confidence regions (described in l.206 – l.210 and l.218 – l.220) about Progressive Surface Approximation in Table 7, where we reported the performance of 4 different range values.\n\n**Q6: What is the computational cost to learn the unsigned distance field?**\n\nWe make a comparison with Neural-Pull, IGR, Point2mesh on the computational cost of optimizing for a single point cloud in the following table.\n\n|methods|Neural-Pull|IGR|Point2mesh|Ours|\n|:-:|:-:|:-:|:-:|:-:|\n|Time (s)|1150|1212|4028|**667**|\n|Memory (GB)|2.2|6.1|5.2|**2.0**|\n\nThe optimization time is evaluated on a single GTX 3090 GPU. It shows that our method converges faster than all the baselines. We will include the table in the supplementary. 
We also provided the efficiency comparison of surface generation in Table 2 of the supplementary.\n\n**Q7: Failure cases and discussion on when the proposed method would fail.**\n\nIndeed, our visualization examples are selected randomly from the dataset without checking all the results yet. We will look through all the results later and put the failure cases in revision. We also admit that our method may fail to reconstruct a perfect surface when raw point clouds are extremely noisy or sparse since we don’t require any prior or ground truth supervision, and we will add more discussions of our limitations in the conclusion.\n", " We thank the reviewer for considering our method interesting, promising and novel. And we are pleased to hear the figures helpful and ablation studies convincing. We will expand figure captions with detailed descriptions in revision. Thanks for your suggestions!", " This paper focuses on the problem of extracting or reconstructing mesh surface from raw point cloud. The key idea is to learn an unsigned distance function to progressively get to the real surface. The unsigned distance field is critical to deal with objects that are not watertight but with inner parts. The authors proposed a consistency aware loss to keep the consistency of the learned unsigned distance fields to avoid adversarial optimization. A surface extraction algorithm is also proposed to extract mesh surface from the learned unsigned distance function. Strengths:\nFirst, using unsigned distance function is critical and important to handle complicated object structures. They have demonstrated better performance on public dataset both visually and in numbers. \n\nWeakness:\nFrom my understanding, this proposed method doesn't have potentials to handle any noise in the raw point cloud, which means they require a clean point cloud as input. But in real scenario, the raw point cloud is not noise-free.\nAnother issue is that the surface extraction algorithm is a bit tricky. The extraction is mainly controlled by computing the sign of cls(.) function, but how could we guarantee the gradient of the unsigned distance field is always accurate. \nFinally, the authors haven't presented any failure cases or any dicussion on when the proposed method would fail. I have some questions or confuse on some technical details:\n1) Will the optimization fall into local minimum with the Chamfer distance Loss of Eq.(3)? If yes, then how would this local minimum affect the optimization? If no, why the global minimum is guaranteed?\n2) what is the performance of directly extending Neural-Pull to unsigned distance field?\n3) The authors have spent much efforts on designing the Progressive Surface Approximation, but I didn't see ablation study on this component which I think it is critical.\n\nAlso, what is the computational cost to learn the unsigned distance field? The authors have mentioned one limitation or possible future work which is to have a coarse-to-fine divided grids. But they haven't clearly discussed the limitations and also they haven't demonstrated any failure cases. Please refer to the Weakness part about some of my thoughts on the limitations.", " This paper presents a method to mesh point clouds. It performs optimization on a single scene (without any training) . It claims 2 contributions:\n1. a loss function and optimization strategy, which in my understanding is essentially the one presented in neural pull [26] for signed distance function used for unsigned distance function and symetrized. 
It is often refered to as a \"consistency aware\" / \"field consistency loss\" and as fighting \"adversarial optimization\" (which makes little sense to me)\n2. a meshing strategy, which to me seems an adaptation of marching cube to unsigned distance function\nI would say there is a 3rd contribution, which is not claimed in the intro but is a part of the method section, which is the progressive (i.e. 2 step in practice) surface approximation, even if the quantitative gains associated to it are small.\nIt presents results on several dataset that seem to improve state of the art While I am not an expert of the area, the benefits of the proposed approach in term of results seem clear to me, which I think is the main strength of the paper. The proposed approach also seem to make a lot of sense and is quite simple.\n\nI see several weaknesses in the paper:\n1. I found the paper very hard to parse while the proposed approach is quite simple:\n- this is particularly true for 3.1 and 3.2. I think this is written only for people who are very familiar with the 3 most related papers + has far too many forward and backward pointers. For example, in 3.1, before anything about the method has been explained (no loss function, nothing on optimization), there are results, comparisons with 3 baselines and discussion of the differences (l 126-151 and figure 2) I do not think it can make sense before the full paper has been read and understood. Similarly, l. 167 discusses results obtained when using equation 3 which is presented l. 185. If this was a journal submission this could easily be solved with a \"major revision\", for a conference paper this is much harder to trust the authors with a major rewriting of the paper...\n- another thing that annoyed me is that I could understand none of what the paper was doing from the abstract and intro. Terms like \"consistency aware\" / \"field consistency loss\" and as fighting \"adversarial optimization\" are not explained while they refer to very simple idea, and I think they are designed to impress but make little sense/are not adapted (not sure if it's the fault of the authors or if they re-use terms from other paper)\n\n2. I am unsure what the real technical contribution are: \n- to me the first contribution, which is a big part of the method section (3.1 and 3.2), is actually a very small modification of neural pull [26]. I think this is not recognized enough in the paper and find that a very annoying issue.\n- the progressive surface approximation seem novel but this is not claimed clearly, so I am unsure whether this might be following another paper\n- the surface extraction seem to be a relatively simple adaptation of marching cube: if the authors agree, this again should be acknowledge much more clearly in the abstract, intro and 3.4\n- to me, the real contribution is actually taking the previous small idea together and making them into a very effective algorithm, which could make for a great paper if only it was acknowledged better and each part explained much more simply. Unfortunately, this again put me on the verge of recommending rejection for a conference paper.\n\n3. smaller concerns are associated with the experiments (which again I found in general convincing)\n- since the output is a mesh I would like to see metrics related to meshes, not only point cloud. 
For example, it would be quite easy to measure normal distances (up to flip)\n- I am confused by the \"low confidence range\" experiment (Table 7). I guess the low confidence range should be understood as \"in addition to sigma\", so 0.9\\sigma for example actually means \"between 1 and 1.9\\sigma from the origin\": is that right? If that's right, why not experiment with much smaller values (e.g., 0.1 and 0.5 sigma), and in any case with larger values (2 and 4 sigma)? That would make the trend much clearer. In any case, this should be better explained; a small figure (earlier, in the method section) could help\n\nAll in all, because I think the method makes sense and because (as far as I can judge, not being an expert) the results seem very good and the ablation convincing, I would still tend to recommend accepting the paper, trusting the authors with a major rewriting. \n Please address in detail my questions in weaknesses 1 and 2, and explain how you will modify the paper to clarify it (if I am unconvinced by the answers on these points, I am likely to change my rating) yes", " This paper proposed a method for surface reconstruction by training a neural network to predict unsigned distance fields (UDF). The learned UDFs are consistency-aware and can be trained without ground truth distance fields, point normals, or large-scale training datasets. A high-quality surface can be extracted from the gradient vector field of the learned UDFs. The paper has achieved appealing results compared to some of the state-of-the-art algorithms. 1) The paper carefully examined the current failure modes of UDF approximation methods, thus proposing the consistency-aware field learning loss and the progressive approximation paradigm. These strategies greatly improved the quality of the learned UDF, as illustrated by the paper.\n2) Traditional marching cubes algorithms cannot be directly applied to UDFs since there is no inside/outside information in a UDF. The paper proposed a novel surface extraction algorithm by looking at the gradient vector field of the learned UDF. From the originality and quality perspective, the paper has done well.\n3) Presentation is well done; language and visualization are clear.\n4) From a significance perspective, the reviewer believes the paper has boosted the SOTA by a quite large margin. The reconstructed surface has much higher quality in many challenging scenarios. 1) Are there plans to release source code or pretrained models to the community?\n2) The paper did not talk much about the scalability of the proposed method. For example, it would be interesting to know if the proposed method can handle millions of points, city-scale LiDAR scans, etc. How much computation time/computation resources does the proposed method need? The author addressed the limitations of uniformly dividing grids for surface extraction.", " This paper presents a framework to learn Unsigned Distance Functions (UDF) from point clouds. The learned continuous UDF can then be used to extract surfaces to represent 3D geometry. One of the challenges of learning a continuous UDF from a discrete point cloud is the instability of gradients due to the sparsity of points. To this end, the authors propose a novel loss function with a field consistency constraint. They also designed a progressive scheme to learn more local details. Unlike SDF, which can recover surfaces using the Marching Cubes algorithm directly, UDF cannot pass the inside-outside test due to the lack of direction information (i.e., sign). 
Therefore, this paper proposes to use the relative angle between the gradients at query points to test whether they cross the iso-surface. Experiments demonstrate that the proposed method outperforms existing methods, and ablation studies verify the design choices. Strengths:\n- The paper is well written. It is easy to follow.\n\n- The figures are greatly helpful for readers to understand the idea.\n\n- The proposed idea is interesting and effective, as it is supported by the superior performance in comparisons against existing methods. Furthermore, ablation studies are sufficient to validate design choices. \n\nWeaknesses:\n\n- figure captions\nI would recommend expanding figure captions so that readers don't need to jump back and forth between text and figures.\n N/A N/A" ]
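As context for the surface-extraction questions raised in the reviews above, the following minimal sketch illustrates the gradient-direction test that replaces the inside/outside check of classical marching cubes when only an unsigned distance field is available. This is our own illustrative reconstruction under stated assumptions: the function names and the zero threshold are hypothetical, not the authors' implementation.

```python
import numpy as np

def crosses_zero_level_set(grad_p, grad_q, eps=1e-8):
    """Heuristic test: do two query points straddle the UDF zero level set?

    grad_p, grad_q: gradients of the learned unsigned distance field at two
    neighboring grid corners (e.g., obtained via autograd). On opposite sides
    of a surface, the UDF gradients point away from it in roughly opposite
    directions, so the cosine of their relative angle is negative.
    """
    gp = np.asarray(grad_p, dtype=float)
    gq = np.asarray(grad_q, dtype=float)
    gp = gp / (np.linalg.norm(gp) + eps)  # normalize so the dot product is a cosine
    gq = gq / (np.linalg.norm(gq) + eps)
    return float(gp @ gq) < 0.0  # angle > 90 degrees => likely crossing

# Hypothetical usage on one grid edge (grad_udf is assumed, not defined here):
# if crosses_zero_level_set(grad_udf(p), grad_udf(q)):
#     mark the edge for triangulation, as sign flips do in marching cubes
```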
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "jWjC69phTF9", "YaaYuYVw7XZ", "BU2uxgnOUkc", "HUNByEGv-q", "nips_2022_KqI-bX-TfT", "IiSUPkOhvYU", "AH6hmdOPeRA", "M1MBLwT1_Q0", "RNPJu-qVdjz", "nips_2022_KqI-bX-TfT", "nips_2022_KqI-bX-TfT", "nips_2022_KqI-bX-TfT", "nips_2022_KqI-bX-TfT" ]
nips_2022_LT6-Mxgb3QB
Bilinear Exponential Family of MDPs: Frequentist Regret Bound with Tractable Exploration $\&$ Planning
We study the problem of episodic reinforcement learning in continuous state-action spaces with unknown rewards and transitions. Specifically, we consider the setting where the rewards and transitions are modeled using parametric bilinear exponential families. We propose an algorithm, $\texttt{BEF-RLSVI}$, that a) uses penalized maximum likelihood estimators to learn the unknown parameters, b) injects a calibrated Gaussian noise in the parameter of rewards to ensure exploration, and c) leverages linearity of the exponential family with respect to an underlying RKHS to perform tractable planning. We further provide a frequentist regret analysis of $\texttt{BEF-RLSVI}$ that yields an upper bound of $\tilde{\mathcal{O}}(\sqrt{d^3H^3K})$, where $d$ is the dimension of the parameters, $H$ is the episode length, and $K$ is the number of episodes. Our analysis improves the existing bounds for the bilinear exponential family of MDPs by $\sqrt{H}$ and removes the handcrafted clipping deployed in existing $\texttt{RLSVI}$-type algorithms. Our regret bound is order-optimal with respect to $H$ and $K$.
Reject
The paper presents a tractable algorithm for bilinear exponential MDPs with a regret bound that improves upon the best known result and achieves \sqrt{d^3 H^3 K} regret. The result appears to be correct, with strong technical analysis. Reviewers and ACs appreciate the merits of the analysis for this specific problem class. However, both the reviewer team and the AC found that the authors miss discussing several important and closely related works, such as Zanette et al., '19; Yang and Wang, '19; and a line of works on kernel RL and model-based RL with Eluder dimension analysis. In particular, Table 1 only compares the new result with several recent results on specific MDP models published after 2021, which is far from comprehensive. During the rebuttal, the authors acknowledged that they were not aware of these related works. However, they didn't revise the submission to include the missing discussions pointed out by the reviewer. It remains unclear how the submission's analysis relates to the aforementioned results that were not discussed in the paper. The authors provided some high-level discussion after the rebuttal, but they would need a lot more technical detail to be convincing. For example, regret analysis using the Eluder dimension for general function classes is often a go-to benchmark for non-linear models. The proposed model appears to be a generalized linear model, which is a standard special case of the Eluder dimension analysis. Then one would expect such analysis to lead to an O(d poly(H)\sqrt{T}) regret (with \sqrt{d} coming from the Eluder dimension and \sqrt{d} coming from the metric dimension), better than the result of this paper. Note that this is just a conjecture, and rigorously working out this analysis would likely need extra work (nontrivial, as the authors pointed out). However, it is still not appropriate to overlook the possibility of using a more general analysis and just focus on a specific parametric model. A careful and honest discussion is necessary. Beyond using the Eluder dimension, there are actually a handful of RL theory papers on general function approximation and general model classes. We strongly recommend the authors redo their literature survey and properly place their contribution in the context of state-of-the-art RL theory. We have reviewed a very competitive batch of RL papers this year. This submission has strengths but falls on the borderline. After consulting with the senior AC member, who is also an expert in RL theory, we regretfully recommend the authors further revise the paper and submit it to the next venue.
train
[ "gSU_whVX3IP", "05mCyko5nt", "W6NMI07SZqA", "N4f5exetHrt", "Wa2tf89kZB0", "kDRkCg5obHm", "DLFltiRkjs6", "BwOluoXS-ZE0", "vCZz2abGu9I", "_96fsWUtt4S", "NikBNkZ4js1", "3a-r5Bkyvd-", "VMP-NKkz3GL", "bEH77796apX", "4TNrI0w4eoM", "8ZqaaRfW-ny", "TTUxCLQMBjD" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarifications! I encourage the authors to include these details in the paper. \n\nI don't have any further questions, and I will adjust my rating to 6 accordingly. I hope our conversation can help you revise the paper.", " Thank you for your helpful input and for engaging with our rebuttal.\n\nRegarding our claim that the Eluder dimension analysis cannot work in our setting: We apologize for our choice of words, indeed, we meant that the Eluder dimension analysis isn't suitable for our algorithm and not for the setting as a whole. We explain our claim hereafter:\n1. Ayoub et al, 20': it is assumed that the parameter space has a finite diameter (see Corollary 2), and the latter is essential to obtaining a finite covering number. Indeed, since the UCB approach transfers the boundedness to the estimated parameter, then the estimated value function space has a finite covering as well. \n2. Yang and Wang, 2019': It uses this boundedness as well in Eq. 5 therein.\n3. For BEF-RLSVI: since we inject a noise in the estimation and we don't use clipping, the parameter $\\tilde{\\theta}$ is unbounded. Therefore we cannot readily apply the proof techniques from the literature, and we are unsure how to adapt them to obtain a finite effective dimension or Eluder dimension.\n\nIf you believe that the Eluder dimension proof argument can still be used, please don't hesitate to share any intuition about adapting the analysis for our algorithm.\n\nWe would like to thank you again for your help in improving the quality of our submission. We will add all the discussions of this rebuttal in the revised version.", " We thank the reviewer again for engaging with us to improve our manuscript. \n\n1. **Computation of the maximum likelihood:** We include here some useful references to shed light about the approximation techniques that exist for the exponential family. We will add an appendix to discuss this estimation further as it seems to be more interesting than we initially believed.\n - Integral approximation techniques:\n 1. The paper \"Annealed Importance Sampling\", Neal 1998', approximates ML computations using simulated annealing, a method consisting in starting from a tractable distribution and updating it sequentially to resemble the distribution at hand.\n 2. \"Probabilistic Structured Predictors\", Vembu et al 2012', proposes MCMC techniques for approximating the partition function.\n 3. \"On Contrastive Divergence Learning\", Carreira-Perpinan and Hinton 2005', shows that optimizing a different objective, called the contrastive divergence leads to a good approximation of the ML. \n - If the support of the distribution and its natural parameter are bound, the paper \"A Computationally Efficient Method for Learning Exponential Family Distributions\", Shah et al 2021', shows that an $\\alpha$-approximation can be derived in $\\mathcal{O}(\\operatorname{poly}(k/\\alpha))$ time. The latter assumes a specific definition of compactness of the representation as well as knowledge of the support and shows how to re-parameterize the density to a specific class of exponential families that are easier to study. \n - Score matching: this is a technique that avoids approximation the partition function and is well studied in literature, see \"Maximum likelihood estimation and large-sample inference for generalized linear and nonlinear regression models\", JØRGENSEN et al 1983'. 
More recently, \"Exponential Family Model-Based Reinforcement Learning via Score Matching\", Li et al 2021', proposed an adaptation of this technique to the exact setting we consider. The latter shows that, under certain conditions, the estimation can be solved in $\\mathcal{O}(d^3)$ time.\n - \"Kernel Exponential Family Estimation via Doubly Dual Embedding\", Dai et al, 2019', studies exponential families such that the natural parameter belongs to some RKHS. The latter proposes a method that improves over score matching in time and memory complexity.\n\n2. We agree that the assumptions made should be very clear to the reader. As such, we will add the separability assumption to Table 1 as suggested, and we will add a relevant discussion as well.", " Thanks for the clarifications. They help with understanding. Note that these detailed discussions should be in the paper.\n\n- It is easy to show that the approximate log covering number for an RKHS reduces to the effective dimension of the RKHS. It is not correct to say that the Eluder dimension analysis doesn't work in the submission's setting. \n\nWe strongly encourage the authors to expand the related work section, so as not to overlook closely related existing results. ", " Thanks for the author's detailed response.\nI have no further questions.", " Thanks again for the clarifications!\n\nYes, you're right that BEF-RLSVI is model-based. I apologize for my mistake.\n\nRegarding the computation of the maximum likelihood estimation, I took a quick look at [CCM21], and it seems that they also only mentioned that it can be solved via integral approximation in the general case, similar to the comments in lines 176~177 in the current paper. I would appreciate it if the authors could provide more details on this.\n\nI think the discrepancy between the model assumptions here and those in [CCM21] should be noted in the paper, especially as the algorithms are compared in Table 1.", " We thank the Area Chair ZgFv for the comments and for mentioning the relevant references. Here, we add brief discussions of these references in comparison with our work. We will discuss them further in the final version.\n\n1. **Yang and Wang, 2019:** We agree that this is an important related work, and we will discuss its relation with our work in the revised version. In particular, we will discuss: \n - *Similarity in setting:* their RKHS setting is close to ours and can recover our value function. \n - *Optimality of our regret bound:* applying their result yields a regret bound $H \\log(T)^d$ higher than ours. The $\\log(T)^d$ is the order of the information gain; see \"Gaussian process optimization in the bandit setting: No regret and experimental design\", Srinivas et al. 09'. In contrast, our result is very close to an existing lower bound; see lines 164 to 169. \n - *Difference in setting:* Their RKHS result does not recover our result; the explanation is that they only assume knowledge of the RKHS and not of the precise features, unlike in their finite-dimensional result. More precisely, they have to deal with estimation in an RKHS, which is not similar to this paper nor to their linear setting. Estimation in an RKHS can be computationally inefficient and incurs a larger regret.\n2. **Ayoub et al, 2020:** \n - *Similarity in setting:* their kernel version can recover our setting like that of Yang and Wang, 2019. Their setting seems realistic as well. Indeed, they claim that a special kind of queuing network admits a discrete-time Bernoulli approximation, which is well recovered by their assumptions. 
However, the quality of this approximation is not discussed; therefore, we are unsure of its theoretical validity.\n - *Differences in the analysis:* They use an Eluder-dimension-based analysis; therefore, in our case it reduces to an RKHS analysis similar to Yang and Wang, 2019. Indeed, \"A Short Note on the Relationship of Information Gain and Eluder Dimension\" by Huang et al, 2021 showed that for RKHS settings, the Eluder dimension and the information gain are strictly equivalent. And while the discussion of bounds using the Eluder dimension is out of the scope of our paper, we recall from Huang et al, 2021, verbatim, that \"Eluder dimension was originally proposed as a general complexity measure of function classes, but the common examples of where it is known to be small are function spaces (vector spaces)\". \n - *Intractable planning:* Planning is not tractable with UCB approaches, and the Eluder dimension analysis cannot work in our setting since it assumes finite covering numbers; it is not clear whether the latter can be modified to work for us, since our value is not bounded, as it is not clipped. This is rather a significant contribution of our paper, as we avoided the non-linear behavior of value functions that emerges in UCB-style algorithms from optimism and clipping.\n - *Conclusion:* We respectfully disagree that the said paper subsumes our results. In fact, the regret bound achieved by their analysis in our setting is $\\log(T)^d \\sqrt{H}$ higher than ours. Also, their algorithm's planning is intractable, as opposed to ours.\n - *Closely related to the area chair's request:* We have discussed (lines 256 to 260) the relationship of our paper with existing work that generalizes linear RL by assuming that the considered value functions belong to some RKHS or to some space with bounded Eluder dimension.\n3. **Foster et al, 2022:** We refer to lines 273 to 276. We have already included a discussion of this paper.\n\nWe would like to thank the Area Chair a second time for interacting with us, and we hope that our response can clarify the missing connections.", " We thank the reviewer for interacting with our rebuttal, and we provide some clarifications to the raised concerns.\n\n1. **Definition of the Bilinear Exponential Family Model:** Yes, you are correct. Nonetheless, as we said before, our intuition is that the separability is not restrictive, as it is verified by all the examples provided in [CGM21].\n2. **Maximum likelihood estimation:** We are not sure whether this is a major computational overhead in common cases. But we agree that a discussion of this estimation complexity is important, and we will add it as suggested.\n3. **Model-free / model-based distinction:**\n - We agree that in the work of [JYWJ20] the estimation comes back to an estimation of the value function. In this sense, it can be considered model-free. However, there is a subtlety here: the setting is model-based while the algorithm is model-free.\n - The setting and algorithm in our paper are both model-based, since we need to estimate the model and rewards to estimate the value function. We could obtain a model-free algorithm for this setting in two ways: 1) using UCRL like [CCM21], but the planning would be intractable; 2) applying a linear RL algorithm on the linear approximation mentioned in lines 200 to 202. 
However, the latter entails a regret scaling with the dimension of the approximation, which is of order $\\mathcal{O}(p H^2 K)$.\n - Notice that the complexity of learning the value function parameter or learning the transition and reward parameters in [JYWJ20] is similar. Also, the setting and assumptions are the same. Therefore, we don't see the advantage of a model-free algorithm over a model-based one in this case. \n4. **About the covering argument:** This is bypassed in our paper by using transportation inequalities, concentrations of the parameters, and a \"good vs bad rounds\" analysis that enables us to avoid clipping. At a higher level, we can paraphrase by saying that we first show that the \"bad rounds\" are finite, and we then analyze the good rounds via the transportation and concentration inequalities.", " First, I thank the authors for their explanations. I still have a few questions and comments as follows:\n\n1. Regarding the model of the bilinear exponential family, I agree that when $h(s', s, a) \\propto h_1(s') h_2(s,a)$, everything is fine. But my concern is that when $h(s', s, a)$ cannot be decomposed in this way, then I believe we don't have Eq. (4). Is this correct?\n2. Regarding solving $\\hat \\theta^p(k)$ and $\\hat\\theta^r(k)$, my point is that this is the main computational overhead for the proposed algorithm. The computational complexity of solving these should be discussed.\n3. For the linear MDP studied in [JYWJ20], I'm wondering why this work is model-based. Although it is assumed that the transition kernel is a linear function, their LSVI-UCB algorithm does not estimate the transition model at all but instead directly estimates the value functions. In this sense, their algorithm is model-free, the same as the BEF-RLSVI in this paper.\n4. It is claimed in the general comment that the analysis technique here can be applied to linear RL. One thing I'm wondering is that in the regret analysis of linear MDPs in [JYWJ20], some covering argument is used in order to apply the self-normalized concentration inequality, and how is this bypassed in this paper?", " It seems that, after revision, a number of important related works on model-based regret are still missing:\n\n1) Yang and Wang, 2019. Reinforcement learning in feature space: Matrix bandit, kernels, and regret bound.\nIt studies a bilinear family, including both finite-dim models and infinite-dim kernel models. \n\n2) Ayoub, et al. Model-Based Reinforcement Learning with Value-Targeted Regression, 2020. \nIt provides a regret bound for a general nonlinear model family and shows that the regret scales linearly with the model dimension. This result seems to subsume the results of this paper.\n\n3) Foster et al. The Statistical Complexity of Interactive Decision Making, 2022.\nThis paper provides a general framework for regret analysis, and one section of it specifically shows how to apply it to bilinear class MDPs.\n\n1) and 2) appear to be the closest related work to the submission. It is a bit disappointing that the authors didn't mention the connections at all.", " Thank you for your time, careful review, and for the kind words about the soundness of our results and the novelty of our analysis. \n\n**Comparison with linear MDPs:** We emphasize that our observation of linearity does not simply reduce the problem to a linear MDP. This is because our linearity is in an infinite-dimensional RKHS, and also we don't have linearity in the parameter. 
Therefore, classical linear RL algorithms (JYWJ20, ZBB+20, Yang and Wang; 2020) cannot be used for this setting. Moreover, while we also solve planning using linearity, the latter is merely a direct consequence of the (realistic) considered model. In contrast, in the linear RL literature, linearity is a strong assumption, especially in a finite-dimensional space. Please refer to the general comment for a detailed discussion.\n\n**Reward estimation:** We agree that estimating the transition model is harder than estimating the rewards, and we also agree that, compared to [CGM21], estimating the reward is only a minor improvement. However, the base algorithm, RLSVI, was proposed in [ZBB+20] assuming a known transition, so with regard to the linear literature our work is novel and our contribution is significant.\n\n**Notations:** We agree that our notation is confusing at times. To clarify: $\\pi$ should be indexed by $k$, and it is BEF-RLSVI's policy at step $k$; everything denoted by $\\star$ will be changed to $\\pi^\\star$ and just means that it's the value function under that set of parameters while acting with the optimal policy. \n\n**Stochastic optimism sketch:** We would like to clarify the reasoning in line 230: the first inequality comes from two facts. **1)** If the MDP's parameters are $\\hat{\\theta}^p,\\tilde{\\theta}^r$, then $\\pi_k$ is the optimal policy and we obtain: $V_{\\hat{\\theta}^p,\\tilde{\\theta}^r,1}^{\\pi_k} (s_1) = Q_{\\hat{\\theta}^p,\\tilde{\\theta}^r,1}^{\\pi_k} (s_1, \\pi_k(s_1)) \\ge Q_{\\hat{\\theta}^p,\\tilde{\\theta}^r,1}^{\\pi^\\star} (s_1, \\pi^\\star (s_1))$. And **2)** by definition we have: $V_{\\theta^p,\\theta^r,1}^{\\pi^\\star} (s_1) = Q_{\\theta^p,\\theta^r,1}^{\\pi^\\star}(s_1,\\pi^\\star(s_1))$. The decomposition below line 230 can be understood using: $Q_{\\hat{\\theta}^p,\\tilde{\\theta}^r,1}^{\\pi^\\star} (s_1, \\pi^\\star (s_1)) - Q_{\\theta^p,\\theta^r,1}^{\\pi^\\star}(s_1,\\pi^\\star(s_1)) = V_{\\hat{\\theta}^p,\\tilde{\\theta}^r,1}^{\\pi^\\star} (s_1) - V_{\\theta^p,\\theta^r,1}^{\\pi^\\star} (s_1)$; the other terms just telescope. We will add more comments about this in the revised version so that the decomposition is clear for the reader.\n\n**Minor comments:** We confirm that $\\theta$ denotes $(\\theta^p,\\theta^r)$ and that the $\\pi$ policy is indeed the one derived in Algorithm 2. We insist, however, on our choice of indexing the value function with the policy because of its utility in the proof of the stochastic optimism, for example; i.e., we sometimes need to consider the estimated state-action value function but with the optimal policy instead. \n\n**Experiments:** We agree about the advantages of empirical evaluation and would like to confirm that it is indeed a direction we intend to explore in the future. However, we focus here on the theoretical challenges of the considered setting, and we decided to put emphasis on such tools, results, and explanations rather than extensive experiments. Consequently, this work is fairly extensive and slightly notation-heavy for the NeurIPS page limit, so we think it is best to devise a longer version in the future to include experiments.\n\nWe wish to thank you again for your time, careful review, and for acknowledging the strength of our contribution. 
We hope that our response clarified some of the confusion, and we would appreciate it if you could adjust the score accordingly.", " We would like to thank you for your time, thorough feedback, and the kind words about the clarity of the contribution and the novelty of the analysis. \n\n**Model expressivity:** Regarding the difference with the original model of [CGM21], thank you for catching this honest mistake; it is indeed less generic. We followed the definition of [LLS+21], which we thought was the same as [CGM21]. Note, however, that the model we consider still recovers the original model if the base measure $h$ verifies: $\\exists h_1, h_2: h(s',s,a) \\propto h_1(s') h_2(s,a)$. This is true because we can keep linearity in Eq 4 simply by multiplying $\\phi^{\\mathrm{p}}(s, a)$ by $h_2(s,a)$ and $\\mu^{\\mathrm{P}}(s^{\\prime})$ by $h_1(s^{\\prime})$. We do not believe that this is restrictive, as it is verified by all the examples provided in [CGM21] (see Section 4 therein). It also seems intuitive for the base measure to decouple $(s,a)$ from $s'$, as is the case in the exponent. \n\n**Linear MDPs:** Thank you for catching the error in line 43; the comparison with linear RL is indeed not straightforward. Please refer to the general comment for more details about this comparison.\n\n**Notations:** The notation for matrix A in Algorithm 1 is indeed used before its definition; we will move its definition as suggested. Thank you for catching the typo in $(\\hat{\\theta}^p,\\tilde{\\theta}^r )$: it should indeed be indexed with k; they are the parameters estimated in the same algorithm, and we will modify this in the final draft. Also, $\\zeta_{hk}$ being a martingale sequence follows since $V_{\\hat{\\theta}^p (k),\\theta^r,h+1}$ depends on $\\hat{\\theta}^p (k)$, which comes from previous data. We apologize, and we will fix these typos. \n\n**Estimation tractability:** Regarding the complexity of the maximum likelihood estimation, we know that this is tractable for simple distributions like the Gaussian and for linearly controlled dynamical systems. For generic transitions, it may indeed require integral approximations; however, we believe that this estimation problem is far simpler than the planning problem, since the latter traditionally involves approximating an integral for all $s^{\\prime}, a$. We will add a proper discussion in the paper, as we feel that this is important information for the reader.\n\n**Minor comments:** We thank you for your minor comments; we will fix the mentioned typos and add the definition of the expected reward as suggested. \n\nWe would like to thank you again for your careful reading and for helping us improve the quality of our writing, and we hope that you adjust your score if you believe that we clarified the issues that were raised.", " We would like to thank you for acknowledging our contributions, and for the kind words regarding the significance of the improvements over existing art, both in results and theory, and the quality of the explanations.\n\n**Regarding our position with respect to the literature**, please refer to the general comment, where we recall some key aspects of the linear setting. For instance, we emphasize that linear RL settings (Jin et al, '20; Zanette et al, '19; Yang and Wang, '19) are all model-based, and that the literature is yet to provide a single example of a continuous state-action space MDP that is well represented by said models (with finite parameters). 
This being said, we would also like to acknowledge an error in line 43 of our submission: indeed, we mistakenly stated that the bilinear exponential family model encompasses the linear one. We thank the reviewer for catching this, and we confirm that this will be removed in the revised version. We also want to clarify that, similar to Jin et al, 2020, the considered transition kernel is represented using the scalar products, and can therefore be fully described using an order of $d$ parameters.\n\n**Concerning the clipping improvement:** Our result to remove clipping is actually applicable to the linear setting as well. The main result that allows our improvement is Lemma 19 and, interestingly, it would be even easier to apply it to the linear setting. \nLet us explain how our enhancement follows in two steps. 1) The need to clip emerges when, in order to bound the regret, a quantity of the form $R_T \\le \\sum_t \\langle x^* - x_t, \\theta \\rangle$ appears (see Eq 14 and Eq 15 of Zanette et al, '20, and Eq 15 of Jin et al, '19); Cauchy-Schwarz is then used to obtain $R_T \\le \\sum_t ||x^* - x_t ||_{ V_{t-1}^{-1}} ||\\theta||_{V_{t-1}}$. The problem is that $||x^* - x_t||_{V_{t-1}^{-1}}$ can be large. Clipping techniques handle this if we know that the rewards are bounded. \n2) Our solution is to show that the large norms are not a real problem because they only occur a finite number of times (see Lemma 19). Therefore, our improvement is applicable to the RLSVI of Zanette et al, '20.\n\nWe would like to thank you a second time for your careful review; we hope to have answered your concerns, and we pledge to include the relevant discussions in the revised version. If you feel that your concerns have been answered, we would appreciate it if you could adjust your score accordingly.", " We would like to thank the reviewers for acknowledging the strengths and soundness of the contribution, as well as for their thoughtful comments and efforts towards improving our manuscript. In the following, we highlight general concerns of reviewers that were common and our effort to address these concerns. We then address comments specific to each reviewer by responding to them directly.\n\nWe would like to highlight the contrast with the linear RL literature. By definition, a linear MDP (cf [JYWJ20], [ZBB+20]) is such that for each $t \\in[H]$ there exist feature maps $\\phi_{t}: \\mathcal{S} \\times \\mathcal{A} \\rightarrow \\mathbb{R}^{d}$ and $\\psi_{t}: \\mathcal{S} \\rightarrow \\mathbb{R}^{d}$ and a parameter $\\theta_{t}^{r} \\in \\mathbb{R}^{d}$ such that:\n$r_{t}(s, a)=\\phi_{t}(s, a)^{\\top} \\theta_{t}^{r}$ and $P_{t}\\left(s^{\\prime} \\mid s, a\\right)=\\phi_{t}(s, a)^{\\top} \\psi_{t}\\left(s^{\\prime}\\right)$. The only difference between [JYWJ20] and [ZBB+20] is that the latter assumes a known feature $\\psi_t$ while the former does not. Note that the setting of (Yang and Wang, 2019a) is very similar, since they assume a bilinear transition: $P(s' \\mid s, a) = \\phi(s, a)^{\\top} M^* \\psi(s')$, where $M^*$ is the parameter.\n\nWe can now clarify certain confusions: \n\n**First**, we highlight that this entire line of work is model-based since it explicitly assumes a model on the transition.\n\n**Second**, although it can be appealing to assume a linear transition model (due to its interplay with the Bellman operator), assuming a *finite*-dimensional linear transition model has been acknowledged to be a stringent and impractical assumption. 
Indeed, this model was only shown to capture tabular MDPs, and we have yet to see any concrete example of continuous state-action spaces that it is able to capture efficiently (that is, with few parameters).\n\n**Third**, while the bilinear exponential family model is very different, we insist that our proof techniques also hold for linear MDPs. Indeed, **A)** our analysis uses transportation inequalities (Lemma 13) that elegantly bound our regret by the complexity of learning a bilinear form (the exponent of the transition model); **B)** controlling the latter is like controlling the regret in linear MDPs, and in both cases it proceeds similarly to the analysis of linear bandits; **C)** our analytical improvements, *e.g.*, Lemma 19 (rendering clipping unnecessary) and Lemma 18, intervene in step *B* of the analysis, which is identical in linear RL. Consequently, the contributions of Lemma 18 and Lemma 19 also hold for the analysis of linear RL algorithms.", " The work considers RLSVI-type algorithms on a bilinear exponential family MDP.\nThey make several contributions, including achieving tractable exploration.\n\nI believe this work represents a significant improvement over existing art, both in terms of results as well as the techniques involved. The work is well explained and it flows easily. I appreciate the work: I think it is well explained, and there is a solid contribution to a well-defined problem.\nI therefore recommend acceptance. \n\nHowever, I have one major complaint / observation.\nThis work is model-based, but the authors claim that it applies to the linear MDP model. Specifically, there is a choice that results in a linear MDP. However, as I understand, not all linear MDPs as defined in (Jin et al, 20') can be solved with this algorithm. This is because fixing \\phi on a linear MDP still yields many choices for \\mu (proportional to the state space). In this case, one would need order S (the state space) parameters to fully describe the transition kernel of the linear MDP. Jin et al, 20' bypasses this problem as it only needs to compute inner products, but the submission here puts a specific model on P; I would expect this model to be of order S in terms of dimensionality. Rather, the model assumed here seems to be that of Yang et al, "Reinforcement learning in feature space: Matrix bandit, kernels, and regret bound", which is far more restrictive. If that's the case, the comparison / improvement with Zanette et al, "Frequentist Regret Bounds for Randomized Least-Squares Value Iteration", is also not as clear, as the two operate in quite different models. More precisely, this work seems to be model-based (as explained above); as such, it requires estimating different quantities than the work by Zanette et al. It follows that the improvement that removes the artificial clipping of the value function (and improves the bound) used in Zanette et al. may be enabled by the setting where the current algorithm operates. I think this aspect should be clarified, both in the rebuttal as well as in the paper (apologies if I missed something). See above for details. In the specific linear setting considered by the authors, only a restricted linear MDP model seems to be covered", " This paper studies episodic reinforcement learning where the reward and transition probability functions belong to parametric bilinear exponential families. A randomized least-squares value iteration algorithm is proposed to perform tractable exploration and planning for the bilinear exponential family of MDPs. 
Novel analysis techniques are introduced to show that the regret of the proposed algorithm is bounded by $\\widetilde{\\mathcal{O}}(\\sqrt{d^3H^3K})$. This paper is well-written and easy to follow. The authors have clearly summarized their contributions and discussed the improvement of their method over existing works. The theoretical analysis seems to be novel and non-trivial. However, the bilinear exponential family model defined in this paper seems to be more restrictive than the original definition in [CGM21]. Also, some notations are confusing. See more details below. Some comments and questions are as follows:\n\n1. The notation for matrix $\\mathbb{A}$ in Algorithm 1 is used before its definition. It would be better to move the definition of $\\mathbb{A}$ earlier.\n2. Compared with the original definition of the bilinear exponential family model in [CGM21], the factor $h(s',s,a)$ is omitted. Is this critical for the observation of linearity of transitions in Eq.(4)?\n3. It is said in lines 41-43 that the linear MDP is a special case of the bilinear exponential family model. Could the authors elaborate on this?\n4. In line 11 of Algorithm 1, solving $\\hat\\theta^p(k)$ and $\\hat\\theta^r(k)$ involves integral approximations, as remarked in lines 176-177. Can this be implemented efficiently? More details on this should be provided.\n5. The $({\\hat\\theta}^p, \\tilde\\theta^r)$ in line 6 of Algorithm 1 should be indexed with $k$? Similarly, in the regret decomposition in Eq.(9) and the corresponding analysis in Section 5 and Appendix B, what is $\\hat \\theta^p$? Is it an arbitrary estimator or some estimate constructed while running Algorithm 1?\n6. Following the previous question, it is not clear why $\\zeta_{hk}$ in line 508 in the appendix is a martingale sequence. Does $V_{\\hat\\theta^p,\\theta^r,h+1}$ depend on previous data? Details about this $\\hat\\theta^p$ need to be specified.\n\n---\nMinor issues:\n\n1. In line 22, '$t = 1, \\ldots, H$', missing a comma.\n2. It seems that the definition of $\\mathbb{E}_{s,a}^{\\tilde\\theta^r}[r]$ is missing in the main context.\n3. In line 40, a few extra '-'.\n4. The inner product in Eq.(5) and the equation under line 191 should be $\\langle\\cdot, \\cdot \\rangle_{\\mathcal{H}}$?\n5. In line 140, '$[p, r]$' should be {$p, r$}? The authors have adequately addressed the limitations and potential negative societal impact of their work.", " This paper studies model-based reinforcement learning for episodic Markov decision processes whose rewards and transitions are parametrized by bilinear exponential families with features of state and action. To balance the exploration-exploitation trade-off, the authors suggested a randomized algorithm that injects calibrated Gaussian noise in the parameter of the rewards. The proposed algorithm achieves a $\\tilde{\\mathcal{O}}(\\sqrt{d^3 H^3 K})$ regret bound with high probability. \n <Strengths>\n\n1. In this paper, the authors not only presented a provable algorithm for a model-based setting whose transitions and rewards are parametrized by a bilinear exponential family, which extends the linear transition assumption, but also improved the regret by $\\sqrt{H}$ compared to the upper-confidence-based algorithm in a similar setting. \n\n2. Unlike the other model-based algorithms that assume the reward information is known, this paper solves the problem under the assumption that neither the transition model nor the reward information is known. 
In addition, it is interesting that the analysis is carried out by dividing the rounds into \"Good rounds\" and \"Bad rounds\" according to the weighted norm of the state-action feature, without using a handcrafted clipped value function.\n\n3. Based on the concentration results of the upper-confidence-based algorithm in a similar problem setting, the authors presented, for the first time, a randomized algorithm — known to be practical but difficult to analyze — with a frequentist regret bound guarantee. \n\n<Weakness>\n\n1. Although the authors dealt with the setting where the model is parametrized by the bilinear exponential family, they seem to solve the problem by using linearity in an RKHS. This seems to rely on information about the RKHS (e.g., the RBF kernel $k(x,y)$) corresponding to the transition probability that the agent estimates. If not, what is the difference between the problem in this paper and the linear MDP methods in ([JYWJ20], [ZBB+20], Yang & Wang; 2020)?\n \nc.f) Yang & Wang, 2020: Reinforcement Learning in Feature Space: Matrix Bandit, Kernels, and Regret Bound\n\n2. As the authors said, most existing works on model-based RL usually assume that information about the reward is given. The reason is that it is more difficult to estimate the transition model than the reward, and in the end, the part for reward estimation in the regret is not the leading term. This can be seen in the main theorem presented in the paper. Therefore, it is questionable whether learning without information on the reward can be said to be a special contribution.\n\n I would appreciate it if the authors could tell me their opinion on the aforementioned 'Weakness' part.\n\nSome more questions are as follows:\n\nLine 93: I think $\\theta$ denotes $(\\theta^p, \\theta^r)$. It would be better to mention that before referring to the optimal policy, for clarity.\n\nLine 191: For the $Q$ function that the agent calculates using the estimated parameter, if $Q^{\\pi}_{\\hat{\\theta}^p, \\tilde{\\theta}^r, h}$ in line 191 is the same $Q\\_{\\tilde{\\theta}, h}$ as the one calculated in Algorithm 2, shouldn't it be correct that there should be no $\\pi$ symbol in $Q$? Also, I think it should be fixed in the regret decomposition (eq. 10).\n\nLine 230 (Stochastic optimism): I am a little confused because of the $\\pi$ notation in the value function calculated with the estimated parameter (e.g., $V^{\\pi}\\_{\\hat{\\theta}^p, \\tilde{\\theta}^r, 1})$, but I am curious how the $Q^*_{\\hat{\\theta}^p, \\tilde{\\theta}^r, 1}$ appears in the first inequality. Also, I am not sure how the second inequality decomposes. \n\nMost of the explanations were clear and well written, but with the appearance of a lot of notation, it would be very helpful for understanding this paper if the authors mentioned why each inequality holds in the appendix. I think there are no issues related to social impact. However, although this paper is mainly theoretical, considering that many recently published theoretical papers about model-based RL also present numerical experiments, I think it would be better if there were experimental results in this paper. " ]
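To make the randomized-exploration mechanism debated in this thread concrete, here is a minimal sketch of an RLSVI-style perturbation of a reward-parameter estimate. It is a simplified stand-in under stated assumptions: regularized least squares replaces the paper's penalized maximum-likelihood estimator, and the `noise_scale` calibration is left abstract because the exact constant is analysis-specific; none of the names below come from the submission.

```python
import numpy as np

def perturbed_reward_parameter(features, rewards, reg=1.0, noise_scale=1.0, rng=None):
    """Sample a perturbed reward parameter for optimism-by-randomization planning.

    features: (n, d) array of observed state-action features.
    rewards:  (n,)  array of observed rewards.
    The Gaussian noise is shaped by the inverse regularized design matrix,
    which is the usual way RLSVI-type methods calibrate exploration noise.
    """
    rng = rng if rng is not None else np.random.default_rng()
    X = np.asarray(features, dtype=float)
    y = np.asarray(rewards, dtype=float)
    d = X.shape[1]
    A = reg * np.eye(d) + X.T @ X              # regularized design matrix
    theta_hat = np.linalg.solve(A, X.T @ y)    # point estimate of the parameter
    cov = noise_scale ** 2 * np.linalg.inv(A)  # exploration noise covariance
    return theta_hat + rng.multivariate_normal(np.zeros(d), cov)
```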
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "W6NMI07SZqA", "N4f5exetHrt", "kDRkCg5obHm", "DLFltiRkjs6", "NikBNkZ4js1", "BwOluoXS-ZE0", "_96fsWUtt4S", "vCZz2abGu9I", "3a-r5Bkyvd-", "bEH77796apX", "TTUxCLQMBjD", "8ZqaaRfW-ny", "4TNrI0w4eoM", "nips_2022_LT6-Mxgb3QB", "nips_2022_LT6-Mxgb3QB", "nips_2022_LT6-Mxgb3QB", "nips_2022_LT6-Mxgb3QB" ]
nips_2022_3I8VTXMhuPx
Hiding Images in Deep Probabilistic Models
Data hiding with deep neural networks (DNNs) has experienced impressive successes in recent years. A prevailing scheme is to train an autoencoder, consisting of an encoding network to embed (or transform) secret messages in (or into) a carrier, and a decoding network to extract the hidden messages. This scheme may suffer from several limitations regarding practicability, security, and embedding capacity. In this work, we describe a different computational framework to hide images in deep probabilistic models. Specifically, we use a DNN to model the probability density of cover images, and hide a secret image in one particular location of the learned distribution. As an instantiation, we adopt a SinGAN, a pyramid of generative adversarial networks (GANs), to learn the patch distribution of one cover image. We hide the secret image by fitting a deterministic mapping from a fixed set of noise maps (generated by an embedding key) to the secret image during patch distribution learning. The stego SinGAN, behaving as the original SinGAN, is publicly communicated; only the receiver with the embedding key is able to extract the secret image. We demonstrate the feasibility of our SinGAN approach in terms of extraction accuracy and model security. Moreover, we show the flexibility of the proposed method in terms of hiding multiple images for different receivers and obfuscating the secret image.
Accept
This paper studies a novel variation of image steganography. The proposed approach is different from prior work (mostly building on autoencoders): it uses a GAN and hides a secret image in one particular location of the learned distribution. The central idea of the paper seems novel and interesting. The reviewers raised several concerns about the limited evaluation and the complexities of comparing to other methods that directly generate images. Overall, this paper seems to have novelty and interesting ideas, and the benefits seem to overcome the limitations, based on the rebuttal and discussions.
test
[ "5R1L5wgYI8", "NbbXjgChTuU", "8yszKUkpN6g", "lZQE79MVV4J", "7DRMwIJzsBM", "PsMmUYcWP5", "heUQ5AdGxY" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for appreciating our work. The summary of the current paper is indeed thorough and accurate. \n\n1. We thank the reviewer to recognize the ability of the proposed method to hide multiple images for different receivers as a significant advantage over previous methods, despite that we choose to down-weight this part in our writing to give prominence to the new and general probabilistic hiding framework. \n2. We appreciate the reviewer to recognize the primary contribution of the current work is the mathematical formulation of probabilistic image hiding, which is also computational feasible.\n", " Thanks for recognizing the merits of our work and for helpful suggestions. \n\n**1. Regarding only $20$ images as the testbed:** The primary reason to use the $20$ images from the original SinGAN’s repository is that it is much easier to demonstrate the normal behaviour of the stego SinGAN in comparison to the original SinGAN for security analysis. Note that as a completely different hiding scheme, image-based steganographic analysis methods cannot be applied. After all, it is effortless to reimplement the proposed SinGAN-based image hiding scheme by minor modification of official SinGAN implementations [3][4]. And thus, the generality of the proposed method on a wide range of natural images can be easily verified. Moreover, in the domain of single-image generative models, it is common practice to demonstrate the model feasibility on only dozens of images due to the computational burden during training[1][2]. Similar to these works, it is sufficient to train 20 images to show the effectiveness of our proposed solution.\n\nWe are comfortable performing larger-scale experiments (for example, 200 image pairs) to further demonstrate the feasibility and effectiveness of the proposed hiding scheme, subject to the satisfaction of the reviewer.\n\n\n**2. Regarding one SinGAN for each cover/secret image pair:** We totally agree with the reviewer that the proposed method needs to train one SinGAN for each cover/secret image pair, which may be considered as a disadvantage. Nevertheless, the authors should point out that this is a general challenge in all secret-in-network data hiding works [5], and we are **the first** to propose an image-in-network hiding framework with improved extraction accuracy (see the third response on the exaggerated performance of HiNet), model security, and flexibility (hiding multiple images for different users, which is not accomplished before). \n\n\n**3. Regarding the exaggerated performance of HiNet and others in the current Table 1:** The current HiNet obtains nearly perfect extraction accuracy in terms of PSNR ($\\ge 45$ dB) and SSIM ($\\ge 0.99$) in the original paper and in Table 1 of our manuscript. After careful re-examination of the official HiNet implementations and personal communication of the original authors, we find that the stego image by HiNet is not quantized to $8\\times 3$ bpp (three for RGB channels) for transmission. Instead, each pixel of the stego image is the single-precision floating-point format of $32\\times 3$ bpp. This accidentally incorrect implementation allows for a trivial hiding solution: we have more space to accommodate the cover and secret images by simple concatenation. 
The results of HiNet after quantization are shown in the table below.\n\n| Method | PSNR | SSIM | DISTS |\n|----------|-------|-------|-------|\n| Baluja17 | 23.75 | 0.853 | 0.109 |\n| HiDDeN | 25.89 | 0.875 | 0.106 |\n| Weng19 | 33.98 | 0.935 | 0.057 |\n| HiNet | 32.53 | 0.935 | 0.054 |\n| Ours | 34.58 | 0.951 | 0.039 |\n\n\n**4. Regarding low PSNR when obfuscating the image:** Thanks for pointing it out. We have compared our method with HiNet (with proper quantization) in the presence of image obfuscation. The results are listed in the table below, where we find that our method significantly outperforms HiNet. \n\n| Method | PSNR | SSIM | DISTS |\n|--------|-------|-------|-------|\n| HiNet | 16.43 | 0.398 | 0.254 |\n| Ours | 20.68 | 0.722 | 0.179 |\n\n**5. Regarding the one-to-one mapping between the secret noise and the secret image:** We agree with the reviewer that it is difficult to mathematically ensure a bijective mapping between the secret noise and the secret image. Thus, we designed the experiment to empirically study the possibility of secret image leakage. As suggested by the reviewer, we will test this aspect of model security using a much larger set of image pairs, and update the results accordingly.\n\n\n**6. Regarding Line 97 and Lines 197-201:**\nThe proposed method can be treated as both secret-in-network hiding, where the secret is a natural image, and constructive image hiding, in the sense that we hide a secret image during the construction of a probability density function. In terms of extraction accuracy, we have updated the results with corrected implementations of Baluja17, HiDDeN, Weng19, and HiNet, and modified the descriptions accordingly.\n\n[1] Shaham et al., SinGAN: Learning a generative model from a single natural image. In IEEE/CVF International Conference on Computer Vision, pages 4570–4580, 2019.\n\n[2] Hinz et al., Improved techniques for training single-image GANs. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1300–1309, 2021.\n\n[3] https://github.com/tamarott/SinGAN\n\n[4] https://github.com/tohinz/ConSinGAN\n\n[5] Uchida et al., Embedding watermarks into deep neural networks. In ACM International Conference on Multimedia Retrieval, pages 269–277, 2017.", " **4. Regarding the exaggerated performance of HiNet and others in the current Table 1:** The current HiNet obtains nearly perfect extraction accuracy in terms of PSNR ($\\ge 45$ dB) and SSIM ($\\ge 0.99$) in the original paper and in Table 1 of our manuscript. After careful re-examination of the official HiNet implementation and personal communication with the original authors, we find that the stego image by HiNet is not quantized to $8\\times 3$ bpp (three for RGB channels) for transmission. Instead, each pixel of the stego image is in the single-precision floating-point format of $32\\times 3$ bpp. This accidentally incorrect implementation allows for a trivial hiding solution: we have more space to accommodate the cover and secret images by simple concatenation. The results of HiNet after quantization are shown in the table below.\n\n| Method | PSNR | SSIM | DISTS |\n|----------|-------|-------|-------|\n| Baluja17 | 23.75 | 0.853 | 0.109 |\n| HiDDeN | 25.89 | 0.875 | 0.106 |\n| Weng19 | 33.98 | 0.935 | 0.057 |\n| HiNet | 32.53 | 0.935 | 0.054 |\n| Ours | 34.58 | 0.951 | 0.039 |\n\n\n**5. Regarding fair comparison to existing methods:** Thanks for bringing up the issue of comparing the proposed framework to existing ones. 
As also realized by Reviewer bS3X, the proposed probabilistic hiding framework and its SinGAN instantiation are the first of their kind, and they are completely different from previous schemes. As a result, the proposed method cannot be directly compared to previous schemes, let alone compared fairly. As well recognized and appreciated by Reviewer bS3X, the authors tried their best to identify the closest baselines (i.e., Baluja17, HiDDeN, Weng19, and HiNet), and tested them on the same 20 image pairs (as shown in the Appendix) for extraction accuracy. The proposed method outperforms them in terms of extraction accuracy (see the corrected and updated table above) and efficiency. Moreover, the authors proposed three different computational tests to evaluate model security.\n\n\n**6. Regarding more possibilities in experiments:** We have covered a number of possibilities, and we are glad to cover more of them in our future work. \n\n**7. Regarding stego metrics like \"db\" and \"bpp\":** Since ours is a completely different hiding framework, no stego image (i.e., cover + secret image) is generated, and thus no stego metrics such as \"dB\" and \"bpp\" can be computed. Instead, we only have the stego SinGAN to be publicly transmitted, whose embedding capacity cannot be directly measured by \"bpp\". To the best of the authors' knowledge, there is no stego metric for image-in-network hiding, and it is of interest to define one in the future, as suggested by the reviewer.\n\n**8. Regarding input image size/resolution:** One significant advantage of SinGAN is that it can work with and generate images of arbitrary resolution. The output resolution can even be different from the input resolution (e.g., using a $256\\times 256$ training image to generate $1,024\\times1,024$ images). The proposed image hiding scheme directly inherits this advantage from SinGAN, and thus the image size/resolution is not an issue. Nevertheless, we agree with the reviewer that different image sizes/resolutions should be taken care of, and we will provide experimental results for images with high resolution (e.g., $1,024\\times1,024$) in the Appendix.\n", " Thanks for spending time and effort in providing the comments, which the authors highly appreciate. However, the reviewer seems to misunderstand the key contribution of our paper, and we humbly disagree with the reviewer's claim. We would like to clarify our key contributions and novelties as follows.\n\n**1. Regarding Novelty:** As well recognized by Reviewer bS3X, the primary contribution of the paper is a novel general computational framework for probabilistic image hiding (which can also be considered a form of image-in-network hiding and constructive hiding), which significantly departs from existing autoencoder-based image-in-image hiding schemes. In principle, the proposed framework can be implemented by a variety of deep probabilistic models, including diffusion-based and autoregressive models, provided that the guided sampling (illustrated in Fig. 1 (d)) can be feasibly designed. The proposed SinGAN approach is just a working instantiation of the more general hiding framework. Our framework has advantages over the autoencoder-based hiding scheme in four ways. First, there is no need to communicate privately with the receiver the decoding network, which may be substantially larger than the images to be hidden. 
Second, the proposed hiding framework is more secure because 1) it naturally bypasses existing image-based steganographic analysis tools, traditional or deep, and 2) it demonstrates normal SinGAN behaviours in various ways. Third, it has improved extraction accuracy over autoencoder schemes, e.g., Baluja17, HiDDeN, Weng19, and HiNet. Note that after re-examining the current implementations of autoencoder schemes and personal communication with some of the original authors, we find that all of them do not quantize the stego images into 24 bpp, leading to trivial hiding solutions and exaggerated performance (see the detailed analysis below). Fourth, our framework is capable of hiding multiple images for different receivers, a very challenging task that has not been accomplished before.\n\n**2. Regarding choosing the images from the SinGAN's repository as the testbed:** We respectfully and firmly disagree with the comment that ``the paper looks like this is a preliminary result''. The primary reason to stick to the SinGAN's repository is that it is much easier to demonstrate the normal behaviours of the stego SinGAN in comparison to the original SinGAN for model security analysis. Note that, as a completely different hiding scheme, image-based steganographic analysis methods cannot be directly applied. After all, it is effortless to reimplement the proposed SinGAN-based image hiding scheme by minor modification of the official SinGAN implementations [3][4]. And thus, the generality of the proposed method on a wide range of natural images can be easily verified. Moreover, in the domain of single-image generative models, it is common practice to demonstrate model feasibility on only dozens of images due to the computational burden during training [1][2]. Similar to these studies, it is sufficient to train on 20 images to show the effectiveness of our proposed solution. We are comfortable performing larger-scale experiments (for example, 200 image pairs, which amounts to training 200 SinGAN generative models) to further demonstrate the feasibility and effectiveness of the proposed hiding scheme, subject to the satisfaction of the reviewer.\n\n**3. Regarding the location to hide the secret image in deep probabilistic models:** In the SinGAN instantiation, as we are working with implicit generative models, the location is implicitly determined by the fixed set of noise maps (fully determined by the embedding key) given as inputs to the SinGAN. The way we hide the secret image is to fit a deterministic mapping from the fixed set of noise maps to the secret image during patch distribution learning. This corresponds to minimizing the second reconstruction term in Eq. (4).\n\n[1] Tamar Rott Shaham, Tali Dekel, and Tomer Michaeli. SinGAN: Learning a generative model from a single natural image. In IEEE/CVF International Conference on Computer Vision, pages 4570–4580, 2019.\n\n[2] Tobias Hinz, Matthew Fisher, Oliver Wang, and Stefan Wermter. Improved techniques for training single-image GANs. In IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1300–1309, 2021.\n\n[3] https://github.com/tamarott/SinGAN\n\n[4] https://github.com/tohinz/ConSinGAN\n", " This paper describes a technique for hiding images in images by means of SinGAN. The authors learn how to create key-generated noise to be embedded into the cover image by exploiting SinGAN. A discussion on the robustness of the technique against discovery is also presented, with some hints on obfuscation. 
The main strength of this paper is the application of the SinGAN architecture to the problem of image hiding. This is a first. \nThere are several weaknesses on the other hand: \n1) the paper looks like a preliminary result; they employed only the 20 images available in the SinGAN GitHub repository;\n2) some parts of the text are not clear: (i) the authors claim to put the secret into a \"location\" of the cover... where? how? not clear to me... (ii) experiments were carried out on the 20 images of the SinGAN repository, but what about the competing SOTA techniques? Are they comparable? I know that HiNet works with images of higher dimensions and quality... I don't feel like the comparison is fair\n3) experiments need to cover many more possibilities\n4) usual stego metrics like \"db\" or \"bpp\" are not shown\n5) Figure 2 is shown but not referenced in the text 1) Why did you use only SinGAN images for cover/secret? Can you do/propose some more examples?\n2) How did you carry out the comparison with SOTA techniques? Did you do experiments on the same images or just report each SOTA technique's paper results (I found the same PSNR/SSIM values in the HiNet paper... I don't feel like this is a fair comparison)?\n The authors do not take adequate care of image size/resolution. In my opinion this is a great limitation of the work that should be addressed. The images used are too small/low-resolution to expose visible alterations... Maybe this is a limit... But we don't know, because further experiments need to be carried out.", " The task is about image-in-image steganography. Unlike previous methods, which follow the autoencoder approach, they use a GAN-based network to model the probability density of cover images, and hide a secret image in one particular location of the learned distribution. Strengths\n(1) The idea of hiding a message in the learned distribution is interesting. It is different from previous methods, which use an autoencoder scheme.\n(2) Compared with previous work, the proposed method doesn't need to transmit the decoder to the receiver via a subliminal channel. Their model can be publicly transmitted, and only the key needs to be sent via the subliminal channel.\n(3) The proposed method doesn't directly generate stego images, thus avoiding the issue of possible detection by steganalysis methods.\n(4) Multiple images can be hidden in one model for different receivers, which is a challenging task that has not been accomplished before.\n\nWeakness\n(1) The dataset contains only 20 images. \n(2) The other methods can use one model to accommodate different cover-secret images. But the proposed method needs to train one SinGAN for each cover image. \n(3) The proposed method performs much worse than HiNet in terms of the extraction accuracy of the secret image.\n(4) The extraction of the secret image with obfuscation has low PSNR (20dB).\n(5) The mapping between the secret noise (generated by the embedding key) and the secret image may not be bijective (i.e., one-to-one), which means it is possible to obtain the secret image with a random sample. In this paper they randomly draw 100,000 samples from each of the 20 trained stego SinGANs to show that the possibility of secret image leakage is less than 0.001%. But I think 20 is a small number, which is insufficient to prove the security.\n (1) Lines 197-201 are unclear. The proposed method performs worse than Weng19 and HiNet in terms of the extraction accuracy of the secret image. 
It is claimed that secret-in-network hiding is generally considered much more difficult than secret-in-image hiding. But at line 97, the proposed method is classified as constructive image hiding, not secret-in-network hiding. The authors should explain clearly why the extraction accuracy is low. The other methods can use one model to accommodate different cover-secret images. But the proposed method needs to train one SinGAN for each cover image. ", " The paper proposes a method to hide images in deep probabilistic models. The proposed method is novel, and the existing methods can be framed as particular cases of it. One of the significant properties of the proposed method is its ability to be used for multiple receivers. The authors introduce the problem by discussing the current different approaches to hiding data using DNNs and outline the contributions of the paper. Next, related work for each of the described current approaches is provided. The authors then give a mathematical model formulation and explain how the data hiding computations are carried out. The approach, being novel, cannot be directly compared to previous works; hence, the authors identify the closest baselines to the proposed model and, using experimental results, show that the proposed model either performs better or is more efficient than the considered baselines. The authors also evaluate the model from a security perspective using three methods. Lastly, the authors describe experiments evaluating the model's ability to hide images for multiple receivers and obfuscate the secret image. Typos:\n1. Line 54: adopts -> adopt/adopted NA NA" ]
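To make the hiding mechanism described in rebuttal point 3 above concrete, here is a minimal sketch of the kind of training objective being discussed. This is not the authors' released code: the generator/discriminator interfaces (`G`, `D`), the single-tensor treatment of the key-derived noise maps (the multi-scale pyramid is collapsed for brevity), and the weight `alpha` are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def stego_singan_loss(G, D, secret_image, key_noise_maps, alpha=10.0):
    # (i) Adversarial term: samples generated from fresh random noise should
    # match the patch distribution of the cover image (generator side of a
    # WGAN-style objective; the discriminator is trained separately).
    random_noise = torch.randn_like(key_noise_maps)
    adv_loss = -D(G(random_noise)).mean()

    # (ii) Reconstruction term: the *fixed* noise maps derived from the
    # embedding key are pinned to the secret image, i.e., the "second
    # reconstruction term in Eq. (4)" referred to in the rebuttal.
    recon_loss = F.mse_loss(G(key_noise_maps), secret_image)

    return adv_loss + alpha * recon_loss
```

Under this reading, the stego SinGAN behaves like an ordinary SinGAN when fed random noise, while only the key holder can regenerate the secret image by supplying the key-derived noise maps.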
[ -1, -1, -1, -1, 3, 5, 7 ]
[ -1, -1, -1, -1, 4, 2, 3 ]
[ "heUQ5AdGxY", "PsMmUYcWP5", "lZQE79MVV4J", "7DRMwIJzsBM", "nips_2022_3I8VTXMhuPx", "nips_2022_3I8VTXMhuPx", "nips_2022_3I8VTXMhuPx" ]
nips_2022_iMK2LP0AogI
CUP: Critic-Guided Policy Reuse
The ability to reuse previous policies is an important aspect of human intelligence. To achieve efficient policy reuse, a Deep Reinforcement Learning (DRL) agent needs to decide when to reuse and which source policies to reuse. Previous methods solve this problem by introducing extra components to the underlying algorithm, such as hierarchical high-level policies over source policies, or estimations of source policies' value functions on the target task. However, training these components induces either optimization non-stationarity or heavy sampling cost, significantly impairing the effectiveness of transfer. To tackle this problem, we propose a novel policy reuse algorithm called Critic-gUided Policy reuse (CUP), which avoids training any extra components and efficiently reuses source policies. CUP utilizes the critic, a common component in actor-critic methods, to evaluate and choose source policies. At each state, CUP chooses the source policy that has the largest one-step improvement over the current target policy, and forms a guidance policy. The guidance policy is theoretically guaranteed to be a monotonic improvement over the current target policy. Then the target policy is regularized to imitate the guidance policy to perform efficient policy search. Empirical results demonstrate that CUP achieves efficient transfer and significantly outperforms baseline algorithms.
Accept
The paper proposes a method for leveraging a list of pretrained policies when learning a new task, by picking the guidance policy through maximal one-step policy improvement evaluated with the learned critic. The contribution is simple, but the writing, theory, and experiments/ablation studies are clean and easy to follow. There is a consensus among the reviewers for the acceptance of the paper. Minor comments: - adding a mechanism for automatically growing and pruning source policies could be a nice extension, especially in a life-long continual learning environment, where once you have learned a novel-enough high-reward policy you may want to add it to the source set, so that when the environment changes and changes back, the agent can reuse that learned optimal behavior. [1] - a fun experiment to include is to ignore reward and only do imitation during policy improvement (just the KL term), while still using the reward critic for policy selection. If we know the source policies sufficiently cover the full optimal policy, then this could be a good debugging test. [1] Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., ... & Hadsell, R. (2016). Progressive neural networks. arXiv preprint arXiv:1606.04671.
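As a concrete illustration of the critic-guided selection rule summarised above, the following is a minimal sketch. It is not the authors' implementation: the `policy.sample` interface, the Monte Carlo value estimate, and `n_samples` are assumptions made for illustration.

```python
import torch

def choose_guidance_policy(state, critic, target_policy, source_policies, n_samples=3):
    """At a single state, pick the policy (a source policy or the current
    target policy) whose actions have the largest estimated value under the
    current critic, i.e., the largest one-step improvement."""
    best_policy, best_value = None, -float("inf")
    for policy in [target_policy] + list(source_policies):
        actions = policy.sample(state, n_samples)              # (n_samples, action_dim)
        # state assumed shape (1, obs_dim); broadcast it across the samples.
        values = critic(state.expand(n_samples, -1), actions)  # Q(s, a) per sampled action
        value = values.mean()                                  # Monte Carlo estimate of E_a[Q(s, a)]
        if value > best_value:
            best_policy, best_value = policy, value
    return best_policy
```

The meta-review's suggested debugging experiment would then amount to dropping the reward term from the actor update and training the target policy purely to imitate the actions of the policy this routine selects, while still using the critic for the selection itself.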
train
[ "EKEW60cOsiC", "E29WE-DjOTn", "TMhaXisxxt", "9dVEW3J2d9G", "-2KfdDbm45", "fVvEsE9PXBI", "IZNfFjr3Ld", "ehkzH_bgZt", "8AfYOG1X6mO", "TldoHwZ1Q_F", "N0fNiYKluSt", "Yfi2-v1shYi", "C7tW11TiH-8", "ys65rbSDyUe", "G4UZx3ZSwqE", "TXPjdnK2YlP", "gfo-IuBEJzb", "EUDOBQDIv-f", "aLhzSxG0PV9", "NVTR0evMDc" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the detailed response! The additional clarification and experiments address my problems. ", " Thank you for the encouraging response! We are glad that our response addresses your concerns. We are grateful for your valuable questions and suggestions, which help improve the paper.", " Hi Authors,\n\nThanks for your answers! I agree with your answers by and large, and it looks like you've added material to the paper to address them already.\n\nAs a sidenote, I'm impressed at the extra experiments you've run to address some of my points! I wasn't requiring/expecting that my suggestions be implemented immediately, but I like the paper even better for having them!", " Dear Reviewer 9Jza,\n\nSince the author-reviewer discussion period is approaching the deadline, we would appreciate it if you could check our response to your review comments soon. This way, if you have further questions and comments, we can still reply before the author-reviewer discussion period ends. Thank you very much for your time and efforts!\n\nBest,\n\nThe authors", " Dear Reviewer biXP,\n\nSince the author-reviewer discussion period is approaching the deadline, we would appreciate it if you could check our response to your review comments soon. This way, if you have further questions and comments, we can still reply before the author-reviewer discussion period ends. Thank you very much for your time and efforts!\n\nBest,\n\nThe authors", " Dear Reviewer nMQw,\n\nSince the author-reviewer discussion period is approaching the deadline, we would appreciate it if you could check our response to your review comments soon. This way, if you have further questions and comments, we can still reply before the author-reviewer discussion period ends. Thank you very much for your time and efforts!\n\nBest,\n\nThe authors", " Thank you very much for the positive re-evaluation and prompt feedback. We provide clarifications to your further questions as below.\n\n**Q1.** It seems not serious to consider HAAR and PTF as the SOTA method of HRL. They are works that published at 2019 and 2020, and I believe there are more HRL papers in recent years.\n\n**A1.** In recent years, considerable progress has been made in the field of HRL [1-13]. Most of the works do not focus on solving the problem of policy reuse. Instead, they solve problems such as exploration [1,2,3], subgoal representation learning [1,4], unsupervised skill discovery [5,6,7], decomposing complex tasks via subgoals [8,9], learning hierarchical policies from offline datasets [10,11], and learning options [12,13]. We have made our best on literature survey, but there is still a chance that we may miss related works. We appreciate it if you can provide more recent works on solving the problem of policy reuse with HRL.\n\n**Q2.** I highly suggest the authors to revise include all discussions in their final version to help readers understanding the method.\n\n**A2.** Thank you for your kind advice. We are thankful that reviewers have raised many valuable questions which help improve the paper, and we will include these discussions in the final version of the paper. \n\n\n\n### Reference\n\n\n\n\n[1] Li, S., Zhang, J., Wang, J., Yu, Y., & Zhang, C. (2021, September). Active Hierarchical Exploration with Stable Subgoal Representation Learning. In International Conference on Learning Representations.\n\n[2] Gehring, J., Synnaeve, G., Krause, A., & Usunier, N. (2021). Hierarchical skills for efficient exploration. 
Advances in Neural Information Processing Systems, 34, 11553-11564.\n\n[3] Bagaria, A., Senthil, J. K., & Konidaris, G. (2021, July). Skill discovery for exploration and planning using deep skill graphs. In International Conference on Machine Learning (pp. 521-531). PMLR.\n\n[4] Li, S., Zheng, L., Wang, J., & Zhang, C. (2020, September). Learning subgoal representations with slow dynamics. In International Conference on Learning Representations.\n\n[5] Kim, J., Park, S., & Kim, G. (2021, July). Unsupervised Skill Discovery with Bottleneck Option Learning. In International Conference on Machine Learning (pp. 5572-5582). PMLR.\n\n[6] Zhang, J., Yu, H., & Xu, W. (2020, September). Hierarchical Reinforcement Learning by Discovering Intrinsic Options. In International Conference on Learning Representations.\n\n[7] Fang, K., Zhu, Y., Savarese, S., & Fei-Fei, L. (2021). Discovering Generalizable Skills via Automated Generation of Diverse Tasks. arXiv preprint arXiv:2106.13935.\n\n[8] Kim, J., Seo, Y., & Shin, J. (2021). Landmark-guided subgoal generation in hierarchical reinforcement learning. Advances in Neural Information Processing Systems, 34, 28336-28349.\n\n[9] Gürtler, N., Büchler, D., & Martius, G. (2021). Hierarchical reinforcement learning with timed subgoals. Advances in Neural Information Processing Systems, 34, 21732-21743.\n\n[10] Rao, D., Sadeghi, F., Hasenclever, L., Wulfmeier, M., Zambelli, M., Vezzani, G., ... & Heess, N. (2021, September). Learning transferable motor skills with hierarchical latent mixture policies. In International Conference on Learning Representations.\n\n[11] Ajay, A., Kumar, A., Agrawal, P., Levine, S., & Nachum, O. (2020, September). OPAL: Offline Primitive Discovery for Accelerating Offline Reinforcement Learning. In International Conference on Learning Representations.\n\n[12] Araki, B., Li, X., Vodrahalli, K., DeCastro, J., Fry, M., & Rus, D. (2021, July). The logical options framework. In International Conference on Machine Learning (pp. 307-317). PMLR.\n\n[13] Veeriah, V., Zahavy, T., Hessel, M., Xu, Z., Oh, J., Kemaev, I., ... & Singh, S. (2021). Discovery of options via meta-learned subgoals. Advances in Neural Information Processing Systems, 34, 29861-29873.", " I thank the authors for the further discussions to address my concern. Overall, I think the method is reasonable and interesting, and it now seems complete enough to be accepted. I am now raising my score. However, there is one more question: it does not seem serious to consider HAAR and PTF as the SOTA methods of HRL. They are works published in 2019 and 2020, and I believe there are more HRL papers from recent years.\n\nFinally, I highly suggest the authors revise and include all discussions in their final version to help readers understand the method.", " Thank you so much for the prompt feedback and thoughtful advice. We are pleased that our previous response has addressed most of your concerns. Further clarification and discussion about the remaining questions are provided as follows. \n\n**Q1.** CUP seems like a curriculum way of implicit distillation.\n\n**A1.** Yes, your understanding is sensible. CUP is a dynamic implicit distillation, as the aggregation is based on the current critic as well as the current target policy, which are gradually improving during learning. \n\n**Q2.** (1) You said `HRL methods suffer from non-stationarity issues`; this seems not well-justified. 
(2) Did you mean that CUP `avoids the non-stationarity problem` by `monotonic improvement`?\n\n**A2.** (1) The non-stationarity issue is a common challenge in the field of HRL [1,2,3,4,5]. Many prior HRL works have proposed methods to alleviate this problem, such as designing intrinsic rewards [2], using off-policy corrections [3], adding regularizations to subgoal representations [4], and using hindsight transitions [5]. We provide further empirical evidence on the non-stationarity issue in **A3.** below.\n\n(2) CUP `avoids the non-stationarity problem` by avoiding training high-level policies, instead using expected advantages to choose source policies. The `monotonic improvement` is a property of CUP, and is not a direct reason why CUP avoids the non-stationarity problem. \n\n**Q3.** Can the authors put more evidence on the advantage of CUP compared with HRL methods? \n\n**A3.** As demonstrated in Figure 2, two SOTA HRL algorithms, HAAR and PTF, both significantly underperform CUP in Meta-World benchmark tasks. To further analyze the advantages of CUP and demonstrate the non-stationarity problem of HRL methods, we illustrate the percentages of each low-level policy being selected by HAAR's high-level policy in two representative tasks, as shown in Figure 18(a) and Figure 18(b) in Appendix A.15, respectively. Results show that HAAR's low-level policy selection suffers from a large variance over different random seeds, and oscillates over time. This is because, as the low-level policy keeps changing, the high-level transition becomes non-stationary and leads to unstable learning. In comparison, as shown in Fig. 18(c) and Figure 18(d), CUP's source policy selection is much more stable, as it selects source policies according to expected advantages instead of high-level policies, and avoids the non-stationarity problem.\n\n**Q4.** Can you put the \"insights on choosing $\\beta_1$ and $\\beta_2$\" along with the description of task difficulty into your paper? \n\n**A4.** Thank you very much for your advice on improving the clarity of the paper. To facilitate current discussions during the rebuttal phase, we temporarily keep the current paper structure for consistency. We will move these discussions about choosing $\\beta_1$ and $\\beta_2$ as well as task difficulty descriptions to the main paper in the final version of our paper.\n\n**Q5.** You can format your appendix by consolidating similar subsections into separate appendix sections.\n\n**A5.** Thank you very much for your advice on improving the presentation of the paper. To facilitate current discussions during the rebuttal phase, we temporarily keep the current paper structure for consistency. We will re-format our appendices in the final version of our paper. \n\nWe hope our responses address your concerns. We appreciate any further feedback.\n\n### Reference\n\n[1] Hutsebaut-Buysse, M., Mets, K., & Latré, S. (2022). Hierarchical Reinforcement Learning: A Survey and Open Research Challenges. Machine Learning and Knowledge Extraction, 4(1), 172-221.\n\n[2] Li, S., Wang, R., Tang, M., & Zhang, C. (2019). Hierarchical reinforcement learning with advantage-based auxiliary rewards. Advances in Neural Information Processing Systems, 32.\n\n[3] Nachum, O., Gu, S. S., Lee, H., & Levine, S. (2018). Data-efficient hierarchical reinforcement learning. Advances in Neural Information Processing Systems, 31.\n\n[4] Li, S., Zhang, J., Wang, J., Yu, Y., & Zhang, C. (2021, September). 
Active Hierarchical Exploration with Stable Subgoal Representation Learning. In International Conference on Learning Representations.\n\n[5] Levy, A., Konidaris, G., Platt, R., & Saenko, K. (2018, September). Learning Multi-Level Hierarchies with Hindsight. In International Conference on Learning Representations.", " I've read all responses and have seen the revised paper; it is now a clearer and more complete paper, ready to be accepted, and the authors have addressed most of my concerns. I am now increasing my score to 5, but I still have questions to discuss with the authors for potential improvements. \n\n- `CUP does not explicitly distill source policies into a single policy network`: I understand this is not an explicit distillation, but it seems like a curriculum way of implicit distillation? Furthermore, you said `HRL methods suffer from non-stationarity issues`; this seems not well-justified. Did you mean that CUP `avoids the non-stationarity problem` by `monotonic improvement`? \n\n- `another way to decide which source policy to reuse rather than training high-level policies` is good, but can the authors provide more evidence on the advantage of CUP compared with HRL methods? Along with the above question, if additional discussions/experiments can be conducted in the Appendix it will be much more convincing and fully discussed.\n\n- Can you put the \"insights on choosing $\\beta1$ and $\\beta2$\" along with the description of task difficulty into your paper? It will help readers understand the proposed algorithm.\n\n- A small piece of advice for formatting your appendix: you can format your appendix by consolidating similar subsections into separate appendix sections, for example, `Appendix A Proofs; Appendix B Experimental Settings; Appendix C Additional Results` (for example, like the PPO paper does). The current appendix is complete, though it seems a little bit hard to read. Just a kind reminder.\n\nI am willing to further increase my score if the authors can further improve their paper into a better version.\n", " Thank you for the insightful comments. We provide clarification to your questions and concerns as below. We appreciate any further questions or comments.\n\n**Q1.** It seems that the proposed method tends to be affected by the choice of hyper-parameters. From the formulation, I find it will be a little tricky to tune the two $\\beta$s to reach a balanced imitation.\n\n**A1.** To show CUP's robustness on hyper-parameters, we provide additional results in Figure 10 in Appendix A.7. We test CUP on a wider range of hyper-parameters. Results demonstrate that CUP achieves stable performance even if $\\beta_1$ and $\\beta_2$ are more than three times as large as their default values. 
On the other hand, as stated in Section 4.3.1, CUP uses the same set of hyper-parameters for all six tasks presented in the paper, which also suggests that CUP is robust to the choice of hyper-parameters.\n\nHere are some insights on choosing $\\beta_1$ and $\\beta_2$. Note that the maximum weight for the KL regularization is $\\beta_1 \\beta_2 |\\tilde{V}_{tar}^{t}(s)|$, and the original actor loss $L_{actor}$ has roughly the same magnitude as $|\\tilde{V}_{tar}^{t}(s)|$. So $\\beta_1 \\beta_2$ roughly determines the maximum regularization weight. Following previous works on regularization [1,2], (0.1, 1) is a reasonable range for $\\beta_1 \\beta_2$. As a consequence, we choose (0.04, 1) as the range of $\\beta_1 \\beta_2$ for our hyper-parameter ablation studies. What's more, as $\\beta_2$ upper bounds the maximum confidence on the expected advantage estimation (Section 3.2), $\\beta_2$ should be decreased if a large variance in performance is observed. These two insights efficiently guide the design of $\\beta_1$ and $\\beta_2$ (a minimal sketch of this weighting scheme is given after the review thread below).\n\n**Q2.** Listing all hyper-parameters used in your experiments and providing both defaults and more guidance on the hyper-parameter settings would help relieve this concern.\n\n**A2.** CUP has two additional hyper-parameters compared to SAC, $\\beta_1$ and $\\beta_2$. We provide their default values in Appendix A.4, which are used in all six tasks. We discuss their robustness and design choices in **A1.** above. We adopt the default parameters for SAC from [3]. All hyper-parameters are listed in Table 1 in Appendix A.8.\n\n**Q3.** You mentioned that “MULTIPOLAR fails in more complex tasks”, but the algorithm works in the last two figures. So what is the order of difficulty of these six tasks?\n\n**A3.** For the problem of policy reuse, task difficulty is generally determined by two factors: the usefulness of source policies on the target task, and the difficulty of learning the target policy on states where source policies are not useful. MULTIPOLAR works on Push-Wall-V2 and Peg-Insert-Side-V2, because the Push source policy is useful on Push-Wall-V2 (implied by HAAR's good jump-start performance), and learning residuals (discussed in Section 5) on Peg-Insert-Side-V2 is easier (implied by SAC's fast learning). In Pick-Place-Wall-V2, the Pick-Place source policy is useful, but the residual is difficult to learn, so MULTIPOLAR does not work. For the remaining three tasks, the source policies are less useful, so they are more difficult.\n\n**Q4.** Why not provide the percentages and expected advantages figures for all tasks (in the Appendix)? Better to include all figures in the Appendix for completeness.\n\n**A4.** We have added all the percentages and expected advantages figures for all tasks in Appendix A.9. These results accord with our original analysis in Section 4.2, and reflect the usefulness of source policies on target tasks.\n\n**Q5.** It seems the proposed method reuses the source policies by distilling them into a single one. Can the authors provide more discussion about the advantage (e.g., stability, intuition, efficiency) of doing so compared to those HRL works that learned to choose different source policies? Why is distilling all knowledge into a single policy a better idea? \n\n**A5.** CUP does not explicitly distill source policies into a single policy network. Instead, in each iteration, CUP forms a guidance policy that is a dynamic aggregation of source policies and the current target policy by querying the current critic. 
The guidance policy is theoretically guaranteed to be a monotonic improvement over the current target policy.\n\nAs discussed in the second paragraph of Section 1, HRL methods suffer from non-stationarity issues, as they require joint training of high-level and low-level policies [4,5]. One intuition for CUP is to find another way to decide which source policy to reuse rather than training high-level policies. Using expected advantages to reuse source policies is conceptually simple, easy to implement, has theoretical guarantees, and avoids the non-stationarity problem.\n\n", " Q6. Additional experiments on other Metaworld tasks (either the full suite of 50 or select \"hard\" tasks that are less similar to the source tasks) would improve the paper in a worthwhile way.\n\nA6. We provide results on two \"harder\" tasks on which source policies are less useful (supported by results in Figure 18, which illustrates the frequency of the source policies being selected by the guidance policies on these two tasks). Figure 17 in Appendix A.13 demonstrates that the effect of policy reuse decreases as source policies become less useful on target tasks. As for the full suite of Meta-World tasks, many of the tasks are so easy that learning without policy reuse is already very efficient.\n\nQ7. The conclusion section of the paper is pretty limited. I appreciate how space is limited, but some discussion of broader limitations and possible avenues for future improvement would be nice if space can be found.\n\nA7. As suggested by Reviewers nMQw and biXP, one limitation of CUP is the assumption of the source policies and the target policy sharing the same state and action spaces. We have added discussions on CUP's limitations as well as possible future directions to Appendix A.14. We will move this discussion to the main paper in the final version of our paper.\n", " Thank you for the insightful comments. We provide clarification to your questions and concerns as below. We appreciate any further questions or comments.\n\n**Q1.** (1) Taking an argmax among policies using a partially-trained value function seems prone to bias/error magnification. (2) It does make me wonder how well performance will hold up as the difference between source and target tasks increases. Basically, can CUP be used to gain a training benefit from weak teacher policies?\n\n**A1.** (1) The reviewer raises a good point. Although CUP may over-estimate values on rarely selected actions, this over-estimation serves as a kind of exploration mechanism, encouraging the agent to explore actions suggested by the source policies and potentially improving the learning of the target policy. If the source policies give unsuitable actions, then after exploration this over-estimation is resolved and these unsuitable actions will not be selected again.\n\n(2) Figure 15 in Appendix A.11 demonstrates that even if all source policies are random and do not give useful actions, CUP still performs similarly to the original SAC and is almost unaffected by the over-estimation issue, as over-estimation is addressed after exploring these actions. We also add a Reach source policy to the three random source policies and test CUP on Push-Wall-V2, a task in which the Reach source policy is not high-performing. 
Figure 15 also demonstrates that even when there is only one less-useful source policy accompanied by three random source policies disrupting policy reuse, CUP is still able to improve learning efficiency by reusing the meaningful source policy.\n\n**Q2.** Relatedly, the bound in theorem 2 is dependent on the difference between source and target policies (as well as reward magnitude), and could be a very large bound given adversarial values. I'm willing to accept that this isn't an issue in practice (at least for Metaworld), but I'm curious to see how those factors impact empirical performance.\n\n**A2.** The bound is dependent on the difference between the current target policy and the guidance policy, and it generally will not be too large because: (1) we minimize the KL divergence between the target policy and the guidance policy during training (Eq. 11), and (2) the guidance policy is an aggregation of source policies and the current target policy (Eq. 5). The reward magnitude is closely related to the value's magnitude, so the gap would not be too large. \n\n**Q3.** In Figure 3, I'm surprised how little CUP seems to use any guide policy throughout training.\n\n**A3.** Figure 3 illustrates the percentages of the guidance policy selecting each source policy and the current target policy. Although each single source policy does not seem to be selected very often, together they are selected for about 40\% of the time. As the target policy is continuously improving and becomes more competitive, the guidance policy will gradually decrease its usage of source policies.\n\n**Q4.** Related to that, I wonder what the percentages would be if training on one of the source tasks? For example, would the push policy get used more if training on the push task? It could provide a useful indicator for whether there's more to be gained from the source policies.\n\n**A4.** Thank you for your suggestion. We conducted an experiment as suggested by the reviewer. As demonstrated in Fig. 16, while training on Push-V2, the corresponding source policy Push is selected frequently. After the target policy converges, CUP selects the target policy and the Push policy with roughly the same frequency, as they can both solve the task. \n\n**Q5.** Ideally I'd like to have some qualitative evidence for how different aspects of source versus target tasks affect performance for CUP. In the current results it looks like CUP improves over SAC less on Hammer and Peg-Insert-Side, the two \"more novel\" tasks, but without more tasks or deeper analysis it's hard to say anything conclusive.\n\n**A5.** The usefulness of source policies on the target task can be evaluated by the frequency of the source policies being selected by the guidance policy during training. As demonstrated in Figure 11, Hammer-V2 and Peg-Insert-Side-V2 are \"more novel\" tasks, as the target policies are being selected for about 80\% of the time at convergence (while for other tasks the number is about 60\%), which indicates that source policies are less useful on these two tasks. This result implies that the performance improvement brought by policy reuse is closely related to the usefulness of source policies on the target task. We also test CUP on another two \"more novel\" tasks, as discussed in **A6** below.\n\n", " Thank you for the thoughtful comments. We provide clarification to your questions and concerns as below. 
We appreciate any further questions or comments.\n\n**Q1.** I don't see a noticeable improvement in the comparison of CUP's performance with different numbers of source policies, so it cannot be concluded that CUP is able to utilize the additional source policies to further improve its performance.\n\n**A1.** We analyze the percentages of the six source policies being selected by CUP during learning on Push-Wall-V2. Results in Figure 13 demonstrate that the first three source policies are quite related to the target task and provide sufficient support for policy reuse, which explains why additional source policies do not improve performance greatly. To demonstrate CUP's ability to utilize additional source policies, we design another two sets of source policies. Set 1 consists of three source policies less related to the target task Push-Wall-V2, while Set 2 adds another three more useful source policies to Set 1. As shown in Figure 14 in Appendix A.10, CUP is able to utilize the additional source policies to improve performance.\n\n**Q2.** The conclusion merely repeats what was said in the introduction, lacking the limitations of CUP. The author should discuss the limitations of CUP.\n\n**A2.** Thank you for your insightful comments on CUP's limitations and possible future directions. We have added discussions on CUP's limitations to Appendix A.14. We will move this discussion to the main paper in the final version of our paper.\n\n**Q3.** As the number of random policies grows, the performance should not be affected even though the computational complexity will increase. But why can CUP only adapt to 3 random policies?\n\n**A3.** With further analysis we found that, with 3 random seeds for the additional random policy experiments, the variance is large in the original results and the performance drop looks significant with the number of random policies greater than 3. In the revised paper, we have run 6 random seeds and updated Figure 5(b). Results demonstrate that adding 4 and 5 random source policies leads to a slight drop in performance. This drop is because, as the number of random policies grows, more random actions are sampled, and taking the argmax over these actions' expected advantages is more likely to be affected by errors in value estimation.", " Thank you for the thoughtful comments. We provide clarification to your questions and concerns as below. We appreciate any further questions or comments.\n\n**Q1.** The proposed method theoretically relies on a well-trained critic. However, the choice of source policy might be problematic if the value estimate by the critic is not accurate enough. To get the guidance policy according to equation (7), what will happen when $\\pi$ and $\\pi_{tar}^{t}$ are very different and $\\tilde{Q}_{\\pi^t_{tar}}$ suffers from the over-estimation issue?\n\n**A1.** The reviewer raises a good point. Although CUP may over-estimate values on rarely selected actions, this over-estimation serves as a kind of exploration mechanism, encouraging the agent to explore actions suggested by the source policies and potentially improving the learning of the target policy. If the source policies give unsuitable actions, then after exploration this over-estimation is resolved and these unsuitable actions will not be selected again. To verify this hypothesis, we show additional results in Figure 15 in Appendix A.11. 
These results suggest that even if all source policies are random and do not give useful actions, CUP still performs similarly to the original SAC, and is almost unaffected by the over-estimation issue, as over-estimation is addressed after exploring these actions.\n\n**Q2.** Could the proposed method perform robustly on settings with different sets of source policies? For example, what will happen if there are some policies really unsuitable or even harmful for the target task?\n\n**A2.** To investigate CUP's ability to ignore unsuitable source policies, we design two source policy sets: the first set consists of three random policies that are all useless for the target task, and the second set adds the Reach policy to the first set. We evaluate CUP on Push-Wall-V2. As demonstrated in Figure 15 in Appendix A.11, when none of the source policies are useful, CUP performs similarly to the original SAC, and its sample efficiency is almost unaffected by the useless source policies. If only one of the four source policies is useful, CUP can still efficiently utilize the useful source policy to improve learning performance.\n\n**Q3.** How to calculate the soft estimated advantage for each source policy according to equation (4)?\n\n**A3.** In practice, to be efficient, we estimate the expectation by sampling a few actions (e.g., 3 actions) from each action probability distribution proposed by the source policies, and find it sufficient to achieve stable performance.\n\n**Q4.** Any intuitive explanation about why 4 random policies hurt the performance much in Figure 5(b)?\n\n**A4.** With further analysis we found that, with 3 random seeds for the additional random policy experiments, the variance is large in the original results and the performance drop looks significant with the number of random policies greater than 3. In the revised paper, we have run 6 random seeds and updated Figure 5(b). Results demonstrate that adding 4 and 5 random source policies leads to a slight drop in performance. This drop is because, as the number of random policies grows, more random actions are sampled, and taking the argmax over these actions' expected advantages is more likely to be affected by errors in value estimation.\n\n**Q5.** It seems that the authors just briefly mention one limitation in section 2. ...... One possible choice is to learn state and action correspondence to transfer the source policy to the target state and action space, e.g., 'Learning Cross-Domain Correspondence for Control with Dynamics Cycle-Consistency'.\n\n**A5.** Thank you for your insightful comments on CUP's limitations and possible future directions. We have added discussions on CUP's limitations to Appendix A.14. We will move this discussion to the main paper in the final version of our paper.", " This paper considers the problem of policy reuse in reinforcement learning. It assumes that there are a bunch of source policies pre-trained on related tasks. The agent is interacting with the environment to learn a target policy for the target task and hopes to make use of the available source policies. The problem is to determine when and how to use which source policy.\n\nThis paper proposes CUP to employ the critic learned on the target task to select the proper source policy in each state. To be specific, there is a set of source policies, and the agent's current target policy is also considered as one possible choice in the source policy set. 
CUP chooses the source policy with the largest one-step improvement over the current target policy. The source policies chosen at each state together form the guidance policy. It is theoretically proved that the value of the guidance policy can be higher than the value of the current target policy if the learned critic is accurate enough. Then the target policy is trained to imitate the guidance policy by minimizing their KL divergence. The weight of this KL divergence term in policy learning is adaptively changed during training, according to the estimated advantages of the guidance policy.\n\nThe authors conduct experiments on Meta-World and compare CUP with basic SAC and the recent works HAAR, PTF, MULTIPOLAR, and MAMBA. The experimental results show the advantages of CUP. The ablation study shows that CUP is relatively robust to the choice of hyper-parameter value. Adding more source policies can be beneficial if the source policy is related to the target task. In the set of source policies, adding up to 3 random policies does not hurt the performance of CUP, but adding 4 random policies is problematic. The paper is well-organized and generally written clearly. The proposed method CUP is novel and interesting, with theoretical and empirical support.\n\nPros:\n\nThe proposed method is technically reasonable and supported by theoretical grounding.\nThe evaluation is solid, and the analysis of CUP in the ablation study helps understand CUP better.\n\nCons:\nThe proposed method theoretically relies on a well-trained critic. However, the choice of source policy might be problematic if the value estimate by the critic is not accurate enough, especially when the source policy and target policy are quite different.\n\nOne critical detail is not clearly explained in the paper. How to calculate the soft estimated advantage for each source policy according to equation (4)? Getting the expectation seems not very simple given a continuous action space. Then it is hard to tell whether CUP is really much more convenient than prior works using hierarchical reinforcement learning or source policy value estimation. Could the proposed method perform robustly on settings with different sets of source policies? For example, what will happen if there are some policies really unsuitable or even harmful for the target task? Could CUP properly ignore these source policies?\n\nAny intuitive explanation about why 4 random policies hurt the performance much in Figure 5(b)?\n\nTo get the guidance policy according to equation (7), what will happen when $\\pi$ and $\\pi^t_{tar}$ are very different and $\\tilde{Q}_{\\pi^t_{tar}}$ suffers from the over-estimation issue? For example, the target policy $\\pi^t_{tar}$ may rarely select an action $a_0$ at the state $s$, so the value estimate $\\tilde{Q}_{\\pi^t_{tar}}(s,a_0)$ is much higher than the true value $Q_{\\pi^t_{tar}}(s,a_0)$. Then the source policy that often selects action $a_0$ at the state $s$ will be chosen at this state. Yet, it may not be really beneficial for target policy learning. This choice of source policy will hurt the sample efficiency of CUP. Do you observe this issue? Any comments about preventing it? It seems that the authors just briefly mention one limitation in section 2: \"We assume that the source policies and the target policy share the same state and action space\". This assumption is widely used in prior works about policy transfer, and the authors did not propose to solve this issue. 
One possible choice is to learn state and action correspondence to transfer the source policy to the target state and action space, e.g., 'Learning Cross-Domain Correspondence for Control with Dynamics Cycle-Consistency'.", " This paper proposes a novel policy reuse algorithm, Critic-gUided Policy reuse (CUP), which avoids training any extra components and efficiently reuses source policies. CUP chooses the source policy at each state that has the largest one-step improvement over the current target policy and forms a guidance policy. The target policy can be regularized to imitate the guidance policy to perform an efficient policy search. Strengths:\n\n1. This paper not only proves that the guidance policy is guaranteed to be a monotonic improvement over the current target policy but also proves that the target policy is theoretically guaranteed to improve by imitating the guidance policy.\n\n2. The experimental part of this paper is adequate: in addition to transfer performance, it analyzes the guidance policy, the sensitivity to hyper-parameter settings and the number of source policies, and the interference from random source policies. The writing of this part is also very clear.\n\nWeaknesses:\n\n1. Some descriptions are not clear and should be clarified. In the experimental part, I have questions about the analysis of some experimental results. For example, I don't see a noticeable improvement in the comparison of CUP's performance with different numbers of source policies, so it cannot be concluded that CUP is able to utilize the additional source policies to further improve its performance. \n\n2. The conclusion merely repeats what was said in the introduction, lacking the limitations of CUP. For example, CUP obeys a very strong assumption that the source policies and the target policy share the same state and action spaces, which is not common in the real world. The strong assumption limits the extension of CUP to more general scenarios [1-3]. The author should add some reflections on directions for future improvements.\n\n[1] Mutual Information Based Knowledge Transfer Under State-Action Dimension Mismatch. UAI 2020.\n\n[2] Learning Cross-Domain Correspondence for Control with Dynamics Cycle-Consistency. ICLR 2021.\n\n[3] Cross-domain Adaptive Transfer Reinforcement Learning Based on State-Action Correspondence. UAI 2022.\n\n 1. In the experimental part, I don't see a noticeable improvement in the comparison of CUP's performance with different numbers of source policies, so it cannot be concluded that CUP is able to utilize the additional source policies to further improve its performance. \n\n2. From how CUP works, it should not be affected by random policies because it chooses the largest soft expected advantage at each state $s$. As the number of random policies grows, the performance should not be affected even though the computational complexity will increase. But why can CUP only adapt to 3 random policies? The author should discuss the limitations of CUP. For example, CUP obeys a very strong assumption that the source policies and the target policy share the same state and action spaces, which is not common in the real world. For example, how can CUP be applied to different robot transfer settings, e.g., different types or different numbers of joints? The strong assumption limits the extension of CUP to more general scenarios. ", " CUP is an algorithm for re-using previously learned policies to guide the training of a new policy on a different-but-related task. 
CUP does this by selecting a single guide policy from among the library of pretrained policies at each step, which the student policy is then trained to behave similarly to using a KL divergence regularization term. Overall, I liked this paper. The algorithm is novel, clearly presented, and conceptually simple (a plus). The experiments provide reasonable evidence that CUP improves performance compared to both baseline and alternative teacher-student transfer algorithms.\n\nI do have a few concerns, though I don't think any of this invalidates the results presented:\n\n-If I understand correctly, taking an argmax among policies using a partially-trained value function seems prone to bias/error magnification. Given relatively poor Q estimates of equal magnitude for each policy's sampled actions (as can happen early in training), the guide policy selected will tend to be the one that samples actions which Q is most (unrealistically) optimistic about. Further, the estimated advantage KL term weighting makes these updates larger. \nI think the authors appreciate this, hence their value function upper bound on the weighting term and not using the KL term for the first 0.5M environment steps (as per A.4), and the result is an algorithm that works in practice (as shown by the experiments). That said, it does make me wonder how well performance will hold up as the difference between source and target tasks increases (where many actions sampled by source policies will be bad for any given state). The random-source-policy ablation sort-of tests this, but still assumes a subset of source policies are relatively high-performing. Basically, can CUP be used to gain a training benefit from weak teacher policies?\n\n-Relatedly, the bound in theorem 2 is dependent on the difference between source and target policies (as well as reward magnitude), and could be a very large bound given adversarial values. I'm willing to accept that this isn't an issue in practice (at least for Metaworld), but I'm curious to see how those factors impact empirical performance.\n\n-Connected to the above two points, while it may be something of a stereotype for reviewers to ask for more experiments, additional experiments on other Metaworld tasks (either the full suite of 50 or select \"hard\" tasks that are less similar to the source tasks) would improve the paper in a worthwhile way. Ideally I'd like to have some qualitative evidence for how different aspects of source versus target tasks affect performance for CUP. In the current results it looks like CUP improves over SAC less on Hammer and Peg-Insert-Side, the two \"more novel\" tasks, but without more tasks or deeper analysis it's hard to say anything conclusive. \n The conclusion section of the paper is pretty limited. I appreciate how space is limited, but some discussion of broader limitations and possible avenues for future improvement would be nice if space can be found.\n\nIn Figure 3, I'm surprised how little CUP seems to use any guide policy throughout training. I'm not sure offhand how to tap into it, but this seems like it might be a sign of leaving performance on the table? The least trained target policy is only getting updated with the KL term about half the time on a task where the push guide policy should be highly informative.\n\nRelated to that, I wonder what the percentages would be if training on one of the source tasks? For example, would the push policy get used more if training on the push task? 
It could provide a useful indicator for whether there's more to be gained from the source policies. I included discussion and suggestions for limitations in the previous sections, and while I'd like to see more \"hard\" test cases, the existing experiments do provide some idea of the limitations of CUP. The potential for negative social impact from this work is limited but is addressed in the appendix. ", " This paper is meant to achieve efficient policy reuse for resolving complex tasks. Specifically, they introduce Critic-gUided Policy reuse (CUP), evaluating and choosing appropriate source policies to regularize the training on the target tasks. Experiments show convincing improvements over various kinds of baselines.\n Strength: \nThe topic of reusing simple source policies for resolving complex tasks is important. The paper is clearly stated, well written, and easy to follow. The proposed method is intuitive, simple and straightforward. The experiment comparison is strong and convincing.\n\nWeakness: \n- It seems that the proposed method tends to be affected by the choice of hyper-parameters. Although the authors show in Figure 5 that “CUP performs well on a wide range of hyper-parameters”, it is not quite a large range. From the formulation, I find it will be a little tricky to tune the two $\\beta$s to reach a balanced imitation. \n- Listing all hyper-parameters used in your experiments and providing both defaults and more guidance on the hyper-parameter settings would help relieve this concern.\n\nI am willing to vote to accept this paper, but I would like to do so after the authors can relieve my concern about the hyper-parameters and completeness (also see Questions below).\n\n===\n\nAfter the first round of rebuttal, the author addressed most of my concerns, and I am increasing my score to 5.\n\n===\n\nAfter the second round of rebuttal, the author further addressed my concerns, and I am increasing my score to 6. 1. You mentioned that “MULTIPOLAR fails in more complex tasks” but the algorithm works in the last two figures. So what is the order of difficulty of these six tasks?\n\n2. Why not provide the percentages and expected advantages figures for all tasks (in the Appendix)? Better to include all figures in the Appendix for completeness.\n\n3. It seems the proposed method reuses the source policies by distilling them into a single one. Can the author provide more discussion about the advantage (e.g., stability, intuition, efficiency) of doing so compared to those HRL works that learned to choose different source policies? Why is distilling all knowledge into a single policy a better idea?\n The authors have addressed their limitations." ]
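The adaptive KL weighting discussed in this thread can be made concrete with a short sketch. This is one plausible reading of the discussion above, not the authors' released code: the exact form of Eq. (11) may differ, and the default values of `beta1` and `beta2` are illustrative.

```python
import torch

def adaptive_kl_weight(expected_advantage, target_value, beta1=0.05, beta2=0.8):
    """Adaptive weight on the KL regulariser pulling the target policy toward
    the guidance policy, following the insights above: the weight grows with
    the guidance policy's soft expected advantage, beta2 caps the confidence
    placed in that (noisy, few-sample) estimate, and the maximum possible
    weight is beta1 * beta2 * |V(s)|, comparable in magnitude to the actor loss."""
    confidence = torch.clamp(
        expected_advantage / (target_value.abs() + 1e-8), min=0.0, max=beta2
    )
    return beta1 * confidence * target_value.abs()
```

The total actor objective would then look like `actor_loss + adaptive_kl_weight(...) * kl_divergence`, which makes the roles of the two hyper-parameters in the robustness discussion explicit.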
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "TXPjdnK2YlP", "TMhaXisxxt", "9dVEW3J2d9G", "aLhzSxG0PV9", "EUDOBQDIv-f", "gfo-IuBEJzb", "ehkzH_bgZt", "8AfYOG1X6mO", "TldoHwZ1Q_F", "Yfi2-v1shYi", "Yfi2-v1shYi", "NVTR0evMDc", "ys65rbSDyUe", "aLhzSxG0PV9", "EUDOBQDIv-f", "gfo-IuBEJzb", "nips_2022_iMK2LP0AogI", "nips_2022_iMK2LP0AogI", "nips_2022_iMK2LP0AogI", "nips_2022_iMK2LP0AogI" ]
nips_2022_NJr8GBsyTF0
Non-Markovian Reward Modelling from Trajectory Labels via Interpretable Multiple Instance Learning
We generalise the problem of reward modelling (RM) for reinforcement learning (RL) to handle non-Markovian rewards. Existing work assumes that human evaluators observe each step in a trajectory independently when providing feedback on agent behaviour. In this work, we remove this assumption, extending RM to capture temporal dependencies in human assessment of trajectories. We show how RM can be approached as a multiple instance learning (MIL) problem, where trajectories are treated as bags with return labels, and steps within the trajectories are instances with unseen reward labels. We go on to develop new MIL models that are able to capture the time dependencies in labelled trajectories. We demonstrate on a range of RL tasks that our novel MIL models can reconstruct reward functions to a high level of accuracy, and can be used to train high-performing agent policies.
Accept
The reviewers have agreed on many points (at least after some help from the author's explanations and changes in the rebuttal): the problem formulation is interesting (in particular as it relates to evolving human preferences, but also in the practical experimental cases), the writing is clear and the technical solutions are interesting. While there is also a general consensus that more, larger experiments would be desirable, I note this is much more difficult to achieve in the paper's setup than most "vanilla=Markov" RL, as significant modifications are needed to any standard environment to fit this paradigm. Lunar Lander was well appreciated during the rebuttal, and I believe the paper will now have a strong impact as-is (although if the authors can find the time for another similarly sized env prior to the final version, it will be welcome).
train
[ "Zi7qVm-ykC", "IWpq9O-mJ1Z", "PbJ_wbnbyq", "Tqp0RaGVCIv", "yNg18aPob7i", "6jiubyZfqiE", "jjiZ_Gnt75_7", "MflSMAJMBSne", "aCvRYQHqBBZ", "L1Oyq79128F", "Ai5hwij5M92", "77Z6KqURxA6", "YUGrHi0Adpr", "K3dZT8Ce91d", "sOYehp1fl7N", "swivzwOOSo4" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for your comments. Please see our recently submitted final revision, which completes both sets of changes that we laid out in our General Rebuttal. ", " We have now submitted our final revision of the paper. Below we detail the overall changes between this final version and the **original** version. Please note this encapsulates and adds to the changes of our two previous revisions. In this final submission, we have achieved everything we set out in our planned changes (please see the General Rebuttal comments below). The final revised version has five additional pages of appendices compared to the original version. This is due to new experiments, discussions, and clarifications as requested by the reviewers. We also include details below on how we plan to utilise the additional content page for the camera ready version.\n\n**Overall List of Paper Changes**\n\n*New Experiments*\n* Added experiments using a more complex task - an adapted version of the Lunar Lander Open AI Gym Task where the agent must land, wait for 50 timesteps, then hover. This is more complex than our previous tasks as it uses a higher dimensional state space (8D rather than 2D), and runs for more timesteps (500 rather than 200).\n* Added reward reconstruction results for the Lunar Lander task in Table 1. We find our MIL models do indeed scale to this more complex task, and exhibit similar trends to our existing results.\n* Added results for RL agent training on the Lunar Lander task to Figure 5. This demonstrates improved RL agent performance compared to the oracle baseline, again following similar trends to our existing results.\n\n*Additional Appendices*\n* Discussed the differences between oracle and human labelling in the Appendix C.\n* Added a detailed discussion of our new experiments in Appendix E, including further suggestions of how our approach could be improved for this task.\n* Conducted a further analysis of the RL training results for the new Lunar Lander task in Appendix G.\n* Included an investigation of the learnt hidden state embeddings for the new Lunar Lander task in Appendix H.\n\n*Minor changes*\n* Reduced the use of specialised language in the abstract.\n* Stated that our results given in Table 1 come from the test set.\n* Added a note on the need for more complex environments to Limitations and Future Work (Section 5.3).\n\n**Planned Changes for Camera Ready Version**\n* Replace Figure 4 with Figure A1 (showing all the task environments in the main body of the paper rather than in the Appendix). \n* Include hidden state analysis of the new Lunar Lander task in the main body (i.e., move Figure A4 into the main body of the text).\n* Expand our interpretability analysis of the Lunar Lander environment with probe trajectory plots (as we did for the other tasks).\n* Move certain parts of our discussions from the Appendix into the main body to aid the narrative of the paper\n(for example, expanding our Limitations and Future Work section with content currently in the Appendix).\n\nFinally, we would like to thank everyone involved in reviewing this paper for reading and engaging with our work!\n", " We have uploaded a draft of our second revision of the paper according to our planned changes (please see General Rebuttal Comments below). This revision demonstrates that our approach works on more complex environments (as was requested by all three reviewers). 
Please note that this is a draft revision - we felt it was best to submit this prior to the final deadline in order to facilitate further discussion before submitting our final version of the paper. The changes are detailed below (note these are in addition to the changes made for Revision One). We also provide information as to what will change between this submission and the final submission. Pending the further minor changes for the final submission, we have achieved everything we set out to in our planned changes (see General Rebuttal Comments below).\n\n**Paper Changes**\n1. Added experiments using a more complex task - an adapted version of the Lunar Lander Open AI Gym Task where the agent must land, wait for 50 timesteps, then hover. This is more complex than our previous tasks as it uses a higher dimensional state space (8D rather than 2D), and runs for more timesteps (500 rather than 200).\n2. Added reward reconstruction results for the Lunar Lander task in Table 1. We find our MIL models do indeed scale to this more complex task, and exhibit similar trends to our existing results.\n3. Added results for RL agent training on the Lunar Lander task to Figure 5. This demonstrates improved RL agent performance compared to the oracle baseline. Please note this figure is still pending further updates as some models are still training.\n4. A detailed discussion of our new experiments in Appendix D, including further suggestions of how our approach could be improved.\n\n**Further planned changes for final submission**\n1. Update Figure 5 with complete results once the final RL training runs are complete.\n2. Make any further changes based on reviewer feedback.\n3. Update supplementary material.", " Thank you for your comments and facilitating a discussion. Please see our responses below.\n\n**Q1.** *These timer-based NMRDP, in my opinion, have no practical applications because it is trivial to add a timer to the observation space and make the task markovian (both in simulation and in real-world deployments). The key and charger examples can be similarly solved trivially by augmenting the observation with easily measurable information.*\n\n**A1.** We agree that the fact that we know the internal structure of our oracles means that it is trivial to augment the observation space with (for example) the timer value, but the point of our experiments was to treat the oracles *as if* they were black boxes and evaluate how well their structure could be recovered from data. Knowing the ground truth was essential for such an evaluation. As for the other implication of your comment - whether our timer, moving, key, and charger oracles have any relevance to our ultimate goal of modelling non-Markovian aspects of human preferences - we would argue that they do. Although these oracles are simple and heavily abstracted, the underlying concepts of gradual accumulation, thresholds, and temporal if-then dependencies (e.g., if you collect the key, then you can access the treasure) could actually serve as valuable building blocks for modelling the affective or preferential dynamics occurring within a human evaluator's mind. 
Certainly, for any interesting problem, such building blocks would be combined in complicated ways, but in presenting and evaluating the foundations of our approach we feel it was advantageous to strip the concept of a hidden state back to its basics.\n\n**Q2.** *Are there interesting examples (where simply adding a timer to the observation does not make the reward markovian) of NMRDPs which are not full-on POMDPs, besides human-assigned rewards?*\n\n**A2.** We consider this to encompass any task with sequential dependency for which we do not know *a priori* how to augment the state space to make it Markovian. This might be because the nature of the sequential dependency is somehow hidden from us, or simply too complicated to encode manually. We think the human-centric framing is most interesting though, which is why we use it in the introductory sections of our paper.", " We have uploaded our first revision of the paper according to our planned changes (please see General Rebuttal Comments below).\nThe changes are as follows:\n1. Reduced the use of specialised language in the abstract.\n2. Stated that our results given in Table 1 come from the test set.\n3. Added a note on the need for more complex environments to Limitations and Future Work (Section 5.3).\n4. Discussed the differences between oracle and human labelling in the Appendix (Appendix F).\n\nFor added clarity, we have attached the Appendix to the main body of the paper (previously it was only in the supplementary material). \nWe are continuing with our ongoing changes for the second revision (additional experiments; please see General Rebuttal Comments below).", " Thank you to the authors for their thorough response. I'm mostly satisfied with the answers given, with some reservations remaining about the POMDP vs. NMRDP distinction (which are minor and do not warrant a rejection).\n\nMy remaining comment is about A1 and A3. You talk about the << clean separation between state information \"out there\" in the environment and hidden information \"in the head\" of the human, both of which have a predictive effect on the reward. >>, which I agree makes a strong case for the NMRDP formulation, but your examples are non-markovian not because of the human aspect, but because of the partial-observability aspect; the time since the beginning of the episode is not part of the observations yet is necessary for reward computation. This includes the new proposed experiment using lunar lander. These timer-based NMRDP, in my opinion, have no practical applications because it is trivial to add a timer to the observation space and make the task markovian (both in simulation and in real-world deployments). The key and charger examples can be similarly solved trivially by augmenting the observation with easily measurable information. My question is this:\n\nAre there interesting examples (where simply adding a timer to the observation does not make the reward markovian) of NMRDPs which are not full-on POMDPs, besides human-assigned rewards?\n\nI do not think this is a deal-breaker for this paper, because the paper is focused on reward-modeling where the non-markovian rewards come from imperfect human labeling. But because the experiments do not match that claimed application, this question begs asking.", " I am writing this comment to acknowledge the response to the reviewers, and that I am awaiting the two described revisions. 
I am satisfied with the proposed changes, and if all goes according to what's described, I will change my score to an \"Accept\".", " **Q1.** *It would be better if there were comparisons with other models in the more complex RL tasks such as Montezuma's Revenge where modeling non-Markovian rewards is important.*\n\n**A1.** Please see our discussion of more complex environments in our general rebuttal. The reason we steered clear of Atari environments with visual observations was partly due to complexity, but primarily due to the fact that these are well known to be partially observable, so have non-Markovian *dynamics* (not just rewards) with respect to their observations. In our view, untangling the two modes of departure from the Markovian case (partial observability and non-Markovian task specifications) would have led to a messy and needlessly complicated paper whose core message may have been lost. We should note, though, that our models could easily be combined with a CNN that maps (stacked) image observations into environment state vector, so could be deployed on Atari games in future.\n\n**Q2.** *How important the proposed non-Markovian reward modeling is compared to other temporal information encoding models (such as Flare or SPR)?*\n\n**A2.** Although we are not intimately familiar with the particular models you mention, they both appear to be specialised methods for improving RL from partial (pixel) observations. Flare aims to learn a compact representation of the smooth evolution of a dynamical system, while SPR aims to learn a representation that is \"self-predictive\" of its own state a small number of timesteps into the future. Hence, both are concerned with modelling the short-timescale dynamics of a general partially-observed state, and not for reconstructing a reward function with potentially long-term dependencies and no guarantee of smooth evolution over time. For the latter case, we believe that our LSTM-based approach is far more suitable.\n\n**Q3.** *How many timesteps the proposed LSTM-based model can encode?*\n\n**A3.** The most we have tested on so far is 100 timesteps per episode. This number comes from our RL tasks (Section 4.1), which used a fixed episode length of 100. LSTMs have no hard constraint on the number of sequential inputs that can be processed, although long-term dependencies become increasingly challenging to maintain without corruption. In our planned revised version of the paper (please see our general rebuttal), we will add a modified LunarLander environment from Open AI Gym with upwards of 400 timesteps per episode.\n\n**Q4.** *What did you use for the LSTM hidden states 2D visualization? T-SNE?*\n\n**A4.** We did not need to use any dimensionality reduction for the visualisation. As stated at the end of Section 4.1, \"...we know a priori that it is possible to capture the temporal dependencies in at most two dimensions, therefore we constrain our models to use 2D hidden states.\" As the models used 2D hidden states, we were able to produce the plots directly from the hidden states without any sort of transformation. This was an intended part of the model design. 
However, we are aware that for more complex models/environments, hidden states with higher dimensionality may need to be used, and as such, a method like t-SNE would indeed be required to produce visualisation similar to the ones we produced in this work.", " **Q5.** *It could be also informative as an additional ablation to consider simple Markovian tasks (with Markovian reward).*\n\n**A5.** We do precisely this in our \"Toggle Switch\" toy dataset in Appendix C (C.3 for the results). We found that the baseline Instance Space NN did work on this task as expected, but was still outperformed by our CSC Instance Space LSTM for both return and reward prediction. Although this is only one example, it does suggest there is some potential for non-Markovian models to provide performance benefits even in the Markovian case, perhaps because they are able to learn a more flexible, reward-relevant representation of the environment than is available in the default (usually hand-engineered) state vector.\n\n**Q6.** *My biggest concern in this paper is the complexity of the evaluation environments. While the proposed environments and tasks work sufficiently well to demonstrate the proof of concept, they are rather simplistic (grid world)...*\n\n**A6.** Please see our comments in the general rebuttal with regards to using more complex environments and adding another experiment to a revised version of the paper. As a technical point, please note that the environments studied so far are not strictly grid worlds (which have discrete state spaces and are amenable to tabular RL algorithms) but 2D continuous environments that require function approximation to solve.\n\n**Q7.** *Could the authors elaborate why the baseline that just replaces each state with frame stacked state would not suit this problem?*\n\n**A7.** Frame stacking is a reasonable proxy for recurrence when a small number of timesteps (typically $3-5$) is sufficient to create an approximately Markovian representation. The long-term dependencies studied in this work do not satisfy this assumption, and would necessitate tens of timesteps of stacking, creating an unwieldy and inefficient representation. Furthermore, the Timer and Moving tasks fundamentally require the recovery of indexical information (i.e., the current timestep $t$) which no amount of stacking would provide. It might have been reasonable to consider frame stacking with $5$ timesteps as a low-quality baseline in our experiments, although we can be extremely confident that it would have been ineffective given the structure of the tasks.\n\n**Q8.** *The authors train Deep Q-Network, but as far as I understand the states are just 2-dimensional, how important is it to have deep architectures in this case?*\n\n**A8.** As mentioned above, the continuous nature of the state space means the agent must utilise some form of function approximation, regardless of the dimensionality. Older (pre-2013) work on RL with function approximation explored many options such as linear functions and radial basis functions, although these constrained function classes (especially linear) would likely perform poorly in the highly nonlinear and discontinuous world of \"regions\" and \"thresholds\" in our experimental environments. Since the 2013 DQN paper, neural network function approximation has come to dominate the literature, so we see our decision to follow this direction as entirely consistent with what most RL researchers would have done. 
As for the implication of a \"deep\" (vs shallow) network, we suggest that our architecture size is rather modest by modern standards, although we concede that further tuning may have found a yet smaller architecture that performs well on the tasks considered. Such tuning was not the focus of our work.\n\n**Q9.** *The authors say that the transformer architecture would be unsuitable for the temporal structure of the problem. Why would it be the case with temporally masked transformers?*\n\n**A9.** Could the reviewer please indicate a particular paper or example of a temporally masked transformer that they believe would be suitable? As a general note, we opted against transformers in this work because they lack an explicit analogue of a hidden state which is carried forward through time, which is the most parsimonious way of representing non-Markovian rewards. The hidden states recovered by our LSTM models also provide a meaningful object of study in our interpretability investigations; the propagation of dependencies within a transformer architecture is far less easy to visualise and interpret.\n\n**Q10.** *How was the dataset for RM training (from lines 210-213) split into training and test?*\n\n**A10.** As discussed in the Appendix (C.3 and D.2), we used an 80/10/10 dataset split (train/validation/test), which was controlled using fixed seeds.\n\n**Q11.** *Are the results in Table 1 on the test set?*\n\n**A11.** Yes, these are on the test set. We're happy to add that clarification to the table caption.\n\n", " **Q1.** *I am not completely convinced about the baseline ``Base Case: Embedding Space LSTM\" model where the rewards are obtained by taking the difference in partial bag labels. Suppose the task is to grasp an object. If I am given two sequences to annotate: 1) s1=not grasped, s2=not grasped, s3=not grasped, s4=grasped, s5=grasped and 2) s1=not grasped, s2=not grasped, s3=not grasped, s4=grasped, I could assign both of them label 1. Then, using the difference between partial bags would result in the reward of s5 to be 0 which is intuitively not correct. / Could the authors clarify if the issue with the difference in partial bags as I described takes place in their experiments?* \n\n**A1.** You are correct in saying s5 would be attributed a value of 0. However, this may not necessarily be incorrect; it depends on how a hypothetical human evaluator chooses (or is told) to score the model. If the agent's goal is just to *grasp the the object at some point*, then strictly rewarding only s4 is correct: is does not matter if the agent drops the ball later, so no further reward should be attributed to s5. If the goal is to *grasp the object and not let go*, for example with a score of 1 if the object is grasped and held, and 0 otherwise (never grasped or grasped and subsequently dropped), then one could envisage the reward breakdown being +1 for grasping and -1 for letting go, in which case only attributing s4 is again correct. If the goal is to *grasp the object for as long as possible*, then the scores assigned to the two scenarios should be different (e.g., 2 for the first example you give and 1 for the second). In summary, we see no issue here; the model can and should assign rewards in a way that is consistent with how the evaluator scores the trajectories.\n\n**Q2.** *Maybe it is better to have a recurrent agent architecture that trains its own latent representation. 
Could the authors clarify why it is chosen for the agent to reuse latent states from reward modelling rather than to train its own LSTM?* \n\n**A2.** Firstly, the question of whether the reward modelling process, and the learnt LSTM model, should be seen as situated inside the agent, as opposed to within some distinct software system, is one of arbitrary semantics, and you might find it helpful to reframe our entire reward modelling pipeline as an activity performed *by the agent* alongside its primary task of policy learning. A more fundamental distinction is between *offline* reward modelling, where the agent learns and fixes its reward model prior to commencing policy learning, and *online* reward modelling, where both the reward and policy models are learnt concurrently from streaming human feedback. We only investigated the former case here, but the latter is likely to be more common in practice. In section 5.3 of the paper, we identify experiments with the online case as a major area for future work.\n\n**Q3.** *For the baselines in Figure 5, I think it would be more informative to have RL with recurrent architecture + oracle rewards as the upper bound on the performance and it would avoid the situation when an agent with learnt reward outperforms the oracle.*\n\n**A3.** If we understand correctly, your suggestion is to baseline against an LSTM-based agent that is given oracle rewards but *not* oracle hidden states, as a kind of \"halfway house\" between direct oracle access and full reward modelling. From a technical perspective, this would have been an entirely reasonable thing to do, and our main response is simply that this would have added further complexity to some already-dense figures. We would also argue that such a halfway house doesn't really correspond to a realistic setup in the reward modelling context, where it is generally assumed that the oracle (or in reality, the human) cannot provide reward labels directly, but only high-level feedback such as trajectory labels. However, we will bear this suggested baseline in mind for future experiments.\n\n**Q4.** *I believe the non-Markovian rewards might be important in practice in some tasks that are inherently non-Markovian and the success can't be determined only by a given state, but a whole sequence must be considered. Some of such tasks are studied in the experiments, but the point that these are different types of tasks was not made clear.*\n\n**A4.** We provide a preliminary taxonomy of non-Markovian tasks in Appendix B; does this provide the kind of discussion you are looking for?", " Please refer to the general rebuttal for answers to your questions about using more complex environments and real human labels. Below are responses to other concerns raised: \n\n**Q1.** *My main gripe with the method and theory is the use of the NMRDP (non-markovian reward decision process), which can be viewed as a specialisation of a POMDP... / I think the method would be better presented as reward-modeling for POMDPs... / I think the idea of using LSTMs specifically to handle non-markovian reward is simply a specialisation of this existing idea of using RNNs to encode past observations in POMDPs.* \n\n**A1.** We are sympathetic to your suggestion that our use of LSTMs to handle non-Markovian rewards could be framed as a specialisation of the existing idea to use RNNs to encode past observations in POMDPs. 
While it is probably true that we could have phrased our entire paper in more general and abstract terms, our primary contribution is precisely that of naming and formalising non-Markovian reward modelling as an interesting and previously-unexplored special case, with great relevance to the problem of AI alignment and human-in-the-loop learning (see Appendix B for a discussion of potential use cases). While our initial set of model architectures provide a starting point which may be familiar to those with experience of POMDP learning, we do employ several techniques inspired by the isomorphism we identify with multiple instance learning (the particular way in which the labelling loss is computed and backpropagated during training, the use of a concatenated skip connection to promote a simple hidden state, the focus on interpretability for understanding and probing the hidden state dynamics). The use of LSTM for POMDPS is referenced in the paper in Section 2, Lines 74 and 75. If there are additional points/references that you think should be included, we welcome further suggestions from you during the discussion period.\n\n**Q2.** *The abstract uses a lot of specialized language RM, MIL,...* \n\n**A2.** Thanks for this feedback and the specific example given; we'll find a clearer way of phrasing the content of the abstract. \n\n**Q3.** *I find it awkward that the reward is a function of both $s_t$ and $h_{t+1}$. I realize that it leads to better performance in practice, but it shouldn’t be necessary and maybe drop it from the math.*\n\n**A3.** We have some sympathy with this view, and debated this notation for a while before settling on the current presentation. We felt that the inclusion of both $s_t$ and $h_{t+1}$ as arguments leads more naturally to the introduction of concatenated skip connections (which, as you say, markedly improves performance), and also helps to reinforce the idea that (unlike in a general POMDP) non-Markovian RM contains a clean separation between state information \"out there\" in the environment and hidden information \"in the head\" of the human, both of which have a predictive effect on the reward. Notational issues like this are always tricky, but on balance we still think this is the better option.\n\n**Q4.** *Line 229 mention an “interesting” case when CSC and instance-space LSTM beats the oracles; the reviewer does not think it is interesting but rather suspicious. This could be stemming from a lack of hyperparameter search or uncertainty in the results.*\n\n**A4.** As we allude to in Section 4.3, we suspect that the learnt hidden states may be easier to exploit for policy learning than the raw oracle timer states. This is reinforced by Figure 7, where we show that the learnt hidden states are nonlinear with respect to time and are sparse around an inflection point very close to $t=50$. This likely helps the agent's value network to distinguish this critical timestep in comparison to the linear encoding given in the oracle hidden states. Regarding uncertainty in the results, in Figure 5, we observe that there is no overlap between the lower quartile of the CSC Instance Space LSTM method and the upper quartile of the oracle method, suggesting that this difference is indeed significant. 
Regarding hyperparameter tuning, although this wasn't a major focus in our work we note that the values given in Appendix E are very much standard for environments of this size, and the same values are used throughout (so the runs using our models were given no inherent advantage through extra tuning).", " **Revisions** \nGoing forward, we plan to submit a first revision of the paper based on reviewer feedback (written changes only). Following on from that, we will then submit a further revision with an additional experiment (written and experimental changes, including an update to the supplementary material).\n\n**Revision One** \nWe will make the following revisions to our paper in light of reviewer feedback (ordered by appearance in the paper):\n1. Reduce specialised language in abstract.\n2. State that our results given in Table 1 come from the test set.\n3. Add the need for more complex environments to Limitations and Future Work.\n4. Discuss the differences between oracle and human labelling in the Appendix.\n\n**Revision Two** \nWe will run an additional experiment involving a more complex task. Our chosen environment is an adapted version of the LunarLander environment from Open AI Gym, in which the lander must first land, and then hover. This two-stage formulation makes the task non-Markovian. We aim for this to demonstrate that our methods scale to more complex environments with higher-dimensional state and action spaces, as well as longer episodes.\n\n**References** \n1. Littman, Michael L., et al. ``Environment-independent task specifications via GLTL.\" arXiv preprint arXiv:1704.04341 (2017).\n2. Gaon, M., and Brafman, R. ``Reinforcement learning with non-markovian rewards\". AAAI Conference on Artificial Intelligence 34 (2020).\n3. Griffith, Shane, et al. ``Policy shaping: Integrating human feedback with reinforcement learning.\" Advances in Neural Information Processing Systems 26 (2013).\n4. Hadfield-Menell, Dylan, et al. ``Inverse reward design.\" Advances in Neural Information Processing Systems 30 (2017).\n5. Reddy, Siddharth, et al. ``Learning human objectives by evaluating hypothetical behavior.\" International Conference on Machine Learning. PMLR, 2020.\n6. Lee, K., et al. ``B-Pref: Benchmarking Preference-Based Reinforcement Learning.\" Advances in Neural Information Processing Systems 34 (2021).\n", " We would like to thank all reviewers for their comments. We felt the reviews were fair as well as constructive, and form a solid basis for discussion that will help to improve both this paper and our future work. In this general comment, we summarise reviewers' positive comments about our paper, address the common concerns raised, and enumerate changes that we will make to the paper during the discussion phase. In the specific comments attached to each review, we address issues raised only by individual reviewers. We hope this separation of the common and individual issues aids clarity.\n\n**Summary of positive comments** \nAll reviewers agree that we have identified a noteworthy gap in the existing reward modelling literature, provided a theoretically sound formalisation, made a novel and valuable connection to multiple instance learning, presented a well-constructed selection of architectures and baselines, and performed a valid preliminary evaluation on bespoke benchmarks tasks. 
They have praised both the clarity of our writing and the quality of our figures, and appreciate the inclusion of a qualitative interpretability analysis alongside quantitative performance metrics.\n\n**Common concern: evaluation in more complex environments** \nAll reviewers raised concerns regarding the complexity of our experiments. These included comments that the environments used are simple and artificial, do not test our approach on larger-scale real-world tasks, and that we have not applied our methods to common, existing baselines. \n\nTo the best of our knowledge, no agreed-upon benchmarks for non-Markovian tasks exist (e.g., all standard OpenAI Gym environments are Markovian), and prior work commonly focuses on grid worlds with very small discrete state spaces [1,2]. Therefore, we believe the continuous environments that we propose in this paper should be viewed as a contribution, rather than a weakness. We specifically designed these environments to be baselines that capture various kinds of non-Markovian structure (binary vs continuous hidden states, time-dependent vs state-dependent hidden state dynamics), meaning they could provide a valuable testbed for future methods. Restricting the state space to be two-dimensional enables interpretable visualisation of the dynamics of algorithms that are targeted at non-Markovian environments.\n\nHowever, we agree that evaluation in more complex settings is important to demonstrate the wider applicability of our methods. In our first planned revision of this work (see below) we will add this extension to our Limitations and Future Work (Section 5.3). Furthermore, we are making use of the current rebuttal and discussion period to run experiments on an adapted version of the LunarLander environment from OpenAI Gym, with a custom non-Markovian reward function. We hope that this experiment strikes the right balance between satisfying the reviewers that our method does indeed scale to more complex environments (LunarLander has an 8D continuous state space and episodes will run for up to 500 timesteps), and what is practically feasible given the time available. This experiment will be added to a revised version of the paper over the coming days.\n\n**Common concern: evaluation via human experiments** \nWe agree with the reviewers' comments that running experiments in realistic human-in-the-loop reward modelling scenarios (where labels are generated by people rather than by an oracle) is a major area of future work, which we highlighted in Section 5.3 of the paper. We also note that numerous highly-cited papers in the reward modelling literature use oracle experiments as their sole evaluation method, since it enables scalable quantitative validation [3,4,5]. \n\nThere are three concrete differences between our oracle preference labelling method and realistic human labelling: 1) preference form, 2) preference sparsity, and 3) preference noise. Preference form captures the different approaches to providing feedback other than direct return labels (e.g., pairwise rankings or good/bad/neutral labels), preference sparsity occurs when generating human labels is expensive (whereas oracle labels are cheap), and preference noise arises due to uncertainty in human labels (as opposed to perfect oracle labels). \n\nWe decided to focus on noise in this work as it is an established way of making oracle experiments more realistic [6], and is aligned with our discussions of human uncertainties and cognitive biases. 
Crucially, our experiments in Section 4.4 highlight that our methods degrade gracefully in the presence of noise, which gives us some confidence that they will transfer well to human labels. We are in complete agreement that future work should consider preference sparsity and form, along with evaluations involving actual human data. For better clarity on the subject, we are happy to add a discussion to our Appendix on the differences between our oracle labelling and true human labelling.", " This paper presents reward modeling methods for non-markovian rewards, leveraging LSTMs to update a hidden state and encode the trajectory history. It performs experiments in custom environments with non-markovian rewards, with a baseline that assumes markovian rewards. Their various proposed methods perform better than the baseline. The theory behind the method is sound and the experiments are sound too. Good points: I think the connection between reward modeling with trajectory-level labeling and multi-instance learning is interesting and well explained. The experiments give good (although artificial) examples of non-markovian reward decision process. The writing is good and the paper is clear, and the reviewer had a good understanding of the presented methods after reading the paper.\n\nMy main gripe with the method and theory is the use of the NMRDP (non-markovian reward decision process), which can be viewed as a specialisation of a POMDP where only the reward depend on the hidden state. This would be a useful formalism if it led to a simpler algorithm, but the described algorithm resembles exactly what you would get if you tried to solve the reward modeling problem for POMDPs using LSTMs. I realise that the experiments only contain the special case of NMRDPs, but there is a missed opportunity in presenting a more general solution here. The argument of using this formalism to model human behaviour when assigning rewards is unconvincing to me as it is not tested in the experiment.\n\nSpeaking of this, I feel the experiments are too weak and obviously make it impossible for the baseline to perform well. They do demonstrate that the presented methods can handle non-markovian rewards in these limited settings. The problem is the applicability of this setting to more complex or better-known tasks; without resorting to completely standard benchmarks where the reward is markovian, could you make a stronger case that the methods are applicable and helpful for larger-scale, real-world tasks? And more importantly, can your method actually handle the complexity of human-labels rewards as you talk about in the introduction but don’t demonstrate in your experiments? In my opinion, addressing the second question with a set of experiment with real human labels would also address the first question.\n\nTo summarize 1) I think the method would be better presented as reward-modeling for POMDPs (where lstms have been used to encode the history in many works, see Dreamer, PlaNet, reccurent DQN) at which point the solution becomes obvious. 2) The experiments, while valid, do not convince the reviewer of the wider applicability of their methods.\n\nSmall comments: \n\nThe abstract uses a lot of specialized language RM, MIL, “provide interpretable learn hidden information” that wasn’t really clear to the reviewer until *after* reading the paper and coming back to the abstract.\n\nI find it awkward that the reward is a function of both s_t and h_{t+1}. 
I realize that it leads to better performance in practice, but it shouldn’t be necessary and maybe drop it from the math.\n\nSection 3.2, and the paper in general, completely ignores the existing body of work using LSTMs (and RNNs in general) for POMDPs, which leads to algorithms similar to yours (see for example Hausknecht, Matthew, and Peter Stone. \"Deep recurrent q-learning for partially observable mdps.\" 2015 aaai fall symposium series. 2015.) I think at least a mention should be added; I think the idea of using LSTMs specifically to handle non-markovian reward is simply a specialisation of this existing idea of using RNNs to encode past observations in POMDPs. It's not a bad thing per se, but should be mentioned.\n\nLine 229 mention an “interesting” case when CSC and instance-space LSTM beats the oracles; the reviewer does not think it is interesting but rather suspicious. This could be stemming from a lack of hyperparameter search or uncertainty in the results.\n\n\n Without resorting to completely standard benchmarks where the reward is markovian, could you make a stronger case that the methods are applicable and helpful for larger-scale, real-world tasks?\n\nAnd more importantly, can your method actually handle the complexity of human-labels rewards as you talk about in the introduction but don’t demonstrate in your experiments? It is not clear what the limitation of this work is as it has to potential to be very general; the experiments do not go far enough to find the limits of the methods. I see no obvious negative societal impact of this work.", " This paper proposes a new method to deal with non-Markovian rewards based on reformulating the problem as multiple instance learning with LSTM. The authors test a few alternative ways of doing this in 4 environments designed to demonstrate non-Markovain properties. Originality: \n\nWhile Markovian reward modelling is quite well studied, non-Markovian rewards are investigated much less. The proposed improvements to MIL with LSTM (instance space and skip connections) are not particularly original, but they effectively address the shortcomings of the prior methods. \n\nQuality: \n\nI am not completely convinced about the baseline \"Base Case: Embedding Space LSTM\" model where the rewards are obtained by taking the difference in partial bag labels. Suppose the task is to grasp an object. If I am given two sequences to annotate: 1) s1=not grasped, s2=not grasped, s3=not grasped, s4=grasped, s5=grasped and 2) s1=not grasped, s2=not grasped, s3=not grasped, s4=grasped, I could assign both of them label 1. Then, using the difference between partial bags would result in the reward of s5 to be 0 which is intuitively not correct. I expect that with predictions from LSTM we may face a similar problem. If it is the case, maybe the inferior performance of this method could be due to this.\nIn section 3.3 the authors explain that the RL agent observes hidden states along the reward from LSTM. However, I am wondering if it is suitable to use hidden state representation of the reward model for agent training. Maybe it is better to have a recurrent agent architecture that trains its own latent representation.\nFor the baselines in Figure 5, I think it would be more informative to have RL with recurrent architecture + oracle rewards as the upper bound on the performance and it would avoid the situation when an agent with learnt reward outperforms the oracle.\n\nClarity: \n\nThe paper is very well written and easy to follow. 
I found the visualizations in Figures 7 and 8 very informative. \nThe paper positions its problem as originating from an unrealistic assumption on human evaluation of temporally-extended behaviour. However, I believe the non-Markovian rewards might be important in practice in some tasks that are inherently non-Markovian and the success can't be determined only by a given state, but a whole sequence must be considered. Some of such tasks are studied in the experiments, but the point that these are different types of tasks was not made clear. It could be also informative as an additional ablation to consider simple Markovian tasks (with Markovian reward), for example, like navigating to a given target to confirm that the method of this paper works in that case too.\n\nSignificance: \n\nMy biggest concern in this paper is the complexity of the evaluation environments. While the proposed environments and tasks work sufficiently well to demonstrate the proof of concept, they are rather simplistic (grid world) and some tasks are a bit artificial. I am wondering if the proposed method would face any difficulties when dealing with a more challenging environment, for example, some control problems in simulation, or something that has image-based states. I think that in order to have a larger impact in the community, this paper should include results on more complex environments.\n - Could the authors clarify if the issue with the difference in partial bags as I described takes place in their experiments?\n- Could the authors clarify why it is chosen for the agent to reuse latent states from reward modelling rather than to train its own LSTM?\n- Could the authors elaborate why the baseline that just replaces each state with frame stacked state (several states concatenated, often used as a simplification for recurrence) would not suit this problem?\n- The authors train Deep Q-Network, but as far as I understand the states are just 2-dimensional, how important is it to have deep architectures in this case?\n- The authors say that the transformer architecture would be unsuitable for the temporal structure of the problem. Why would it be the case with temporally masked transformers?\n- How was the dataset for RM training (from lines 210-213) split into training and test? Are the results in Table 1 on the test set?\n Some limitations are discussed at the end of the paper. In the review I also pointed to/asked for clarification regarding some potential limitations. No potential societal impact is mentioned.", " This paper defines non-Markovian reward modeling and proposes a novel LSTM-based model to capture temporal dependencies in reward modeling. It also adopts a multiple instance learning (MIL) framework to handle temporal dependencies with reward labels. It encodes hidden state information using LSTM and also produces explicit reward predictions with skip-connection to the current state and action features.\nIt optimizes LSTM-based models on offline trajectory datasets. Quantitative experimental results on return and reward prediction show the model's compelling reward function modeling quality. The authors also present qualitative results on the interpretability of the learned hidden embeddings.\n Strengths:\n\nThe paper is well written and easy to read and understand. Figure 2 clearly demonstrates the proposed method and the baseline models. The experiments are comprehensive and the visualization result is quite inspiring. 
They also propose novel RL tasks that can show the effectiveness of their model. For significance, learning the non-Markovian reward function is the cornerstone of a successful real-world application using RL, especially with human interactions.\n\nWeaknesses:\n\nI appreciate the idea and results of the conceptual toy RL tasks experiments, however, it would be better if there were comparisons with other models in the more complex RL tasks such as Montezuma's Revenge where modeling non-Markovian rewards is important.\n 1. How important the proposed non-Markovian reward modeling is compared to other temporal information encoding models (such as Flare [1] or SPR [2])? \n\n2. How many timesteps the proposed LSTM-based model can encode?\n\n3. What did you use for the LSTM hidden states 2D visualization? T-SNE? And do you have any other visualization results on other common RL tasks with discrete action space such as Atari games?\n\n\n- [1] Reinforcement Learning with Latent Flow, Shang et. al., NeurIPS 2021\n- [2] Data-Efficient Reinforcement Learning with Self-Predictive Representations, Schwarzer et. al., ICLR 2021 Yes, the authors adequately addressed the limitations and potential negative social impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "jjiZ_Gnt75_7", "nips_2022_NJr8GBsyTF0", "nips_2022_NJr8GBsyTF0", "6jiubyZfqiE", "nips_2022_NJr8GBsyTF0", "Ai5hwij5M92", "77Z6KqURxA6", "swivzwOOSo4", "L1Oyq79128F", "sOYehp1fl7N", "K3dZT8Ce91d", "YUGrHi0Adpr", "nips_2022_NJr8GBsyTF0", "nips_2022_NJr8GBsyTF0", "nips_2022_NJr8GBsyTF0", "nips_2022_NJr8GBsyTF0" ]
nips_2022_S0TR0W63NKl
Generalization Bounds for Estimating Causal Effects of Continuous Treatments
We focus on estimating causal effects of continuous treatments (e.g., dosage in medicine), also known as the dose-response function. Existing methods in causal inference for continuous treatments using neural networks are effective and to some extent reduce selection bias, which is introduced by non-randomized treatments among individuals and might lead to covariate imbalance and thus unreliable inference. To theoretically support the alleviation of selection bias in the setting of continuous treatments, we exploit the re-weighting schema and the Integral Probability Metric (IPM) distance to derive an upper bound on the counterfactual loss of estimating the average dose-response function (ADRF), and herein the IPM distance builds a bridge from a source (factual) domain to an infinite number of target (counterfactual) domains. We provide a discretized approximation of the IPM distance with a theoretical guarantee in the practical implementation. Based on the theoretical analyses, we also propose a novel algorithm, called Average Dose-response estiMatIon via re-weighTing schema (ADMIT). ADMIT simultaneously learns a re-weighting network, which aims to alleviate the selection bias, and an inference network, which makes factual and counterfactual estimations. In addition, the effectiveness of ADMIT is empirically demonstrated in both synthetic and semi-synthetic experiments by outperforming the existing benchmarks.
Accept
The authors propose theory and an algorithm for estimating average dose-response functions (ADRF) from observational data under assumptions of unconfoundedness and overlap. The approach extends theory and methodology primarily from the work in [13], where neural networks and integral probability metrics are used to learn outcome regressions and re-weighting functions to minimise a bound on the expected loss. The approach was evaluated on semisynthetic datasets and compared favourably to baselines. Reviewers found the setting novel and interesting but were concerned that the analysis was very close to previous works, requiring only a small modification to allow for continuous (rather than binary) treatments. The empirical evaluation was also rather limited, restricted to comparing mean squared errors on benchmark datasets. One of the reviewers asked why we should expect the method to perform so well when the learning objective represents a fairly loose bound on the expected error. The empirical results offer little to answer this question. The authors' rebuttal suggests that this is due to the re-weighting function, but there is no empirical or theoretical evidence that this is the deciding factor. For example, how does the ADMIT model perform without re-weighting? In Figure 3, the authors claim to show that baselines perform worse when selection bias increases, but this trend is noisy at best. If anything, I would argue that it suggests that ADMIT does better no matter the selection bias, which begs the question: where is the advantage coming from? Overall, reviewers thought the paper appears sound and offered a few clarifying comments and questions, which were mostly answered by the authors. The technical novelty is rather low, but appropriately applied. A revised version of the manuscript should address the presentation issues raised by reviewers as well as the attribution question asked above.
train
[ "aorA3_fWDdq", "WqZdKQ8XfJ0", "3cr2E7kBRd8", "SpelEwEVbvI", "pZUrD31s2Qn", "YFyqweQuN6Q", "FjmX8MpRG4", "r80uf5A-GUx", "BzPmM7M-B_9", "J9XIgKd3t3e", "1KZWUEoak6w", "diL2WbwHVVw", "ypUAHmAHziN", "_Hb8Sp_0g1A" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for clarifying my concerns. I will be maintaining my score.", " I thank the authors for the detailed answers. I re-read the paper and, while the author responses make the context and technical contribution more clear, I believe it should not take a 2 page response to get the main points across. All of these should be included in the paper in some form. I want to be clear: I think that the work is technically strong and has a good amount of impact, but the exposition obscures many of these strengths. Because of this, I maintain my original assessment of the work and I will be keeping my score unchanged. ", " Since the stage of the Reviewer-Author discussion period is closing soon, we hope the reviewer can re-evaluate our paper based on our updated paper and detailed response, especially the clarification of the computational complexity.", " Thank you for your review. We appreciate your acknowledgement of our work!", " Thank you to the authors for the careful response! The answers address my concerns. I read the paper again and still think positively about this paper. I keep my evaluation score unchanged. ", " **Comment1:** There are a few improvements that could be made to the empirical results section of the paper. Increasing the thoroughness of the empirical section is the only major weakness in my mind. For one, it would be useful to provide another view of the results by e.g. providing the actual dose response curve (i.e. response vs dosage for the estimation and the ground truth). This would help the reader get a better idea as to which regions of the treatment may be better (in terms of estimation) vs not.\n\n**Response 1:** Many thanks for your constructive suggestion. We compare the estimated dose-response curves of ADMIT and VCNet with the truth in a new subsection (C.3) added in the revised appendix. To observe the effect of our model visually, the estimated dose-response curves of ADMIT and VCNet and the truth are plotted in Figure 1 in the revised appendix. Across different datasets, when the true ADRF is simpler, both ADMIT and VCNet fit better. Moreover, ADMIT always be able to fit the ADRF better than VCNet, especially when the true ADRF is relatively complex (see Figure 1 in the appendix).\n\n**Comment2:** Secondly, it would be interesting to see how the method performs in the setting where some of the confounders were unobserved (or simulated to be unobserved, i.e. you conceal them). Although an initial assumption is one of unconfoundedness, characterizing the failure modes of the method is generally useful. At a high level, the authors need to do a better job at discussing the limitations and failure modes of their approach.\n\n**Response 2:** Many thanks for your kind reminder. The ADMIT will fail in the setting where some of the confounders are unobserved since the conditional causal effect is unidentifiable, i.e., $\\mathbb E[Y^t|x]\\neq\\mathbb E[Y|x,t]$. It is an interesting research topic to extend the theoretical results of ADMIT to scenarios where the unconfoundedness assumption is not valid, e.g., introducing the instrumental variables.\n\n**Question 1:** It would be nice for the authors to give more intuition for how strong assumption 3 is? Seeing as this is a core assumption for the approximation of the IPM term, the authors need to provide a little bit more discussion surrounding it.\n\n**Answer 1:** Thanks for your careful review. 
It is easy to find a constant $\alpha$ that satisfies Assumption 3 when the output of the hypothesis $f_t$ is finite, e.g., survival years after taking a certain medicine, which is reasonable in applications. The main concern is that a large $\alpha$ may lead to a loose bound, but it can be argued that this is not a common scenario. On the one hand, the RHS of Equation (17) represents the bound of the worst case. Without loss of generality, assume $s_1\le s_2\le \cdots\le s_n$, and let $IPM(p_{s_i},p_{s_{i+1}})=\alpha_i|s_{i+1}-s_i|$. It is not difficult to prove that $\alpha$ defined in Assumption 3 satisfies $\alpha=\max_{s\in[0,1]}\lim_{\delta\to0}\frac{IPM(p_s,p_{s+\delta})}{\delta}\ge \max_{i\in\{1,2,\cdots,n-1\}}(\alpha_i)$ according to the triangle inequality for the Integral Probability Metric. However, during the proof of Lemmata 2 and 3, all $\alpha_i$ are enlarged to $\alpha$, e.g., in the inequality $IPM_\mathcal{G}(p_{s_i}, p_t^w)+IPM_\mathcal{G}(p_{s_i}, p_s)\le IPM_\mathcal{G}(p_{s_i}, p_t^w) + O_p(\frac \alpha {\sqrt[3] n})$ in line 106 of the appendix. This is necessary for the proof, but it also shows that the RHS of Equation (17) represents the bound of the worst case. On the other hand, the bound will be loose only when $\alpha$ is large and $\forall i, \alpha_i=\alpha$, i.e., when the worst case happens. Intuitively, it is not common for each $\alpha_i$ to be large. For instance, the dose of a particular medicine may depend on the age of the patient, and $\forall i, \alpha_i=\alpha$ would mean that the age distributions of groups taking similar doses vary considerably, which is unreasonable. \n\n**Comment3:**\n\nMinor typos/spelling errors: \n\nLine 127: $\hat u{(t)}$ should be $\hat \mu{(t)}$\n\nLine 173: What’s worse is that \n\nLine 16/Line 119: A transition word other than “besides” may be more appropriate\n\n**Response 3:** Thanks for your careful review. We have thoroughly checked and corrected the grammatical errors and typos we found in the revised manuscript. ", " **Question 5:** What were the computational bottlenecks of ADMIT? How does the GPU usage compare across methods?\n\n**Answer 5:** We apologize for the vague statements in the paper that led to the misunderstanding that ADMIT required more than 3000 GPUs. In fact, ADMIT requires only one Nvidia **RTX3090** GPU. We list the times when ADMIT, VCNet, and DRNet run an epoch, and we can see that ADMIT does not have significant computational bottlenecks.\n\n | model | ADMIT | VCNet | DRNet |\n | ------------------- | ----- | ----- | ----- |\n | **Time** (seconds) | 0.422 | 0.174 | 0.229 |\n\n**Question 6:** Given the minor DGP modifications, have you tuned the parameters of the baselines (e.g. VCNet) to the new DGP?\n\n**Answer 6:** Yes, parameters have been tuned for the baselines. The slight difference in the performance of VCNet (0.15 vs 0.19) you mentioned should come from two sources. On the one hand, as you mentioned, there is some difference in the data generation process (DGP). On the other hand, the [News](https://www.fredjo.com/files/NEWS_csv.zip) dataset contains 50 realisations of a stochastic outcome model, and each realisation consists of 5000 randomly sampled news items. We get the results by randomly selecting a certain realisation and repeating it several times. 
This may be different from the VCNet setup, which is not mentioned in the paper or [code](https://github.com/lushleaf/varying-coefficient-net-with-functional-tr) of VCNet.\n\n**Comment1:** There is not enough contextualization with previous work. For example, [1], [2], [3] are only briefly mentioned and their connection with the current work is not made obvious. The entire *Related Work* section lacks clarity.\n\n**Response 1:** Thanks for your kind reminder. The related studies have been introduced in detail in a new subsection (A.3), Expanded Related Work, in the revised appendix. In addition, we also explain the connection between our work and these efforts.\n\n**Comment2:** The introduction is also confusing, especially for those unfamiliar with the advances in this specific area of causal inference. The different paragraphs don't seem connected and it's unclear what the state of the art that this paper is improving upon actually is.\n\n**Response 2:** Unlike binary treatments, causal inference with continuous treatments is largely understudied. We introduce three state-of-the-art works, VCNet, DRNet and SCIGAN, in causal inference with continuous treatments in the introduction. However, these works could not mitigate selection bias well in the continuous setting. This paper achieves the state of the art by deriving an ADRF error upper bound, which provides theoretical guarantees to mitigate selection bias among a theoretically infinite number of subgroups.\n\n**Comment3:** Section 4.4 and the Algorithm Box are too underdeveloped. It is unclear what the different quantities (ϕ,h,w) are until you read the text and the text doesn't have enough explanations about how to execute the different steps of the algorithm (e.g. how to compute the IPM gradient or how to choose δ appropriately). The appendix is very sparse and doesn't contain additional information that could answer this question.\n\n**Response 3:** Thanks for your careful review. We explain the calculation process of the IPM gradient in **Answer 3**. \n\n**Comment4:** Theorem 1 also holds for $σ_{min}=0$, for example when $Y^t=f_t(x)$, i.e. the counterfactual outcome under treatment t depends solely on observed features.\n\n**Response 4:** Thanks for your kind reminder. We have made a correction in the revised manuscript.\n\n[1] Corinna Cortes and Mehryar Mohri. Domain adaptation and sample bias correction theory and algorithm for regression. Theoretical Computer Science, 519:103–126, 2014.\n\n[2] Corinna Cortes, Yishay Mansour, and Mehryar Mohri. Learning bounds for importance weighting. Advances in Neural Information Processing Systems, 23, 2010.\n\n[3] Bharath K Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, and Gert RG Lanckriet. On integral probability metrics, φ-divergences and binary classification. arXiv preprint arXiv:0901.2698, 2009. Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. The Journal of Machine Learning Research, 13(1):723–773, 2012.", " **It should be clarified that our experiments were run on a machine with one Nvidia RTX3090 GPU, not on a machine with more than 3000 GPUs.** We apologize for the vague statements in the appendix that misled your understanding.\n\n**Question 1:** Lemma 1 seems somewhat restrictive given that the loss function has to be in the family of functions that defines the IPM. What are the implications for the squared loss?\n\n**Answer 1:** There are two reasons for applying the squared loss. 
Firstly, the squared loss is commonly used in regression, which coincides with the fact that in causal inference estimating the potential outcome is a regression problem since $y$ is continuous. Secondly, when using the squared loss $L(y,y')=(y-y')^2$ and letting $l_{f_t}( {x}):=\mathbb{E}_{Y^t|{X}}[L(Y^t,f_t({X}))| {X}= {x}]$ belong to the $\mathcal G$ in Lemma 1, the IPM discrepancy is a distance, which implies that minimizing the discrepancy to zero guarantees balanced covariate distributions. Gretton et al. [1] prove that the discrepancy is a distance, i.e., if $IPM_\mathcal G(p,q)=0$, then $p=q$, when $L$ is the squared loss, and $f_t\in \mathcal H$, a subset of the reproducing kernel Hilbert space (RKHS).\n\n**Question 2:** EMSE is a population quantity. What are the finite sample guarantees (if any)?\n\n**Answer 2:** Thanks for your kind reminder. We refer to a lemma from [2] to give the finite sample guarantee of Theorem 3.\n\nSuppose we have $n$ i.i.d. samples of units with an empirical measure $\hat p$, and the $i$th unit received a treatment $s_i$. Let $n_s$ denote the number of units belonging to $[s, s+\delta]$, and $I\hat{P}M_{\Delta max}=\max_{i\in\{1, \cdots, n\}}(IPM_\mathcal{G}(\hat p_{\Delta s_i}, \hat p_{\Delta t}^w))$. We assume Assumption 3 holds for a constant $\alpha$. Then, for a neighborhood size $0<\delta<1$ we have, \n$$\n\epsilon(f_t)\leq \epsilon_w(f_t|T=t)+I\hat{P}M_{\Delta max}+\sqrt{18v^2\log\frac{4}{\xi}}D_{n_s}+O_p(\frac \alpha {\sqrt[3] n})+\alpha\delta,\n$$\nwhere $D_{n_s}=\max_{i\in\{1, \cdots, n\}}\{(\frac {1}{\sqrt{n_{s_i}}}+ \frac {1}{\sqrt{n_t}})\}$.\n\n**Question 3:** How do you actually calculate the IPM gradients and how does it contribute to the computational complexity?\n\n**Answer 3:** From a practical point of view, the IPM gradients are calculated by using PyTorch's automatic differentiation engine that powers neural network training. \n\nFrom a theoretical point of view, the IPM gradients are calculated as follows. Consider a neural network $G_{\theta_w,\theta_\phi}$, where $\theta_w$ and $\theta_\phi$ represent the parameters of the re-weighting and representation networks, respectively. Let $U=(u_1, u_2,\cdots, u_m)$ denote the inputs drawn from $p_{\Delta l}^w$ in line 6 of Algorithm 1, let $V=(v_1, v_2,\cdots, v_n)$ denote the inputs drawn from $p_{\Delta k}$, and let $Z_\theta=(z_1,z_2,\cdots,z_m)$ with $z_i=G_\theta(u_i)$ and $\theta=(\theta_w, \theta_\phi)$. As explained in the paper, IPM becomes the Maximum Mean Discrepancy (MMD) metric when we choose a family of norm-1 reproducing kernel Hilbert space (RKHS) functions. Given a differentiable kernel $k$, we minimize $IPM(p_{\Delta l}^w,p_{\Delta k})=C(Z_\theta, V)$ as a function of $\theta$, where\n$$\nC(Z_\theta, V)=\frac{1}{m^2}\sum_{i=1}^m\sum_{j=1}^m k({z_i},{z_j})-\frac{2}{mn}\sum_{i=1}^m\sum_{j=1}^n k({z_i},{v_j})+\frac{1}{n^2}\sum_{i=1}^n\sum_{j=1}^n k({v_i},{v_j}).\n$$\nThen, the IPM gradients can be calculated with the chain rule as follows:\n$$\n\nabla_\theta C(Z_\theta, V)=\sum_{i=1}^m\frac{\partial C(Z_\theta, V)}{\partial z_i}\frac{\partial G_\theta(u_i)}{\partial \theta}.\n$$\nThe additional complexity introduced by the $IPM$ term is about $O(\eta n^2 d)$, where $\eta=\lceil 1/\delta\rceil$, $n$ denotes the sample size, and $d$ denotes the dimension of covariates $x$. 
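To make Answer 3 concrete, below is a minimal PyTorch sketch of how such an MMD estimate and its gradient can be obtained via automatic differentiation. The RBF kernel, the bandwidth, the toy two-layer network standing in for $G_\theta$, and all tensor shapes are illustrative assumptions rather than the actual ADMIT configuration (in particular, the learned sample weights entering $p_{\Delta l}^w$ are omitted); the sketch only shows how a differentiable $C(Z_\theta, V)$ is formed and backpropagated.

```python
import torch

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel matrix: k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2)).
    sq_dists = torch.cdist(a, b) ** 2
    return torch.exp(-sq_dists / (2.0 * sigma ** 2))

def mmd_squared(z, v, sigma=1.0):
    # Biased estimator of MMD^2 from samples z (m x d) and v (n x d), matching
    # C(Z, V) above: (1/m^2) sum k(z_i, z_j) - (2/(mn)) sum k(z_i, v_j)
    #               + (1/n^2) sum k(v_i, v_j).
    m, n = z.shape[0], v.shape[0]
    return (rbf_kernel(z, z, sigma).sum() / m ** 2
            - 2.0 * rbf_kernel(z, v, sigma).sum() / (m * n)
            + rbf_kernel(v, v, sigma).sum() / n ** 2)

# Toy stand-in for the joint re-weighting/representation map G_theta; the
# architecture and the dimensions (10-dim covariates, 8-dim representations,
# 64/48 units per treatment bin) are illustrative only.
G = torch.nn.Sequential(torch.nn.Linear(10, 16), torch.nn.ReLU(),
                        torch.nn.Linear(16, 8))

u = torch.randn(64, 10)      # covariates of units in one treatment bin
x_ref = torch.randn(48, 10)  # covariates of units in a reference bin

loss = mmd_squared(G(u), G(x_ref))  # differentiable IPM estimate between bins
loss.backward()  # autograd applies the chain rule w.r.t. theta, as above
```

Since an estimate like this is recomputed for each pair of treatment neighborhoods, its $O(n^2)$ kernel evaluations are the source of the $O(\eta n^2 d)$ complexity stated above.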
Moreover, the running-time comparison between our model and the compared models under the same settings (using one Nvidia **RTX3090** GPU) is given in **Answer 5**. It can be seen that using the $IPM$ term increases the complexity, but within acceptable limits.\n\n**Question 4:** How do the sample and computational complexities depend on the choice of $\\delta$?\n\n**Answer 4:** The computational complexity is about $O(\\eta n^2 d)$, where $\\eta=\\lceil 1/\\delta\\rceil$, $n$ denotes the sample size, and $d$ denotes the dimension of covariates $x$. Specifically, we list the times when ADMIT runs an epoch (a total of 200 epochs are trained in our experiment) on the synthetic dataset with $\\delta\\in\\{0.05, 0.1, 0.2, 0.25, 0.5\\}$. The results show that the time spent is acceptable.\n\n | $\\delta$ | 0.5 | 0.25 | 0.2 | 0.1 | 0.05 |\n | ------------------ | ----- | ----- | ----- | ----- | ----- |\n | **Time** (seconds) | 0.196 | 0.286 | 0.422 | 0.891 | 2.498 |", " **Comment1:** Lack of innovation: the proposed framework is mostly based on previous works such as GPS, CFR, and DRNet. Limited contribution: although the author provides the generalization bounds for estimating causal effects of continuous treatments, the theoretical proofs are an extension of CFR [1].\n\n**Response 1:** On the basis of previous studies, we extend the generalization bounds for estimating causal effects of binary treatments to a generalization bound for continuous treatments. However, we claim that this extension is nontrivial and meaningful. Firstly, estimating the causal effect of continuous treatments is an important research area because continuous treatments arise in many fields, including economics and medicine. Secondly, causal inference with continuous treatments is largely understudied and far more challenging than with binary treatments: continuous treatments induce uncountably many potential outcomes per unit, which leads to a more complex selection bias problem. Finally, the extension is not straightforward. Due to the potentially infinite number of covariate distributions, it is hard to mitigate the selection bias problem in the continuous setting via the theory and techniques developed for binary treatments. We introduce Assumption 3 to constrain differences in the distributions of subpopulations receiving different treatments. Based on Assumption 3, we provide the approximation of the IPM term and the related theoretical support to make it operational.\n\nOn the other hand, whether a study is an extension of previous works may not be a good criterion for evaluating it. For example, CFR [1] is a remarkable work on causal inference. However, CFR was also built on the basis of literature [2, 3] in domain adaptation.\n\n**Question 1:** This work provides a discretized approximation of the IPM term. Since there are limited samples while the number of domains is infinite, to overcome this challenge, they bound the difference between the IPM and its discretization under Assumption 3, namely that the probability distributions of subpopulations that received different treatments shift smoothly. 
However, how could this model handle a substantial shift and give a continuous and complete dose-response curve?\n\n**Answer 1:** Intuitively, a small $\\alpha$ in Assumption 3 indicates a smooth covariates shift, i.e., a slight selection bias. At first, when the shift among different subpopulations is smooth (a slight selection bias with $\\alpha=2$), Table 1 indicates that ADMIT outperforms the baselines. In addition, Figure 3 also shows that ADMIT has consistent performance and outperforms the baselines when the selection bias gradually becomes more intense (increase $\\alpha$ from 1 to 8), which indicates that ADMIT can handle the case whose shift is slightly sharp. When the shift is too substantial, ADMIT and other baselines may not be able to infer well. Note that a substantial shift does not seem to be common in real-world applications. For instance, the dose of a particular medicine may depend on the age of the patient, and then a substantial shift means that the age distributions of groups taking similar doses vary considerably, which is unreasonable. Besides, our extensive experiments show that ADMIT could give a continuous and complete dose-response curve. For more details, please refer to Section C.3 in the appendix.\n\n[1] Shalit, Uri, Fredrik D. Johansson, and David Sontag. \"Estimating individual treatment effect: generalization bounds and algorithms.\" International Conference on Machine Learning. PMLR, 2017.\n\n[2] Corinna Cortes and Mehryar Mohri. Domain adaptation and sample bias correction theory and algorithm for regression. Theoretical Computer Science, 519:103–126, 2014.\n\n[3] Mansour, Yishay, Mohri, Mehryar, and Rostamizadeh, Afshin. Domain adaptation: Learning bounds and algorithms. 2009.", " **Comments 1:** \n1. $\\hat u$ is a typo.\n2. \"the sume of the re-weighted factual loss\" -- should it be average? **Yes**\n3. think the MMD is defined for both continuous and discrete domains, while the authors seems to argue that it cannot be obtained for continuous domains — am I understanding correctly? **Yes**\n\n**Response 1:** Thanks for your careful review. We have thoroughly checked and corrected the grammatical errors and typos we found in the revised manuscript. \n\nFor your concern in 3: The understanding is correct. Theoretically, $IPM(t_1,t_2)$ is well defined $\\forall t_1,t_2\\in(0, 1)$. A certain number of samples are needed to estimate the MMD, as shown in Equation (14). When the treatment is discrete and finite, all the choices of the treatment can be observed in the samples and thus one can estimate the MMD from these samples. However, when the treatment is continuous, the samples that received some treatment $t$ may be unavailable since only a finite number of samples are observed while the choice of $t$ is infinite. We provide a detailed explanation in the revised manuscript.\n\n**Question 1:** The bound in Equation (18) is obtained by chaining a sequence of bounds. It seems to me the bounds are loose, for example, Equation (17). So I am curious why it can still outperform the other methods by large gap in the experiments section. And is it because of the use of reweighting schema (GPS) + neural network (a more flexible model), or is it because of using the generalization bounds? It is important to understand which part contributes in the improvement, and to give more explanations on the tightness of the bounds.\n\n**Answer 1:** Thanks for your helpful comments. 
\n\nFor your concern about Equation (18): The length of the sequence of $IPM_{\\Delta \\max}$ in the algorithm is $L=\\lceil\\frac1\\delta\\rceil$. The impact of chaining a sequence of bounds is relatively small when $L$ is small ($L=5$ in our experiment).\n\nFor your concern about Equation (17): It should be noted that the RHS of Equation (17) represents a worst-case bound. Without loss of generality, assume $s_1\\le s_2\\le \\cdots\\le s_n$, and let $IPM(p_{s_i},p_{s_{i+1}})=\\alpha_i|s_{i+1}-s_i|$. It is not difficult to prove that the $\\alpha$ defined in Assumption 3 satisfies $\\alpha=\\max_{s\\in(0,1)}\\lim_{\\delta\\to0}\\frac{IPM(p_s,p_{s+\\delta})}{\\delta}\\ge \\max_{i\\in\\{1,2,\\cdots,n-1\\}}\\alpha_i$ according to the triangle inequality for the Integral Probability Metric. However, during the proof of Lemmata 2 and 3, all $\\alpha_i$ are enlarged to $\\alpha$, e.g., in the inequality $IPM_\\mathcal{G}(p_{s_i}, p_t^w)+IPM_\\mathcal{G}(p_{s_i}, p_s)\\le IPM_\\mathcal{G}(p_{s_i}, p_t^w) + O_p\\left(\\frac{\\alpha}{\\sqrt[3]{n}}\\right)$ in line 106 of the appendix. This is necessary for the proof, but it also shows that the RHS of Equation (17) is a worst-case bound. The bound will be loose when $\\alpha$ is large and $\\forall i,\\ \\alpha_i=\\alpha$, i.e., when the worst case happens. Intuitively, it is not common for every $\\alpha_i$ to be large. For instance, the dose of a particular medicine may depend on the age of the patient, and $\\forall i,\\ \\alpha_i=\\alpha$ would mean that the age distributions of groups taking similar doses vary considerably, which is unreasonable. \n\nThe performance improvement of ADMIT can be attributed to the re-weighting scheme and the derived generalization bound, which guides a neural network to learn better sampling weights than statistical methods. In our experiments, VCNet equipped with the weights learned by EBCT achieves better performance than the original version of VCNet on two datasets, which demonstrates the effectiveness of a re-weighting scheme that mitigates the selection bias problem. EBCT estimates sampling weights by solving a globally convex constrained optimization problem but does not provide generalization bounds as our work does. Intuitively, the covariates among subpopulations that received different treatments can be balanced by minimizing $IPM_{\\Delta \\max}$. In other words, the derived bound can guide the learning of superior sampling weights that mitigate the selection bias problem, as indicated in the experimental results of Table 1.\n\n**Question 2:** Are there any literature in causal inference deriving generalization bounds? Not necessarily of a similar form to the one in this work.\n\n**Answer 2:** Many thanks for your kind reminder. There is literature that derives generalization bounds for estimating the causal effects of binary treatments, such as [11, 13] cited in our paper. In addition, we also introduce in detail the related studies on the theoretical development of causal inference for binary treatments in a new subsection (A.3) in the revised appendix, and we explain the connection between our work and these efforts. On the basis of these studies, to the best of our knowledge, this is the first study that provides a generalization bound for estimating the causal effects of continuous treatments.", " This work derives a new generalization bound for estimating causal effects of continuous treatments. 
The new bound is based on the idea of generalized propensity score reweighting, and is obtained via a sequence of inequalities, translating the original marginal treatment effect which is hard to estimate to a new quantity that is easier to estimate. The generalization bound has been used as the objective function for training a deep neural network, whose architecture has been adapted from existing works. Experimental results show state-of-the-art performance of the proposed method (objective function). Originality: \nThe proposed bound in this work seems novel to me. One question about related literature on generalization bounds is raised in the next section.\n\nQuality and clarify:\nThis work is overall of high quality and is very clear in terms of presentation. The only concern I have is about the soundness of the bounds, and is listed in the next section.\n\nSignificance:\nJudging from the experimental results, the improvements seem significant. However, it is not entirely clear to me why the method can perform so well. The bound seems loose and it is interesting to understand why they can still perform well. A better understanding is needed in order for the method to be more trustworthy.\n\nSome minor remarks:\n1. in line 127, \\hat u is a typo.\n2. in line 198, \"the sume of the re-weighted factual loss\" -- should it be average?\n3. more details are needed for line 201 to line 207; think the MMD is defined for both continuous and discrete domains, while the authors seems to argue that it cannot be obtained for continuous domains — am I understanding correctly? About soundness and contribution (primary):\nThe bound in Equation (18) is obtained by chaining a sequence of bounds. It seems to me the bounds are loose, for example, Equation (17). So I am curious why it can still outperform the other methods by large gap in the experiments section. And is it because of the use of reweighting schema (GPS) + neural network (a more flexible model), or is it because of using the generalization bounds? It is important to understand which part contributes in the improvement, and to give more explanations on the tightness of the bounds.\n\nAbout originality:\nAre there any literature in causal inference deriving generalization bounds? Not necessarily of a similar form to the one in this work. If so, the authors may want to compare your bound to theirs, and cite them in the paper.\n\n\n The authors do not discuss this.", " This paper estimates the causal effects of continuous treatments. To balance the covariates among infinite subpopulations, they learn re-sampling weights that reduce the IPM distance between observed and counterfactual groups. Thus, they derive an upper bound on the estimated counterfactual error and demonstrate experimentally the proposed algorithm ADMIT based on the derived upper bound outperforms GPS, EBCT, DRNet, SCIGAN, and VCNet. Strengths:\n1. provide a theoretical guarantee to the causal effect estimation of continuous treatments.\n2. give a comprehensive summary of the current studies on the continuous treatment effect estimation\n\nWeaknesses:\n1. lack of innovation. The proposed framework is mostly based on previous works such as GPS, CFR, and DRNet.\n2. limited contribution. although the author provides the generalization bounds for estimating causal effects of continuous treatment, the theoretical proofs are the extension of CFR [Shalit, Uri, Fredrik D. Johansson, and David Sontag. 
\"Estimating individual treatment effect: generalization bounds and algorithms.\" International Conference on Machine Learning. PMLR, 2017.] This work provides a discretized approximation of the IPM term. Since there are limited samples while the number of domains is infinite, to overcome this challenge, they bound the difference between the IPM and its discretization under the assumption 3 that the probability distributions of subpopulations that received different treatments shift smoothly. However, how could this model handle a substantial shift and give a continuous and complete dose-response curve? The experimental analysis is relatively insufficient for the continuous treatments. ", " This paper derives an error bound for treatment effect estimation in the continuous treatment (dosage) regime. The key insight is to relate this error to the bias introduced by distributional shift in treatment assignment using IPM distances between distributions. The authors further provide an algorithm inspired by the theoretical upper bound as well as empirical validation for the proposed method.\n\n The paper is by-and-large an extension of [1] to continuous treatments domains by leveraging a discretized version of the IPM metric used in [1], as well as importance sampling with learned weights ([2], [3]). The idea itself is a good addition to the causal inference literature, but I have reservations regarding the execution, both from a technical and quality of writing standpoint. \n\nOverall structure:\n* There is not enough contextualization with previous work. For example, [1], [2], [3] are only briefly mentioned and their connection with the current work is not made obvious. The entire *Related Work* section lacks clarity. \n* The introduction is also confusing, especially for those unfamiliar with the advances in this specific area of causal inference. The different paragraphs don't seem connected and it's unclear what the state of the art that this paper is improving upon actually is. \n* Section 4.4 and the Algorithm Box are too underdeveloped. It it unclear what the different quantities ($\\phi, h, w$) until you read the text and the text doesn't have enough explanations about how to execute the different steps of the algorithm (e.g. how to computer the IMP gradient or how to choose $\\delta$ appropriately). The appendix is very sparse and doesn't contain additional information that could answer this question.\n\nTechnical weaknesses:\n* Theorem 1 also holds for $\\sigma_{min}=0$, for example when $Y^t=f_t(x)$, i.e. the counterfactual outcome under treatment $t$ depends solely on observed features.\n* Lemma 1 seems somewhat restrictive given that the loss function has to be in the family of functions that defines the IPM. What are the implications for the squared loss?\n* EMSE is a population quantity. What are the finite sample guarantees (if any)?\n* How do you actually calculate the IPM gradients and how does it contribute to the computational complexity? How do the sample and computational complexities depend on the choice of $\\delta$?\n* Concerns about reproducibility: the simulations were run on 3080 GPUs according to the appendix. What were the computational bottlenecks? How does the GPU usage compare across methods?\n* The experimental results are encouraging, but there are some issues with the performance of the benchmarks. 
For example, the DGP is similar to the one in the VCNet paper [4], but their performance were more along $\\simeq 0.15$, rather than the $\\simeq 0.19$ found in this paper. And if the discrepancy comes from the minor DGP modifications, have you tuned the parameters of the VCNet to the new DGP?\n\nOverall, I don't think this work is ready for publication yet. \n\n[1] Uri Shalit, Fredrik D Johansson, and David Sontag. Estimating individual treatment effect: Generalization bounds and algorithms.\n\n[2] Negar Hassanpour and Russell Greiner. Counterfactual regression with importance sampling weights. \n\n[3] Fredrik D Johansson, Uri Shalit, Nathan Kallus, and David Sontag. Generalization bounds and\n360 representation learning for estimation of potential outcomes and causal effects. \n\n[4] Lizhen Nie, Mao Ye, Dan Nicolae, et al. VCNet and functional targeted regularization for learning causal effects of continuous treatments. Extracted from the above section:\n\n* Lemma 1 seems somewhat restrictive given that the loss function has to be in the family of functions that defines the IPM. What are the implications for the squared loss?\n* EMSE is a population quantity. What are the finite sample guarantees (if any)?\n* How do you actually calculate the IPM gradients and how does it contribute to the computational complexity? \n* How do the sample and computational complexities depend on the choice of $\\delta$?\n* What were the computational bottlenecks of ADMIT? How does the GPU usage compare across methods?\n* Given the minor DGP modifications, have you tuned the parameters of the baselines (e.g. VCNet) to the new DGP? The authors have not addressed the computational limitations of their algorithm besides the fact that $\\simeq 3000$ GPUs were used.", " The authors tackle the problem of estimating causal effects under continuous treatments by proposing a novel framework to estimate the average dose response function (ADRF). The core idea of their framework is to minimize a bound on the ADRF loss (instead of minimizing the error in estimating the ADRF directly). The ADRF loss consists of both a factual loss (exact) and an upper bound on the counterfactual loss – which is derived via a re-weighting schema and the use of an Integral Probability Metric (IPM) that measures the distance between the factual and counterfactual distributions. \n\nThe authors provide a practical implementation of this theoretical bound under a smoothness assumption on the shift between covariate distributions in different subpopulations. They call their algorithm ADMIT, which they test in synthetic and semi-synthetic settings.\n \nStrengths: \n\n[quality + significance] - The authors provide a clear and well-motivated development of their upper bound on the ADRF loss. Their connection of the theory to practical implementation was also particularly well done.\n\nWeaknesses: \n\n[quality] - There are a few improvements that could be made to the empirical results section of the paper. Increasing the thoroughness of the empirical section is the only major weakness in my mind. For one, it would be useful to provide another view of the results by e.g. providing the actual dose response curve (i.e. response vs dosage for the estimation and the ground truth). This would help the reader get a better idea as to which regions of the treatment may be better (in terms of estimation) vs not. 
Secondly, it would be interesting to see how the method performs in the setting where some of the confounders were unobserved (or simulated to be unobserved, i.e. you conceal them). Although an initial assumption is one of unconfoundedness, characterizing the failure modes of the method is generally useful. At a high level, the authors need to do a better job at discussing the limitations and failure modes of their approach.\n\n[clarity] (minor) - There are a few areas where there are some grammatical and spelling errors. See below for a few examples (not exhaustive). \n It would be nice for the authors to give more intuition for how strong assumption 3 is? Seeing as this is a core assumption for the approximation of the IPM term, the authors need to provide a little bit more discussion surrounding it.\n\nMinor typos/spelling errors: \n\nLine 127: $\\hat{u}(t)$ should be $\\hat{\\mu}(t)$\n\nLine 173: What’s worse is that \n\nLine 16/Line 119: A transition word other than “besides” may be more appropriate \n \nNo, there was not a significant discussion of the limitations of the approach, which is a weakness of this paper. See above for suggestions to improve this (i.e. assumption 3 discussion, more thorough empirical analyses). \n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 4 ]
[ "YFyqweQuN6Q", "3cr2E7kBRd8", "ypUAHmAHziN", "pZUrD31s2Qn", "J9XIgKd3t3e", "_Hb8Sp_0g1A", "r80uf5A-GUx", "ypUAHmAHziN", "diL2WbwHVVw", "1KZWUEoak6w", "nips_2022_S0TR0W63NKl", "nips_2022_S0TR0W63NKl", "nips_2022_S0TR0W63NKl", "nips_2022_S0TR0W63NKl" ]
nips_2022_bGo0A4bJBc
Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics
Self-training based semi-supervised learning algorithms have enabled the learning of highly accurate deep neural networks, using only a fraction of labeled data. However, the majority of work on self-training has focused on the objective of improving accuracy, whereas practical machine learning systems can have complex goals (e.g., maximizing the minimum of recall across classes) that are non-decomposable in nature. In this work, we introduce the Cost-Sensitive Self-Training (CSST) framework, which generalizes self-training-based methods for optimizing non-decomposable metrics. We prove that our framework can better optimize the desired non-decomposable metric utilizing unlabeled data, under data distribution assumptions similar to those made for the analysis of self-training. Using the proposed CSST framework, we obtain practical self-training methods (for both vision and NLP tasks) for optimizing different non-decomposable metrics using deep neural networks. Our results demonstrate that CSST achieves an improvement over the state-of-the-art in the majority of cases across datasets and objectives.
Accept
The paper received two negative scores of 3 and 4 (the other is an 8 with high confidence 5), and the main criticism is that the writing is vague, especially since the topic of the paper may not be very popular in the community. The authors have made good efforts to improve their presentation, and they also provide additional clarifications as well as new experimental results point by point. Hence, in my opinion, the new PDF version is more readable. Regarding significance and novelty, the proposed new loss with regularization is theoretically sound and empirically effective. It also addresses self-training for non-decomposable metrics, which to our knowledge is the first time this has been done in the literature. There are many applications for this method and little related work in the community, which highlights its potential impact. I suggest accepting this paper for its significance, quality, and strong results. The writing was also improved during the rebuttal.
train
[ "m58lNK34L8", "Agx6GkyR87", "YqnBokNTY20", "Kqw2AoQawWT", "tmDlfdNTkhdQ", "yXj-01U1UD4m", "H1IyC_9k6Rfl", "xXOK7Ha_3d", "eTZUKEhd835", "TWq2WAOc8g4", "Y8VcHju7DXP", "xV5DLIaey2w" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the answers. I think the paper should have been submitted in a readable form in the first submission. \nI want to wish success in the next submission with a better version of the paper that should be further improved both with respect to the writing clarity and the quality of the experiments. Specifically, the notations should made plainly and not ambiguous and the ablation, especially with respect to the threshold selection should be (much!) more comprehensive. \nThe idea in the paper might be of significance to the community but part of making it significant is writing it in a readable form.", " We thank all the reviewers for their interesting questions and constructive feedback. We have carefully responded to each point raised in the reviews. We hope that the response clarifies all the questions. Please let us know if any further clarifications are required.", " Dear Reviewer FQ9c, \nWe sincerely thank you for providing helpful feedback on our work, which has significantly improved the quality of our paper. Also, we think that the majority of your initial impression of our work was based on the misunderstanding\n*of learned classifier $\\hat{F}$ to be wrongly assumed as optimal classifier $F^\\*$* in Theorem 5, which we have clarified in our response. We would be grateful if you could please go over our response and let us know if you have any further concerns.\n\n\nThanks, \nAuthors", " 2)**Generality of CSST:** We find that though these above methods in Table improve mean recall, they still are sub-optimal on the particular non-decomposable metrics (i.e., **min-recall (Table 1)** and **min coverage constraint (Table 2)**) we aim to optimize. Hence, this clearly shows the advantage of our proposed CSST framework over other techniques, which are aimed at just improving the mean recall in a general sense. Also, our framework is general and can be plugged into any SSL method which uses a consistency regularizer and thresholding. As FixMatch and UDA are widely used and cited consistency-based methods, we plug them into CSST and show improvements. \n\n> Discussion on Limitations\n- We have discussed Limitations in Appendix A.1 and referred to that in the checklist. \n\n\nReferences:\n\n[1]: Chen Wei, Kihyuk Sohn, Clayton Mellina, Alan Yuille, Fan Yang, CReST: A Class-Rebalancing Self-Training Framework for Imbalanced Semi-Supervised Learning, CVPR '21 \\\n[2]: Jaehyung Kim, Youngbum Hur, Sejun Park, Eunho Yang, Sung Ju Hwang, Jinwoo Shin, Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning NeurIPS '20 \\\n[3]: Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, Takahiro Shinozaki\n,FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling NeurIPS '21\\\n[4]: Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics (Ours) \\\n[5]: Muhammad Bilal Zafar, Isabel Valera, Manuel Gomez Rodriguez, Krishna P. Gummadi, Fairness Constraints: Mechanisms for Fair Classification, AISTATS'17\n[6]: Mehryar Mohri, Gary Sivek, Ananda Theertha Suresh,Agnostic Federated Learning, ICML 2019", " ### Response to 3.\n\n> Yet, the real missing experiment is checking what is the impact of this approach on existing baselines.\n\nFor investigating this we run the FixMatch algorithm with the proposed KL-Thresholding method in CSST for the objective of maximizing mean recall under coverage constraints (Section 5). 
We tabulate the results below.\n\n\n| Method | CIFAR-10 (Imbalance=100) | | CIFAR-100 (Imbalance=10) | |\n|:----------------:|:-------------:|:--------------:|:-------------:|:------------------:|\n| | Mean Recall | Min Coverage | Mean Recall | Min H-T Coverage|\n| | | (tgt. 0.095) | | (tgt. 0.01)|\n| CSST(FixMatch) w/o weighted consistency regularizer | 0.55 | 0.017 | 0.44 | 0.004 |\n| CSST(FixMatch) [4]| 0.80 | 0.092 | 0.63 | 0.010 |\n\nWe find that this leads to suboptimal results. This is probably because the same gain matrix $\\mathbf{G}$ is used simultaneously by the regularizer and the thresholding mechanism, which tightly couples them together. Hence, both the *weighted consistency regularizer and the thresholding mechanism* are jointly required for the proper functioning of CSST.\n\nWe would like to mention that in the case of maximizing the worst-case recall (Sec. 5), the $\\mathbf{G}$ matrix is a *diagonal matrix*, for which the proposed thresholding mechanism degenerates to the same thresholding mechanism as that of FixMatch. A detailed discussion of this has been added in Appendix Sec. I. Still, as can be seen in Table 2, our method CSST(FixMatch) outperforms FixMatch by 24\\% in min-recall; hence the proposed weighted consistency regularizer is also an important component in addition to the thresholding mechanism. Below we also compare with FlexMatch (NeurIPS 2021) and DARP (NeurIPS 2020), which use adaptive thresholding mechanisms, and we find that they still perform worse than our proposed CSST(FixMatch) model.\n\n\n### Response to 4.\n> why only one baseline is being compared to in each task? \n\n1)**Novelty of Task**: We would like to clarify that, to the best of our knowledge, there are no existing works that aim to optimize *non-decomposable metrics in a semi-supervised learning (SSL) setup*. We are the **first to propose a framework for the novel objective of optimizing non-decomposable objectives through SSL**, which is an important contribution to the community. There have been some recent works that aim to improve mean recall on imbalanced data in a semi-supervised learning paradigm. We provide a comparison under objectives similar to ours below (using the official codes):\n\n\n**Table**: Maximising minimum recall for CIFAR10 and minimum of Head and Tail recall for CIFAR-100\n\n| Method | CIFAR-10 (Imbalance=100) | | CIFAR-100 (Imbalance=10) | |\n|:----------------:|:-------------:|:--------------:|:-------------:|:------------------:|\n| | Mean Recall | Min Recall | Mean Recall | Min H-T Recall |\n| CReST[1] | 0.72 | 0.47 | 0.52 | 0.46 |\n| DARP[2] | 0.81 | 0.64 | 0.55 | 0.54 |\n| FlexMatch[3] | 0.80 | 0.48 | 0.61 | 0.39 |\n| CSST(FixMatch)[4] | 0.76 | 0.72 | 0.63 | 0.61 |\n\n\n\n**Table**: Maximising Mean Recall with a target (tgt.) coverage constraint\n\n\n| Method | CIFAR-10 (Imbalance=100) | | CIFAR-100 (Imbalance=10) | |\n|:----------------:|:-------------:|:--------------:|:-------------:|:------------------:|\n| | Mean Recall | Min Coverage | Mean Recall | Min H-T Coverage|\n| | | (tgt. 0.095) | | (tgt. 0.01)|\n| CReST [1] | 0.72 | 0.052 | 0.52 | 0.009 |\n| DARP [2] | 0.81 | 0.063 | 0.55 | 0.006 |\n| FlexMatch[3] | 0.80 | 0.046 | 0.61 | 0.006 |\n| CSST(FixMatch) [4]| 0.80 | 0.092 | 0.63 | 0.010 |\n\nWe find that our framework CSST(FixMatch) is able to *improve significantly over even the recent baselines for the objectives of interest*, i.e., **min-recall (Table 1)** and **min coverage constraint (Table 2)**. 
These objectives are of practical importance in areas like fairness where optimizing objectives like worst-case recall by trading off mean recall a bit is common for practical purposes[5,6]. In terms of mean recall, our approach performs better or is on par in 3/4 cases, hence achieving a *better trade-off in terms of satisfying the objective of interest along with the mean metric*.\n\n", " We sincerely thank the reviewer for their insightful and critical comments. We believe that the major misunderstanding has been considering $F^\\star$ to be the found classifier, whereas we use $F^\\star$ to denote the optimal classifier and $\\hat{F}$ to be found classifier in Theorem 5. We have put *significant effort into improving the clarity of the draft and would be grateful if you can please have a look at the revised version*.\n\n### Response to 1\n\n> Some symbols are defined only after they are being first used.\n\nWe apologize for the inconvenience. We have now fixed the notation of precision and recall, defined $F_{pl}$ before Assumption 1. We also would like to mention that we have provided a table of notations in the Appendix for reference.\n \n>Sometimes the authors use P_w where w is the weight and in the same equation use P_i where i is an index.\n \nThe ${P_w}$ is a drop in replacement for $P_i$ in our framework, as it's the weighted average of $P_i$ defined as: $P_w = \\sum_{i,j} w_{i,j}P_i$. Hence, $P_{w}$ doesn't have any index $i$. Furthermore, now we use $\\mathcal{P}_w$ in place of ${P_w}$ to denote the weighted probability to avoid confusion.\n\n \n> Effort on improving Clarity\n\nWe have now improved the clarity by introducing a summary Table for metrics, improving notational consistency, changing definition style, and providing a formal statement of each theorem used from prior literature in the appendix. A detailed list of changes has been specified in the summary of revision that we hope would significantly aid the readability of the paper and improves its clarity. We sincerely thank you for your suggestions.\n\n### Response to 2.\n\n> Weakness 2: Specifically, in assumption 1 if we assume $\\beta$ is negligible then it means that R_B,w is basically equal to zero. Is it really the constraint we want to use? \n\n\n- The condition $R_{B, w}(F)$ being small implies that $F$ is robust to data augmentation. Therefore, it is expected that a good classifier $F$ has a low value of weighted consistency $R_{B, w}(F)$. Since $\\beta$ is an upper bound of $R_{B, w}(F^\\star)$ and $F^\\*$ is an optimal classifier, we think even though $\\beta$ is small, it's natural to assume such an $F^\\*$ exists as we use overparameterized neural network based classifiers and high-dimensional (d) data (e.g., image data, etc.).\n- In addition, as we remarked just after Assumption 1, we also provide an example that justifies the existence of optimal $F^\\star$ under this assumption using a simple and common data generation model too, in Appendix C.1, Example 9. We show that for optimal classifier $F^\\star$ the optimal weighted consistency $R_{B,w} (F^\\star)$ is $O(\\frac{1}{poly(d)})$, which is negligible for large $d$ (here $d$ is the dimension of input i.e. large for image data).\n- Our assumption on $\\beta$ to be negligible is very similar to the assumption (3.3) made in Wei et al. [38] for theoretical analysis of self-training algorithms like FixMatch, which also justifies its applicability here. Hence, we also use the constraint of $\\beta$ to be small in our work. 
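To give intuition for how a weighted consistency term of this form can be estimated in practice, here is a schematic PyTorch sketch. It is a soft, differentiable relaxation under our own simplifying assumptions, not the exact regularizer from the paper: `w` is a hypothetical $K \times K$ weight matrix (with $K$ the number of classes) whose $(i, j)$ entry weights the event that the prediction flips from class $i$ to class $j$ under augmentation, and `augment` is a placeholder for the data augmentation used by the consistency regularizer.

```python
import torch

def weighted_consistency(model, x, augment, w):
    # Soft relaxation of R_{B,w}(F): expected weighted disagreement between
    # the prediction on x and the prediction on an augmented view of x.
    p = model(x).softmax(dim=-1)               # (B, K) class probabilities on x
    p_aug = model(augment(x)).softmax(dim=-1)  # (B, K) probabilities on augment(x)
    # Soft joint "probability" of predicting class i on x and class j on
    # augment(x), weighted by w[i, j]. The hard version would use argmax.
    per_example = torch.einsum('bi,bj,ij->b', p, p_aug, w)
    return per_example.mean()
```

With `w = 1 - torch.eye(K)` this penalizes any predicted-label change under augmentation, which is the sense in which a small value of the regularizer certifies robustness to data augmentation; a non-uniform `w` lets the penalty reflect a cost-sensitive weighting of the different kinds of flips.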
\n\n> Weakness 2: Also, if we assume that the error of the found function F^* is smaller than the one of F_{pl}, why Theorem 5 is surprising? \n\n- We would like to clarify that it's classifier **$\\hat{F}$ (not $F^\\*$)** that is found by minimizing the loss in Eq. 5. $F^\\*$ instead is the **optimal classifier** that minimizes the $Err_{w}(F)$ subject to the weighted consistency constraint $R_{B,w}(F) \\leq \\beta$ (i.e. robust to data augmentation) defined in Eq. 4, which is **unknown** for us.\n- The statement of Theorem 5 is non-trivial (and surprising) since the learned classifier $\\hat{F}$ using the loss function in Eq. 5 has superior performance than the pseudo labeler $F_{pl}$ in terms of the cost-sensitive metric, and the loss function for $\\hat{F}$ is defined using only pseudo labeler $F_{pl}$ and the weighted consistency regularizer $R_{B, w}$ which does not require labels. \n\n> At the end of the day, they want to show that F^* is better. So they assume that and then claim they proved that. So what is the point?\n- We want to convey that the learnt classifier **$\\hat{F}$ (not $F^\\*$)** is better than $F_{pl}$. *Obtaining a better classifier from lower performing pseudo labeler and regularization, justified theoretically through Theorem 5, explains the success of our CSST method.*\n\n\n\n", " ## Summary of Revision\n\nDear reviewers, \nWe sincerely thank you for all your feedback. We have posted a revised version of the draft, in which we have made significant efforts to *improve the readability of the draft by providing examples and intuitions for theoretical results*. However, all the theoretical results are still the *same* as the submitted version. We have highlighted the majority of modifications in the draft by font color **blue**. \n\n### List of Modifications/Additions\n- Table 1: Metrics defined using entries of a confusion matrix.\n- Line 91: Added the min-max optimization objective of improving worst-case recall using Lagrange multipliers $\\mathbf{\\lambda}$\n- Line 137: Class conditional distributions defined\n- Line 156-157: A robust classifier defined in terms of weighted consistency regularization\n- Line 159-160: $F_{pl}$ is defined\n- Line 164-167: Assumption 1 elaborated\n- Line 176: Domain-Range for c function in c-expansion defined\n- Line 180-183: Intuitive explanation for the non-decreasing nature of the c-expansion function given\n- Line 302-304: Reference section H in the Appendix regarding the equivalence of confidence-based thresholding to the KL-divergence-based thresholding for diagonal gain matrix and hard pseudo-label.\n- Line 586-594(Appendix): c-expansion property shown to hold for a mixture of isotropic gaussian functions\n- Section N.1: Bayes optimality of Cost-Sensitive-Loss introduced.\n- Section N.2: Comparision of c-expansion with (a, $\\tilde{c}$ )-expansion\n- Section N.3: Assumption of disjoint support assumed in Wei et al. 
[38] as compared to our setting where we do not require the existence of disjoint support.\n- Section H: Threshold mechanism for diagonal Gain Matrix with hard pseudo-label and its equivalence to simple confidence-based thresholding used in FixMatch.\n- Appendix Figure 5: 2 distributions with disjoint supports\n- Appendix Figure 6: 2 distributions with non-disjoint supports\n- Maximising worst-case recall is mentioned as a separate equation line 72-73\n- Line 185-187: Equivalence of c-expansion and (a, $\\tilde{c}$ ) expansion briefly introduced\n- $P_w$ changed to $\\mathcal{P}_w$\n- Line 202: Error bound on the learned classifier defined more clearly\n\nThanks \nAuthors\n", " We thank the reviewer for the suggestions and comments. Below we respond to the weaknesses mentioned.\n\n> Clarity of The Work\n\nIn the revised version of our paper, we have improved notational consistency, along with moving the complex and intricate details from the paper to the appendix. The following major changes have been made to improve the clarity of our work:\n- We added Table 1: Metrics defined using entries of a confusion matrix.\n- Line 181-183: Intuitive explanation for the non-decreasing nature of the c-expansion function given.\n- We also made the definitions of $F_{pl}$, $\\hat{F}$ and $F^\\star$ clearer\n- The main Theorem (i.e. Theorem 5) has been clarified followed by an explanation of the same. \n\nAlso, we do agree that our paper has mathematical complexity, but this is due to the nature of work on the problem of non-decomposable objectives[20,21,22]. *We have now provided intuitive examples and explanations for improving readability*.\n\n> Reference to earlier work and Self Containment\n\nThank you for your suggestion. We now formally state the results used from existing works in Appendix N of the revised version. Some further additions to make the paper self-contained have been listed as follows:\n- Section N.1: Bayes optimality of Cost-Sensitive-Loss introduced.\n- Section N.2: Comparision of c-expansion with (a, $\\tilde{c}$ )-expansion\n- Section N.3: Assumption of disjoint support assumed in Wei et al[38] as compared to our setting where we do not require the existence of disjoint support.\n- Example 8 (Appendix): c-expansion property shown to hold for a mixture of an isotropic gaussian function\n\nWe *sincerely request you to please have a look at the revised version of our paper* and refer to the summary of revisions made to improve the clarity of the paper. Please let us know in case you have any further concerns.\n", " Thank you for your suggestions and encouraging comments.\n\n#### **Response to Suggestion 1**\nWe shall make the codebase available immediately after the acceptance of our paper for the benefit of the community.\n\n#### **Response to Suggestion 2**\nWe looked into previous works for Long-Tail learning in a Semi-Supervised Learning setting and compared our results for the target objectives of maximising the worst-case recall and also maximising the mean recall subject to a target coverage constraint. We compared against CReST[1], and DARP[2]. As suggested we also compared the results against FlexMatch[3]. 
We observed that CSST(FixMatch) (Ours) gave the best results for the given target objective in all the cases.\n\n**Table**: Maximising minimum recall for CIFAR10 and minimum of Head and Tail recall for CIFAR-100\n\n| Method | CIFAR-10 (Imbalance=100) | | CIFAR-100 (Imbalance=10) | |\n|:----------------:|:-------------:|:--------------:|:-------------:|:------------------:|\n| | Mean Recall | Min Recall | Mean Recall | Min H-T Recall |\n| CReST[1] | 0.72 | 0.47 | 0.52 | 0.46 |\n| DARP[2] | 0.81 | 0.64 | 0.55 | 0.54 |\n| FlexMatch[3] | 0.80 | 0.48 | 0.61 | 0.39 |\n| CSST(FixMatch)[4] | 0.76 | 0.72 | 0.63 | 0.61 |\n\n\n\n**Table**: Maximising Mean Recall with a target (tgt.) coverage constraint\n\n\n| Method | CIFAR-10 (Imbalance=100) | | CIFAR-100 (Imbalance=10) | |\n|:----------------:|:-------------:|:--------------:|:-------------:|:------------------:|\n| | Mean Recall | Min Coverage | Mean Recall | Min H-T Coverage|\n| | | (tgt. 0.095) | | (tgt. 0.01)|\n| CReST [1] | 0.72 | 0.052 | 0.52 | 0.009 |\n| DARP [2] | 0.81 | 0.063 | 0.55 | 0.006 |\n| FlexMatch[3] | 0.80 | 0.046 | 0.61 | 0.006 |\n| CSST(FixMatch) [4]| 0.80 | 0.092 | 0.63 | 0.010 |\n\n\n[1]: Chen Wei, Kihyuk Sohn, Clayton Mellina, Alan Yuille, Fan Yang, CReST: A Class-Rebalancing Self-Training Framework for Imbalanced Semi-Supervised Learning, CVPR '21 \\\n[2]: Jaehyung Kim, Youngbum Hur, Sejun Park, Eunho Yang, Sung Ju Hwang, Jinwoo Shin, Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning NeurIPS '20 \\\n[3]: Bowen Zhang, Yidong Wang, Wenxin Hou, Hao Wu, Jindong Wang, Manabu Okumura, Takahiro Shinozaki\n,FlexMatch: Boosting Semi-Supervised Learning with Curriculum Pseudo Labeling NeurIPS '21\\\n[4]: Cost-Sensitive Self-Training for Optimizing Non-Decomposable Metrics (Ours)\n", " The paper proposes a technique for optimizing non-decomposable metrics in self-supervised learning settings. The method is based on the hybrid loss for optimizing non-decomposable metrics in supervised learning settings (Narashiman & Menon, 2021). The authors apply the reduction of the non-decomposable metric to cost-sensitive learning and form a weighted consistency regularizer. The framework is then used to improve the FixMatch algorithm for self-supervised training. Finally, the authors demonstrate the benefit of the models in real-world datasets. Strengths:\n- Optimizing non-decomposable metrics are relevant to many real-world problems\n- The authors provide theoretical justifications for the proposed models.\n- The experiments show the benefit of the proposed algorithm compared to baselines.\n\nWeaknesses:\n- My biggest complaint about the paper is the clarity of the paper. The paper is hard to understand for someone that is not already familiar with the related works. In many places, the authors refer to the related works without providing enough explanation for the reader to understand. In short, I do not think the authors have made the paper self-contained.\n- Some terminologies in the paper are used before even defining them, for example, Assumption 1 in Sec 3.2 uses F_pl, but it is defined only after Sec 3.4.\n\n----------- post rebuttal -----------\n\nThanks to the authors for providing the detailed explanation. \nThe authors has added more explanation in the appendix. \nHowever, I still think the paper need more explanation on the main paper to add clarity and help reader that is not already familiar with the related works. 
As also mentioned by another reviewer, the paper is very hard to read.\nFor the technical contribution of the paper, I could not asses it thoroughly due to the difficulty of understanding the contribution of the paper.\n\n Please answer my concerns and questions in the previous section.\n The authors have not discussed the limitation of the proposed model. A discussion on the model's limitations is suggested.", " The paper propose a new loss function for self learning that can take into account also non-decomposable cost functions. The work both analyzes the performance when using this new cost function and show empirically its advantage over previous solutions for self-training. In the theoretical part, the author claim that their loss function leads to a lower error compared to standard optimization over the loss. In the practical part, the authors show that their approach, which combines also a smart thresholding for keeping only part of the examples for the self training, leads to improved results over previous solutions. Both vision and NLP benchmarks are being considered. The strength of the paper is that it combines a theoretical analysis with some strong empirical results compared to existing baselines. \nA new loss function is being proposed and a thresholding approach. Both seems to improve the performance for the task at hand\n\nWeaknesses: \n\n1. The paper is very (very very) hard to read. The notation are not well presented. Some symbols are defined only after they are being first used. The writing is quite bad and it is very hard to follow. Few examples:\na. the recall and precision on line 65-66 are defined the same. \nb. F_{pl =} is used in Assumption 1 but defined only in Section 3.4\nc. The notation is also not very consistent. Sometimes the authors use P_w where w is the weight and in the same equation use P_i where i is an index. It is very unclear and confusing. \nd. Less critical compared to the above but part of the things that are defined should be better defined. For example c that is used in assumption 4 is mentioned without saying what it is. Indeed, it is mentioned in the c-expansion but the statement should start with mentioning that there is a function c and then say what is done with it. The comment here, is indeed a matter of style (unlike point a-c, which are not a matter of style but just make the paper very hard to follow)\n\n2. The assumption made in theoretical part are not justified. Specifically, in assumption 1 if we assume beta is negligible then it means that R_B,w is basically equal to zero. Is it really the constraint we want to use? \nAlso, if we assume that the error of the found function F^* is smaller than the one of F_{pl}, why Theorem 5 is surprising? Is not it just saying that if the error is smaller than the error is smaller? The authors assume something and then claim something more complicated that more or less states the same. At the end of the day, they want to show that F^* is better. So they assume that and then claim they proved that. So what is the point? \n\n3. The experiments are interesting but I wonder whether the main benefit is just from the new thresholding being used. There is an ablation that show the impact of this new threshoding approach on the proposed CSST approach. Yet, the real missing experiment is checking what is the impact of this approach on existing baselines (e.g., FixMatch). Clearly, the problem with the threshold has already been mentioned in previous works as the authors admit. 
So proposing a solution for it should also be checked with previous works. \n\n4. Finally, why is only one baseline being compared to in each task? Is FixMatch/UDA the only work that was proposed since 2020 for this problem? (Clearly there are others, and the authors should compare to them). Note that, compared to the other points raised above, this is the least important problem I find in the paper. See above. Didn't see limitations being mentioned ", " Non-decomposable metrics play an important role in machine learning. This paper proposes to improve self-training by optimizing non-decomposable objectives for semi-supervised learning. This strategy can bring a significant average improvement in the desired metric of worst-case recall while maintaining similar accuracy compared with SOTA methods. Empirical results show that the proposed Cost-Sensitive Self-Training (CSST) framework helps improve the performance of baselines, e.g., FixMatch and UDA. Pros\n- Proposes to improve self-training methods by optimizing non-decomposable metrics, utilizing unlabeled data in addition to labeled data.\n- Proposes a weighted consistency regularizer for Cost-Sensitive Self-Training.\n- The superiority on the desired non-decomposable metric is theoretically justified.\n- Empirical results have validated the effectiveness of the proposed method, which brings improvements over baselines.\n\n\nI am overall satisfied with this paper and would like to give some comments that may make this paper better.\n- It is expected that the source code of all experiments will be released to benefit the community.\n- It is promising to compare with methods that improve the recall of FixMatch, e.g., FlexMatch. See above. Yes."
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 5 ]
[ "Kqw2AoQawWT", "nips_2022_bGo0A4bJBc", "Y8VcHju7DXP", "tmDlfdNTkhdQ", "yXj-01U1UD4m", "Y8VcHju7DXP", "nips_2022_bGo0A4bJBc", "TWq2WAOc8g4", "xV5DLIaey2w", "nips_2022_bGo0A4bJBc", "nips_2022_bGo0A4bJBc", "nips_2022_bGo0A4bJBc" ]
nips_2022_jJwy2kcBYv
SPD: Synergy Pattern Diversifying Oriented Unsupervised Multi-agent Reinforcement Learning
Reinforcement learning typically relies heavily on a well-designed reward signal, which gets more challenging in cooperative multi-agent reinforcement learning. Alternatively, unsupervised reinforcement learning (URL) has delivered on its promise in the recent past to learn useful skills and explore the environment without external supervised signals. These approaches mainly aimed for the single agent to reach distinguishable states, insufficient for multi-agent systems due to that each agent interacts with not only the environment, but also the other agents. We propose Synergy Pattern Diversifying Oriented Unsupervised Multi-agent Reinforcement Learning (SPD) to learn generic coordination policies for agents with no extrinsic reward. Specifically, we devise the Synergy Pattern Graph (SPG), a graph depicting the relationships of agents at each time step. Furthermore, we propose an episode-wise divergence measurement to approximate the discrepancy of synergy patterns. To overcome the challenge of sparse return, we decompose the discrepancy of synergy patterns to per-time-step pseudo-reward. Empirically, we show the capacity of SPD to acquire meaningful coordination policies, such as maintaining specific formations in Multi-Agent Particle Environment and pass-and-shoot in Google Research Football. Furthermore, we demonstrate that the same instructive pretrained policy's parameters can serve as a good initialization for a series of downstream tasks' policies, achieving higher data efficiency and outperforming state-of-the-art approaches in Google Research Football.
Accept
Reviewers appreciated the paper's contribution of a novel method for unsupervised skill learning in MARL. While the scores were borderline, reviewers are mostly in favor of acceptance, therefore I recommend acceptance as well. Additional baselines and environments added during the rebuttal phase were important considerations in this decision.
val
[ "gwAfZooa16Q", "Su1Lsi1b30H", "X9cREdxmceH", "KZZGdHVRcU-", "L4xfaa5fP_4", "6pqtNe-txPAQ", "nOk7X9HSMHc", "Cn5heZ-JTC", "ecy9dEexydm", "SFarmKvz90", "Zp_EbVpYHIV", "u28Klj0ZQE", "41ubI2mpq4w", "fSIreI3VbMa" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the rebuttal. \nMy unclear points are clarified. \nAlthough I have less confidence in my understanding, I raised my score. \n", " We sincerely appreciate all reviewers for their time and efforts in evaluating our paper, as well as their detailed comments and suggestions.\n\nWe hope that our responses and answers to your questions will alleviate your concerns and further improve your opinion of our work. \nA revised version of our paper and supplementary material has been uploaded. \nWe would be pleased to answer any further questions you may have.", " Thank you so much for your advice.\n\nWe've updated the Fig.1 in the revised submission and tried our best to illustrate the procedure vividly.\nHowever, we still keep a few necessary math notations to aid understanding.\n\nHope this will alleviate your concern.", " Figure 1 is not a well-designed illustrative figure. I am not asking for removing it, but instead I will be happy to see it can be improved with less math notations, more diagram and serve as a good illustration figure.", " We sincerely appreciate your inspiring comments and insightful suggestions.\nWe hope the following answers your questions and addresses your concerns.\n\n> Weakness. 2: Writing clarity can be further improved ... to third notation.\n\nThanks for the advice.\nWe have added a discrimination in Appendix F to clarify these concepts and a new equation Eq. (7) in Sec. 4.4 to show the final formulation of the pseduo-reward in the *Rebuttal Revision*, please download it if convenient.\n\n> Weakness. 3: Besides, the connection between paragraphs ... relationship between concepts.\n\nWe are sorry for the non-smooth reading experience.\nWe have been trying our best to smooth the flow between paragraphs in the *Rebuttal Revision* and highlighted the alterations in blue.\n\n> Q1. I saw StartCraft2 SMAC environment is supported in your code. Why don't put its result in the paper?\n\nWe were struggled to fulfill the experiments on the different maps in GRF, and we were unable to evaluate the algorithms on SMAC due to the limitations of the computation resource.\nTo the concern of the performance on SMAC, we will try our best to plot the further results before the end of 'Reviewer-Author Discussions' period.\n\n> Q2. In Figure 2 (a), the d_sp is computed ... one population. 
\n\nFirstly, we want to clarify the difference between the **'population'** you mentioned and the **'synergy pattern'** we proposed.\nIIUC, the 'population' you refer to is a collection of diverse policies for the agents to complete *a single task* through more efficient exploration (as in [1]).\nBriefly, these learned policies are used to maximize a *task-related* reward while being kept different from each other for exploration.\nURL (including our method SPD), however, aims to learn diverse skills/synergy patterns in *task-agnostic* settings, which means each learned policy is essentially different from the others.\nFor instance, the learned synergy pattern 'pass-and-shoot' tends to be useful in offensive tasks and the synergy pattern 'running back to play defense' is intuitively effective in defensive tasks, while both are learned in one URL training process.\n\nAs a matter of fact, DIAYN and WURL learn the same number of policies as our method SPD ($Z=10$ in the MPE experiment and $Z=20$ in the GRF experiment).\nSince they are designed for the single-agent case, we deploy them by regarding all agents as a single agent.\nConcretely, we use the observations from all agents to create the features that DIAYN and WURL need.\nWe have updated the description of these two baselines in the *Rebuttal Revision* to draw a clearer distinction.\n\n[1] Parker-Holder, J., Pacchiano, A., Choromanski, K. M., & Roberts, S. J. (2020). Effective diversity in population based reinforcement learning. Advances in Neural Information Processing Systems, 33, 18050-18062.\n\n> Q3. I think this figure is not convincing ... coordination policies\" (line 291).\n\nAs we mentioned (lines 287-288), we adopt $d_{sp}$ as the metric for measuring the discrepancy among relative relationships of agents, since the SPG is built based on the relative relations.\nBesides, to our best knowledge, there is no other suitable metric for the relative relations of agents, and that is also why we proposed the discrepancy of synergy patterns.\nAs for the effectiveness of $d_{sp}$, we follow the suggestions by *Reviewer 8Kca* and *Reviewer tFW2*, adding a further evaluation of DIAYN and WURL on GRF.\nWe plot the learning curves of DIAYN and WURL in Fig. 3 in the *Rebuttal Revision*.\nThe results show that SPD learns faster and finally achieves a higher winning rate on all scenarios compared to the baseline QMIX and the single-agent URL approaches.\nWe believe the performance gap between SPD and the other URL approaches arises because SPD, by design, captures information that the single-agent URL approaches do not in multi-agent settings, namely the relative relations of agents.\n\nAs for the claim that \"SPD can inspire agents to visit identifiable states while further encouraging agents to explore with other types of coordination policies\" (line 291), we have updated it to \"SPD can inspire agents to visit **comparably** identifiable states **as the conventional URL approaches** while further encouraging agents to explore with other types of coordination policies\".\nThe results on the DSR, which measures the diversity of states, show that SPD achieves performance similar to DIAYN and WURL, and SPD significantly outperforms conventional URL approaches on $d_{sp}$, which measures the discrepancy among relative relationships of agents.\nBy \"... explore with other types of coordination policies\", we mean the learned joint policies are diverse.", " > Q4. 
I expect more convincing evidence to show that after SPD pre-training the models already \"learns useful synergy patterns\" (line 338).\n\nIn fact, as for the statement that SPD \"learns useful synergy patterns\", the experiments in Sec. 5.2 on GRF show that SPD can perform meaningful synergy patterns such as 'conducting ball in long distance' and 'passing-and-shooting' with no external reward from the environment and no further training, which is similar to the original single-agent experiments in DIAYN [3].\nAlso, using the learned joint policies as the initialization in the downstream tasks can accelerate the learning preodure (as shown in Fig. 3) suggests some of the policies are already suitable for these specific tasks.\n\n[3] Eysenbach, B., Gupta, A., Ibarz, J., & Levine, S. (2018). Diversity is all you need: Learning skills without a reward function. arXiv preprint arXiv:1802.06070.\n\n> Q5. Can you plot the test performance of SPD population ... at all!\n\nActually, we can not find significant trends at the learning curve of the environment returns during the URL learning process, which is acceptable since the agents never receive this signal at all.\nTo this concern, we compare the test performance of the joint policies learned by different URL approaches with no extra training on the 'Half Court;Fixed Init' map as we used in Sec. 5.2 and that of a randomly initialized QMIX, and the results shows the chosen policy learned by SPD already exhibits some 'talent' on the downstream task while there is no task-related signal.\n\n| Algorithm | Episode Return |\n| ---- | ---- |\n| SPD | $5.73 \\pm 1.65$ |\n| DIAYN | $0.17 \\pm 0.24$ |\n| WURL | $2.83 \\pm 0.66$ |\n| QMIX | $0.44 \\pm 0.72$ |\n\n> Q6. If I understand correctly, there exists Z populations of policies ... in the paper if I miss it.\n\nSorry for the confusion.\nAs we have distinguished the difference between 'population' and 'synergy patterns' in the former part, here the process you concerned about actually is that, there exists Z policies and Z replay buffers and each policy samples one episode data and then insert it into **its own replay buffer** for training. \nDuring the downstream learning, we test Z policies on the task before training and choose the one with the best performance as the initialization for QMIX training.\nTherefore, it is quite different from 'league training' which adds copies of policies to the league and tries to against these past policies, in addition, 'ensemble method' which shares all data across the policies and has a set of policies during the whole learning procedure.\n\n> Q7. A question related to this is that do the Z populations really diverse? Figure 2 (a) shows the d_sp\n\nThe $d_{sp}$ score in the Fig. 2(a) is the average of all the learned policies, which means each policy has enouge difference to the others.\nBesides, the visualization of GRF in Appendix C demonstrates that these policies do exhibit diversity.\n\n> Q8. Current method labels transitions with pseudo reward ... latest policies?\n\nThough the SPD computes the pseudo reward after the episode is terminated, the reward is amortized into each step, which matching the original training process of QMIX.\nBesides, the replay buffer for QMIX has a limits of 5000 samples and the oldest samples will be discarded, alleviating this problem during the training process.\n\n> Q9. Related to above question, is it possible to compute ... episode is terminated.\n\n'No match', the variant of SPD discussed in the ablation part (in Sec. 
5.1 and Appendix B.1), is exactly what you talk about.\nWe set the matching function $\\rho(t)=t$ directly which means there is no need to compute the pseudo-reward after the episode is terminated.\nAnd the result in Fig. 2(b) (Fig. 2 in the *Rebuttal Revision* now since we change the Fig. 2(a) into a table) shows the original SPD outperforms this variant.\n\n> Q10. Why the webpage is empty? If you don't prepare it, don't post it.\n\nWe are really apologetic for releasing a wrong version at that time and we have updated the released site, please check our [site](https://sites.google.com/view/spd-umarl) again.\n\n> Q11. Some typos ...\n\nWe are sincerely grateful for pointing these typos.\nWe have corrected them in the *Rebuttal Revision* and highlighted the alterations in blue.\n\n> Q12. I don't think Figure 1 is informative.\n\nFig. 1 shows that the pair of synergy pattern graphs are from step pair $(t, \\rho^*(t))$ and we hope it helps the readers who are not familiar with the background to understand the process correctly.\nThus, we insist to keep Fig. 1 since it delineates the process of obtaining the pseudo-reward for step $t$.\n\n> Q13. Figure 2 (a) and (b) are in quite different styles. And the legend of Fig2(b) is sketchy. This could be improved.\n\nThank you for pointing this out.\nWe have changed the Fig. 2(a) into a table and updated the legend of Fig. 2(b) in the *Rebuttal Revision*.", " > Q14. Appendix C figure 1 is good, but can be improved. ... nor webpage.\n>\n> Q15. Proposition 4.1 is boring. Maybe we can move it to appendix.\n\nThanks for the advice and we've now improved the Appendix C Fig. 1 and moved Proposition 4.1 to Appendix A in the *Rebuttal Revision*.\n\n> Q16. In Appendix D, the authors describe the limitations ... inter-agent diversity.\n\nIn fact, SPD does learn meaningful synergy patterns such as 'conducting ball in long distance' and 'passing-and-shooting' with no external reward from the environment and no further training (in Sec. 5.2 and Fig. 4), which is similar to the original single-agent experiments in DIAYN [3].\n\nWe agree with the second limitation you mentioned and we've added it into the Appendix D.\n\nFinally, we are really grateful for the reviewer's detailedly review and suggestions.\nBesides, our work, to our knowledge, is the only method so far to use URL manner in the multi-agent settings and thus we also plan to open our source code for further studies.", " We appreciate your review and constructive suggestions.\nWe hope the following answers your questions and addresses your concerns.\n\n> The comparisons between SPD, DIAYN and WURL in section 5.1 use $d_{sp}$ as an evaluation metric, ... the diversification of synergy patterns?\n\nWe are inspired by your important advice.\nWe have to admit that when designing the experiments, we took for granted that these methods, which were not designed for multi-agents, would have difficulty capturing the relationship of the agents, and ignored this part of the experimental comparison.\nThe further experiments of the DIAYN and WURL on GRF (alse based on QMIX with the same setting as SPD) are carried out, and we have updated the results in Fig. 
3 in the *Rebuttal Revision*.\n\nThe results show that using the model learned by conventional URL methods (WURL and DIAYN) as the initialization mostly performs similarly to the baseline QMIX, while WURL slightly outperforms QMIX on the 'Full-court' maps.\nIn contrast, SPD learns faster and finally gets higher winning rate on all scenarios which demonstrate the effectiveness of $d_{sp}$ as an evaluation metric.\nActually, one difference between SPD and these URL methods is that SPD encourages the diversity of the relationship among agents while these diversity methods encourages the diversity of visited states.\nTherefore, the models trained by WURL may reach diverse states on the 'Full-court' map which need more exploration and we believe this accounts for the slightly better performance.\n\n> The discussion of the ablation experiments in section 5.1 and figure 2 is very light on details. I am not sure what the reward scale is, what the precision on the solution means, and why these are relevant ablations to consider.\n\nDue to the limits of the pages by NeurIPS, there is not enough space to describe the ablation experiments in detail in the main body.\nTherefore we delineate the ablation experiments in Appendix B.1.\nWe also update the descriptions to make it more clear in the *Rebuttal Revision*.\n\n'Reward scale $\\beta_{r}$' is a constant factor multiplying the pseudo-reward in Eq. (5) to change the order of magnitude of it, since the neural network might be insensitive to the original magnitude of pseudo-reward (like $1\\times 10^{-1}$).\n\n'Sparse return $\\tilde{R}_z$' removes the decomposition of the pseudo-return, which means using $\\tilde{R}_z$ in Eq. (4) directly as the final reward.\n\nThe other two setting is about the process of solving for the optimal matching function $\\rho^*$ in Eq. (3).\n'On-policy $\\rho(t)=t$' removes this process entirely and simply sets $\\rho(t)=t$, while 'Suboptimal $\\bar{\\rho}$' uses a lower number of iterations in this process and obtains a suboptimal matching funcion $\\bar{\\rho}$.\n\n> A diagram illustrating $G^{sp}$ and $d_{sp}$ could aid understanding, even though the descriptions are clear.\n\n### **Update**\nWe are delighted to inform that we illustrate $G^{sp}$ and $d_{sp}$ in Fig. 1 in the newest revision, and we hope this will adress your conern.\n\nThanks to your valuable comments again.\n\n> Q1. Related to my concern above, what do you expect ... it supported.\n\nBriefly, we expect that learned joint policies with high $d_{sp}$ will learn faster in downstream tasks.\n\nTo this point, an important hypothesis made by DIAYN is that the higher the coverage of learned skills over the set of possible behaviour, the greater the probability of gaining effective skills (Line 34-36).\nBasically, we devise the discrepancy of synergy patterns $d_{sp}$ to evaluate the difference between two joint policies in multi-agent setting.\nAnd increasing the $d_{sp}$ aims to improve the coverage of learned joint policies over the set of possible coordination behaviour.\nWhen some policies keep high $d_{sp}$ to other useless policies, these policies may learn useful coordination behaviour.\nAs the results of the visualization on GRF show (Fig. 
4, and videos can be found at our [site](https://sites.google.com/view/spd-umarl)), some synergy patterns seems to be meaningless (such as kick the ball directly out of bounds), while some synergy patterns are surprising (such as pass-and-shoot).\nThe learned policies with these useful synergy patterns are intuitively be good initializations for the downstream tasks, leading to efficient downstream learning.\nAlso, our experimental results demonstrate that SPD does learn faster than the original baseline QMIX.\n\nAs for the assumption that \"it could be possible to achieve these results with more straightforward diversity methods\", we have shown the performance of URL approaches on GRF and discussed it in the former part.", " Thanks for your feedback.\nWe hope the following answers your questions and addresses your concerns.\n\n> Q1. L 84: no extra cost?\n>\n> Q2. L 244: utilize/an assignment?\n>\n> Q3. Algorithm 1 L15: Is eq. (2) correctly eq. (3)?\n\nWe are grateful to you for pointing these typos.\nWe have corrected them in the *Rebuttal Revision* and highlighted the alterations in blue.\nIf it is convenient, you can download and read the latest version.\n\n> Q4. The performances of CDS (Li et al. 2021, NeurIPS) in Figure 3 may be worse than expected (in the paper, CDS outperformed QMIX and QPLEX). I previously ran their code and it worked well. I know the experimental condition was different, but I want to know the reason (were there no code and hyperparameters for CDS?).\n\nActually, we ran the experiments of CDS [1] using their provided [code](https://github.com/lich14/CDS).\nAnd due to the missing of a concrete document describing the choosing of parameters, the performance of CDS on the map 'academy_3_vs_1_with_keeper' in GRF is produced by the default parameters (file at *CDS/CDS_GRF/config/algs/CDS_QMIX.yaml* provided by CDS).\n\nThere are some differences between our experiments and those in CDS that we believe may account for the unexpected results.\n\nFirst, as we described in Sec. 5.2, we add the randomness into the initialization while the environment in CDS initializes the agents in the fixed positions.\nSince CDS encourages the agents with different IDs to be diverse by maximizing the mutual information between the individual trajectory and agents' identity, the random initialization may be harmful to the performance (one extreme case: the agents exchange there positions but the IDs are the same).\n\nSecond, the experiments of QMIX [2] in our work is produced by the default parameters at [pymarl](https://github.com/oxwhirl/pymarl/blob/master/src/config/default.yaml) where the parameter `obs_agent_id=True`, while the same parameter is set to be `False` in the default parameters provided by CDS. This parameter controls the inputs of the RNN and the agent's one-hot ID will be included in the observation when `obs_agent_id=True`, which means the network will not recognize different agents when it is set to be `False`.\n\nBesides, we find the experiments (Figure 3, 4) in [3] have similar results that CDS does not outperform QMIX and QPLEX [4].\n\n> Q5. There was no video at https://sites.google.com/view/spd-umarl.\n\nWe are sincerely apologetic for releasing a wrong version at that time and we have updated the released site, please check our [site](https://sites.google.com/view/spd-umarl) again.\n\n> Q6. L328-329: “To avoid the effect of randomness, we repeat the visualization for 10 times for each skill.” It was not clear to me. 
What did the authors do?\n\nWe are sorry for the confusion caused by this statement.\nTo our concern that the different performance of different joint policies learned by SPD are only due to the randomness of the GRF environment itself but not the inter-policy diversity we want, we deploy each learned joint policy for 10 episodes and visualize them.\nThe results show that each joint policy exhibits the similar coordination behavior across 10 episodes (as the videos at our [site](https://sites.google.com/view/spd-umarl) show).\n\n> Q7. Appendix L83: The results show that none of the URL approaches, including our method SPD, achieve more than 75% on DSR. The values in Fig. 2 a seem to be 60%. Why did the authors mention 75%?\n\nIn fact, we evaluated the URL approaches with 5 different random seeds and the original results of these methods on DSR range from 56.6%~60.9%.\nSince there exist randomness during the evaluation process, we assume the result on DSR may surpass the current highest value.\nActually, this sentence means that the performance of these URL approaches are far away from satisfying, implying that there is still space for improvement.\nWe update the narration in the *Rebuttal Revision* and feel sorry for this confusion.\n\n[1] Chenghao, L., Wang, T., Wu, C., Zhao, Q., Yang, J., & Zhang, C. (2021). Celebrating diversity in shared multi-agent reinforcement learning. Advances in Neural Information Processing Systems, 34, 3991-4002.\n\n[2] Rashid, T., Samvelyan, M., Schroeder, C., Farquhar, G., Foerster, J., & Whiteson, S. (2018, July). Qmix: Monotonic value function factorisation for deep multi-agent reinforcement learning. In International conference on machine learning (pp. 4295-4304). PMLR.\n\n[3] Wu, S., Wang, T., Li, C., & Zhang, C. (2021). Containerized Distributed Value-Based Multi-Agent Reinforcement Learning. arXiv preprint arXiv:2110.08169.\n\n[4] Wang, J., Ren, Z., Liu, T., Yu, Y., & Zhang, C. (2020). Qplex: Duplex dueling multi-agent q-learning. arXiv preprint arXiv:2008.01062", " Thank you for your feedback and suggestions. We hope the following answers your questions and addresses your concerns.\n\n> Q1. Why is CDS performance worse than QMIX?\n\nWe ran the experiments of CDS using their provided [code](https://github.com/lich14/CDS) with the default parameters (file at *CDS/CDS_GRF/config/algs/CDS_QMIX.yaml* provided by CDS).\n\nIn fact, there are some differences between our experiments and those in CDS that we believe may account for the worse performance of CDS.\nFirst, as we described in Sec. 5.2, the randomness is added into the initialization while the environment in CDS initializes the agents in the fixed positions.\nThis random initialization may damage the performance of CDS because CDS inspires the agents with different IDs to be diverse by maximizing the mutual information between the individual trajectory and agents' identity. 
(one extreme case: the agents exchange there positions but the IDs are the same).\nSecond, the experiments of QMIX in our work is produced by default parameters at [pymarl](https://github.com/oxwhirl/pymarl/blob/master/src/config/default.yaml) where the parameter `obs_agent_id=True`, while the same parameter is set to be `False` in the default parameters provided by CDS.\nThe inputs of the RNN is controlled by this parameter and the agent's one-hot ID will be included in the observation when `obs_agent_id=True`, which means the network will not recognize different agents when it is set to be `False`.\n\nBesides, the experiments (Figure 3, 4) in [6] have similar results that CDS mostly does not outperform QMIX and QPLEX.\n\n[6] Wu, S., Wang, T., Li, C., & Zhang, C. (2021). Containerized Distributed Value-Based Multi-Agent Reinforcement Learning. arXiv preprint arXiv:2110.08169.\n\n> Q2. Why not compare with some other methods ... for comparison.\n\nThanks for your important advice.\nWe've compared with DIAYN [5] and WURL [6] on the GRF experiment with the same setting as SPD in Sec. 5.2 in the *Rebuttal Revision*, please check it if convenient.\nThe results show that using the model learned by conventional URL methods (WURL and DIAYN) as the initialization mostly performs similarly to the baseline QMIX, while WURL slightly outperforms QMIX on the 'Full-court' maps.\nIn contrast, SPD learns faster and finally gets higher winning rate on all scenarios.\nAs for the performance of WURL on the 'Full-court' maps which need more exploration, the models trained by WURL may reach diverse states during the URL training and we believe this accounts for the slightly better performance.\nThe significant improvement of SPD demonstrate that SPD is better for multi-agent settings compared to these URL approaches.\n\n> Suggestions 1. I think the auther should discuss some works ... MA exploration works [1].\n\nWe would like to point out the difference between unsupervised skill discovery and these diversity-based exploration methods([1, 2, 3]).\nUnsupervised Reinforcement Learning (URL), including our method SPD, mainly aims to learn diverse skills/synergy patterns in task-agnostic settings.\nThe diversity that URL cares about is the inter-policy diversity.\nAnd the results in Sec. 5.2 and Appendix C show that SPD does learn useful synergy patterns with no external reward from the environment.\nThe downstream task (in our paper, the experiment on GRF) learning is one of the applications of the policies learned by URL.\nMeanwhile, these diversity-based exploration methods mostly enhance single policy’s diversity or the inter-agent diversity and aim to encourage the exploration for efficient learning.\n\nMoreover, the reason why we compared with CDS is that it was reported to be the SOTA on GRF and we did not find the experiment on GRF in these methods, while we did not find official code provided by them as well.\nHowever, these diversity-based methods are related and we have cited and discussed them in the Related Works part in the *Rebuttal Revision*.\nThank you for pointing out this.\n\n> Suggestions 2. In addition, the experiments do not ... consideration, e.g., [4] [5].\n\nActually, Lee et al. [4] is also a exploration method that utils the intrinsic reward for exploration, which aims to learn **a single policy** for which the state marginal distribution matches a given target state distribution.\n\nAs for Liu et al. 
[5], we have been working on implementing this algorithm and will try our best to provide the further comparison results before the end of the 'Reviewer-Author Discussions' period.\n\n> Suggestions 3. Finally, SPD is compared on some environments ... improve the paper.\n\nOur experiment on 'academy_3_vs_1_with_keeper' in GRF actually uses the same map as CDS, except that we add randomness and increase the difficulty of exploration.\nRegarding this concern, we are working on producing further results on SMAC.\nWe believe this further evaluation will strengthen our claim about the superiority of SPD.\n\n> limitations\n\nWe have discussed limitations in Appendix D due to NeurIPS page limits.", "This paper proposes an unsupervised framework, SPD, to discover diverse policies for agents with no extrinsic reward. SPD uses a synergy pattern graph to represent the joint policy of the agents and discovers diverse policies by applying optimal transport. The experiments show that SPD outperforms some MARL algorithms in some environments. The contributions of this work are designing a new method to represent the joint policy for agents by considering other agents in MARL and proposing to explore skills by optimal transport. Strengths:\n\n1. This work provides an interesting and novel idea on the use of graphs to represent the joint policy for MARL.\n\n2. The paper is well written.\n\nWeaknesses:\n\n1. There is no support for the conclusion that SPD is better than other URL methods in terms of performance.\n\n2. Some standard benchmarks used in the MARL community need to be considered. Some of the results are very confusing to me, such as CDS not working at all.\n\nQuestions:\n1. Why is CDS performance worse than QMIX?\n\n2. Why not compare with some other URL methods on Google Football? Although these URL methods are not designed for MARL, they are needed for comparison.\n\nSuggestions:\n\nI think the author should discuss some works that are designed to encourage diversity [1] [2] [3]. \nFurthermore, I believe the baselines that are compared in this work are weak. Specifically, some MA exploration works are not considered, and the baseline with the best performance in the experiments (i.e., the football game) is QMIX. The authors need to consider some effective MA exploration works [1].\nIn addition, the experiments do not compare URL methods on Google Football, and more URL methods should be taken into consideration, e.g., [4] [5].\nFinally, SPD is compared on some environments, and more environments need to be considered, such as SMAC and MAMuJoCo.\nFor example, on some SMAC maps or on the Google Football environments used by CDS. Thus, more comprehensive experiments could significantly improve the paper.\n\n[1]: Zhou Z, Fu W, Zhang B, et al. Continuously Discovering Novel Strategies via Reward-Switching Policy Optimization[J]. arXiv preprint arXiv:2204.02246, 2022.\n\n[2]: Parker-Holder J, Pacchiano A, Choromanski K M, et al. Effective diversity in population based reinforcement learning[J]. Advances in Neural Information Processing Systems, 2020, 33: 18050-18062.\n\n[3]: Lupu A, Cui B, Hu H, et al. Trajectory diversity for zero-shot coordination[C]//International Conference on Machine Learning. PMLR, 2021: 7204-7213.\n\n[4]: Lee L, Eysenbach B, Parisotto E, et al. Efficient exploration via state marginal matching[J]. arXiv preprint arXiv:1906.05274, 2019.\n\n[5]: Liu H, Abbeel P. Aps: Active pretraining with successor features[C]//International Conference on Machine Learning. 
PMLR, 2021: 6736-6747. Limitations should be discussed more adequately in the main body of the paper. Societal impact is likely minimal.", "\nThis work proposes a novel unsupervised reward labeling method called SPD, which rewards a population of agents when they have a relationship graph (synergy pattern graph) that is different from those of other populations.\n\nConcretely, this work first builds the \"synergy pattern graph\" of a given population and then uses the SPG discrepancy between populations as the metric to encourage inter-agent diversity. \n\nExperiments show that the proposed method can (1) learn diverse synergy patterns and (2) improve the generalizability on downstream tasks.\n\nGenerally, I like this work. I will raise my score if the writing can be improved and more convincing evaluation results, both qualitative and quantitative, can be provided.\n\n# Update after responses\n\nI am happy to see the authors revise the paper to make it clearer. I raised my score.\n \n### Strengths\n\n* This work formulates the diversity between agents from the perspective of the \"synergy pattern graph\". This formulation of inter-agent diversity is novel. It encourages **diversity in relationship**, which stands higher than simply computing diversity based on agents within one group or computing diversity based on single-agent information.\n* The continuity and completeness of the writing are good. There are no obvious flaws or disconnected notations. \n* I am very happy to see that code is provided. The code comes with detailed documentation and function descriptions. This is a very big plus. \n\n### Weaknesses\n\n* The evaluation results do not provide a full view on the effectiveness of the proposed method. See questions.\n* Writing clarity can be further improved since the notations are a bit overwhelming. I think terms like SPG, SPG batch, SPG element, and Discrepancy of SP are messy. It would be better to simplify them if you don't have enough space to clearly define and introduce them. Also, I expect to see a concrete form of the reward in the implementation section, rather than a notation that refers to another notation that refers to a third notation.\n* Besides, the connection between paragraphs and sections is not smooth enough, which creates a huge burden in understanding. A good idea is to draw an illustrative diagram showing the relationship between concepts (I don't think Figure 1 is informative.). \n \n### Questions on experiments\n\nI saw the StarCraft2 SMAC environment is supported in your code. Why not put its results in the paper?\n\nIn Figure 2 (a), the d_sp is computed based on the Gromov-Wasserstein discrepancy between different populations. How do you compute this for DIAYN and WURL? IIUC they only have one population. I think this figure is not convincing enough to support the claims \"SPD captures the relative relations of agents\" (line 338) and \"SPD can inspire agents to visit identifiable states while further encouraging agents to explore with other types of coordination policies\" (line 291). I expect more convincing evidence to show that after SPD pre-training the models already \"learns useful synergy patterns\" (line 338). Can you plot the test performance of the SPD population while conducting the unsupervised learning? 
If we can see performance improvement then it shows that SPD can learn policy without reward at all!\n\n\n\n\n\n\n### Questions on method\n\nIf I understand correctly, there exists Z populations of policies and the population that actually involve in each episode is uniformly sampled from the pool of Z populations. Correct? \n**Is it possible that this is the reason why SPD is working? This is similar to league training and ensemble method, which have shown can improve generalization.**\nThis is an important question. Please point out existing exp result in the paper if I miss it.\n\nA question related to this is that do the Z populations really diverse? Figure 2 (a) shows the d_sp \n\n\n---\n\nCurrent method labels transitions with pseudo reward and stores them into replay buffer for QMIX training. Therefore the reward might be outdated in the process of training. Is it necessary to relabel those reward based on latest policies?\n\n\n---\n\n\nRelated to above question, is it possible to compute the proposed pseudo reward based on truncated episode? For example, to generalize your method to on-policy RL algorithm like independent PPO, the reward should be computed on-the-fly before the episode is terminated.\n\n\n\n### Other questions\n\n\nWhy the webpage is empty? If you don't prepare it, don't post it.\n\n---\n\nSome typos:\n\n* CSD paper has wrong citation format.\n* Algorithm 1 Line 14: Update d_gw according to Eq (6) as well as Eq (1).\n* Line 139: \"about how to [solving] the optimization problem\"\n* Line 244: \"which could solve such [a] assignment\"\n* Line 288: can add a reference to d_sp\n\n\n---\n\nI don't think Figure 1 is informative.\n\n---\n\nFigure 2 (a) and (b) are in quite different styles. And the legend of Fig2(b) is sketchy. This could be improved.\n\n---\n\nAppendix C figure 1 is good, but can be improved. Agents in different teams should have clear difference. For example, opponent, ball and guard should use different shape rather than circle. I actually prefer to watch video, but unfortunately you don't provide video nor webpage.\n\n---\n\nProposition 4.1 is boring. Maybe we can move it to appendix.\n \n\nIn Appendix D, the authors describe the limitations. \n\nI think a limitation is that, supposing URL methods (including SPD) perform bad when simply deploying the learned models to the test tasks without further fine-tuning, how to make URL method works in a complete reward-free setting. Previous works in single-agent RL, like DIAYN, shows that we can learn policy without any reward. This also relates to the next limitation.\n\nThe second limitation is that, I think we can incorporate diversity encouraging technique in single-agent with the SPD. By doing this, we can improve the intra-agent diversity as well as inter-agent diversity.\n\n\n\n", " The authors proposed a MARL algorithm to learn generic coordination policies for agents with no extrinsic reward called Synergy Pattern Diversifying Oriented Unsupervised MARL (SPD). They utilized a graph representing the relationships of agents at each time step called Synergy Pattern Graph (SPG), and an episode-wise divergence measurement to approximate the discrepancy of synergy patterns. Results showed the capacity of SPD to acquire meaningful coordination policies, such as maintaining specific formations in Multi-Agent Particle Environment and pass-and-shoot in Google Research Football. 
Furthermore, they demonstrated that the same instructive pretrained policy’s parameters can serve as a good initialization for a series of downstream tasks’ policies, achieving higher data efficiency and outperforming state-of-the-art approaches in Google Research Football. The strength of this paper is as follows:\n* The proposed unsupervised MARL algorithm seems to be original, and they used the synergy graph to train team skills without rewards from the environment, which brings the agents good generalization ability in downstream tasks, and takes no extra cost in execution. \n* The experimental results demonstrate that SPD achieves better performance in MARL compared to conventional unsupervised RL approaches and shows great potential to learn synergy patterns with generalizability for downstream tasks.\n\nThe weakness of this paper is as follows:\n* The presentation was sometimes unclear (please see below)\n* The experiment descriptions were sometimes unclear (please see below)\n * L 84: no extra cost?\n* L 244: utilize/an assignment?\n* Algorithm 1 L15: Is eq. (2) correctly eq. (3)?\n* The performances of CDS (Li et al. 2021, NeurIPS) in Figure 3 may be worse than expected (in the paper, CDS outperformed QMIX and QPLEX). I previously ran their code and it worked well. I know the experimental condition was different, but I want to know the reason (were there no code and hyperparameters for CDS?).\n* There was no video at https://sites.google.com/view/spd-umarl.\n* L328-329: “To avoid the effect of randomness, we repeat the visualization for 10 times for each skill.” It was not clear to me. What did the authors do?\n* Appendix L83: The results show that none of the URL approaches, including our method SPD, achieve more than 75% on DSR. The values in Fig. 2 a seem to be 60%. Why did the authors mention 75%?\n The limitations and potential negative societal impact were described in Appendices D and E, respectively.\n", " The paper presents a method for unsupervised multi-agent reinforcement learning which uses diversity bonuses as an intrinsic reward. The SPD method rewards diversity between _synergy pattern graphs_, which measure the similarity of the full set of agents' behavior. The paper argues that diversity as measured between these graphs can capture the dependencies between agents better than promoting diversity in the state space alone. The paper demonstrates improved performance in the Google Research Football (GRF) benchmark.\n\n# Update after author response\n\nThank you for the well-considered responses and updated experiments. The updated results provide a stronger support for the SPD method, and I have raised my score to reflect them. ### Originality\n - To the best of my knowledge, the SPD method is a novel approach that combines diversity-based unsupervised RL approaches with a graph-based representation of agent interactions.\n\n### Quality\n - The comparisons between SPD, DIAYN and WURL in section 5.1 use $d_{sp}$ as an evaluation metric, which seems incomplete. Without establishing that $d_{sp}$ correlates with, say, better down-stream task generalization performance, it is difficult to be confident in conclusions drawn using it as a metric. I would have liked to see these baselines (DIAYN and WURL) used in the experiments in section 5.2 as well. 
Can we attribute the improved performance of SPD demonstrated in figure 3 to diversity bonuses in general, or specifically to the diversification of synergy patterns?\n\n### Clarity\n - Although the SPD method is complex, the authors do a good job describing the approach, including describing the necessary background for understanding SPD. \n - The discussion of the ablation experiments in section 5.1 and figure 2 is very light on details. I am not sure what the reward scale is, what the precision on the solution means, and why these are relevant ablations to consider.\n - A diagram illustrating $G^{sp}$ and $d_{sp}$ could aid understanding, even though the descriptions are clear.\n\n### Significance\nThe significance of the approach is hard to judge given the current experiments. The approach seems to make advances on the state of the art in CTDE MARL approaches on GRF, but the concerns raised in the Quality section of this review above make it hard to know what to attribute the performance to. Related to my concern above, what do you expect the relationship between $d_{sp}$ and downstream task performance to be? Did I misunderstand why it should be a good metric on its own? My rating depends mostly on this point, because the approach seems promising and the results are good on one benchmark, but it could be possible to achieve these results with more straightforward diversity methods. Arguments in the paper suggest reasons why we should expect DIAYN and WURL to perform worse, but there is no experiment to back this up. Since this is a major claim of the paper, I'd like to see it supported. The authors address the limitations and potential societal impact well in the appendix." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "ecy9dEexydm", "nips_2022_jJwy2kcBYv", "KZZGdHVRcU-", "6pqtNe-txPAQ", "u28Klj0ZQE", "u28Klj0ZQE", "u28Klj0ZQE", "fSIreI3VbMa", "41ubI2mpq4w", "Zp_EbVpYHIV", "nips_2022_jJwy2kcBYv", "nips_2022_jJwy2kcBYv", "nips_2022_jJwy2kcBYv", "nips_2022_jJwy2kcBYv" ]
nips_2022_TTM7iEFOTzJ
EpiGRAF: Rethinking training of 3D GANs
A recent trend in generative modeling is building 3D-aware generators from 2D image collections. To induce the 3D bias, such models typically rely on volumetric rendering, which is expensive to employ at high resolutions. Over the past months, more than ten works have addressed this scaling issue by training a separate 2D decoder to upsample a low-resolution image (or a feature tensor) produced from a pure 3D generator. But this solution comes at a cost: not only does it break multi-view consistency (i.e., shape and texture change when the camera moves), but it also learns geometry in low fidelity. In this work, we show that obtaining a high-resolution 3D generator with SotA image quality is possible by following a completely different route of simply training the model patch-wise. We revisit and improve this optimization scheme in two ways. First, we design a location- and scale-aware discriminator to work on patches of different proportions and spatial positions. Second, we modify the patch sampling strategy based on an annealed beta distribution to stabilize training and accelerate the convergence. The resulting model, named EpiGRAF, is an efficient, high-resolution, pure 3D generator, and we test it on four datasets (two introduced in this work) at \(256^2\) and \(512^2\) resolutions. It obtains state-of-the-art image quality, high-fidelity geometry and trains \({\approx}\)2.5 faster than the upsampler-based counterparts. Code/data/visualizations: https://universome.github.io/epigraf.
Accept
The reviewers found the method simple and effective and considered it a contribution of interest to the community. Claims are well supported by experiments and design choices have been validated. The paper is well written. Furthermore, the authors provided highly detailed responses to all questions by reviewers, which creates confidence that reviewers' remarks will be addressed in the final paper.
test
[ "BKk-tLzFYjW", "V_SKhiggND7", "23Mw4KbVBb", "m7GqTCwVsLX", "VtedKmDiaHS", "pZ3kd-9Def0", "oIErTxvlPPx", "pjn9FRqUFs", "Nxw2zXZne5p", "XI6If5Nn5Dj", "OgMjOlPbp1Q", "Zw6wdfSO3dZ", "b8aDZVqbH9g", "O02Rrg9lCPS", "_JwSNEfncHZ", "HoykHFPUhYJ", "y2zuhLFAsQb" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the reply. I would encourage the authors to add discussion on sampling strategy in the paper. The rebuttal answers my questions and I would like to keep the original rating.", " Dear Reviewer, we are very thankful for your feedback which helped us to improve several important parts of our work. And if that's possible, it would be crucial for us to know if there are any concerns left on your side after our response? As far as we understood, the main concern of the review originates from the surmise that a tri-plane generator could be trained in full-resolution on its own, which eliminates the need for patch-wise training. In our response, we provided an exposition showing that training it in full resolution would result in ~$20\\times$ longer training, making it computationally infeasible *without* our patch-wise scheme.\n\nWe also conducted an additional series of experiments to ablate the proposed scale sampling strategy; implemented and conducted an experiment with the pi-GAN-based generator; and elaborated on pose conditioning in the discriminator. And we would be happy to know whether our exposition is convincing and whether there is anything else we could elaborate on to resolve the existing or any new concerns. And once again, we apologize for submitting our response with the delay.", " Thank you for the detailed response to all the questions. I find them quite informative and would maintain my original rating.", " I thank the authors for the exhaustive reply. I believe the response clarified my concerns and I am leaning towards acceptance.", " We thank the reviewers for their feedback — it helps us to improve the work and gives good ideas on what directions to explore in the future.\nWe sincerely apologize for the delay with our response and now provide the additional experiments, comparisons, and clarifications for our model.\nWe set up a separate anonymous web page to host the necessary media files for this: [https://rethinking-3d-gans.github.io/additional-results](https://rethinking-3d-gans.github.io/additional-results) (we will also include them in the supplementary material/web page).\nHere is the summary of what we did:\n- Elaborated on all the raised questions and concerns.\n- Provided geometry and multi-view consistency comparisons on FFHQ $512^2$ with EG3D, as suggested by R4, and extreme-angles visualizations on FFHQ $512^2$, as suggested by R3.\n- Conducted 9 additional experiments on Cats $256^2$ for different patch sizes and R1 $\\gamma$ regularization weights for a more thorough analysis of the patch resolution influence.\n- Conducted 3 additional experiments on Cats $256^2$ with the $\\text{lerp}[1, r/R, 1 - min(t/T, 1)]$ patch scale sampling strategy, as suggested by R2.\n- Conducted 4 additional experiments on Cats $256^2$ for the standard discriminator for different scale sampling strategies to address the concerns of R1 and R4.\n- Conducted 2 additional experiments for the bare tri-planes + GRAF uniform scale sampling setup on FFHQ $512^2$ and M-Plants $256^2$, as suggested by R4.\n- Launched EG3D on the standard FFHQ $512^2$ but observed that it seems to require some additional tuning to obtain the same performance.\n- Implemented a simple MLP-based NeRF generator with patch-wise training in our code repo and tested it on Cats $256^2$: it attained FID of 21.46 after 1 day of training on 8 $\\times$ V100 GPUs (compared to FID of 68.28 for $\\pi$-GAN after 7 days of training).\n- We invested additional efforts into tuning the hyperparameters for GRAM 
and managed to obtain a ${\\approx}$20\\% better performance for it on M-Plants $256^2$ and M-Food $256^2$. However, it was still diverging into a mode collapse.\n- Trained StyleNeRF with adaptive differentiable augmentation on Cats $256^2$ and observed that it dramatically improved its performance: from FID of 27.91 to FID of 5.91.\n- We made some minor changes to the submission text.\n\nWe updated our submission to reflect the changes (highlighted in blue) and will include all the additional results in the supplementary and/or on the accompanying web page.\nOnce again, we apologize for the delayed response and hope to have a fruitful discussion.", " We would love to express our appreciation to the reviewer for their thorough analysis of our work and the raised concerns and questions — they allow us to make our submission stronger.\nIn the following, we provide additional experimental results and visualizations, details on the questions and concerns, and elaborate on the advantages of our model.\n\n**Q**. *Why should one prefer this method instead of EG3D? It seems to obtain better quantitative metrics (and qualitative metrics are not shown).*\n\n**A**.\nFirst of all, note that EG3D's official codebase was released only after the NeurIPS submission deadline, that's why we could compare to it only in terms of the reported scores.\nNext, note that EG3D is trained not on the standard FFHQ but on a re-cropped/re-aligned version of it, that's why FID scores are not directly comparable for it with other 3D generators.\nThen, we would also want to note that EG3D is extremely well-tuned for FFHQ both in terms of sheer compute and engineering efforts: it contains a variety of advanced techniques which improve its performance and which we couldn't borrow since its official codebase was released after the submission deadline, like density regularization, some parts of Mip-NeRF volume rendering, richer information about camera poses (i.e., following prior work, we assume zero roll angles, while in their case they use all three rotation/elevation/roll angles), and others.\nBesides, our project spent ${\\approx}$12 NVidia V100 GPU-years in total (where $1.5$ GPU-years were spent on running/tuning baselines), while just a single run of EG3D on FFHQ $512^2$ is ${\\approx}0.2$ V100 GPU-years.\nWhile EG3D does not report their overall compute, the recent StyleGAN-s (StyleGAN2-ADA/StyleGAN3) typically consume 90-140 GPU-years per project, and it feels safe to assume that EG3D falls into a similar class of projects.\nAnd the amount of spent compute determines the possibility of finding better hyperparameters for a model and the number of ideas one can explore to improve the performance.\n\nMoreover, training EG3D on a new dataset seems to be not that straightforward: we attempted to train it on the standard FFHQ (to get directly comparable FID scores) with the official hyper-parameters provided for their re-aligned version of FFHQ and we observe considerably lower performance: for now, $40\\%$ of its training has passed and its current FID is 24.5.\nFor comparison, StyleGAN2 achieves an FID of 6.08 at this training stage, and our generator — 14.78.\nWe include the samples from it in Section 7 on [this web page](https://rethinking-3d-gans.github.io/additional-results).\nIn this way, it seems that EG3D needs some tuning when one applies it to a new dataset.\n\nAnswering the original question, we can name the following reasons why our model could be preferable:\n- It is 3 times cheaper to train.\n- It is 
multi-view consistent by construction and captures high-frequency geometry details (see the visualizations in Section 1 and Section 2 on [this web page](https://rethinking-3d-gans.github.io/additional-results)).\n- Our model can easily integrate tricks from the existing NeRF literature: i.e., we did background separation on FFHQ by simply copy-pasting the code from NeRF++ (see Figure 1 of the main text).\n\nFinally, we believe that our paper is not simply a novel generator architecture (which is useful for its own sake) but also our exploration of patch-wise sampling and improvements we develop on top of it, which could be used in other scenarios.\nAt the current moment, the community almost completely abandoned patch-wise training (even the original GRAF authors didn't use it in their recent [VoxGRAF project](https://katjaschwarz.github.io/voxgraf)) and switched to upsampler-based generators. However, in our work we show that patch-wise models are powerful rivals for them both in terms of performance and training cost, which we believe is important knowledge for the community to have.", " **Q**. *Multi-view consistency comparison with upsampler-based generators? With EG3D?*\n\n**A**.\nWe provided video comparisons with upsampler-based methods in terms of multi-view consistency on our supplementary material website: [https://rethinking-3d-gans.github.io/](https://rethinking-3d-gans.github.io/).\nUnfortunately, showing a video is the only reasonable way to assess multi-view consistency, which we cannot include in the main text.\nVideos on the website for all four datasets (FFHQ, Cats, M-Plants and M-Food) show that StyleNeRF and MVC-GAN (SotA generators which had their code released at the time of the submission) have their texture/shapes changing when the camera moves. At the same time, the samples of our model remain consistent.\n\nSince the official source code for EG3D is out, we provide the comparisons to it in Section 2 on [this web page](https://rethinking-3d-gans.github.io/additional-results).\nOne can still see that EG3D alters the texture (especially in the hair or around the eyes) and sometimes even structure (near the mouth) depending on the camera position.\nThis lack of multi-view consistency is undesirable in practical applications since it looks like aliasing problems or flickering to the end user.\n\n=====================================================================\n\n**Q**. *Could you provide a qualitative comparison with EG3D in terms of inferred geometry?*\n\n**A**.\nThat being said, we couldn't do this at the submission time since their codebase was not released.\nWe provide this comparison on FFHQ $512^2$ in Section 1 on [this web page](https://rethinking-3d-gans.github.io/additional-results).\nWe recommend viewing those videos in full resolution: one can see from them that our method captures high-frequency surface details (like hair or areas around the eyes) much better since it learns the shapes in the natural dataset resolution.\n\n=====================================================================\n\n**Q**. *Why could one not simply rely on the tri-plane representation + GRAF patch-based discriminator to achieve the same results proposed here? 
Is there a direct comparison with GRAF?*\n\n**A**.\nGRAF is a foundational model, but much time has passed since it was released, and directly comparing to it would be unfair since it uses older architectural backbones and trains on small compute.\nAt the submission time, we did not include the comparison to the bare tri-planes + GRAF's uniform scale sampling setup and ablated each component (discriminator structure and beta sampling strategy) separately.\nBut we agree that such a comparison would be helpful to a reader and run this model on FFHQ $512^2$, M-Plants $256^2$ for $T=5k$ and Cats $256^2$ for $T=1k, 5k, 10k$ and with the proposed beta scale sampling to compare the convergence for different scale sampling strategies for it.\nThe bare tri-planes + GRAF sampling setup achieves ${\\approx}30$% worse performance than our final model.\nWe report the scores in Table 2 (see the updated submission) and the convergence plots comparison for different scale sampling strategies in Section 5 on [this web page](https://rethinking-3d-gans.github.io/additional-results) (and add it to the supplementary).\nWe thank the reviewer for bringing this up: it helps us to improve our work.\n", " We appreciate the effort the reviewer spent studying our work and their positive assessment of our contributions.\nBelow, we did our best to elaborate on all the raised questions, provide additional visualizations and perform additional experiments to address all the concerns.\n\n**Q**. *As shown in the appendix, the patch-wise training strategy is subpar compared to full-resolution training for 2D generation, which limits its adoption.*\n\n**A**.\n% While it's true that our results for patch-wise training of StyleGAN2 were subpar compared to full-resolution training, one needs to note that\nPatch-wise optimization of 2D/3D generators is currently in its infancy, and there are \\textbf{very} few works which explore it.\nIn our project, we investigated two of its important aspects: scale sampling strategies and the adaptation of discriminator's filters' to different image scales and developed two ideas to improve them.\nWhile it's true that our two ideas are not enough to close the gap between patch-wise training and full-resolution training, they make solid steps into bridging it.\nWe believe that patch-wise optimization is promising and will be explored further, especially in the context of 3D and video generation (which is traditionally very expensive), and our ideas can find their use there.\nOne important aspect which could be improved (and which we couldn't make work in our project) is providing the global image information to the discriminator when it processes small-scale patches: right now, it is forced to judge only on the local information, which turn out to be a huge limitation compared to a full-resolution discriminator.\n\nAlso, note that [AnyResGAN](https://chail.github.io/anyres-gan/), a contemporary work, also uses patch-wise optimization, but for 2D generation and with a much larger patch resolution (256$^2$).\nAnd their patch-wise StyleGAN3 is very close to the full-resolution StyleGAN3 in terms of performance: FID of 3.95 vs FID of 3.06 on FFHQ $1024^2$ (Appx 7.5).\nThey explored two completely different aspects of it: knowledge distillation from a coarse-scale teacher and generator conditioning on patch parameters.\nHence, we believe patch-wise training is promising and its gap with full-resolution training will get smaller and smaller with 
time.\n\n=====================================================================\n\n**Q**. *Were there any strategies used to mitigate aliasing in generated patches (e.g., Mip-NeRF-like volume rendering)?*\n\n**A**.\nWell, that is an excellent question, and we must admit that we did not take care of aliasing effects.\nA straightforward (and \"ideal\") solution would be to generate a full-resolution image and then extract the patch with anti-aliasing, but it would be too expensive.\nIntegrating the ideas from Mip-NeRF would: 1) introduce additional complexities into the project; and 2) it is not clear how to use it on top of tri-planes, since the tri-plane representation uses coordinates as the interpolation weights for plane features rather than the positional embeddings inputted to an MLP.\nThis is why we didn't investigate anti-aliasing in our project and leave it as an important future research direction.\n\n=====================================================================\n\n**Q**. *How do you sample real patches? Were there any strategies used to mitigate aliasing?*\n\n**A**.\nWe extract real patches from images using grid sampling with bilinear interpolation.\nSince we do not employ anti-aliasing techniques in the generator, we considered it to be proper not to use them for real patches either, since it would lead to a potential mismatch in real and fake distributions.\nNevertheless, we agree that it is vital to explore it in the future, and it could significantly improve the performance of patch-wise training.", " **Q**. *What's the relationship between patch size and final image size? E.g., In table2 it seems for 512x512 image size, 64 patch size seems to be the best. Does it mean that for 256x256 image size, we should change to 32 patch size?*\n\n**A**.\nWe believe that, in general, the higher, the better, since higher-resolution patches contain more information and hence learning signal for both the generator and discriminator.\nOur project evolved around the highest affordable resolution in our resource constraints: $64^2$, and to be honest, we believe that the performance for the $128^2$ patch size could be improved if one tune it better.\nIn our case, it was simply not affordable since the training takes too long (40\\% more, which was a permissible cost for us for final ablations, but not to develop the whole project).\n\nTo compare more thoroughly, we launched a hyper-parameter grid search on Cats $256^2$ for $r = 32^2, 64^2$ $128^2$ patch sizes and $\\gamma = 0.01, 0.1$ and $1.0$ R1 penalty weight, which is the most important hyper-parameter when exploring new resolution or dataset for StyleGAN2-ADA (in our work, we used $\\gamma=0.05$ which we inherited from our patch-wise training experiments on 2D generation).\nThe FID@2k scores for them are provided below:\n- $32^2$ patch size:\n - $\\gamma=0.01$: 20.31\n - $\\gamma=0.1$: 30.72\n - $\\gamma=1$: 335.32\n - training speed: 9.84 seconds / 1K images\n- $64^2$ patch size:\n - $\\gamma=0.01$: 24.93\n - $\\gamma=0.1$: 18.13\n - $\\gamma=1$: 20.07\n - training speed: 11.36 seconds / 1K images\n- $64^2$ patch size:\n - $\\gamma=0.01$: 21.21\n - $\\gamma=0.1$: 18.72\n - $\\gamma=1$: 16.96\n - training speed: 17.45 seconds / 1K images\n\nIn this way, for Cats $256^2$, higher patch resolution steadily gives better results if one has the resource capacity to tune hyper-parameters for it (which was not the case for us). 
However, it comes with a considerably higher training cost.\nPlease note, that StyleGAN2-ADA/StyleGAN3/EG3D also do not have robustness with respect to R1 penalty weight $\\gamma$: e.g., for StyleGAN2-ADA (the most robust one) we obtained FID of 4.5 vs FID of 3.8 on FFHQ $256^2$ for $\\gamma = 0.1$ and $\\gamma = 0.05$.\nOur preliminary experiments showed that architectural variations are comparable for a fixed $\\gamma$ when the dataset and the resolution are also fixed.\n\n=====================================================================\n\n**Q**. *It is good to see that the proposed generators can generate up to 512x512 images. I wonder what's maximum is. Can it be scaled to 1024x1024?*\n\n**A**.\nIn our project, we launched just a single experiment somewhere close to the submission deadline to train our generator on FFHQ $1024^2$ with $128^2$ patch resolution. It gave us an FID of 20.33 at 3.5M processed images ($\\approx$ 15\\% of overall training time) which is good performance, but then the image quality had not been improving till 15M processed images. We decided to stop the training to free the resources for more urgent experiments since we hypothesized that we would need to increase the tri-plane resolution for it and likely tune some hyper-parameters.\nWe believe that patch-wise training is applicable at high resolution ([AnyResGAN](https://chail.github.io/anyres-gan/) gives some ground to this claim), and we plan to explore this in the future.\n\n**Q**. *For face and cat, the camera poses are limited to the front views. Can you do large view changes, e.g., rendering side faces or rendering from overhead? I want to know whether the radiance field will remain view-consistent for unseen training views.*\n\n**A**.\nSince we do not condition the color prediction branch on view angles (similar to $\\pi$-GAN, EG3D, and other prior works), it always remains view-consistent, even at extreme view angles (up to the randomness in the integral calculation in volume rendering).\nIn section 3 of [this web page](https://rethinking-3d-gans.github.io/additional-results), we provide the visualizations for our method and also for EG3D on FFHQ for the camera positions 3 standard deviations away from the frontal pose: i.e., $\\pm 3 \\sigma$ right/left and $\\pm 3 \\sigma$ up/bottom which is $\\pm$0.9 and $\\pm$0.6 radians, respectively.\nFor EG3D, we generated the samples using the official checkpoint and sampling scripts.\nNote that EG3D uses a re-aligned/re-cropped version of FFHQ and renders from a closer distance, which hides potential artifacts in the back of the head.", " We are thankful for the review and the valuable suggestions, and we agree that our developed patch-based discriminator would be a good contribution to the community.\nIn the following, we elaborate on the raised questions, clarify some misunderstandings and provide the results for the additional experiments.\n\n=====================================================================\n\n**Q**. *Does the generator synthesize all images or only a subset of pixels during training? If the tri-plane generator is already able to synthesize all pixels without running out of memory, it seems unnecessary to adopt a patch-based discriminator.*\n\n**A**.\nOur generator synthesizes just $64^2$ pixels for each random sample in a batch during training instead of $512^2$ (or $256^2$) ones, which leads to a greatly improved training speed. 
\n\nSynthesizing $512^2$ pixels on each iteration is not computationally feasible because tri-planes are still very expensive at high resolutions.\nEG3D trains with $64^2$-resolution tri-planes for $25M$ images and then increases this resolution to $128^2$ and fine-tunes for $1.5M$ images.\nThis slight increase in the resolution significantly decreases the training speed: from 24 seconds per 1K images to 46 seconds per 1K images (as per Appx 3 in [EG3D](https://nvlabs.github.io/eg3d/media/eg3d.pdf)) on 8 $\times$ v100 GPUs.\nSo, generating these additional $128^2 - 64^2 = 12288$ pixels in NVidia's tri-plane implementation costs ${\approx}22$ seconds per 1K images.\nIn this way, training a full-resolution tri-plane-based generator at the $512^2$ image size would take **493.3 seconds per 1K images** (we consider the cost of the 2D upsampler to be negligible here).\nThe overall training time until 25M images are processed (the standard schedule for StyleGAN-1/2/3, EG3D, StyleNeRF, our generator, and other models) would thus be **${\approx}$4.5 months** on $8\times$ NVidia V100 GPUs, which is far beyond the resource capacity of most research teams.\n\n=====================================================================\n\n**Q**. *Maybe implementing patch-wise training on top of $\pi$-GAN is more suitable for demonstrating the efficacy of the patch-based discriminator.*\n\n**A**.\nAs noted in the previous answer, tri-planes are still a good test-bed for exploring patch-wise training since they are also very expensive to scale to high resolutions. But, indeed, exploring patch-wise training on top of MLP-based NeRF generators (like $\pi$-GAN) is an exciting direction.\n\nWe implemented a $\pi$-GAN generator in our repo with patch-wise training for this discussion.\nAs the generator, we used an 8-layer MLP with 256 channels and positional embeddings of the coordinates (the setup from the original NeRF paper).\nWe had to decrease the patch size to $32^2$ from $64^2$ to make it train faster due to the time limit (7.2 seconds per 1K images instead of 18.3).\nFurthermore, we also used 24 steps per ray in volumetric rendering.\nThe attained FID after 10M seen images (1 day of training on 8$\times$ v100s) is 21.46; for comparison, the original $\pi$-GAN obtains an FID of 68.28 after full training, which takes 1 week on $8\times$ V100 at the $256^2$ resolution.\nWe provide the samples for this model in Section 6 of [this web page](https://rethinking-3d-gans.github.io/additional-results).\nNote that FID is greatly affected by $\pi$-GAN's circular artifacts in the generations (see the [original samples](https://marcoamonteiro.github.io/pi-GAN-website/) on Cats).
\nA full-resolution discriminator would indeed be a reasonable upper bound for the performance.\nHowever, as described in the previous answers, it would be infeasible to train it for a 3D generator.\nIn our project, while exploring different patch-wise training setups, we instead conducted many experiments on 2D generation on top of StyleGAN2-ADA.\nWe reported the key findings in Appx A.1 — together with this full-resolution discriminator upper bound.\nWhat we found is that our current patch-wise training strategy still has a lot of room for improvement compared to full-resolution training: patch-based StyleGAN2 (with the $64^2$ patch resolution) attains an FID of 7.11 on FFHQ $512^2$ after 25M seen images versus an FID of 3.83 for the full-resolution StyleGAN2.\nWe believe that there are two reasons why it under-performs.\nFirst, it received less overall training signal if measured in the amount of seen content, e.g., it has seen fewer \"eye variations\", fewer \"hair samples\", etc. — and training it for longer indeed improves the performance: it attains an FID of 4.76 after 100M seen images ($\times 4$ longer training).\nSecond, in our current setup, we do not provide the information on global content to the discriminator when it judges small-scale patches.\nSeveral of our attempts to provide it did not work (see the Failed experiments section in Appx C), but we believe it to be a promising future research direction.\nAlso, note that a concurrent patch-based [AnyResGAN](https://chail.github.io/anyres-gan/) comes very close to the performance of full-resolution generators — though in their case, the patch size is $256^2$.\n\n=====================================================================\n\n**Q**. *In equation 4, a simple baseline is $\text{lerp}[1, r/R, 1 - \min(t/T, 1)]$. The author should compare this setting to the proposed beta strategy if possible.*\n\n**A**.\nThis is a good suggestion, and we have launched 3 experiments on Cats $256^2$ for $T = 1000, 5000, 10000$.\nWe report the results in Section 4 of [this web page](https://rethinking-3d-gans.github.io/additional-results).\nWe call this sampling strategy \"reversed uniform sampling\": it first covers the full range of patch scales and then gradually shrinks this range to coarse scales only.\nAs one would expect, all the sampling strategies perform similarly during the initial training stage. However, the reversed scale schedule then starts to degrade in terms of performance because the discriminator begins operating on ever coarser scales, making the generator forget high-frequency details, which affects FID.\nWe thank the reviewer for this suggestion and include this exploration in the supplementary material.
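To make the comparison concrete, here is a schematic NumPy sketch of the two schedules (our own simplified parameterization: the linear annealing, `beta_max`, and the exact mapping to `[s_min, 1]` are illustrative and not the paper's exact Equation 4; the second function is one reading of the reviewer's lerp baseline, matching the "reversed uniform sampling" description above):

```python
import numpy as np

def sample_scale_beta(t, T, s_min, beta_max=16.0):
    """Annealed-beta scale sampling (schematic). Early iterations concentrate
    on global, near-full-image patches (scale close to 1); as training
    proceeds, the distribution flattens toward uniform over [s_min, 1]."""
    a = 1.0 + (beta_max - 1.0) * max(0.0, 1.0 - t / T)  # alpha anneals to 1
    u = np.random.beta(a, 1.0)                          # Beta(a, 1): mass near 1
    return s_min + (1.0 - s_min) * u

def sample_scale_reversed_uniform(t, T, s_min):
    """Baseline with lower bound lerp(1, s_min, 1 - min(t/T, 1)): the sampling
    range starts at [s_min, 1] and then shrinks toward coarse scales only."""
    lo = s_min + (1.0 - s_min) * min(t / T, 1.0)
    return np.random.uniform(lo, 1.0)
```

The key difference is where the two schedules end: the annealed-beta variant finishes covering the full range of scales, whereas the reversed-uniform baseline collapses onto coarse scales only, which matches the FID degradation reported above.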
\n\n=====================================================================\n\n**Q**. *Line 204, how to use pose supervision for the discriminator?*\n\n**A**.\nFor pose supervision, we take the rotation and elevation angles, encode them with positional embeddings, and feed them into a 2-layer MLP.\nAfter that, we multiply the obtained vector with the last hidden representation in the discriminator, following the default Projection GAN strategy from StyleGAN2-ADA.\nThis is also the default strategy in EG3D, but for their model, the authors condition on the $4\times4$ camera extrinsics + $3\times 3$ camera intrinsics matrices, obtained from an off-the-shelf estimator.\nIn our case, it is just 2 scalars.\nWe have added these details to the main text.\nBut also note that we release the source code, where any further technical details can be found.", " We are delighted to receive such a positive assessment of our work and are grateful for it.\nBelow, we provide the answers to the raised questions, with a reference to additional results.\n\n=====================================================================\n\n**Q**. *Are all baseline results generated using view-independent NeRF as well?*\n\n**A**.\nThat's true: **all** the generators are trained without view dependence.\nIn our preliminary experiments, we observed that since there is not enough supervision in terms of views (e.g., FFHQ and Cats have just a single view per object) and there exist 3D biases in the existing benchmarks (e.g., frontal views in FFHQ have many more smiling people than side views), the multi-view consistency severely degrades.\nPrior works also disable the view dependence (e.g., $\pi$-GAN, EG3D, GRAM, and others).\n\n=====================================================================\n\n**Q**. *Is the method robust to train with the proposed beta sampling strategy given the hyper-generated patch discriminator?*\n\n**A**.\nWe were unclear in our exposition: Figure 6 shows the improved convergence for the hyper-conditioned rather than the standard discriminator — i.e., our patch sampling scheme does improve optimization in such a setup.\nTo address the inaccuracy, we performed the same ablation on Cats $256^2$ for the standard discriminator and report the results in Section 5 of [this web page](https://rethinking-3d-gans.github.io/additional-results).\nWe also observed the same effect in this scenario: the beta scale sampling strategy made the training more robust and the convergence faster.", " We are very grateful to the reviewers for their careful analysis of our work and all the suggestions they gave. While preparing our response, we ran into an embarrassing misfortune: our internal cluster was shut down for maintenance for 5 days, and it took us a huge effort to migrate to a new external cluster. We are currently running the necessary experiments and preparing the visualizations/comparisons to address all the raised questions and concerns — and will post them with a short delay. We hope that this delay will not be too much of an inconvenience to the reviewers.", " This paper introduces new techniques for training NeRF-based GANs. The proposed techniques address the patch-based training issue, which prevented previous works from directly generating NeRFs that render into high-resolution images. As a result, previous works suffer from long training times, in addition to a lack of view consistency because of image-based upsampling.
The proposed method utilizes a hyper-network to generate filters for the patch discriminator under different resolutions, and utilizes an annealed beta distribution for sampling the random scale for patch discrimination. Both design choices are well founded and validated by the quality of the results. Strengths:\nOriginality:\nThis paper proposes new solutions for patch-based discriminators in training NeRF-based GANs. The proposed techniques are simple to implement and effective, as demonstrated by the results. In addition, the authors validate the quality on new datasets (Megascan plants & food), which further demonstrates the geometric quality of the generated NeRFs.\n\nSignificance:\nI think this paper would be of significance to communities interested in generative methods centered around NeRF. Though the proposed technique is simple, that is more a merit than a drawback. I think such solid gadgets are key to making things work better and better.\n\nQuality:\nThis paper conducts enough experiments to support the claims, and the justifications of the design choices make sense. I find the evidence in the paper convincing, and the webpage of results is quite representative.\n\nClarity:\nThis paper is well written and easy to follow. Readers should be able to reimplement the method based on the information provided.\n 1. Are all baseline results generated using view-independent NeRF as well?\n2. As suggested by the paper, using the beta distribution with the improved annealing improves the stability and convergence speed. Is that also the case for the hyper-network? Is the method robust to train given the hyper-generated patch discriminator?\n\n\n The authors addressed the limitations and potential negative impact adequately. ", " The paper proposes a non-upsampler-based 3D-aware generator. To train the non-upsampler-based generator, the paper presents a patch-based discriminator. In contrast to previous patch-based schemes, the authors improve patch-based discriminators in two ways. First, they adopt location and scale to modulate the discriminator (in particular, the feature maps). Second, they modify the patch sampling strategy based on an annealed beta distribution to stabilize training and accelerate convergence. Furthermore, they introduce two new datasets (plants and food) in order to evaluate 3D-aware generators. Strengths:\n\nThe paper is well-written, and the experiments are comprehensive.\n\nThe patch-based discriminator studied in the paper is important for discriminating high-resolution images and would be a good contribution to the community.\n\nWeaknesses:\n\n1. The paper's main contribution is to propose a new patch-based discriminator, which is particularly useful when the generator can only render partial pixels. I would like to know whether the generator synthesizes full images or only a subset of pixels during training. Since the generator adopts a tri-plane representation, it is memory efficient. If the generator is already able to synthesize all pixels without running out of memory, it seems unnecessary to adopt a patch-based discriminator. \n\n2. I think patch-based discriminators are valuable. A more challenging setting could demonstrate the method's effectiveness, namely training pi-GAN on high-resolution images. Since pi-GAN is memory-intensive, rendering all pixels for high-resolution images is impossible during training. Maybe this setting is more suitable for demonstrating the efficacy of the patch-based discriminator. \n\n3. 
If possible, the authors should provide the results of a full-resolution discriminator. Since the full-resolution discriminator can see the global image, its performance can be used as a reference upper bound for patch-based discriminators.\n\n4. In equation 4, a simple baseline is $lerp[1, r/R, 1 - min(t/T, 1)]$. If possible, the authors should compare this setting to the proposed beta strategy. \n\n5. Line 204, how to use pose supervision for the discriminator? The authors could briefly introduce the details. \n\n\n Please see the weaknesses. Yes, they did. ", " This paper proposes two important techniques to train 3D generative models from 2D supervision. \n\nExisting works adopt NeRF to render 3D radiance fields into 2D images at a large memory cost. As a result, they fail to train the discriminator at high resolution and instead often render 2D images at low resolution and generate high-res images by 2D CNN up-sampling. However, this brings strong view-inconsistency problems. \n\nThis paper explores a patch-based discriminator. By rendering small patches, the generator can directly synthesize high-res images without any upsampling tricks. To improve the quality of the patch discriminator, the paper proposes 1) a location- and scale-aware discriminator and 2) a beta-distribution patch sampling strategy, both showing good improvement. It also produces quite nice and view-consistent results and outperforms the baselines.\n Strength:\nThe patch-based solution to the view-inconsistency problem of NeRF-based generative models is reasonable, and the two proposed techniques are quite simple but seem to work well.\nThe final FID scores are better than all the baselines with much less training time needed, and the results look quite nice!\n\nWeakness:\nIt is unclear why the strategy does not carry over to 2D cases, which raises concerns about generalization.\n I have several questions about the implementation details.\n\n1)\tAs for the rendering equation, did the authors adopt the original ray-based rendering equation (from the original NeRF paper) or the light-cone-based one (from Mip-NeRF)? If I understand correctly, the authors chose the former. In such a case, would it produce aliasing problems when rendering a 64x64 patch at scale 1 (let's say the original image is 256x256)? Adopting Mip-NeRF seems to be the straightforward solution. Could the authors discuss this part?\n\n2)\tHow do you make real patch data for training the discriminator? Are you cropping the real images at specific scales, then resizing them to 64x64? I wonder whether you also consider the aliasing problem here?\n\n3)\tWhat's the relationship between patch size and final image size? E.g., in Table 2 it seems that for the 512x512 image size, the 64 patch size is the best. Does it mean that for the 256x256 image size we should change to a 32 patch size? \n\n4)\tIt is good to see that the proposed generators can generate up to 512x512 images. I wonder what the maximum is. Can it be scaled to 1024x1024? \n\n5)\tFor face and cat, the camera poses are limited to the front views. I wonder whether you can do large view changes, e.g., rendering side faces or rendering from overhead? I want to know whether the radiance field will remain view-consistent for views unseen during training. On the other hand, I recognize that if the training images cover 360 degrees of the object, then the radiance field doesn't have such issues.\n I appreciate that the authors honestly report that the proposed techniques don't work in 2D cases. 
However, I treat it as a limitation of the paper since it undermines the generalization of the techniques. ", " The authors propose a framework to generate 3D images from a latent code. The authors analyze in detail the drawbacks of current methods, which are based on upsampling strategies that can affect multi-view consistency, or rely on low-resolution rendering. They came up with a patch-based strategy similar to GRAF to improve the performance. The idea is to condition the discriminator with a hypernetwork to better make sense of the output of the convolutional filter at multiple scales. Additionally, they propose a beta distribution when sampling the scale, improving the overall convergence. +++ Relevance. The work is super relevant for the community, with a lot of possible applications.\n\n+++ Clarity. The paper is very well written and well motivated. The authors describe in detail all the aspects of 3D generators, explaining their drawbacks and how to mitigate those. In particular, I really appreciated the attempts at 2D generation and all the failing experiments the authors reported in the supplementary material. This could be very useful for the community, as GAN methods usually rely on a substantial amount of trial and error before getting to the right combination of architecture and loss functions.\n\n--- Novelty and Experiments. Whereas this paper has many good aspects and I'd like to see it published, I have a few doubts regarding the overall novelty and the missing comparisons.\n\nIn particular, the method \"borrows\" multiple aspects from EG3D and GRAF, but comparisons are very limited. Indeed, EG3D outperforms this method substantially in terms of FID; the authors hint that the upsampler leads to a better FID score but worse multi-view consistency (l240), yet they do not provide any example of this.\n\nSimilarly, the method borrows the patch-based, multi-scale discriminator idea from GRAF, improving it with the conditioning from a hypernetwork, but I am not sure there is a direct comparison with GRAF?\n\nThere are no examples of inferred geometry. Ideally, the authors would also show the inferred geometry and qualitatively compare with EG3D to prove that the method leads to better multi-view consistency.\n I'd like the authors to comment on the limitations I described in the Strengths and Weaknesses section. In particular, why could one not simply rely on the EG3D tri-plane representation + GRAF patch-based discriminator to achieve the same results proposed here? Is there any experiment showing that this is not sufficient?\n\nAlso, why should one prefer this method over EG3D? The latter seems to obtain better quantitative metrics (and qualitative comparisons are not shown). Yes, Limitations and Ethical concerns are described in the supplementary material" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "Nxw2zXZne5p", "_JwSNEfncHZ", "Zw6wdfSO3dZ", "oIErTxvlPPx", "nips_2022_TTM7iEFOTzJ", "y2zuhLFAsQb", "y2zuhLFAsQb", "HoykHFPUhYJ", "HoykHFPUhYJ", "_JwSNEfncHZ", "_JwSNEfncHZ", "O02Rrg9lCPS", "nips_2022_TTM7iEFOTzJ", "nips_2022_TTM7iEFOTzJ", "nips_2022_TTM7iEFOTzJ", "nips_2022_TTM7iEFOTzJ", "nips_2022_TTM7iEFOTzJ" ]
nips_2022_oprTuM8F3dt
Coordinates Are NOT Lonely - Codebook Prior Helps Implicit Neural 3D representations
Implicit neural 3D representation has achieved impressive results in surface or scene reconstruction and novel view synthesis, typically using coordinate-based multi-layer perceptrons (MLPs) to learn a continuous scene representation. However, existing approaches, such as Neural Radiance Field (NeRF) and its variants, usually require dense input views (i.e., 50-150) to obtain decent results. To relieve the over-dependence on massive calibrated images and to enrich the coordinate-based feature representation, we explore injecting prior information into the coordinate-based network and introduce a novel coordinate-based model, CoCo-INR, for implicit neural 3D representation. The core of our method consists of two attention modules: codebook attention and coordinate attention. The former extracts useful prototypes containing rich geometry and appearance information from the prior codebook, and the latter propagates such prior information into each coordinate and enriches its feature representation for a scene or object surface. With the help of the prior information, our method can render 3D views with more photo-realistic appearance and more accurate geometry than current methods, while using fewer calibrated images. Experiments on various scene reconstruction datasets, including DTU and BlendedMVS, and on the full 3D head reconstruction dataset H3DS demonstrate the robustness of our proposed method under few input views and its capability to preserve fine details.
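For intuition, here is a minimal PyTorch sketch of how such a pair of attention modules could be wired — this is only our illustration of the abstract's description; the dimensions, head counts, residual connection, and the frozen-codebook choice are assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class CoCoBlock(nn.Module):
    """Illustrative reading of the abstract (all sizes/names hypothetical):
    learnable queries distill scene-relevant prototypes from a frozen codebook
    (codebook attention); each coordinate feature then cross-attends to these
    prototypes (coordinate attention)."""

    def __init__(self, code_dim=256, n_codes=16384, n_proto=128, feat_dim=256):
        super().__init__()
        self.codebook = nn.Embedding(n_codes, code_dim)   # e.g., from VQGAN
        self.codebook.weight.requires_grad_(False)        # keep the prior fixed
        self.proto_queries = nn.Parameter(torch.randn(n_proto, code_dim))
        self.codebook_attn = nn.MultiheadAttention(code_dim, 8, batch_first=True)
        self.coord_attn = nn.MultiheadAttention(feat_dim, 8, kdim=code_dim,
                                                vdim=code_dim, batch_first=True)

    def forward(self, coord_feat):                        # (B, N_points, feat_dim)
        cb = self.codebook.weight.unsqueeze(0)            # (1, n_codes, code_dim)
        q = self.proto_queries.unsqueeze(0)
        protos, _ = self.codebook_attn(q, cb, cb)         # scene prototypes
        protos = protos.expand(coord_feat.shape[0], -1, -1)
        prior, _ = self.coord_attn(coord_feat, protos, protos)
        return coord_feat + prior                         # enriched coordinates
```

The design point this sketch conveys is that the expensive attention over the full codebook happens only for a small set of learnable queries, so each 3D coordinate afterwards attends over a short prototype sequence rather than over thousands of codes.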
Accept
This paper focuses on improving the training efficiency of coordinate-based representations by reducing the number of camera views needed during training. To accomplish this, the authors proposed a codebook attention module and a coordinate attention module to inject prior knowledge into implicit representations. The intuition is that doing so encourages the network to learn the semantic correlation between the input point and the scene, enabling "extrapolation" to far-away views. The reviewers appreciated the idea of the paper and how it improved image reconstruction quality across various numbers of views. They raised concerns regarding the lack of experiments with models that perform conditioning on pixel features, e.g., PixelNeRF, to generalize across scenes, as well as the lack of ablative quantitative experiments to evaluate the architectural contributions and the lack of a limitations section. The rebuttal submitted by the authors includes experiments with PixelNeRF but does not mention the number of views used, and does not describe how the method “ours (with scan 51 priors)” is obtained. The second experiment, where PixelNeRF is trained on DTU and tested on BlendedMVS, is not a fair experiment, and we do expect PixelNeRF to fail there, given there is no test-time adaptation through gradient descent at the test scene, as is the case with the present method. The authors are encouraged to move all rebuttal experiments to the main paper and thoroughly explain the experimental setup they used. Overall, the paper is not very clearly written. Specifically, the reader learns only at the end of the implementation details section that a separate network is trained per scene. Reviewer q1CY mentions: “Although prior knowledge are used in this work, *it seems like* the model is still trained specifically to one scene and is hard to generalize to novel scenes.” By not comparing or contrasting the proposed approach with cross-scene generalization works, the reader is left to wonder what the generalization capabilities of the proposed model are with varying numbers of input views. The authors are encouraged to clarify these points in their final version.
train
[ "Aiv3ZATEBCQ", "p0hiBHjQKxn", "DieeZHR0UCi", "UcLwAW8NYI", "MsJ9KCF5rKt", "mGXGF9ItiLb", "EFj9gyTTp4C", "5fW-EVnC9cB", "ttr_7Z9pJUS", "jXaBqHcJxwT", "1wzO1UnyoE", "Nz449rURjg", "_gYTfxpMaiS", "QP90EEcZ3Gs", "F28JDvevpA", "-Wn3zThW2Q5", "QkfhKWvtpU3", "Yjz6V0xYRjL" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate your valuable and insightful comments. We feel glad about your generally favorable assessment of our methodology. Additional evaluation/ablation and corresponding explanations will be included in the final version.", " We appreciate your valuable and insightful comments. We feel glad about your generally favorable assessment of our methodology. Additional evaluation/ablation and corresponding explanations will be included in the final version.", " I thank the authors for their extensive response and especially for running this many further experiments. The response has addressed my concerns and I will update my rating accordingly. I would encourage the authors to include (if space permits) some of the additional explanations from this response into the main paper because these really helped with my understanding of the paper.", " Thank authors for the detailed response to my questions and concerns.\n\nI appreciate the clarification of Coco-INR compared against approaches relying on scene-related priors. Additional evaluation and abalation of key components also adressed most of my concerns.\n\nAfter reading all the reviews and authors' feedbacks, I would like to keep my positive rate towards this paper and recommend an acceptance.", " We appreciate your approval of our idea and the detailed and insightful comments. Your concerns will be addressed in the following comments and the final version of our paper will be updated accordingly.\n\n📝 **Q: Evaluation with pre-existing evaluation protocols.** \n\n💡&#8194;**A:** Following the reviewer's suggestion, we conduct new experiments following existing evaluation methods like VolSDF. As these methods only provide PSNR metric results in their papers, we thus also compare with them in terms of PSNR but meanwhile report the SSIM and LPIPS of our approach. As shown in table below, our method can also achieve good results on pre-existing evaluation protocols.\n\n| |NeRF |VolSDF |Ours| Ours |Ours|\n|:--:|:--:|:--:|:--:|:--:|:--:|\n|Scan |PSNR↑ |PSNR↑ |PSNR↑ |SSIM↑| LPIPS↓|\n|24 |26.24| 26.28| **27.42** |0.813 |0.207|\n|37 |25.74| 25.61 |**26.32** |0.692 |0.223|\n|40 |26.79 |26.55 |**26.86** |0.569 |0.225|\n|55 |**27.57** |26.76 |26.98 |0.854 |0.307|\n|63 |31.96 |31.57 |**31.99** |0.803 |0.199|\n|65 |31.50 |31.50 |**32.63** |0.879 |0.268|\n|69 |29.58 |29.38 |**30.27** |0.939 |0.273|\n|83 |32.78 |33.23 |**34.34** |0.941 |0.333|\n|97 |28.35 |28.03 |**28.96** |0.913 |0.313|\n|105 |32.08| 32.13| **33.01** |0.932| 0.333|\n|106 |33.49 |33.16 |**33.77** |0.945 |0.332|\n|110 |31.54 |31.49 |**32.01** |0.943 |0.383|\n|114 |31.00 |30.33| **31.18** |0.916 |0.321|\n|118 |35.59 |34.90 |**35.63** |0.956 |0.322|\n|122 |35.51 |34.75 |**35.66** |0.960 |0.337|\n|Mean| 30.65 |30.38 |**31.14** |0.870 |0.292|\n\n📝 **Q: Fig.1. in the supplementary material very helpful to understand the proposed architecture.** \n\n💡&#8194;**A:** Thanks for your suggestions. We will remove the repeated part compared with Fig.1. (in the paper), then add Fig.1. (in the supplementary material) to the final version for a better understanding.\n\n📝 **Q: What is the intuition behind using the VQGAN pre-trained codebook? 
Would it be better to use a pre-trained codebook from a 3D network?** \n\n💡&#8194;**A:** The intuition behind the VQ-quantized codebook prior is to introduce additional high-level semantic descriptors of the context scene as prior information, to enrich the feature representation of each coordinate and to compensate for the missing views when only sparse views are available. In our experiments, we borrow the codebook from a 2D-aware dataset (ImageNet) without introducing an additional 3D prior, for a fairer comparison; prior works also suggest that 2D and 3D priors can benefit each other's representations [1,2]. To verify the codebook's effectiveness, we replace the pre-trained codebook of the INR network with random initialization under both the sparse-views and few-views settings and report the results in the tables below. We can see that the performance drops significantly without the 2D priors, and the robustness is reduced substantially with fewer views, as the model fails on scan24 under the few-views setting.\n\n|Method with few views (5-8) |PSNR↑ |SSIM↑ |LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |Without INR codebook |13.979 |0.564 |0.298 |\n |Randomly initialized INR codebook |11.941 |0.512 |0.327 |\n |**Ours (pre-trained codebook)** |**15.622** |**0.576** |**0.283** |\n\n |Method with sparse views (16-32) |PSNR↑ |SSIM↑ |LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |Randomly initialized INR codebook | 22.307 |0.615 | **0.229** |\n |**Ours (pre-trained codebook)** |**22.907** |**0.688** |**0.229** |\n\nBesides, we have noticed that recent works like 3D-RETR [3], AutoSDF [4], and ShapeFormer [5] have successfully trained VQ-quantized codebooks on a 3D-aware dataset like ShapeNet. We believe that introducing 3D-aware codebook priors could further boost the performance, especially for geometry. However, exploring how to train a 3D-aware codebook prior and leverage it for the NeRF task is beyond this work's focus.\n\n[1] Hou J, Xie S, Graham B, et al. Pri3D: Can 3D priors help 2D representation learning? Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021: 5693-5702.\n\n[2] Liu Y, Wang L, Liu M. YOLOStereo3D: A step back to 2D for efficient stereo 3D detection. 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021: 13018-13024.\n\n[3] 3D-RETR: End-to-End Single and Multi-View 3D Reconstruction with Transformers. Zai Shi, Zhao Meng, Yiran Xing, Yunpu Ma, and Roger Wattenhofer. BMVC 2021.\n\n[4] AutoSDF: Shape Priors for 3D Completion, Reconstruction and Generation. Paritosh Mittal, YenChi Cheng, Maneesh Singh, and Shubham Tulsiani. CVPR 2022.\n\n[5] ShapeFormer: Transformer-based Shape Completion via Sparse Representation. Xingguang Yan, Liqiang Lin, Niloy J. Mitra, Dani Lischinski, Danny Cohen-Or, and Hui Huang. arXiv:2201.10326.", " 📝 **Q: Do more coordinate attention modules mean that the codebook is more important for generating the color?** \n\n💡&#8194;**A:** It is related to the properties of the codebook. Our codebook is trained in 2D and can provide the Neural Renderer network with rich feature representations and texture information to produce more photo-realistic results. For the Implicit Neural Representation, it just requires injecting prior codebook information into each coordinate, resulting in a more robust representation. 
The former (the Neural Renderer) needs to learn textures from rich geometry features and view-consistent prototypes, which is more challenging because human visual perception demands detail-preserving capability.\n\n📝 **Q: Reduce the number of MLP layers (maybe even to 0).** \n\n💡&#8194;**A:** Experimental results of reducing the number of MLP layers in the Implicit Neural Representation module are shown in the tables below. Benefiting from our CoCo-Attention module, which queries representative features from per-scene-relevant learnable embeddings for each coordinate, new views can be synthesized even with 1 MLP layer under different settings of view sparsity. However, using 0 MLP layers does not work, because we need at least 1 MLP layer for the spherical geometry initialization, as used in VolSDF.\n\n |Method with few views (5-8) | PSNR↑ |SSIM↑ |LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |2-layer MLP |14.656 |0.560 | 0.292 |\n |1-layer MLP | 13.210 | **0.576** |0.311 |\n |0-layer MLP |7.309 | 0.303 |0.401 |\n |**Ours (4-layer MLP)** |**15.622** | **0.576** | **0.283** |\n\n |Method with sparse views (16-32) |PSNR↑ |SSIM↑ |LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |2-layer MLP | 21.946 |0.592 |0.240 |\n |1-layer MLP | 21.171 |0.622 | 0.231 |\n |**Ours (4-layer MLP)** |**22.907** | **0.688** | **0.229** |\n\n |Method with all views |PSNR↑ | SSIM↑ |LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |2-layer MLP | 26.163 | **0.706** | 0.227 |\n |1-layer MLP | 25.332 |0.644 |0.237 |\n |**Ours (4-layer MLP)** | **26.870** |0.691 |**0.218** |\n", " 📝 **Q: Reduce the number of utilized codes.** \n\n💡&#8194;**A:** We report the results for different numbers of codebook items, ranging from 16384 down to 8192, 4096, 2048, and 1024, in the table below. It shows that the performance tends to decrease as the number of codebook items is reduced.\n\n |Method with few views (5-8) |PSNR↑ | SSIM↑ | LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |8192 items | 15.463 | 0.567 |0.287 |\n |4096 items | 14.042 | 0.550 |0.288 |\n |2048 items | 14.762 |0.565 |0.285 |\n |1024 items | 13.948 | 0.548 |0.300 |\n |**Ours (16384 items)** |**15.622** | **0.576** |**0.283**|\n\n📝 **Q: Use a randomly initialized codebook.** \n\n💡&#8194;**A:** Experimental results of either using a randomly initialized codebook or no codebook are shown in the tables below. The performance and qualitative quality of both settings degrade significantly, and some scans fail with few views. 
Therefore, our prior can provide stronger robustness and better rendered images, especially with fewer views.\n\n |Method with few views (5-8) |PSNR↑ |SSIM↑ | LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |Without codebook |12.694 |0.523 |0.325 |\n |Randomly initialized codebook |13.315 | 0.525 |0.298 |\n |**Ours (pre-trained codebook)** |**15.622** |**0.576** | **0.283** |\n\n |Method with sparse views (16-32) | PSNR↑ | SSIM↑ |LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |Randomly initialized codebook | 21.458 | 0.615 |0.233 |\n |**Ours (pre-trained codebook)** |**22.907** |**0.688** | **0.229** |\n\n📝 **Q: Exclude the skip connection.** \n\n💡&#8194;**A:** The skip connection cannot be removed in the implicit neural representation, because the network requires at least one MLP layer with coordinate information for the analytic geometric (spherical) initialization, the same as in VolSDF.\n\n |Method with few views (5-8) |PSNR↑ | SSIM↑ | LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |Without skip-connection | 7.969 | 0.380 | 0.371 |\n |**Ours (with skip-connection)** |**15.622** | **0.576** |**0.283** |\n\n |Method with sparse views (16-32) |PSNR↑ |SSIM↑ | LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |Without skip-connection |10.088 | 0.454 | 0.369 |\n |**Ours (with skip-connection)** |**22.907** |**0.688** |**0.229** |\n\n📝 **Q: Use more coordinate attention modules in the \"Implicit Neural Representation\" and fewer in the \"Neural Renderer\".** \n\n💡&#8194;**A:** The number of coordinate attention modules in the Implicit Neural Representation (geometry) and Neural Renderer (color) networks needs to be tuned, since the RGB color supervision is applied only through the Neural Renderer network. We perform ablation studies with different numbers of modules in the table below. We can see that directly using fewer modules in the Neural Renderer decreases the performance due to the reduced color representation, while directly using more modules in the Implicit Neural Representation also reduces the performance: since the RGB supervision is applied to the color part, blindly increasing the geometry network's parameters may result in overfitting.\n\n |Method with few views (5-8) | PSNR↑ |SSIM↑ |LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |INR #1, ↓NR #1 |14.840 | 0.560 | 0.291 |\n |↑INR #2, NR #2 |14.187 | 0.553 | 0.287 |\n |↑INR #2, ↓NR #1 |13.946 | 0.528 | 0.323 |\n |**Ours (INR #1, NR #2)** | **15.622** | **0.576** | **0.283** |
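As context for the spherical-initialization constraint mentioned in the skip-connection answer above, here is a hedged sketch of the standard recipe (simplified from SAL/IDR-style geometric initialization as used by VolSDF; the `radius` value and the tiny standard deviation are illustrative):

```python
import torch
import torch.nn as nn

def spherical_init_(linear_out, radius=1.0):
    """Geometric initialization (simplified): set the last layer of the SDF
    MLP so that the network starts out roughly as the SDF of a sphere,
    sdf(x) ~ ||x|| - radius."""
    in_dim = linear_out.weight.shape[1]
    nn.init.normal_(linear_out.weight,
                    mean=torch.pi ** 0.5 / in_dim ** 0.5, std=1e-4)
    nn.init.constant_(linear_out.bias, -radius)
```

Because this initialization acts on a layer that consumes the raw coordinate features, at least one coordinate-fed MLP layer must remain in the geometry branch, which is the point made in the answer above.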
", " We appreciate your approval of our idea and the detailed and insightful comments. Your concerns will be addressed in the following comments, and the final version of our paper will be updated accordingly.\n\n📝 **Q: Compared to VolSDF, quantitative results are tied, but qualitative results show a larger gap.** \n\n💡&#8194;**A:** We would like to clarify this from two aspects as follows.\n\n**1)** Effectiveness of our method: \nOur method is more robust and preserves more precise details under few and sparse input views than VolSDF. VolSDF fails on some scans, such as DTU scan118 (sparse views), DTU scan63 (few views), and BlendedMVS scan6 (few views). In contrast, our method produces decent results with precise details on these scenes, as shown in Fig. 2. For quantitative analysis (see the table below) on the BlendedMVS and H3DS datasets, which have richer texture information, our method achieves a consistent and significant improvement of more than 3\% in PSNR and more than 5\% in LPIPS.\n\n| |DTU | DTU |DTU | BlendedMVS | BlendedMVS | BlendedMVS | H3DS | H3DS | H3DS |\n|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|\n | |PSNR↑ | SSIM↑ | LPIPS↓ | PSNR↑ |SSIM↑ |LPIPS↓ |PSNR↑ |SSIM↑ |LPIPS↓ |\n |VolSDF |26.609 |0.839 | 0.309 | 18.942 | 0.747 | 0.213 | 23.922 | 0.898 | 0.110 |\n |**Ours** |26.738 |0.852 | 0.298 | 19.594 | 0.764 | 0.201 |25.279 | 0.911 | 0.098 |\n |**Improvement(%)** | **0.4** | **1.5** |**3.5** | **3.4** | **2.3** | **5.6** |**5.7** | **1.4** | **10.9** |\n\n**2)** Evaluation metrics: \n\nVisual similarity is very subjective, and metrics should aim to mimic human visual perception. Simple metrics like PSNR and SSIM are insufficient to assess an image's perceptual quality [1,2]. A well-known example is that blurring causes significant perceptual artifacts but only a small $L2$ change. For our method, it can be seen that the performance on DTU scan118 is much better than VolSDF, as observed in Fig. 2, yet our method's PSNR and SSIM scores are lower than those of VolSDF. So we introduce LPIPS as an additional evaluation metric in the paper. On each dataset, the LPIPS of our method is over 3\% better than that of VolSDF, which we consider a more meaningful metric.\n\n[1] Zhang R, Isola P, Efros A A, et al. The unreasonable effectiveness of deep features as a perceptual metric. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018: 586-595.\n\n[2] Ma Y, Zhai Y, Yang C, et al. Variable Rate ROI Image Compression Optimized for Visual Quality. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021: 1936-1940.\n\n📝 **Q: Performance improvement due to large model size or design.** \n\n💡&#8194;**A:** To verify whether the improvement comes from our proposed CoCo modules or from additional parameters, we double the dimension and number of MLP layers of VolSDF (called enlarged VolSDF), so that its parameter count (6.21M) is approximately the same as our method's (6.16M). The table below shows the experimental results. Directly stacking parameters cannot yield a significant performance improvement. In fact, due to the sparsity of training views, the network easily overfits, and the enlarged VolSDF often outputs misplaced images (images that are close to a validation view but actually belong to a training view). In contrast, our method uses a large share of its parameters to query learnable embeddings from the codebook. These parameters are only related to generating a scan-related prior and do not directly operate on coordinates, which maintains the generalization to new viewpoints and prevents over-fitting.\n\n|Method with few views (5-8)| PSNR↑| SSIM↑ |LPIPS↓|\n|:--:|:--:|:--:|:--:|\n|VolSDF| 14.249 |0.557| 0.290|\n|Enlarged VolSDF |14.322| 0.563| 0.294|\n|**Ours** |**15.622** |**0.576** |**0.283**|\n\n📝 **Q: The model is hard to generalize to novel scenes.** \n\n💡&#8194;**A:** In this work, we focus on how to bring additional prior information into implicit neural representation networks. We have successfully demonstrated the effectiveness of the dataset-related priors from the VQ-quantized ImageNet codebook for the scene reconstruction and novel view synthesis tasks. However, the cross-scene generalization of INR is another critical area that needs more attention in the future. 
Our method has the potential to be extended to cross-scene generalization by introducing scene-related codebook priors, pre-training on a cross-scene dataset, and finetuning with faster convergence on a novel scene, like Pixel-NeRF.", " We appreciate your approval of our idea and the detailed and insightful comments. Your concerns will be addressed in the following comments, and the final version of our paper will be updated accordingly.\n\n📝 **Q: How to select training views and testing views.** \n\n💡&#8194;**A:** We divide the number of views into three settings: Sparse views (16-32 in total), Few views (5-8 in total), and Extremely limited views (only 3), which are sampled as follows.\n\nFor the first case, views are evenly sampled. 
In particular, for original scans with fewer than 64 views, we sample one from every two views for training according to the view ID, and the rest are used as testing views (i.e., for training: 0, 2, 4, 6...; for testing: 1, 3, 5, 7...). For original scans with more than 64 views of an object, the sampling interval is changed to $\lfloor \frac{\\#Views}{32} \rfloor$.\n\nFor the second case, views are evenly sampled. In particular, for original scans with fewer than 64 views, we sample one from every eight views for training according to the view ID, and the rest are used as testing views (i.e., for training: 0, 8, 16, 24...). For scans with more than 64 views, the sampling interval is changed to $\lfloor \frac{\\#Views}{8} \rfloor$.\n\nFor the third case, we select three representative views for each scan as training views (usually the left, right, and top views), and the remaining views are used for testing.\n\nAs the comment box in the OpenReview system does not support image and graphics input, we will provide the selected view lists and camera trajectory renderings along with the code in the final version.\n\n📝 **Q: Are the new views interpolations or extrapolations?** \n\n💡&#8194;**A:** We would like to clarify that in the sparse-views (16-32) setting, most testing views are interpolations. On the other hand, in the few and extremely limited views (3-8) settings, most testing views are extrapolations.
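For reference, the even sub-sampling protocol above can be summarized by the following sketch (ours, not the released code; `every` and `target` correspond to the intervals stated above):

```python
def split_views(num_views, every=2, target=32):
    """Even view sub-sampling (sketch of the protocol described above): keep
    every `every`-th view for training when a scan has fewer than 64 views;
    otherwise use an interval of floor(num_views / target)."""
    k = every if num_views < 64 else num_views // target
    train_ids = list(range(0, num_views, k))
    test_ids = [i for i in range(num_views) if i not in train_ids]
    return train_ids, test_ids
```

For example, `split_views(49, every=2, target=32)` reproduces the sparse split 0, 2, 4, ..., while `split_views(49, every=8, target=8)` gives the few-views split 0, 8, 16, 24, ...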
", " \n📝 **Q: Comparison to other baselines introducing learnable/pre-trained priors.**\n\n💡&#8194;**A:** We have also noticed these works on NeRF generalization. Even though these works can reconstruct new scenes/objects based on a small number of views and show a certain degree of generalization, the priors they use differ from ours. Specifically, the priors in these works are learned from many scans in the same dataset as the testing scenes, and the performance thus heavily depends on whether the priors and testing scenes have a high semantic and appearance correlation. In contrast, our prior is a codebook obtained by training VQ-GAN on a 2D-aware dataset, ImageNet, which is not specially designed for a particular scene/object or even a specific 3D dataset, thus relieving the NeRF model of over-dependence on training-testing scene consistency. Nevertheless, we follow the review comment and conduct new experiments as follows.\n\n &#8195; &#8195;In Pixel-NeRF's experimental protocols, many scans in the DTU dataset share the same object. We take the first test scene (scan 8) of Pixel-NeRF on the DTU dataset as an example. Scan 8 and scan 51 are two scans sharing the same object, but scan 51 is used for training and scan 8 for testing in Pixel-NeRF. The priors in Pixel-NeRF thus share a high-correlation context (the same object) between the training and testing scenes, while our codebook priors are scene-agnostic. Our CoCo-INR can also learn scene-related priors, the same as Pixel-NeRF, and we perform experiments with our method using scene-related priors and scene-agnostic priors, as shown in the table below. Our CoCo-INR with scene-related priors outperforms Pixel-NeRF under the same setting, and our CoCo-INR with scene-agnostic priors achieves performance roughly comparable to Pixel-NeRF with scene-related priors.\n\n|Method |PSNR↑ |SSIM↑ |LPIPS↓|\n|:-:|:-:|:-:|:-:|\nPixel-NeRF (with scan 51 priors) |19.927| **0.776** |0.202|\n**Ours (without scan 51 priors)** |18.263 |0.673 |0.173|\n**Ours (with scan 51 priors)**| **29.583** |0.708 |**0.125**|\n \n &#8195; &#8195;Meanwhile, we further design a relatively fair comparison experiment without scene intersection between training and testing. We use the Pixel-NeRF model pre-trained on the DTU dataset and test it on the BlendedMVS dataset. Pixel-NeRF introduces multi-view (3D) priors from the DTU dataset, while our method only introduces 2D priors from the ImageNet dataset, and both are tested on the BlendedMVS dataset.\n \n | |Pixel-NeRF| Ours| Pixel-NeRF| Ours |Pixel-NeRF |Ours|\n|:--:|:--:|:--:|:--:|:--:|:--:|:--:|\n |Scan |PSNR↑| PSNR↑| SSIM↑ |SSIM↑ |LPIPS↓ |LPIPS↓|\n |1 |9.329 | **16.597** | 0.329 | **0.630** |0.377 | **0.265** |\n |2 |9.188 | **15.277** | 0.318 |**0.637** |0.338 |**0.303** |\n |3 |10.180 | **10.965** | 0.303 | **0.490** |0.473 |**0.398** |\n |4 | 7.199 | **14.614** |0.366 | **0.735** | 0.284 |**0.254** |\n |5 | 8.308 | **14.358** | 0.352 | **0.699** | 0.355 |**0.268** |\n |6 | 9.310 | **12.114** | 0.318 |**0.575** | **0.353** |0.366 |\n |7 | 8.212 | **15.955** | 0.301 |**0.659** | 0.409 |**0.269** |\n |8 | 12.831 | **16.384** | 0.505 |**0.759** | 0.347 | **0.178** |\n |9 | 9.542 |**13.491** | 0.243 | **0.487** | 0.480 | **0.386** |\n |Mean |9.344 | **14.417** | 0.337 | **0.630** |0.380 | **0.299** |\n\n &#8195; &#8195;The table above shows that Pixel-NeRF does not perform as well as our method when both use cross-dataset priors. So even introducing priors from other datasets still cannot guarantee synthesizing novel views well. The design of prior learning and embedding, as well as equipping coordinates with richer features, plays a crucial role in NeRF view synthesis, which is also our work's focus.\n\n📝 **Q: How do CoCo-VolSDF and CoCo-NeRF separately represent the full scene?** \n\n💡&#8194;**A:** We follow NeRF++ to model a scene as the foreground object in a bounded sphere and the unbounded background via an inverted sphere parameterization. The only difference between CoCo-VolSDF (foreground) and CoCo-NeRF (background) is the type of geometry function (implicit network). We use the Signed Distance Function (SDF), the same as VolSDF, to model the foreground surface in CoCo-VolSDF, while we use a density function to model the unbounded background in CoCo-NeRF.\n\n &#8195; &#8195;The grey regions in Fig. 2 and Fig. 3 (in the paper) with the \"Normal\" caption are empty regions predicted by the foreground network (CoCo-VolSDF). In contrast, those empty regions with colors in Fig. 2 and Fig. 3 with the \"Render\" caption are the colors predicted by the background network (CoCo-NeRF). The noisy patterns in the first two columns of the two figures are mainly due to the sparsely sampled training views of the DTU dataset, with an ambiguity between foreground and background. However, since the sparsely sampled training views in BlendedMVS and H3DS almost cover 360 degrees with a more apparent boundary between foreground and background, the noisy patterns disappear.
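As background for the NeRF++-style decomposition discussed in the answer above, here is a minimal sketch of the inverted-sphere parameterization used for the unbounded background (a simplified rendition of the NeRF++ idea; the clamp for interior points is our own convenience):

```python
import torch

def inverted_sphere_coords(x):
    """Map a point outside the unit sphere to (x / ||x||, 1 / ||x||), so the
    infinite exterior becomes a bounded 4D input for the background network."""
    r = x.norm(dim=-1, keepdim=True).clamp(min=1.0)
    return torch.cat([x / r, 1.0 / r], dim=-1)
```

This bounded 4D input lets the background network represent arbitrarily distant content, which is why the colored "empty" regions in the renders come from CoCo-NeRF rather than from the foreground SDF.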
", " \n📝 **Q: How does the richness of priors affect the performance of CoCo-INR?** \n\n💡&#8194;**A:** We report the results for different numbers of codebook items, ranging from 16384 down to 8192, 4096, 2048, and 1024, in the table below. It shows that the performance tends to decrease as the number of codebook items is reduced.\n\n |Method with few views (5-8) |PSNR↑ | SSIM↑ | LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |8192 items | 15.463 | 0.567 |0.287 |\n |4096 items | 14.042 | 0.550 |0.288 |\n |2048 items | 14.762 |0.565 |0.285 |\n |1024 items | 13.948 | 0.548 |0.300 |\n |**Ours (16384 items)** |**15.622** | **0.576** |**0.283**| \n \n📝 **Q: More ablation studies**: Per-scene learnable codebook vs. fixed pre-trained codebook vs. without the codebook attention module.\n\n💡&#8194;**A:** Experimental results of either using a randomly initialized codebook or no codebook are shown in the table below. The performance and qualitative quality of both settings degrade significantly, and some scans fail with few views. Therefore, our prior can provide stronger robustness and better rendered images, especially with fewer views.\n\n |Method with few views (5-8) |PSNR↑ |SSIM↑ | LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |Without codebook |12.694 |0.523 |0.325 |\n |Randomly initialized codebook |13.315 | 0.525 |0.298 |\n |**Ours (pre-trained codebook)** |**15.622** |**0.576** | **0.283** |\n\n📝 **Q: More ablation studies**: Reduce the number of MLP layers (even to 0).\n\n💡&#8194;**A:** Experimental results of reducing the number of MLP layers in the Implicit Neural Representation module are shown in the table below. Benefiting from our CoCo-Attention module, which queries representative features from per-scene-relevant learnable embeddings for each coordinate, new views can be synthesized even with 1 MLP layer under different settings of view sparsity. However, using 0 MLP layers does not work, because we need at least 1 MLP layer to model a scene's foreground object inside a bounded sphere, as in NeRF++.\n\n |Method with few views (5-8) | PSNR↑ |SSIM↑ |LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |2-layer MLP |14.656 |0.560 | 0.292 |\n |1-layer MLP | 13.210 | **0.576** |0.311 |\n |0-layer MLP |7.309 | 0.303 |0.401 |\n |**Ours (4-layer MLP)** |**15.622** | **0.576** | **0.283** |\n\n📝 **Q: More ablation studies**: The number of coordinate attention modules.\n\n💡&#8194;**A:** We perform ablation studies using more coordinate attention modules in the Implicit Neural Representation (geometry) network and fewer in the Neural Renderer (color) network, reported in the table below. We can see that directly using fewer modules in the Neural Renderer decreases the performance due to the reduced color representation, while directly using more modules in the Implicit Neural Representation also reduces the performance: since the RGB supervision is applied to the color part, blindly increasing the geometry network's parameters may result in overfitting.\n\n |Method with few views (5-8) | PSNR↑ |SSIM↑ |LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |INR #1, ↓NR #1 |14.840 | 0.560 | 0.291 |\n |↑INR #2, NR #2 |14.187 | 0.553 | 0.287 |\n |↑INR #2, ↓NR #1 |13.946 | 0.528 | 0.323 |\n |**Ours (INR #1, NR #2)** | **15.622** | **0.576** | **0.283** |\n\n📝 **Q: Figure 2 in the Supplementary Material has an incorrect title.** \n\n💡&#8194;**A:** Thanks for pointing out this typo. It should be \"Qualitative visualization results (zoom in for the best view) on the DTU dataset with 3 extremely limited views.\" We will correct it in the final version.\n\n📝 **Q: Does the method scale well to larger or unbounded scenes?** \n\n💡&#8194;**A:** Thanks. Our proposed CoCo-INR can be applied to large-scale or unbounded scenes following NeRF++'s foreground and background parameterization. 
The experimental results on the BlendedMVS dataset (scenes with foreground objects and an unbounded background) have demonstrated our method's capability to model unbounded scenes.\n\n&#8194;&#8194;&#8194;&#8194;For a large-scale scene such as a city, our CoCo-INR could introduce different local codebook priors for each street block to separately model the local regions of the city. With the help of local codebook priors and the powerful representation ability of the transformer-based Multi-Head Attention mechanism, our CoCo-INR would be able to model a city-level scene. We believe this could be a promising research direction that will inspire more solutions in the future.", " We appreciate your approval of our idea and the detailed and insightful comments. Your concerns will be addressed in the following comments, and the final version of our paper will be updated accordingly.\n\n📝 **Q: How did you get the baseline MLP features?** \n\n💡&#8194;**A:** To demonstrate the feature representation of each coordinate, we first sample two local 16 x 16-pixel patches of two foreground objects (one is a yellow apple, and the other is a red one) from DTU scan 63. Then, for each pixel ray in the local patches, we sample 128 points along the ray. We separately feed these sampled ray points into our CoCo-VolSDF and a pure MLP-based network (MLP-VolSDF). Next, we extract the 256-dimensional coordinate features (before the last regressor layer to SDF/density or color) in the geometry network (color network) of both CoCo-VolSDF and pure MLP-VolSDF. Then, we project these coordinate features into a visual 3D space via t-SNE and finally arrive at the visualizations in Fig. 4 of the paper. Our proposed CoCo-based network results in more discriminative features for each coordinate, indicating that the codebook prior can enrich each coordinate's feature representation.
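A compact sketch of the probing procedure just described (hedged: `model.sample_points` and `model.backbone` are hypothetical stand-ins for sampling 128 points per ray and for the network up to the 256-d layer before the final SDF/density or color regressor):

```python
import torch
from sklearn.manifold import TSNE

@torch.no_grad()
def embed_coordinate_features(model, rays_o, rays_d, n_samples=128):
    """Sample points along each patch ray, grab the penultimate 256-d
    activations, and project them to 3D with t-SNE (as in Fig. 4)."""
    pts = model.sample_points(rays_o, rays_d, n_samples)   # (R, 128, 3)
    feats = model.backbone(pts.reshape(-1, 3))             # (R * 128, 256)
    return TSNE(n_components=3).fit_transform(feats.cpu().numpy())
```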
\n\n📝 **Q: Performance improvement due to large model size or design.** \n\n💡&#8194;**A:** To verify whether the improvement comes from our proposed CoCo modules or from additional parameters, we double the dimension and number of MLP layers of VolSDF (called enlarged VolSDF), so that its parameter count (6.21M) is approximately the same as our method's (6.16M). The table below shows the experimental results. Directly stacking parameters cannot yield a significant performance improvement. In fact, due to the sparsity of training views, the network easily overfits, and the enlarged VolSDF often outputs misplaced images (images that are close to a validation view but actually belong to a training view). In contrast, our method uses a large share of its parameters to query learnable embeddings from the codebook. These parameters are only related to generating a scan-related prior and do not directly operate on coordinates, which maintains the generalization to new viewpoints and prevents over-fitting.\n\n|Method with few views (5-8)| PSNR↑| SSIM↑ |LPIPS↓|\n|:--:|:--:|:--:|:--:|\n|VolSDF| 14.249 |0.557| 0.290|\n|Enlarged VolSDF |14.322| 0.563| 0.294|\n|Ours |**15.622** |**0.576** |**0.283**|\n\n📝 **Q: More ablation studies**: Per-scene learnable codebook vs. fixed pre-trained codebook vs. without the codebook attention module.\n\n💡&#8194;**A:** Experimental results of either using a randomly initialized codebook or no codebook are shown in the table below. The performance and qualitative quality of both settings degrade significantly, and some scans fail with few views. Therefore, our prior can provide stronger robustness and better rendered images, especially with fewer views.\n\n |Method with few views (5-8) |PSNR↑ |SSIM↑ | LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |Without codebook |12.694 |0.523 |0.325 |\n |Randomly initialized codebook |13.315 | 0.525 |0.298 |\n |**Ours (pre-trained codebook)** |**15.622** |**0.576** | **0.283** |", " 📝 **Q: More ablation studies**: Reduce the number of MLP layers (even to 0).\n\n💡&#8194;**A:** Experimental results of reducing the number of MLP layers in the Implicit Neural Representation module are shown in the table below. Benefiting from our CoCo-Attention module, which queries representative features from per-scene-relevant learnable embeddings for each coordinate, new views can be synthesized even with 1 MLP layer under different settings of view sparsity. However, using 0 MLP layers does not work, because we need at least 1 MLP layer for the spherical geometry initialization, as used in VolSDF.\n\n |Method with few views (5-8) | PSNR↑ |SSIM↑ |LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |2-layer MLP |14.656 |0.560 | 0.292 |\n |1-layer MLP | 13.210 | **0.576** |0.311 |\n |0-layer MLP |7.309 | 0.303 |0.401 |\n |**Ours (4-layer MLP)** |**15.622** | **0.576** | **0.283** |\n\n📝 **Q: More ablation studies**: The number of coordinate attention modules.\n\n💡&#8194;**A:** We perform ablation studies using more coordinate attention modules in the Implicit Neural Representation (geometry) network and fewer in the Neural Renderer (color) network, reported in the table below. We can see that directly using fewer modules in the Neural Renderer decreases the performance due to the reduced color representation, while directly using more modules in the Implicit Neural Representation also reduces the performance: since the RGB supervision is applied to the color part, blindly increasing the geometry network's parameters may result in overfitting.\n\n |Method with few views (5-8) | PSNR↑ |SSIM↑ |LPIPS↓ |\n|:--:|:--:|:--:|:--:|\n |INR #1, ↓NR #1 |14.840 | 0.560 | 0.291 |\n |↑INR #2, NR #2 |14.187 | 0.553 | 0.287 |\n |↑INR #2, ↓NR #1 |13.946 | 0.528 | 0.323 |\n |**Ours (INR #1, NR #2)** | **15.622** | **0.576** | **0.283** |\n\n📝 **Q: Limitations and societal impact** \n\n💡&#8194;**A:** We have discussed the limitations in the Supplementary Materials and will merge them into the final version. Our method conducts human-face-related experiments and generates high-fidelity 3D-aware images from sparse inputs. Since human faces are highly private, our work might have negative societal impacts if used for malicious purposes. We will add a discussion of this societal impact to the final version.\n", " This paper proposed CoCo-INR, an implicit neural representation trained with sparse multi-view images, leveraging prior information from pre-trained image prototypes/features.\n\nSpecifically, two attention modules called codebook attention and coordinate attention are proposed to introduce the most relevant scene priors into the learning process of neural implicit representations, so that a few images are adequate to render realistic images as well as high-quality geometric information. Three datasets of different complexities are used to demonstrate the effectiveness of the method. Strengths\n\n1. The idea of integrating a VQ codebook into NeRF training looks interesting and leads to promising geometry reconstruction given few images on three public datasets.\n\n2. The experimental set-up of ablations and visualizations validates the effectiveness of codebook attention.\n\n3. 
Though there are some missing details (mentioned in the weakness part), the overall writing is good and easy to read.\n\nWeaknesses\n\n1. How to select a subset of training views is not clearly described, and only the number of views is listed. Are the subset views evenly sampled or consecutively sampled, while leaving the rest as testing views? \n\n If possible, it would be good to see the camera trajectory or distributions along with the rendering results, to better understand the method.\n\n2. Similar to the previous point, as a cross-attention mechanism is adopted to propagate information from observed views to unobserved ones, it would be interesting to see extrapolated view synthesis in addition to interpolation.\n\n3. One major point missing, in my opinion, is the comparison to other baselines that introduce learnable/pre-trained priors into the training process, such as PixelNeRF (CVPR 2021) and GRF (ICCV 2021).\n\n How does the codebook prior using VQ compare to the strategies adopted in PixelNeRF or GRF, where CNN features are used to augment local priors so that few views suffice to enable view synthesis? I think it is necessary to add related comparisons and discussions, since currently none of the selected baselines uses any external priors.\n\n4. Another interesting thing to explore is how the richness of priors affects the performance of CoCo-INR. For example, regarding the number of embeddings M and the size of the codebook: for a specific given scene, presumably a limited number of prototypes would be activated while others are not. Would a meaningful subset reduce the overall computation without obvious performance degradation? \n\n5. CoCo-VolSDF and CoCo-NeRF are used to model the foreground and background respectively, following NeRF++. It would be good to see what the decomposition looks like, because it is not clear to readers how CoCo-NeRF handles background regions. In Figures 2 and 3, there are some noisy patterns in the first two images while the rest show a clean background geometry. Is the geometry (normal) only computed from foreground regions? Is the grey region computed from the background NeRF? How does the attention mechanism help the background region?\n\n6. Minor issues.\nIn Sec. 3.2 of the supplement, it is claimed that only 3 extreme views are used in Fig. 2, while the caption says 8 images are used. Please clarify how many images are used.\n\n 1. One important missing experiment concerns other strategies for introducing scene priors, like PixelNeRF and GRF. The existing experiments only consider baseline methods without external priors, so the comparison is not that fair in my point of view.\n\n2. Further demonstration of how CoCo-VolSDF and CoCo-NeRF separately represent the full scene, and of the individual improvements over baseline methods, is expected.\n\n3. More ablations on hyper-parameters are welcome.\n\n4. Please consider adding more details about the training set-ups, e.g., how the subset views are selected. Furthermore, depending on how the training subset is selected, it would be good to show whether it is possible to infer images from extrapolated viewpoints.\n \nSome limitations are mentioned in the supplementary material. Related to that, does the method scale well to larger or unbounded scenes? Does the cluttered background of these scenes hamper the performance of CoCo-INR? It would be good to add some analysis here.", " The paper presents a method to utilize a non-scene-specific ImageNet-pretrained codebook for learning neural 3D representations, such as NeRFs. 
The codebook attention module transforms the codebook into a scene descriptor, and the coordinate attention module concatenates it with the positional encoding/features. The intuition is that doing so encourages the network to learn the semantic correlation between the input point (e.g., in $\mathbb{R}^3$) and the scene, enabling "extrapolation". Strength: Demonstrates that the codebook method from VQGAN and VQ-VAE can be applied to 3D representation learning, lifting 2D ImageNet-based features to 3D. While the architecture may seem incremental, the extension to 3D is noteworthy.\n\nWeakness: No quantitative ablation study. The "per-scene learnable features vs. fixed codebook" and "impact of codebook attention" experiments are both important yet entirely qualitative. The improvement over VolSDF could have been due to better initialization, more computation, and so on (as usual). It would also be helpful to empirically evaluate novel views vs. observed views. How did you get the baseline MLP features (e.g., those compared in Figure 4)? The paper does not address limitations and societal impact.", " The paper proposes the utilization of a pre-trained codebook within an implicit neural 3D representation. This is enabled by using two attention modules termed codebook attention and coordinate attention, which together form the proposed Codebook Coordinate Attentional Implicit Neural Representation (CoCo-INR). The model is evaluated on the DTU, BlendedMVS, and H3DS datasets and compared with NeRF, UNISURF, and VolSDF using either "sparse views" (16-32 views) or "few views" (5-8 views). CoCo-INR shows better performance than the baseline methods in the conducted experiments. 
**Strengths**\n\n- The idea of combining a pre-trained codebook with (cross-)attention modules is very interesting.\n- The paper has high visual quality due to well-arranged figures and tables.\n- For the presented experiments, the model shows an improvement over the compared-against models (VolSDF, UNISURF, NeRF).\n\n**Weaknesses**\n\n- The comparison w.r.t. the SOTA is only done on "sparse views" and "few views". While these settings are sensible for evaluation of the proposed approach, it is crucial to also evaluate with pre-existing evaluation protocols. Currently, the reader cannot tell if the improved performance originates from an actually improved model, or because the model and benchmark were designed simultaneously.\n- The codebook idea is very interesting; however, it is difficult to understand what this module adds to the network and where the improved performance originates. Please also see the questions below.\n- The ablation study is only qualitative and could contain further ablations; please see the questions below.\n- Understanding the proposed architecture could be easier. I found Figure 1 in the supplementary material very helpful because it shows the actual computation the model is doing. In contrast, I found Algorithms 1 & 2 to be less helpful because they mostly reproduce the attention algorithm.\n\nOverall, the paper proposes an interesting idea that seems to improve results. If the evaluation w.r.t. the SOTA is improved, the ablation study is expanded with more ablations and quantitative results, and the clarity is improved, I am willing to upgrade my rating. - What is the intuition behind using the VQGAN pre-trained codebook? These codes are extracted from 2D images, but the paper tackles 3D reconstruction. Would it be better to use a pre-trained codebook from a 3D network?\n- There are two coordinate attention modules in the "Neural Renderer" part of the pipeline; does this mean that the codebook is more important for generating the color?\n- It would be interesting to include further ablations, for example:\n - reducing the number of MLP layers (maybe even to 0) to see the effect of only using the introduced codebook/coordinate attention modules\n - reducing the number of utilized codes from 16384 to maybe 8192, 4096, and 1024 by random sampling to investigate the effect of the codebook size\n - using a randomly initialized codebook to understand the effect of the VQGAN pre-trained codebook\n - excluding the "skip connection" in the "Implicit Neural Representation" (Suppl., Fig. 1, upper part) that concatenates the output of the coordinate attention module ($\widehat{X} \in R^{N \times 39}$) with the query coordinates after embedding ($X \in R^{N \times 39}$). This would also help to show how only the codebook-based module works.\n - using more coordinate attention modules in the "Implicit Neural Representation" and fewer in the "Neural Renderer"\n The limitations are addressed in the supplementary material.
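For readers less familiar with the mechanism the reviews above refer to, here is a minimal PyTorch-style sketch of the two attention modules. This is only an illustration of the general idea under our own assumptions (a single attention block, illustrative module names and dimensions), not the authors' actual implementation:

```python
import torch
import torch.nn as nn

class CodebookAttention(nn.Module):
    """Learnable latent queries attend over a large pre-trained codebook,
    distilling it into a small set of scene-relevant prototypes."""
    def __init__(self, code_dim=256, num_latents=64):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, code_dim))
        self.attn = nn.MultiheadAttention(code_dim, num_heads=4, batch_first=True)

    def forward(self, codebook):            # codebook: (M, code_dim), e.g. M = 16384
        q = self.latents.unsqueeze(0)       # (1, num_latents, code_dim)
        kv = codebook.unsqueeze(0)          # (1, M, code_dim)
        out, _ = self.attn(q, kv, kv)       # cross-attention: latents -> codes
        return out.squeeze(0)               # (num_latents, code_dim) scene prototypes

class CoordinateAttention(nn.Module):
    """Each embedded coordinate queries the scene prototypes; the result is
    concatenated with the positional encoding (the 'skip connection')."""
    def __init__(self, coord_dim=39, code_dim=256):
        super().__init__()
        self.to_q = nn.Linear(coord_dim, code_dim)
        self.attn = nn.MultiheadAttention(code_dim, num_heads=4, batch_first=True)
        self.to_out = nn.Linear(code_dim, coord_dim)

    def forward(self, coords, prototypes):  # coords: (N, 39) positional encodings
        q = self.to_q(coords).unsqueeze(0)  # (1, N, code_dim)
        kv = prototypes.unsqueeze(0)        # (1, num_latents, code_dim)
        out, _ = self.attn(q, kv, kv)
        feat = self.to_out(out.squeeze(0))  # (N, 39), cf. Suppl. Fig. 1
        return torch.cat([feat, coords], dim=-1)  # (N, 78) fed to the MLP
```

In this reading, the pre-trained codebook acts as a bank of 2D appearance priors, and the attention modules are what inject the scene-relevant subset of it into every 3D query point.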
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "DieeZHR0UCi", "UcLwAW8NYI", "EFj9gyTTp4C", "Nz449rURjg", "Yjz6V0xYRjL", "Yjz6V0xYRjL", "Yjz6V0xYRjL", "QkfhKWvtpU3", "QkfhKWvtpU3", "F28JDvevpA", "F28JDvevpA", "F28JDvevpA", "-Wn3zThW2Q5", "-Wn3zThW2Q5", "nips_2022_oprTuM8F3dt", "nips_2022_oprTuM8F3dt", "nips_2022_oprTuM8F3dt", "nips_2022_oprTuM8F3dt" ]
nips_2022_VQ9fogN1q6e
Factored Adaptation for Non-Stationary Reinforcement Learning
Dealing with non-stationarity in environments (e.g., in the transition dynamics) and objectives (e.g., in the reward functions) is a challenging problem that is crucial in real-world applications of reinforcement learning (RL). While most current approaches model the changes as a single shared embedding vector, we leverage insights from the recent causality literature to model non-stationarity in terms of individual latent change factors, and causal graphs across different environments. In particular, we propose Factored Adaptation for Non-Stationary RL (FANS-RL), a factored adaptation approach that jointly learns both the causal structure, in terms of a factored MDP, and a factored representation of the individual time-varying change factors. We prove that under standard assumptions, we can completely recover the causal graph representing the factored transition and reward function, as well as a partial structure between the individual change factors and the state components. Through our general framework, we can consider general non-stationary scenarios with different function types and changing frequencies, including changes across episodes and within episodes. Experimental results demonstrate that FANS-RL outperforms existing approaches in terms of return, compactness of the latent state representation, and robustness to varying degrees of non-stationarity.
Accept
The paper proposes a factored reinforcement-learning method to deal with non-stationary environments. After reading the authors' rebuttals, the reviewers agree that this paper provides an original and sound contribution that deserves publication. We recommend that the authors modify their paper as reported in their answers to the reviewers' comments.
train
[ "xcSg23kr6NM", "0mKRC1hQsl0", "Ehcp96rw1ze", "EzDD6oQ-XH", "U3oBjH2XblO", "MwgBkh8DbM8", "lFAqUL4udY-", "wuuf0HpmWbS", "6bkaqZcQN1f", "Y8XcoGT81cN", "-FJEsgRPstM", "DpPtziADEBOn", "-F9P9AubNfb", "ciBhEBdS7rJ", "M2ZDbOf1ig", "_Fh1vvXmKHj", "8MHz7rP4k5O", "ld7Pp-S8dXv", "KqoKI-tsuWC", "qKFJ7CFiX7v" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your thoughtful review. We will be happy to discuss if you have any other concerns. \n", " We would like to express our sincere thanks for your positive feedback and valuable suggestions. \n- As you suggested in Q4, we have updated a new revision, which includes the ablation studies on the disentangled design of CF inference networks for all scenarios (see updated Fig. 2(d) and Sec. C.5). \n- Based on your advice, we plan to include the key assumptions of the method in Appendix A in our final main text if accepted (as NeurIPS allows one additional page for the camera-ready version). ", " Many thanks for your response and valuable suggestions. Based on your advice in Q6, we have updated a new revision, which includes the ablation studies on the disentangled design of CF inference networks for all scenarios (see updated Fig. 2(d) and Sec. C.5). ", " We highly appreciate your positive and valuable feedback. As NeurIPS allows an additional content page for the camera-ready version, we will include the results of using different smoothness losses (given in the rebuttal) in the evaluation section of our final version if accepted. ", " I thank the authors for their response. The authors have addressed most concerns and could include the new results in their final version.", " Thank you for the detailed response, which addresses all of my concerns. After reading the other reviews, related discussions, and re-reading the paper, I have increased my score.", " I thank the authors for carefully addressing my questions and suggestions. Because I do not have further concerns, I will increase my score.", " Once again, thanks for the suggestion. We have conducted experiments where one joint is randomly disabled in each episode in Half-Cheetah-v3. We compared our method with LILAC [Xie et al., 2021]. The results verify that our approach can still achieve good performance when there are changes in the agent’s mechanism. We will also compare with meta-learning approaches in the coming days and post the results once they have been done. And we plan to add this experiment to the camera-ready version if it is accepted. \n\n| Methods | Average final rewards (across 5 runs)|\n|:--------------------:|:----------------------:|\n| LILAC | -19.4 (+/-11.4) |\n| TRIO | -21.9 (+/-13.0) |\n| VariBAD | -17.3 (+/-10.2) |\n| Ours | -15.1 (+/-9.8) |\n\n[Xie et al., 2021] Xie, Annie, James Harrison, and Chelsea Finn. \"Deep reinforcement learning amidst lifelong non-stationarity.\" ICML 2021.\n\n------\nUpdated: we have also compared the meta-learning approaches, including TRIO [Poiani et al., 2021] and VariBAD [Zintgraf et al., 2021]. During meta-training, we randomly disable the joints at random time steps. Please check the results in the updated table above. \n\n[Poiani et al., 2021] Poiani, Riccardo, Andrea Tirinzoni, and Marcello Restelli. \"Meta-Reinforcement Learning by Tracking Task Non-stationarity.\" IJCAI 2021;\n[Zintgraf et al., 2021] Zintgraf, Luisa, et al. \"VariBAD: Variational Bayes-Adaptive Deep RL via Meta-Learning.\" JMLR 2021.", " >Q5: Why did the authors formalize FN-MDPs, not N-MDPs? Although the ablation study without structure shows improvement with respect to the expected return, I need a brief and intuitive explanation for understanding the effect of factored MDP.\n\nWe could potentially formalize N-MDPs, which would be MDPs with a latent change factor that follows a Markov process and they would be very similar to other approaches in literature, e.g. 
dynamic MDPs in LILAC.\nOn the other hand, our whole approach is based on factored representations and causality, following the ideas of a whole line of work on factored (PO)MDPs.\nAs shown in our experiments, learning the graphical structure is the main improvement derived from our framework, since it allows one to (1) capture non-stationarity in a compact way (a low-dimensional change factor), so one can adapt to it efficiently, and (2) identify the compact representations from AdaRL, i.e., the minimal dimensions for policy learning, thus also improving the sample efficiency. \n\nAs an example of capturing non-stationarity in a compact way, in a robotic control task, changes in ground friction may only affect the state variables of the robot's legs, while the state variables of the hands or head will not be affected. Then we can learn a low-dimensional $\boldsymbol{\theta}$ connected to the state variables of the legs in the graph. \n\nAs an example of reducing the state dimensions, if the robot is trained to run at a target speed in 2D space, the position of the head may not be useful for policy learning, as there is no path from the head position to the reward in the learned DBN. \n\n>Q6: To check the effect of disentangling the latent change factor into the two factors, I suggest adding an ablation study without disentangled latent factors in Figure 2-(d).\n\nTo verify this, we conducted an ablation study where we only have one mixed encoder and decoder for reconstruction and prediction. We ran the experiments on the scenarios with multiple change factors on both dynamics and rewards in the Minitaur benchmark. We will include the full results for all settings in the final version of the paper. The results (average across $10$ runs) are given below.\n\n| Methods | Average final rewards |\n|:------------------:|:------------------------:|\n| Mixed latent space | 26.9 (+/-12.6) |\n| Ours | 40.2 (+/-5.3) |\n\nThe results verify the effectiveness of the disentangled design in our model.\n\n>Q7: Figure 2-(e,f) shows the distances of the learned $\theta^{r}$; similarly, I suggest plotting the distances of the learned $\theta^{s}$, for example, when $\theta^{s}$ estimates the changing wind force. It is helpful to verify that the learned latent factor can capture the true change in the environment dynamics.\n\nThanks for the suggestion. We verified one dynamics CF (the wind force in Half-Cheetah). We randomly sampled a few data points and computed the Euclidean distances below. $D(f^{i,j}_w)$ and $D(\theta^{s}_{i,j})$ denote the Euclidean distances between two sampled data points in terms of the wind force $f_w$ and of $\theta^s$, respectively. We find that there is a positive correlation between the distances of the learned $\theta^s$ and those of the true wind forces. We added two heatmaps (similar to Fig. 2(f)) in the revised Fig. A10 in the appendix. \n\n| $D(f^{i,j}_w)$ | $D(\theta^{s}_{i,j})$ |\n|:--------------:|:---------------------:|\n| 1.62 | 2.9 |\n| 10.12 | 6.8 |\n| 11.74 | 8.5 |\n| 14.42 | 11.0 |\n\n>Q8: In Figure 2-(g), is the number of latent features of FANS-RL the total number of dimensions of the two latent features $\theta^{s}$ and $\theta^{r}$?\n\nYes, it is the total number of dimensions of the two latent features. We have clarified this in the text in the revised version.\n", " Thank you for your time and attention in giving such a thoughtful review. We have made some changes to the revised version (highlighted in blue). 
We added an introduction on AdaRL and MiSS-VAE in the preliminary section (B.2) of the updated Appendix, focusing on the MDP case. We have also extensively reworked our proofs, and clarified how they are related to previous work. We address your specific comments one by one below.\n\n>Q1: The proposed method showed adaptation only for limited changes in the environment dynamics; for example, changes in wind force, the agent's mass, or gravity. So, the reviewer wonders if FANS-RL can adapt to changes in the agent's mechanism; for example, a disabled joint of the agent due to aging.\n\nThanks for the suggestion. We are now running the experiments you recommended. We will update the results by posting a new reply once they are available. \n\n>Q2: It seems difficult to find a fundamental difference between the architecture of FN-VAE and that of MiSS-VAE of AdaRL [1], even though they learn different MDPs. They have very similar components and loss functions.\n\nThough both MiSS-VAE and FN-VAE leverage factorized generative models to learn the data generation process under distribution shift in RL, several aspects are different: \n\n- As you mentioned, we are modeling different (PO)MDPs under different problem settings (transfer RL from well-defined source domains to targets versus non-stationary RL). In particular, the change factors are quite different, and so they are learnt quite differently. MiSS-VAE uses the domain index $k$ as the input of the model and updates $\boldsymbol{\theta}_k$ (which is assumed to be constant in each domain) during training. In contrast, in FN-MDPs we model the non-stationary change factors as latent variables and allow them to vary according to a Markov process. Hence, the CF inference networks in FN-VAE use LSTMs, and we also add the CF dynamics networks, which require a KL loss $\mathcal{L}_{\text{KL}}$ that is quite different from the one in AdaRL, as well as the smoothness loss across time-steps $\mathcal{L}_{\text{smooth}}$;\n\n- As opposed to MiSS-VAE, in FN-VAE we use separate encoders for dynamics and rewards. As we show in the answer on modelling the latent change factors of the reward and dynamics as a single multidimensional change factor, these separate encoders and decoders improve our results.\n\n- Finally, MiSS-VAE is focused on pixel inputs. In the special case of transfer RL in MDPs (as opposed to FN-MDPs), there aren't any latent variables to be observed, so MiSS-VAE is technically not a VAE. We also show an extension of FN-VAE to raw pixels in Appendix D.3, which is a better comparison. The prediction and reconstruction losses in that case are very similar, but to be fair they are also quite obvious/common in this setting.\n\n> Q3: The paper relies heavily on the definitions and theorems of AdaRL [1] in the literature. It would be better to add an additional explanation of the shared representation $s^{min}$ and the compact domain-specific representation $\theta^{min}$.\nEspecially, in Lines 74-84 in Appendix B.2, there exist sentences verbatim to those in the proof of Theorem 1, page 18, of [1]. I marked an ethics alert due to this point. Please check if this is OK. The paragraph of concern: "We denote the variable set in the system ...."\n\nWe have added a subsection to introduce the compact representations in Section 2. Regarding the proof, the part that was similar (but still with the appropriate changes related to the non-stationarity/time index vs. transfer RL/domain index) is related to the notation and the description of previous results from [Huang et al. 
2020], since it's a similar setup. We have changed the proof and clarified the similarities to the related proof in AdaRL.\n\n[Huang et al. 2020] Huang, B., Zhang, K., Zhang, J., Ramsey, J. D., Sanchez-Romero, R., Glymour, C., & Schölkopf, B. (2020). Causal Discovery from Heterogeneous/Nonstationary Data. J. Mach. Learn. Res., 21(89), 1-53.\n\n>Q4: minor points (typos) 1. The first equation between lines 72-73: $pa\left(\theta_{i j, t}^{s}\right) \rightarrow pa\left(\theta_{j, t}^{s}\right)$. 2. Figure 2-(f) in the main paper: the $x$ and $y$ labels need to be corrected.\n\nThanks for pointing this out. We have corrected them in the revised version. ", " > Q4: The role of the state and reward encoders in Section 3 is not very clear. Why would a single prediction network, conditioned on the masks, not be enough to predict the next state? How are the encoders used?\n\nWe use separate state and reward encoders to disentangle the data generation processes of dynamics and rewards in FN-VAE. To verify the effectiveness of the disentangled design, we conducted an ablation using mixed latent features with a single encoder and a single reconstruction/prediction network in the model. We ran the experiments on the scenarios with multiple change factors on both dynamics and rewards in the Minitaur benchmark. The results below suggest that the disentangled design brings a clear performance gain. \n\n| Methods | Average final rewards |\n|:------------------:|:------------------------:|\n| Mixed latent space | 26.9 (+/-12.6) |\n| Ours | 40.2 (+/-5.3) |\n\n>Q5: How can we obtain the adjustable parameters $w_{1}, \ldots, w_{7}$? 
In the total loss, how can we obtain $k_{1}, \ldots, k_{5}$?\n\nWe have added an explanation in the revised version of the paper. In general, we use the automatic weighting method in [Liebel et al. 2018] to learn the weights $k_1, \ldots, k_5$ and grid search for $w_1, \ldots, w_7$. We have also clarified this in more detail in the updated Supplementary Section D.4.1. \n\n[Liebel et al. 2018] Liebel, Lukas, and Marco Körner. "Auxiliary tasks in multi-task learning." arXiv preprint arXiv:1805.06334 (2018).\n\n>Q6: The total objective function in line 167 does not include $\mathcal{L}_{\text{rec-dyn}}$. Additionally, there is a typo here: the coefficient $k_{5}$ does not appear in the loss.\n\nThanks for pointing this out. We used the same weights for $\mathcal{L}_{\text{rec-dyn}}$ and $\mathcal{L}_{\text{rec-rw}}$. The total loss is $\mathcal{L}_{\text{vae}}=k_{1}\left(\mathcal{L}_{\text{rec-dyn}}+\mathcal{L}_{\text{rec-rw}}\right)+k_{2}\left(\mathcal{L}_{\text{pred-dyn}}+\mathcal{L}_{\text{pred-rw}}\right)-k_{3} \mathcal{L}_{\text{KL}}-k_{4} \mathcal{L}_{\text{sparse}}-k_{5} \mathcal{L}_{\text{smooth}}$.\nWe have clarified this and fixed the typo in the revised version. \n\n>Q7: "the policy parameters $\psi=(\pi, Q)$ are the actor $L_{\pi}$ and critic loss $L_{Q}$." I believe this sentence is not well formulated; how can a loss be a parameter?\n\nSorry for the ambiguity. We have corrected the typo in the revised version.\n\n>Q8: In Algorithm 1, how are $\mathbf{s}_{t}^{\min}$ and $\boldsymbol{\theta}_{t}^{\min}$ computed before following the policy in line 23?\n\n$\mathbf{s}^{\min}$ and $\boldsymbol{\theta}^{\min}$ are the dimensions of the state and change factors that are minimal and sufficient for policy optimization, as shown in AdaRL [Huang et al. 2022]. They can be identified from the estimated binary masks/graph, by selecting all variables that have a directed path to a present or future reward. For completeness, we have added an explanation in Section 2.\n\nIn particular, in Algorithm 1, $\mathbf{s}^{\min}$ and $\boldsymbol{\theta}^{\min}$ are computed in Line 5, after we have estimated the binary masks/the graph. In Line 23 we select the values at time $t$ of these dimensions, $\mathbf{s}_{t}^{\min}$ and $\boldsymbol{\theta}_{t}^{\min}$.\n\n>Q9: "The horizon in each episode is 50." The horizon in these environments is generally higher; why was this value chosen?\n\nWe chose this setting following LILAC [Xie et al., 2021] for a fair comparison, where each episode is 50 time-steps long.\n\n[Xie et al., 2021] Xie, Annie, James Harrison, and Chelsea Finn. "Deep reinforcement learning amidst lifelong non-stationarity." ICML 2021.\n\n>Q10: In Fig. 2(g-j), how were the values normalized? I suppose the value of $1.0$ does not mean that the proposed method reaches the maximum possible reward, but that it is the maximum reference point used in the normalization.\n\nThat's correct, the values are normalized based on the results of our method. We chose this normalization because our approach outperforms all the baseline methods (e.g., LILAC, TRIO, and VariBAD), so it seems an intuitive measure. For completeness, we added figures where all methods are normalized based on the oracle in Appendix Fig. A7-A8.\n
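As a concrete illustration of the Q8 answer above — reading $\mathbf{s}^{\min}$ and $\boldsymbol{\theta}^{\min}$ off the estimated graph by keeping only variables with a directed path to a present or future reward — here is a minimal sketch. The mask encoding and all names are our own illustrative assumptions, not the paper's actual code:

```python
import numpy as np

def compact_representation(c_ss, c_sr, c_theta_s, c_theta_r):
    """c_ss[i, j] = 1 iff s_i,(t-1) -> s_j,t;   c_sr[i] = 1 iff s_i,t -> r_t;
    c_theta_s[i, k] = 1 iff theta^s_k -> s_i,t; c_theta_r[k] = 1 iff theta^r_k -> r_t."""
    n = c_ss.shape[0]
    s_min = set(np.flatnonzero(c_sr))      # state dims that are parents of the reward
    # fixed point: keep s_i if it influences an already-kept dim one step later,
    # i.e. it has a directed path to some future reward in the unrolled DBN
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i not in s_min and any(c_ss[i, j] and j in s_min for j in range(n)):
                s_min.add(i)
                changed = True
    # a dynamics change factor is kept iff it drives some kept state dimension
    theta_s_min = {k for k in range(c_theta_s.shape[1])
                   if any(c_theta_s[i, k] for i in s_min)}
    theta_r_min = set(np.flatnonzero(c_theta_r))
    return sorted(s_min), sorted(theta_s_min), sorted(theta_r_min)

# toy example: s_0 (e.g. a head position) never reaches the reward and is dropped
c_ss = np.array([[1, 0, 0],
                 [0, 1, 1],
                 [0, 0, 1]])
c_sr = np.array([0, 0, 1])
c_theta_s = np.array([[0], [1], [0]])      # theta^s_0 drives s_1 only
c_theta_r = np.array([1])
print(compact_representation(c_ss, c_sr, c_theta_s, c_theta_r))
# -> ([1, 2], [0], [0]): s_1, s_2 and both change factors are kept, s_0 is not
```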
We also would like to point out that the assumptions we discuss in the Broader impacts are already mentioned in the main paper. We have added a few clarifications in the Broader impacts, but just for completeness:\n- no unobserved confounders (except for the change factors) and no instantaneous causal effects between the state components are implied by Definition 1, since the FN-MDP has a Dynamic Bayesian Network $\\mathcal{G}$ in which only the change factors are unobserved. \nThis is also stated in a equivalent form in the equations of the generative model, e.g. in Eq. (1) in which the state component $s_{i,t}$ can only depend on components of the state at the previous time-step $\\mathbf{s}\\_{t-1}$;\n- the causal Markov and faithfulness assumptions are used for Proposition 1 and 2 to prove that we can recover (most of the) true causal graph.\n\n> Q1: In Definition 1, I suggest clarifying that it is assumed that the action is a vector of m dimensions. It is ambiguous as m could denote the number of discrete actions.\n\nThanks for pointing this out. We have clarified this in the revised version. \n\n> Q2: \"Although the masks and noise are stationary, we allow the change of graph structure and noise distributions, whose changes are captured by $\\theta$ instead.\" If the masks are stationary, does it mean that if one of its values is 0 (denoting that one state variable does not influence some other state variable), then it will never have an effect, even if the dynamics change? In other words, to be able to capture all possible dynamics changes, would we need to have all masks with none zero elements?\n\nThanks for pointing this out. We clarified this part in the revised version in Section 2 and rephrased parts of the Broader Impacts in the Appendix that might be confusing. For the text in Section 2 we added:\n\"Although $\\mathbf{c}^{\\cdot \\rightarrow \\cdot}$ and $\\epsilon$ are stationary, we model the changes in the functions and some changes in the graph structure through $\\mathbf{\\theta}$. \nFor example a certain value of $\\mathbf{\\theta}\\_t^r$ can switch off the contribution of some of the state or action dimensions in the reward function, or in other words nullify the effect of some edges in $\\mathcal{G}$. Similarly the contribution of the noise distribution to each function can be modulated via the change factors. On the other hand, this setup does not allow adding edges that are not captured by the binary masks $\\mathbf{c}^{\\cdot \\rightarrow \\cdot}$.\" \nIn particular, in our current method, the change in dynamics can only switch off edges from the estimated mask, but not switch them on. On the other hand, the estimated mask will contain the union of the edges that are present at any timestep that is used for model estimation. We assume this is a sensible inductive bias in setting. One could estimate also the graph dynamically (e.g. by allowing it to change at each iteration in Alg.1), but might would also require changing the compact representation $\\mathbf{s}\\_{t}^{\\min }$ and $\\mathbf{\\theta}\\_{t}^{\\min }$ (which are estimated based on the graph), and therefore make policy optimization harder.\nIf we want to capture all possible dynamic changes, including the ones that we might not observe, we would then indeed need binary masks without any zeros. In our setting, this is represented by the ablation that does not use the structure. 
\n\n>Q3: The proof of Proposition 1 in the Appendix ends with: "In this setting, identifiability of the graph G is trivial [6]." However, it is not clear why this is trivial. As this is an important theoretical result of the paper, I suggest elaborating this proof. \n\nThanks for pointing this out; we have completely reworked the proof and made several of the points much more explicit. We do not claim that this proof is in itself a particularly novel result, since the identifiability of similar time-series with no unobserved confounders and no instantaneous effects is well-known, but adding the change factors, which do have some instantaneous effects on the state and reward, did require a bit of additional explanation.\n", " Thank you for your constructive comments. We have made some changes in the revised version (highlighted in blue). We also added a more thorough introduction to AdaRL in the Preliminaries section (B.2) of the revised Appendix. To simplify exposition, we only describe the MDP case, which is the only one relevant for our paper.\nIf the paper is accepted, we will try to integrate some parts of it, as well as some results with different non-stationary settings (Table A1 in the Appendix) and a brief explanation of the ablation studies, in the additional page of the final version. We answer the questions one by one in the following:\n\n> Q1: The experimental chapter devotes a lot of space to the various environmental settings, and the experimental results seem to be insufficient. More detailed discussions on the experimental results in the main paper would be more appealing.\n\nThanks for the suggestion. In the main paper, we include the most representative results, including reward curves (Fig. 2(a-c)), ablation studies (Fig. 2(d)), visualizations of $\boldsymbol{\theta}$ (Fig. 2(e-f)), and performance under various experimental settings (Fig. 2(g-j)). Due to the limited space, we put the full results in the appendix. As NeurIPS allows an additional content page for the camera-ready version, we plan to move the quantitative results on all benchmarks with different non-stationary settings (Table A1) into the final paper.\n\n> Q2: The experiment section has compared different baselines with ablation experiments. However, a large number of results were presented in the Appendix section. From the main text, it's not very clear why the authors' approach is more effective than the baselines, and the ablation analysis doesn't show what each component of FANS-RL does. 
It would be useful to extend the discussion in this section on why FANS-RL outperforms baseline methods.\n\nThere is an ablation analysis shown in Fig. 2(d) and described in the subsection "Experimental results and ablation studies", with full results in Appendix C.5, which shows what each component of FANS-RL does.\nIn particular, we show what happens:\n- Without smoothness loss ($\mathcal{L}_\text{smooth}$);\n- Without structural relationships ($\mathbf{C}^{\cdot \rightarrow \cdot}$);\n- Without compact representations ($\mathbf{s}^{min}, \boldsymbol{\theta}^{min}$);\n- Without sparsity losses ($\mathcal{L}_{\text{sparse}}$);\n- Without reward or state prediction losses ($\mathcal{L}_{\text{pred-rw}}$, $\mathcal{L}_{\text{pred-dyn}}$).\n\nAs shown in the ablation studies and described in the text in the main paper (which we highlighted in the revised version), the most significant gain is brought by the factored representation, which provides the structural relationships between states, actions, rewards, and change factors. To make this even clearer, we plan to add the details of the ablation studies (Section C.5 in the Appendix) to the main paper in our final version, using the additional space.\n\n> Q3: Figure 2 (c) shows that FANS-RL's reward is comparable to Oracle, even higher than Oracle. It is useful to explain why this is happening.\n\nWe added a clarification in the revised version in the caption of Fig. 2. To improve the readability, in this figure we only show the average of the highest rewards of Oracle across the different seeds. The quantitative results can be found in Table A1. There are some seeds in which the highest reward of FANS-RL is higher than the average highest reward of Oracle, but the average highest reward of FANS-RL (the full red line) is always lower than that of Oracle.\n", " > Q6: Line 93-94 is confusing and seems to be making contradictory claims.\n\nWe have added the following explanation in the revised version.\n\n"Although $\mathbf{c}^{\cdot \rightarrow \cdot}$ and $\epsilon$ are stationary, we model the changes in the functions and some changes in the graph structure through $\boldsymbol{\theta}$. \nFor example, a certain value of $\boldsymbol{\theta}_t^r$ can switch off the contribution of some of the state or action dimensions in the reward function, or in other words nullify the effect of some edges in $\mathcal{G}$. Similarly, the contribution of the noise distribution to each function can be modulated via the change factors. On the other hand, this setup does not allow adding edges that are not captured by the binary masks $\mathbf{c}^{\cdot \rightarrow \cdot}$." \nHopefully this clarifies things a bit. Given the space constraints, this explanation might be a bit short, so we will expand on this concept in the final version with an additional page.\n\n> Q7: Algorithm 1, initial values of $\boldsymbol{\theta}_{\text{old}}$ not specified. Line 3, what is $k$? What does $\tau_{0:k}^{i}$ denote? \n\nWe have defined $\boldsymbol{\theta}_{\text{old}}$ and $k$ in the updated Algorithm 1. $\tau_{0:k}^{i}$ denotes the $i$-th collected trajectory of length $k$ for estimating the FN-VAE (see Algorithm A1). \n\n> Q8: The smoothing technique applied is slightly unusual. Based on my reading, most works employ moving average-based smoothing.\n\nWe have added an experiment to compare with moving average-based smoothing. 
Specifically, we compare against moving average and exponential moving average smoothing with different hyper-parameters:\n\n- Moving average: $\mathcal{L}_{\text{smooth}}=\sum_{t=2}^{T}\left\|\boldsymbol{\theta}_{t}-\frac{1}{T}\left(\boldsymbol{\theta}_{t-1}+\boldsymbol{\theta}_{t-2}+\ldots+\boldsymbol{\theta}_{t-T}\right)\right\|_{1}$; \n\n- Exponential moving average: $\mathcal{L}_{\text{smooth}}=\sum_{t=2}^{T}\left\|\boldsymbol{\theta}_{t}-\left(\beta \boldsymbol{\theta}_{t-1}+(1-\beta) \mathbf{v}_{t-2}\right)\right\|_{1}$, where $\mathbf{v}_{t}=\beta \boldsymbol{\theta}_{t} + (1-\beta)\mathbf{v}_{t-1}$ and $\mathbf{v}_{0}$ is a zero vector. \n\nWe report the results of the experiments on Half-Cheetah with continuous changes in the dynamics. The results are given below.\n\n| Methods | Average final rewards |\n|:----------------------------------------:|:------------------------:|\n| Moving average smoothing $(T=2)$ | -23.7 (+/-19.6) |\n| Moving average smoothing $(T=4)$ | -25.9 (+/-20.4) |\n| Moving average smoothing $(T=8)$ | -25.6 (+/-17.5) |\n| Exponential moving average $(\beta=0.9)$ | -31.5 (+/-25.6) |\n| Exponential moving average $(\beta=0.98)$ | -26.2 (+/-20.3) |\n| Ours | -24.8 (+/-21.1) |\n\nThe results suggest that our smoothing term performs similarly to moving-average smoothing.\n\n> Q9: Appendix lines 190-191 state that CF inference networks are fully-connected, shouldn't they be LSTMs?\n\nSorry for the ambiguity. We mean that the LSTM layers are followed by those dense layers. We have clarified this in the updated Appendix. ", " Thanks for the careful review; we have made some changes in the revised version (highlighted in blue). We answer the points one by one in the following:\n\n> Q1: The inference networks infer $\theta_{t}$ from $\tau_{0:t}$ as described in the paper, but in Eq. (4) $q_{\phi}$ also includes dependence on $\theta_{t-1}$. This dependence is not clear; does this represent the time-evolving hidden state of the LSTM, $h_{t-1}$?\n\nThanks for pointing this out; this was a typo on our side. Yes, $\boldsymbol{\theta}_{t-1}$ in $q_{\phi}$ represents the hidden state of the LSTM, $\mathbf{h}_{t-1}$. We have changed it into $q_{\phi}(\boldsymbol{\theta}_{t}\mid\mathbf{s}_t, \mathbf{a}_t, r_t, \mathbf{h}_{t-1})$ in the revised version (updated Eq. (4)). \n\n> Q2: The one-step prediction encoders in Eq. (6) and (7) should have dependence on $\theta_{t+1}^{s}$ and $\theta_{t+1}^{r}$, respectively, as specified in the FN-MDP. Though Fig. 2(d) shows that state and reward prediction help improve performance, predicting $\left(s_{t+1}, r_{t+1}\right)$ from $\left(\theta_{t}^{s}, \theta_{t}^{r}\right)$ seems to ignore the latent change factor dynamics. A clarification from the authors would be helpful.\n\nWe clarify in the revised version that we only use the one-step prediction loss when we expect the changes to be smooth.\n- In the discrete changes case, we do not use the prediction losses at the timesteps $(\tilde{t}_1-1, \ldots, \tilde{t}_M-1)$ when there are discrete changes happening at timesteps $\boldsymbol{\tilde{t}} = (\tilde{t}_1, \ldots, \tilde{t}_M)$, since the changes are not smooth by definition.\n- For the continuous changes, we have two settings: across-episode and within-episode. 
In the across-episode changes, we do not use the one-step prediction for the first time-step of the next episode, because the state is randomly initialized in each episode. In the within-episode changes, we use the prediction loss, but we ignore the latent change factor dynamics, because we assume that the changes are smooth across time. \n\nWe tested empirically whether adding the latent change factors $\left(\boldsymbol{\theta}_{t+1}^{s}, \boldsymbol{\theta}_{t+1}^{r}\right)$ would help in the prediction of future states and rewards. In particular, we tried this ablation in a setting with continuous changes in the dynamics in Minitaur. The results below show that using either $\boldsymbol{\theta}_{t+1}$ or $\boldsymbol{\theta}_{t}$ yields similar performance in terms of rewards. We will include this ablation for all continuous within-episode settings in our final version. \n\n| Methods | Average final rewards |\n|:--------------------:|:----------------------:|\n| Using $\boldsymbol{\theta}_{t+1}$ | 5.9 (+/-11.7) |\n| Using $\boldsymbol{\theta}_{t}$ | 6.3 (+/-10.4) |\n\n> Q3: The multiple loss-function terms each have an associated weight, which seems like a hyper-parameter tuning nightmare. This can hurt the applicability of the proposed model in practice. Could the authors comment on this aspect? On a related note, Section D.4.1 specifies the values used for these weights but is missing a description of the hyper-parameter selection method.\n\nThanks for pointing this out. We have clarified in the revised version that we use the automatic weighting method in [Liebel et al. 2018] to learn the weights $k_1, \ldots, k_5$ and grid search for $w_1, \ldots, w_7$. We have also added more details in Supplementary Section D.4.1. We believe these are common hyper-parameter selection strategies, so they do not limit the applicability of the proposed model. \n\n[Liebel et al. 2018] Liebel, Lukas, and Marco Körner. "Auxiliary tasks in multi-task learning." arXiv preprint arXiv:1805.06334 (2018).\n\n> Q4: The experiments verifying the values of learned reward CFs are insightful. It would be interesting to verify the dynamics CF in a similar experiment.\n\nThanks for the suggestion. We verified one dynamics CF (the wind force in Half-Cheetah). We randomly sampled a few data points and computed the Euclidean distances below. $D(f^{i,j}_w)$ and $D(\theta^{s}_{i,j})$ denote the Euclidean distances between two sampled data points in terms of the wind force $f_w$ and of $\theta^s$, respectively. We find that there is a positive correlation between the distances of the learned $\theta^s$ and those of the true wind forces. We added two heatmaps (similar to Fig. 2(f)) in the revised Fig. A10 in the Appendix. \n\n| $D(f^{i,j}_w)$ | $D(\theta^{s}_{i,j})$ |\n|:--------------:|:---------------------:|\n| 1.62 | 2.9 |\n| 10.12 | 6.8 |\n| 11.74 | 8.5 |\n| 14.42 | 11.0 |\n\n> Q5: Minor points on notation.\n\nThanks for pointing out the typos in the text and figures. We have corrected them in the revised version. ", " The paper proposes a factored adaptation framework for reinforcement learning in non-stationary environments. The paper first formalizes the notion of Factored Non-stationary MDP (FN-MDP), which augments a factored MDP with time-evolving latent change factors under the Markovian assumption. The generative model in AdaRL is adapted to the time-varying setting by introducing additional equations for the latent change factor updates. The causal structure is captured by a DBN, where the edges are represented using binary masks. 
These masks can be inferred under certain identifiability assumptions.\n\nThe paper then proposes FN-VAE to model the dynamics of the latent change factors, the state transitions, and the reward function. It includes some sparsity and smoothness regularization terms, and can be adapted for continuous or discrete changes. The FANS-RL framework combines model learning with policy optimization. The policy depends on the states and change factors for the dynamics, and the graph structure is used to select only the dimensions which affect the reward.\n\nExperiments are performed on four popular benchmarks, which have been modified to be non-stationary with both continuous and discrete changes. The results show that FANS-RL with FN-VAE performs better than some existing methods. The paper also includes ablation studies and experiments testing various aspects of the proposed method. Learning effective and stable policies in non-stationary environments is an active area of research within the community. This is an important area of research, and contributions in this direction can facilitate the application of deep RL to real-world scenarios. This work formalizes a non-stationary analogue of factored MDPs and proposes algorithms to model the environment and combine it with policy optimization to provide a general-purpose framework. The presented approach seems reasonable and theoretically sound. To the best of my knowledge, this represents novel and original work.\n\nThe writing is clear and concise. The factored generative model is quite complex with many moving parts, but the explanation and the diagrams make things fairly straightforward to follow. It is appreciable that both continuous and discrete changes are considered, and the framework is also adapted to image observations. The experiments are suitable and include comparisons with recent works. The extensive results both in the main paper and the appendices are impressive. Notably, the authors perform significance tests for comparison with previous work.\n\nThe paper presents good-quality and original work, with clear explanation of concepts and extensive results. The quality of the submission can be enhanced by addressing some clarification questions and suggestions, provided below. 1. The inference networks infer $\theta_t$ from $\tau_{0:t}$ as described in the paper, but in Eq. (4) $q_\phi$ also includes dependence on $\theta_{t-1}$. This dependence is not clear; does this represent the time-evolving hidden state of the LSTM, $h_{t-1}$?\n\n1. The one-step prediction encoders in Eq. (6) and (7) should have dependence on $\theta^s_{t+1}$ and $\theta^r_{t+1}$, respectively, as specified in the FN-MDP. Though Fig. 2(d) shows that state and reward prediction help improve performance, predicting $(s_{t+1},r_{t+1})$ from $(\theta^s_{t}, \theta^r_{t})$ seems to ignore the latent change factor dynamics. A clarification from the authors would be helpful.\n\n1. The multiple loss-function terms each have an associated weight, which seems like a hyper-parameter tuning nightmare. This can hurt the applicability of the proposed model in practice. Could the authors comment on this aspect? On a related note, Section D.4.1 specifies the values used for these weights but is missing a description of the hyper-parameter selection method.\n\n1. The experiments verifying the values of learned reward CFs are insightful. It would be interesting to verify the dynamics CF in a similar experiment.\n\n**Minor comments:**\n\n- Eq. 
(2) seems to have a typing error: $r_t$ should be a function of $s_t,a_t$ instead of $s_{t-1},a_{t-1}$.\n- Line 87 typing error: $c$ should not have subscript $i$.\n- Lines 93-94 are confusing and seem to be making contradictory claims.\n- Algorithm 1, initial values of $\theta_{old}$ not specified.\n- Algorithm 1, line 3: what is $k$? What does $\tau^i_{0:k}$ denote?\n- The smoothing technique applied is slightly unusual. Based on my reading, most works employ moving average-based smoothing.\n- Figure 2(f) has incorrectly labelled axes.\n- Appendix lines 190-191 state that CF inference networks are fully-connected; shouldn’t they be LSTMs?\n- There is a lot of notation used in the paper, which naturally introduces scope for typing errors. I assume such issues will be fixed. The paper provides a brief discussion on some limitations, namely the scalability and applicability of the approach to complex problems. The appendix provides a more detailed discussion on the effect of different assumptions or inductive biases, and acknowledges the lack of theoretical guarantees for the proposed method. ", " This paper introduces FANS-RL, a factored adaptation approach that aims to generalize to non-stationary scenarios including changes across episodes and within episodes. They formalize FN-MDPs, and prove that the causal graph of the transition and reward function is identifiable. The experiments show FANS-RL outperforms the state of the art. **Originality:** Fair: The paper contributes some new ideas.\n\n**Quality:** Good: The paper appears to be technically sound, but I have not carefully checked the details.\n\n**Clarity:** Good: The paper is well organized, but the analysis of the experiments' results could be improved.\n\n**Significance:** Fair: The paper is likely to have a moderate impact within a subfield of AI.\n\n**Main Strengths:**\n\n- It is a nice synthesis of causality and RL, leading to an intuitive design.\n- It's fascinating to generalize a Factored-MDP to an FN-MDP, solving the non-stationarity problem from a different perspective.\n- If the results are robust, then it could be an important and useful tool.\n\n**Main Weakness:**\n\nThe paper is weak in a few ways. Mainly, the experiments section could include additional insight on the results, as I will discuss in my question comments to the authors. As a minor point, the paper omits certain details, such as a more thorough intro to AdaRL, due to page limitations, which is understandable but requires more background knowledge from the readers. I suggest improving the readability in the revised version. \n I list a few ways in which the paper can be improved below.\n\n- The experimental chapter devotes a lot of space to the various environmental settings, and the experimental results seem to be insufficient. More detailed discussions on the experimental results in the main paper would be more appealing. \n\n- The experiment section has compared different baselines with ablation experiments. However, a large number of results were presented in the Appendix section. From the main text, it's not very clear why the authors' approach is more effective than the baselines, and the ablation analysis doesn't show what each component of FANS-RL does. It would be useful to extend the discussion in this section on why FANS-RL outperforms baseline methods.\n\n- Figure 2 (c) shows that FANS-RL's reward is comparable to Oracle, even higher than Oracle. It is useful to explain why this is happening. 
N/A", " In this paper, the authors considered non-stationarity which is faced when RL is deployed into the real world. To model the non-stationarity across episodes or within an episode, which seems to be general scenarios, the authors disentangled the non-stationarity as two types of latent change factors and formalized FN-MDPs. These disentangled latent factors enable the proposed method called FANS-RL to estimate changes both in the environment dynamics and the reward function. The authors assumed the generative process of FN-MDPs containing these latent factors, and FANS-RL learns the generative process of FN-MDPs during training. FANS-RL can also infer the change factors in environment dynamics and reward function at the current time-step via the learned generative process of FN-MDPs. The experimental results showed that FANS-RL outperforms baseline algorithms in various simulation tasks. Especially, the ablation study showed the performance effect of their method, which has many loss functions and variables they considered.\n\n This paper proposed FANS-RL that addresses non-stationarity both in environment dynamics and reward function. \n\n1. Strengths\n\n(1-1) The authors formalized FN-MDPs that can deal with a general non-stationary RL setting including changes both across episodes and within an episode. It is a very important problem in RL.\n(1-2) FN-MDPs also contain two latent change factors for the environment dynamics and the reward function, and FANS-RL can more explicitly handle the non-stationarity of the MDP due to the two change factors. \n(1-3) FANS-RL shows better adaptation for non-stationarity than other existing algorithms in various tasks. Also, the ablation study showed the effect of each component of FANS-RL architecture with respect to the expected return.\n\n2. Weakness\n\n(2-1) The proposed method showed the adaptation for the limited changes in the environment dynamics; for example, the changes of wind force or agent’s mass or gravity. So, the reviewer wonders if FANS-RL can adapt to changes in the agent’s mechanism; for example, the disabled joint of the agent due to aging.\n\n(2-2) It seems difficult to find the fundamental difference between the architecture of FN-VAE and that of MiSS-VAE of AdaRL[1], even though they learn different MDPs. They have too similar components and loss functions.\n\n(2-3) The paper relies much on the definitions and theorems of AdaRL [1] in literature. It would be better to add an additional explanation of the shared representation $s^{min}$ and the compact domain-specific representation $\\theta^{min}$. \nEspecially, in Line 74-84 in Appendix B.2, there exist sentences verbatim to those in the proof of theorem1, page 18, [1]. I marked ethics alert due to this point. Please check if this is OK.\nThe paragraph of concern: \"We denote the variable set in the system ....\"\n\n3. minor points (typo)\n(3-1) The first equation between line 72-73: $pa(\\theta_{ij,t}^s)\\to pa(\\theta_{j,t}^s)$\n(3-2) Figure 2-(f) in the main paper: the x and y labels need to correct.\n\n[1] Biwei Huang et al., AdaRL: What, Where, and How to Adapt in Transfer Reinforcement Learning, ICLR 2022\n\n FANS-RL can address non-stationarity, which is an important challenge in RL, and I think it is a great approach. However, it seems to rely on AdaRL, so I need additional explanation and experimental results to support the acceptance of the paper.\n\n1. Why did the authors formalize FN-MDPs, not N-MDPs? 
Although the ablation study without structure shows improvement with respect to the expected return, I need a brief and intuitive explanation to understand the effect of the factored MDP.\n\n2. As mentioned in “Weakness”, the experimental results showed adaptation to limited changes in the environment dynamics. To find out how well the method adapts to changes in the agent’s mechanism, I suggest additional experiments for such changes; for example, introducing increasing joint friction at a joint of Half-Cheetah.\n\n3. To check the effect of disentangling the latent change factor into the two factors, I suggest adding an ablation study without disentangled latent factors in figure 2-(d).\n\n4. Figure 2-(e,f) shows the distances of the learned $\\theta^r$; similarly, I suggest plotting the distances of the learned $\\theta^s$, for example, where $\\theta^s$ estimates the changing wind force. This would help verify that the learned latent factor can capture the true change in environment dynamics.\n\n5. In Figure 2-(g), is the number of latent features of FANS-RL the total number of dimensions of the two latent features $\\theta^s$ and $\\theta^r$?\n\n6. What is the fundamental difference between the FN-VAE and the MiSS-VAE of AdaRL?\n\n I already mentioned the limitations of this paper and the suggestions for the proposed method. ", " This paper proposes a novel formalism to model non-stationary environments, the Factored Non-stationary Markov Decision Process (FN-MDP), that models latent factors that affect the dynamics and rewards and evolve with time as in a Dynamic Bayesian Network. Then, they propose FN-VAE, which is built on top of AdaRL, and is used to learn the parameters (latent factors, dynamics, rewards, masks) of the underlying FN-MDP. The method is evaluated in several robotic environments and compared with state-of-the-art algorithms tailored to deal with non-stationarity in RL. Strengths:\n- The proposed formalism is able to model, in a factored manner, non-stationary environments in which the factors that change the dynamics/reward also evolve over time. This is a novel idea not yet explored in the related literature.\n- The authors compare the proposed approach with several state-of-the-art algorithms tailored to deal with non-stationarity, showing relevant performance improvements. Several ablation experiments were also presented.\n\nWeaknesses:\n- Some mathematical and algorithmic definitions are not very clear in the current version and should be clarified in the main text.\n Furthermore, I have the following questions and constructive criticisms:\n\n- In Definition 1, I suggest clarifying that it is assumed that the action is a vector of m dimensions. It is ambiguous as m could denote the number of discrete actions.\n\n- “Although the masks and noise are stationary, we allow the change of graph structure and noise distributions, whose changes are captured by $\\theta$ instead.” \nIf the masks are stationary, does it mean that if one of its values is 0 (denoting that one state variable does not influence some other state variable), then it will never have an effect, even if the dynamics change? In other words, to be able to capture all possible dynamics changes, would we need to have all masks with non-zero elements?\n\n- The proof of Proposition 1 in the Appendix ends with: “In this setting, identifiability of the graph G is trivial [6].” However, it is not clear why this is trivial. 
As this is an important theoretical result of the paper, I suggest elaborating on this proof.\n\n- The role of the state and reward encoders in Section 3 is not very clear. Why would a single prediction network, conditioned on the masks, not be enough to predict the next state? How are the encoders used?\n\n- How can we obtain the adjustable parameters $w_1$ … $w_7$? In the total loss, how can we obtain $k_1$ … $k_5$?\n\n- The total objective function in line 167 does not include $\\mathcal{L}_{\\text{rec-dyn}}$. Additionally, there is a typo here: the coefficient $k_5$ does not appear in the loss.\n\n- “the policy parameters = $\\psi = (\\pi, Q)$ are the actor $L_\\pi$ and critic loss $L_Q$.” I believe this sentence is not well formulated; how can a loss be a parameter?\n\n- In Algorithm 1, how are $s^{min}_t$ and $\\theta^{min}_t$ computed before following the policy in line 23?\n\n- “The horizon in each episode is 50.” The horizon in these environments is generally higher; why was this value chosen?\n\n- In Fig. 2(g-j) how were the values normalized? I suppose the value of 1.0 does not mean that the proposed method reaches the maximum possible reward, but that it is the maximum reference point used in the normalization.\n\n- Because of the causality assumptions (see Section A of the Appendix), the proposed model cannot model different types of non-stationarity at the same time (e.g. wind and gravity). Are any of the related works (e.g. LILAC) able to deal with these changes? If so, perhaps an experiment in such a setting would be interesting to observe whether the proposed method could still outperform the other baselines.\n\n- For completeness of the first sentence of the Related Work section, the paper [“Minimum-Delay Adaptation in Non-Stationary Reinforcement Learning via Online High-Confidence Change-Point Detection.” In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems] is a more recent work on non-stationarity that detects changes that have already happened.\n The authors appropriately discuss the limitations of the proposed approach. I would suggest, however, that the authors briefly include in the main text some of the assumptions of the method which are only discussed in Appendix A, as I believe they are very relevant and readers often do not check the Appendix.
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 3, 4 ]
[ "ciBhEBdS7rJ", "lFAqUL4udY-", "U3oBjH2XblO", "MwgBkh8DbM8", "wuuf0HpmWbS", "M2ZDbOf1ig", "-FJEsgRPstM", "6bkaqZcQN1f", "Y8XcoGT81cN", "KqoKI-tsuWC", "DpPtziADEBOn", "-F9P9AubNfb", "qKFJ7CFiX7v", "ld7Pp-S8dXv", "_Fh1vvXmKHj", "8MHz7rP4k5O", "nips_2022_VQ9fogN1q6e", "nips_2022_VQ9fogN1q6e", "nips_2022_VQ9fogN1q6e", "nips_2022_VQ9fogN1q6e" ]
nips_2022_0JV4VVBsK6a
Bringing Image Scene Structure to Video via Frame-Clip Consistency of Object Tokens
Recent action recognition models have achieved impressive results by integrating objects, their locations and interactions. However, obtaining dense structured annotations for each frame is tedious and time-consuming, making these methods expensive to train and less scalable. At the same time, if a small set of annotated images is available, either within or outside the domain of interest, how could we leverage these for a video downstream task? We propose a learning framework StructureViT (SViT for short), which demonstrates how utilizing the structure of a small number of images only available during training can improve a video model. SViT relies on two key insights. First, as both images and videos contain structured information, we enrich a transformer model with a set of object tokens that can be used across images and videos. Second, the scene representations of individual frames in video should ``align'' with those of still images. This is achieved via a Frame-Clip Consistency loss, which ensures the flow of structured information between images and videos. We explore a particular instantiation of scene structure, namely a Hand-Object Graph, consisting of hands and objects with their locations as nodes, and physical relations of contact/no-contact as edges. SViT shows strong performance improvements on multiple video understanding tasks and datasets, including the first place in the Ego4D CVPR'22 Point of No Return Temporal Localization Challenge. For code and pretrained models, visit the project page at https://eladb3.github.io/SViT/.
Accept
This paper proposes StructureViT (SViT), a network architecture that incorporates structured information from images to aid in video tasks. All four reviewers found several aspects of the paper interesting, including its ability to use information from just a few images to benefit video tasks. They noted the thorough experimentation on multiple datasets and also found the paper easy to follow. One of the reviewers had concerns about the positioning of the paper. The authors had multiple discussions with this reviewer and were able to comprehensively update their paper and address most concerns, which was commended by the reviewer. Another reviewer had concerns about comparisons and discussions with regard to previous work. The authors did a good job of addressing most of their concerns. One common concern that emerged from the reviews and discussions was the existence of prior work that incorporates structured information into video tasks, thus reducing the novel contributions of this paper. Having read the paper, reviews, and discussions carefully, I think the paper improves upon past work and has sufficient novel contributions that are valuable to readers. I recommend acceptance.
train
[ "A73QzbQ0R1s", "muKvD3wjTLA", "CvmHrLhmQY_", "q8_4mD9M4n_", "QX68AJfzMe", "3n4MQLEXpE", "LO0oYv4ZazO", "68RI0YBIKsg", "kApS2-Dt_W-", "RuQZ5MhQPeY", "PedhLFk7Vi6", "QMQ4o7ylOhB", "5N3K6ns-6BW", "RSDYUPuHjdA", "3mJPCPwS_rD", "mzVhFG5mr4I", "0VuCszxw0WQ", "5npGaw841Yb", "WZ-NqvgM93o", "D5FGdoyTwz", "yhZ04PMJsLp", "MdEFKx14pec", "3CRrZRToGo5" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for thoroughly answering all my concerns. \n\nI am reasonably convinced about the general applicability of their proposed approach to several tasks after their provided additional results.\n", " Thank you for your insightful comments. \n\nIn our method, we model objects and hands with object tokens. As objects are very general, the object tokens could be used for a wide range of entities in the scene. For example, in Figure 3 in the Supplementary, we can see that SViT localizes the human on the Diving48 dataset. Similar to Q4, the filtered split on SomethingElse does not include hands, but there are still meaningful objects that can be used to predict specific action labels. For example, in the following link, https://authors98741273.github.io/, we present a few video samples from that split that do not contain the hands, but the objects are still clearly important cues in recognizing the actions. \n\nAdditionally, we believe that we still observe improvements as our transformer model mainly benefits from having a structured representation rather than from particular implementation details (e.g., hands and objects). Several previous papers (such as `Zhang et al., CVPR'19` and `Wu and Krahenbuhl, CVPR'21`) suggest that structured representation leads to a more data-efficient model, which may explain why our approach is applicable even when hands or objects cannot be fully observed. Our focus is on building a transformer-based video model that incorporates a component for modeling structure, and this component can be supervised using static images. Following the discussion with reviewer `LCAU`, we revised the paper to emphasize that our model leverages structured representations (see the revised paper introduction `L22-L39` and related work `L339-L349`). \n\nWe hope our response above and the draft modifications have addressed all your comments. Please let us know if there are still issues. \n", " Thanks for your insightful comments. We appreciate your efforts in helping us improve our paper. The references in the current version have also been updated.", " I am somewhat confused by the takeaways of Q4, why does the proposed method show improvements even when no hands are present? Isn't that opposite of what is expected? \n\nIt would mean the improvements in all cases are actually coming due to some other reason? \n\nOverall, I am happy with most new experiments and as such will increase my score by 1 point.", " I thank the authors for comprehensively updating the positioning of the paper, which fully addresses my concerns.\n\nOne small comment: you seem to have forgotten to include references in the current version of the pdf. ", " Thank you for your insightful comments. The draft has been updated in L22-L39 in the introduction and in L339-L349 in the related work section. **This update is meant to reflect the fact that there is extensive prior work on structured representations in computer vision, and also work on utilizing image annotations in videos (e.g., by training detectors and using them for object representations in videos). Our focus is on building a transformer-based video model that has a component for modeling structure, and where this component can use static-image supervision.** We hope it better reflects your viewpoint. 
Please let us know if there are still issues with the updated revision.\n \nRegarding your final point, our answer \"Our conceptual and technical contributions focus on the interface between images and video, and modeling these within a single transformer model,\" is in reference to structured models. Namely, compared to the structured models in previous works, our approach utilizes a transformer to capture the joint structure.\n \nLast, the title has also been changed to reflect our focus on video transformers to \"Bringing Image Structure to Video **Transformers** via Frame-Clip Consistency of Object Tokens.\"", " Thank you for bringing this issue to our attention. We based the summary above on the sentence in your review: \"The proposed video transformer architecture with object tokens is novel to the best of my knowledge.\" In order to better reflect your viewpoint, we have updated the summary by removing your reviewer-id from this point.", " I thank the authors for their detailed response. Unfortunately, some of my main concerns are not addressed, making me inclined to decrease the score. I hope we can resolve those issues during the discussion.\n\nFirstly, the very valuable additional experiments demonstrate that the model mainly benefits from having a structured representation, not from particular implementation details (e.g. focusing on hands and objects). This is especially clear in the Diving experiments where the video domain has nothing to do with either hands or objects but the improvements are the largest. The authors suggest that this is due to the proposed structured representation being more data efficient. This is indeed in line with claims in the prior work (e.g. Zhang et al., CVPR'19, but also Wu and Krähenbühl, CVPR'21 is extremely relevant and should be discussed in detail). These results further reinforce my point that this paper is not properly positioned with respect to the prior work. It should be upfront about discussing structured video representations proposed in the past and only then discuss the proposed approach for implementing those ideas with a transformer. \n\nMy previous comments about reducing the novelty claims are also not addressed in the revised manuscript. In particular, in L22-33 the authors do not mention that methods that can utilize image annotations to learn structured video representations already exist (e.g. Zhang et al., CVPR'19). This work is not proposing anything conceptually new here, but simply adapts these ideas to transformer architectures. The introduction has to be updated accordingly. \n\nThe same goes for the added discussion in L331-335: it omits the most important detail - [77] also uses static image object annotations, not labeled video frames, to build its representation, making it significantly closer to the proposed work than the other references. This needs to be clearly emphasized and also discussed in the introduction. \n\nFinally, in the rebuttal the authors claim that \"Our conceptual and technical contributions focus on the interface between images and video, and modeling these within a single transformer model.\" 
Modeling images and videos with a single model constitutes neither a conceptual nor a technical contribution since all existing video architectures essentially treat videos as images with an additional dimension (time), so they can naturally be applied on both videos and images without significant modifications.", " I would like to point out that my review explicitly states that the proposed approach \"is sound, though not novel\", and I further elaborate on that as follows: \"There is no conceptual novelty in the proposed approach, only technical novelty of adapting those ideas to the transformer architecture (which is still a valuable contribution).\" Thus I believe that the authors' summary of my novelty assessment of the work is not entirely accurate :)", " **Q5: When comparing with MViTv2, is it a fair comparison because additional information/computation has been used in the proposed approach?\n\\\nA5:**\nIn our work, we use additional image data, and this data is not required to be in correspondence with the video dataset. Additionally, our approach requires little additional data (2% of the video frames are sufficient to achieve a reasonable improvement, as shown in Table 3b). Therefore, we believe that the comparison is reasonable. \n\\\n\\\nIn addition, we provide the MViTv2-MT baseline, which utilizes the same additional information as we did. MViTv2-MT is a multitask baseline that consists of the HAOG detection task as well as the video downstream task. Tables 1 and 2 show that SViT outperforms it in all tasks evaluated (+2 on SomethingElse, +2.7 on Ego4D, +0.6 on SSv2, +4.3 on Diving48). Regarding computation, the increase in the computation cost of our approach compared to MViTv2 is relatively small: +0.4% in FLOPs and +5% in parameters (we incorporated this information in the revised manuscript in A.2 in Supplementary in L754-L756).\n", " **Q4: Please provide an analysis of the method's performance on Diving48 to explain why the proposed approach demonstrated the largest improvements on this dataset despite it not containing any objects.\n\\\nA4:** We agree that Diving48 is different from the other datasets since it does not contain any objects or clearly visible hands. We believe that its performance can be explained by two factors: (a) It detects the diving human by using the hand object token. (b) It learns to reason about movement trajectories during training, and thus can model the diving trajectory.\n\\\n\\\nIn support of the first point above, in Figure 3 in the Supplementary, we can see that SViT localizes the human on the Diving48 dataset. To further quantify the contribution of such localization, we performed an experiment removing the \"hand\" tokens from the HAOG annotations during training. We observe a resulting degradation of 1.7 in top-1 accuracy. This suggests that training on hand detections helps in the Diving dataset. Additionally, when we removed the “object” tokens we saw a degradation of 0.6. This suggests that hand detections are more important for this dataset. We incorporated these experiments in the revised manuscript in A.2 in Supplementary, See L757-L762. 
The fact that hand detection helps human-body detection suggests that SViT is capable of distilling structured information from objects of similar but different nature (hands vs. humans) with a high degree of transferability.\n\\\n\\\nAs an additional comment, Diving48 is considered a small dataset (15K videos, compared to 70K in Ego4D, 169K in SSv2, or 55K in SomethingElse), and we have already demonstrated that SViT-2% is highly data efficient. Therefore, we believe that the size of the dataset is one of the main reasons why SViT is able to achieve good results on Diving48.\n\\\n\\\n**Q5: The authors refer to their approach as multi-modal, but images and videos are not different modalities. These claims should be removed.\n\\\nA5:** Thank you for your suggestion. We revised the manuscript to clarify this point (See L52, L68, L122, L295, L298, L302, L306, L309).\n\\\n\\\n**Q6: It's not clear whether the benefits of the proposed approach are limited to the ego-centric domain.\n\\\nA6:** We demonstrated that our approach also improved datasets that were not egocentric. We believe that our method is sufficiently robust such that when the hands and objects are not visible, the performance is not significantly affected, while when they are visible, the performance is boosted.", " Thank you for your insightful and positive comments regarding our paper. We were able to improve our submission based on your comments. As a result of our responses and draft modifications, we hope that you will consider raising your score.\n\\\n\\\n**Q1: Please report results on a 3rd person action recognition dataset of your choosing which does require object reasoning to demonstrate to what degree the learned object representations transfer to the 3rd person scenario.\n\\\nA1:** Thank you for your suggestion. Based on your comment, we evaluated the accuracy on 3rd-person video datasets, such as Kinetics-400 and AVA. Our results for Kinetics-400 with SViT showed an improvement of +1.5 over MViTv2. As for AVA, we evaluated the mAP and received an improvement of +0.7 over MViTv2. We incorporated these experiments in the revised manuscript in A.4 in Supplementary, See L771-L773.\n\\\n\\\nFinally, we demonstrate to what extent the learned object representations can be used for auxiliary image tasks. We evaluate the ability of object tokens to be utilized explicitly for the auxiliary task as a simple detector of hands and objects in images. This is accomplished by predicting the detections on SomethingElse based on the learned object tokens. The learned object tokens were compared to the MViTv2 model extended with regression and detection heads (similar to DETR [A]). Our model achieved an mAP of 16.8, while the proposed baseline achieved a similar result with an mAP of 15.5. These results suggest that the object tokens learn meaningful and useful representations. We incorporated these experiments in the revised manuscript in A.2 in Supplementary, See L738-L744.\n\\\n\\\n[A] \"End-to-End Object Detection with Transformers\", ECCV 2020, Carion et al.\n\\\n\\\n**Q2: There is no conceptual novelty in the proposed approach, only technical novelty of adapting those ideas to the transformer architecture (which is still a valuable contribution).\n\\\nA2:** Our approach demonstrates how utilizing the structure of a small number of images only available during training can improve a video model. In our method, we propose shared object prompts that are used in the image domain and the video domain. 
In this way, the shared object prompts are utilized to learn the shared structured representation between the video and image domains, which enhances the self-attention layers with the structured representation for the main video task. As mentioned by other reviewers, this is a form of “a clever solution to distill information” (`DF7L`) by exploiting the design of the transformer architecture to process video and image domains using the same set of shared weights. \n\\\n\\\n**Q3: Please improve the overview of the prior work to include methods that use out-of-domain object detection labels to improve action recognition via object graph reasoning and tone down the novelty claims accordingly.\n\\\nA3:** We will improve the overview of the prior work in the revised manuscript (See L324-L335), along with adding the citation for the paper you mentioned: \"A Structured Model For Action Detection\" by Zhang et al., CVPR'19. As we briefly mentioned in L319-L322, there has been a considerable amount of work on improving action recognition using object graph reasoning [5, 24, 30, 31, 37, 63, 68], and the above study should be included in this list. Indeed, these works show the importance of using object representations and their interactions in a video model. Our conceptual and technical contributions focus on the interface between images and video, and modeling these within a single transformer model. We have expanded and edited the related work accordingly.", " Thank you for your insightful and positive comments regarding our paper. We were able to improve our submission based on your comments. Next, we address your concerns.\n\n**Q1: What procedure/losses did the authors employ for the pre-training task?\n\\\nA1:** In the pre-training procedure, we use only the $L_{Vid}$ (cross-entropy loss for video classification) and $L_{HAOG}$ (HAOG loss for detecting HAOGs) losses, while we do not use the frame-clip consistency loss ($L_{Con}$). The hyperparameters for the pre-training procedure are identical to those that MViT uses.\n\\\n\\\n**Q2: One of the main weaknesses of the proposed technique is the requirement of a large number of labeled images (~100K) to distill from. While in Table 1 and Table 3, the authors show the effect of reducing the number of annotated images for the Compositional and Few-Shot Action Recognition tasks, I am curious to know how these effects play out for the experiments presented in Table 2 as well. Does the authors' claim that the number of images can be reduced to 2% (or some similar small number) of the annotated training images hold in general or is it only true for this task?\n\\\nA2:** Thank you very much for your suggestion. Following your comment, we incorporated these results in the revised manuscript in A.2 in Supplementary (See L725-L733 and Table 6c).\nWe evaluated SViT-2% on the Ego4D, SSv2, and Diving48 datasets. The SViT-2% in Diving48 shows an improvement of +5.8 compared to the MViTv2 baseline (while SViT shows an improvement of +6.7). The SViT-2% in SSv2 shows an improvement of +0.6 compared to the MViTv2 baseline (while SViT shows an improvement of +0.8). The SViT-2% in Ego4D shows an improvement of +1.8 compared to the MViTv2 baseline for object state change classification (while SViT shows an improvement of +2.2) and an improvement of 0.163 compared to the MViTv2 baseline for object state change temporal localization (while SViT shows an improvement of 0.187). 
We also include these results in the table below.\n\n| Dataset | SViT-2% | SViT | MViTv2|\n| ----------- | ----------- | ----------- | ----------- |\n| Diving48| 78.9 | 79.8 | 73.1|\n| SSv2| 68.7 | 68.9 | 68.1|\n|Ego4D Cls. | 73.4 | 73.8 | 71.6|\n|Ego4D Loc. | 0.539 | 0.515 | 0.702|\n\n\n\\\nAs can be seen from Table 1, the SViT-2% also appears to be effective and performs similarly on the other datasets. Accordingly, we conclude that our claim is generally valid based on these experiments.\n\\\n\\\n**Q3: As an additional ablation, I would be curious to see the results of an experiment where the authors first learn a vision transformer to perform the HAOG task alone and then fine-tune its backbone network for the video-related task.\n\\\nA3:** We evaluated this experiment and achieved 62.3 (compared to 63.3 for MViTv2). This indicates that training with HAOGs as a form of “distillation,” as we do in SViT, is indeed important. Following your comment, we incorporated these results in the revised manuscript in A.2 in Supplementary (See L734-L737).\n\\\n\\\n**Q4: Is the requirement for a large number of annotated images a strict one for the success of the proposed method? If it is, I would like to see the authors acknowledge this limitation more clearly.\n\\\nA4:** We demonstrated that our method does not require a large number of annotated images. See A3 for reviewer oj8q. \n\\\nThe results indicate that using too few annotated images (1 image per 100 videos) may result in degradation, but using a relatively small number of images (1 image per 10 videos) is sufficient to achieve good improvement. \n\\\n\\\n**Clarity:** The manuscript has been revised to correct the errors you pointed out.", " **Q13: I feel that the authors should emphasize the limitation of using hand-object graphs, which limits the proposed method to a subset of videos where hands are prominent and not occluded.\n\\\nA13:** It is possible that HAOGs may limit the model to certain domains or scenes. However, as we demonstrate empirically here, our method is robust even for datasets without prominent hands, such as Diving48, AVA, and Kinetics, or even when the hands or objects are occluded (see Q4). As a result of this empirical observation, we believe that our method is robust for a wide range of natural videos. In particular, we believe that our method is sufficiently robust such that when the hands and objects are not visible, the performance is not significantly affected, while when they are visible, the performance is boosted. It should be noted, however, that our approach can be applied to any structural scene type, and therefore, modifying the structure annotation could be an exciting direction to inspire future work.\n\\\n\\\n**Q14: Why does Diving48 show MViT performing so much worse than SlowFast and TimeSformer?\n\\\nA14:** It is possible that SlowFast performs better than MViT because Diving48 is considered a relatively small dataset, and therefore we assume that ConvNets perform better since they have a better inductive bias than Transformers. Also, TimeSformer proposed a data-efficient divided space-time attention, which could also be more effective than vanilla self-attention when dealing with small datasets.\n\\\n\\\n**Q15: (Minor) In Section 3.1, the dataset numbering is incorrect. (3) is skipped.\n\\\nA15:** Thank you. We have revised the manuscript in Section 3.1 to address this issue.", " **Q7: Moreover, it is unclear how the proposed method can be extended to more general action videos such as Kinetics data. 
\n\\\nA7:** Following your comment, we evaluated the accuracy on Kinetics-400 with SViT and received an improvement of +1.5% over MViTv2. We incorporated this experiment in the revised manuscript in A.4 in Supplementary, See L771-L773.\n\\\n\\\n**Q8: The relative performance improvements in Table 1 are very small, and it is unclear if the differences are actually significant.\n\\\nA8:** Action recognition improvements are typically in the range of 1-2%, even for high-impact works such as MViT (MViTv2 outperforms MotionFormer by about 2-3%; see Table 1). This shows that the task is challenging, not that our performance improvement is insignificant. Furthermore, we ran the SViT-ID experiment on SomethingElse (Table 1) ten times with different seeds and calculated a 95% confidence interval of $65.8 \\pm 0.44$, while MViTv2 achieved $63.3 \\pm 0.47$, indicating that the SViT performance is consistently higher than the reported result of MViTv2. We incorporated these results in the revised manuscript in A.3 in Supplementary (See L765-L768). We will add the variance results to the reported mean performances in Table 1 for the camera-ready.\n\\\n\\\n**Q9: Improvements are not always consistent in Table 1.\n\\\nA9:** As explained above, Table 1 shows the results of experiments performed on the SomethingElse dataset. It can be seen that the SViT models (SViT-DD, SViT-2%, SViT-ID; we exclude SViT-SFT, which is a weaker SViT variant that does not use auxiliary images in fine-tuning) outperform the MViT-MT model (which is the strongest baseline) in all settings. The three SViT models perform similarly, up to statistical noise (note that noise here is on the order of 0.4% as noted in A8, probably due to the small size of the dataset). We do not view this as a problem, but rather as conveying that the main improvement in SViT comes from the use of image supervision, and the particular domain and amount of supervision have a minor effect on performance. \n\\\n\\\n**Q10: Naming conventions: I don't think it is fair to say annotated video frames count as image-annotations. They are still video annotations, just that they are sparse.\n\\\nA10:** The thinking behind this naming convention is that annotations of individual frames in videos can be referred to as “image annotations” because the temporal order is not used. This is also helpful in terms of presentation, in order to differentiate between video and image supervision. The main difference between video frames and a batch of images is the temporal information. Since we do not use the temporal order of the annotated video frames, we refer to them in the paper as “image annotations”. Following your comment, we incorporated a discussion of this point in the revised manuscript. See L870-L873 in the supplementary.\n\\\n\\\n**Q11: On data used: According to L196, video frames are annotated, but no details of how they are obtained are provided. Is it completely automatic / semi-automatic / purely human annotated?\n\\\nA11:** Following your comments, the manuscript has been revised in Section 3.1 to include these details, See L209-L212. The object boxes collected from SSv2, 100DOH, and Ego4D represent the original data, which has been manually annotated (as reported in the original papers [50, 60, 28]). In SSv2 and Ego4D, contact relations between the object and hand are not annotated, so we automatically assign the closest object to a hand for each hand. 
The contact relations for 100DOH are available in the original publication.\n\\\n\\\n**Q12: (Minor) Given that the model works for both images and videos, the authors could show results on human-object interaction based datasets such as [Ref4]\n\\\nA12:** Thank you for your suggestion. \n\\\n\\\nWe agree with the reviewer about the HOI suggestion. We would like to emphasize that the object prompts in our approach are used in order to leverage structured information from images into videos. We believe that it is an interesting future direction to use structured information from video to image in our method, and we will leave this to future work.\n\\\n\\\nNevertheless, we can evaluate the ability of object tokens to be utilized explicitly for the auxiliary task as a simple detector of hands and objects in images. This is accomplished by predicting the detections on SomethingElse based on the learned object tokens. The learned object tokens were compared to the MViTv2 model extended with regression and detection heads (similar to DETR [A]). Our model achieved an mAP of 16.8, while the proposed baseline achieved a similar result with an mAP of 15.5. These results suggest that the object tokens learn meaningful and useful representations. Following your comments, the manuscript has been revised in A.2 (See L738-L744) to include these results.\n\\\n\\\n[A] \"End-to-End Object Detection with Transformers\", ECCV 2020, Carion et al.\n\n\n", " **Q4: Use of hand-object graphs seems to be relevant for very specific domains. What if hands are not visible, or they are not interacting with a specific object?\n\\\nA4:** Following your comment, we incorporated the following experiments in the revised manuscript in A.2 in Supplementary (See L701-L711, Table 6b).\n\\\nWe suggest two experiments to investigate your hypothesis in order to verify the usefulness of our approach. We first test the model on a filtered test split, which contains only videos without hands. We then test the model on a filtered split containing only videos without objects. In the experiments, we used SomethingElse, which has dense ground-truth annotations, allowing us to filter frames within the videos. Due to the fact that SomethingElse contains many objects without annotations, there is still data available for training (the numbers are provided below).\n\\\n\\\n(i) After filtering out the videos containing (annotated) hands, we tested our model and the MViTv2 model on the filtered hand split (consisting of 5922 videos). MViTv2 achieved an accuracy of 64.6 while our model achieved 66.5. This implies an improvement of +1.9. \n\\\n(ii) After filtering out the videos containing (annotated) objects in more than 40% of the frames (consisting of 5595 videos), we tested our model and the MViTv2 model on the filtered object split. MViTv2 achieved an accuracy of 64.9 while our model achieved 67.0. This implies an improvement of +2.1.\n\\\n\\\nWe can observe that our model outperforms the baseline even when there are no objects or hands in the videos, demonstrating the robustness of our approach. The total improvement (+1.9 and +2.1) is also slightly lower than before the filtering (+2.5), which indicates that there has been a slight degradation. Even so, it is still valuable to use the hand-object graphs. \n\\\n\\\n**Q5: In L163, the motivation for frame-consistency loss is that image loss may not transfer to video loss. It is unclear if the proposed solution is the best way to approach this. 
For instance, one could simply perturb the image temporal position embedding to any random index. Alternatively, an object tracking system could interpolate / extrapolate the bounding boxes. Some experiments comparing such additional ways are needed.\n\\\nA5:** Following your suggestion, we include additional baselines in the revised manuscript in A.2 in Supplementary (See L712-L724). Specifically, we did the following: (i) We perturb the image temporal position embedding without the consistency loss. We refer to this version as SViT-Perturb (and similarly SViT-Perturb-DD and SViT-Perturb-ID). (ii) We predict the HAOG annotations, which are extrapolated from one random frame of a video. This serves as additional supervision (without the consistency loss) since the HAOG annotations correspond to the video frames (we note that SViT does not require such correspondence since it uses only HAOG annotations from single images). (iii) We predict the HAOG of a random frame in a video, and then duplicate it over the temporal dimension and use it in the same manner as in the consistency loss.\n\\\n\\\nWe find that these three baselines lead to worse performance. (i) We obtained SViT-Perturb-DD with 64.1 (while SViT-DD obtained 65.1), and SViT-Perturb-ID with 65.2 (while SViT-ID with consistency loss got 65.8). (ii) The proposed baseline achieved 65.0 compared to our SViT-ID, which achieved 65.8 (and does not require correspondence). (iii) The proposed baseline achieved 65.1 compared to our SViT-ID, which achieved 65.8. Taken together, this demonstrates the importance of our frame-clip consistency loss. \n\\\n\\\n**Q6: The authors should provide SViT-2% results for Table 2a,b,c as well.\n\\\nA6:** Thank you very much for your suggestion. \n\\\n\\\nWe incorporated these results in the revised manuscript in A.2 in Supplementary (See L725-L733, Table 6c). Specifically, we have added the SViT-2% results for the Ego4D, SSv2, and Diving48 datasets. The SViT-2% in Diving48 shows an improvement of +5.8 compared to the MViTv2 baseline (while SViT shows an improvement of +6.7). For SSv2, the SViT-2% experiment results in an improvement of +0.6 compared to the MViTv2 baseline (while SViT shows an improvement of +0.8). For Ego4D, the SViT-2% experiment results in an improvement of +1.8 compared to the MViTv2 baseline (while SViT shows an improvement of +2.2) for object state change classification and an improvement of 0.163 compared to the MViTv2 baseline (while SViT shows an improvement of 0.187) for object state change temporal localization. We also include these results in the table below. It can be seen that the SViT-2% also appears effective and provides similar improvements on other datasets to the results shown in Table 1.\n\n| Dataset | SViT-2% | SViT | MViTv2|\n| ----------- | ----------- | ----------- | ----------- |\n| Diving48| 78.9 | 79.8 | 73.1|\n| SSv2| 68.7 | 68.9 | 68.1|\n|Ego4D Cls. | 73.4 | 73.8 | 71.6|\n|Ego4D Loc. | 0.539 | 0.515 | 0.702|\n", " Thank you for your insightful comments regarding our paper. We were able to improve our submission based on your comments. We hope that our responses below and the draft modifications have addressed all of the comments made in the review. Therefore, we would appreciate it if you would consider updating your score. Next, we address your concerns below.\n\\\n\\\n**Q1: The authors should distinguish their work better compared to previous work. 
For instance, the idea of using object tags has been previously used in [44], so improvements just based on object tags are not exactly unexpected. Similarly, [Ref1] shows that using entity prompts can improve video-text pretraining.\n\\\nA1:**\nThank you for your suggestion. Following your comments, the manuscript has been revised in L326-L328 to include these papers (the discussion below will be included in the 10-page camera-ready version). In contrast to [44] and [Ref1], SViT does not require object labels, supervision of textual descriptions, or the use of random region crops during training or inference. Furthermore, both [44] and [Ref 1] focus on vision and language, whereas we focus on the video domain. Last, they utilize external pretrained models that were trained on large datasets, while SViT does not.\n\\\n\\\nMore concretely, [44] uses explicit object representations similar to ours, but unlike ours, [44] uses an external pre-trained detector to initialize the object representations and uses them as an input, while our approach learns them only during training. In addition, in [44], object tags (classes) are used, which serve as object labels, while we only use general \"object\" and \"hand\" tags (and use them as supervision, not as an input).\n\\\n\\\n[Ref 1] uses pseudo labels of random region crops from a pretrained prompter module, pretrained on 5.5M video-text pairs and inspired by CLIP, as a source of supervision for video and language training. Compared to our approach, this leverages a much greater amount of data. As mentioned above, this work focuses on vision and language.\n\\\n\\\n**Q2: Structured representations for videos using semantic roles have been previously investigated, see [Ref2] and [Ref3], but discussion on them is missing.\n\\\nA2:**\nFollowing your comments, the manuscript has been revised in L332-L334 to include these works (the discussion below will be included in the 10-page camera-ready version). Structured representations for videos have been studied in the past (e.g., ActionGenome [37], etc.), and this is not our main contribution, which is to leverage structure from images to video. [Ref2] proposes a new framework for video understanding using visual semantic role labeling. They discussed a specific task and proposed a model designed especially for it. Their proposed task models entities in videos. Their approach differs from ours in that our structured representation models the hand-object interactions within a single scene (within a video clip), whereas [Ref2] models the relation between entities appearing in different clips. [Ref 3] proposes a new task that includes the extraction of events from both video and text. In their approach, they focus on training with pairs of \"video+text\" data and build on the alignment of text and video in these pairs. In our approach, we do not assume paired data, but instead only a small amount of image-only supervision in addition to the video task. We do not even require any alignment between the images and the video task (i.e., images may come from a variety of sources).\n\\\n\\\n**Q3: L71 suggests that in training we have access to video-labels and structured scene annotations. It is not clear how scalable this is, how much preprocessing time and annotation it requires.\n\\\nA3:**\nAs mentioned in L21-L24, several works [5, 24, 30, 31, 37, 63, 68] have shown that “object-centric” models (ORViT, STRG, STIN, etc.) perform well on action recognition tasks. 
However, these models require structured annotations of video (as well as video labels), which are clearly very expensive, time-consuming, and not scalable. Our proposed approach uses relatively sparsely labeled images (as in the SViT-2% experiment in Table 1) in contrast to the above object-centric models. This allows our work to be much more scalable. Furthermore, as noted by reviewer `oj8q`, images are considered \"relatively low-cost annotations,\" which strengthens our motivation to incorporate them into our approach.\n\n\n", " Thank you for your insightful comments regarding our paper. We were able to improve our submission based on your comments. Next, we address your concerns.\n\n**Q1: What is the explicit interaction between the main video task and the auxiliary image task, and how is it not applicable to other approaches like [26, 33, 61]?**\n\\\n**A1:** In our method, we propose shared object prompts that are used in the image domain and the video domain. In this way, the shared object prompts are utilized for learning the shared structured representation between the video and image domains, which enhances the self-attention layers with the structured representation for the main video task. As mentioned by other reviewers, this is a form of “a clever solution to distill information” (`DF7L`) by “exploiting the design of the transformer architecture” (`DF7L`, `LCAU`). Our focus is on leveraging image-level scene structure for video understanding. [26, 61] discuss the relation between different image-level tasks, the information they share, and how they can be utilized for other image-level tasks. [33] describes a multi-task learning approach for reinforcement learning, which is not applicable to the domain of real-life videos. In contrast to these works, our approach aims to demonstrate how we can transfer the structure of a scene from an image to a video.\n\\\n\\\n**Q2: It appears in Table 3 (a,b) that the amount of annotated images is not critical, and neither is the type of HAOG attributes. Does it indicate that the auxiliary task is not functioning as additional information, but more as a regularization? \n\\\nA2:** To examine the regularization effect of our approach, we suggest learning HAOGs without any useful information. Thus, we run an experiment in which the HAOGs are completely random. This means that, for each image, a random HAOG is generated by sampling the boxes and their relationships uniformly. We refer to this experiment as SViT-Random-ID. The SViT-Random-ID result on SomethingElse is 50.6 compared to 63.3 for the SViT-ID baseline. This demonstrates that predicting the actual HAOG attributes is important, but that not many annotations are necessary for the auxiliary task. One explanation may be that learning the shared structured representation between the domains is sample efficient, and does not require many annotated images, which we view as a big advantage of our approach. We incorporated the SViT-Random-ID experiment in the revised manuscript in A.2 in Supplementary (See L745-L753).\n\\\n\\\n**Q3: It might be worth systematically demonstrating what the minimum requirements are to make the model work, maybe not the best, but reasonably well. \n\\\nA3:**\nWe have revised the manuscript in A.2 in Supplementary (See L694-L700 and Table 6a) in response to your comments. We explored the minimum requirements to make our model work. 
We provide the following experiments: a ratio of 1 image to X videos (where X is 1, 10, and 100): 65.6 (1-to-1), 64.8 (1-to-10), and 61.6 (1-to-100). We also include these results in the table below. The results indicate that using too few annotated images (1 image per 100 videos) may result in degradation, but using a relatively small number of images (1 image per 10 videos) is sufficient to achieve good improvement.\n| Images-to-Videos Ratio| top-1| \n| :-----------: | ----------- | \n| 1-1 | 65.6 | \n| 1-10| 64.8 | \n| 1-100 | 61.6 |\n\n\\\n\\\n**Q4: Is it possible to use the model in a zero-shot setting and compare with CLIP on action recognition tasks like Kinetics?\n\\\nA4:**\nFollowing your comment regarding Kinetics, we evaluated the accuracy in the standard setting (not zero-shot) of Kinetics-400 with SViT and received an improvement of +1.5% over MViTv2 (we incorporated this experiment in the revised manuscript in A.4 in Supplementary, See L771-L773). Regarding zero-shot, we assume you are asking about generalization to new action categories with textual descriptions. CLIP may be used in this direction, but this is an orthogonal direction to our current focus, so it is left for future work. \n", " We thank the reviewers for their insightful comments. Three reviewers support accepting the paper. We are encouraged that they found the proposed approach for incorporating structured information about hand-object interaction in images for video-related tasks to be “novel” (`DF7L`), “well-motivated” (`oj8q`), and \"a clever solution to distill information,\" as well as a \"novel attempt to inject information about structured hand-object interactions\" (`DF7L`). They observed “consistent improvement gain” (`oj8q`) and “non-negligible/significant” improvements across multiple tasks and datasets (`DF7L`, `LCAU`). They also find that the ablation studies provide \"a reasonable breakdown of improvements coming from different components\" (`oj8q`) and a \"thorough\" analysis (`LCAU`). Finally, they find the paper “well written and easy to follow” (`oj8q`, `LCAU`), as well as “clear and organized” (`DF7L`). Next, we address the concerns of each reviewer separately.\n\n", " This paper proposes an approach to use annotated images from a different domain to help with the training of video-based models. The transformer can take both inputs and there are various objectives involved during training for images and videos, respectively. During inference, the only input is video, and the object prompts are used for label prediction. The paper is generally well-written and easy to follow. It is well-motivated to use annotated images due to the relatively low-cost annotations. The experiments demonstrate consistent performance gains over existing methods like MViTv2, and the ablations provide a reasonable break-down of improvements coming from different components. 1) I think the motivation of this approach is clear, but in terms of the technical contributions, my understanding is that the joint training of video and image inputs is enabled by ViT's tokenization, and there are other works jointly training images and videos [6, 25, 73, 26, 33, 61]. 
The authors mentioned that this work \"involves explicit interactions between the main video task and the auxiliary image task\", so can you further elaborate on what the explicit interaction is and how it is not applicable to other approaches like [26, 33, 61]?\n\n2) For experimental results, it appears in Table 3 (a) and (b) that the amount of annotated images is not critical, and neither is the type of HAOG attributes. Does it indicate that the auxiliary task is in fact not functioning as providing additional information, but more as a regularization factor? In other words, it might be worth systematically demonstrating what the minimum requirements are to make the model work, maybe not the best, but reasonably well (e.g., from ~63 to ~65 in the Table 3 examples). This will provide guidance when there's no annotation available but possible annotations can be collected with minimal effort.\n\n3) Is it possible to use the model in a zero-shot setting and compare with CLIP on action recognition tasks like Kinetics? Also, when comparing with MViTv2 results, is it a fair comparison because additional information/computation has been used in the proposed approach? I think they properly addressed the limitations as mentioned in \"We do not anticipate a specific negative impact, but, as with any Machine Learning method, we recommend to exercise caution.\"", " Brief Summary: The paper tackles a few-shot / semi-supervised setup where a small number of training samples about hand-object interaction are provided to drive downstream video performance. The two key ideas are to use hand-object information (bounding boxes and contact edges) and to use a frame-consistency loss so that the supervision is propagated to other frames which don't have annotations.\n\nExperiments are carried out on Something-Something, Something-Else, Ego4D, Diving48, and show improvements over compared baselines. Pros:\n\n1. Interesting finding that using 2% of the in-domain data is comparable to using the entire in-domain data. \n\n2. Multiple datasets are considered for evaluation. \n\n\nCons:\n\n1. On naming conventions: I don't think it is fair to say annotated video frames count as image-annotations. They are still video annotations, just that they are sparse. \n\n2. On relations to previous work:\n\n(i) The authors should distinguish their work better compared to previous work. For instance, the idea of using object tags has been previously used in [44], so improvements just based on object tags are not exactly unexpected. Similarly, [Ref1] shows that using entity prompts can improve video-text pretraining. \n\n(ii) Structured representations for videos using semantic roles have been previously investigated, see [Ref2] and [Ref3], but discussion on them is missing.\n\n3. On the task setup: \n\n(i) L71 suggests that in training we have access to video-labels and structured scene annotations. It is not clear how scalable this is, how much preprocessing time and annotation it requires.\n\n(ii) Use of hand-object graphs seems to be relevant for very specific domains. What if hands are not visible, or they are not interacting with a specific object?\n\n4. On data used: According to L196, video frames are annotated, but no details of how they are obtained are provided. Is it completely automatic / semi-automatic / purely human annotated?\n\n5. On model and experiment:\n\n(i) In L163, the motivation for frame-consistency loss is that image loss may not transfer to video loss. 
It is unclear if the proposed solution is the best way to approach this. For instance, one could simply perturb the image temporal position embedding to any random index. Alternatively, an object tracking system could interpolate / extrapolate the bounding boxes. Some experiments comparing such additional ways are needed.\n\n(ii) The relative performance improvements in Table 1 are very small, and it is unclear if the differences are actually significant.\n\n(iii) Improvements are not always consistent, for instance, in Table 1, MViTv2-MT outperforms SViT-SFT on Top-1 base. Similarly, and perhaps more surprisingly, SViT-2% outperforms SViT-ID in the 10-shot case. \n\n(iv) The authors should provide SViT-2% results for Table 2a,b,c as well. \n\n(v) (Minor) Given that the model works for both images and videos, the authors could show results on human-object interaction based datasets such as [Ref4]\n\n\n[Ref1]: Li, Dongxu, Junnan Li, Hongdong Li, Juan Carlos Niebles, and Steven CH Hoi. \"Align and Prompt: Video-and-Language Pre-training with Entity Prompts.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4953-4963. 2022.\n\n[Ref2]: Sadhu, Arka, Tanmay Gupta, Mark Yatskar, Ram Nevatia, and Aniruddha Kembhavi. \"Visual semantic role labeling for video understanding.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5589-5600. 2021.\n\n[Ref3]: Chen, Brian, Xudong Lin, Christopher Thomas, Manling Li, Shoya Yoshida, Lovish Chum, Heng Ji, and Shih-Fu Chang. \"Joint Multimedia Event Extraction from Video and Article.\" arXiv preprint arXiv:2109.12776 (2021).\n\n[Ref4]: Chao, Yu-Wei, Yunfan Liu, Xieyang Liu, Huayi Zeng, and Jia Deng. \"Learning to detect human-object interactions.\" In 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 381-389. IEEE, 2018.\n\n========================\nGiven the extensive response by the authors, I raise my score by 1 point. In particular, the experiments on Kinetics data and the additional ablative studies are helpful.\n\nHowever, I am still confused as to why, according to the new experiments, the proposed method shows improvements in the cases where no hands are visible. This might suggest the real improvements are due to some other hyper-parameter, and not HAOG. Q1. (Minor) In Section 3.1, the dataset numbering is incorrect. (3) is skipped.\n\nQ2. Why does Diving48 show MViT performing so much worse than SlowFast and TimeSformer? \n\n I feel that the authors should emphasize the limitation of using hand-object graphs, which limits the proposed method to a subset of videos where hands are prominent and not occluded. Moreover, it is unclear how the proposed method can be extended to more general action videos such as Kinetics data.", " This paper proposes the StructureViT (SViT) architecture designed to incorporate structured information about hand-object interaction labels in images to aid in video-related classification tasks. The authors propose a modified video-transformer architecture that, in addition to predicting video-level labels, also predicts the hand-object graph structure for annotated images via additional learned 'object tokens'. To distill information of the hand-object graphs of images to the video domain the authors further propose a Frame-Clip Consistency loss, where the hand-object graphs predicted for a video and its individual frames are forced to be consistent. 
The authors assess their technique on several different datasets and video-related tasks and show improvements in accuracy over the state-of-the-art (SOTA) across the board. Originality: The proposed work is novel in many aspects. It is novel in attempting to exploit the transformer architecture's seamless ability to process multiple domains (video and image in this case) using the same set of shared weights. It is also novel in proposing a clever solution to distill information from the image domain to the video domain via the frame-clip consistency loss. Lastly, it is novel in attempting to inject information from structured hand-object interaction labels to improve several downstream video-related tasks.\n\nQuality: All methodology and experiments in the paper are technically sound and correct.\n\nClarity: The material is presented in a clear and organized fashion. Some editorial errors are noted below. \n- ln 30: \"perfect\" should be \"perfectly\"\n- ln 129: $r_i$ should be $r_t$\n- ln 130: suggested to write this as $T \\times ( H \\times W + n)$ for improved clarity\n- ln 199: should be \"to treat them\"\n- ln 305: \"share\" should be \"shared\"\n\nSignificance: This work presents a significant result towards advancing multi-modality processing and information sharing with transformer architectures. Transformers are quickly becoming the dominant architecture for visual information processing, surpassing CNNs. They present the additional advantage of being able to align multi-modality information much more seamlessly than CNNs. Hence, this work is an interesting exploration that advances our understanding of the use of transformers for multimodal (image and video) information sharing. The proposed approach is fairly general and may be applicable, with minor modifications, to other video- and image-related tasks besides HAOG and the video-related tasks considered by the authors. Q. What procedure/losses did the authors employ for the pre-training task?\n\nQ. One of the main weaknesses of the proposed technique is the requirement of a large number of labeled images (~100K) to distill from. While in Table 1 and Table 3 the authors show the effect of reducing the number of annotated images for the Compositional and Few-Shot Action Recognition tasks, I am curious to know how these effects play out for the experiments presented in Table 2 as well. Does the authors' claim that the number of images can be reduced to 2% (or some similar small number) of the annotated training images hold in general, or is it only true for this task?\n\nQ. As an additional ablation, I would be curious to see the results of an experiment where the authors first learn a vision transformer to perform the HAOG task alone and then fine-tune its backbone network for the video-related task.\n\n------------------\nPost-rebuttal:\n\nI thank the authors for addressing all my concerns. However, after having considered all the other reviews, I share the concerns of the other reviewers in terms of the limited novelty of the current work, given the existence of prior non-transformer-based works that incorporate structured information into video-based tasks. Hence, I have lowered my original rating by 1 point. Is the requirement for a large number of annotated images a strict one for the success of the proposed method? 
If it is, I would like to see the authors acknowledge this limitation more clearly.", " The authors extend prior work on utilizing scene structure information to improve action recognition performance to transformers. In particular, their model is jointly trained to detect hands and objects in images and to classify activities in videos. The images can come from the same or from a different dataset. Unlike a naive multi-task approach, they introduce dedicated tokens that are used for object detection, which improves the performance somewhat. An additional loss encourages consistency between object tokens (which are only supervised in images) in frames encoded separately and as part of a clip, to make sure that they are not ignored during video inference (a form of domain adaptation). This consistency loss brings further improvements. Overall, minor to moderate improvements are demonstrated on several ego-centric action recognition datasets (and, for some reason, Diving48). Strengths:\n\nThe paper is well written and is easy to follow.\n\nThe idea of utilizing scene structure information for improving action recognition performance is sound, though not novel.\n\nThe proposed video transformer architecture with object tokens is novel to the best of my knowledge.\n\nNon-negligible improvements over baselines are reported on SomethingElse, Ego4D, and Diving48.\n\nA thorough ablation study is provided.\n\n\nWeaknesses:\n\nThe authors ignore prior work which utilizes inference on object graphs to improve action recognition without requiring in-domain object detection labels (there is at least A Structured Model For Action Detection by Zhang et al., CVPR'19). There is no conceptual novelty in the proposed approach, only the technical novelty of adapting those ideas to the transformer architecture (which is still a valuable contribution).\n\nThe authors refer to their approach as multi-modal, but images and videos are not different modalities. These claims should be removed. \n\nThe biggest improvement is observed on the Diving48 dataset, which does not require any object reasoning. This is counterintuitive and not explained. Please improve the overview of prior work to include methods that use out-of-domain object detection labels to improve action recognition via object graph reasoning, and tone down the novelty claims accordingly. \n\nPlease provide an analysis of the method's performance on Diving48 to explain why the proposed approach demonstrated the largest improvements on this dataset despite it not containing any objects. \n\nPlease report results on a third-person action recognition dataset of your choosing which does require object reasoning, to demonstrate to what degree the learned object representations transfer to the third-person scenario. It's not clear whether the benefits of the proposed approach are limited to the ego-centric domain." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "5N3K6ns-6BW", "q8_4mD9M4n_", "QX68AJfzMe", "mzVhFG5mr4I", "3n4MQLEXpE", "68RI0YBIKsg", "kApS2-Dt_W-", "PedhLFk7Vi6", "WZ-NqvgM93o", "5npGaw841Yb", "QMQ4o7ylOhB", "3CRrZRToGo5", "MdEFKx14pec", "3mJPCPwS_rD", "mzVhFG5mr4I", "0VuCszxw0WQ", "yhZ04PMJsLp", "D5FGdoyTwz", "nips_2022_0JV4VVBsK6a", "nips_2022_0JV4VVBsK6a", "nips_2022_0JV4VVBsK6a", "nips_2022_0JV4VVBsK6a", "nips_2022_0JV4VVBsK6a" ]
nips_2022_NkK4i91VWp
Increasing Confidence in Adversarial Robustness Evaluations
Hundreds of defenses have been proposed to make deep neural networks robust against minimal (adversarial) input perturbations. However, only a handful of these defenses held up their claims because correctly evaluating robustness is extremely challenging: Weak attacks often fail to find adversarial examples even if they unknowingly exist, thereby making a vulnerable network look robust. In this paper, we propose a test to identify weak attacks and, thus, weak defense evaluations. Our test slightly modifies a neural network to guarantee the existence of an adversarial example for every sample. Consequentially, any correct attack must succeed in breaking this modified network. For eleven out of thirteen previously-published defenses, the original evaluation of the defense fails our test, while stronger attacks that break these defenses pass it. We hope that attack unit tests - such as ours - will be a major component in future robustness evaluations and increase confidence in an empirical field that is currently riddled with skepticism.
Accept
This paper proposes a simple yet effective test to identify weak adversarial attacks, and thus weak defense evaluations. Empirical results have revealed insufficiently strong evaluations in 11/13 previously published defenses. To me, the paper studies an important problem and makes a valuable contribution to the active research field of adversarial defense and robustness evaluation. I recommend acceptance, and encourage the authors to incorporate the reviewers' comments and suggestions when working on the final version.
train
[ "i045wtKBTGh", "zZWFWtkquEN", "6-gZlgmLqJ", "rfOaa5-QmAG", "nyfZ5oofUPF", "gRPlxAhjsjC", "ZsAssj5VDPH", "FZ8dNUoAVJh", "hhY4iwoVki", "iaVefuOZdiV", "lbnOfOCSLx9", "F6Af4Ae9kcu", "8aMU-2FqA-M", "kSmArjRvszv" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nThank you for your response and your overall positive assessment of our work! We would be grateful if you could let us know what aspects of the paper would need to be improved so you would consider a higher overall assessment. We will do what we can to address any remaining concerns and thank you again for your constructive review.", " Thank you for this explanation and making us aware of this! We know see how this part of the legend can be confusing and will update the legend accordingly for the final version of the paper.", " Thank you for answering my minor queries. I would like to maintain my positive score at this point. ", " I appreciate the detailed response, and my remaining concerns are clarified. I would recommend adding as many as these clarifications to the final version.\n\nA minor follow-up on the legends of Figure 1b, as I see that other reviewers also mentioned this -- What confused me was that the threat model's legend is ■, yet I could not find ■ in the figure, even if I know that it refers to the square boundary. Hence, I thought using □ might be more clear, as this is what the boundary appears in the figure.", " We would like to thank all reviewers for their time and very much appreciate their assessment of our work as a _“very interesting and novel”_ (XdpX), _“insightful”_ (KmTN), and _“well-written”_ (KmTn, XdpX) paper that _“addresses an important issue”_ (Wz6X) and _“makes a great contribution”_ (Rijw). Further, our proposed method was praised for being _“novel”_ (Rijw, Wz6X, KmTn) and _“simple and computationally cheap”_ (Wz6X) with our evaluation being _“strong and clearly demonstrating the effectiveness of the test”_ (KmTn).\n\nWe considered the thoughtful suggestions of the reviewers and addressed them in the latest version of the manuscript, which we believe further improved our submission.\n\nHere is a summary of the two main concerns and how we addressed them:\n\n* Missing information for Figure 1 and Algorithm 1: We updated the caption of Figure 1, and the description of Algorithm 1 and 2 to make them clearer and easier to read.\n* Test results for adversarially trained models: In a follow-up experiment, we investigated how our test behaves for adversarially trained models and weak attacks.\n", " **Comment:** _“The statement [in] L190-192 is hard to follow.”_ \\\n**Answer:** As outlined in L184-189, many detection defenses are actually (wrongfully) evaluated by attacks that are oblivious to the detector. This creates a false sense of security, as an attack that is not oblivious to the detector might still break the defense. In a nutshell, the “inverted test” just ensures that the attack in question is capable of attacking both the detector and classifier. For this, we introduce a new detector (that is the negated version of the original detector) and check if the attack can still find adversarials for this new detector. We will update this description in the camera-ready version of our paper.\n\n[1] Engstrom, L., Ilyas, A., Santurkar, S., Tsipras, D., Tran, B., & Madry, A. (2019). Adversarial robustness as a prior for learned representations. arXiv preprint arXiv:1906.00945.\n\n[2] Engstrom, L., Ilyas, A., Salman, H., Santurkar, S. and Tsipras, D.. Robustness (Python Library). 2019.", " Dear Reviewer,\n\nThank you very much for your positive review and your valuable feedback! 
We are very happy that you perceived our work as “novel”, “insightful” and “well-written” with a “strong evaluation”.\n\nPlease find our responses to your points below:\n\n**Comment:** _“How does the algorithm used in this paper differ from those used in [34]?”_ \\\n**Answer:** Thanks for asking this question. We see that the footnote can be misleading. What we meant to say is that there is a weak similarity between [34] and our work, in the sense that both methods use the idea of injecting adversarial examples into a classifier. However, this is where the similarity ends. Our work and [34] differ in both motivation (theirs is an adversarial example defense while ours is a test to identify weak defense evaluations) and methodology. \n\n**Comment:** _“Does Equation (1) need a lower bound for the inner point sampling?”_ \\\n**Answer:** Yes, you are correct that the number of inner samples affects the hardness of the problem (see Section 4.3, lines 287-291). We are not aware of a theoretical lower bound on the number of inner samples; however, empirically we see that this number needs to be set high compared to the number of boundary samples (e.g., 999 vs 1 in most of our experiments, see Appendix B).\n\n**Comment:** _“Unclear effect of changing the classification head.”_ \\\n**Answer:** You are correct that changing the classification head might change the efficiency of adversarial attacks against the model (e.g., if the classification readout performs gradient masking due to enormous logits). To mitigate this issue, we aim to “mimic” the original classification head as much as possible with the replacement, e.g., reproduce similar logit values. Further, we are not aware of any known defense that explicitly increases robustness with its classification readout: For example, for adversarial training, it was shown by Engstrom et al. 2019 [1] that robustness comes from robust features rather than from a special readout mechanism.\n\n**Comment:** _“[Could] the test [...] detect weak attacks [...] on adversarially trained models?”_ \\\n**Answer:** Thanks for raising this interesting point. To investigate this, we applied our test to different variants of PGD (different step sizes, numbers of steps) that attacked an adversarially trained ResNet50 [2]. Our preliminary results suggest that PGD attacks with too few steps (e.g., if $\\textrm{step}_\\textrm{size} \\cdot n_\\textrm{steps} < \\epsilon$), against which the model appears robust, result in low test performance, i.e., our test can identify these weak evaluations. We will investigate this further and add a full ablation study on this in the camera-ready version of the paper.\n\n**Comment:** _“Unclear overheads.”_ \\\n**Answer:** Thank you for raising this question. As described in the text, for each clean data sample, we need to run three steps, each of which contributes to the computational cost: First, we obtain the features of $N_{inner}$ + $N_{outer}$ samples (i.e., performing $(N_\\textrm{inner}+N_\\textrm{outer}) / \\textrm{batch size}$ forward passes). Second, we train a binary classifier on these features, e.g., using logistic regression, which, in our experiments, was computationally negligible. Finally, we run the attack for a single sample; depending on the attack, this is the most costly step, as it cannot be parallelized across multiple clean data samples because the weights of the classification layer differ for each of them. 
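For concreteness, the per-sample procedure can be sketched as follows (a simplified illustration, not our exact implementation; `feature_extractor`, `sample_inner`, `sample_boundary`, and `run_attack` are placeholders for the defended model's encoder, the two sampling routines of Equation (1), and the attack under test):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def test_one_sample(x_clean, feature_extractor, sample_inner, sample_boundary,
                    run_attack, n_inner=999, n_boundary=1):
    # Step 1: embed inner/boundary samples (batched forward passes in practice).
    xs = [sample_inner(x_clean) for _ in range(n_inner)]
    xs += [sample_boundary(x_clean) for _ in range(n_boundary)]
    feats = np.stack([feature_extractor(x) for x in xs])
    ys = np.array([0] * n_inner + [1] * n_boundary)

    # Step 2: fit the binary readout g; computationally negligible.
    g = LogisticRegression(max_iter=1000).fit(feats, ys)
    if g.score(feats, ys) < 1.0:
        return None  # skip samples without a perfect readout (see L139)

    # Step 3: attack h = g o f*; this dominates the cost and cannot be
    # batched across clean samples, since each sample has its own g.
    return run_attack(g, feature_extractor, x_clean)
```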
Thus, if we want to run our test on $N$ samples, its effective computational cost is mostly dominated by the cost of running the attack $N$ times.\n\n**Comment:** _“Slightly confusing clarification of false positives.”_ \\\n**Answer:** It is correct that no label information is used in sampling the data points we use to train the binary readout. We believe that there is a misunderstanding about what caused the false positive result in this case: The attack used by the defense’s authors is strong and reasonable; the authors of the defense just had a bug in their call to the attack during the evaluation, which is only relevant if the model makes mistakes on clean samples. However, since we run our test only for clean samples for which the newly trained binary classifier module got a perfect score, none of these cases exist. \n\n**Comment:** _“Figure 1[b] is confusing.”_ \\\n**Answer:** We are sorry that this figure was not as clear as we hoped it would be. We updated and expanded the caption of Figure 1B to make it easier to parse the figure (i.e., we explained that the black box depicts the feasible set of an l-infinity norm-bounded attack).\n\n**Comment:** _“functions in Algorithm 1 are not clearly defined and may need some clarification or reference to corresponding texts.”_ \\\n**Answer:** Thanks for pointing this out. We have now ensured that all external methods used in the algorithms are properly described (see Appendix A of the updated version).", " Dear Reviewer,\n\nWe thank you for your positive review and helpful feedback! It is encouraging to see that you acknowledge our work addresses an important issue using a “novel”, “simple”, and “computationally cheap” method.\n\nPlease find our responses to your questions and comments below:\n\n**Comment:** _“writing is very unclear”, “figures and even algorithms are hard to parse”_ \\\n**Answer:** We updated the description of Figure 1 and expanded the descriptions of the algorithms to make them easier to parse. Furthermore, we ensured that all methods used in the algorithms are defined (see Appendix A of the updated version).\nRegarding the writing, we are unsure which parts or aspects you found unclear or confusing. We would be grateful for any specific pointers and will be happy to incorporate your suggestions to increase the manuscript's clarity.\n\n**Comment:** _“How many samples are you using in training the classifier g?”_ \\\n**Answer:** We train the binary classification readout on the features of $N_\\textrm{inner} + N_\\textrm{boundary}$ samples. In our experiments, this amounts to 1000 samples for all defenses except those by Zhang et al. 2020 (10000) and Zhang et al. 2019 (1009). For a more detailed description of the chosen parameters, please refer to Appendix B (line 593 of the original and 557 of the revised manuscript).\n\n**Comment:** _“Do you have any ablation on the strength of the attack ϵ?”_ \\\n**Answer:** Since we wanted to stay as close as possible to the original evaluation of the defenses investigated in our study, we used their attack settings. As most defenses only report these for a single ϵ, we could not run a large-scale ablation on the influence of ϵ on the test. ", " Dear Reviewer,\n\nThank you very much for your positive review. 
We are happy you found our paper to \"study an important question and make a great contribution\".\n\nPlease find our responses to your points below:\n\n**Comment:** _“most adopted defenses have been broken before”_ \\\n**Answer:** You are correct that 11 of the 13 defenses analyzed in this work have been broken before. However, 2 of these defenses were believed to be secure (as they were just recently published), and we are the first to investigate and, consequently, break these defenses. We hope that this eases your concern. Furthermore, we generally understand your concern regarding the generalizability of our proposed test and would like to analyze even more defenses. However, we are not aware of any purely empirical defense (with surprisingly good results) that was released in the weeks/months before the submission deadline. Thus, there is a lack of potential further candidates for our test. If you have any potential candidates in mind, we are happy to consider them in an updated version of our submission. \n\n**Comment:** _“The [...] test also needs to be designed for different models. How about the efforts?”_ \\\n**Answer:** The tests presented in our work are applicable to any defense that uses a classifier that can be divided into an (arbitrarily complex) feature encoder and a linear classification readout. As the majority of defenses proposed in the past follow this structure, the effort to apply our test to them is fairly small. For conceptually very different defenses, authors might have to (re-)design the test; however, we expect that in most cases only small modifications to the specific type of defense, and not a major redesign, will be required.", " Dear Reviewer,\n\nThank you very much for your positive review and your valuable feedback! It is very encouraging that you perceived our work as “novel”, “interesting” and “well-written”.\n\nPlease find our responses to your points below:\n\n**Comment:** _“The legend of Figure 1B is not very clear.”_ \\\n**Answer:** We are sorry that this figure was not as clear as we hoped. We expanded the caption of Figure 1B to make it easier to parse the figure (i.e., we explained that the black box depicts the feasible set of an l-infinity norm-bounded attack).\n\n**Comment:** _“What is the proportion of samples that are skipped in the experiment?”_ \\\n**Answer:** Thanks for asking this question. For all defenses except the one by Pang et al. 2019, we can successfully set up the binary classifier for all samples. For this specific defense, we notice that the setup fails for approximately half of the samples. We hypothesize that this is due to the stochastic nature of the defense, but will investigate this further. We will add this result and a discussion of it to the revised version of the manuscript.", " This paper proposes a method to test if an adversarial defense method for deep neural networks is strong enough. By a simple modification of the threat model, the paper introduces a new classifier in which the existence of an adversarial example is guaranteed. Such a modified model can be used to examine whether an attack is strong enough, and hence whether the evaluation of robustness is convincing. Experimental results show most of the previously published defenses are insufficiently strong. Originality & Significance:\n\nThe idea of the paper is very interesting and looks novel to me. The results show that most of the previously published defenses are not strong enough under the proposed test, which is a little bit surprising. 
The paper is therefore of significance in that it could guide future research in this area to consider stronger evaluations.\n\nQuality & Clarity:\n\nThe paper is generally well-written. The analogy to a refuting proof in complexity theory is interesting. The legend of Figure 1B is not very clear. In L139:\n >If this is not possible for an original sample xc, we cannot apply the test and, hence, skip the sample\n\nWhat is the proportion of samples that are skipped in the experiment? The authors have discussed the limitations of the paper in Section 5.", " This paper studies adversarial robustness evaluation of defense models. Based on the fact that accurate robustness evaluation is difficult, this paper proposes a new binarized testing method that can discover weak attacks. Eleven out of thirteen defenses fail the test and could be broken by stronger attacks. This paper studies an important problem and makes a great contribution to the field. Although previous studies have been conducted to provide guidelines and practices for developing more accurate robustness evaluations, there is still no formal test of whether robustness is overestimated. This paper proposes a novel testing method, which successfully identifies weak defenses.\n\nThe main weakness of this paper is that most adopted defenses have been broken before. Thus it is not surprising to see that these defenses cannot pass the test. Evaluating more recent and state-of-the-art defenses would make the paper more convincing. The generalizability of the proposed method to more models is questionable. The proposed binarized test also needs to be designed for different models. How much effort does this require? The authors have discussed the limitations and potential negative societal impacts.", " Hundreds of adversarial attacks and defenses have been proposed in the last few years. How robust a defense appears depends on the choice of evaluation. All defenses can be broken given enough perturbation. This paper presents a model-, defense-, and attack-agnostic methodology to identify weak defenses. The authors propose a binarization test:\n (1) Create a 2-class synthetic dataset based on real examples which are in-boundary and on the boundary.\n (2) Take the feature extractor of a robust classifier (f*) and train a binary classifier (g) that can classify the dataset.\n (3) Evaluate the attack on the h = (g o f*) classifier and compute the % of times the attack is successful. The higher this score, the more potent the attack. 11/13 previously published defenses failed this test, showing that the defenses are not strong but the evaluation is weak.\n Strengths: The authors address an important issue of robustness evaluation in the adversarial literature. The binarization test technique is novel, and it is both simple and computationally cheap. The authors discuss the failure case, which I appreciate.\n\nWeakness: The writing is very unclear. The idea is simple, but it is really hard to understand from the writing. Things like figures and even algorithms that are supposed to convey the message easily are hard to parse. It needs a major re-write to simplify things if it were to be accepted into the conference. - How many samples are you using in training the classifier g? Do you have any ablation on this? \n- Do you have any ablation on the strength of the attack $\\epsilon$ and how the attacks fare against the binarization test? 
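(To make the second question concrete, this is the kind of sweep I have in mind; a hypothetical sketch with placeholder names, not code from the paper:)

```python
import numpy as np

def attack_success_rate(h, attack, samples, eps):
    # h is the binarized classifier g o f*; attack(h, x, eps) returns an
    # adversarial example within the eps-ball, or None if it fails.
    hits = [attack(h, x, eps) is not None for x in samples]
    return float(np.mean(hits))

# for eps in [1/255, 2/255, 4/255, 8/255]:
#     print(eps, attack_success_rate(h, pgd_attack, test_samples, eps))
```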
N/A", " This paper proposes a binarization test to identify weak attacks against adversarial example defenses. The proposed test changes the model’s prediction layer to a binary classifier and fine-tunes it on a small crafted dataset for each benign example. As a result, the original attack, if sufficiently strong, should be able to find adversarial examples when applied to the modified model. This test serves as an active robustness test to complement existing passive tests of weak attacks. Empirical results show that the proposed test effectively re-confirmes the weak evaluations of 11/13 previous defenses, and two of them were not discovered before. ### Originality\n\n**Strengths (major)**\n* The proposed active test provides a novel perspective for identifying weak evaluations of adversarial examples defenses.\n* The overall idea and approach are novel and insightful.\n* The design of evaluating the difficulty of the proposed test is good, as it provides validation when a weak attack passes the test.\n\n**Weaknesses (minor)**\n* **Unclear algorithmic improvements to [34].** The authors mentioned in footnote 1 (page 3) that a similar idea was used before as a honeypot defense [34]. It is suggested to discuss how the algorithm (e.g., injecting adversarial examples) used in this paper differs from those used in [34]. For example, are there any new challenges when directly adopting the previous approach, are there any insightful modifications to that approach so it fits the proposed test, or does the idea of [34] fit more in detecting weak attacks rather than as a defense?\n* **Lower bound for Equation (1).** Does Equation (1) need a lower bound for the inner point sampling to guarantee the minimum hardness of the test?\n\n### Quality\n\n**Strengths (major)**\n* The evaluation is strong and clearly demonstrates the effectiveness of the proposed active test.\n* Two previous weak evaluations are discovered.\n\n**Weaknesses (minor)**\n* **Unclear effect of changing the classification head.** While it makes sense as a test to change the prediction head, I am not sure if that would negatively affect the test as the changed model may not precisely “mimic” the original model. For example, if we were to evaluate adversarial training, the fine-tuned prediction head may lead to a less robust model than the original one. As a result, it seems easier to attack the test model than the original one.\n* **Applicability to adversarial training.** While I understand that evaluating skeptical defenses is more straightforward in demonstrating the effectiveness of the proposed test, it is suggested to discuss the proposed test’s applicability to adversarial training. In particular, I am curious if the test could detect weak attacks (e.g., weak PGD with a large step size or a few steps) on adversarially trained models (with different robustness). This might be informative as the robustness from adversarial training is also developing, and weak attacks (PGD vs. AutoPGD) may overestimate its robustness.\n* **Unclear overheads.** It is suggested to include the overheads of Algorithm 1, as it trains a new prediction head for each test sample. It is also unclear how many test samples are needed to produce a confident claim of passing the test. At L138, the authors mentioned that some samples might not produce a model with perfect accuracy. 
I am curious why this would happen and if that affects the overheads significantly.\n* **Slightly confusing clarification of false positives.** The clarification at L261-263 is slightly confusing, as I did not expect misclassified samples to be included in the evaluation from the beginning. If included, it may not be the attack that \"is not used correctly,\" but the evaluation (of this paper) should follow the same setting as the integrated attack. This further leads to the confusion of why the label matters here: If all points are sampled without labels in Equation 1, and the RunAttack in algorithm 1 refers to the exact original attack with the exact choice of label, then it seems that we should not observe the artifact at L265-268.\n\n### Clarity\n\n**Strengths (major)**\n* The paper is generally well-written.\n\n**Weaknesses (misc)**\n* The legend for the threat model in Figure 1 is confusing; maybe it should be an empty square.\n* I feel that some contributions can be highlighted earlier in the paper. For example, the inclusion of detection defenses and the newly identified flaws.\n* Most functions in Algorithm 1 are not clearly defined and may need some clarification or reference to corresponding texts. For example, RunAttack, RunRandomAttack, SampleInnerPoint, TrainReadout, etc.\n* The statement at L190-192 is hard to follow and needs more motivation and clarification.\n\n### Significance\n\n**This paper provides strong positive results for more confident evaluations of adversarial example defenses. While I have listed several weaknesses above, most are not major and are suggested as discussions that further strengthen the paper.** I would recommend that the authors elaborate on the four weaknesses in the Quality section. The lack of evaluation of adversarial training might be a potential weakness, but the existing evaluation is already sufficient and representative." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 5 ]
[ "6-gZlgmLqJ", "rfOaa5-QmAG", "FZ8dNUoAVJh", "gRPlxAhjsjC", "nips_2022_NkK4i91VWp", "kSmArjRvszv", "kSmArjRvszv", "8aMU-2FqA-M", "F6Af4Ae9kcu", "lbnOfOCSLx9", "nips_2022_NkK4i91VWp", "nips_2022_NkK4i91VWp", "nips_2022_NkK4i91VWp", "nips_2022_NkK4i91VWp" ]
nips_2022_zbuq101sCNV
TANGO: Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition
Creation of 3D content by stylization is a promising yet challenging problem in computer vision and graphics research. In this work, we focus on stylizing photorealistic appearance renderings of a given surface mesh of arbitrary topology. Motivated by the recent surge of cross-modal supervision of the Contrastive Language-Image Pre-training (CLIP) model, we propose TANGO, which transfers the appearance style of a given 3D shape according to a text prompt in a photorealistic manner. Technically, we propose to disentangle the appearance style as the spatially varying bidirectional reflectance distribution function, the local geometric variation, and the lighting condition, which are jointly optimized, via supervision of the CLIP loss, by a spherical Gaussians based differentiable renderer. As such, TANGO enables photorealistic 3D style transfer by automatically predicting reflectance effects even for bare, low-quality meshes, without training on a task-specific dataset. Extensive experiments show that TANGO outperforms existing methods of text-driven 3D style transfer in terms of photorealistic quality, consistency of 3D geometry, and robustness when stylizing low-quality meshes. Our codes and results are available at our project webpage https://cyw-3d.github.io/tango/.
Accept
This paper presents a new CLIP-driven stylization method given an input mesh and text description. Compared to previous works Text2Mesh, the paper introduces a more expressive rendering model based on learnable SVBRDF and normal maps. Many reviewers found the paper easy to follow, the idea promising, and the results visually appealing. They also expressed their concerns regarding the similarity to Text2Mesh, the limitations of the normal maps approach (compared to changing geometry explicitly), and the relighting and material editing of the stylized object. The rebuttal has addressed most of the concerns. The AC agreed with most of the reviewers and recommended accepting the paper. Please revise the papers according to the reviewer’s comments: (1) change the title according to Reviewer kpuS, (2) add relighting/material editing/view synthesis results, and (3) highlight the pros and cons of the proposed method w.r.t. Text2Mesh.
train
[ "10muIgpP4mE", "BxNNR6XVdWB", "lIWctFja3-h", "G1UgoRGyigm", "ly1zSatz50P", "5cK29RWgqG4", "vh276Ag4Lki", "hcOMqCZEgMc", "vYAkU0FSxw-", "Q1194HZhOBL", "nxkKXHxut2m", "ZgOJ3FDGlJu", "UHul6mOdKy" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the comments and additional experiments. The authors' response has resolved most of my concerns, especially the explanation of the disentanglement of light and reflectance. On the other hand, I agree with the comments from Review kpuS that the limitations of this method and Text2mesh should be discussed carefully in the revision. Besides, More pure geometry results should be presented since it's claimed that this method could stylize both 3D shape and appearance. Meanwhile, the authors should also release the meshes and the text prompts for the user study conducted on geometry and appearance individually. Finally, I encourage the authors to provide more convincing results as promised. After thorough consideration, I decide to keep my original rating.", " Dear reviewers, \n\nThank you all for providing valuable comments. The authors have provided detailed responses to your comments. Has the response addressed your major concerns?\n\nIf you haven't, It would be great if you could reply to the authors’ responses soon as the deadline is approaching (Tues, Aug 9). \n\nBest, \n\nACs\n", " We thank you again for the constructive comments and recognition of our method. In the final version, we will follow the suggestions by clearly summarizing the difference between Text2Mesh and our method, and presenting the limitations of both methods. We would also revise the title to \"Text-driven Photorealistic and Robust 3D Stylization via Lighting Decomposition\", if all reviewers agree with the title as well; we will correspondingly revise the major claim. Finally, we will add more rendering-aware manipulation results like decomposition, relighting, and material editing in our final version. ", " Thanks Authors for the response. I appreciate the additional results, which indeed helps demonstrate the capability for meaningful rendering edits besides only showing the final results. While I am still not fully convinced if the proposed mesh stylization method is better than Text2Mesh, I am not against acceptance, but would suggest the authors to make it more clear the difference to Text2Mesh and highlight the limitations of both methods. In addition, I recommend to rephrase the title and the major claim that this is a method for \"3D Stylization For Arbitrary Meshes\", which may imply that this stylize both 3D shape and appearance. And please also try to include more rendering-aware manipulation results, like those shown in the new supp, to the final version.", " We thank reviewer ojBL for the valuable comments, and the point-to-point response to every individual comment is itemized as below:\n\n$Q1.$ **More visualization with moving viewpoint and relighting.**\n\nWe appreciate your suggestion. We investigate the moving viewpoint on a shiny object of \"A shoe made of gold\" and a diffuse object of \"A candle made of wicker\", \nwhose gif results are illustrated in the ANONYMOUS link -- https://anonymous.4open.science/r/NeurIPS_1808/.\nOur method learns good materials and lightings in general and performs well on different views. \nAdditionally, as expected, the shiny materials (e.g., gold) have highlights that move in sensible ways, while the diffuse materials (e.g., wicker) do not. \nWhat's more, our method also performs well on tasks of relighting and material editing, which could be visualized in the $Experiments.pdf$ with the ANONYMOUS link. \nNote that the PDF cannot be illustrated normally on the website, so please download it for better visualization. 
\n\n$Q2.$ **Complicated equation**\n\nThanks for your advice. We admit that Eq. 1 is complicated, but we think it is necessary since it provides an overview of our adopted rendering pipeline. \nTo help readers understand the equation, we break it down into the environment map ${L}_i$, the BRDF function ${f}_r$, and the normal map $\\hat{{n}}_p$, which are explained one by one in lines 168 to 191.\nTo ease understanding, we may consider replacing the term ${\\Pi}({n}_p,{x}_p;{\\gamma})$ in Eq. 1 with $\\hat{{n}}_p$ in a future revision.\n\n$Q3.$ **Constraints on the normal fitting.**\n\nWe indeed set constraints on both normal generation and normal displacements. \nAs described in lines 174 to 175, we clamp the output normals such that $\\hat{{n}}_p \\in \\{(1, \\hat{\\theta}_p, \\hat{\\varphi}_p) \\mid \\hat{\\theta}_p \\in (0,2\\pi), \\hat{\\varphi}_p \\in (0,\\pi)\\}$. \nAs for the normal displacements $\\triangle {n}_p$, we clamp them such that $\\triangle {n}_p \\in \\{(0, \\triangle{\\theta}_p, \\triangle{\\varphi}_p) \\mid \\triangle{\\theta}_p \\in (-\\frac{\\pi}{3},\\frac{\\pi}{3}), \\triangle{\\varphi}_p \\in (-\\frac{\\pi}{3},\\frac{\\pi}{3})\\}$.\n\n\n$Q4.$ **An in-depth comparison between our method and the vertex displacement framework.**\n\nPlease refer to the official comment to all reviewers.", " We thank reviewer 8TXp for the valuable comments; our point-to-point responses to every individual comment are itemized below:\n\n$Q1.$ **The key point for a good disentanglement.**\n\nWe think the key point is the shading model and the carefully designed elements in Eq. 1 of our paper, which act as a strong physical prior for learning the parameters. \nBy predicting normal offsets and SVBRDF individually, optimizing the light SG parameters, and feeding these elements into the physical rendering equation, \neach individual module is encouraged to predict physics-aware results, as illustrated with the disentangled components in the $Experiments.pdf$ at https://anonymous.4open.science/r/NeurIPS_1808/.\nNote that the PDF cannot be displayed properly on the website, so please download it for better visualization. 
We will provide more convincing results in the revision and in our future webpage.\n\n$Q4.$ **An in-depth analysis of the geometry preservation for 3D stylization.**\n\nPlease refer to the official comment to all reviewers: \"An in-depth comparison between our method and the vertex displacement framework\".\n\n$Q5.$ **New user study in geometry and appearance.**\n\nThanks for your advice. We conduct a new user study by investigating geometry and appearance individually. \nWe randomly chose 127 users to evaluate 13 source meshes and style text prompt combinations on both high and low-quality meshes. Each of them was asked two questions: \n(Q1) \"How well does the output geometry match the text prompt?\" (Q2) \"How well does the output appearance effects match the text prompt?\". \nOur method outperforms the Text2Mesh baseline on both questions, with an advantage of 0.71 and 0.92 for Q1 and Q2, respectively. We will add this new user study in the revision.\n\n$Q6.$ **The clarification on the data augmentation.**\n\nIndeed, our data augmentation strategy is the same as that adopted in Text2Mesh. \nSpecifically, we use multiview images and randomly cropped images simultaneously to ensure the global semantic consistency and preserve local details, \nwhich is identical to the description in Sec. 3.3 of Text2Mesh paper and their official code repository.\nWe will clarify this part more clearly in the future revision.\n\n$Q7.$ **The clarification on the random background.**\n\nIndeed, our method is insensitive to the background. \nThe reason is that we only render those pixels whose rays hit the object's surface, and then the intersection point and its normal will feed into the network. \nFor the background pixels whose rays do not hit the surface, though we assign a background value to them, they are not input to the network and their gradients are not backpropagated to update the network parameters.\nWe test black, white, and random Gaussian backgrounds, and observe similar stylized performance. We will add these analyses in the revision.\n\n\n$Q8.$ **The Controlling of light intensity, direction, and light distribution.**\n\nThanks to our spherical Gaussian representation, we can easily control light. \nUsers can change the light intensity and light direction by modifying ${a}_k$ and ${\\mu_k}$ in multiple spherical Gaussians of the environment map, respectively.\nMeanwhile, users can easily modify the light distribution by modifying the spherical Gaussians or adopting a new environment map from the Internet, which is shown in our additional relighting experiment in the $Experiments.pdf$.\n\n\n$Q9.$ **Semantic ambiguity of CLIP model.**\n\nWe agree with your opinion. The major reason for this phenomenon is the capability of the CLIP model. \nFortunately, our method has the potential to be guided by other Vision-language Pre-training models.\nThe semantic ambiguity of the CLIP model may be alleviated with more powerful Vision-language Pre-training models.\n", " We thank reviewer kpuS for the valuable comments, and the point-to-point response to every individual comment is itemized as below:\n\n$Q1.$ **Analyses on individual components, material editing, and relighting.**\n\nWe thank the reviewer for pointing out this issue.\nWe provide more analyses in the $Experiments.pdf$ file with this ANONYMOUS link -- https://anonymous.4open.science/r/NeurIPS_1808/.\nNote that the PDF cannot be visualized normally on the website, so please download it for better visualization. 
\n\nSpecifically, in Figure 1 of the $Experiments.pdf$, we visualize the individual components of the normal map, BRDF, and environment map with the example of \"A shoe made of brick\". These correctly estimated components contribute to the realism of final renderings, and furthermore, make it possible to edit the generated stylization, which is shown as follows.\n\nIn Figure 2, we conduct the material editing with the example of \"A shoe made of gold.\". \nThe surface becomes more diffuse as we enlarge the roughness value of the material, while a larger specular value typically results in a more shiny surface, proving the capability of our method on material editing.\nIn Figure 3, we conduct the relighting experiments with the example of \"A vase made of wood\". \nBy replacing the estimated environment map with others downloaded from the Internet, the lighting of the rendering results could be successfully manipulated. \nCompared to the Text2Mesh, our method presents more flexibility in manipulating the rendering results, facilitating its application in practice. \nWe will add these analyses in the revision.\n\n\n$Q2.$ **An in-depth comparison between our method and Text2Mesh.**\n\nPlease refer to the official comment to all reviewers.\n\n$Q3.$ **A clarification on the contribution.**\n\nWe agree with the reviewer that the rendering pipeline and the CLIP-guided stylization have been studied in existing works, and we do not claim these as our contributions.\nActually, our major contribution is a text-driven 3D stylization architecture, which can transfer the appearance style of a given 3D shape according to a text prompt in a photorealistic manner. \nCompared to the recent Text2Mesh which utilizes vertex displacement, we investigate a different technical approach and achieve higher photorealistic quality and more robustness to the mesh quality.\nA more detailed comparison between our method and the Text2Mesh could be found in the official comment to all reviewers. \n\nFrom another perspective of rendering components disentanglement, we make the first attempt to predict and disentangle the rendering components (e.g., normal map, SVBRDF, and environment map) under quite weak supervision, e.g., a short text prompt, \nwhich is typically investigated under the strong pixel-by-pixel image supervision in previous works, e.g., NeRD[4], PhySG[53], and nvdiffrec[36].", " We thank reviewer qBos for the valuable comments, and the point-to-point response to every individual comment is itemized as below:\n\n$Q1.$ **The advantages and disadvantages of geometry preservation in 3D stylization.**\n\nPlease refer to the official comment to all reviewers.\n\n$Q2.$ **The key to learning correct disentanglement.**\n\nActually, we do not observe any ambiguity in disentangling lighting and reflectance.\nWe think the key point is the shading model and carefully designed elements in Eq. 1 in our paper, which set a strong prior for learning parameters. \nBy predicting normal offsets and SVBRDF with two individual networks, optimizing the light SG parameters, and inputting these elements into the rendering equation, each module is constrained to predict physics-aware results. 
\nMoreover, it is also important to constrain the networks' output in a reasonable range, \ne.g., clamping $\\hat{\\theta}_p \\in (0,2\\pi)$ and $\\hat{\\varphi}_p \\in (0,\\pi)$ for the normal prediction $\\hat{{n}}_p = (1, \\hat{\\theta}_p, \\hat{\\varphi}_p)$.\n\n$Q3.$ **The definition of a reasonable anchor view.**\n\nExcept for the person object, where a front view is adopted as the anchor, we randomly sample the anchor view in $(r, \\theta, \\phi)$ with $\\theta \\in (-\\pi,\\pi)$ and $\\phi \\in (0,2\\pi)$. \n\n$Q4.$ **The implementation of the first intersection point and intersection surface.**\n\nWe use the ray casting method implemented in the python Open3d library to find the first intersection point and intersection surface.\n\n$Q5.$ **A comparison between NeRD and our method.**\n\nWe have clarified the difference between our method and the vertex displacement framework (e.g., Text2Mesh) in the official comment to all reviewers.\nIn the following, we will illustrate the difference between our method and NeRD, which has been briefly discussed in our related work.\n\nFirst of all, NeRD and our method are proposed for different tasks. NeRD reconstructs 3D assets (e.g., 3D meshes and lighting) from multiple images, which provide strong pixel-by-pixel supervision signals.\nIn contrast, we aim to generate photorealistic 3D stylization given arbitrary meshes and a text description.\n\nTaking it a step further, NeRD and our method share similar construction objectives if we view it from a more general perspective of 3D assets construction. \nHowever, as detailed in the first point, the input of the two methods are quite different. \nAdditionally, it takes about 144 GPU hours and 1.5 GPU hours to train a NeRD and extract 3D assets, respectively. \nIn contrast, our method only consumes about 0.2 GPU hours to get a new 3D asset, presenting a significant advantage in time efficacy.", " $Q1.$ **An in-depth comparison between our method and the vertex displacement framework.**\n\nAlthough the vertex displacement framework presents a larger geometry capacity in theory, \nthe rendering capacity gap between frameworks of vertex displacement and our adopted normal displacement could be largely reduced with our introduced learnable SVBRDF and normal.\nGenerally speaking, when we render a pixel $p$, we project a ray from the camera center through the pixel. \nThe intersection point between the ray and geometry surface is denoted as $x_p$ and its corresponding surface normal is denoted as $n_p$. \nThen, the pixel color is calculated by inputting $x_p$, $n_p$, and the view direction $\\nu_p$ to the following rendering equation:\n\n$$\nL_p(\\nu_p, x_p, n_p) = \\int_\\Omega L_i({\\omega}_i)f_r({\\nu}_p,{\\omega}_i,{x}_p)({\\omega}_i \\cdot {n}_p ) \\mathrm{d}\\omega_i,\n$$\n\nwhere $L_i({\\omega}_i)$ is the incident light intensity from direction $\\omega_i$ and $f_r({\\nu}_p,{\\omega}_i,{x}_p)$ represents spatially varying BRDF. \nAccording to the above rendering equation, the vertex displacement adjusts the pixel color by changing $x_p$ and $n_p$, which will be analyzed individually in the following. \nSpecifically, changing $x_p$ only influences the SVBRDF term $f_r({\\nu}_p,{\\omega}_i,{x}_p)$, which is modeled as an MLP network in our method. 
\nConsidering the universal approximation property of neural networks, the MLP-based SVBRDF term in our method is expected to cover (at least approximate) the effects of $x_p$ change in the vertex displacement framework.\nMeanwhile, we refine the term ${n}_p$ with a normal prediction network, which similarly covers (at least approximates) the ${n}_p$ change in the vertex displacement framework.\nTaking it a step further, the possible difference between the vertex displacement framework and our method appears in the contour area. \nIn this area, some rays that originally hit the surface may not hit it due to the geometry change via vertex displacements; therefore, the pixels that are originally colorful may turn to the background color, and vice versa. \nFortunately, the contour area only occupies a small percentage of the whole rendering images, leading to a small rendering capacity gap between our method and the vertex displacement framework.\n\nJust as every coin has two sides, we emphasize that several advantages are achieved by keeping the geometry unchanged.\nFirstly, more robust results are achieved with unchanged geometry. \nIn the vertex displacement framework, it is difficult to control the displacement direction and self-intersection may occur everywhere with improper vertex displacement, especially when the displacement is significant.\n\nSecondly, our method is more robust to the number of vertices (i.e., the mesh quality). \nAs detailed in Sec. 4.2 in our paper, the vertex displacement method requires a large number of vertices and its performance degrades significantly as the vertex number reduces. \nOn the contrary, our method is quite robust when mesh quality degenerates and works well with such low-poly meshes, presenting wide applications in industrial 3D assets creation. \n\nLast but not least, keeping the geometry unchanged is more preferred in current game engines. \nWhen users want to change the style of an object, it is time-consuming to re-import another geometry and then run the physics simulation again. \nIn contrast, it is quite convenient to replace the material and normal map for the target rendering style, which is a widely used technique in the games industry.\n\nIn conclusion, compared to the vertex displacement framework, our method has a slightly smaller capacity but presents more robustness and time efficiency. \nConsidering the advantages and disadvantages of geometry preservation, whether should we preserve the geometry in 3D stylization is still an open problem. \nOur method and the recent Text2Mesh conduct the initial attempts to this problem in different directions, which may inspire more in-depth following investigations.", " This paper targets at stylizing a mesh with a text prompt. It represents appearance of a mesh as spatially varying BRDF, normal and lighting, and matches the CLIP feature of rendered image and given text prompt by learning the parameters of these components. For evaluation, the paper compares with the state-of-the-art text prompt stylization method Text2Mesh and achieves better performance. **Strengths.**\n- This paper is well written and easy to follow.\n- The idea of representing the appearance of a given mesh as SVBRDF, normal and lighting is interesting, although its has certain limitations.\n\n**Weaknesses.**\n- Modeling appearance as a lighting model certainly performs better in complex realistic scenes. However, it also limits the capacity of the model without geometric displacements modeling. 
For instance, in Fig.3 in the Text2Mesh paper, the surface of the given mesh can be changed to match the text prompt, while the proposed method cannot do so. This limitation is not discussed in the paper. **Method.**\nI am actually surprised this model can predict correct lighting parameters given only CLIP latent space as supervision. Have the authors observed any ambiguity in disentangling lighting and reflectance? If not, what is the key to make the model learn correct disentanglement?\n\n**Implementation details.**\n- How to find reasonable anchor view in line 145?\n- How to find the first intersection point and intersection face in line 150?\n\n**Novelty.**\nAlthough not using text prompt as supervision, a similar lighting model has been explored in the NeRD [4] paper, while the text prompt stylization has been used by Text2Mesh. Thus although the idea is interesting, the novelty and contribution of the proposed method is limited, in my opinion. The authors have discussed the limitations of the chosen lighting model. As stated in the weakness, I encourage the authors to include \n the limitation of not modeling geometric displacement. ", " This paper presents a 3D mesh stylization method that performs both appearance and geometric styles by optimizing with a differentiable rendering pipeline. Compared with existing methods that simply optimize per-vertex color and displacement, it adopts a more physics-aware rendering model that explicitly models geometry (normal map), BRDF (diffuse, roughness, specular), and lighting (SG), which is claimed to generate more realistic results (better shading appearance and details) and allows for editing such as relighting. Results are compared with Text2Mesh. ++ Indeed very promising idea to do physics-aware modeling of mesh stylization.\n\n++ Interesting visual results are presented, which look good.\n\n++ The proposed rendering formulation is technically sound and practically reasonable.\n\n++ Very good paper-writing, except for some redundancy that could be made more concise.\n\n-- All results are just final renderings, lacking results to show the individual components of normal maps, BRDFs, and lightings, which are critical to show the quality of the system and its advantage over Text2Mesh.\n\n-- No results on manipulating the rendering, such as relighting (the relighting results on the shoe are less interesting for stylization) and material editing, which are not achievable by Text2Mesh.\n\n-- Approximating geometry with normal maps turns the method into a pure appearance stylization method. There are no actual geometry changes but just a way to model geometry variation for shading. But Text2Mesh does achieve joint geometry and appearance stylization, which seems better than this method although the geometric details have limitations.\n\n-- Limited contributions, the rendering pipeline, and CLIP-guided stylization are mostly based on existing works. I have put my major concerns in the weaknesses section. Overall, I think it is an interesting work that combines neural rendering with mesh stylization that could produce meaningful assets that are more compatible with current rendering pipelines and allow for more wide editing applications. 
However, the presented results, unfortunately, do not fully demonstrate the potential of the proposed method -- there are no results showing the quality of the individual components or the possibility of editing one of them for editable stylization; without these, I am not convinced that this method has achieved a significant improvement over Text2Mesh beyond some better shading details. Also, considering that this method is not able to produce real geometric details, and that the visual results on single stylization images are not clearly better than Text2Mesh, I cannot say this method advances the state of the art. Finally, given the limited technical contributions, I am leaning a bit negative unless more convincing evidence can be presented. There are discussions on limitations.
As this work splits the style into geometry and appearance and claims improvements in both, user studies should be conducted separately for the two aspects. 1. I wonder why the authors adopt \"randomly cropped images\" rather than the \"multi-view images\" that Text2Mesh adopts in the data augmentation phase. Is it because the normal map helps the model focus on the geometry while cropped images guide the model to focus on the details? If Text2Mesh adopted cropped images in data augmentation, would it produce more details? An ablation study should be conducted here.\n2. What is the purpose of adding a random background to the rendered image in data augmentation? Is it necessary? I do not see an ablation study on the background in the experiments.\n3. Is it easy to explicitly control the lighting intensity and direction? How should we determine the light distribution? The CLIP model may produce semantic ambiguity. It seems that the target text has to contain the category of the input mesh for content preservation, which may limit the application of this method in real life.", " The paper describes a method for generating a reflectance map (and lighting environment) for a given 3D shape, based on a text prompt, using a CLIP-guided loss. The problem being tackled is novel, and several of the results are plausibly good. The impact of the work is incremental, but the general direction of text-guided geometry/texture generation is a significant one. \n\nOne big gap in the paper for me is the lack of a clear explanation of the motivation for using the SVBRDF/env. map rather than just fitting a texture map. It's clear that the results look better than just fitting a texture map (as shown in the ablations), so this aspect is clear. But, by fitting materials+lighting, the paper gives the possible impression that one gets good materials+lighting in general, not just for a single view. Is this true? It hasn't been demonstrated or tested. Specifically: can the model be relit and the viewpoint moved with sensible results? For example, do shiny materials have highlights that move in sensible ways, while diffuse materials do not? etc.\n\nI think the description of the method is unnecessarily mathematical and opaque, with many different terms. However, I don't have any suggestions for improving it, and it might be that this is the necessary level of detail. Eq. 1 has a lot of terms but, in a way, it all boils down to one integral with a bunch of parameters.\n\nNote that \"text2mesh\" was published at CVPR 2022 after the NeurIPS deadline, so I don't think it \"counts\" as prior work. Nonetheless, it's great that it's included as a baseline, which is very informative. Are there any constraints on the normal fitting, e.g., do the normals need to be consistent with the geometry, or can they be totally unrelated? Presumably there is a limit to how far normals can vary from the normals of the input geometry (or the nearby vertex normals).\n\n One limitation is that stylization may involve creating new geometry, not just texture. \"text2mesh\" does create new geometry, which has visible pros and cons in the results.\n\nAs discussed above, it's unclear whether good materials+lighting have been recovered, or just enough to render views with static lighting." ]
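The reviews above all revolve around one optimization loop: render the mesh with learnable SVBRDF/normal/lighting parameters, augment the renders, and minimize a CLIP text-image loss. Below is a hedged, self-contained Python sketch of that loop; to stay runnable it optimizes a raw pixel tensor in place of the paper's differentiable spherical-Gaussian renderer, and the prompt and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
import torch
import clip  # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git

device = "cpu"  # CPU keeps CLIP in float32, which is simplest for backprop here
model, _ = clip.load("ViT-B/32", device=device)
with torch.no_grad():
    tokens = clip.tokenize(["a shoe made of brick"]).to(device)
    text_feat = model.encode_text(tokens)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

# Stand-in for render(mesh, svbrdf, normals, lighting): a raw image tensor.
image = torch.rand(1, 3, 224, 224, device=device, requires_grad=True)
opt = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    img_feat = model.encode_image(image.clamp(0, 1))
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    loss = 1.0 - (img_feat * text_feat).sum(dim=-1).mean()  # 1 - cosine similarity
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the actual method the optimized tensor would instead be produced by the differentiable renderer from the material, normal, and lighting networks, and the augmentations the reviews ask about (random crops, random backgrounds) would be applied before CLIP encoding.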
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 4 ]
[ "5cK29RWgqG4", "nips_2022_zbuq101sCNV", "G1UgoRGyigm", "vh276Ag4Lki", "UHul6mOdKy", "ZgOJ3FDGlJu", "nxkKXHxut2m", "Q1194HZhOBL", "nips_2022_zbuq101sCNV", "nips_2022_zbuq101sCNV", "nips_2022_zbuq101sCNV", "nips_2022_zbuq101sCNV", "nips_2022_zbuq101sCNV" ]
nips_2022_HIslGib8XD
AutoMS: Automatic Model Selection for Novelty Detection with Error Rate Control
Given an unsupervised novelty detection task on a new dataset, how can we automatically select a ''best'' detection model while simultaneously controlling the error rate of the best model? For novelty detection analysis, numerous detectors have been proposed to detect outliers on a new unseen dataset based on a score function trained on available clean data. However, due to the absence of labeled data for model evaluation and comparison, there is a lack of systematic approaches that are able to select a ''best'' model/detector (i.e., the algorithm as well as its hyperparameters) and achieve certain error rate control simultaneously. In this paper, we introduce a unified data-driven procedure to address this issue. The key idea is to maximize the number of detected outliers while controlling the false discovery rate (FDR) with the help of Jackknife prediction. We establish non-asymptotic bounds for the false discovery proportions and show that the proposed procedure yields valid FDR control under some mild conditions. Numerical experiments on both synthetic and real data validate the theoretical results and demonstrate the effectiveness of our proposed AutoMS method. The code is available at https://github.com/ZhangYifan1996/AutoMS.
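Since the abstract above compresses the whole procedure into two sentences, a small illustration may help. The following is a hedged numpy sketch of the selection rule it describes: threshold each detector's p-values with a Benjamini-Hochberg-style step-up at level alpha, then keep the detector that flags the most test points. The function names are illustrative assumptions, and the paper's actual threshold (built from Jackknife p-values and its own FDP estimate) may differ in details.

```python
import numpy as np

def n_discoveries_at_fdr(p_values: np.ndarray, alpha: float) -> int:
    """Benjamini-Hochberg step-up: number of rejections at target FDR alpha."""
    m = len(p_values)
    order = np.sort(p_values)
    below = order <= alpha * np.arange(1, m + 1) / m
    return int(np.max(np.nonzero(below)[0]) + 1) if below.any() else 0

def automs_select(p_values_per_model: dict, alpha: float = 0.1):
    """Pick the candidate detector that makes the most FDR-controlled discoveries."""
    counts = {name: n_discoveries_at_fdr(p, alpha)
              for name, p in p_values_per_model.items()}
    best = max(counts, key=counts.get)
    return best, counts
```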
Accept
The paper proposes a method for finding the best anomaly detector among a set of candidate methods that are all based on constructing a score function. The selection method is based on a leave-one-out estimate. Some theoretical results are presented and proven in the appendix, and in addition, some experiments are reported. Overall, this paper presents a novel and interesting method for an important problem, and the theoretical considerations are certainly a plus. The only major issue of the paper is that only 4 real-world datasets were considered, and despite the fact that this problem was raised by the reviewers, the authors did not include more during the rebuttal phase. From my perspective, a strongly theoretical paper does not require extensive experiments, but the paper under review does not fall into this category. And for this reason, more experiments, on, say, another 15 datasets, would have been really helpful. In summary, this is an interesting paper with a sufficiently good theoretical part and some promising experiments. The latter could have been more extensive, but overall this paper should be accepted.
val
[ "MAedsfmaLr0", "66yvH5jbONY", "wSS2B1Io_7Y", "DP33cR5VIt", "qRfNfQ7bMWJ", "mZT4ryOD63w", "DbpdJYK1h_", "luQw6NfemKR", "gvWF0H3OhX3", "v8HMNqIH8-e", "LVyq5tbqOk3", "w4bHLYK4ZA9" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nDear Reviewer MLPQ,\n\nThank you for providing the insightful comments on the **Experiment scale of our AutoMS method and METAOD**.\nWe have tried our best to answer your questions piece by piece, to make it clear that why there is no need for our AutoMS method to go through hundreds of datasets. As MetaOD uses meta-learning, therefore, METAOD requires a large number of datasets as the historical benchmarks to measure the similarity between the test set and benchmark datasets. We sincerely understand your concern on the effectiveness of our AutoMS method, as we have only showed experiments on 4 datasets. One thing is certain that our AutoMS method works on all other datasets, of course not only on these 4 datasets showed in the original paper, which can be obtained from Theorem 4.1 (FDR control) in our paper.\nWe are willing to do more experiments if needed, but the conclusion will remain the same that AutoMS can always control FDR while METAOD cannot. We will revise the manuscript accordingly in the final version.\n\nWe really appreciate it if you could re-evaluate our work which, to our best knowledge, is the first effort of model selection for novelty detection with theoretical guarantees in the view of FDR control. Our proposed AutoMS method can select the best model and simultaneously control the error rate of the best model.\n\n\nAuthors of Paper AutoMS", " We greatly appreciate the time you spent on our responses and your valuable suggestions for adding experimental results. We will add experiments with AutoMS-SRS in the final version to further support the conclusion that our AutoMS-JK is better than SRS and AutoMS-SRS.", " Thank you again for your concern on the experiments.\n\n1. One advantage of AutoMS is not depending on historical benchmarks.\n- Clearly METAOD is an important previous work about model selection for OD.\n- As METAOD uses meta-learning, therefore, METAOD requires a large number of datasets as the historical benchmarks to measure the similarity between the test set and benchmark datasets.\n- It is quite obvious that our AutoMS has the advantage of not requiring historical benchmarks.\n- And the different train/test dataset similarity will effect the results of METAOD, while our AutoMS approach has no special requirements for datasets.\n\n\n2. The most difference between our AutoMS method and METAOD is that AutoMS can control the FDR, which METAOD does not take into account FDR control.\n- The most important **advantage** of our AutoMS procedure is that **the AutoMS can do both FDR control and model selection while METAOD is a model selection method without considering error rate control**.\n- All datasets we use show that without FDR control, the points detected using METAOD may contain too many false discoveries, which already provides sufficient evidence for our conclusion.\n- We do not choose special datasets that show the FDR of AutoMS outperforming METAOD.\n\n\n3. We think the experiments were sufficient enough to show the advantages of our procedure AutoMS compared to METAOD. We are willing to do more experiments if needed, but the experiments results are there to support our theory, not our main goal.\n- Our AutoMS method works on all other datasets, of course not only on these 4 datasets showed in the original paper. The effectiveness of our AutoMS can be found and supported by the theories of our paper, which AutoMS can control the FDR.\n- If needed, we can do experiments on as many datasets as we can in the final version. 
However, the experiments are not the main goal of our AutoMS method; they serve to support the effectiveness of AutoMS and our theory.\n- We believe that the conclusion on more datasets would remain the same: AutoMS can always control the FDR while METAOD cannot.\n", " Thank you for your responses. I understand that the FDP of AutoMS is more difficult to control than that of SRS. Also, the additional experiments suggest the advantage of applying the Jackknife procedure.\nThe newly added experimental results and the authors' response addressed a part of my concerns. I have raised my score (4->6).", " I thank the authors for the detailed response. I agree that AutoMS has the advantage of not requiring historical benchmarks. However, it is unclear to me whether AutoMS works only on these 4 datasets or also works on other datasets. Additionally, since MetaOD is clearly an important (or maybe the only) previous work about model selection for OD, I believe a more thorough comparison with MetaOD on more datasets is needed to show the significance of the results. I keep my score unchanged since my concern is not addressed.", " Q1: The computational overhead of applying the Jackknife procedure is not negligible, especially when the training set is large.\n\nA1: We use the Jackknife method to improve accuracy by making full use of the data information, which inevitably sacrifices some computational efficiency. If computational considerations outweigh accuracy in practical applications, we also recommend using cross-validation instead of leave-one-out. Alternatively, our model selection procedure can be combined with SRS when computation is a bottleneck.\n\nQ2: Experimental results, e.g., Fig. 3, suggest that the FDR control of $\\mathcal{M}$ gets slightly worse by applying the Jackknife compared to the original SRS (by Bates et al.).\n\nA2: \n- 1) The reason why the FDR of AutoMS is more difficult to control than that of SRS is that the FDP distribution of the selected detector differs from that of a given detector. Take a simple example: consider 10 standard Gaussian variables, each with mean 0; the largest of those 10 variables does not have mean 0. Therefore, it is more difficult to control the FDR of the selected detector. \n\n- 2) A smaller threshold detects a larger number of discoveries, including more false discoveries, which means the FDP and TDP change in the same direction. If we want a higher TDP, the FDP will tend to increase as well. From the experimental results in the **General Response**, both AutoMS-SRS and AutoMS-JK have this problem, which is caused by the selection step. But we can see from Theorem 4.1 and the experimental results that the deviation of the FDP is very small. Our procedure can select the detector that discovers more outliers while the FDP is still controlled **asymptotically around the FDR level**. \n\n- 3) SRS does not fully explore the clean data and can introduce randomness through data-splitting. So we use the Jackknife method instead of SRS to improve the accuracy and stability of the estimated p-values and enhance detection power. On the other hand, SRS only considers FDR control, while AutoMS selects the detector with the largest TDR while the FDP is still controlled **asymptotically around the FDR level**. Therefore, our AutoMS strategy, which takes both FDR and TDR into account and selects the best detector, can be more practical in real applications. \n\nQ3: It seems possible to apply the model selection using Equation (6) even when SRS is used to estimate $L_{\\mathcal{M}}$. 
If this is true, I would like to see the results of \"model selection + SRS\" in Fig. 2-4.\n\nA3: SRS can be combined with our model selection procedure, hereafter called AutoMS-SRS. AutoMS-SRS can be regarded as a special case of AutoMS and also has the theoretical guarantee that the selected model yields asymptotically valid FDR control. Experimental results on AutoMS-SRS and AutoMS-JK are shown in the **General Response**.\n\n\nQ4: In Fig. 4: Why are SRS-kNN and SRS-OCSVM not compared?\n\nA4: We compared 6 algorithms under different target FDR levels $\\alpha$, including kNN and OCSVM coupled with SRS, in **Section B of the Supplementary Material**. We did not show the results of SRS-kNN and SRS-OCSVM because they sometimes give all zeros and do not always produce usable results. \nWe thus picked the better-behaved SRS-LODA and SRS-LOF to compare with our method, and the results show that the TDR of AutoMS is higher than that of the SRS-based methods. \n\n\nQ5: Does AutoMS include kNN/OCSVM models as candidates?\n\nA5: Yes, AutoMS uses HBOS, iForest, kNN, LODA, LOF, and OCSVM with their corresponding hyperparameters as the set of candidate detectors. A complete list of the detector set we used is shown in **Section A of the Supplementary Material**.\n\n\nQ6: In Algorithm 1: The procedure for learning $S_{\\mathcal{M}}^{[-j]}$ should be added between lines 3-4.\n\nA6: The definition of $S_{\\mathcal{M}}^{[-j]}$ is given in lines 139-140: $S_{\\mathcal{M}}^{[-j]}$ is the score function trained on $\\mathcal{D}^{[-j]}$, the subset of the training set $\\mathcal{D}$ with the $j$th observation removed.", " Q: Only several real-world datasets are selected in the experiments. As a comparison, the previous work MetaOD has performed experiments on hundreds of datasets. The authors are encouraged to conduct a more thorough comparison with MetaOD.\n\nA: \n- 1) It is worth noting that METAOD is a model selection method using meta-learning without considering error rate control, while AutoMS can do both model selection and error rate control. That is why METAOD requires a large number of datasets to study the effect of task similarity, while a few datasets are enough for our proposed AutoMS to illustrate its advantages.\n- 2) METAOD requires a large number of datasets as historical benchmarks to measure the similarity between the test set and the benchmark datasets via meta-learning, and the train/test dataset similarity will affect the results of METAOD, while our AutoMS approach has no special requirements on the datasets. \n- 3) The four real datasets used in Section 5.4 have illustrated the advantages of AutoMS over other methods; therefore, we do not need to go through hundreds of datasets. \n- 4) Note that the SRS method guarantees FDR control for any given detector, without considering model selection. The conclusion that our AutoMS approach outperforms SRS and METAOD is consistent across the four datasets. \n- 5) For example, Credit Card is better suited to SRS-LODA and Covertype to SRS-LOF, which reflects the importance of model selection. The FDP on all datasets using METAOD is very high, indicating a very high false discovery rate, which means METAOD cannot control the FDR. \nIn contrast, AutoMS can improve the TDR while controlling the FDR.\n", " Q1: In the page-5 rationale, you explain why we can select the model which detects the most outliers as the \"best\" one. ... The issue may be that you assume that all the models will keep the FDR level constant.\n\nA1: \n- 1) It is indeed true that the conservative model A is better in your special case, which has a strong signal where outliers are easy to detect. But in more general cases where the signal is not strong enough, overly aggressive models may produce more false discoveries, while overly conservative models may miss weak signals/outliers. In fact, a smaller threshold detects a larger number of discoveries, including more false discoveries, which means the FDP and TDP change in the same direction. Our goal is to have a higher TDR while controlling the FDR under a target level. To achieve this goal, the FDP will tend to be closer to the preset level when a larger TDP is obtained. Therefore, the FDP of a competitively good detector will be roughly around our target level. \n\n- 2) The preset FDR level should be an acceptable level, which means all detectors that control the FDR below this level should be accepted. As long as the FDR of the selected detector is below the preset level, we have achieved our goal on the FDR and can focus on improving the TDR.\n\nQ2: The main theoretical result is based on Assumptions 4.1-4.3. It is very common to require the assumptions of learning stability and density. However, the assumption on rates seems a little complicated here. Could you give more explanation about this assumption?\n\nA2: \n- 1) In general, Assumption 4.3 is a technical one to ensure the uniform consistency of the estimated p-values, which plays an important role in the convergence of the FDR control. \n\n- 2) Specifically, $A_n=o(n)$ is a pre-specified sequence and gives a non-trivial lower bound on the true p-value function, $G_{\\mathcal{M}}(t)\\geq \\frac{A_n}{n}=o(1)$. The quantity $B_m$ measures the magnitude of the difference between two p-values, e.g., the difference between the p-values based on the full-sample score function and the Jackknife ones in Lemma C.3, or the difference between the joint p-values of the pair $(j,k)$ based on the Jackknife score estimators and the leave-$(j,k)$-out estimators in Lemma C.2. That is why we need $B_m=o(1)$, i.e., two p-values should be close enough. This is a very mild condition, and we can expect the p-values to be close if the training sample size of inliers $m$ and the test sample size of inliers $n_0$ are large enough in practice.\n\n- 3) Note that the number of detectors $\\varpi_m$ is allowed to go to infinity at some rate for the proposed AutoMS. The assumptions $n\\varpi_m B_m=o(1)$ and $m\\varpi_mA_n^{-2/3}=o(1)$ indicate the required rate of $\\varpi_m$ to ensure the uniform consistency among the $\\varpi_m$ detectors.\n\nQ3: In Section 5.2, the results show that the Jackknife method is better than the simple split for estimating the score function. However, in later experiments, it seems that SRS has a lower FDP than AutoMS. Based on this, could you please add another AutoMS-based method in Sections 5.3 and 5.4, namely AutoMS-simple, which just replaces the Jackknife method with the simple split?\n\nA3: \n- 1) SRS can be combined with our model selection procedure, hereafter called AutoMS-SRS. Experimental results on AutoMS-SRS and AutoMS-JK are shown in the **General Response**. AutoMS-SRS can be regarded as a special case of AutoMS and also has the theoretical guarantee that the selected model yields **asymptotically** valid FDR control. But SRS does not fully explore the clean data and can introduce randomness through data-splitting. So we use the Jackknife method instead of SRS to improve the accuracy and stability of the estimated p-values and enhance detection power.\n\n- 2) The reason why the FDR of AutoMS is more difficult to control than that of SRS is that the FDP distribution of the selected detector differs from that of a given detector. Take a simple example: consider 10 standard Gaussian variables, each with mean 0; the largest of those 10 variables does not have mean 0. Therefore, it is more difficult to control the FDR of the selected detector. \n\n- 3) As we explained in A1 (1), a smaller threshold detects a larger number of discoveries, including more false discoveries, which means the FDP and TDP change in the same direction. If we want a higher TDP, the FDP will tend to increase as well. From the experimental results in the **General Response**, both AutoMS-SRS and AutoMS-JK have this problem, which is caused by the selection step. But we can see from Theorem 4.1 and the experimental results that the deviation of the FDP is very small. Our procedure can select the detector that discovers more outliers while the FDP is still controlled **asymptotically around the FDR level**. Therefore, our AutoMS strategy, which takes both FDR and TDR into account and selects the best detector, can be more practical in real applications.", " Dear reviewers, we thank you for your great efforts and valuable comments on our paper. Below, we would like to respond to your questions in general.\n\nFirst, we would like to emphasize our contributions again. \n\n- **1) We propose a criterion for model selection from the perspective of FDR control, which ideally finds a \"best\" detector with the largest TDR while controlling the FDR.** Our AutoMS method gives theoretical guarantees for the TDP and FDR, while most existing model selection work lacks statistical theoretical guarantees. \n\n- **2) With FDR control, the detector selected for having the largest number of discoveries is roughly the one with the largest TDR.** Without FDR control, simply selecting the largest number of discoveries is not even a correct criterion; in that case, most detected discoveries are inliers and the true outliers are not detected. The rationale in lines 157-164 explains our selection strategy.\n\n- **3) We give a new theoretical guarantee that the FDR of the selected best detector can be controlled asymptotically below the target level.** \nThis differs from the guarantee in Bates et al. (2021), which only concerns FDR control for one fixed detector under a single data split. \n\n- **4) We use the Jackknife method instead of the split-conformal approach in Bates et al. (2021) to improve the accuracy and stability of the estimated p-values**, by fully exploring the clean data and avoiding the randomness caused by data-splitting.\n\n**Comment: Experiment results on AutoMS-SRS and AutoMS-JK**\n\n**Response:**\nWe appreciate the suggestion to add another experiment that combines our model selection procedure with SRS, hereafter called AutoMS-SRS; AutoMS with the Jackknife is called AutoMS-JK. The table below shows the results of AutoMS-JK and AutoMS-SRS when the target FDR level is 0.10. We will add experiments with AutoMS-SRS in the final version. 
\n\nData set | AutoMS-JK FDP | AutoMS-JK TDP | AutoMS-SRS FDP | AutoMS-SRS TDP\n------ | ------ | ------ | ------ | ------\nCovertype | 0.106 | 0.933 | 0.105 | 0.905\nCredit Card | 0.099 | 0.758 | 0.104 | 0.757\nSatellite | 0.114 | 0.968 | 0.134 | 0.963\nShuttle | 0.100 | 0.942 | 0.102 | 0.926\n\nWe see that the FDP of AutoMS-SRS is as large as that of AutoMS-JK, or even worse (the larger the deviation from 0.1, the worse); the larger FDP is caused by the selection step, so AutoMS-SRS also has a larger FDP than SRS. However, AutoMS-JK has a higher TDP than AutoMS-SRS.", " This work automatically selects a best detection model while simultaneously controlling the false discovery rate.\nThe experimental results show that the proposed method can control the false discovery rate (FDR) and the true discovery rate (TDR) simultaneously. This paper is very clearly written and easy to understand. I really enjoyed reading this paper, and it makes an interesting contribution. The key idea is estimating more \"stable\" p-values for better FDR control and then adding an extra step (i.e., model selection) for additional TDR control. It is not surprising to see in the experiments that this method has better TDR than methods that do not control the TDR.\n\nThis paper can be seen as a good extension of Bates et al. [17]. The authors replace the simple split-conformal prediction with the Jackknife technique for more \"accurate\" estimated p-values, by fully exploring the clean data and avoiding the randomness caused by data-splitting. This idea is very straightforward. Another contribution is selecting the best model from a pool of detectors. \"Best\" here means that the model detected the most outliers in the new dataset, which is not a novel technique either. Overall, the novelty of this work is limited. The manuscript would benefit from adding an explanation of the novelty of this combination of two existing techniques, theoretically or practically. I have a couple of concerns and questions regarding the method and the empirical evaluations.\n\n1. In the page-5 rationale, you explain why we can select the model which detects the most outliers as the \"best\" one. This idea seems somewhat heuristic, but I am still confused about this part. Consider two models A and B with TDP_A = 1 and TDP_B = 1, but B is more aggressive and has detected some false novelties. Based on your strategy, we will choose B as the best, but actually A is better than B because the FDP of A is smaller. The issue may be that you assume that all the models will keep the FDR level $\\alpha$ constant.\n\n2. The main theoretical result is based on Assumptions 4.1-4.3. It is very common to require the assumptions of learning stability and density. However, the assumption on rates seems a little complicated here. Could you give more explanation about this assumption?\n\n3. In Section 5.2, the results show that the Jackknife method is better than the simple split for estimating the score function. However, in later experiments, it seems that SRS has a lower FDP than AutoMS. \nBased on this, could you please add another AutoMS-based method in Sections 5.3 and 5.4, namely AutoMS-simple, which just replaces the Jackknife method with the simple split? \n\n\nOther minor comments:\npage 5, line 189: ADNER --> AutoMS??\n\n###########\n\nupdate: Thanks for the authors' response. I will keep my score as 6.\n yes", " This paper proposes a general AutoML framework for novelty detection and controlling the error rate of the model. 
The framework consists of an automated model selection procedure with FDR control. A theoretical bound is provided for AutoMS. Extensive experiments are conducted to demonstrate its effectiveness. Strengths\n1: The paper proposes a unified framework that can be combined with different base detectors.\n2: The paper provides a theoretical bound on the FDR.\n3: Experiments are conducted to evaluate the effectiveness of AutoMS on both synthetic and real-world data.\n\nWeaknesses\n1. Only several real-world datasets are selected in the experiments. As a comparison, the previous work MetaOD has performed experiments on hundreds of datasets. The authors are encouraged to conduct a more thorough comparison with MetaOD. None None", " The authors propose a model selection method for novelty detection with false discovery rate (FDR) control. Given a detection model $M$, a detection threshold $L_M$ is selected based on the Benjamini-Hochberg (BH) procedure so that the FDR of $M$ is less than $\\alpha$. To estimate the p-values in the BH procedure precisely, the authors propose to apply the Jackknife estimation, which extends the existing work by Bates et al. After estimating $L_M$ for each model $M \\in G$, the model that detects the most novelties with $L_M$ is selected as the best model $M^*$. The authors also give theoretical results showing that the FDP of $M^*$ is non-asymptotically bounded and the FDR of $M^*$ is asymptotically bounded by $\\alpha$. Experiments using synthetic and real datasets demonstrate the advantage of the proposed method against the work by Bates et al. and METAOD.\n Strengths\n- Hyperparameter tuning or model selection is especially hard in unsupervised settings like novelty detection. This paper proposes a simple yet effective approach to this problem from the viewpoint of \"maximize$_{M \\in G}$ #detection$(M)$, subject to FDR$(M) < \\alpha$.\"\n- The FDR control of $M$ is mainly achieved by the existing framework of Bates et al., but a Jackknife extension is proposed.\n\nWeaknesses\n- The computational overhead of applying the Jackknife procedure is not negligible, especially when the training set is large.\n- Experimental results, e.g., Fig. 3, suggest that the FDR control of $M$ gets slightly worse by applying the Jackknife compared to the original SRS (by Bates et al.).\n\n\n=====POST-REBUTTAL COMMENTS========\n\nThanks for the authors' response. The newly added experimental results and the authors' response addressed a part of my concerns. I have raised my score. It seems possible to apply the model selection using Equation (6) even when SRS is used to estimate $L_M$. If this is true, I would like to see the results of \"model selection + SRS\" in Fig. 2-4.\n\nIn Fig. 4:\n- Why are SRS-kNN and SRS-OCSVM not compared?\n- Does AutoMS include kNN/OCSVM models as candidates?\n\nIn Algorithm 1:\n- The procedure for learning $S_M^{[-j]}$ should be added between lines 3-4.\n- line 11: Input -> Output\n\nIn line 189, what is \"the ADNER method\"?\n The computational overhead of applying the k-fold Jackknife against the original SRS should be assessed in the experiment.\n" ]
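The discussion above (the definition of $S_{\mathcal{M}}^{[-j]}$ in A6, and the leave-one-out overhead raised in Q1) can be made concrete with a short sketch. Below is a hedged numpy illustration of one common leave-one-out (Jackknife) p-value construction; `fit_detector`, `knn_score_factory`, and the exact p-value formula are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def knn_score_factory(data, k=5):
    """Toy score function: mean distance to the k nearest training points."""
    def score(x):
        d = np.linalg.norm(x[:, None, :] - data[None, :, :], axis=-1)
        return np.sort(d, axis=1)[:, :k].mean(axis=1)
    return score

def jackknife_p_values(train, test, fit_detector):
    m = len(train)
    # Score each held-out training point with the detector trained without it.
    # These m refits are exactly the overhead the reviewer points out.
    loo_scores = np.array([
        fit_detector(np.delete(train, j, axis=0))(train[j:j + 1])[0]
        for j in range(m)
    ])
    full_score = fit_detector(train)  # score function trained on all of D
    test_scores = full_score(test)
    # Empirical p-value: how extreme is each test score among the LOO scores?
    return (1.0 + (loo_scores[None, :] >= test_scores[:, None]).sum(axis=1)) / (m + 1.0)

rng = np.random.default_rng(0)
train = rng.standard_normal((200, 5))                      # clean inliers
test = np.vstack([rng.standard_normal((95, 5)),
                  rng.standard_normal((5, 5)) + 4.0])      # 5 obvious outliers
pvals = jackknife_p_values(train, test, knn_score_factory)
```

The m refits in the loop can be replaced by k-fold refitting or by the split-sample SRS variant when computation is a bottleneck, trading some accuracy for speed.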
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "LVyq5tbqOk3", "DP33cR5VIt", "qRfNfQ7bMWJ", "mZT4ryOD63w", "DbpdJYK1h_", "w4bHLYK4ZA9", "LVyq5tbqOk3", "v8HMNqIH8-e", "nips_2022_HIslGib8XD", "nips_2022_HIslGib8XD", "nips_2022_HIslGib8XD", "nips_2022_HIslGib8XD" ]
nips_2022_omI5hgwgrsa
Optimal Algorithms for Decentralized Stochastic Variational Inequalities
Variational inequalities are a formalism that includes games, minimization, saddle point, and equilibrium problems as special cases. Methods for variational inequalities are therefore universal approaches for many applied tasks, including machine learning problems. This work concentrates on the decentralized setting, which is increasingly important but not well understood. In particular, we consider decentralized stochastic (sum-type) variational inequalities over fixed and time-varying networks. We present lower complexity bounds for both communication and local iterations and construct optimal algorithms that match these lower bounds. Our algorithms are the best among the available literature not only in the decentralized stochastic case, but also in the decentralized deterministic and non-distributed stochastic cases. Experimental results confirm the effectiveness of the presented algorithms.
Accept
The paper makes a significant contribution to the literature on distributed SVIs. The results are fairly comprehensive -- both lower bounds and algorithms achieving those bounds are provided. Hence, the paper is recommended for acceptance.
train
[ "JwOS4uonNR", "5P2CB4zIaUW", "Jxkj6ypJld", "DG57C4ZO-hh", "DokUYpN5e6_", "ZbiyTcwT3aQ", "wguDRuYabqP", "-eFFJZAqKzP", "l8BNoGdn4ju", "zNh2uhLQHoT", "Za3yPrLtt-3", "U3p48Hsw8ef", "IobNwRXCz4b" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We're glad to hear it! Thanks again for the review!", " Thanks for the response. As in my previous review, I think this paper has value, and I still think this paper can be accepted. Variational inequality has more applications than optimization.", " With this message, we would just like to kindly remind Reviewers that we would be happy if Reviewers would participate in the rebuttal discussion process. We are looking forward to hearing from Reviewers **LbuJ** and **JEvK**. We thank Reviewer **4cfY** for the responses to the rebuttal, we are also looking forward to hearing from Reviewer **4cfY** in reply to our clarifying question.", " We greatly appreciate the response, the comments, and the raising of the score! We kindly ask you to let us know if any issues are left that we could address. Perhaps we haven't solved all the problems mentioned in the review? We ask this because, at this point, we don't fully understand the score. The main contribution of our paper is the algorithms that are not only state-of-the-art but are provably optimal, i.e., they have the best possible complexities due to our lower bounds. However, we could not find any negative points about the algorithms in the review.\n", " Thanks for the response. The authors have addressed my previous concern and I have decided to raise my score. ", " We thank Reviewer **4cfY** for review, time and the insightful comments, which will help to improve our work.\n\nNext, we answer the questions and shortcomings noted by Reviewer.\n\n> **The reviewer is concerned about the role of the convex function g as stated in (1) in the lower bound analysis.** In fact, as seen in (7)-(8) of Sec. 3, [also (19) in Appendix C], the class of algorithm under consideration seems to be independent of g, i.e., the lower bounds are applicable only when g=0. This leads to the further confusion as the proposed \"optimal\" algorithms actually use a proximal operator depending on g, which made the algorithms to be outside of the algorithm class analyzed in Sec. 3. Strictly speaking, the said result is still giving a lower bound complexity to solving the VI problem, yet it should be motivated clearly in the analysis setup for why g can be ignored. \n\n1) We thank Reviewer! We changed Definition 3.1 and added the ability to compute $\\text{prox}_g$ in local computations. Please see (7) in Definition 3.1 of the revision.\n\n2) We also added to the revision that $g$ is a proximally friendly function, i.e. $\\text{prox}_g$ can be computed for free. This is a classic assumption when considering composite problems, because usually the objective function is complex, and $g$ is, for example, some simple regularizer for which one can analytically calculate the formula for the proximal operator. \n\n3) Adding the ability to calculate prox in Definition 3.1 does not affect the resulting lower bounds. When we get lower bounds we need to present a particular bad problem from the class of problems that makes all algorithms converge in a bad way. \nWe consider the following class of problems: distributed variational inequalities, where the operator $F$ is strongly monotone (Assumption 2.2) and Lipschitz continuous\n (Assumption 2.1), and the composite function $g$ is proper lower semicontinuous convex.\nAs we have already said, in order to obtain lower bounds, we need to choose a bad example from this class of problems. We take some operator $F$, and $g = 0$. Is $g = 0$ proper lower semicontinuous convex? Yes. And $\\text{prox}_g (z) = z$ for $g = 0$. 
It turns out that in such a situation, the calculation of the prox operator does not change anything. We indicate this in Appendix C; in particular, because of this, we do not need to change the proof in any way.\n\n\n4) If the Reviewer wants to see how we can obtain lower bounds with $g \\neq 0$, the following trick can be done. In the current version of the paper, when obtaining lower bounds, we solve the saddle point problem on $R^{d_x} \\times R^{d_y}$:\n$$\n\\min_{R^{d_x}} \\max_{R^{d_y}} f(x,y).\n$$\nThis problem has a solution $x^*_{orig}, y^*_{orig}$. Let us define the sets $\\mathcal{X} = B(0, R)$ and $\\mathcal{Y} = B(0, R)$ -- balls centered at $0$ with radius $R$, where $R > \\max( \\|x^*_{orig}\\|, \\| y^*_{orig}\\|)$. This means that $(x^*, y^*) \\in \\mathcal{X} \\times \\mathcal{Y}$. \nLet us consider the following problem for the lower bounds:\n$$\n\\min_{\\mathcal{X}} \\max_{\\mathcal{Y}} f(x,y).\n$$ \nThe solution of this problem is still at the point $(x^*_{orig}, y^*_{orig})$. We can look at this problem differently and write it in the following form:\n$$\n\\min_{R^{d_x}} \\max_{R^{d_y}} f(x,y) + g_1 (x) + g_2(y),\n$$\nwhere $g_1$ and $g_2$ are indicator functions, i.e., $g_1 (x) = 0$ if $x \\in \\mathcal{X}$ and $g_1(x) = + \\infty$ if $x \\notin \\mathcal{X}$ (the same for $g_2$ and $\\mathcal{Y}$). When we rewrite\n$\n\\min_{R^{d_x}} \\max_{R^{d_y}} f(x,y) + g_1 (x) + g_2(y)\n$\nas a variational inequality, $f(x,y)$ corresponds to the operator $F$ and $g(z) = g_1 (x) + g_2(y)$; here $g(z)$ is a convex function. \nLet us understand how the lower bounds change. Note that in this case $\\text{prox}_g$ is the Euclidean projection onto the balls, i.e.,\n$$\n\\text{prox}_g(z) = \\binom{\\text{proj}_B (x)}{\\text{proj}_B (y)}\n$$\nand $\\text{proj}_B (x) = x$ if $x \\in B$, but if $x \\notin B$, then\n$$\n\\text{proj}_B (x) = x \\cdot \\frac{R}{\\| x\\|}.\n$$\nThe main idea behind the lower bounds is the number of non-zero coordinates that we can guarantee in the final output. Note that this proximal operator cannot increase the number of non-zero coordinates, so the reasoning from our proofs remains completely valid.\n", " > **From a quick scan on the appendix, it seems the key analysis depends mainly on [68].** The said reference has not been discussed in the main paper, while it is necessary to discuss the main novelty here, e.g., in terms of the proof techniques.\n\n[1] ([68] in the review) is devoted to lower bounds for deterministic non-distributed VIs. Our lower bounds are for distributed stochastic problems. We take the same step as papers [2] and [3] did in their time: from Nesterov's lower bounds [4] for deterministic non-distributed minimization problems, they obtained lower bounds for deterministic distributed minimization problems and then for stochastic distributed ones.\n\nWe added some more references and ideas about the lower bounds (see lines 209 and 651 in the revision). \n\n> **It seems that the technical novelty is limited**\n\nWe see comments and questions only about the lower bounds, but **70 percent of our contribution is the algorithms**! By themselves, these algorithms without the lower bounds are a large and interesting contribution (because their upper bounds are SOTA), and the lower bounds complement them well and show optimality.\n\n> **The definition of (7) can be problematic** \n\nThis is a typo! We deleted the word \"given\" from the definition.\n\n> **2 more typos** \n\nWe fixed them. Thanks!\n\n[1] Junyu Zhang, Mingyi Hong, and Shuzhong Zhang. 
On lower iteration complexity bounds for the saddle point problems.\n\n[2] Kevin Scaman, Francis Bach, Sébastien Bubeck, Yin Tat Lee, and Laurent Massoulié. Optimal algorithms for smooth and strongly convex distributed optimization in networks.\n\n[3] Hadrien Hendrikx, Francis Bach, and Laurent Massoulie. An optimal algorithm for decentralized finite sum optimization.\n\n[4] Yurii Nesterov. Introductory lectures on convex optimization: a basic course.\n", " We thank Reviewer **JEvK** for the review, time, and insightful comments, which will help to improve our work.\n\nWe are glad that the Reviewer liked our results. Next, we answer the questions and shortcomings noted by the Reviewer.\n\n> **The strong assumption may limit the broad application of the proposed algorithms.** It requires strong monotonicity for the convergence of the proposed algorithms. Is it possible to show the convergence for general problems?\n\n1) For the general monotone case, we are sure that we can obtain the result. Since we cannot give the whole proof here, we propose to consider the classical regularization trick to show that the results can be obtained. \nThe essence of this trick is to make a convex minimization problem strongly convex, a convex-concave saddle point problem strongly convex-strongly concave, and a monotone VI strongly monotone. For example, consider the convex-concave saddle point problem\n$$\n\\min_{x \\in X} \\max_{y \\in Y} f(x,y).\n$$\nWe can change the objective function and consider\n$$\n\\tilde f(x,y) = f(x,y) + \\frac{\\varepsilon}{8D^2}\\|x - x_0 \\|^2 - \\frac{\\varepsilon}{8D^2} \\|y - y_0 \\|^2,\n$$\nwhere $x_0, y_0$ is a starting point, $\\varepsilon$ is the desired solution accuracy, and $D = \\max (D_X, D_Y)$ with $D_X, D_Y$ the Euclidean diameters of $X$ and $Y$.\nThe new problem is $\\frac{\\varepsilon}{4D^2}$-strongly convex-strongly concave. Both regularizers do not spoil the original problem much, because their sum does not exceed $\\varepsilon/4$. If we solve the new problem with accuracy $\\varepsilon/2$, then we solve the original problem with accuracy $\\varepsilon$.\nSimilarly, we can consider a monotone operator $F$ and use instead the strongly monotone operator\n$$\n\\tilde F = F + \\frac{\\varepsilon}{4 D^2} (z - z_0).\n$$\nBased on these observations, it is easy to obtain convergence estimates for the monotone case: just substitute $\\mu = \\frac{\\varepsilon}{4D^2}$ into our estimates from the paper. \n\n2) Non-monotone operators are a more complicated issue, because there are no results for them in the literature. Usually, non-monotonicity is considered with an additional Minty/variational stability condition. People also consider saddle point problems under the PL condition or convex-nonconcave saddle point problems. These are interesting directions for future research, but one still needs to decide which of these setups has higher priority.\n\n> **In the complexity, the term $n \\log \\frac{1}{\\varepsilon}$ is not included in the computation**, and the term $\\sqrt{n\\chi} \\log \\frac{1}{\\varepsilon}$ is not included in the communication (they are both in Theorem 4.2). When $\\sqrt{n}\\geq \\frac{L}{\\mu}$, this term cannot be ignored.\n\nIf we understand correctly, this comment relates to Table 1. We modified Table 1 in the revision. At first we wanted to add the full estimates to Table 1, but this made the rows longer and forced the text in Table 1 to be very small. 
Then we added a footnote (6), where we specify that the complexities can contain additional factors. Please see the revision of our paper.\n\n> **It seems that the difference between Algorithm 4 and the algorithms in [1] is not just in the gradient estimator.** In [1], the extragradient requires two proximal operators in each iteration, while FBF uses a different update order in each iteration. Since the non-distributed case is also interesting, it would be better to have more discussion in the supplementary material.\n\nWe are glad that the Reviewer found more differences between the algorithms from [1] and our non-distributed algorithm. But it is important to note that our algorithm should be compared, first of all, not with Algorithms 1 (extragradient) or 2 (FBF), but with Algorithm 4 (FoRB) from [1]. It is Algorithm 4 that is closest to ours. A comparison with Algorithm 4 is already present in Appendix B (see line 627). Therefore, we did not add anything; we think this is the most honest option with respect to the authors of [1].\n\n> **The parameter for the algorithm requires the strong convexity constant**, which may not be easy to obtain. It may limit this algorithm's application to many nonconvex machine learning problems.\n\nIt seems that we answered this issue when we discussed the general analysis above.\n\n\n[1] Ahmet Alacaoglu and Yura Malitsky. Stochastic variance reduction for variational inequality methods.\n", " We thank Reviewer **LbuJ** for the review, time, and insightful comments, which will help to improve our work.\n\nWe are glad that the Reviewer liked our results. However, the Reviewer noted that they wanted to see more context and intuition. Unfortunately, it is hard for us to see which points we should clarify in more detail. We just note that we have Section 1.4, where we discuss related results. We also have Section B (Appendix), where we give a link between our methods and those known in the literature.\n\nWe would be grateful if the Reviewer could give us more detailed comments on what we can add regarding context and intuition, giving us an opportunity to improve the paper.", " Dear Reviewers, Area Chairs and Senior Area Chairs!\n\nWe published a revision of our paper. We made some small changes and highlighted them in blue. In particular,\n\n1) We added a footnote (6) to Table 1 at the request of Reviewer **JEvK**.\n\n2) We added that $g$ is a proximally friendly function.\n\n3) We added $\\text{prox}$ to Definition 3.1 at the request of Reviewer **4cfY**.\n\n4) We added some more references and ideas about the lower bounds (see lines 209 and 651) at the request of Reviewer **4cfY**.\n\n5) We fixed 3 typos (thanks, Reviewer **4cfY**).\n\nThank you very much for your work! You really helped make our paper better.\n", " This paper studies decentralized stochastic variational inequalities (SVI). The authors present lower bounds for the communication and computation complexities of decentralized SVI and construct new algorithms that achieve the optimal rates matching these lower bounds, for both fixed and time-varying networks. This paper studies an interesting problem, and the results seem solid and comprehensive. The writing and exposition can be improved to give more context and intuition to the readers. - Yes", " Methods for variational inequalities are applied to solve problems in optimization, machine learning, image processing, game theory, etc. 
This paper provides lower complexity bounds in communication and computation for decentralized stochastic variational inequalities under some conditions. Then, optimal algorithms are proposed to match these lower bounds. This paper considers a strongly monotone operator for which linear convergence is derived. Strengths\n+ The lower bound on the complexity is provided. In general, finding a lower bound is not easy. \n+ Optimal algorithms are proposed to match the lower bound. \n\nWeaknesses\n+ The strong assumption may limit the broad application of the proposed algorithms. It requires strong monotonicity for the convergence of the proposed algorithms. Is it possible to show convergence for general problems?\n+ In the complexity, the term $n\\log{1\\over \\epsilon}$ is not included in the computation, and the term $\\sqrt{n\\chi}\\log{1\\over \\epsilon}$ is not included in the communication (they are both in Theorem 4.2). When $\\sqrt{n}\\geq {L\\over \\mu}$, this term cannot be ignored. \n+ It seems that the difference between Algorithm 4 and the algorithms in [1] is not just in the gradient estimator. In [1], the extragradient requires two proximal operators in each iteration, while FBF has a different update order in each iteration. Since the non-distributed case is also interesting, it would be better to have more discussion in the supplementary material.\n+ The parameter for the algorithm requires the strong convexity constant, which may not be easy to obtain. This may limit the algorithm's application to many nonconvex machine learning problems. None None", " This paper considers the decentralized variational inequality problem and focuses on the setting where the VI is strongly monotone. In particular, it studies the optimal stochastic algorithm under the finite-sum setting by deriving a lower communication/computation complexity bound and an algorithm that achieves the bound. Numerical experiments are also presented for the proposed \"optimal\" algorithm. **Strengths**: The setting considered for the optimal decentralized stochastic VI algorithms is new. Furthermore, the proposed \"optimal\" algorithms have demonstrated promising performance in the reported numerical experiments.\n\n**Weaknesses**: While the setting considered for the optimal algorithm is new, it seems that the technical novelty is limited. Moreover, the reviewer finds a number of issues with the definition of the problem class considered, as follows. \n\n1. The reviewer is concerned about the role of the convex function $g$ as stated in (1) in the lower bound analysis. In fact, as seen in (7)-(8) of Sec. 3 [also (19) in Appendix C], the class of algorithms under consideration seems to be independent of $g$, i.e., the lower bounds are applicable only when $g = 0$. This leads to further confusion, as the proposed \"optimal\" algorithms actually use a proximal operator depending on $g$, which puts the algorithms outside the algorithm class analyzed in Sec. 3. Strictly speaking, the said result still gives a lower complexity bound for solving the VI problem, yet the analysis setup should clearly motivate why $g$ can be ignored. \n\n2. From a quick scan of the appendix, it seems the key analysis depends mainly on [68]. The said reference has not been discussed in the main paper, while it is necessary to discuss the main novelty here, e.g., in terms of the proof techniques. In addition to the above-mentioned weaknesses, there are a number of typos/issues throughout the paper:\n\n1. 
The definition of (7) can be problematic:\n\nIt is in fact described in the equation that the new iterate $z$ is taken from the span of $z'$ and $\\sum_{i_m} F_{m,i_m}(z'')$ for some given $z', z'' \\in {\\cal M}_m$. \n\nThis seems to suggest that $z', z''$ have to be fixed. This is different from the framework of [29,59,68], which defines $z$ to be in the span of $\\{ z', \\sum_{i_m} F_{m,i_m}(z'') : z', z'' \\in {\\cal M}_m \\}$. That said, this seems to be merely a typo, since the paper mainly adopts the results from [29,59,68].\n\n2. In (3), it should be $\\max_y$ instead of $\\min_y$?\n\n3. In line 761, \"for the fixed it is real distance\" misses the word \"graph\". The limitations have been described in the reviews under \"Weaknesses\" and \"Questions\". " ]
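The exchange above repeatedly references the extragradient method and strongly monotone operators. For readers outside this subfield, here is a runnable toy illustration in Python (numpy) of textbook extragradient on a strongly monotone, Lipschitz operator; it is a generic sketch, not the paper's decentralized algorithm, and the test operator is an assumption chosen only because its solution is known.

```python
import numpy as np

rng = np.random.default_rng(0)
d, mu = 10, 0.1
B = rng.standard_normal((d, d))
A = B - B.T                        # skew-symmetric: <Az, z> = 0, so F below is
F = lambda z: A @ z + mu * z       # mu-strongly monotone with unique root z* = 0

L = np.linalg.norm(A + mu * np.eye(d), 2)  # Lipschitz constant (spectral norm)
eta = 1.0 / (2.0 * L)                      # a standard safe step size
z = rng.standard_normal(d)
for _ in range(500):
    z_half = z - eta * F(z)        # extrapolation step
    z = z - eta * F(z_half)        # update step using the extrapolated point
print(np.linalg.norm(z))           # decays linearly toward the solution z* = 0
```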
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "5P2CB4zIaUW", "-eFFJZAqKzP", "nips_2022_omI5hgwgrsa", "DokUYpN5e6_", "IobNwRXCz4b", "IobNwRXCz4b", "IobNwRXCz4b", "U3p48Hsw8ef", "Za3yPrLtt-3", "nips_2022_omI5hgwgrsa", "nips_2022_omI5hgwgrsa", "nips_2022_omI5hgwgrsa", "nips_2022_omI5hgwgrsa" ]
nips_2022_9U4gLR_lRP
Logit Margin Matters: Improving Transferable Targeted Adversarial Attack by Logit Calibration
Previous works have extensively studied the transferability of adversarial samples in untargeted black-box scenarios. However, it remains challenging to craft targeted adversarial examples with higher transferability than non-targeted ones. Recent studies reveal that the traditional Cross-Entropy (CE) loss function is insufficient to learn transferable targeted perturbations due to the issue of vanishing gradient. In this work, we provide a comprehensive investigation of the CE function and find that the logit margin between the targeted and non-targeted classes quickly becomes saturated in CE, which largely limits the transferability. Therefore, in this paper, we pursue the goal of enlarging logit margins and propose two simple and effective logit calibration methods, which are achieved by downscaling the logits with a temperature factor and an adaptive margin, respectively. Both of them can effectively encourage the optimization to produce larger logit margins and lead to higher transferability. Besides, we show that minimizing the cosine distance between the adversarial examples and the targeted classifier can further improve the transferability, which benefits from downscaling the logits via L2-normalization. Experiments conducted on the ImageNet dataset validate the effectiveness of the proposed methods, which outperform the state-of-the-art methods in black-box targeted attacks. The source code for our method is available at https://anonymous.4open.science/r/Target-Attack-72EB/README.md.
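To make the abstract's temperature-based calibration concrete, here is a hedged PyTorch sketch of an iterative targeted attack whose only change from the standard cross-entropy attack is dividing the logits by a temperature T before the loss; `model`, the input conventions, and the hyperparameters are illustrative assumptions, and the paper's margin- and angle-based variants as well as its transfer-enhancing components are omitted.

```python
import torch
import torch.nn.functional as F

def targeted_attack_with_temperature(model, x, target, T=5.0,
                                     eps=16/255, alpha=2/255, steps=300):
    """Targeted I-FGSM with temperature-scaled CE; x in [0,1], target a LongTensor."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        logits = model(x + delta)
        loss = F.cross_entropy(logits / T, target)   # temperature-scaled CE
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()       # descend: push toward target
            delta.clamp_(-eps, eps)                  # stay inside L-inf ball
            delta.add_(x).clamp_(0, 1).sub_(x)       # keep x + delta a valid image
        delta.grad.zero_()
    return (x + delta).detach()
```

In practice one would call `model.eval()` first and fold any dataset normalization into `model`; the point of the sketch is only that scaling logits down by T keeps the softmax unsaturated, so the logit margin keeps receiving gradient signal late in the attack.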
Reject
In this paper, the authors propose a novel method to improve the transferability of targeted adversarial attacks by enlarging the margin between the targeted logit and the non-target logits. Experiments on ImageNet with different methods demonstrate the effectiveness of the method. However, as the reviewers point out, there is a high overlap between the paper and existing works, which significantly undermines the novelty of the paper. The authors are expected to clarify the novelty and provide more comprehensive evaluations.
train
[ "04OYiCm6jg", "VacpnSA35Nc", "sT14tZIeoddW", "jNXvtEGBFMIS", "sIAr738LGW9", "OyP0Pw8xB5", "IfQv4z1bM6K", "gLRl60iGOpe", "7eZgZ-rivOg", "eqDmaQ47r91", "IDrIEbjfT7Y", "bNMs61pCbvC", "JcXe9C2n82K" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their response and have checked the revised version. I agree with Reviewer WQZv that the change of the current version is falling into a major revision. In particular, I would like to highlight the high overlap between the previous submitted manuscript and the existing work. Therefore, I remain my previous rating and still vote for reject.", " Thanks for your comments and constructive feedback. We have uploaded a revision to address the concerns. The notable changes are below.\n\n1. We revised the introduction section to clarify the distinction between the Logit [30] and moved the logit margin figure (Fig. 1) to the introduction to better illustrate the motivation and contribution of this study (Section 1).\n\n2. We thoroughly rewrote the related work section to address the similarity issue pointed out by the reviewer qX4X (Section 2).\n\n3. We added the experiments on the varied targets and T=10/20 in the combining logit calibrations. Besides, the suggestion for achieving a better-targeted attack is added (Section 4).\n\n4. To better present the tables, we reported the average targeted transfer success rate with three digits at most for Tables 1, 2, & 3 instead of the average number of successfully attacked samples with four digits (Section 4).\n\n5. We carefully polished the manuscript and corrected some typos and grammar mistakes.\n\n6. More experiment results during the rebuttal period are added into the supplementary.\n", " We thank the reviewers for their positive comments and constructive feedback. We have uploaded a revised manuscript based on the reviewers’ feedback and have highlighted changes from the original submission in blue. We summarize the notable changes below.\n\n1. We revised the introduction section to clarify the distinction between the Logit [30] and moved the logit margin figure (Fig. 1) to the introduction to better illustrate the motivation and contribution of this study (Section 1).\n\n2. We thoroughly rewrote the related work section to address the similarity issue pointed out by the reviewer qX4X (Section 2).\n\n3. We added the experiments on the varied targets and T=10/20 in the combining logit calibrations. Besides, the suggestion for achieving a better-targeted attack is added (Section 4).\n\n4. To better present the tables, we reported the average targeted transfer success rate with three digits at most for Tables 1, 2, & 3 instead of the average number of successfully attacked samples with four digits (Section 4).\n\n5. We carefully polished the manuscript and corrected some typos and grammar mistakes.\n\n6. More experiment results during the rebuttal period are added into the supplementary.", " Thanks for providing the response. I am worried all of these changes are falling more into a major revision and would not be within the limit of the conference. ", " >**Exp 1:** Experiments on another dataset (e.g., CIFAR-10, MNIST, SVHN)\n>\n>**Response 1:** During the rebuttal, we conducted the experiments on the CIFAR-10 dataset under the untargeted attack setting based on the code provided by [a]. The ResNet-18 is used as the white-box model for crafting the perturbation by training with the I-FGSM for 20 iterations. The DenseNet, GoogLeNet and SENet18 are black-box models. Table 1 reported the fooling rate of attacking the 10,000 images in the CIFAR-10 testing set. \n>\n>From Table 1, we can find that the fooling rate continually increases along with the T in the white-box attack. 
In transfer black-box attacks, the best fooling rates are obtained at T=5 or T=10, and the fooling rate decreases when T increases further. These results also validate the effectiveness of logit calibration in non-targeted attacks on a small dataset.\n>\n>[a] Enhancing Adversarial Example Transferability with an Intermediate Level Attack, *ICCV 2019*.\n\nTable 1: The transfer untargeted fooling rate of training with ResNet-18 and testing with DenseNet-121, GoogLeNet and SENet-18 on CIFAR-10.\n| | ResNet-18*| DenseNet-121 | GoogLeNet | SENet-18 |\n| - | :-: | :-: | :-: |:-: |\n|T=0.5 |89.77 |50.23 |37.43 |51.04 |\n|T=1 |91.61 |50.78 |37.30 |51.20 |\n|T=2 |91.39 |51.14 |37.60 |51.65 |\n|T=5 |92.01 |55.56 |41.77 |55.74 |\n|T=10 |94.04 |54.76 |42.41 |55.10 |\n|T=20 |94.20 |53.33 |41.31 |54.11 |\n\n>**Exp 2:** A real-world attack on the Google Cloud Vision API.\n>\n>**Response 2:** We randomly select 100 images and compute the attacking performance of the ensemble of four CNNs using the same evaluation protocol in [30]. The results are as follows. We find that the results of the Logit and CE (T=5) are very similar, but the Margin-based calibration performs worse than Logit and CE (T=5).\n\nTable 2: Non-targeted and targeted transfer success rates (%) on Google Cloud Vision.\n| | Logit | CE (T=5) | Margin |\n| - | :-: | :-: | :-: |\n| Targeted | 16| 15 | 12 | \n| Non-targeted| 51 | 53 | 42 |\n\n>**Exp 3:** The targeted success rates for transfer with varied targets.\n>\n>**Response 3:** The targeted transfer success rate with varied targets is reported in Table 3, and we have the following findings. **(1)** All three types of logit calibration methods improve the targeted transfer success rate over the original CE. The angle-based calibration has the best performance, but we notice that the margin-based calibration does not work well in this setting. **(2)** The Temperature-based (T=5, 10) and the Angle-based calibrations outperform the Logit loss by a large margin, especially the Angle-based calibration.\n\nTable 3: Targeted transfer success rate (%) when varying the target from the high-ranked class to low. (Average of 5 runs)\n| | 2nd | 10th | 100th | 200th | 500th | 800th | 1000th |\n| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |\n|Logit | 83.7 | 83.2 | 77.3 | 74.5 | 71.5 | 64.9 | 52.4 |\n|CE | 77.4 | 58.6 | 34.0 | 26.9 | 23.7 | 16.7 | 7.0 |\n|CE/5 | 91.3 | 88.7 | 80.7 | 77.1 | 75.8 | 70.1 | 58.8 |\n|CE/10 | 89.0 | 87.8 | 82.8 | 81.0 | 79.2 | 73.5 | 62.5 |\n|Margin| 87.4 | 81.7 | 67.4 | 61.3 | 51.6 | 43.1 | 23.0 |\n|Angle | 92.4 | 89.1 | 82.2 | 80.3 | 79.2 | 76.1 | 66.3 |\n", " Thanks for your comments and valuable suggestions for the presentation. We would like to address your concerns in the following aspects.\n\n>**Comment 1:** Clarify the distinction from [30] in the introduction. \n>\n>**Response 1:** Thanks for your suggestion. We will rewrite this part in the introduction to better clarify the contributions of this study.\n\n>**Comment 2:** To have a better presentation of the tables.\n>\n>**Response 2:** We agree that a better presentation of the tables is needed. Currently, we use tables instead of line graphs that capture progress through iterations mainly because the results of the different calibration methods are very similar and their lines would largely overlap. 
In the revision, we will report the average targeted transfer success rate with 3 digits at most for Tables 1 & 3 instead of the average number of successfully attacked samples with 4 digits, and replace Table 2 with line graphs. \n\n>**Comment 3:** Why not introduce a single best recipe and present everything else as ablations? \n>\n>**Response 3:** The main reasons are: **(1)** The primary goal is solving the saturation issue in the CE loss for learning better transferable targeted adversarial attacks. Therefore, we first evaluate the effectiveness of the different logit calibrations. **(2)** Since the logit calibration works, we then test their mutual effects by combining them jointly. However, we find that the optimal combination is different for different models, and there isn't a universal recipe for them. Consequently, we didn't introduce a single best recipe and present the rest as ablations.\n>\n>On the other hand, based on the results in the manuscript and new results in the rebuttal, we might suggest using (T=5 + Margin) or (T=5 + Angle) for CNNs with more layers and the single Margin-based calibration for CNNs with fewer layers. \n\n>**Comment 4:** Why $T=5$ + Margin or Angle in Table 3? \n>\n>**Response 4:** We currently use the same $T=5$ mainly based on the result of ResNet-50, instead of the optimal $T$ for each model. During the rebuttal, we tested the performance of $T=10,20$ + (Margin or Angle). The results are reported in Table 1. We find that $T$ has a marginal influence in the combination of "$T$ + Margin", while largely increasing the performance of "$T$ + Angle" for VGG16. \n\n**Table 1.** The comparison of combining logit calibrations.\n\n(1) Surrogate model: **ResNet-50**\n| | Dense121 | VGG16 | Inc-v3 |\n| - | :-: | :-: | :-: |\n|T=5 + Margin |338.2/698.4/772 |239.6/590/655.4 |33.4/96/111 |\n|T=5 + Angle |345.2/742.6/823.8 |256.2/664.8/721.6 |35.8/104.6/131.4 |\n|T=10 + Margin |326.8/694.6/772.8 |227.8/593.6/663.2 |129.4/96.8/114.6 |\n|T=10 + Angle |329.6/697.6/790.6 |244.2/590.2/689.4 |33.6/99.8/128.6 |\n|T=20 + Margin |330/691.6/762.4 |230.8/584.4/658.2 |31.6/95/117.8 |\n|T=20 + Angle |342.2/686.2/764.6 |247.4/587/666.2 |34.4/97.4/126.8 |\n|Margin+Angle |344/708.4/781.4 |242.6/601.8/673.8 |35/103.6/125.8 |\n\n(2) Surrogate model: **Dense121**\n| | ResNet50 | VGG-16 | Inc-v3 |\n| - | :-: | :-: | :-: |\n|T=5 + Margin |192.6/442.6/477.8 |141.2/377.4/408.4 |25.4/74.8/93.6 |\n|T=5 + Angle |202.6/526.6/619.2 |158.2/450.2/536.4 |23.4/92/127.2 |\n|T=10 + Margin|183.2/441.4/491.2 |136.6/369.4/416.4 |24.4/82.8/91.8 |\n|T=10 + Angle |193.8/472/561.2 |148.2/400.6/470.2 |25.2/82.8/109.8|\n|T=20 + Margin|191/433.8/485.4 |138.8/366.6/414.2 |23.6/78.2/95.4 |\n|T=20 + Angle |199.6/443.8/508.6 |155.4/383.8/437 |24.6/82.4/95.2 |\n|Margin+Angle |198.8/465.6/527.4 |152.2/392.8/445.4 |27/82/99 |\n\n(3) Surrogate model: **VGG16**\n| | ResNet50 | Dense121 | Inc-v3 |\n| - | :-: | :-: | :-: |\n|T=5 + Margin |34.8/101.8/114 |37.2/123.6/145.6 | 3/10.8/13 |\n|T=5 + Angle |21.6/25/23.4 |23.6/25.6/23.2 | 1.6/1.4/1.6 |\n|T=10 + Margin |31.6/107.2/117.4 |34.4/129.4/149.6 | 2.2/10.4/14.2|\n|T=10 + Angle |34/62/50.8 |34.8/75/70.2 | 2.4/6.4/5.8 |\n|T=20 + Margin |34.6/100.8/118.2 |33.6/120.2/148.6 | 2.8/12.4/14.4|\n|T=20 + Angle |32.4/96.6/101 |38.8/119.4/133.2 | 2.8/10.4/12 |\n|Margin+Angle |33/98.4/111.2 |35.4/126.4/146 | 2.6/12/14 |\n\n(4) Surrogate model: **Inc-v3**\n| | Dense121 | VGG-16 | Inc-v3 |\n| - | :-: | :-: | :-: |\n|T=5 + Margin |4.8/14.4/16 |6.2/21.2/28.6 | 5/17.2/28.4 |\n|T=5 + Angle |5.2/16/20.4 |5.8/19.6/31.2 | 5.4/16.6/24.6|\n|T=10 + Margin|5.4/14/19.2 |4.6/18.8/30.4 | 3.2/14.8/23 |\n|T=10 + Angle |5.8/13.4/18.6 |6/19.6/32.2 | 4.6/16.4/25.6|\n|T=20 + Margin|6.4/12.2/19.4 |5/19/29 | 4.8/16.4/26.8|\n|T=20 + Angle |6.4/16.2/20.4 |5.6/19.6/35.2 | 4.8/17/28.8 |\n|Margin+Angle |6.4/14/21 |5.6/17/31.2 | 5.4/15.4/26.2|\n\n>**Comment 5:** Limitation of this study.\n>\n>**Response 5:** Thanks for your valuable suggestions. We will add this information in the revision.", " Thanks for your comments. We would like to address your concerns in the following aspects.\n\n>**Comment 1:** The influence of different $T$ in CE.\n>\n>**Response 1:** \n>\n>(1) ***A large $T$ for VGG-16 and Inc-V3:*** In the response to Reviewer SQcN, we conjecture that $T$ is related to the model depth, in which a large $T$ is preferred for CNN models with few layers. Compared with ResNet-50 and DenseNet-121, VGG-16 and Inc-V3 have fewer layers, and better performance is obtained using a large $T$. \n>\n>(2) ***The results of continually increasing $T$:*** In the supplementary, we analyzed the relation between the Logit loss in [30] and the CE calibrated by a large $T$. The gradient of the Logit loss is $\\frac{\\partial L_{Logit}}{\\partial \\phi(\\hat{x})} = - W_t$, and the gradient of CE with a large $T$ is $\\frac{\\partial L_{CE}^T}{\\partial \\phi(\\hat{x})} \\approx - \\frac{W_t}{T}$. Since the I-FGSM only considers the sign of the gradient while neglecting its magnitude, the optimization of the CE calibrated by a large $T$ is nearly equivalent to the Logit loss. In Table 1 and Figure 1 in the supplementary, we reported the comparison results of $T=50, 100$, and the Logit loss. Their results are very similar to each other, which verifies our analysis of the relation between the Logit loss and using a large $T$ in CE.\n>\n>Therefore, the performance of the targeted attack will get saturated when $T$ continually increases, since it is nearly equivalent to the Logit loss.\n\n>**Comment 2:** The relation between the three calibration methods.\n>\n>**Response 2:** In this study, we investigate temperature-based, margin-based, and angle-based logit calibrations to validate the main hypothesis of our study that “enlarging the logit margins can increase the targeted transferability.” \n>\n>The temperature-based calibration is the simplest one, which only calibrates the logits by a constant value of $T$. However, the optimal $T$ is different for different models, as shown in Tables 1 & 2. Therefore, we investigate the margin-based and angle-based calibrations to deal with this hyper-parameter issue. The margin-based method adaptively computes the “T” based on the Top-2 logits at each iteration instead of using a constant value. On another aspect, since $z_i= W_i*x + b$ and the L2 norm of $W_i$ is different for each class $i$, we further perform the calibration by normalizing the classifier weight $W_i$ of each class $i$ and the feature $x$ to unit length by L2-normalization. This calibration actually computes the cosine between $W_i$ and $x$ without considering their norms. Therefore, we term it angle-based calibration.\n\n>**Comment 3:** The option for the best targeted attack.\n>\n>**Response 3:** Since the best combinations for different models are different, we currently cannot give a universal recipe for each model. 
Based on the results in the manuscript and new results in the rebuttal, we might suggest using (T=5 + Margin) or (T=5 + Angle) for CNNs with more layers and the single Margin-based calibration for CNNs with fewer layers. \n\n>**Comment 4:** Typos and Grammar mistakes.\n>\n>**Response 4:** We will carefully polish the manuscript.", " Thanks for your comments. First, we will revise all tables to have a better presentation in the revision. Then, we would like to address your concerns about the interpretation of Table 1. \n\n>**Comment:** Potential interpretation of the results in Table 1.\n\n>**Response:** We argue the main reason for better results obtained by Margin-based calibration for the VGG-16 is mainly due to the influence of model depth. For the CNN models with fewer layers, a large normalization factor “T” is preferred to achieve higher targeted transferability. In our Margin-based calibration, the denominator “T” (logit margin between the first and second logits) will keep increasing along with the optimization iterations and thus leads to better performance. \n>\n>To further check the influence of depth, we leverage the ResNet-18 with fewer layers as the surrogate model and reported the results in the following Table 1. We also find that a large T and the margin-based calibration are preferred. \n\n>**Table 1.** The average number (#) of successfully attacked targeted samples with the ResNet-18 as the surrogate model.\n| | Inc-v3 | ResNet-50 | Dense-121 | VGG-16 |\n| - | :-: | :-: | :-: |:-: |\n|CE |21/30.4/29.6 |191.8/239.8/259.8|185.8/239.6/246.2|158.6/192.6/190.4|\n|CE/5 |39.2/108/119 |278.2/606.8/636.2|271.8/574.8/615.6|237.2/530/565.8|\n|CE/10 |36.2/111.6/132.4|259.2/597.4/668.6|258.6/571.8/642.2|222.4/530/596.6|\n|CE/20 |38.8/113.8/129.8|251.8/578/641.8 |248.2/543.2/607 |211.4/497.4/571.2|\n|Margin |41/113/130.8 |273/601.4/653.2 |273.2/572.6/629 |234.2/535.4/586.2|\n|Angle |36.6/81.8/83.6 |271.4/514.8/542.6|280.6/527.6/557.4|239/449.4/462|\n|Logits |37.2/100.2/122 |247.8/556.2/606.8|243.2/536.4/585. |212.4/494.2/548.6|", " Thanks for your feedback. We would like to address your concerns in the following two aspects.\n\n>**Comment 1:** The related work section.\n>\n>**Response 1:** The structure of our current related work is mainly based on the following considerations. (1) The [1] highly inspired this study, and we followed the academic writing skills of [1] to some extent. (2) The I-FGSM, MI-FGSM, TI-FGSM, and DI-FGSM have been used as the baseline in the experiments. Besides, the optimization of only using the Sign of gradient in the I-FGSM is essential for our analysis of the relation between Temperature-calibration with large T and the Logits loss function. (3) The Po-Trip and the Logits are two main comparison methods, and then we also introduce them in detail.\n> \n>We will rewrite the related work section in the revision to avoid this similarity issue.\n\n----\n\n>**Comment 2:** Marginal Improvement and Contribution.\n>\n>**Response 2:** We totally disagree with your comments that the contribution of this work is limited by only achieving marginal experimental gains. \n>\n>* First, we would like to **recap the primary goal of this study**, which mainly aims to analyze why the widely used CE loss function can not generate adversarial samples with higher targeted transferability. However, previous studies only reveal this issue due to the vanished gradient issue without further analysis. 
In this study, we closely analyze the CE loss and find that the logit margin between the targeted and non-targeted classes quickly gets saturated during the optimization process, hindering the transferability of CE.\n>\n>* Second, **how to solve this issue**? Based on our analysis, we then explored three different logit calibration methods to deal with the saturation issue of the logit margin. The experimental results validate our findings for the problem. Besides, in the supplementary, we further analyze that the Logit loss in [1] is nearly equivalent to the Temperature-based calibration with a large T. \n>\n>* Third, **what is not the goal**? We do not intend to beat the state-of-the-art by a large margin. Although the logit calibrations only slightly outperform the Logit loss in most cases, they can significantly increase the performance of the original CE. Besides, we also note the results of combined logit calibrations in Table 3, which can outperform the Logit loss by more than 10% when using ResNet50 and Dense121 as surrogate models. The additional experiment on the difficult transfer with varied targets suggested by Reviewer WQZv further shows the effectiveness of logit calibration in the targeted attack.\n> \n>Based on the above explanation, we believe our investigation in this study can provide valuable insight for future researchers on using logit calibration for both attack and defense.\n", " This work proposed three different calibration methods, temperature-based, margin-based and angle-based temperature scaling, to enlarge the margin between the targeted logit and non-target logits to improve the transferability of targeted adversarial attacks. This work is highly inspired by the work [1] and performs experiments to show the proposed methods are better than other existing methods.\n\n\n[1] \"On Success and Simplicity: A Second Look at Transferable Targeted Attacks\".\nZhengyu Zhao, Zhuoran Liu, Martha Larson. NeurIPS 2021. First of all, after comparing the related work in [1] and this work, there is a huge amount of overlap in the equations, or rewriting of the sentences. This significantly damages the overall quality of the work.\n\nSecond, the improvement of the proposed method over [1] is marginal compared to the improvement of [1] over cross-entropy loss.\n\nThird, the contribution of this work is limited. Although the authors proposed different temperature-scaling based methods to improve the transferability of targeted attacks, which only achieve limited experimental gains, this work did not provide extra useful insight to this research area.\n\n[1] \"On Success and Simplicity: A Second Look at Transferable Targeted Attacks\". Zhengyu Zhao, Zhuoran Liu, Martha Larson. NeurIPS 2021. In general, it leaves me a poor impression when I realize the great similarity between the Related Works in this work and that in [1]. Although the authors try to reframe the sentences, it's still very unprofessional to structure related works with such a strong similarity to another existing work. See above.", " The authors propose a novel and effective method to improve the transferability of adversarial attacks. They increase the logit margins between targeted and non-targeted classes, which can quickly become saturated in cross-entropy loss. Strengths:\n1. The findings are very interesting and the motivation is well-explained.\n2. Comprehensive experiments are presented and the combined logit calibrations have significantly better performance than previous methods.\n\nWeaknesses:\n1. The proposed method has various settings and hyper-parameters. Compared to the simple Logit method, the proposed method needs more effort for tuning or needs to combine logit calibrations to achieve better performance. This can make the method less attractive to the community.\n2. There is no theoretical analysis to support the empirical findings.\n3. The presentation of the results needs to be improved. All tables contain tons of numbers, which makes it hard for the reader to get the point in a short time. Could the authors provide any interpretation of the results in Table 1? For example, why do Margin and Angle have better performance when the surrogate models are VGG16 and Inc-v3, but lower performance for ResNet50 and Dense121? N/A", " This paper designs a new logit calibration method which is inspired by knowledge distillation. The method uses logit calibrations in the CE loss function so that it can improve the targeted adversarial attack with higher transferability than other attack methods with cross-entropy loss. Besides the primary temperature-based method, this paper designs margin-based and angle-based methods to handle different surrogate models and different norms. The strengths of this paper are:\n\nThis paper designs a new cross-entropy (CE) loss function to improve the targeted adversarial attack, which performs better than Logit (NIPS21).\n\nBesides the temperature-based method, this paper designs margin-based and angle-based methods to handle different surrogate models and norms.\n\nThe weaknesses of this paper are:\n\nThis paper follows `Zhengyu Zhao, Zhuoran Liu, and Martha Larson. On success and simplicity: A second look at transferable targeted attacks. NeurIPS, 34, 2021.' in academic writing style and, specifically, in code. However, whereas Zhao et al. designed the Logit loss and used it to generate universal adversarial perturbations, this paper's method does not have any additional functions such as UAP. \n\nMoreover, this paper's method only exceeds the Logit loss by around 10%, which is not a significant improvement. Therefore, this paper lacks novelty.\n\nEquation 14 seems to have some mistakes. On the left of the equation, should $z_{i}$ be $\\tilde{z}_{i}$?\n\nA few grammar problems in this paper should be fixed. For example, in line 275, it should be \"be similar to\"; in line 23, it should be \"Following many approaches\"; in line 27, it should be \"it is vital to explore.\"\n Firstly, for the influence of different $T$ in CE, this paper claims that when the surrogate model is VGG16 or Inc-V3, a larger $T$ obtains better transferability. However, I am curious whether there is a limit on $T$ for VGG16 and IncV3. For example, after the ASR reaches 600, the performance of the targeted attack will decrease when $T$ continually increases. Also, although this method is based on CE, it would be better if the authors designed a new name to describe it. The relationship between temperature-based, margin-based, and angle-based logit calibration is unclear. This paper claims that the margin-based one is designed to handle different surrogate models, and the angle-based one is designed to address the influence of various norms. However, in the experiments, the performance of T=5, T=10, Margin, and Angle does not prove the relation between them. 
This paper does not evaluate which method is the best for the targeted attack, or in other words, which option should I choose if I need to achieve the highest attack success rate?", " The paper targets improving the transferability of adversarial attacks using logit calibration. Despite the recent success in untargeted black-box attacks, the targeted transferability of adversarial attacks remains challenging. The paper takes a closer look at the vanishing gradient issue in the CE loss function, which is commonly used to learn transferable adversarial samples, and suggests that the logit margin between the targeted and non-targeted classes quickly gets saturated during the optimization process. So, to improve transferability, they aim at enlarging logit margins, which consequently reduces saturation and enables longer optimization with more iterations. The paper investigates three different types of logit calibrations, including temperature-based, angle-based and margin-based, inspired by previous studies and techniques. Experiments are conducted on the ImageNet dataset with different models including ResNet50, DenseNet-121, VGG-16 and Inception-v3. Results are compared to SOTA methods including Po+Trip, Logit, and TTP. Strengths:\n- Quality: The writing quality is very good, very easy to follow; there are areas that can be improved but nothing major.\n- Clarity: Very clear. Easy to understand the motivation and the thought process behind the different method components. \n- Significance and novelty: Novelty is a bit limited and built up mostly on top of previous methods, but it is also not a weakness because this work attempts to solve an interesting problem and the analysis and results are valuable. The results are somewhat important. Mostly inspired by Logit [30], future researchers might use the suggested logit calibration and increase the number of iterations for targeted attacks.\n\nWeaknesses:\n- The quality of results and their presentation could be improved significantly. (see below)\n- The distinction between [30] and this paper should be clarified, and the introduction on page 2 [lines 36-56] can benefit from a re-writing. (see below)\n - There is a large novelty overlap between [30] and the current method. It is proper to get the similarity and distinction discussed upfront. Specifically, the discussion around CE starting at line 36 is getting very confusing and blurry going through line 56. The vanishing gradient and the use of the Logit loss have been discussed and proposed in previous art, for example [30]. This gets discussed later at line 145+ and in the method; however, I feel the contribution of the current work can get discussed in a clearer way and upfront in the introduction. \n\nExperiments are limited and can get improved:\n- First, the organization of the results is not optimized or ideal. (1) Following Tables 1, 2 and 3 is very hard, and replacing them with line graphs that capture progress through iterations would be very beneficial. (2) The overall collection of Tables 1, 2 and 3 seems to be more exploratory/ablation tables rather than the main results. The main message that I read from these tables is that T=10, 20 and a combination of Margin + Angle or T + Angle can result in the best outcome. So, why not introduce a single best recipe and present everything else as ablations? (3) Also, I would suggest sticking with conventional methods such as heat-maps to summarize heavy tables, as is the norm in the adversarial attack literature. 
[21] have good examples of result presentations.\n\n- One related question: as Table 2 suggests, the best outcome comes from T=10 or T=20, so why does Table 3 analyze the effect of combining logits as T=5 + Margin or Angle? If any underlying study suggests this, results should be provided. \n\nThe results could be strengthened by:\n- Providing experiments on another dataset (e.g., CIFAR-10, MNIST, SVHN). Since the proposed method works well on ImageNet, this is only a minor concern.\n- Incorporating a study of a real-world attack, for example on the Google Cloud Vision API.\n- Providing targeted success rates for transfer with varied targets.\n I cannot find any specific discussion around the potential negative social impact of this work. Also, the limitations of the method were not addressed in the paper.\n\nTo improve this part, the benefits of adversarial attack research could be discussed. Potentially this can motivate the AI community to design stronger defenses against transferable attacks, and in the long run such results can be directly used for social-good applications, such as protecting privacy. On the contrary, there are applications that can benefit from transferable attacks in a harmful manner to damage the outcome of any AI system; e.g., imagine a scenario where someone uses such attacks to interfere with the outcome of a medical AI device. \n\nI also suggest that the authors discuss the limitations of the work, failure cases and processing time.\n\nOverall, I enjoyed reviewing this paper and look forward to reading the authors' responses. \n" ]
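For readers following the debate above, the three calibrations differ only in how the logits are rescaled before the CE loss. Below is a hedged sketch in PyTorch, paraphrased from the authors' responses rather than taken from the paper's equations; in particular, detaching the margin denominator and the exact clamping are our assumptions.

```python
import torch
import torch.nn.functional as F

def temperature_ce(logits, target, T=5.0):
    # Constant downscaling by a temperature factor T.
    return F.cross_entropy(logits / T, target)

def margin_ce(logits, target):
    # Adaptive "T": the gap between the top-2 logits of each sample,
    # recomputed at every attack iteration instead of a fixed constant.
    top2 = logits.topk(2, dim=1).values
    gap = (top2[:, 0] - top2[:, 1]).clamp_min(1e-6).detach()
    return F.cross_entropy(logits / gap.unsqueeze(1), target)

def angle_ce(features, classifier_weight, target):
    # L2-normalize the feature x and each class weight W_i, so each
    # rescaled logit becomes the cosine between x and W_i (norms removed).
    cos_logits = F.normalize(features, dim=1) @ F.normalize(classifier_weight, dim=1).t()
    return F.cross_entropy(cos_logits, target)
```

As the rebuttal notes, the margin denominator keeps growing as the attack pushes the target logit up, so the rescaling strengthens over the iterations rather than staying fixed.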
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 4 ]
[ "7eZgZ-rivOg", "jNXvtEGBFMIS", "nips_2022_9U4gLR_lRP", "sIAr738LGW9", "OyP0Pw8xB5", "JcXe9C2n82K", "bNMs61pCbvC", "IDrIEbjfT7Y", "eqDmaQ47r91", "nips_2022_9U4gLR_lRP", "nips_2022_9U4gLR_lRP", "nips_2022_9U4gLR_lRP", "nips_2022_9U4gLR_lRP" ]
nips_2022_2EQzEE5seF
Adversarially Perturbed Batch Normalization: A Simple Way to Improve Image Recognition
Recently, it has been shown that adversarial training (AT) by injecting adversarial samples can improve the quality of recognition. However, existing AT methods suffer from performance degradation on benign samples, leading to a gap between robustness and generalization. We argue that this gap is caused by inaccurate estimation in the Batch Normalization (BN) layer, due to the distributional discrepancy between the training and test sets. To bridge this gap, this paper identifies adversarial robustness against the noise inherent in BN statistics. In particular, we propose a novel strategy that adversarially perturbs the BN layer, termed APART. APART leverages gradients to shift the BN statistics and helps models resist the shifted statistics to enhance robustness to noise. Then, we introduce APART into a new paradigm of AT called model-based AT, which strengthens models' tolerance to noise in BN. Experiments indicate that APART can improve model generalization, leading to significant improvements in accuracy on benchmarks like CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet.
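A minimal sketch of what "leveraging gradients to shift BN statistics" can look like for a single layer, written from the abstract alone. The helper names, the `shift` magnitude, and the sign-gradient form of the perturbation are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def bn_normalize(x, mean, var, gamma, beta, eps=1e-5):
    """Manual batch norm so gradients can flow to the statistics themselves."""
    x_hat = (x - mean[None, :, None, None]) / torch.sqrt(var[None, :, None, None] + eps)
    return gamma[None, :, None, None] * x_hat + beta[None, :, None, None]

def loss_under_shifted_stats(feats, y, gamma, beta, head, shift=0.1):
    """Shift one BN layer's batch statistics adversarially, then re-evaluate.

    feats: (B, C, H, W) features entering the BN layer; head maps the
    normalized, flattened features to class logits.
    """
    mean = feats.mean(dim=(0, 2, 3)).detach().requires_grad_(True)
    var = feats.var(dim=(0, 2, 3), unbiased=False).detach().requires_grad_(True)
    loss = F.cross_entropy(head(bn_normalize(feats, mean, var, gamma, beta).flatten(1)), y)
    g_mean, g_var = torch.autograd.grad(loss, (mean, var), retain_graph=True)
    adv_mean = (mean + shift * g_mean.sign()).detach()            # shifted statistics
    adv_var = (var + shift * g_var.sign()).clamp_min(1e-5).detach()
    # Training against this loss teaches the model to resist the worst-case shift.
    return F.cross_entropy(head(bn_normalize(feats, adv_mean, adv_var, gamma, beta).flatten(1)), y)
```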
Reject
The paper presents a new way of bridging the gap between models’ generalization and robustness, by combining gradients computed on unperturbed BN statistics with gradients computed on perturbed statistics. The main goal is to improve standard generalization, but the authors should clarify their definition of "robustness", as it seems to confuse all reviewers (e.g., prompting questions about adversarial attacks). Moreover, the method itself is very simple, and the idea of using adversarial perturbation to stabilize model training isn't new (AdvProp, etc.). Reviewers are further concerned about the lack of large-scale experiments or experiments on state-of-the-art architectures. Besides, there are no comparisons with some of the competing methods such as AdvProp. Therefore, I find insufficient grounds to recommend acceptance of this paper in its current shape.
train
[ "RjLNTOrnS7", "kUAUmsa75zZ", "D16-Vn2X-qK", "yI3DIcDWGgb", "XUlT8cWDlgO", "_iUK-AEkbF", "mfNvDcYwESe", "z5zom5iC5d1", "GQufvbUajNF", "BLP-ko1mdm3", "cQ48IUAjIlj", "FsXHRovW6BS" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the rebuttal, however, the responses only address parts of my concerns. I feel grateful that the authors have added the robustness experiments on ImageNet-C, while many experiments (e.g., Mixup on ImageNet, more backbones on ImageNet) are still missing for now. In my opinion, experiments for this paper are not sufficient for publishing. I will keep my rating. ", " We thank you for your review. Below, we address your comments. If you find our response adequate, we would appreciate it if you increase your score.\n\n* **The reviewer thinks experiments on ImageNet-C or Stylized ImageNet are needed to show the advantages of robustness.**\n\n\tWe report the results of ImageNet-C as follows. For comparison, the table includes the results of AdvBN [15]. We use the [official code](https://github.com/azshue/AdvBN) to do new experiments using AdvBN, and follow the original experimental setting in its original paper [15]. Also, the table includes the results of AugMax [16] (NeurIPS’21) for further comparison. The results of APART on ImageNet-C are still significant. Note that APART does not use data augmentations. While AugMax is a composition of multiple augmentations. \n\t\n\n **Table 1: Results on ImageNet and ImageNet-C.**\n\n | **ResNet-18** | **ImageNet Top-1** $\\uparrow$ | **ImageNet-C mCE $\\downarrow$** |\n |---------------------------------|-------------------------------|----------------------------------|\n | Pretrained | 69.76 | 84.54 |\n | AdvBN [15] (Finetune 20 Epochs) | 69.81 | 84.37 |\n | APART (Finetune 20 Epochs) | 70.30 | 84.14 |\n | AugMax-DuBIN [16] | 67.62 | 82.56 |\n | APART (105 Epochs) | 70.86 | 82.81 |\n | APART (210 Epochs) | 72.14 | 81.91 |\n | **ResNet-34** | **ImageNet Top-1** $\\uparrow$ | **ImageNet-C mCE** $\\downarrow$ |\n | Standard (105 Epochs) | 73.71 | 76.89 |\n | APART (105 Epochs) | 74.58 | 75.17 |\n\n* **The comparison with other methods is missing.**\n\n\tThanks for your reminder. These normalization methods [18-20] change models' architectures indeed and are applied in the tasks with mini-batch due to the demanding computations. While APART is a training method without modifying the architectures, and focuses on the training with normal batches of samples. Lack of the comparison between them does not reduce the main contributions. \n\t\n\tComparison with [15] is shown above. In terms of AdvProp [11], we will provide the comparison in the revised version. On the other hand, there is a potential combination of APART and AdvProp since they attack the different components (samples and models) in adversarial training without explicit conflicts between these two methods. Therefore, they are not competitors.\n\t\n* **Experiments of Mixup on ImageNet and other backbones on (Tiny-)ImageNet are missing.**\n\n\tDue to the time limit during the rebuttal, we have run some quick experiments on CIFAR datasets to evaluate APART's effects on other backbones. More experimental results on (Tiny-)ImageNet will be included in the revised version. Below is the CIFAR results on other backbones.\n\t\n\n **Table 2: Results on CIFAR datasets.**\n\n | **Model** | **VGG16** | **VGG19** | **DenseNet121** |\n |---------------------------|-------------|-------------|------------------|\n | CIFAR10 (Standard/APART) | 93.35/94.04 | 92.80/93.92 | 95.11/95.90 |\n | CIFAR100 (Standard/APART) | 70.67/74.30 | 71.15/72.33 | 78.52/81.60 |\n\n\tAs shown above, the gains on CIFAR are still significant despite the backbones. Besides, we provide the results of ResNet-34 in Table 1. 
The results of ResNet-34 corroborates that the deeper ResNet is still enjoying the accuracy gain brought by APART.\n\n\tWe think that ResNet is a representative architecture for evaluating the performance in our paper. Based on our new experiments on CIFAR, we summarize that our proposed method indeed helps improve the generalization of the model on different backbones with mixup. We continue evaluating different backbones on ImageNet, though it is very time-consuming. ", " * **No results on [11] or [15] are reported.**\n\n\tDue to the time limit during the rebuttal, we compare with AdvBN [15] currently. For AdvProp [11], we will provide the comparison in the revised version. On the other hand, there is a potential combination of APART and AdvProp since they attack the different components (samples and models) in adversarial training without explicit conflicts between these two methods. Therefore, they are not competitors.\n\n\tWe use the [official code](https://github.com/azshue/AdvBN) to do new experiments using AdvBN, and follow the original experimental setting in its original paper [15]. Besides, we fine-tune pretrained ResNet-18 by APART with the same epochs and learning rate. The performance improvements brought by APART are significant in the results shown as follows (including APART's results of training from the scratch). \n\n **Table 1: Results on ImageNet and ImageNet-C.**\n\n | **ResNet-18** | **ImageNet Top-1** $\\uparrow$ | **ImageNet-C mCE $\\downarrow$** |\n |---------------------------------|-------------------------------|----------------------------------|\n | Pretrained | 69.76 | 84.54 |\n | AdvBN [15] (Finetune 20 Epochs) | 69.81 | 84.37 |\n | APART (Finetune 20 Epochs) | 70.30 | 84.14 |\n | AugMax-DuBIN [16] | 67.62 | 82.56 |\n | APART (105 Epochs) | 70.86 | 82.81 |\n | APART (210 Epochs) | 72.14 | 81.91 |\n | **ResNet-34** | **ImageNet Top-1** $\\uparrow$ | **ImageNet-C mCE** $\\downarrow$ |\n | Standard (105 Epochs) | 73.71 | 76.89 |\n | APART (105 Epochs) | 74.58 | 75.17 |", " Thank you for your review. Below we address your comments. If you find our response adequate, we would appreciate it if you increase your score.\n\n* **The proposed method is almost identical with AdvBN [15] (NeurIPS’21).**\n\n\tWe elaborate on the differences between AdvBN and APART in terms of backgrounds, motivations and implementations. \n\t\n\t* **Backgrounds**\n\n\t\tAdvBN investigates fine-tuning pretrained models to obtain the robustness against common corruptions and style changes in images. While APART aims at training models from the scratch to improve models' generalization on clean images. Also, such generalization improvement might lead to the robustness against the corruptions.\n\t\n\t* **Motivations**\n\t\n\t\tAdvBN attempts to generate worst-case feature perturbations during training. Then the model is trained to resist the generated feature perturbations, leading to better performance on unseen corrupted samples. While APART perturbs BN statistics from a numerical perspective instead of considering the unseen domains. Then APART helps models enhance their robustness against noise in BN statistics to obtain better generalization on clean samples. AdvBN and APART focus on completely different problems of training networks.\n\t\t\n\t* **Implementations**\n\t\n\t\t* AdvBN performs attacks in non-BN layers, e.g. the end of the 2nd convolutional stage of ResNet-50 (please refer to section 5.1 of [15]). While APART performs attacks in each BN layer. 
The possibly confusing similarity between them is caused by the normalization trick. Performing controllable attacks requires the normalization of internal features. So the sophisticated normalization trick of BN is adopted by AdvBN. Such normalization trick is also adopted by A-FAN [46], resulting in the similarity between AdvBN and A-FAN. While APART directly performs attacks on BN, which results in the confusing similarity between AdvBN and APART's perturbation formulas.\n\t\n\t\t* As described in Algorithm 1 of [15], AdvBN freezes the shallower subnetwork $g_{\\theta}^{1,l}$ to extract the stable features, and only fine-tunes the deeper subnetwork $g_{\\theta}^{l+1,L}$ . Thus, the feature perturbations of AdvBN essentially share the same paradigm with sample perturbations, where the major difference is that AdvBN attacks features (as special samples fed into internal layers) instead of samples fed into input layers. APART follows a different paradigm that attacks the model instead of samples. In detail, APART's attacks are performed in all BN layers through the whole network instead of the features in a specified single layer. Besides, APART is not a multi-layer version of AdvBN. The authors of AdvBN tried multiple AdvBN layers, but found that the perturbations at successive layers compound and destabilize the training process (please refer to \"Reply to Reviewer 4gjW\" of AdvBN's [OpenReview](https://openreview.net/forum?id=A-RON3lv-aR)). In contrast, APART is stable under its multi-layer attacks.\n\t\n\t\t* As discussed before, AdvBN aims to fine-tune pretrained models to obtain the robustness. On the other hand, employing AdvBN to train models from the scratch will contradict with its assumptions, since AdvBN performs attacks on the semantic features extracted by some pretrained models. Thus, AdvBN heavily depends on pretrained models. While APART is proposed to train models from the scratch, and the pretrained models can also be used in the training. For the next question about the experiments, we will report the results of comparing these two methods.", " Thank you for your review. We are glad that you liked our paper. Below, we address your concerns with the paper. If you find our response adequate, we would appreciate it if you increase your score.\n\n* **Scalability of ARAPT to large datasets and models is not clearly supported in the experiments.**\n\n\tAs reported in the paper, with the same 2x budget, APART is inferior to the standard training on ImageNet: APART improves the accuracy by 0.62% while scaling the epochs of standard training leads to the improvement of 1.01%. However, such significant improvement (over 1%) from scaling the epochs illustrates the underfitting of the model in this experiment. Indeed, APART focuses on improving generalization of models trained with sufficient epochs, and to some extent raises the accuracy of insufficiently trained models. The reported further experiments with 4x budget substantiate the effects of APART: standard method merely leads to 1.21% while APART leads to 1.90% in accuracy improvements. \n\n\tSuch phenomenon also exists in SAM [28]. As reported in Table 2 therein [28], in terms of shallower ResNet-50, SAM with 100 epochs is inferior to the standard training with 200 epochs though they share the same training budget. 
This phenomenon disappears in deeper networks since they fit the samples more easily.\n\n\tIntuitively, in the AT paradigm followed by APART and SAM, some gradients are used to perform attacks that inevitably slow the convergence of training. Therefore, sufficient epochs are needed.\n\n\tBesides, there is a link between networks' depths and BN's effects on training. Such link leads to varying effects of APART on different backbones. We conduct the experiment of ResNet-34 after submission, where APART with 2x budget improves the accuracy by 0.87%, more significant than 0.62% in terms of ResNet-18. We will include more experimental results in the revised version.\n\t\n* **It'd be good to include experiments on other architectures (e.g. EfficientNet), and see if the gains are significant.**\n\n\tDue to the time limit during the rebuttal, we have run some quick experiments on CIFAR datasets to evaluate APART's effects on other architectures. More results on EfficientNet and ImageNet will be included in the revised version. The gains on CIFAR are still significant despite the architectures. Below is the CIFAR results.\n\n **Table 2: Results on CIFAR datasets.**\n\n | **Model** | **VGG16** | **VGG19** | **DenseNet121** |\n |---------------------------|-------------|-------------|------------------|\n | CIFAR10 (Standard/APART) | 93.35/94.04 | 92.80/93.92 | 95.11/95.90 |\n | CIFAR100 (Standard/APART) | 70.67/74.30 | 71.15/72.33 | 78.52/81.60 |\n", " * **How is the introduced AT paradigm different from what prior works proposed, or what novel insights are provided?**\n \n The prior works [8, 28] focus on the trainable parameters of a model, i.e., the weights, and optimize them to find a minima with a flat loss landscape. In their methods, only the attacks and defense over the trainable parameters are considered. However, there exist non-trainable parameters in models, e.g. BN statistics (mean and variance), which require estimation instead of optimization. So we proposed the new AT paradigm defined by Eq 2. This paradigm allows the attacks and defense on both trainable and non-trainable parameters, and further enables the combination of APART and SAM [28].\n \n* **What are the issues with large-scale training?**\n \n As reported in the paper, with the same 2x budget, APART is inferior to the standard training on ImageNet: APART improves the accuracy by 0.62\\% while scaling the epochs of standard training leads to the improvement of 1.01%. However, such significant improvement (over 1%) from scaling the epochs illustrates the underfitting of the model in this experiment. Indeed, APART focuses on improving generalization of models trained with sufficient epochs, and to some extent raises the accuracy of insufficiently trained models. The reported further experiments with 4x budget substantiate the effects of APART: standard method merely leads to 1.21% while APART leads to 1.90% in accuracy improvements. \n \n Such phenomenon also exists in SAM [28]. As reported in Table 2 therein [28], in terms of shallower ResNet-50, SAM with 100 epochs is inferior to the standard training with 200 epochs though they share the same training budget. This phenomenon disappears in deeper networks since they fit the samples more easily.\n \n Intuitively, in the AT paradigm followed by APART and SAM, some gradients are used to perform attacks that inevitably slow the convergence of training. Therefore, sufficient epochs are needed.\n \n Besides, there is a link between networks' depths and BN's effects on training. 
Such link leads to varying effects of APART on different backbones. We conduct the experiment of ResNet-34 after submission, where APART with 2x budget improves the accuracy by 0.87%, more significant than 0.62% in terms of ResNet-18. We will include more experimental results in the revised version.\n \n* **Section 3.3 is somewhat confusing.**\n \n Sorry for the confusing formula. $\\mathcal{R}$ in L206 differs from the latter $\\mathcal{R}$ in Eq 7. The former is introduced to formulate the prior method [28] in our proposed AT paradigm, and the latter is exactly $\\mathcal{L}(x, y; \\theta, \\phi)$ with $\\theta = 0$ defined by Eq 4 in section 3.2. Generally, there are minor differences between APART and SAM [28] in terms of their objective functions $\\mathcal{R},\\mathcal{L}$. In the revised version, we will replace $R$ in L206 with $R_{sam}$ to make it clear. Meanwhile, we will replace $\\mathcal{L}$ in R205 with $\\mathcal{L}_{sam}$.", " Thank you for your detailed comments. As prior works [11, 15] did, this work focuses on improving models' performance in standard classification on benign samples, without considering their robustness against adversarial samples. Below, we address your only concerns with the paper. If you find our response adequate, we would appreciate it if you increase your score.\n\n* **How is the robustness to other typical adversarial attacks?**\n\n We suspect that the reviewers may have misunderstood our main concern. Indeed, robustness against adversarial attacks is an important problem in machine learning. But, it mainly belongs to the noise perturbation on the input image. In this paper, we mainly study the robustness of the network architecture, especially for the BN layer, which affects the generalization of the test data. \n \n Generally, the definition of robustness varies in different contexts: for tasks with safety concerns, it's defined as the robustness against adversarial samples; for standard classification without safety concerns, the definition is unclear without explicit assumptions. Some prior works [15,16,17] reposition the robustness as the one against common corruptions. Besides, robustness might be models' stability for some problems, or models' insensitivity to some noise. \n \n For standard classification, we focus on BN layers and identify the robustness as the one against noise in BN statistics in the submitted paper. Then we concentrate on such robustness in attempts to improve models' generalization.\n \n Empirically, the results of section 4.4 and 4.5 corroborate our method's effects of achieving the robustness against BN statistics noise. Meanwhile, our method can boost models' generalization as shown in section 4.2. These experimental results prove our method has bridged the robustness-generalization gap. \n \n Moreover, such robustness obtained by the proposed method has a positive impact on the models' performance on corrupted samples (considered as corrupted attacks). 
During the rebuttal, we have run quick evaluation experiments on ImageNet-C, with the results shown as follows.\n \n\n **Table 1: Results on ImageNet and ImageNet-C.**\n\n | **ResNet-18** | **ImageNet Top-1** $\\uparrow$ | **ImageNet-C mCE $\\downarrow$** |\n |---------------------------------|-------------------------------|----------------------------------|\n | Pretrained | 69.76 | 84.54 |\n | AdvBN [15] (Finetune 20 Epochs) | 69.81 | 84.37 |\n | APART (Finetune 20 Epochs) | 70.30 | 84.14 |\n | AugMax-DuBIN [16] | 67.62 | 82.56 |\n | APART (105 Epochs) | 70.86 | 82.81 |\n | APART (210 Epochs) | 72.14 | 81.91 |\n | **ResNet-34** | **ImageNet Top-1** $\\uparrow$ | **ImageNet-C mCE** $\\downarrow$ |\n | Standard (105 Epochs) | 73.71 | 76.89 |\n | APART (105 Epochs) | 74.58 | 75.17 |", " We thank all reviewers for their critical assessment of our work. This work focuses on improving models' performance in standard classification on benign samples, without considering their robustness against adversarial samples. To this end, we would like to discuss the definition of robustness, which was somewhat confusing in the submitted version, and then highlight our contributions.\n\nIndeed, the definition of robustness varies across contexts: for tasks with safety concerns, it is defined as robustness against adversarial samples; for standard classification without safety concerns, the definition is unclear without explicit assumptions. Some prior works [15,16,17] reposition robustness as robustness against different common corruptions. Besides, robustness might be a model's stability for some problems, or a model's insensitivity to some noise. For example, the noise in the classical Batch Normalization (BN) layer affects generalization performance on the test data. We find that few works aim to solve this problem, so this paper aims to address it.\n\nTo this end, we highlight our contributions as follows:\n\n1. We identify the robustness against the noise in BN statistics. Enhancing such robustness by adversarial training (AT) improves the generalization on benign samples. We bridge the gap between models' generalization and robustness.\n2. We propose APART, following an AT paradigm, to achieve such robustness. Empirically, models trained by APART are robust to BN statistics noise and meanwhile enjoy significant accuracy gains. These empirical results substantiate our insight into the identified robustness.\n3. APART has a plug-and-play nature that allows combination with other training methods, which leads to further accuracy gains.", " The paper proposes a method to improve the generalization of neural networks by training them to be robust to adversarial perturbations in the statistics of the batch normalization (BN) layers. The approach combines gradients computed on unperturbed BN statistics with gradients computed on perturbed statistics. Perturbations or noise in the BN statistics are obtained through 1) signed gradients from the first update and 2) reductions in the batch size for the second update.\nExperiments demonstrate improvements over standard training, especially in the case of smaller-scale datasets, i.e., CIFAR and Tiny-ImageNet. The method can also be combined with other techniques, such as Mixup and SAM optimization, typically leading to further improvements. 
Strengths:\n- The method benefits the generalization of neural networks trained on smaller datasets considerably\n- The technical presentation of the method in Section 3.2 is detailed and sufficiently clear\n- The method can be combined with other training methods, such as SAM. \n\n\nWeaknesses:\n- The paper claims to bridge the gap between robustness and generalization. Experiments are focused mainly on the generalization ability of the learned networks, and robustness experiments are restricted to perturbations of the BN statistics. This is quite limited, and it is unclear if the learned networks are robust to various other adversarial attacks. Indeed, it is unclear what the relevance of Sections 4.4 and 4.5 are regarding the robustness of the networks in practice. \n- Another contribution of the paper is \"a new AT paradigm, termed model-based AT.\" It appears that the main idea of perturbing model parameters has been explored in various prior works (e.g., [8, 28]). It is not clear what the generic formulation in Eq 2 contributes or what novel insights are provided. \n- The benefits of the method seem to disappear during large-scale experiments on ImageNet. This is somewhat concerning, and it might be good to investigate this issue further. \n- Section 3.3 is somewhat confusing: L206 claims \\mathcal{R}=0, but then \\mathcal{R} appears in the perturbation computation of (7). It is also unclear if a term similar to g_\\phi exists in this case. I would appreciate it if the authors could address the weakness listed above, especially the first three points, i.e., \n- How is the robustness to other typical adversarial attacks?\n- How is the introduced AT paradigm different from what prior works proposed, or what novel insights are provided?\n- What are the issues with large-scale training? As mentioned above, it might be good to further address the performance on larger scale datasets if this turns out to be a limitation. Also, depending on how robust the method is to other adversarial perturbations, this could also be mentioned in the limitations.", " While Adversarial Training is one of the most successful methods to increase robustness, it usually degrades performance of the models on clean images. The authors attribute this to distributional discrepancy in Batch Norm statistics. They propose Adversarially Perturbed bAtch noRmalizaTion (APART) to achieve robustness against BN statistics noise, and to bridge the gap between models’ generalization and robustness. They perform backward passes twice over each batch of clean samples. The first backward pass produces two gradient computations: a normal gradient that helps update parameters of model, and a statistics gradient that is used to perturb the statistics parameters in BN. The second pass is performed to generate the defensive gradient that helps the model resist the adversarial statistics perturbation. The normal and defensive gradients are combined to improve both generalization and robustness of the model. Experiments are performed on CIFAR, Tiny-ImageNet and ImageNet, and show improved clean accuracy over standard training and SAM [28]. \n\n\n **Originality and Significance**:\n- The paper presents a new way of bridging the gap between models’ generalization and robustness. It is known in the literature that there is discrepancy between Batch Norm statistics of clean and adversarial examples [13] (as well as the statistics from different batches). 
AdvProp proposes using two batch norm statistics, one for clean images and one auxiliary for adversarial examples [13]. Rather than creating a separate layer to deal with this discrepancy, the paper attempts to make the models robust to the BN statistics noise. This approach is interesting and novel to the best of my knowledge. \n- The method can be combined with other augmentations to further boost performance. The proposed combination with SAM (as one of the state-of-the-art methods) is particularly promising. \n\n**Quality**:\n- Overall the paper is well-structured and well-written.\n- The proposed approach is sound, and is described clearly. \n- Experiments are performed on various datasets including CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet. Overall, the experimental results are convincing. They demonstrate improvements in clean accuracy over the baselines as well as robustness against perturbed BN statistics. Comparing with baselines using the same budget is important given the additional cost of the proposed approach. \n- The authors report detailed experimental results in the supplementary material, and show that ARAPT is relatively insensitive to hyper-parameters. \n\n**Clarity**:\n- Scalability of ARAPT to large datasets and models is not clearly supported in the experiments. The authors use the relatively small ResNet-18 model on ImageNet-1K. ARAPT underperforms standard training on ImageNet at 2x budget and outperforms it at 4x budget (Table 2). The authors note that \"APART employed on the large-scale dataset requires more steps to show its promise\", but do not provide further explanation or experiments on this. \n- All experiments are performed on the ResNet family. On ImageNet the achieved accuracy of 72.14% (Table 2) is far from the state-of-the-art. It'd be good to include experiments on other architectures (e.g. EfficientNet), and see if the gains are significant. See the Clarity section above. - The authors have addressed the limitation of the work in terms of suffering from potential degeneration in the case of combination with other training methods implicitly involving BN. \n- The authors could address the potential limitation of their work on large-scale datasets and models. \n- There is no potential negative societal impact that needs to be specifically addressed. ", " This paper proposes to add adversarial noise to the BN statistics to improve classification accuracy on in-distribution images. Strengths:\n1. The paper is well-written and easy to follow. The related works are thoroughly discussed. \n\nWeaknesses:\n1. The novelty is limited. The proposed method is almost identical with AdvBN [15] (NeurIPS’21). Although the authors mentioned three differences in the related work section, I still think they are all minor differences. \n2. In experiments, no results on [11] or [15] are reported. This makes it hard to evaluate whether the proposed method can outperform previous works. Please see above Please see above", " This paper introduces an ‘Adversarially Perturbed Batch Normalization’ to improve the model’s generalization and robustness. Experiments on CIFAR, Tiny-ImageNet, and ImageNet show that the proposed method can improve the models’ performance compared with the baseline model. \nStrengths:\nCompared with the previous AdvBN [15], the proposed APART is more applicable and easier to train. 
\nThe paper is well-written, and the theoretical analysis is clear.\n\n\nWeaknesses:\nExperiments\nFrom the reviewer’s view, the experiments in this paper are not sufficient.\n(1)\tAs mentioned in Lines 62-63, the authors want to bridge the gap between the model’s generalization and robustness. The reviewer thinks experiments on ImageNet-C or Stylized ImageNet are needed to show the advantages in robustness. \n(2)\tThe comparison with other methods is missing. The reviewer thinks a comparison with normalization methods [18-20] and adversarial methods [11, 15] is needed.\n(3)\t‘Mix-Up’ experiments on ImageNet are missing.\n(4)\tSimilar to the experiments on CIFAR-10 and CIFAR-100, the authors are suggested to conduct the experiments on one more backbone on Tiny-ImageNet and ImageNet.\n I would like to see the experimental results on the robustness benchmarks (e.g., ImageNet-C). As the main contribution lies in the model's generalization and robustness, the robustness result is much needed, in my opinion. For me, the current experiments are not sufficient. The authors are suggested to add more experiments to show the advantages of their paper." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "kUAUmsa75zZ", "FsXHRovW6BS", "cQ48IUAjIlj", "cQ48IUAjIlj", "BLP-ko1mdm3", "GQufvbUajNF", "GQufvbUajNF", "nips_2022_2EQzEE5seF", "nips_2022_2EQzEE5seF", "nips_2022_2EQzEE5seF", "nips_2022_2EQzEE5seF", "nips_2022_2EQzEE5seF" ]
nips_2022_LEqYZz7cZOI
Singular Value Fine-tuning: Few-shot Segmentation requires Few-parameters Fine-tuning
Freezing the pre-trained backbone has become a standard paradigm to avoid overfitting in few-shot segmentation. In this paper, we rethink the paradigm and explore a new regime: {\em fine-tuning a small part of parameters in the backbone}. We present a solution to overcome the overfitting problem, leading to better model generalization on learning novel classes. Our method decomposes backbone parameters into three successive matrices via the Singular Value Decomposition (SVD), then {\em only fine-tunes the singular values} and keeps others frozen. The above design allows the model to adjust feature representations on novel classes while maintaining semantic clues within the pre-trained backbone. We evaluate our {\em Singular Value Fine-tuning (SVF)} approach on various few-shot segmentation methods with different backbones. We achieve state-of-the-art results on both Pascal-5$^i$ and COCO-20$^i$ across 1-shot and 5-shot settings. Hopefully, this simple baseline will encourage researchers to rethink the role of backbone fine-tuning in few-shot settings.
Accept
This paper presents a solution to overcome the overfitting problem in few-shot segmentation. Specifically, the proposed method decomposes the backbone parameters into three matrices via singular value decomposition (SVD) and fine-tunes only the singular values, while leaving the others frozen. This allows the model to adjust the feature representation on novel classes while maintaining the semantic cues in the pre-trained backbone. All reviewers agree that this paper is well written and that the proposed method is applicable and novel. Furthermore, the authors provide great additional experiments and answers to the reviewers’ concerns. This made all reviewers positive about the paper. The AC agreed with the reviewers that the proposed method would make waves in the few-shot learning paradigm, where the parameters of the pre-trained model are conventionally frozen. The AC recommends including the results described in the rebuttal in the final camera-ready version.
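To make the decomposition described in the abstract and meta-review concrete, here is a minimal, hypothetical PyTorch sketch of wrapping one convolution so that only its singular values are trainable. The module layout, attribute names, and the kernel-flattening convention are our assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SVFConv2d(nn.Module):
    """Sketch of Singular Value Fine-tuning (SVF) for a single conv layer.

    The pre-trained kernel W is flattened to 2-D and factored as U S V^T;
    U and V^T become frozen buffers, and only the singular values `s` are a
    trainable parameter. The kernel is reassembled on every forward pass.
    Assumes groups=1 and dilation=1; ResNet convs typically have bias=False.
    """

    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        w = conv.weight.detach()
        self.kernel_shape = w.shape                    # (out_c, in_c, kh, kw)
        u, s, vh = torch.linalg.svd(w.flatten(1), full_matrices=False)
        self.register_buffer("u", u)                   # frozen left singular vectors
        self.register_buffer("vh", vh)                 # frozen right singular vectors
        self.s = nn.Parameter(s)                       # the only trainable part
        self.bias = conv.bias
        self.stride, self.padding = conv.stride, conv.padding

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = ((self.u * self.s) @ self.vh).view(self.kernel_shape)  # U diag(s) V^T
        return F.conv2d(x, w, self.bias, stride=self.stride, padding=self.padding)
```

Replacing every conv in a pre-trained backbone with such a wrapper and passing only the `s` parameters to the optimizer would reproduce the "fine-tune S, freeze U and V" setting that the rebuttals below ablate against alternatives such as S'W, WS', and RS'R'W.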
train
[ "V-Ybl7lwE3U", "hynhqWz7QE", "ysHodz71zOR", "fvmbJiLXmq", "rPuWKhqM9c", "bMreHwSKVDP", "aiBEZFA-9al", "9I8ALjDxLQHj", "DLMKVx2Rch6", "8Ni53g-pq-", "yo4vr2ibeUnV", "__bmyGgw4MaL", "cM6Bn11jmJ", "eRAkE_rPPB2", "5JVWvrZiquY", "XzNUL7mU5JS", "OeVQOCQPl2e", "HjLr_IpBVSz", "dXGQryd7iy1", "XxpRT9q6noL", "vCQAVh-_rml", "8E_wAQvQjNO" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We agree with the reviewer that good performance should be achieved when R=U. The above analysis of 2 and 3 is for random rotation matrix, and does not include the special case of R=U. According to the above results and analysis, we conclude that the choice of R is very important. \n\nFollowing the reviewer's suggestion, we conduct two experiment for the absolutely most rigorous comparison, where the weight W becomes URSR'V$^T$ (Fine-tuning S or freeze backbone). The results below show that introducing a random rotation matrix R gives poor results. It demonstrate that the introduction of random rotation matrices (without R=U and R=I) destroys semantic clues in pre-train weights. Meanwhile, we find that fine-tuning the singular value space S brings positive effects to the model under different weights. It proves that the singular value space is indeed uniquely non-destructive.\n\n| Mehod | Backbone |Expression of weight |Fine-tune param | Fold-0 | Fold-1 | Fold-2 | Fold-3 | Mean |\n| ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |\n| baseline | ResNet-50 | W | - | 65.60| 70.28| 64.12| 60.27| 65.07 |\n| baseline |ResNet-50 |USV$^T$|S| 67.42 | 71.57 | 67.99 | 61.57 | 67.14 |\n| baseline |ResNet-50 |URSR'V$^T$|S| 23.20 | 35.62 | 34.52 | 27.69 | 30.26 |\n| baseline |ResNet-50 |URSR'V$^T$|-| 23.64 | 33.19 | 33.89 | 26.51 | 29.31 |\n\nIn addition, we add this experiment to main paper page 9 Section 4.4 & Tab.9 (marked in red).", " The response by the authors addressed my concerns well, also consider the comments from other reviewers, I will keep my previous rating of this paper and suggest to **strong accept** this paper.", " Thanks for the authors' response. Since the authors addressed my initial concerns about the backbone fine-tuning, more comparisons, extra training time, and more references, I would like to raise my final rating.", " Thank you for the impressive response time and the new results, this is a fascinating and (to me) very surprising outcome! Regarding the explanations, I don’t think 2 and 3 hold – if we let R=U, then according to arguments 2 and 3 this model should still underperform, but in this case the model reduces to: RS’R’W -> US’U’W -> US’U’USV -> US’SV. This is functionally equivalent to USS’V, which we know performs quite well. Nevertheless, these new results pretty conclusively demonstrate that channel-aligned fine-tuning is not uniquely destructive; rather the singular value space is indeed uniquely non-destructive, as originally claimed. I’d highly recommend repeating this experiment (perhaps using URS’R’V for the absolutely most rigorous comparison) and adding it to the main paper, possibly as an extension to Tab.7, as it greatly strengthens the argument being advanced. Regardless, I’ve raised my rating to Accept – I still don’t know what makes the singular value space so special, but it matters less in the face of such a strong empirical argument that it truly is so. ", " We appreciate your valuable comments. We were wondering if our responses have addressed your concerns. Please let us know if you have additional questions. Thank you!", " \nWe appreciate your valuable comments. We were wondering if our responses have addressed your concerns. Please let us know if you have additional questions. Thank you!", " Thanks for your positive feedback. 
We think the suggested changes and additions made here have greatly improved the work.\n\nFollowing the suggestions, we add the 5-shot experimental results in the main body on page 5 (marked in red), and the experimental comparisons with Adapter and Bias Tuning are added in Appendix B.2 of the revision. \n\nIn addition, we will open-source all the code of SVF later.", " We thank the reviewer for providing a detailed illustration of the random rotation matrix setting. We conduct a new experiment with a randomly initialized rotation matrix R (we use the scipy.stats.special_ortho_group function). The formulation of the weight becomes RS'R'W. Note that S' is initialized with an identity matrix as done in previous experiments. During the fine-tuning, we only train S' while keeping the others frozen in the backbone. We provide the results below. The random rotation formulation gives poor results.\n\n| Method | Backbone |Expression of weight |Fine-tune param | Fold-0 | Fold-1 | Fold-2 | Fold-3 | Mean |\n| ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |\n| baseline | ResNet-50 | W | - | 65.60| 70.28| 64.12| 60.27| 65.07 |\n| baseline |ResNet-50 |USV$^T$|S| 67.42 | 71.57 | 67.99 | 61.57 | 67.14 |\n| baseline | ResNet-50 | S'W | S' | 60.96 | 71.99 | 62.54 | 58.58 | 63.52 |\n| baseline |ResNet-50 |RS'R'W|S'| 32.91 | 51.93 | 51.00 | 37.60 | 43.36 |\n\nWe try to explain the results:\n\n1. In fact, if we set R as an identity matrix (the identity matrix is a rotation matrix), RS'R'W = S'W. As shown in the table, S'W is much better than random RS'R'W. It seems that **the selection of the rotation matrix R is critical to the final segmentation performance**.\n2. If we consider RS'R' (it is a diagonal matrix in the initialization stage) as a whole, RS'R' is only related to one dimension of the weight W. Thus for the middle matrix S', **it is also channel-aligned with respect to the weight W**.\n3. But if R is randomly initialized, **we cannot guarantee that RS'R' is a diagonal matrix when updating S' during training** (we verify this phenomenon with the saved checkpoints when we finish the training). Note that the weight W is the one from the pre-trained backbone, which contains semantic clues or learned knowledge. The non-diagonal matrix RS'R' may bring unexpected transformations to the pre-trained weight W, leading to poor results.\n\nIn addition, following the reviewer's suggestion, we upload a new revised supplementary, where we add the results and analysis of the above discussion to supplementary Section D.3 (marked in red).\n", " The authors' responses have addressed my concerns. My final rating is **ACCEPT**, and the initial review has been updated.\n\nIn general, this paper presents a new direction for promoting the research of FSS, and the method itself is novel to the community. In particular, the proposed method is applicable to various models without structural constraints. After the rebuttal, my initial concerns regarding the comparison have been well addressed by the authors in that they have supplemented additional experimental results and in-depth discussions. \n\nAlso, I have read Reviewer xj2U's comments, and I encourage the authors to put the experimental comparisons with Adapter and Bias Tuning in the appendix for better completeness. 
\n\nTo this end, given the convincing new experimental results and discussions, **I vote and argue for accepting this paper** and hope the authors could **open-source the related implementations**.", " Thank you for the in-depth response; the new results and analysis are greatly appreciated. I consider my main concern partially addressed at this point: authors have fairly convincingly demonstrated that tuning in the singular value space is crucial (and please do add these results to the paper or supplementary!) but I’m still not sure what is responsible for this fact. Authors postulate that the US’V formulation outperforms the S’W and WS’ formulations because it is not channel-aligned, and thus better contextualized w.r.t. gradient updates. This could be the case, but this argument would also apply to the original review’s proposed random rotation formulation RS’R’W (with some abuse of notation in the apostrophe), which is not evaluated. The fact that SVF is not channel-aligned, and therefore outperforms channel-aligned alternatives, does not in and of itself explain why we should use the singular value space in particular. However, the new results do provide necessary clarity, so I have updated my review to Weak Accept. ", " We agree with the reviewer that fair comparisons should be conducted when proving the effectiveness of the proposed method. Following the reviewer's suggestion, we upload a new revision, where we discuss the dataset trick brought by BAM on page 6, Section 4.1 (marked in red). We add the results in Q1/A1 to Table 1 in the main body on page 5 (marked in red). For the 5-shot setting, we are running new experiments to get the results. We will update them in Table 1 in the final version. \nPlease let us know if the reviewer has further suggestions about the comparisons.", " The reviewer appreciates the authors' responses. \n\nStill, the reviewer wants to ask the authors whether they can **put the results in the rebuttal Q1/A1 to the main body of the submission, instead of the appendix**. It would be much better if these results are clearly added to Table 1 and Table 2 in the main submission. Please submit a new revision if it is allowed, and the reviewer will give a quick review again.\n\nThe reviewer encourages this necessary action and believes it may help broader readers be aware of the effects brought by **the extremely unfair training setting** introduced by BAM, which may help the community grow better by retaining a fair performance comparison that could better tell the effectiveness of the proposed SVF.\n\n", " Thanks a lot for your time and feedback. Below we address all raised concerns about the paper.\n\n---\n\n**Q1: The paper does not analyze the impact of SVF from different perspectives**\n\n**A1:** Thanks for pointing it out. The two perspectives of SVF are theoretically equivalent, and the purpose of fine-tuning S and S' is to change the distribution of the singular value space. S' in the other implementation of SVF is a learnable parameter initialized to 1, and its size is the same as S. From a theoretical point of view, SS' constitutes a new S (at initialization, S' = 1 and thus SS' = S). 
Below we compare the performance of the models under the two perspectives.\n\n|Method |Backbone |init |Fine-tune param|Fold-0| Fold-1| Fold-2| Fold-3| Mean|\n| ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |\n| baseline + SVF|ResNet-50|- |S|67.42 |71.57 |67.99 |61.57 |67.14 |\n| baseline + SVF' |ResNet-50 | 1 |S'| 67.16| 71.58| 68.59| 61.08| 67.10|\n| baseline + SVF' | ResNet-50 | 0 with exp|S'| 67.50| 72.35| 67.70| 61.66| **67.30** |\n\nwhere SVF' represents the other implementation of SVF. The experimental results show that when S' is initialized to 1, the performance of SVF under both views is consistent. SVF performs better when initialized to 0 with exp. The exp adds nonlinear factors to SVF, which further improves the expressiveness of SVF. It shows that SVF has the possibility of further improvement.\n\n---\n\n**Q2: Compare with bias tuning**\n\n**A2:** Thanks for pointing it out. In the ResNet backbone, the convolution layers do not contain a bias term. The bias terms that can be used for tuning are the ones in the BN layers. Below we supplement the test results of fine-tuning the bias terms in all BN layers.\n\n| Method|Backbone| Fine-tuning method| Fold-0| Fold-1| Fold-2| Fold-3| Mean |\n| ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |\n| baseline | ResNet-50 |Freeze Backbone | 65.60 | 70.28 | 64.12 | 60.27 | 65.07 |\n| baseline | ResNet-50 | SVF | 67.42|71.57| 67.99| 61.57| **67.14** |\n| baseline | ResNet-50 | Bias-Tuning |61.62| 70.10| 64.80| 55.19| 62.93 |\n\nThe experimental results show that bias tuning does not achieve better results than freezing the backbone. The BN layers do not contain semantic information, and the convolution layers do not contain a bias. Therefore, bias tuning cannot have a positive impact on the few-shot segmentation model.\n\n---\n\n**Q3: The change of all singular values of different convolutions**\n\n**A3:** Thanks for pointing it out. We add the change of all singular values for different convolutions to the revised appendix. The changes of the singular values reveal the importance of different semantic cues in the backbone to downstream tasks. We find that the singular value change after the top-30 tends to 0. Therefore, we believe that the top-30 can describe the variation of all singular values.", " Thank you for your valuable feedback! Below we address all raised concerns about the paper.\n\n---\n\n**Q1: Unfair comparisons with the previously proposed methods in Table 1 and Table 2.**\n\n**A1:** The purpose of Table 1 and Table 2 is to verify the effectiveness of SVF under the same training setting, and to verify the universality of SVF on different methods. Our purpose in adding the PFENet (without dagger) results is to make researchers aware of the impact of the dataset trick on FSS model performance. We perform a detailed analysis of the dataset trick in the appendix. \n\nWe agree with the reviewer that the dataset trick affects the performance of FSS models. Therefore, we supplement the analysis of some unfair training tricks in few-shot segmentation in the appendix, and we provide fair comparison results with previously proposed methods in Table 4 of the appendix. All methods in Table 4 adopt the same setting as described in [A]. Below we provide the results of Baseline, PFENet, and BAM with or without SVF on Pascal-5$^i$ 1-shot. 
Following [A], the dataset used in this experiment did not remove images containing the novel classes from the training set. \n\n| Method |Backbone | Training Trick | Fold-0 | Fold-1 | Fold-2 | Fold-3 | Mean |\n| ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |\n| baseline | ResNet-50 |w/o | 66.36| 69.22| 57.64| 58.73| 62.99|\n| baseline + SVF | ResNet-50 | w/o |66.88 |70.84| 62.33| 60.63| **65.17**|\n| PFENet | ResNet-50 | w/o |67.06 |71.61 |55.21 |59.46 |63.34 |\n| PFENet + SVF | ResNet-50 | w/o |68.31 |71.99 |56.25 |61.82 |**64.59**|\n| BAM | ResNet-50 | w/o | 68.37 |72.05 |57.55 |60.38 |64.59 |\n| BAM + SVF | ResNet-50 | w/o | 68.17 |72.86 |57.77 |62.04 |**65.21**|\n\nExperimental results show that retaining the novel categories in the base training stage and setting them as background does not negatively affect SVF. It also shows that whether or not the dataset trick is used does not affect the effectiveness of SVF. The purpose of our detailed discussion of the training trick is to promote the healthy development of the community.\n\n---\n**Q2: What causes the success on Fold-2 when the novel classes are removed from the training set?**\n\n**A2:** Below we count the number of images in each fold before and after using the dataset trick.\n\n| Pascal 5$^i$ | Fold-0 | Fold-1 | Fold-2 | Fold-3 |\n| ------------ | ------------ | ------------ | ------------ | ------------ |\n| w/o remove novel classes | 4760 |4588 |4097 |5108 |\n| remove novel classes |4208 |3726 |2752 |4510 |\n| reduction rate |11.6% |18.8% |**32.8%** |11.7% |\n\nThe statistical results show that the number of images containing novel classes in the Fold-2 training set is 2-3 times that of the other folds. We guess that the removed images negatively affect the results of Fold-2. Therefore, the performance improvement on Fold-2 is most obvious when removing images containing novel classes from the training set.\n\n---\n\n**Q3: Visualizations in Figure 5 and Figure 6 are confusing.**\n\n**A3:** In Figures 5 and 6 we use images from the Fold-1 training set. We guess that the semantic cues with the largest singular value growth are conducive to few-shot segmentation; therefore the base class areas will be displayed in the visualization results. Below, we show the base classes of each fold on Pascal-5$^i$. It can be seen that both 'boat' and 'person' are in the base classes of Fold-1. Therefore, the weights after fine-tuning focus not only on the 'person' but also on the 'boat'. \n\n| Pascal-5$^i$ | base classes |\n| ------------ | ------------ |\n| Fold-0 | bus, car, cat, chair, cow, diningtable, dog, horse, motorbike, person, potted plant, sheep, sofa, train, tv/monitor |\n| **Fold-1** | aeroplane, bicycle, bird, **boat**, bottle, diningtable, dog, horse, motorbike, **person**, potted plant, sheep, sofa, train, tv/monitor |\n| Fold-2 | aeroplane, bicycle, bird, boat, bottle, bus, car, cat, chair, cow, potted plant, sheep, sofa, train, tv/monitor |\n| Fold-3 | aeroplane, bicycle, bird, boat, bottle, bus, car, cat, chair, cow, diningtable, dog, horse, motorbike, person |\n\n---\n\n[A] Amirreza Shaban, Shray Bansal, Zhen Liu, Irfan Essa, and Byron Boots: One-Shot Learning for Semantic Segmentation. BMVC 2017.", " **Q3: What causes the differences between SVF and WS' or S'W?**\n\n**A3:** In this question, we try to provide our understanding of what causes the superior performance of SVF over WS' and S'W. 
We conjecture that this may be related to the context that S or S' can access when fine-tuning the parameters. Assume that W has the shape [M, N]. S and S' are diagonal matrices. S has the shape [Rank, Rank], and S' has the shape [M, M] or [N, N]. When optimizing the parameters, S' only has relations to dimension M or dimension N in a channel-wise manner, while S can connect all channels on both dimension M and dimension N, as S is in the singular value space. This difference can affect the received gradients when training S or S', which results in different performance. To give more evidence, we design more variants of SVF and provide their results in the table below.\n\n| Method | Backbone |Expression of weight |Fine-tune param | Fold-0 | Fold-1 | Fold-2 | Fold-3 | Mean |\n| ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |\n| baseline | ResNet-50 | W | - | 65.60| 70.28| 64.12| 60.27| 65.07 |\n| baseline |ResNet-50 |USV$^T$|S| 67.42 | 71.57 | 67.99 | 61.57 | 67.14 |\n| baseline | ResNet-50 |USS'V$^T$|S'| 67.16 | 71.58 | 68.59 | 61.08 | 67.10 |\n| baseline | ResNet-50 |USS'V$^T$| S + S'| 66.42 | 71.73 | 67.23 | 61.12 | 66.63 |\n\nWe find that, given that S and S' lie in the singular value space, all variants can outperform the freezing-backbone baseline.\n\n---\n\n**Q4: References and typos.**\n\n**A4:** Thanks for pointing it out. We add all the related literature in our revised version, and we fix the typos. The modifications are marked in red in the revised paper.\n\n---", " Thanks a lot for your time and feedback. We have to say that the reviewer asks valuable questions and provides thoughtful clues. We appreciate your inspiring reviews, and we are happy to address the concerns.\n\n---\n\nThe main question of the reviewer is: what is truly responsible for the success of SVF? According to the reviewer's comments and suggestions, we split it into three sub-parts:\n\n- Does fine-tuning another small part of the parameters in the backbone work? Comparing with only fine-tuning BN can be a good example.\n- Is it really necessary to fine-tune the singular values? What if we introduce a new small part of parameters S', which is not in the singular value space, and only fine-tune this S'? (We simplify the experiment posed by the reviewer to fine-tuning the S' in S'W or WS'. Since R is a random rotation matrix, R'R = I.)\n- What causes the differences between SVF and WS' or S'W?\n\nWe give the responses below.\n\n---\n**Q1: Does fine-tuning another small part of the parameters in the backbone work? Comparing with only fine-tuning BN can be a good example.**\n\n**A1:** We conduct experiments on Pascal-5$^i$ with the 1-shot setting. We compare our SVF with methods that only fine-tune the parameters in the BN layers. The results below show that only fine-tuning the parameters in BN layers does not bring overfitting in few-shot segmentation methods, but they perform worse than the conventional paradigm (freezing the backbone), while our SVF outperforms other methods by large margins. 
\n\n| Method |Backbone | Fine-tuning Method | Fold-0 | Fold-1 | Fold-2 | Fold-3 | Mean |\n| ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |\n| baseline | ResNet-50 |Freeze Backbone | 65.60 | 70.28 | 64.12 | 60.27 | 65.07 |\n| baseline | ResNet-50 | Fine-tuning BN scale (weight) | 62.28 | 68.66 | 61.19 | 58.18 | 62.58 |\n| baseline | ResNet-50 | Fine-tuning BN shift (bias) | 61.62 | 70.10 | 64.80 | 55.19 | 62.93 |\n| baseline | ResNet-50 | Fine-tuning BN (weight+bias) | 61.93 | 70.67 | 62.02 | 57.86 | 63.12 |\n| baseline | ResNet-50 |SVF | 67.42 | 71.57 | 67.99 | 61.57 | **67.14** |\n\n---\n\n**Q2: Is it really necessary to fine-tune the singular values? What if we introduce a new small part of parameters S', which is not in the singular value space, and only fine-tune the S'?**\n\n**A2:** To answer this question, we conduct two experiments, where the weight becomes S'W or WS', and we only fine-tune the introduced small part of parameters S'. The results are consistent with the experiments in Q1. Both of them can avoid overfitting but show slightly worse performance than the freezing-backbone baseline.\n\n| Method | Backbone | Expression of weight | Fine-tune param | Fold-0 | Fold-1 | Fold-2 | Fold-3 | Mean |\n| ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |\n| baseline | ResNet-50 | W | - | 65.60| 70.28| 64.12| 60.27| 65.07 |\n| baseline | ResNet-50 | S'W | S' | 60.96 | 71.99 | 62.54 | 58.58 | 63.52 |\n| baseline | ResNet-50 | WS' | S' | 62.82 | 71.69 | 62.84 | 61.13 | 64.62 |\n| baseline | ResNet-50 | USV$^T$ | S | 67.42 | 71.57 | 67.99 | 61.57 | **67.14** |\n\nThe above experimental results in Q1 and Q2 suggest that fine-tuning a small part of the parameters is a good way to avoid overfitting when fine-tuning the backbone in few-shot segmentation. But **it is non-trivial to find such a small part of parameters that can bring considerable improvements**.\n", " **Q3: The reviewer cannot find any components within the proposed SVF specially designed for the "few-shot" scenario. As a result, it is suggested to deploy the proposed SVF on various tasks, including classification and detection, to demonstrate the overfitting-proof advantages of the proposed SVF method.**\n\n**A3:** We understand the reviewer's concern and would like to provide more explanation about our logic:\n\n- First of all, **this paper focuses on the few-shot segmentation task**. \n- Then, we notice that current few-shot segmentation methods follow a paradigm of freezing the backbone. By revisiting this paradigm, we find this convention may exist due to the fact that fine-tuning the backbone results in overfitting [D,E]. \n- To solve this problem, we provide SVF as a solution, which only fine-tunes a part of the parameters in the backbone and gives better results. \n\nThus, **SVF is proposed to solve the existing problem in few-shot segmentation**. \n\nMoreover, we would like to thank the reviewer for recognizing our SVF as a general method that can be applied to few-shot classification and few-shot object detection. As the settings and baseline methods may change in the above two tasks, applying SVF to these two tasks needs more specific designs according to the task settings. 
We leave them for future work.\n\n___\n\n**Q4: In line 78, the authors claimed that the work [10] is not suitable for few-shot vision tasks, yet there are no related experiments showing this issue.**\n\n**A4:** Thank you for pointing this out. We notice that the claim in Line 78 is not appropriate. We revise it here: the above methods are proposed for transformer-based models, but modern few-shot segmentation models use CNN-based backbones. Applying prompt-based methods to various few-shot segmentation methods may need further adjustments. We have added this new sentence to the revised version of our paper.\n\n___\n\n**Q5: It is better to include the experiments in discussing the required extra training time.**\n\n**A5:** We follow the reviewer's advice and measure the training time of models on Pascal-5$^i$ with the 1-shot setting. Compared with the baseline model (freezing the backbone), SVF increases the training time from 2 hours to 5.5 hours on Fold-0. Given the few-shot scenario, there are only limited samples, enabling fast training for models, so it is acceptable even if the training time increases. Moreover, SVF is only applied in model training and does not affect model inference (in inference, we combine U, S, and V back into the weights of the convolution layers, which is the same as the original model).\n\n___\n\n**Q6: The references are not sufficient.**\n\n**A6:** Thanks for pointing it out. We will include all those works in our final version. The detailed discussions are as follows:\n\n- Both [A] and [B] constrain the distribution of the singular values $s$, where [A] forces the singular values around 1 and [B] clamps the large singular values to a constant, hence serving as a regularization term. We did not pose an extra constraint on $s$; instead, we encouraged fully trainable singular values.\n- As illustrated in [A]'s Figure 1, the singular values of well-trained weights are widely spread around [0,2]. The strong regularization proposed in [A,B] should damage the performance of pre-trained networks. Therefore, they turn to training from scratch, which is infeasible in the circumstance of few-shot segmentation. Our method coupled with pre-trained parameters can further exploit the capacity of the backbone, leading to superior results.\n\n___\n\n[A] Kui Jia, Dacheng Tao, Shenghua Gao, Xiangmin Xu: Improving Training of Deep Neural Networks via Singular Value Bounding. CVPR 2017: 3994-4002. \n\n[B] Hanie Sedghi, Vineet Gupta, Philip M. Long: The Singular Values of Convolutional Layers. ICLR 2019.\n\n[C] Neil Houlsby et al.: Parameter-Efficient Transfer Learning for NLP. ICML 2019.\n\n[D] Nanqing Dong, Eric P. Xing: Few-Shot Semantic Segmentation with Prototype Learning. BMVC 2018.\n\n[E] Juhong Min, Dahyun Kang, Minsu Cho: Hypercorrelation Squeeze for Few-Shot Segmentation. ICCV 2021: 6941-6952", " Thanks a lot for your time and feedback. We give the responses to all raised concerns below.\n\n___\n\n**Q1: The strong connection between the backbone fine-tuning and few-shot segmentation is unclear. It is curious why it must fine-tune the backbone for tackling the few-shot segmentation.**\n\n**A1:** There may be some misunderstandings, and we provide further explanations about our SVF in this response. We did not say one *"must"* fine-tune the backbone for few-shot segmentation. Instead, we agree with the claim that freezing the backbone in few-shot segmentation is a good way to achieve promising segmentation results. 
In this paper, we revisit the above conventional paradigm and provide **an alternative way** -- fine-tuning a small part of the parameters in the backbone. The experimental results show that the alternative regime can achieve better results than the conventional paradigm on various few-shot segmentation methods. Thus, **the connection** between the backbone fine-tuning and few-shot segmentation lies in: **fine-tuning part of the parameters in the backbone can serve as an alternative to the freezing-backbone paradigm in few-shot segmentation and can give non-trivial improvements over various few-shot segmentation methods.** Our method brings new thoughts to few-shot segmentation. It suggests that not just the mechanism design in fusing different extracted features or generating prototypes is essential, but **the quality of the extracted features from the backbone also matters** to the final segmentation results.\n___\n\n**Q2: For fine-tuning a backbone network as a goal, why not compare with the methods of meta-learning, adaptor, bias tuning, or domain adaptation?**\n\n**A2:** Thanks for your constructive suggestion of comparing our SVF with Adapter and Bias Tuning. For a quick check, we conduct experiments on Pascal-5$^i$ with the 1-shot setting. The details for Adapter and Bias Tuning are given below:\n\n- Adapter: Adapter is proposed in transformer-based models. When applying it to a CNN-based backbone (ResNet), we make simple adjustments. We follow [C] to build the adapter structures and add them after the stages in the ResNet.\n- Bias Tuning: In the ResNet backbone, the convolution layers do not contain a bias term. The bias terms that can be used for tuning are the ones in the BN layers. We fine-tune the bias terms in all BN layers in this method.\n\nThe experimental results are given in the table below. It shows that **SVF outperforms Adapter and Bias Tuning by large margins**. Moreover, we find that the introduction of Adapter directly leads to over-fitting, while Bias Tuning reduces the performance of the baseline model.\n\n| Method | fine-tune method | Fold-0 | Fold-1 | Fold-2 | Fold-3 | Mean |\n| ------------ | ------------ | ------------ | ------------ | ------------ | ------------ | ------------ |\n| baseline | Freeze Backbone | 65.60 | 70.28 | 64.12 | 60.27 | 65.07 |\n| baseline | SVF | 67.42 | 71.57 | 67.99 | 61.57 | **67.14** |\n| baseline | Adapter | 18.41 | 20.21 | 26.62 | 17.62 | 20.71 |\n| baseline | Bias-Tuning | 61.62 | 70.10 | 64.80 | 55.19 | 62.93 |\n\nFor meta-learning and domain adaptation, we would like to make some clarifications.\n\n- In few-shot segmentation, meta-learning is applied in the segmentation head to learn the knowledge in support images but not in the backbone, posing challenges in directly comparing SVF with meta-learning methods.\n- In addition, domain adaptation is another research direction whose setting differs from the setting in few-shot segmentation. It would be much appreciated if the reviewer could give more details on conducting fair comparisons between our SVF and domain adaptation methods.\n\n", " In order to handle the overfitting issue in few-shot segmentation, this paper proposes to fine-tune a small part of backbone parameters recognized via the singular value decomposition. Precisely, the proposed Singular Value Fine-tuning (SVF) method suggests merely tuning the decomposed singular-value diagonal matrix for each convolutional layer. 
The experiments on few-shot segmentation on two datasets show the positive effects of fine-tuning the backbone using the SVF approach. [Strengths]\n+ The idea of recognizing the singular-value-related backbone parameters for fine-tuning is interesting.\n+ The manuscript is well organized and has several experiments.\n\n[Weaknesses] \n- The motivation is vague since the strong connection between the backbone fine-tuning and few-shot segmentation is unclear. The frozen pretrained backbone could be treated as a feature extractor for a downstream task; hence the downstream task could focus on the mechanism design for employing the extracted features. For fine-tuning a backbone network as a goal, why not compare with the methods of meta-learning, adaptor, bias tuning, or domain adaptation? Therefore, it is curious why it must fine-tune the backbone for tackling the few-shot segmentation. Specifically, the reviewer cannot find any components within the proposed Singular Value Fine-tuning specially designed for the “few-shot” scenario. As a result, it is suggested to deploy the proposed Singular Value Fine-tuning on various tasks, including classification and detection, to demonstrate the overfitting-proof advantages of the proposed SVF method.\n- In line 78, the authors claimed that the work [10] is not suitable for few-shot vision tasks, yet there are no related experiments showing this issue.\n- Since the proposed SVF is used to decompose the basic convolutional layers within a backbone, it may result in additional computational time while carrying out SVF. Therefore, it is better to include the experiments in discussing the required extra training time.\n- The references are not sufficient. For example, the following two related methods should be compared.\n[A] Kui Jia, Dacheng Tao, Shenghua Gao, Xiangmin Xu: Improving Training of Deep Neural Networks via Singular Value Bounding. CVPR 2017: 3994-4002.\n[B] Hanie Sedghi, Vineet Gupta, Philip M. Long: The Singular Values of Convolutional Layers. ICLR 2019.\n The major concern is the vague motivation, since the reviewer cannot find any component of the proposed Singular Value Fine-tuning that is specially designed for the “few-shot” scenario. In addition, the other concern is the lack of comparison with previous methods [A, B] related to manipulating the weight matrices of the convolutional layers. Please see [Weaknesses] for reference. The authors adequately addressed the limitations and potential negative societal impact of their work.", " Authors re-examine the idea of fine-tuning the backbone feature extractor during few-shot semantic segmentation, showing that overfitting can be avoided by limiting updates to a small set of parameters. Specifically, authors decompose convolution layers using SVD, and fine-tune only the singular values S. Results indicate that this approach increases performance relative to using a frozen backbone, while fine-tuning other groups of parameters consistently decreases performance. STRENGTHS:\n\nMotivation is clear, and approach is sensible, straightforward, and highly applicable. Results are convincing and a thorough analysis is provided. Paper is well organized (though with occasional typos and some awkward language, see below). \n\nWEAKNESSES:\n\nWhile the paper convincingly demonstrates that SVF is effective, and acts as expected, it does not adequately explain why. 
Authors imply that fine-tuning in the singular value space is uniquely non-destructive, but it is not intuitively obvious that this should be the case. Indeed, in Fig.4 many values switch signs, indicating that nothing is stopping SVF from zeroing out large swaths of the output manifold in practice. Without knowing why the singular value space is so particularly effective, the contribution is limited, as constrained fine-tuning is already a widely known approach in the few-shot regime (see below) and is not novel in and of itself. \n\nAlong these lines, while the analysis is broad and involved, the comparisons are somewhat apples-to-oranges. Rather than singular values being special, it could instead be that SVF simply allows the authors to fine-tune a smaller number of parameters than their comparative baselines. For starters, all fine-tuning comparisons except for BN involve a far larger number of parameters, and so the observed overfitting is not surprising, and even the BN baseline involves twice as many parameters (scale _and_ shift). Additionally, it could be that SVF allows for a smaller number of _effective_ updates compared to BN. For example, if matrix U is highly axis-aligned, changes to S will shrink or vanish in the subsequent BN layer. If U=I, then S should not update at all. \n\nFurther analysis is required. \n\nLess importantly: while the related work is quite broad, there exist similar approaches in the broader few-shot literature also based on fine-tuning a highly constrained subset of introduced parameters. These may be worth mentioning – e.g. LEO (Meta-Learning with Latent Embedding Optimization, ICLR2019), CNAPS (Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes, NeurIPS2019), or possibly FiLM (FiLM: Visual Reasoning with a General Conditioning Layer, AAAI2018). What is truly responsible for the success of SVF? \n\nI propose the following experiment to start: let R be a random rotation matrix, and set U=R’ and V=RW, where W is the original weight matrix for the given layer. Then attempt SVF fine-tuning. This will show whether SVF depends crucially on the singular value space, or simply on the number of effective updated parameters. Fine tuning only the BN scale terms might also be worth a try, for a true apples-to-apples comparison – it could be that the bias term is uniquely destructive. \n\nAlternatively, if authors could elaborate on what makes the singular value space so unreasonably amenable to fine-tuning, this may become clearer. Perhaps I am simply failing to understand the intended argument. \n\nSMALL COMMENTS AND TYPOS:\n\nPg3 line 100: while does not → while it does not\n\nPg4 line 103: one need → one needs\n\nPg4 line 132: parameters, also → parameters, and also\n\nPg5 line 177: all classes all classes → all classes\n\nPg5 lines 178-179: are used … are used → used … used\n\nPg7 line 214: shows that the → shows the\n\nPg7 lines 227-228: you seem to have a redundant sentence here (Finally… Finally)\n\nPg8 line 251: without destroy → without destroying\n Discussion of limitations is fair. Societal impacts are not discussed, though do not extend beyond those of few-shot learning in general. ", " The authors have proposed a new training scheme for FSS frameworks by only adapting a few parameters of the ImageNet pre-trained backbone to the segmentation task. 
The core idea is to adjust the singular values of the pre-trained kernel weights of the backbone, bringing considerable improvements to representative FSS methods (PFENet and BAM). Extensive experiments have shown the effectiveness of the proposed method. Strength:\n\n+ A good extension of the current training scheme for FSS frameworks.\n\n+ Decent performance gain has been brought to representative FSS methods (PFENet and BAM).\n\n+ Clear motivation and good presentation of the method and discussion. \n\n+ The overall submission is well-prepared with a comprehensive appendix.\n\nWeakness: \n\n- Unfair comparisons with the previously proposed methods (please see Questions section for details).\n\n- Visualizations in Figure 5 and Figure 6 are confusing. \n\nIf the above weaknesses could be well addressed, I would like to give a higher rating.\n\n\n\n This paper does present a good method for boosting existing few-shot segmentation methods whose backbone parameters are fixed, and considerable improvements have been achieved. However, my biggest concern is still about the training setting. \n\nThe authors of submission 1788 follow BAM to remove all training images that contain novel classes, for the purposes of avoiding information leakage, but all previous methods in the community follow Shaban et al. [1] to keep those images during the training phase by setting the labels for the novel classes as background, which explains why PFENet without dagger in Table 1 is much worse than the one with the dagger. \n\nMore specifically, in section 3 of the paper [1], the authors of [1] wrote ''In this problem, unlike image classification, examples from L_test might appear in training images. This is handled naturally when an annotator unaware of some object class, labels it as background''. \n\nTherefore, the reviewer thinks that BAM actually has introduced an unfair comparison in their paper, and it would be better if the authors of submission 1788 could clearly present the results of Baseline + SVF, PFENet + SVF, and BAM + SVF in Table 1 and Table 2, by keeping the novel categories but setting them as the background during the base training phase, for a fair comparison with previous methods that adopt the same setting as described in [1]. \n\nIn Table 2 of the appendix, what causes the success on Fold-2 when the novel classes are removed from the training set? If the novel classes are included as in [1] and set as the background, will the proposed method be negatively affected? \n\nMinor issues:\n1. Are the PFENet and BAM without dagger shown in table 3 trained with novel classes whose labels are set to the background?\n\n2. Visualizations in Figure 5 and Figure 6 are confusing, and they could be improved by indicating what the target classes are. For example, in the last examples of Figure 5 and Figure 6, the 1x1 weights focus on the person more than the boat, but contrarily, the 3x3 weights are more curious about the boat, which contradicts the problem setting that only one target class exists in each evaluation episode. \n\n\nReferences\n[1] One-Shot Learning for Semantic Segmentation. BMVC 2017\n\n As described in the Questions section, there might be some unfair comparisons with previous methods, and some critical aspects are not clear enough. The authors are encouraged to show additional results to support their claims.\n\n[updated after rebuttal] My initial concerns regarding the comparison have been well-addressed in the revision. 
Thus my final rating is ACCEPT (increased from 4 -> 7).", " In this paper, the authors propose a novel SVF to change the standard paradigm of freezing the backbone in few-shot segmentation. The results show that fine-tuning only the parameters of the singular value subspace (S) while freezing the other subspaces can not only effectively avoid the overfitting problem, but also significantly improve the performance of the FSS model. Specifically, the visualization results prove that some semantic cues with high weight in the pretrained weights are not conducive to downstream tasks; thus SVF adjusts the weights of different semantic cues by fine-tuning the parameters of subspace S. They further confirm the effectiveness of their proposed method by running experiments on real datasets and applications and comparing them to other topologies and methods. Moreover, they further confirm the effectiveness and superiority of SVF by comparing it to other representative methods on two common datasets. - The overall writing of this paper is clear and easy to follow. This is a really interesting paper. This paper not only analyzes the reason why the freezing backbone in FSS has become the traditional paradigm, but also proposes a novel fine-tuning method to try to break the traditional paradigm. The design of the novel SVF method is simple yet effective, as proven by extensive experiments and analysis. Meanwhile, this paper analyzes the impact of hidden tricks in FSS on model performance to facilitate fair comparisons in future work. The core of SVF is a novel method of fine-tuning the backbone; thus SVF has universality and can be easily used for all FSS models. As a result, this could serve as a new paradigm for the few-shot semantic segmentation task. I believe that this paper will be a valuable contribution to the field, and I strongly recommend acceptance.\n\n- The experimental results demonstrate the effectiveness of SVF, but it will increase the training cost by a small amount compared to freezing the backbone. - In section 3.3 and the appendix, the authors detail two perspectives of SVF. The two perspectives are implemented differently. The paper does not analyze the impact of SVF from different perspectives on the FSS model.\n\n- As an important part of the backbone, the bias has a huge impact on the model during the fine-tuning process. The authors lack an analysis of bias among different fine-tuning methods.\n\n- In section 4.4 and the appendix, the authors only show the change of the top-30 singular values of different convolutions, and the change results of all the singular values are missing.\n This paper does not reflect any potential negative societal impact." ]
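For readers who want to reproduce the rotation-matrix ablations discussed in the rebuttals above (the S'W, RS'R'W, and URSR'V^T variants), here is a hypothetical Python helper. The SVD-based assembly and the scipy sampler match what the rebuttal describes; the function name, seed handling, and dtype choices are our assumptions.

```python
import torch
from scipy.stats import special_ortho_group

def rotated_svf_weight(w2d: torch.Tensor, seed: int = 0):
    """Assemble the U R S R' V^T ablation weight from a flattened kernel.

    `w2d` is a conv kernel reshaped to (out_c, in_c * kh * kw). Returns the
    trainable middle vector `s` and the reassembled weight; in the ablation,
    only `s` would be passed to the optimizer, and the weight would be
    reassembled at every training step.
    """
    u, s_init, vh = torch.linalg.svd(w2d, full_matrices=False)
    r = torch.as_tensor(special_ortho_group.rvs(s_init.numel(), random_state=seed),
                        dtype=w2d.dtype)                   # random rotation, det = +1
    s = torch.nn.Parameter(s_init.clone())                 # trainable middle matrix
    weight = u @ r @ torch.diag(s) @ r.T @ vh              # U R S R' V^T
    return s, weight
```

Setting `r` to the identity recovers plain SVF (U S V^T with trainable S), which is the only variant in the tables above that beats the frozen backbone.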
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 4 ]
[ "fvmbJiLXmq", "cM6Bn11jmJ", "OeVQOCQPl2e", "9I8ALjDxLQHj", "8E_wAQvQjNO", "dXGQryd7iy1", "DLMKVx2Rch6", "8Ni53g-pq-", "yo4vr2ibeUnV", "5JVWvrZiquY", "__bmyGgw4MaL", "eRAkE_rPPB2", "8E_wAQvQjNO", "vCQAVh-_rml", "XzNUL7mU5JS", "XxpRT9q6noL", "HjLr_IpBVSz", "dXGQryd7iy1", "nips_2022_LEqYZz7cZOI", "nips_2022_LEqYZz7cZOI", "nips_2022_LEqYZz7cZOI", "nips_2022_LEqYZz7cZOI" ]
nips_2022_KCXQ5HoM-fy
Supported Policy Optimization for Offline Reinforcement Learning
Policy constraint methods to offline reinforcement learning (RL) typically utilize parameterization or regularization that constrains the policy to perform actions within the support set of the behavior policy. The elaborative designs of parameterization methods usually intrude into the policy networks, which may bring extra inference cost and cannot take full advantage of well-established online methods. Regularization methods reduce the divergence between the learned policy and the behavior policy, which may mismatch the inherent density-based definition of support set thereby failing to avoid the out-of-distribution actions effectively. This paper presents Supported Policy OpTimization (SPOT), which is directly derived from the theoretical formalization of the density-based support constraint. SPOT adopts a VAE-based density estimator to explicitly model the support set of behavior policy and presents a simple but effective density-based regularization term, which can be plugged non-intrusively into off-the-shelf off-policy RL algorithms. SPOT achieves the state-of-the-art performance on standard benchmarks for offline RL. Benefiting from the pluggable design, offline pretrained models from SPOT can also be applied to perform online fine-tuning seamlessly.
Accept
This work presents an interesting idea for offline reinforcement learning (RL): constraining the policy network to stay within the support set of the behavior policy, thereby avoiding out-of-distribution actions more effectively than standard behavior regularization. The proposed Supported Policy OpTimization (SPOT) method leverages the theoretical framework of the density-based support constraint and adopts a VAE-based density estimator to model the support of behavioral actions. Such a simple method indeed allows effective density-based regularization and can be flexibly combined with most standard off-policy RL algorithms. Experiments also show that the proposed algorithm achieves better performance than SOTA offline RL methods. All the reviewers think that the paper is written carefully, with the ideas explained intuitively, and the algorithm tested extensively to showcase the effectiveness of SPOT. Therefore the consensus is to accept this paper for publication at NeurIPS 2022.
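To make the regularizer summarized in the abstract and meta-review concrete, here is a minimal, hypothetical PyTorch sketch of SPOT's density-regularized actor update. The interface of the pretrained density estimator, the placeholder name `neg_log_density`, and the fixed weight `lam` are our assumptions; the paper's exact loss normalization may differ.

```python
import torch

def spot_actor_loss(actor, critic, neg_log_density, states, lam=0.1):
    """Sketch of SPOT's density-regularized actor objective.

    Roughly: loss = -Q(s, pi(s)) + lam * (-log pi_beta(pi(s)|s)), where
    `neg_log_density(states, actions)` stands in for a pretrained CVAE that
    upper-bounds the negative behavior log-density via its ELBO.
    """
    actions = actor(states)                     # deterministic policy, TD3-style
    q = critic(states, actions)
    penalty = neg_log_density(states, actions)  # approx. -log pi_beta(a|s)
    return (-q + lam * penalty).mean()
```

Because the constraint lives entirely in this pluggable loss term, the pretrained actor and critic can be handed to an off-the-shelf online algorithm for fine-tuning, with `lam` decayed to relax the constraint, matching the fine-tuning discussion in the responses below.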
train
[ "iNsk8RUtzV", "63IidCfA4L9", "PpinE77wq0F", "WyvhAnLE_cN", "Z4w8lQIyc2w", "b4M_YuTp0G8", "8Z3fr7-pgvG", "qnYX29-RsnJ", "zYUZN57ZOev", "J9YqTTx39u", "1SZNVnn9tMjx", "4QhzeeRzkMI", "8Xi9NjFlG7d", "Kic2JGjLD98", "9RFg_XsEz2F", "3Wy8C7COq9", "YSlemDntD7o" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We'd like to thank you again for your time and efforts in providing a valuable review and carefully judging our feedback. We really enjoy the communication, and it helps us make our paper better.", " Thanks for the detailed response. Most of my concerns are solved. I will increase my score.", " Dear Reviewer,\n\nIt is a kind reminder that **this is the last day of the one-week Reviewer-author discussion**. Following your suggestion, we believe that we have made a great effort to provide all the experiments and clarifications that we can.\n\nIf you have read our **latest response**, please kindly let us know. Any further questions/discussions are welcome. \n\nThanks again for your review. Looking forward to your reply. Thank you!", " Thank you for your interactive response, which gives us further opportunity to answer your questions! \n\nFor IQL with constraint relaxation on the online fine-tuning setting, we took the official implementation and hyperparameters of IQL except that we tried to linearly increase the inv-temperature $\\beta$ of IQL during online fine-tuning to relax the implicit constraint, which enables a fair comparison with SPOT. We have experimented with linearly increasing the original $\\beta=10.0$ to $2\\beta$ ($=20.0$), $3\\beta$ ($=30.0$) and $5\\beta$ ($=50.0$), during 1M online steps. Besides, for IQL without constraint relaxation, this is exactly the experimental setting of the original IQL with a fixed $\\beta$ and we have presented the corresponding results in $\\underline{\\text{Table 4 of main paper}}$. We summarize here all experimental results, including initial performance after offline RL and performance after 1M online steps:\n\n| | IQL (fixed $\\beta$) | IQL ($\\beta \\rightarrow 2\\beta$) | IQL ($\\beta \\rightarrow 3\\beta$) | IQL ($\\beta \\rightarrow 5\\beta$) | SPOT |\n| ----------------- | ------------------------------------- | ---------------------------------- | ---------------------------------------------- | ------------------------------------- | ------------------------------------------- |\n| Umaze-v2 | 85.4 $\\rightarrow$ 96.2 | 86.8 $\\rightarrow$ 96.6 | 87.3 $\\rightarrow$ 97.0 | 89.3 $\\rightarrow$ $\\underline{98.0}$ | 93.2 $\\rightarrow$ $\\textbf{99.2}$ |\n| Umaze-diverse-v2 | 70.8 $\\rightarrow$ 62.2 | 68.0 $\\rightarrow$ 42.8 | 74.0 $\\rightarrow$ 63.3 | 69.0 $\\rightarrow$ $\\underline{64.0}$ | 41.6 $\\rightarrow$ $\\textbf{96.0}$ |\n| Medium-play-v2 | 68.6 $\\rightarrow$ 89.8 | 73.4 $\\rightarrow$ 92.8 | 68.0 $\\rightarrow$ $\\underline{93.7}$ | 68.7 $\\rightarrow$ 90.3 | 75.2 $\\rightarrow$ $\\textbf{97.4}$ |\n| Medium-diverse-v2 | 73.4 $\\rightarrow$ 90.2 | 73.8 $\\rightarrow$ 90.4 | 63.0 $\\rightarrow$ $\\underline{94.0}$ | 71.0 $\\rightarrow$ 92.0 | 73.0 $\\rightarrow$ $\\textbf{96.2}$ |\n| Large-play-v2 | 40.0 $\\rightarrow$ $\\underline{78.6}$ | 43.2 $\\rightarrow$ 77.6 | 46.3 $\\rightarrow$ 75.0 | 38.3 $\\rightarrow$ 75.3 | 40.8 $\\rightarrow$ $\\textbf{89.4}$ |\n| Large-diverse-v2 | 40.4 $\\rightarrow$ 73.4 | 41.4 $\\rightarrow$ 78.8 | 50.0 $\\rightarrow$ $\\underline{84.3}$ | 44.0 $\\rightarrow$ 78.0 | 44.0 $\\rightarrow$ $\\textbf{90.8}$ |\n| **Total** | 378.6 $\\rightarrow$ 490.4$\\pm$25.8 | 386.6 $\\rightarrow$ 479.0$\\pm$41.2 | 388.7 $\\rightarrow$ $\\underline{507.3}\\pm$22.2 | 380.3 $\\rightarrow$ 497.7$\\pm$27.0 | 367.8 $\\rightarrow$ $\\textbf{569.0}\\pm$12.4 |\n\nWe would also like to highlight that the constraint relaxation schedule of SPOT ($\\lambda$ linearly decayed to $0.2 \\lambda$) is set as common 
practices without careful tuning. A careful tuning of this may provide even stronger fine-tuning performance of SPOT.\n\nPlease let us know what else we can do to address any lingering issues. We'd be happy to answer your future questions.", " Thank you for your response. The authors' response partially solved my questions, but I am still concerned about the novelty of the paper. I also notice that Reviewer AVgj shares this concern.", " Thanks for your elaborate response and the added experiments. The response solves most of my concerns. I have further questions on Q5 and Q6. Could you show the hyper-parameters you tuned on IQL for constraint relaxation and the corresponding results? Besides, I am curious about the results of IQL on the online fine-tuning setting by removing the relaxation.", " We would like to thank the reviewers for their detailed and insightful comments. In this paper, we aim to propose a simple and pluggable offline RL method with a number of practical benefits for realistic applications, including implementation simplicity, strong empirical performance, and convenience for online fine-tuning.\n\nWe have made every effort to address all the reviewers' concerns and responded to the individual reviews below. We have also updated the paper with several modifications to address reviewer suggestions and concerns. Summary of updates:\n\n1. We added PLAS results to Table 3;\n2. We enriched the conclusion (Section 6) with discussion w.r.t. limitations and future work;\n3. We expanded the discussion in Section 5.4 to clarify our claim about inference efficiency;\n4. We added a footnote to explain our notation $\log \pi_{\beta}\left(\pi_{\phi}(s) | s\right)$ in Equation 4;\n5. We added clarification of the intuition behind Figure 1(b) in Section 3.4 of supplementary material;\n6. We added comments that constraint relaxation empirically does not help IQL for fine-tuning in Section 3.3 of supplementary material.\n\nAll updates are highlighted in blue.", " We would like to sincerely thank Reviewer AVgj for providing the detailed review and the positive evaluation of the clarity, soundness and thoroughness of our paper.\n\n**Q1**: Rephrase the conclusion.\n\nWe appreciate the nice suggestion about the conclusion. We have added discussion w.r.t. limitations and future work into the uploaded revision. Here is the revised conclusion:\n\n> 6 Conclusion\n>\n> We present Supported Policy OpTimization (SPOT), a policy constraint method to offline RL built upon off-the-shelf off-policy RL algorithms. Capturing the standard formulation of the support constraint, SPOT introduces a pluggable regularization term applied directly to the estimated behavior density and obtains excellent performance across different tasks in the D4RL benchmarks, including standard Gym-MuJoCo tasks and challenging AntMaze tasks. Furthermore, when online fine-tuned after offline RL pre-training, the pluggable design of our algorithm makes it seamless to take full advantage of well-established online methods and exceed the state-of-the-art on the challenging AntMaze domains. One limitation of our current method, shared by most policy constraint methods, is that the performance may be limited by the accuracy of estimation of the behavior policy. An exciting direction for future work would be to develop an effective pluggable constraint mechanism excluding explicit estimation of the behavior policy.\n\nIf room can be made in the case of acceptance, we will enrich the conclusion with more possible areas of future improvement, for example, adaptive adjustment of the constraint strength instead of manual tuning. \n\n**Q2**: Enrich the main text.\n\nWe are glad that you found some parts of the supplementary material interesting. We will consider moving those contents into the main text if the final version permits an additional page, which is the tradition of NeurIPS.", " Many thanks to Reviewer qPyz for providing the thorough review and valuable suggestions. \n\n**Q1**: Lack of theoretical analysis.\n\nThe main concern raised by the reviewer is the lack of precise theoretical characterization of the empirical improvement. We agree that the paper is primarily methodological, which aims to propose a simple and pluggable offline RL method. SPOT brings a number of practical benefits for realistic applications, including simple implementation, inference efficiency and convenience for fine-tuning. Moreover, our empirical results are strong enough to be impactful. We would also like to point out that we do provide some preliminary effort to connect our method to the theoretical framework of support constraint in $\underline{\text{Section 3.2 and 4.1 of main paper}}$. While BEAR [1] uses a maximum mean discrepancy (MMD) constraint to approximate such a support constraint, we empirically find that it is ineffective; our method SPOT faithfully and elegantly realizes the idea of the support constraint in an explicit regularization based on behavior-density estimation, which is believed to contribute to the performance boost.\n\n[1] Kumar et al. Stabilizing off-policy q-learning via bootstrapping error reduction. NeurIPS 2019.\n\n**Q2**: Comment on Equation 4.\n\nThe notation $\log \pi_{\beta}\left(\pi_{\phi}(s) \mid s\right)$ is a bit confusing. As it is the most commonly used notation for deterministic and stochastic policies, we cannot think of a better one. We have added a footnote to explain the current notation in the uploaded revision. Thank you for pointing this out.", " Many thanks to Reviewer FcL9 for providing the insightful review and valuable comments.\n\n**Q1**: Lack of theoretical analysis. What do we mean by \"a closer connection between theory and algorithm\"?\n\nWhile the paper is primarily methodological, our algorithmic designs are greatly inspired by relevant theoretical analysis. As argued by many previous works [1,2,3,4], a support constraint may be sufficient to be theoretically and empirically effective for offline RL methods. Theorem 4.1 in [1] shows that the threshold $\epsilon$ of the support set simultaneously trades off the extrapolation error of Q estimation and the performance of the constrained optimal policy. While BEAR [1] resorts to maximum mean discrepancy (MMD) to approximate such a support constraint, we empirically find that it is ineffective. Our method SPOT directly converts the constraint $\pi_{\beta}\left(a | s\right)>\epsilon$ into a simple and straightforward regularization with strong empirical performance, which is believed to be meaningful in real applications.\n\n[1] Kumar et al. Stabilizing off-policy q-learning via bootstrapping error reduction. NeurIPS 2019.\n\n[2] Ghasemipour et al. EMaQ: Expected-max q-learning operator for simple yet effective offline and online rl. ICML 2021.\n\n[3] Laroche et al. 
Safe Policy Improvement with Baseline Bootstrapping. ICML 2019.\n\n[4] Liu et al. Provably Good Batch Reinforcement Learning Without Great Exploration. 2020.\n\n**Q2**: Derivation through Equations 3, 4, and 5.\n\nThe constrained optimization problem (Equation 4) is exactly how we realize $\\max_{a^{\\prime}: \\pi_{\\beta}\\left(a^{\\prime} \\mid s^{\\prime}\\right)>\\epsilon} Q\\left(s^{\\prime}, a^{\\prime}\\right)$ in the definition of supported backup operator (Equation 3). As mentioned in $\\underline{\\text{Line 171-172 of main paper}}$, the approximation from Equation 4 into Equation 5 is heuristic, which does not satisfy a theoretical guarantee. However, we would like to highlight that it is commonly adopted by the derivation of previous work, including TRPO for online RL and BEAR, AWR for offline RL. Our empirical evaluation shows that this approximation is practically effective.\n\n**Q3**: On novelty.\n\nWhile the proposed algorithm is based on previous literature, including the theoretical framework of support constraint, well-established off-policy algorithms and a VAE-based density estimator, we conduct an elegant algorithmic design to achieve a practical and powerful method. We would also like to point out that our focus is meaningful and impactful. We do believe that a simple and pluggable design is sufficient to be effective for offline RL, eliminating implementation complexity, computational cost, or algorithmic gap with online RL, which is essential for practical applications.", " **Q6:** Comparison with IQL under the online fine-tuning setting. \n\nWe agree with the reviewer that the conservatism in IQL may limit its online performance. IQL learns policy using advantage-weighted regression [1], whose implicit KL-divergence constraint is controlled by inverse-temperature $\\beta$ in IQL. Thus, like how we gradually relax the constraint of SPOT, we further evaluate a variant of IQL whose inverse-temperature $\\beta$ is increased linearly to $2\\beta$ during online fine-tuning. Here are the results:\n\n| | IQL w/ constraint relax | SPOT (from main paper) |\n| ----------------- | ----------------------------- | --------------------------------------- |\n| Umaze-v2 | 86.8 $\\rightarrow$ 96.6 | **93.2** $\\rightarrow$ **99.2** (+2.6) |\n| Umaze-diverse-v2 | **68.0** $\\rightarrow$ 42.8 | 41.6 $\\rightarrow$ **96.0** (+53.2) |\n| Medium-play-v2 | 73.4 $\\rightarrow$ 92.8 | **75.2** $\\rightarrow$ **97.4** (+4.6) |\n| Medium-diverse-v2 | **73.8** $\\rightarrow$ 90.4 | 73.0 $\\rightarrow$ **96.2** (+5.8) |\n| Large-play-v2 | **43.2** $\\rightarrow$ 77.6 | 40.8 $\\rightarrow$ **89.4** (+11.8) |\n| Large-diverse-v2 | 41.4 $\\rightarrow$ 78.8 | **44.0** $\\rightarrow$ **90.8** (+12.0) |\n| **Total** | **386.6** $\\rightarrow$ 479.0 | 367.8 → **569.0** (+90.0) |\n\nAs shown in the above table, constraint relaxation does not help IQL on online fine-tuning, and SPOT still shows superior performance. We have added comments about this result in Section 3.3 of the uploaded revision of supplementary material.\n\nAs mentioned above, one advantage of SPOT's pluggable design is that we can totally remove the constraint if necessary (for example, when the bootstrap error is not severe) and restore a standard off-policy algorithm. However, implicit constraint serves as a native component in IQL. With $\\beta$ approaching infinity, there exist extreme numerical issues with exponential weighting. 
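For reference, the advantage-weighted policy extraction referred to here maximizes (a sketch in our notation, with $\beta$ the inverse temperature and the advantage estimated as $Q(s,a)-V(s)$):

$$\mathcal{L}_{\pi}=\mathbb{E}_{(s,a)\sim\mathcal{D}}\left[\exp\big(\beta\,(Q(s,a)-V(s))\big)\log\pi_{\phi}(a\mid s)\right],$$

so the implicit KL constraint loosens as $\beta$ grows, and the exponential weights overflow numerically as $\beta\to\infty$.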
While this benefit of simply removing or relaxing the regularizer is shared by all methods based on pluggable regularization, SPOT has the strongest offline performance among them.\n\n[1] Xue Bin Peng *et al*. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv, 2019.\n\n**Q7:** Computation cost of inference.\n\nWe agree with the reviewer that many methods also only need one forward pass of the policy network to do inference, thus making inference efficient. In $\underline{\text{Section 5.4 of main paper}}$, we aim to highlight the comparison with policy constraint methods via parameterization (BCQ, EMaQ, PLAS) that also utilize an explicit density constraint but couple the policy with generative models or the critic network ($\underline{\text{Section 2 of supplementary material}}$). SPOT enjoys the best of both worlds: an explicit density-based constraint and inference efficiency. We have clarified this claim in the uploaded revision.", " **Q4**: How does our constraint mechanism contribute to performance improvement?\n\nWe have compared with baseline methods built upon TD3, including BCQ, PLAS, TD3+BC on Gym-MuJoCo datasets and BCQ, TD3+BC on AntMaze datasets in $\underline{\text{Figure 1, Tables 2 and 3 of main paper}}$. We further evaluate PLAS on AntMaze datasets. Here are the results, which are also included in the uploaded revision:\n\n| | Am-u | Am-ud | Am-mp | Am-md | Am-lp | Am-ld | Total |\n| ---- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ | ------------------- | -------------------- |\n| PLAS | 62.0 $\pm$ 16.7 | **45.4** $\pm$ 7.9 | 31.4 $\pm$ 21.5 | 20.6 $\pm$ 27.7 | 2.2 $\pm$ 3.8 | 3.0 $\pm$ 6.7 | 164.6 $\pm$ 84.3 |\n| SPOT | **93.5** $\pm$ 2.4 | 40.7 $\pm$ 5.1 | **74.7** $\pm$ 4.6 | **79.1** $\pm$ 5.6 | **35.3** $\pm$ 8.3 | **36.3** $\pm$ 13.7 | **359.6** $\pm$ 39.7 |\n\n*Note: Am = AntMaze, u = umaze, m = medium, l = large, d = diverse, p = play. SPOT results are from the main paper.*\n\nSPOT significantly outperforms these methods on both domains. Notably, all baseline methods based on TD3 provide poor performance on the AntMaze domain (total score 142.8 for BCQ, 120.2 for TD3+BC, 164.6 for PLAS). These results illustrate that the proposed constraint mechanism, namely direct regularization on the behavior density of learned actions, essentially contributes to SPOT's performance on complex offline RL tasks. \n\nFurthermore, the proposed constraint mechanism combined with SAC works comparably to the original TD3 variant on Gym-MuJoCo. It outperforms baseline methods as well, which also shows the effectiveness of the proposed constraint mechanism.", " **Q5:** Advantage of SPOT for online fine-tuning \"seamlessly.\"\n\nHere by \"seamlessly,\" we mean to fully exploit well-established powerful online RL algorithms when online fine-tuned, with a minimal algorithmic gap, eliminating unnecessary complexity or hyperparameter tuning. We argue that not all offline RL methods can realize this idea well. \n\n(1) Policy constraint methods via parameterization, such as BCQ, introduce additional structure into the policy. Thus standard online RL algorithms, which typically formulate the policy as a simple fully-connected network, cannot be initialized directly by these offline trained policies.\n\n(2) Even if we can initialize with the offline pretrained policy, [1] indicates that off-policy bootstrapping error can cause an initial performance decrease when online fine-tuned with a standard off-policy method. 
Thus we also need a conservative constraint, which is typically a component of offline RL methods.\n\n(3) However, if we directly use an offline RL algorithm for online fine-tuning, performance or training speed may be limited by excessive conservatism (see the IQL experiments below, for example). Further, how these offline methods perform when online fine-tuned is not fully understood, tuned, and benchmarked by the community.\n\nOur method SPOT is built upon standard off-policy algorithms (with well-established implementations and hyperparameters), and a single hyperparameter $\lambda$ can easily control the strength of the pluggable constraint. Both advantages, as well as strong offline performance, contribute to the superior online performance of SPOT.\n\n[1] Ashvin Nair *et al*. AWAC: Accelerating online reinforcement learning with offline datasets. arXiv, 2020.", " We would like to sincerely thank Reviewer Eyz5 for providing a detailed review and insightful questions. \n\n**Q1:** Why should SPOT be compared and achieve better performance under the same constraint strength?\n\nHere is the intuition behind the experiments: We assume that the same constraint strength implies the same risk of extrapolation error on Q estimation (a related theoretical bound can be found in the BEAR paper). Benefiting from the exact constraint formulation ($\underline{\text{Eq. (4) of main paper}}$), SPOT can fully exploit the feasible actions that are $\epsilon$-supported: $\\{a: \pi_{\beta}(a | s)>\epsilon\\}$. However, other kinds of constraints may deviate from the density-based formulation of the $\epsilon$-support set, thus feasible actions under these constraints may only constitute a subset of the minimal support set that covers them. Under the risk of Q estimation error but only exploiting a subset of the $\epsilon$-supported actions, baseline methods limit their optimality and provide a fragile tradeoff between satisfied constraint strength and optimality. To quantitatively illustrate this, we plot Figure 1(b) and compare the performance of different methods under the same constraint strength.\n\nWe have added the clarification of this intuition into Section 3.4 of the uploaded revision of supplementary material. All modifications are highlighted in blue.\n\n**Q2:** Why do the other baselines only report a few points in Figure 1(b)?\n\nFirst, for better illustration, we zoom in on the figure and therefore some outlier points are out of the figure (for example, PLAS on hopper-medium-replay). As mentioned in the caption of $\underline{\text{Figure 1 of main paper}}$, we present extended results in $\underline{\text{Figure 6 of supplementary material}}$, which includes all points we reported. \n\nSecond, as mentioned in $\underline{\text{Section 3.4 of supplementary material}}$, for some specific combinations of hyperparameter and algorithm (especially for BEAR), the constraint is too loose and the training diverges. We exclude these points from the figures. \n\n**Q3**: Why does the effect become worse when using Gaussian models on datasets generated by Gaussian policies?\n\nIt is an interesting phenomenon, and we have tried our best to interpret it. Originally, we fit a tanh-Gaussian distribution with learned standard deviation to model behavior policies in our experiments of the main paper, since tanh-Gaussian is typically used for SAC to represent policies with bounded actions. 
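For concreteness, here is a minimal sketch of the two behavior-density parameterizations discussed in this answer (PyTorch-style; the fixed std value and all names are illustrative, not our exact tuned implementation):

```python
import torch
from torch.distributions import Normal, TransformedDistribution
from torch.distributions.transforms import TanhTransform

# Tanh-Gaussian with learned std: squashes an unbounded Gaussian into the
# bounded action range, as commonly parameterized for SAC-style policies.
def tanh_gaussian_log_prob(mean, log_std, action):
    base = Normal(mean, log_std.exp())
    dist = TransformedDistribution(base, TanhTransform(cache_size=1))
    return dist.log_prob(action).sum(-1)

# Unbounded Gaussian with a fixed std: the log-density reduces to a scaled
# squared error plus a constant, avoiding the tanh change-of-variables term.
def fixed_std_gaussian_log_prob(mean, action, std=0.1):
    return Normal(mean, torch.full_like(mean, std)).log_prob(action).sum(-1)
```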
However, we empirically find that if we alternatively use a standard (unbounded) Gaussian with fixed std (which is inspired by the design choice of our VAE decoder implementation), we obtain $89.0\pm9.7$ on hopper-medium and $87.0\pm 1.6$ on walker-medium. These results are consistent with those of VAE. We suspect that the tanh transformation over the Gaussian introduces some optimization difficulties, and a learned std may cause overfitting since each state-conditional action distribution has only one data point.\n\nNevertheless, it is notable that **with either the VAE-based or the Gaussian density model**, SPOT substantially outperforms state-of-the-art offline RL baselines on Gym-MuJoCo medium and medium-replay datasets ($\underline{\text{Table 2 of main paper}}$ and $\underline{\text{Table 3 of supplementary material}}$), which demonstrates the effectiveness of our proposed explicit density-based regularization. Besides, VAE has the advantage of modeling more complex distributions.", " \nThe author presents Supported Policy OpTimization (SPOT), a policy constraint method for offline RL built upon an off-the-shelf off-policy RL algorithm. SPOT introduces a pluggable regularization term applied directly to the estimated behavior density. The experiments are conducted on the D4RL benchmark and several SOTA baselines are taken for comparison.\n \nStrengths:\n\n1. The article is overall well-written and easy to follow.\n2. The motivation of the article, simple and pluggable offline RL, is meaningful in real applications.\n3. The solution is simple to implement and reasonable.\n\n\nWeaknesses\n\nThe weaknesses come from the experiment designs, which are unclear to me. I have listed the related questions for the authors to clarify.\n1. In Figure 1(b), why should we compare the performance of SPOT relative to the baselines under the same constraint strength? And why can SPOT achieve better performance with the same constraint strength, since the related evidence cannot be found in the current theorem? By the way, I think a meaningful comparison is that SPOT can easily adjust the degree of constraint by controlling \lambda, while other solutions do not do this well. Also, why do the other baselines only report a few points (e.g., in hopper-medium-replay-v2, PLAS only has one point)?\n2. In Figure 2, why does the effect become worse when using Gaussian, especially in Walker2d-medium and Hopper-medium? The policies of these datasets are Gaussian distributions, so Gaussian should be the most accurate modeling.\n3. In the last paragraph of Section 5.2, you mentioned that the TD3 method is not as stable as SAC, especially in the Ant environment, and Figure 2 illustrates the problem. So, is it possible that the performance improvement of SPOT reported in Table 3 comes from the algorithm choice (TD3 instead of SAC) rather than the constraint mechanism? For a fair comparison, an ablation experiment may be required here.\n4. In Section 5.3, the author compares the performance improvement of SPOT and IQL under the online fine-tuning setting, and the author claims that the SPOT algorithm can be online fine-tuned seamlessly. This point of view is a bit confusing to me: if I have an online reinforcement learning algorithm, then, for any offline reinforcement learning algorithm, I only need to pass the parameters of the optimal policy solved offline to the online reinforcement learning algorithm as the initialization parameters of the policy, and then I can achieve \"fine-tuned seamlessness\". Is this something only SPOT can do? Further, I think the comparison with IQL is unfair. The author directly uses the constraint mechanism of IQL for online training, while the SPOT algorithm reduces the constraint coefficient. Since conservative updates are unnecessary during online training, this obviously suppresses the ability of IQL-based online-trained policies.\n5. In Section 5.4, the authors compare the computation cost of inference of several algorithms and illustrate the inference speed advantage of SPOT. In my opinion, the advantage of SPOT's inference speed should exist in many algorithms, such as some algorithms based on Pluggable Regularization (such as CQL, BRAC, etc.) and most model-based offline reinforcement learning algorithms (such as MOPO). Is my statement correct? If so, I think the description in Section 5.4 is a bit overclaimed: we shouldn't consider it a SPOT-exclusive feature.\n\n I will consider increasing the score if the authors clarify the above questions or give a better experiment design to demonstrate the proposed method.", " This paper proposes supported policy optimization (SPOT), which introduces a \"pluggable\" regularization term applied to an estimated behavior policy. In the implementation, following VAE, the authors use the ELBO to estimate the behavior policy, and following TD3+BC, add a normalization term to the policy loss. The authors emphasize that SPOT is computationally efficient at inference, obtains excellent performance, and enables effective online fine-tuning. Strengths:\n+ This paper is well-written.\n+ The proposed method is simple and intuitive. \n+ The authors conduct extensive experiments to show the effectiveness of SPOT. \n\nWeaknesses:\n+ The paper lacks theoretical analysis.\n+ The method is somewhat incremental. 1. The authors state that \"our method benefits from a closer connection between theory and algorithm\". What does the author mean by \"theory\"? \n2. I think there is no essential difference between Equation 4 and Equation 3. Furthermore, the authors approximate Equation 4 and get Equation 5. Is the theoretical guarantee still satisfied? yes", " The paper presents a regularization method for offline reinforcement learning called SPOT. The main idea is a policy constraint inspired by a support constraint on the behavior policy. The method is evaluated extensively on standard offline RL benchmarks and achieves state-of-the-art results. Strengths: The idea of the support constraint looks simple and effective as in Sec. 4.1, and the empirical results shown in Figure 1 and Table 2 demonstrate its effectiveness for practical problems. From the empirical results, this work achieves new state-of-the-art results on standard offline RL benchmarks and is beneficial to the related communities.\n\nWeakness: From my perspective, the effectiveness of the proposed algorithm is mainly demonstrated by empirical experiments. It would be more complete if the empirical improvement over previous methods could be justified with some theoretical analysis. A minor suggestion is on the symbols in $\log\pi_\beta(\pi_\phi(s)|s)$ (4), as $\pi_\beta$ is the probability $\in [0,1]$ while $\pi_\phi$ actually is the action $\in \mathcal{A}$. It may cause some confusion at first glance. The authors have addressed the societal impact of this work.", " The paper considers the offline Reinforcement Learning setting, and introduces a loss that allows any off-policy RL algorithm to learn from offline data. The loss addresses the issue of out-of-distribution actions being sampled by the actor being trained. The loss is obtained by first training a (conditional) Variational Autoencoder on state-action tuples. Then, its ELBO loss (used for training the VAE itself) is used as part of the actor training, to keep it close to the support of actions in the dataset. The critic training loss does not seem to have to be modified (Equation 1 is untouched), and leverages the actor to sample actions (so, indirectly, it samples actions in the support of the dataset).\n\nAn empirical evaluation in challenging environments shows that the proposed method, SPOT (the loss described above + TD3), outperforms all the baselines being considered. The results are sometimes moderate, sometimes much more significant. The proposed method is easy to implement, and indeed does not need to change fundamental aspects of an off-policy RL algorithm to make it offline.\n\nStrengths (quality, clarity, somewhat the significance):\n\n* The paper is well-written and easy to read. It flows naturally.\n* The proposed method is sound, and its derivation from constraint-based optimization is intuitive and motivates the resulting loss well.\n* The empirical evaluation is thorough, with the impact of hyper-parameters being well-studied. The environments being used are challenging (not just toy environments), and many baselines are considered.\n* Source code is provided! This is a rare event, and greatly helps reproducibility and answering questions such as \"does the gradient flow to the actor through the VAE?\" (the answer is yes). The code is also clean, self-contained and easy to understand.\n\nThere is no big weakness in this work, even though its novelty is maybe a bit lower than the other aspects listed in Strengths. While the proposed method is novel, and quite distinct from the existing literature, it is still using a VAE in combination with some off-policy RL algorithm. BCQ uses the VAE to sample actions (while this work uses the VAE as part of the actor loss). BRAC estimates the behavior policy and uses the KL divergence between the actor and the behavior policy to regularize learning. BEAR is inspired by constraint-based optimization like this work, but uses the maximum mean discrepancy to keep the actor close to the actions in the dataset. So, overall, the originality of this work, albeit satisfactory, cannot be stated as a strength.\n\nMinor comments:\n\n* Section 2 in the appendix is quite interesting, and if room can be made in the paper, would deserve to be in the main text\n* The description of how $\lambda$ decreases over time in the online fine-tuning setting is interesting and should also be in the main text (it is in the appendix currently)\n\n**Author response**\n\nThe authors mention that they will improve the conclusion of the paper and extend the main text, which were my two minor comments. The other reviewers also seem to be leaning towards acceptance, so I maintain my score of accept. I don't have any questions for the authors. There does not seem to be any negative societal impact of this paper that should be discussed. The proposed algorithm outperforms the baselines in every case, and is overall strong. As such, the lack of a \"Limitations\" section, for instance, is not problematic for this paper. A mention of possible areas of future improvement would have been nice in the conclusion, which for the moment sounds a bit like an advertisement." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 2, 4 ]
[ "63IidCfA4L9", "PpinE77wq0F", "Kic2JGjLD98", "b4M_YuTp0G8", "J9YqTTx39u", "8Xi9NjFlG7d", "nips_2022_KCXQ5HoM-fy", "YSlemDntD7o", "3Wy8C7COq9", "9RFg_XsEz2F", "4QhzeeRzkMI", "8Xi9NjFlG7d", "Kic2JGjLD98", "nips_2022_KCXQ5HoM-fy", "nips_2022_KCXQ5HoM-fy", "nips_2022_KCXQ5HoM-fy", "nips_2022_KCXQ5HoM-fy" ]
nips_2022_DhmYYrH_M3m
Trajectory-guided Control Prediction for End-to-end Autonomous Driving: A Simple yet Strong Baseline
Current end-to-end autonomous driving methods either run a controller based on a planned trajectory or perform control prediction directly, which have spanned two separately studied lines of research. Seeing their potential mutual benefits to each other, this paper takes the initiative to explore the combination of these two well-developed worlds. Specifically, our integrated approach has two branches for trajectory planning and direct control, respectively. The trajectory branch predicts the future trajectory, while the control branch involves a novel multi-step prediction scheme such that the relationship between current actions and future states can be reasoned. The two branches are connected so that the control branch receives corresponding guidance from the trajectory branch at each time step. The outputs from two branches are then fused to achieve complementary advantages. Our results are evaluated in the closed-loop urban driving setting with challenging scenarios using the CARLA simulator. Even with a monocular camera input, the proposed approach ranks first on the official CARLA Leaderboard, outperforming other complex candidates with multiple sensors or fusion mechanisms by a large margin. The source code is publicly available at https://github.com/OpenPerceptionX/TCP
Accept
The paper got split reviews: 1x reject, 1x borderline reject, 1x weak accept, 1x accept. All reviewers found the impressive performance on the challenging CARLA leaderboard to be a major strength of the paper. Reviewer concerns stem from two factors: a) not enough technical contribution to warrant publication at NeurIPS (but the results are still publication-worthy at more domain-specific conferences, e.g., ICRA, IROS), and b) the bulk of the impressive performance (19 points) coming from the ensembling heuristic and only 6 points coming from the proposed architectural modifications (shared backbone, multi-step control, temporal module and trajectory-guided attention). The meta-reviewer read through the paper, the reviews, the author response, and the reviewer discussion. For the meta-reviewer, the impressiveness of the empirical results on a well-studied and important benchmark dominates the above reviewer concerns. As long as there is clear attribution and some understanding as to where this impressive performance improvement is coming from, the community will benefit from being aware of the results even though the proposed method may not be as technically deep as typical NeurIPS papers. The authors are encouraged to include the additional experiments conducted during the rebuttal phase in the final version of the paper, in particular the ones that help distill out the contribution of the different parts of the proposed system.
test
[ "VLQgyJR6lxx", "rRmrTi7eQp", "i5jmgA1FPZ", "2H8wWONoaf-", "pFz88kim1DH", "RMceg6fcoUj", "iBcyEmrT68A", "JdsCqKxsVJR", "F5cIuqKUX_X", "fRueL2xtGWn", "5soHQOeZmw", "sIIzUHVrgSC", "amjsb3Bjx9O", "vDuRZrui4R", "yAE_ePKKe6X", "2uoCsQDqU-v", "Dpg9GwhGgXy", "eS4cC7EOJh8", "rTkwPauIJq" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the follow-up discussion.\n\n> Q3: Yes, I have seen the experiments on the fusion weight, but the unexplored part (and probably more critical, given the experiments with alpha values) is the \"situation\" detector. Seems like a very hard-coded rule in an otherwise learned approach.\n\nAgreed. Developing a learning-based approach to involve the fusion part in the end-to-end training paradigm and replace the rule-based one is indeed an interesting and promising direction. A few methods could be explored towards this, e.g., (1) modeling these two branches with a probabilistic uncertainty, (2) learning a discriminative model to score the degree of specialty of the two branches respectively, (3) incorporating a gating network as in the (Conditional) Mixture-of-Experts (MoE). Thank you for the suggestion and we will add this into the future work part.", " Q2: The title is more of a personal thing, but the message I get from your paper is that \"Trajectory Guided Control Prediction\" is one of the branches (the one with the waypoints), combined with pure reactive control, and that the combination is key. I think a title reflecting that would be more accurate, something like \"Fusing Planning and Reactive Control for ...\", but that is just my opinion, this is not a hard comment.\n\nQ3: Yes, I have seen the experiments on the fusion weight, but the unexplored part (and probably more critical, given the experiments with alpha values) is the \"situation\" detector. Seems like a very hard-coded rule in an otherwise learned approach.\n\nQ4: Thanks for including limitations.", " Thanks for the follow-up discussion. We have updated detailed infraction statistics in the table above in **A1**. We can observe that trajectory-attended variants have higher *Collisions vehicles* rates, while other penalties (e.g., *Collisions layout* and *Off-road infractions* rates) are all lower compared with the original Trajectory-Only models. We hypothesize that using future waypoints for the attention weight calculation makes the model focus more on location-related and static information (e.g., curbs and lanes). This also validates our motivation to utilize trajectory waypoints to guide the multi-step control prediction. ", " Thanks to the authors for their detailed response. It is interesting to see the trajectory-attended model attains significantly higher route completion despite having worse infractions. Can the authors further provide a detailed infraction comparison between the two variants? E.g., what kind of infractions does the trajectory-attended model have more of compared to the original one. I would also like to thank the authors for adding the limitations discussion to the paper. ", " Thanks for the follow-up discussion. We further address each concern to clarify technical details below.\n\n> In A1, it is mentioned that the GRU in the trajectory branch can be replaced by MLPs to output all waypoints at once. This contradicts L182-183 which states that future waypoints are obtained in an auto-regressive fashion, inspired by [41]. Since an auto-regressive architecture is used, the waypoint output at each future timestep takes into account the prediction from the previous timestep. Can the authors provide some clarification about the GRU in the trajectory branch in Fig. 2? Is it similar to the waypoint prediction module of [41]? This would also help with a better understanding of `Traj-Only-multistep` and `MTL-2heads` architectures.\n\nYes, the GRU module to predict waypoints is similar to [41,10,27]. 
It is one implementation choice among many for the **policy head**. The policy head could be a GRU to auto-regressively obtain future waypoints one by one (L182-183), or MLPs to output all waypoints at once (A1). We choose the GRU as the candidate, as is commonly done in [41,10,27].\n\nAs we stated in A1, IMHO, it is not reasonable to directly add the future feature loss to each step inside the GRU, since the output (four waypoints) is **only for the policy of the current step**. The future feature is for future policy output. During the rebuttal, as requested, in order to add the future feature loss to the trajectory branch, we change the role of the original GRU to the temporal module, and use an MLP as the policy head. \n\nSpecifically, the input to the GRU is the concatenation of the current feature $\rm{j_t^{traj}=MLP(h_t^{traj})}$, a waypoint and the target point (c.f. the original GRU input is a waypoint and the target point, as in [41,10,27]). This modification inherits from the temporal module design in the control branch (L209-211). In this case we treat a single waypoint as the policy output for one step, and we can add the future feature loss. The `Traj-Only-multistep` and `MTL-2heads` variants are designed based on this approach.\n\n> The results in A2 and Tables 3, 4 indicate that the main performance gain comes from multi-step control prediction, shared backbone between trajectory & control branch, and ensemble. The gains from the situation-based fusion scheme ($\alpha=0.3$) and trajectory-guided attention seem marginal.\n\n\n**Gain from situation-based fusion scheme.** As we stated in A2, **setting $\alpha$ to 0.3 $\neq$ our situation-based fusion scheme**. One can design a more tailored rule both for the choice of $\alpha$ and the situation criterion to achieve better performance (`Row2 vs Row3` in the Table of A2).\n\n\n**Gain from trajectory-guided attention.** Trajectory and control prediction are two closely related tasks which share common underlying representations while maintaining intrinsic differences. Thus it is reasonable to extract **shared features at an early stage** and have two interacting branches later (which is what this work proposes, `Row1` in the Table below). Separate backbones may lead to different feature representations and confuse the control branch when additional attention applies. IMHO, an interaction discarding the shared underlying representations (new experiment in the rebuttal, `Row2` in the Table below) is inappropriate.\n\n||attention gain(DS)|\n|:------------------------------------:|:------:|\n|shared backbone -> attention -> fusion (ablations in paper)|3.21 (Table3, R3 vs R4)|\n|fusion -> attention -> shared backbone (exp. in rebuttal)|1.07 (TCP-SB w/o att vs TCP-SB)|", " Thanks for the follow-up discussion. We further address each concern to clarify technical details below.\n\n> The multi-step control prediction aims to match the state distribution between expert and policy but the training data would still be IID. I suggest that the argument about mitigating the IID assumption should be removed from Sec 3.2.2.\n\nWe agree with the Reviewer that each training sample is IID. We appreciate your suggestion and will revise accordingly.\n\nPlease note that, since we predict multiple steps, and the temporal module involves the previous action to predict the next one, the corresponding expert data (GT) consists of continuous states and actions, which are not IID within the sequence. To this end, we wrote in the paper that it somehow mitigates the IID assumption. 
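To make this concrete, a minimal PyTorch-style sketch of the multi-step rollout described above (feature/action sizes and all names are illustrative, not the exact architecture):

```python
import torch.nn as nn

class MultiStepControl(nn.Module):
    # Sketch: a GRU temporal module rolls the current feature forward,
    # conditioned on the previously predicted action; a policy head emits
    # one control action per step. The future latent features and actions
    # are both supervised against the corresponding expert quantities.
    def __init__(self, feat_dim=256, act_dim=3, steps=4):
        super().__init__()
        self.steps = steps
        self.temporal = nn.GRUCell(act_dim, feat_dim)
        self.policy_head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, act_dim))

    def forward(self, feat):                   # feat: (B, feat_dim)
        h = feat
        actions, future_feats = [self.policy_head(h)], []
        for _ in range(self.steps):
            h = self.temporal(actions[-1], h)  # latent state of the next step
            future_feats.append(h)
            actions.append(self.policy_head(h))
        return actions, future_feats
```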
\n\n> I agree that the future waypoints of the ego-vehicle are indicative of the presence of the road but their association with curbs and lanes is not clear. I suggest that this argument should be removed from L202-203.\n\nThanks for the suggestion. We agree that there are no quantitative associations, and we use \"e.g., curbs and lanes\" to give concrete examples about the intuition behind the guided attention module in L202-203. In fact, we could observe that curbs and lanes are salient parts in Fig. 4 to validate this qualitatively. We are pleased to take suggestions and revise accordingly.\n\n> It is hard to interpret the provided results since the CARLA versions are different (this is also the case for LBC comparison in the FASNet paper). So, it is important to understand the differences in the capabilities of FASNet and TCP.\n\nAgreed and thanks. Here we make some clarifications on differences/capabilities between FASNet and TCP. We will add these in the revised manuscript accordingly.\n\n- **Multi-step actions.** Previous predicted action has **no influence on the future states and control predictions** in FASNet. The future control action prediction of FASNet is mainly designed to take the weighted average (while we do not use the future prediction results during testing). The motivation and detailed approach are different from TCP. \n- **Recurrent architecture.** FASNet uses another video prediction PredNet [a] to predict/extrapolate future images. The recurrent architecture is inside PredNet, which **is pretrained without any interactions with the control prediction** (Sec. 4.1).\n- **Jointly learns waypoint and control.** FASNet predicts positions and headings of the vehicle **only as auxiliary tasks**, and these predictions **are utilized only at the training time** (Sec. 3.2). The trajectory branch and multi-step control branch in TCP have interactions in between, and the policy outputs from both branches are utilized and fused.\n\n[a] William Lotter et al. Deep Predictive Coding Networks for Video Prediction and Unsupervised Learning. ICLR, 2017.\n", " A3: The multi-step control prediction aims to match the state distribution between expert and policy but the training data would still be IID. I suggest that the argument about mitigating the IID assumption should be removed from Sec 3.2.2.\n\nA4: I agree that the future waypoints of the ego-vehicle are indicative of the presence of the road but their association with curbs and lanes is not clear. I suggest that this argument should be removed from L202-203.\n\nA9: It is hard to interpret the provided results since the CARLA versions are different (this is also the case for LBC comparison in the FASNet paper). FASNet is an important baseline because:\n- it also predicts future actions for multiple timesteps (Sec 3.1)\n- it also uses a recurrent architecture for future state prediction (Fig. 2)\n- it also jointly learns waypoint and control prediction\n\nSo, it is important to understand the differences in the capabilities of FASNet and TCP.", " I appreciate the additional ablations and clarifications provided by the authors.\n\nIn A1, it is mentioned that the GRU in the trajectory branch can be replaced by MLPs to output all waypoints at once. This contradicts L182-183 which states that future waypoints are obtained in an auto-regressive fashion, inspired by [41]. Since an auto-regressive architecture is used, the waypoint output at each future timestep takes into account the prediction from the previous timestep. 
Can the authors provide some clarification about the GRU in the trajectory branch in Fig. 2? Is it similar to the waypoint prediction module of [41]? This would also help with a better understanding of `Traj-Only-multistep` and `MTL-2heads` architectures.\n\nThe results in A2 and Tables 3, 4 indicate that the main performance gain comes from multi-step control prediction, shared backbone between trajectory & control branch, and ensemble. The gains from the situation-based fusion scheme ($\alpha=0.3$) and trajectory-guided attention seem marginal.", " We thank the Reviewers for their helpful and detailed comments on our work. \n\nTCP is a simple yet effective vision-based solution for the end-to-end autonomous driving framework. We get a unanimous agreement from all four Reviewers that \"the idea is clear and presentation is easy to follow. The pipeline is novel.\" Most importantly, we achieve an impressive result with a large improvement over the second-best method on the public CARLA benchmark leaderboard, with simple camera input alone. \n\nWe have added more ablative experiments in the rebuttal and clarify some technical details. Please see each response below. Thanks.", " **Q3: How does the multi-step control mitigate the IID assumption.**\n\n**A3:** As we discussed in **A1**, our multi-step control prediction aims to match the future latent features and action predictions with those from the expert. Our model has the ability to reason about which to-be-taken action could match the future states with the expert, mitigating the IID assumption to some extent.\n\n**Q4: How does the trajectory branch help to incorporate static layout information (e.g., curbs).**\n\n**A4:** The static layout information (lanes & curbs) needed for the driving policy is tightly related to the location of the vehicle at a certain time step. As the trajectory branch predicts waypoints for each future step, it contains rich information about possible future ego locations. Therefore, the trajectory branch can help the control branch to focus on the important static information to better generate policy output at future steps.\n\n**Q5: Why does TCP have lower RC compared to LiDAR-based methods. Is it because TCP gets blocked more often, or because taking the maximum of brake values leads to an overly conservative policy.**\n\n**A5:** In Table 2 in the Supplementary, TCP gets a higher agent block penalty than [10,43]. As we discussed in Sec. 4.2, LiDAR-based methods have a better object detection ability to avoid blocking. When the agent stops for a long time, they would move slowly if no obstacles are detected ahead. An overly conservative policy is also possible in theory, but we do not observe such a situation in our experiments.\n\n**Q6: The evaluation protocol on local validation is different from the online leaderboard submission. The authors should include them in the paper. How is $\alpha$ selected and is there a separate validation protocol for tuning these values.**\n\n**A6:** As we mention in L298, we use the same validation routes as LAV[10]. For the choice of $\alpha$, in order to achieve a better performance, we choose a non-constant $\alpha$ value over action types and situations for the online leaderboard submission. We also discuss this in **A2**. This setting is chosen by analyzing the results on our validation routes. We will add these elaborations accordingly. \n\n**Q7: If PID controllers could be tuned to better follow the trajectory.**\n\n**A7:** PID parameters and several thresholds are already carefully tuned in [15,41]. 
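For reference, the generic PID update behind this tracking step is sketched below (the gains and the control period follow [15,41] and are placeholders here, not the tuned values):

```python
class PID:
    # Generic PID update (illustrative; kp/ki/kd and the control period dt
    # are placeholders, not the tuned values from [15,41]).
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, None

    def step(self, err, dt=0.05):
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv
```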
We follow the setting in [15]. In this rebuttal, we also tune the parameters of the PID controllers on some validation routes provided by the official CARLA leaderboard; the chosen routes are from different towns compared with the ones for ablations. Specifically, we make two sets of PID parameters so that we can alternate according to the current situation (whether the vehicle is turning), similar to our fusion mechanism. \nAlthough the parameters are tuned carefully and two sets of parameters are obtained, **the performance improvement is trivial (Driving Score from 28.29 to 30.63)**. This is because the environment for tuning is different from that for evaluation (e.g. road topology and curvature of turnings). \nThis also **validates our motivation to avoid onerous parameter tuning** (which may still perform poorly in a new environment).\n\n**Q8: Implementation details regarding the GRU modules, MLPs, and weights for loss terms.**\n\n\n**A8:** We have added details of the model structure, parameters of the PID controllers, and loss weights in the revised Supplementary. Please check the **Implementation Details** section (Sec. B).\n\n\n**Q9: It would be helpful if the authors could provide a comparison to FASNet[28]. This would greatly improve the paper.**\n\n**A9:** We do not exactly implement FASNet[28] due to limited time; however we address the Reviewer's concerns below:\n1. The original FASNet has inferior performance compared with LBC[11]. But TCP surpasses LBC and its follow-up works including WoR[12] and LAV[10] by a large margin on the leaderboard.\n2. Since FASNet is evaluated on the `NoCrash` benchmark, we collect 6 hours of data in Town01 under the `NoCrash` training protocol (FASNet uses 100 hours), and train our Control-Only model and TCP model on it. We provide results on the most challenging setup of the `NoCrash` benchmark: new town, new weather, with dense traffic, in the table below. TCP is significantly better than FASNet. \n\n|NoCrash-New Town,New Weather,Dense|*FASNet[28]*|Control-Only|**TCP**|\n|:------------------------------------:|:------:|:------------:|:-------:|\n|Success Rate (%)|*32*|24|**60**|\n\n**Q10: Visualize GradCAM attention maps for the multi-step control prediction branch without the trajectory-guided attention module.**\n\n\n**A10:** We have added visualization examples and discussions in the revised version of the Supplementary. Please check the **Experiments** section (Sec. C.3).\n\n**Relevant References**. We will add them in the revised manuscript.", " Dear Reviewer cv3D,\n\nThanks for the detailed and helpful review; we really appreciate it. The main concerns in the review feedback concentrate on technical clarifications and additional experiments to verify the claims. We address each comment in detail below.\n\n**Q1: Difference between two GRUs in control and trajectory branches.** \n\n> This corresponds to requested ablations on 1) trajectory-Only model with feature loss applied to each prediction timestep, 2) a single GRU module to predict both future waypoints and multi-timestep control predictions.\n\n**A1:** Thanks for the comment. We explain it from two perspectives below.\n\n**Technical clarification.** We might argue that the two GRUs have **different roles**. For the trajectory branch, the policy output for the **current step** is a series of waypoints (4 waypoints all together), without involving planning at future steps. 
The GRU in the trajectory branch is the policy head, and it can be replaced by other non-recursive implementations such as MLPs to output waypoints all at once. The GRU in the control branch works as the **temporal module**, aiming to encode latent dynamics for the future step based on the current feature and action. The control branch makes multiple policy outputs for future steps with the temporal module. Our temporal module aims to match the policy-related future features with those from the expert. The temporal module cannot generate future states all at once.\n\n**Additional ablations.** Since the GRU in the trajectory branch is just the policy head for the current step, it is not reasonable to directly add future feature supervision at each step. Therefore, we modify its role from a policy head to the \"temporal module\", and the policy head is now implemented using an MLP. Specifically, the trajectory branch only regresses one waypoint as the policy output for each step. The modified GRU takes in the current feature and the single waypoint, and generates the latent feature for the next step, based on which the policy head (MLP) regresses the next waypoint. In this way, the future feature loss could be added to each step. We refer to this model as **`Traj-Only-multistep`**. Another policy head for control (implemented by another MLP) is added to Traj-Only-multistep so that we get both the waypoint and the control action with **a single GRU**, and we call it **`MTL-2heads`**. The performance of these variants is listed below.\n\n||Driving Score|Route Completion|Infraction Penalty|\n|:-------------------------:|:-----:|:-----:|:-----:|\n|Trajectory-Only (Original)| 28.29 |58.11|0.50|\n|Traj-Only-multistep|26.30|60.78|0.46|\n|MTL (2nd row in Table 4)|48.27|81.62|0.60|\n|MTL-2heads|43.42|79.51|0.56|\n\nTraj-Only-multistep performs slightly worse than the original Trajectory-Only model. This is probably because the policy head now predicts only one waypoint, making it less stable. Combining the two branches with a shared GRU harms the performance. Though we combine these two branches in an MTL approach in the original TCP model, the two tasks still have intrinsic differences. Therefore, using different MLP layers at the last stage alone to generate different outputs could hinder performance.\n\n**Q2: Does the Control-Only model predict only single-step control values? If yes, compare to new experiments of direct ensemble (simple average), an ensemble of multi-step control prediction and trajectory prediction without trajectory-guided attention.**\n\n\n**A2:** \n\n||Driving Score|Route Completion|Infraction Penalty|\n|:-----------------------------------------:|:-----:|:-----:|:-----:|\n|Ensemble ($\alpha$ = 0.3, 1st row in Table4)|45.03| 79.30 |0.59|\n|Ensemble ($\alpha$ = 0.5)| 44.30 |80.44|0.60|\n|Ensemble (non-constant $\alpha$)| 50.98|80.82|0.64|\n|TCP-SB w/o traj att ($\alpha$ = 0.3)|51.39|80.26|0.63|\n|TCP-SB w/o traj att ($\alpha$ = 0.5)|46.87|83.68|0.59|\n\nYes, it does. As requested, we add new ablations for *direct ensemble* (2nd row in the table above), and *ensemble of multi-step control prediction and trajectory prediction w/o trajectory-guided attention* (last two rows in the table above). Our purpose is to provide a general and flexible fusion mechanism (L360). We **do not claim that a constant $\alpha = 0.3$ and the current turning-based criterion are optimal**. Sometimes $\alpha=0.5$ has similar performance to $\alpha=0.3$. It is reasonable since a large portion of steps belongs to the non-turning situation, so the control branch is not utilized enough with a small $\alpha$. Thus, we design a sophisticated fusion rule to keep a non-constant $\alpha$ (3rd row in the table above). In this case, when *traj-specialized*, $\alpha$ is 0.5 for steer and throttle to utilize the control branch better, and the maximum brake value is taken. And $\alpha$ is set to 0.3 for all actions when *control-specialized*. We choose routes from different towns from the ones used for ablations from the CARLA leaderboard repo to determine this rule.", " **Q2: Incorporating DAgger to improve the driving of the proposed agent.**\n\n**A2:** Traditional DAgger needs the policy output from the expert to be the same as that from the student model. \nIn this work, the expert predicts a single-step action at each step, while the student model requires the trajectory and multi-step actions. \nThese supervisions are not available since the expert does not interact with the environment when we adopt DAgger. Therefore, conventional DAgger can **not** be applied directly. In our preliminary experiments (not shown in the paper), we designed an approach to overcome the caveat in DAgger, by letting the expert take over the vehicle for certain steps to provide the required supervision. \nResults show that the refinement is feasible, but there are still many details to be determined or optimized, which is out of the scope of this work. Thanks for the great advice.\n\n\n**Q3: How well the temporal module can predict the future. For example by running the open-loop action predictions in CARLA or simulating a vehicle model.**\n\n**A3:** As we do not reconstruct images from the temporal module, we provide the action and feature errors for the current- and future-step predictions on the validation dataset below. From these new results, we conclude that TCP can generate satisfactory predictions with the proposed temporal module.\n\n||Steer L1 Error|Throttle L1 Error|Brake L1 Error|Feature MSE|\n|:------------:|:--------------:|:-----------------:|:--------------:|:-----------------:|\n|Current Step|0.026|0.097|0.057|0.614|\n|Future Step1|0.029|0.139|0.113|0.633|\n|Future Step2|0.033|0.185|0.142|0.742|\n|Future Step3|0.035|0.209|0.151|0.914|\n|Future Step4|0.039|0.232|0.159|1.120|\n\n\n**Q4: Several approaches (pure pursuit/curvature-based feedforward/MPC) for lateral trajectory tracking; such a controller would change the finding and result in a different conclusion.**\n\n**A4:** Thanks for the comment. Note that the PID parameters, the choice of the target angle for lateral control, and other hyperparameters are carefully tuned in [15,41] already. We follow the setting in [15]. \nAs requested, in this rebuttal we provide experiments using Pure Pursuit for lateral control. \nThe experimental results are shown below.\n\n||Driving Score|Route Completion|Infraction Penalty|\n|:--------------------------------------:|:-------------:|:----------------:|:------------------:|\n|Trajectory-Only (PID)|28.29|58.11|0.50|\n|Trajectory-Only (Pure Pursuit)|20.24|55.97|0.37|\n|Trajectory-Only (Situation-based PID)|30.63|66.13|0.53|\n\n> Experiment settings: We tune the parameters of the PID controllers on the validation routes provided by the official CARLA leaderboard; \nthese routes are from different towns compared with the ones used for our ablation study. 
\nFor `Trajectory-Only (Situation-based PID)`, we have two sets of PID parameters so that we can alternate according to the current situation \n(whether the vehicle is turning in this case), similar to the proposed fusion mechanism. \n\nOne can observe that replacing `Trajectory-Only (PID)` with `Pure Pursuit` leads to **worse performance, from 28 degrading to 20.** \nEquipping the pipeline with sophisticated controllers needs heavy engineering work, which is not the focus here. \nWe would like to avoid the tuning process and keep the virtue of simplicity for the end-to-end autonomous driving framework. \"Trajectory planning is all you need\", similar to some work by Lyft, may be suitable in a modular design where the planner has map information and accurate perception results.\n\n\nNonetheless, we tune the parameters of the PID controllers on other routes carefully and investigate whether such an experiment would result in a different conclusion. We can conclude that **the performance improvement is trivial** (`Situation-based PID` vs `PID`, 30 vs 28). \nThis is because the environment for parameter tuning is different from that for evaluation (e.g. the road topology and the curvature of turnings). \nThis new observation **validates the motivation to avoid onerous parameter tuning.** Besides, heuristic tuning may still perform poorly in new scenarios.\n\n\n**Limitations.** We will shift the Limitation and Impact parts from the Supplementary to the Main paper in the revised manuscript.", " Dear Reviewer 6yad,\n\nThank you for your comments. We address your concerns on the weaknesses below.\n\n**Q1: Weakness: Interesting links between the temporal module and world models. The proposed training approach for the temporal module might not work well for world models. Potentially RSSM could work better.**\n\n**A1:** Thanks. \nThe proposed temporal module shares some spirit with world models. World models [see references **a,b,c** below] formulate the latent dynamics of the environment, which can then be used by **model-based RL methods.** \nIn this work, we formulate the method in the Imitation Learning (IL) domain; we do not aim to fully model the complex driving environment. \nThe temporal module is designed to match future latent feature representations with the ones from the expert based on the current feature and action. \n**Our policy head does not interact with the temporal module in the way model-based RL interacts with world models.**\nSince the temporal module is differentiable, our model has the capability to reason about what current actions could match (i) future states and (ii) actions with the expert. To some extent, this mitigates the IID assumption problem for the imitation learning task in an easier way. \n\nWe might argue that our work is an IL-based method. Our temporal module spans a short horizon into the future (4 steps in our case) with direct supervision on future features. Different from world models, **a simple GRU module with deterministic states suffices to perform this task.** \n\n\n**Potential Hybrid Adaptation such as RSSM.** Agreed. In this rebuttal, we add discussions on the RSSM in Dreamer [b], where the world model in [b] is built on a recurrent state space model [a].\nThe world model in Dreamer serves as an environment to provide long-horizon trajectories, and it has to capture the stochastic nature of the world. \nIt needs both stochastic and deterministic components. 
However, as world models are mainly utilized for reinforcement learning, how to devise and utilize an RSSM-like world model for imitation learning in more complex environments (like autonomous driving) is an interesting topic to explore.\n\nWe will add discussions about the links between the temporal module and world models in the revised manuscript. \n\n***If this paper fits NeurIPS***. Several pioneering works [d,e] (dating back to the 1980's) adopting neural networks for end-to-end autonomous driving were published at NeurIPS. In recent years, there are also published works [f,g,h,i] in the NeurIPS main proceedings. IMHO, this work fits the NeurIPS audience.\n\n\n> [a] Danijar Hafner et al. Learning latent dynamics for planning from pixels. ICML, 2019.\n>\n> [b] Danijar Hafner et al. Dream to control: Learning behaviors by latent imagination. arXiv, 2019.\n> \n> [c] David Ha et al. World models. NeurIPS, 2018.\n> \n> [d] Dean Pomerleau. Alvinn: An autonomous land vehicle in a neural network. NeurIPS, 1989.\n> \n> [e] Yann LeCun et al. Off-road obstacle avoidance through end-to-end learning. NeurIPS, 2007. \n> \n> [f] Yunpeng Pan et al. Learning Deep Neural Network Control Policies for Agile Off-Road Autonomous Driving. NeurIPS, 2017.\n> \n> [g] Andrew Spielberg et al. Learning-In-The-Loop Optimization: End-To-End Control And Co-Design of Soft Robots Through Learned Deep Latent Representations. NeurIPS, 2019.\n> \n> [h] Arthur Delarue et al. Reinforcement Learning with Combinatorial Actions: An Application to Vehicle Routing. NeurIPS, 2020.\n> \n> [i] David Acuna et al. Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation. NeurIPS, 2021.\n", " Dear Reviewer X2A2,\n\nThank you for commenting on the strengths and contributions in the manuscript. We address each of the Reviewer's questions below.\n\n\n**Q1: Originality: In any case, the paper brings novel ideas; it's better to extend the search to other application domains and include them in the related work.**\n\n**A1:** Thank you for bringing up the paper by Pokle et al. in robotics; we will add it to the Related Work section alongside other potential applications.\n\n\n**Q2: Clarity: Confusion with the title. The final method is a combination but the title indicates otherwise.**\n\n**A2:** The title *Trajectory-guided Control Prediction* represents the way we fuse and obtain a better feature representation from the two branches (trajectory and control). The general idea is to introduce a novel end-to-end systematic philosophy for the ultimate task (controlling) in autonomous driving - whether it is a trajectory-guided control or/and end-to-end direct control.\nWe are pleased to discuss with the Reviewer and take suggestions about the title.\n\n\n**Q3: Quality and Questions: add ablation on the fusion mechanism - \nan evaluation of the hardcoded heuristic with two types of driving cases.\nHow to identify the type of driving mode to select a different fusing criterion.**\n\n\n**A3:** Thanks. Though directly averaging the two branches achieved good performance, we provide an example of a more flexible and general scheme to fuse the two branches (L360). More tailored if-then schemes can be further designed as well. In our experiment, we choose **whether the vehicle is turning** as the criterion of the *situation*, as we mention in L282-284. If the vehicle is turning (half of the steering actions within the last 1 second are larger than 0.1), the *situation* is *control specialized*, otherwise *trajectory specialized*. 
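Read as pseudo-code, this rule amounts to the following sketch (variable names are ours and the exact weighting of the non-specialized branch is illustrative):

```python
def fuse_actions(a_ctrl, a_traj, recent_steers, alpha=0.3):
    # Situation detector from the text: "turning" if at least half of the
    # steer commands within the last second exceed 0.1 in magnitude.
    turning = sum(abs(s) > 0.1 for s in recent_steers) >= len(recent_steers) / 2
    if turning:  # control-specialized: weight the control branch more
        return alpha * a_traj + (1.0 - alpha) * a_ctrl
    # trajectory-specialized: weight the trajectory branch more
    return alpha * a_ctrl + (1.0 - alpha) * a_traj
```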
Under this criterion, we have conducted experiments on different choices of the fusion weight $\\alpha$ (aka the heuristic parameter) in the **last part of Sec. 4.4** and present them in Fig. 5. The results on the importance of the fusion scheme are shown in the **last row of Table 3**. \n\n\n\n**Q4: Limitations. A clear evaluation of the error cases would be informative. What are the common failures?**\n\n\n**A4:** Thank you for the comments. As requested, we have provided additional examples and visualizations in the **Limitation** section of the Supplementary (due to the space limit of the Main paper) in our revised version (Sec. D.1.1). In general, typical failure cases include:\n\n1. Vehicles initially outside the ego agent's front view rush into the ego path at high speed, causing a collision when emergency braking fails. \n2. The ego agent fails to consider the possible trajectories of other vehicles, resulting in blocking or collisions.\n\n**Analysis.** The first case is caused by the limited view of the monocular camera; hence, a straightforward future direction is to add multi-view cameras or a LiDAR input to our agent. The second case arises because our model lacks the ability to reason about the trajectories of other vehicles without an explicit prediction module; therefore, another possible direction is to extend the multi-task learning framework with detection and motion prediction modules, and combine their results with our planned trajectory.", " Dear Reviewer Zcj9,\n\nThank you for appreciating our work. We address the reviewer's concerns below.\n\n**Q1: Weakness. The Trajectory-Only variant should have a situation-based PID controller.**\n\n**A1:** Thanks for the suggestion. In this rebuttal, we implement the `Trajectory-Only` model with `Trajectory-attended` image features and test both with the original `PID` and the `Situation-based PID`. The experimental results are listed below.\n\n| | Driving Score | Route Completion | Infraction Penalty | Collisions vehicles | Collisions layout | Off-road infractions | Agent blocked | Red light infractions |\n|:------------------------------------------:|:-------------------:|:-----------------:|:--------------------:|:-------------------:|:-----------------:|:--------------------:|:-------------:|:---------------------:|\n|Trajectory-Only (Original PID)| 28.29 | 58.11 | 0.50 | 0.85 |0.77 |0.74| 0.77|0.41|\n| Trajectory-Only (Situation-based PID ) | 30.63 | 66.13 | 0.53 |0.68|0.32 | 0.54 | 0.54 | 0.59 |\n| Trajectory-attended (Original PID) | 26.35 | 70.64 | 0.38 | 1.70 | 0.23 | 0.47 | 0.49 | 0.23 |\n| Trajectory-attended (Situation-based PID ) | 30.84 | 76.09 | 0.41 | 1.78 | 0.16 | 0.38 | 0.40 | 0.29 |\n\n\n> Experiment settings: we have tuned the parameters of our PID module on some routes provided in the official CARLA leaderboard; these routes are from different towns than the ones used for our validation evaluation. Specifically, two sets of PID parameters are generated so that we can switch between them according to the current situation (whether the vehicle is turning in this case), similar to the proposed fusion mechanism.\n\nAlthough we tune the parameters carefully and obtain two sets of parameters to choose from, **the performance improvement is marginal** (see Rows `1 vs 2` and `3 vs 4`, respectively). \nThis is because the environment for parameter tuning is different from that for evaluation (such as the road topology and the curvature of turnings).
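For reference, the `Situation-based PID` variant amounts to switching between two gain sets with the same turning criterion; a minimal sketch follows (the gain values are placeholders, not our tuned parameters):

```python
class PID:
    """Textbook PID controller on the tracking error (illustrative)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral, self.prev_err = 0.0, 0.0

    def step(self, err, dt):
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def is_turning(recent_steers, thresh=0.1):
    return sum(abs(s) > thresh for s in recent_steers) >= len(recent_steers) / 2

# Two gain sets tuned separately for straight driving and turning;
# the numbers below are placeholders only.
pid_straight = PID(kp=1.0, ki=0.1, kd=0.3)
pid_turning = PID(kp=2.0, ki=0.1, kd=0.1)

def lateral_control(lateral_err, dt, recent_steers):
    pid = pid_turning if is_turning(recent_steers) else pid_straight
    return pid.step(lateral_err, dt)
```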
\nThis observation again **validates our motivation to avoid onerous parameter tuning**. Besides, heuristic tuning may still perform poorly in new environments. \nRegarding the elaboration of `Trajectory-attended vs Trajectory-Only`: \nthe original trajectory-guided attention is designed to guide the control prediction at each future step, and we treat all four waypoints together as the current policy for the current step. Using the previous waypoint prediction to re-aggregate the image and combining it with the hidden state in the GRU of the trajectory branch makes the training and waypoint prediction process less stable.\n\n\n**Q2: Questions and Limitations: whether TCP will work when scaling to the real world; what might be the main obstacles and bottlenecks.**\n\n**A2:** Thank you for pointing out the potential limitations of TCP. We have added a discussion about possible problems when scaling our model to the real world in the **Limitation** section of the revised Supplementary (Sec. D.1.2). There are several aspects to consider for the real-world setting:\n\n\n- **Balance data distribution.** Real-world driving scenarios are more complex and diverse than the simulation, and the data distribution is highly skewed. We need to carefully and manually adjust the training data to ensure that different driving maneuvers are distributed more uniformly, and we need to ensure that rare but safety-critical cases are captured by our model.\n- **Ensure the generalizability of the perception model.** Since the scene looks very different due to weather and time changes in the real world, it is important for our perception model to learn a robust and generalizable representation of the scene that is invariant to irrelevant factors. A possible approach is to add auxiliary tasks like depth estimation and semantic segmentation.\n- **Incorporate motion prediction and object detection modules.** To improve safety and explainability, additional modules that detect other objects and make predictions about other dynamic agents could be combined with our model.\n- **Learn from corrections.** Learning only from perfect human demonstrations may make the model fail during testing due to the distribution shift problem. It is important to utilize techniques like DAgger to let the model learn corrections in an online setting.\n- **Improve open-loop evaluation for the real world.** It is natural to test the model in closed loop in simulation, but the cost of real-world closed-loop evaluation is too high. Therefore, we need to improve evaluation approaches based on offline datasets to get a good indicator of performance in the real-world setting.", " This paper presents a novel method for autonomous driving based on a single RGB camera on the vehicle. The method combines a direct approach that maps images to steering commands with an indirect approach in which a policy outputs local trajectories, which are then converted into steering commands by an analytical PID controller. The feature representation from the images is shared by both the trajectory and the direct control branches. The combination of commands is performed by a hard-coded heuristic method using a weighted mean based on two predefined situations: turning or not turning.
The proposed approach provides good results on the CARLA benchmark, even compared to other methods that use more modalities or more RGB images as input.\n - Originality: the idea of combining a learned method for trajectory prediction and a learned method for motion control is novel in the domain of autonomous car driving. However, it has been explored for other applications in robotics, e.g., Pokle et al. “Deep Local Trajectory Replanning and Control for Robot Navigation”, 2019. I’d recommend extending the search to other application domains and including them in the related work. In any case, the paper brings novel ideas.\n- Significance: the strength of this paper is in the evaluation. The method has been compared to other methods on a public benchmark, CARLA, and provides good results.\n- Clarity: the paper is clearly written and it is easy to follow. There is some confusion with some terms, e.g., the final method is a combination of trajectory-based control and direct end-to-end control, but the title seems to indicate something different.\n- Quality: the paper has acceptable quality. The experiments are mainly the CARLA scores. I miss an important experiment: the fusion mechanism. The paper includes ablations using only trajectory-based or direct control, but not an evaluation of the importance of the hardcoded heuristic with the two types of driving cases. Even the information about how to classify into these two types is not clearly provided.\n - How do you identify the type of driving mode to select a different fusing criterion?\n - There is not much discussion about limitations; it would be good to have one. Also, a clear evaluation of the error cases would be informative. What are the common failures of the solution?\n", " The paper studies how to combine image-to-trajectory and image-to-control approaches for autonomous driving in the CARLA simulator. Both approaches have their advantages and disadvantages; by combining them, the paper is able to achieve a new state of the art on the CARLA leaderboard using only a front-facing camera. To fuse the temporal trajectories and static control predictions, the paper proposes to predict input trajectories which are fed through a temporal model, similar to a world model, that predicts future features and allows trajectory-guided attention in the control branch.\n Strengths: The paper is well written, easy to follow, and achieves impressive results. I especially like the idea of the temporal module in the multi-step control branch, which allows matching control inputs and input features. \n\nWeaknesses: My main concern is that I am not sure if this paper fits NeurIPS. There is little technical contribution, and interesting links, such as the one between the temporal module and world models, are not discussed. On the link to world models, the proposed training approach for the temporal module is actually known to not work well, since it is a purely deterministic model which is not able to capture the stochastic future. Potentially, a hybrid approach such as RSSM could work better. -Did you investigate DAgger to improve the driving of the proposed agent?\n-Did you evaluate how well the temporal module can predict the future? For example, by running the open-loop action predictions in CARLA or simulating a vehicle model?\n-PID controllers are not well suited for lateral trajectory tracking since they are missing a \"feedforward\" component.
However, there are several approaches which can deal with this, such as pure pursuit, curvature-based feedforward, or, as mentioned, MPC. All of them are easy to implement, so I was wondering if using such a controller would change the finding and result in a different conclusion (mainly that trajectory prediction is all you need, similar to some works done by Lyft/Woven Planet Level 5). Both are discussed but only in the supplementary. ", " This work presents a novel approach (TCP) that combines trajectory planning and control prediction in a multitask learning framework for end-to-end autonomous driving. It analyzes the limitations of both paradigms (Fig. 1), e.g., the inertia problem, incorrect turnings, and single-step prediction, and proposes a multi-step control prediction module with trajectory-guided attention and a situation-based fusion scheme to incorporate the best of both worlds. Extensive experiments on the CARLA leaderboard (Table 1) show state-of-the-art performance by a wide margin, with only a monocular camera, compared to baselines that use multiple sensors as input. Moreover, the ablation studies (Tables 2, 3, 4) and visualizations (Fig. 4 and supplementary) provide valuable insights into the capabilities of the proposed approach. Strengths:\n\n- The idea of combining trajectory planning and control prediction in a multitask learning framework is simple and easy to understand. Algorithm 1 and Figs. 2, 3 are great and provide a clear understanding of the proposed approach.\n- TCP achieves state-of-the-art results on the official CARLA leaderboard, surpassing the previous best model by 13.29 points. This is quite impressive since TCP uses a single monocular camera as input whereas the other top models use multiple sensor inputs: multi-view camera and LiDAR.\n\nWeaknesses:\n\n- The loss function for the control branch (Eq. 5) contains a feature loss for the entire prediction horizon, whereas that is not the case for the trajectory branch (Eq. 4). Since the trajectory branch contains a GRU that outputs a hidden state at each prediction timestep, the feature loss (from t=1 to K) could also be applied in Eq. 4 at each future timestep. The authors should include an ablation with a trajectory-only model with the feature loss applied to each prediction timestep and the auxiliary loss (on speed and value prediction) used as well.\n\n- What is the reason for using 2 separate GRU branches for predicting waypoints and control values? Since both waypoints and control are alternate representations of driving behavior, they could be directly predicted from a single branch. Since the waypoint branch also contains a GRU, the hidden state of that GRU could be used to predict both waypoints and control values through separate MLP heads. It is not clear why a separate temporal module is required since it is already present in the trajectory branch.\n\n- The authors should compare to a baseline which does a direct ensemble (average control predictions) of control-only and trajectory-only models in Table 4. This is important to understand if the situation-based fusion scheme is indeed better than simple averaging.\n\n- Does the Control-Only model in Table 2 and L340 predict only single-step control values?
If yes, the authors should also compare to a baseline that does an ensemble of multi-step control prediction and trajectory prediction (both situation-based fusion and direct averaging should be considered) without the trajectory-guided attention module.\n\n- L121-122 mentions that the proposed idea is closely related to FASNet. It would be helpful if the authors could also provide comparisons to FASNet. Since the code of FASNet is not publicly available, implementing, training, and testing may not be possible in time for the rebuttal, but this would greatly improve the paper.\n\n- Sec. B in the supplementary mentions that for online submission to the CARLA leaderboard, $\\alpha=0.5$ is used if the situation is trajectory specialized whereas $\\alpha=0$ is used when it is control specialized. Also, the maximum of the brake values is used instead of taking the average. It'd be great if the authors could explain the reasoning behind using a different fusion scheme. Also, how is $\\alpha$ selected? Is there a separate validation protocol for tuning these values?\n\n- Relevant references that should also be included:\n - Zhihao Li, Toshiyuki Motoyoshi, Kazuma Sasaki, Tetsuya Ogata, Shigeki Sugano. Rethinking self-driving: Multi-task knowledge for better generalization and accident explanation ability. arXiv 1809.11100, 2018.\n - Alex Kendall, Jeffrey Hawke, David Janz, Przemyslaw Mazur, Daniele Reda, John-Mark Allen, Vinh-Dieu Lam, Alex Bewley, Amar Shah. Learning to Drive in a Day. ICRA 2019.\n - Jeffrey Hawke, Richard Shen, Corina Gurau, Siddharth Sharma, Daniele Reda, Nikolay Nikolov, Przemyslaw Mazur, Sean Micklethwaite, Nicolas Griffiths, Amar Shah, Alex Kendall. Urban Driving with Conditional Imitation Learning. ICRA 2020.\n - Nicholas Rhinehart, Rowan McAllister, Sergey Levine. Deep Imitative Models for Flexible Inference, Planning, and Control. ICLR 2020.\n - Albert Zhao, Tong He, Yitao Liang, Haibin Huang, Guy Van den Broeck, Stefano Soatto. SAM: Squeeze-and-Mimic Networks for Conditional Visual Driving Policy Learning. CoRL 2020.\n - Yi Xiao, Felipe Codevilla, Akhil Gurram, Onay Urfalioglu, Antonio M. López. Multimodal End-to-End Autonomous Driving. arXiv 1906.03199, 2019. Based on the weaknesses mentioned above, the following experiments are required to verify the claims in the paper:\n- A trajectory-only model with the feature loss applied to each prediction timestep and the auxiliary loss (on speed and value prediction) used as well. This requires training a new model and should be compared in Table 3.\n- A single GRU module to predict both future waypoints and multi-timestep control predictions from the hidden state of the GRU. This requires training a new model and should be compared in Table 3.\n- A direct ensemble (average control predictions) of control-only and trajectory-only models in Table 4. This does not require training any model.\n- An ensemble of multi-step control prediction and trajectory prediction (both situation-based fusion and direct averaging should be considered) without the trajectory-guided attention module. This does not require training any model and should be compared in Table 4.\n\nCertain parts need clarification:\n- L194-195: How does the multi-step control prediction branch mitigate the IID assumption?\n- L202-204: How does the trajectory branch improve the ability to incorporate static layout information (e.g., curbs)?\n- L309-L313: Why does TCP have lower route completion compared to LiDAR-based methods? Is it because TCP gets blocked more often than other methods?
It is also possible that taking the maximum of brake values from different models for the ensemble (L33-34 in supplementary) leads to an overly conservative policy.\n- The evaluation protocol for the experiments in Tables 2, 3, 4 is different from that of Table 1, which uses the CARLA leaderboard. The authors should include a brief description of this evaluation protocol in the paper.\n - Fig. 1(a) shows a failure case at turnings for trajectory-based methods. Since the details of the PID controllers are not provided in the paper, it is unclear if the PID controller could just be tuned to better follow the trajectory.\n - Implementation details regarding the GRU modules, the MLPs in the trajectory-guided attention, and the weights for the loss terms (Eq. 4, 5, 6) should also be included in the paper.\n\nAdditional suggestions to improve the paper:\n- As mentioned earlier, it would be helpful if the authors could also provide comparisons to FASNet. This would greatly improve the paper.\n- It would be interesting to visualize the GradCAM (Selvaraju et al. IJCV 2019) attention maps for the multi-step control prediction branch without the trajectory-guided attention module. This could provide useful insights into whether the multi-step control prediction helps the model focus on important regions for future control predictions. These GradCAM attention maps can be compared to the attention maps in Fig. 4 to verify the utility of trajectory-guided attention. The authors have provided a discussion on limitations and societal impact in the supplementary. Additional suggestions are mentioned above. \n\n### Update after Rebuttal ###\nI appreciate the additional ablations and discussions with the authors, which helped me get a better understanding of the paper. After the rebuttal and discussion with other reviewers, I am retaining my original score. \n\nMy main concern is that the majority of the gain is coming from the heuristic $\\alpha$ used for ensembling. Table 2 in the main paper shows Control-only=`32.45`, trajectory-only=`28.29`. In one of the ablations provided by the authors in the rebuttal, an ensemble of control-only & trajectory-only methods with non-constant $\\alpha$ gives a score of `50.98`. TCP has a score of `57.01` in Table 4 in the main paper. The ensemble heuristic leads to a gain of around `19` points whereas all the architectural modifications - shared backbone, multi-step control, temporal module, trajectory-guided attention - combined result in a gain of `6` points. \n\nWhile it might be possible to use the fusion mechanism in a general and flexible way (e.g., as a mixture of experts or using probabilistic uncertainty, as mentioned by the authors in the rebuttal), the paper currently uses a heuristic for fusion. Based on this, I agree with reviewer `6yad` that the paper does not have sufficient technical contribution for NeurIPS, and venues like IROS and ICRA are a better fit. The authors have mentioned multiple interesting directions, e.g., the link between the temporal module and world models, and using a mixture of experts or probabilistic uncertainty for fusion. I encourage the authors to explore these directions.", " This paper presents TCP, a novel trajectory predicting/controlling network architecture for vision-based end-to-end autonomous driving. At test time, TCP ensembles the control outputs from a direct prediction branch and a trajectory prediction branch with PID controllers. Both branches use a GRU to predict waypoints/controls auto-regressively and are jointly trained.
At each timestep, the control prediction branch additionally takes as input attention-weighted image backbone features computed from the trajectory prediction branch.\n === Strengths ===\n+ The strongest aspect of this paper is its impressive performance on the challenging CARLA leaderboard. The fact that TCP is capable of attaining such a strong performance with a simple vision backbone is inspiring, suggesting prior methods for end-to-end driving in CARLA might all have suboptimal controller designs. \n+ The ablation studies are mostly comprehensive, and the design choices are well justified. \n+ The attention map visualization is insightful and shows what areas are attended to for the control prediction task.\n\n=== Weaknesses ===\n- I do not have any major complaints regarding the technical content. My main question is how much performance gain comes from fusing the trajectory/control prediction branches, as the title and paper imply, versus from suboptimal PID parameters for the trajectory-only branch and the trajectory-attended image feature inputs in the autoregressive prediction stage. I would like to see a trajectory-only variant that, similar to the proposed situation-based fusion, has situation-based PID controller parameters (different PID gains based on different high-level commands, or whether it is turning, etc.) and still has the novel trajectory-computed attention fusion module (i.e., the attention-weighted image features get consumed by the trajectory branch instead of the control branch). This will further help distinguish the main source of improvements among the design choices.\n Apart from the ablation mentioned in the Strengths & Weaknesses section, I would like to hear from the authors how they think TCP will work in a real-world setting, and what might be the main obstacles and bottlenecks that the proposed method could have when scaling up to real-world data.\n Same as my question in the section above – I would like to hear from the authors what they think might be the limitations of the proposed method when scaling to the real world.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 4 ]
[ "rRmrTi7eQp", "vDuRZrui4R", "2H8wWONoaf-", "yAE_ePKKe6X", "JdsCqKxsVJR", "iBcyEmrT68A", "fRueL2xtGWn", "5soHQOeZmw", "nips_2022_DhmYYrH_M3m", "eS4cC7EOJh8", "eS4cC7EOJh8", "Dpg9GwhGgXy", "Dpg9GwhGgXy", "2uoCsQDqU-v", "rTkwPauIJq", "nips_2022_DhmYYrH_M3m", "nips_2022_DhmYYrH_M3m", "nips_2022_DhmYYrH_M3m", "nips_2022_DhmYYrH_M3m" ]
nips_2022_jAL8Rt7HqB
Adaptive Attention Link-based Regularization for Vision Transformers
Although transformer networks have recently been employed in various vision tasks with outperforming performance, large training data and lengthy training time are required to train a model without an inductive bias. Using trainable links between the channel-wise spatial attention maps of a pre-trained Convolutional Neural Network (CNN) and the attention heads of a Vision Transformer (ViT), we present a regularization technique to improve the training efficiency of ViT. The trainable links are referred to as the attention augmentation module, which is trained simultaneously with the ViT, boosting the training of the ViT and allowing it to avoid the overfitting issue caused by a lack of data. From the trained attention augmentation module, we can extract the relevant relationship between each CNN activation map and each ViT attention head, and based on this, we also propose an advanced attention augmentation module. Consequently, even with a small amount of data, the suggested method considerably improves the performance of ViT while achieving faster convergence during training.
Reject
Four reviewers provided detailed feedback on this paper. The authors responded to the reviews and I appreciate the authors' comments and clarifications, specifically that each question/comment is addressed in detail. Additional experiments were also performed. The authors also uploaded a revised version of the paper. After the two discussion periods, one of the four reviewers suggests rejecting the paper while three reviewers rate the paper as "weak accept", so no reviewer strongly advocates for acceptance. I considered the reviewers' and authors' comments and also tried to assess the paper directly. I believe that the paper should not be accepted to NeurIPS in its current form. Weaknesses include: * Readability: While at least one reviewer describes part of the paper as "clear and easy to follow", one other reviewer mentions clarity as the main weakness and another reviewer also comments in this direction. I personally found the paper hard to read as well (even after the improvements made in the revision), and I found some of the claims to be fairly generic and partially not well supported, e.g., "resolve the issues of overfitting and lengthy training time of ViT", or "The proposed scheme preserves the original architecture of ViT, which results in its general employment regardless of the architecture of ViT." * Experimental Results: Several questions have been raised regarding the experimental results (e.g., influence of the attention link, choice of hyperparameters). These have been addressed in the discussion, but it seems to me that they were at best partially resolved. * Relation to distillation: The results in the low-data regime rely on learning from a teacher model. This relation to distillation is recognized but somewhat under-explored. This could be a confounding factor in the analysis of the approach. For example, in one response, the authors argue that "However, we believe our source of performance gain is due to transferring CNN's inductive bias with attention". It remained somewhat unclear whether this transfer would also hold when the CNN is not a more powerful teacher model. Strengths include: * The idea of regularizing the global token’s attention maps with the CNN activation maps is novel and interesting. * The reported experimental improvements in the low-data regime are interesting. Despite recommending the paper for rejection in its current form, I would like to encourage the authors to continue this line of work and present it again to the community with more focused discussions, insights (and possibly experiments). This is an interesting paper and it was evaluated to be close to (but below) the acceptance threshold.
test
[ "GK2am7ybwFK", "qlNe7hkPzf", "lt76YID-1Ps", "yBuKzhMW_VM", "cW8NYNyP02d", "Hq4ONy50sJs", "B_-4XuUVp8N", "UHK6pmc-Nr", "Jb_YkkGcN7e", "tlfAB9moQzf", "Xg7j2IIn4y", "4iP7wmbEuuK", "cCfTKzuVwO", "HDsJW7_bmq", "zZScJCw4V2y", "ClaTtsIXhAq", "i-O8m0K5Z9AL", "fQMhc4cObUA", "5yQVNosj4Rc", "epwEjRrexh", "ZNFgWgmWyl", "bhz8LzRV8q0", "pTJg_iFaJTM", "56HfaXRneGe", "Swgcg0vQvD0" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors appreciate the reviewer for their detailed review of our manuscript and positive feedback. We are happy that our response has addressed your concerns.\n\nBest regards,\n\nAuthors", " Dear Authors,\n\nAfter having read the rebuttal in detail, my concerns have been addressed and I recommend the acceptance of the paper.\n\nBest wishes.", " The authors appreciate your feedback and are happy that your initial concerns have been resolved.\nWe updated the revised version of the paper with supplementary materials which are currently accessible on this page.\nThe authors would be thankful to the reviewer if the reviewer could check the revised version of the article.\nAs the reviewer commented, during the short rebuttal time, we had no choice other than to focus on the revision of experimental parts in the paper.\nHowever, we are continually working on polishing the paper with a thorough language check, and we are sure that we can finalize the revised version according to the feedback from every reviewer.\nIf we have a chance to submit the camera-ready version, we are planning to hire professional proofreaders to improve the writing quality further.\n\nBest regards,\n\nAuthors", " Dear Reviewer, \n\ncould you please indicate that you have considered the authors' rebuttal? (E.g. by replying to the rebuttal or at least by using the \"Author Rebuttal Acknowledgement\".)\n\nThe [reviewer guidelines](https://nips.cc/Conferences/2022/ReviewerGuidelines) ask: \"Even if the author response didn’t change your opinion about the paper, please acknowledge that you have read and considered it.\"\n\nThank you!", " I thank the authors for the detailed response, which have cleared parts of my initial concerns. However, I am still concerned about whether the revised manuscript can be improved significantly and satisfyingly in such a short time, with so many modifications to make. ", " The authors appreciate your thorough feedback and recommendation for the revised version of the article. We are happy that our answer to your questions have cleared the confusions.\n\nBest regards, Authors", " Thank you to the authors for your detailed replies to my questions. Having cleared my confusions, I recommend publication for the revised version of the article.", " We appreciate your feedback, and we are happy that our response was helpful to resolve your concerns and questions.\n\nBest regards,\nAuthors", " As the end of the open discussion gets closer, we would like to gently remind you to read our rebuttal, which should hopefully answer all of your concerns and comments on the reviews.\n\nIf you have any comments or any questions, we would be happy to address them.\n\nSincerely,\n\nAuthors", " As the end of the open discussion gets closer, we would like to gently remind you to read our rebuttal, which should hopefully answer all of your concerns and comments on the reviews.\n\nIf you have any comments or any questions, we would be happy to address them.\n\nSincerely,\n\nAuthors", " As the end of the open discussion gets closer, we would like to gently remind you to read our rebuttal, which should hopefully answer all of your concerns and comments on the reviews.\n\nIf you have any comments or any questions, we would be happy to address them.\n\nSincerely,\n\nAuthors", " I appreciate the authors' detailed responses. My doubts / confusions have been cleared up.", " Q2. In Table 1, each CNN block level is associated with different numbers of ViT layer levels. 
Besides the ViT layers at a high level, the way the others are assigned to CNN block levels is also different between the alpha-link and the beta-link. I would appreciate it if there is any ablation study or verbal explanation on such a “non-intuitive” design. \n\nThe weights of a full link are initialized with random values, so training requires initial computation to align the noisy weights so that the augmented attention maps become similar to the CNN activation maps.\nIn contrast, when we utilize the selective attention link, the noisy links can be ignored in the initial training phase, which results in reduced computation and stable training.\nThis leads to the improved performance of the selective attention link, which also validates the analyzed correlation between the CNN activation maps and the ViT attention maps.\n\nIn addition, we write down the pseudo-code for the selective attention link in the supplementary material.\n\n\nQ3. In Table 2, could you please explain what the two numbers in each cell stand for? (as it’s not clearly written in the caption or the main text.)\n\nTo improve readability, we have entirely reorganized the tables and added captions describing the values.\n\n\nQ4. In Figure 2, if my guess about the x-axis corresponding to the ViT layers is correct, I’m wondering how the weights of different heads within each layer are aggregated to produce the plot. Besides, as it’s hard to tell that the block diagonal is generally darker than the side in Figures 2(b) and (c), would you mind elaborating more on how to reach the conclusion that “the ViT attention maps are highly related to the CNN activation maps located at a similar level”? \n\nYes. In Figure 2, the x-axis corresponds to the ViT layers. To analyze the relations of the two different architectures across layer depth, we averaged the absolute values of the link weights over the heads. We agree that those heatmaps are not strictly diagonal but show correlation along relative layer depth. This accords with previous work [*], which used cross-model Centered Kernel Alignment to check representation similarity between ViT and ResNet across layers. We have added captions to describe the plots.\n\n[*] Maithra Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, and Alexey Dosovitskiy. Do vision transformers see like convolutional neural networks? arXiv, abs/2108.08810, 2021.\n\n\nQ5. I would appreciate it if the authors could explain more about the argument in line 239 - “the inductive bias is hard to be trained without a large dataset”.\n\nWith that sentence, we tried to emphasize the need for a large dataset for a Vision Transformer, which lacks the inductive bias. A similar statement was mentioned in L21-L24. To clear up the misunderstanding, we revised the argument as 'semantic information that ignores the inductive bias is hard to be trained without a large dataset'.\n\n\nQ6. As stated in Question 1, there are other knowledge distillation methods. For example, they may use a transformer as the teacher model instead of a CNN and also utilize attention map regularization. I’m wondering whether it would be better to add an empirical comparison with these methods.\n\nWe agree that it is a plausible approach. However, we believe our source of performance gain is due to transferring the CNN's inductive bias with attention.
Also, while one of our contributions is to reduce ViT's lengthy training time, using transformers as a teacher model would require more FLOPs and longer runtime.", " We appreciate that the reviewer found our work easy to follow and the idea novel, with impressive results.\nWe answered the reviewer's comments as follows.\n\nW1. Some of the tables and figures lack elaboration in the caption.\n\nWe have added captions to the tables and figures for readability, and all the boldfaced captions have been entirely removed.\nIf we have a chance to submit the camera-ready version, we are planning to hire professional proofreaders to improve the writing quality further.\n\nW2. More empirical evidence is needed to support the superiority of the attention link.\n\nAs empirical evidence for the superiority of the proposed attention link, we perform four additional experiments.\nFirst, we empirically validate the effectiveness of the selective links, including the $\\alpha$-link and $\\beta$-link, in various scenarios in comparison with the full link. The $\\alpha$-link showed 1.5% better Top-1 accuracy than the $\\beta$-link on ImageNet 5%. As shown in Figure 4 of the supplementary material, when the entire ImageNet dataset is used for training, the $\\beta$-link shows faster convergence than the full link, while their final accuracies become similar. This result is reasonable because the connections to the high-level heads in the full link can be disconnected through the updates. These empirical results verify the hypotheses since the $\\alpha$-link and $\\beta$-link are based on the two hypotheses.\n\nSecond, we additionally evaluate our framework through Weakly Supervised Object Localization (WSOL), which is frequently used to show spatial awareness [1, 2]. WSOL trains the network model to classify the input image and evaluates the localization of target objects. We determine the position of the target objects by averaging all the attention maps of Eq. 3. We measure the localization performance by using the Intersection over Union (IoU) on CUB-200-2011, and the results are given in Table 10 of the supplementary material. We refer to [2] for the evaluation method of WSOL. Compared to DeiT, although the proposed algorithm shows only a 1\\% performance improvement in Table 9 of the supplementary material, its localization accuracy is 53% higher than that of DeiT with IoU threshold 0.5 in Table 10 of the supplementary material, which validates the superiority of the attention link for spatial awareness.\n\nThird, in Figure 4 of the supplementary material, we compare the learning curve of our approach with those of DeiT and ConViT. We can see that our knowledge distillation achieves 70% accuracy at about 50 epochs while DeiT needs 120 epochs to reach the same accuracy. This result validates the rapid convergence of our approach, which comes from the correlation of the attention links in our hypothesis.\n\nFinally, we track the attention weight changes over the training epochs. As shown in Figure 5 of the supplementary material, the attention weights, which are initially spread out randomly, gradually become correlated across similar depth levels of the ViT and CNN models. Furthermore, after 300 epochs of training, the attention weights connecting the CNN model with the heads of the last depth levels in the ViT model decrease, which represents the reduced influence of the CNN activation maps on the high-level ViT attention maps.
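For readers unfamiliar with the mechanism behind these weights, a minimal sketch of the attention link is given below (a simplified reading of Eq. 4-5 as discussed in this thread; shapes, names, and the initialization scale are illustrative, not our exact code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLink(nn.Module):
    """Learnable links w_c that mix ViT head attention maps into one
    augmented attention map per CNN channel (C = CNN channels,
    H = total number of ViT attention heads over all layers)."""
    def __init__(self, num_cnn_channels: int, num_vit_heads: int):
        super().__init__()
        # Randomly initialized, as described for the full link.
        self.w = nn.Parameter(0.01 * torch.randn(num_cnn_channels, num_vit_heads))

    def forward(self, vit_attn):  # vit_attn: (B, H, P, P)
        return torch.einsum('ch,bhpq->bcpq', self.w, vit_attn)

def attention_loss(link, vit_attn, cnn_maps, P):
    """L2 regularization in the spirit of Eq. 5: CNN activation maps are
    resized to (P x P) with bicubic interpolation and matched by the
    augmented attention maps."""
    target = F.interpolate(cnn_maps, size=(P, P), mode='bicubic',
                           align_corners=False)
    return F.mse_loss(link(vit_attn), target)
```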
We added the visualization and the discussions in the revised manuscript and the supplementary material.\n\n[1] \"Learning Deep Features for Discriminative Localization\", Zhou et al. [2] \"Evaluating Weakly Supervised Object Localization Methods Right\", Choe et al.\n\n\nQ1. In Natural Language Processing, knowledge distillation has also been widely adopted in transformer-based models to enhance training efficiency. In particular, MiniLM (Wang et al., 2020) and TinyBert (Jiao et al., 2019) also focus on the regularization of attention maps. I think it’s worth including this line of literature in Related Work.\n\nWe appreciate the reviewer's recommendation, and we have added these studies to the related work section accordingly (lines 100-102). As the reviewer commented, these previous studies focus on the employment of knowledge distillation in natural language processing, so their target objectives are different from those of our proposed model.\n", " Q4. What is your interpretation of the augmented attention links, what do they represent?\n\nFor a detailed interpretation of the augmented attention links, we perform three additional experiments.\nFirst, we additionally evaluate our framework through Weakly Supervised Object Localization (WSOL), which is frequently used to show spatial awareness [1, 2].\nWSOL trains the network model to classify the input image and evaluates the localization of target objects.\nWe determine the position of the target objects by averaging all the attention maps of Eq. 3.\nWe measure the localization performance by using the Intersection over Union (IoU) on the CUB-200-2011 and Oxford Flowers 102 datasets, and the results are given in Table 9 of the supplementary material. We refer to [2] for the evaluation method of WSOL.\nCompared to DeiT, although the proposed algorithm shows only a 1\\% performance improvement, its localization accuracy is 53% higher than that of DeiT with IoU threshold 0.5, which validates the superiority of the attention link for spatial awareness.\n\nSecond, in Figure 4 of the supplementary material, we compare the learning curve of our approach with those of DeiT and ConViT. \nWe can see that our knowledge distillation achieves 70\\% accuracy at about 50 epochs while the two comparisons need more epochs for the same accuracy.\n\nFinally, we track the attention weight changes over the training epochs.\nAs shown in Figure 5 of the supplementary material, the attention weights, which are initially spread out randomly, gradually become correlated across similar depth levels of the ViT and CNN models.\nFurthermore, after 300 epochs, the attention weights connecting the CNN model with the heads of the last depth levels in the ViT model decrease, which represents the reduced influence of the CNN activation maps on the high-level ViT attention maps.\nWe added the visualization and the discussions in the revised manuscript and the supplementary material.\n\n[1] \"Learning Deep Features for Discriminative Localization\", Zhou et al. [2] \"Evaluating Weakly Supervised Object Localization Methods Right\", Choe et al.\n\nQ5. Are the X/Y results in the tables always teacher/student?\n\nA5. \nIn Table 2, X/Y indicate Top-1/Top-5 test accuracy, respectively.\nIn the 'Gain' column of Table 4, X and Y represent the performance gaps between the previous study and ours.\nTo clear up the misunderstanding, we have entirely revised the structure of the tables and added captions for description.\n\n\nQ6. What does attention maps “at a similar level” mean? At a similar layer depth?\n\nA6.
Yes, \"attention maps at a similar level” means the attention maps selected at similar layer depth of the respective architecture.\n\n\nQ7. L266, when you say remarkable results, which results are you referring to then?\n\nA7.\nWe agree that the word 'remarkable' should be used more carefully. \nWe refer to the remarkable results for the performance in the low-data regime, which outperforms DeiT and ConViT with large performance gaps. However, we think that the order of the table may cause your misunderstanding, so we reorganized the experimental sections and removed the strong words.\n\n\nQ8. L327-332: do you compare self-supervised approaches to your classically supervised approach here? Or do you make some kind of SS version of your approach? This is not clear\n\nA8. We compared self-supervised approaches to our classically supervised approach. We have yet to consider a self-supervised version of our approach, but we believe it is possible for our future work.", " We appreciate that the reviewer said our idea is novel, clear, and conceptually appealing. In addition, the reviewer commented that our extensive experiment and ablation studies support our conclusion well. As the reviewer mentioned, our approach is flexible with the student ViT models and the teacher CNN models.\n\n\nW1. The main weakness of the paper, in my view, is its clarity.\n\nWe appreciate the recommendation, and we revise the commented sentences accordingly. In addition, we add the captions to tables and figures for readability, and all the boldfaced captions are entirely removed. If we have a chance to submit the camera-ready version, we are planning to hire professional proofreaders to improve the writing quality further.\n\n\nW2. Another weakness are lacking of mention of how the hyperparameters lambda were chosen\n\nAt every experiment, we set $\\lambda$ to $2000$, and we are sorry for the missing initialization value of $\\lambda$.\nTo justify the value choice, we perform the ablation studies by changing the value of $\\lambda$ by $1500$, $2000$, and $2500$.\nIn ImageNet 10%, we obtain top-1 accuracy of 56.5, 64.7, and 64.6 respectively for lambda=1500, 2000, 2500, and we acquire the best performance when $\\lambda$ is set to $2000$.\nSince $\\lambda$ controls the scale of the regularization loss term, its value highly correlates with the overfitting and the underfitting of the target model. As a result, the value of $\\lambda$ should be set to avoid both the overfitting and the underfitting issues, and we empirically found that the value of $\\lambda$ is a proper choice. Due to the lack of time, we fail to show various results, but we will supplement the ablation study with the additional values of in the final version.\n\nW3. and a lacking discussion of the potential weaknesses of the method\n\nEven with its training efficiency, the loss term of $L_{att}$ could feed the inductive bias of CNN models into the ViT models, which results in unnecessary regularization when sufficient training data is given.\nWe ensure that our approach shows slightly less performance than ConViT because of this issue when the full ImageNet dataset is considered.\nEven though we relieve the issue by employing the decay rate of $\\lambda$, additional adaptation is required to effectively control the power of inductive bias from the CNN models.\nWe are planning to solve the problem in our future work.\n\nThe current framework can only handle one teacher model that is limited by one of the CNN models. 
\nIn contrast, if we could integrate multiple teacher models trained on various datasets, generalization performance would be further improved.\nIn addition, the current model cannot consider ViT-based teacher models, which limits the variety of teachers.\nGiven the simplicity of this work, we expect that employing multiple teacher models is a reasonable approach, so we will extend this algorithm to multiple teacher models, even including transformer-based ones.\nIf we have a chance to submit the camera-ready version, we will add these limitations to the paper.\n\nW4: No code is released.\n\nWe will willingly release our code if the paper is accepted.\n\nQ1. How was lambda = 0.99 and 0.98 chosen? (L288)\n\nAs shown in Figure 5 of the supplementary material, the ViT attention maps are similar to the CNN activation maps at the related depth levels, while the relation weakens as training goes on.\nAccordingly, we designed our approach to reduce the power of the attention-based knowledge distillation loss terms by employing a decay constant for $\\lambda$.\nIn addition, we perform an ablation study where a variant is built by fixing the decay constant at $0.99$.\nOn ImageNet 50\\%, the variant with the fixed decay constant shows a performance of $77.8\\%$ and $93.8\\%$ for top-1 and top-5 accuracy, respectively, which are lower than the $78.9\\%$ and $94.5\\%$ of the proposed algorithm.\nThis result shows that we can tune the decay constant to improve the efficiency of our approach, which will be analyzed in our future work.\n\n\n\nQ2. In lines 37-43, do you mean source dataset instead of target dataset?\n\nAs the reviewer commented, the target dataset is a source dataset used to train the final model.\nHowever, we found that the sentence was too ambiguous to deliver our meaning, so we revised the entire sentence as follows:\nHowever, the previous studies have the remaining limitations where the training datasets must be equivalent for both the student and teacher models.", " W3. I would appreciate additional evaluations.\n\nWe perform additional experiments using the CUB-200-2011 and Oxford Flowers 102 datasets, which are challenging due to their numerous categories and the lack of per-class training data.\nAs shown in Table 9 of the supplementary material, the proposed framework shows much better performance than the previous studies, which validates its generality across various scenarios.\nIn particular, our approach shows an enlarged performance gap of 92.7\\% when the ViT model is randomly initialized, which shows its robustness in the absence of a well-trained model.\n\nW4. I am missing some qualitative comparisons on why the techniques improve data efficiency.\n\nAs recommended by the reviewer, we qualitatively visualize the learning curve and the attention weight changes to show the data efficiency.\nIn Figure 4 of the supplementary material, we compare the learning curve of our approach with those of DeiT and ConViT.
We can see that our knowledge distillation achieves 70\\% accuracy at about 50 epochs while the two comparisons need up to 120 epochs for the same accuracy.\nIn addition, we track the attention weight changes over the training epochs.\nAs shown in Figure 5 of the supplementary material, the attention weights, which are initially spread out randomly, gradually become correlated across similar depth levels of the ViT and CNN models.\nFurthermore, after 300 epochs, the attention weights connecting the CNN model with the heads of the last depth levels in the ViT model decrease, which represents the reduced influence of the CNN activation maps on the high-level ViT attention maps.\nWe added the visualization and the discussions in the revised manuscript and the supplementary material.\n\nW5. Have the authors played a bit with the CNN architecture?\n\nWe use EfficientNet-b3 and ResNet34 as our teacher models, which are much smaller models than RegNetY-160, the teacher model of DeiT.\nEven though EfficientNet-b3 needs only 14.63\\% of the parameters of RegNetY-160, our model achieves better performance than DeiT, which validates its impressive training efficiency.\nHowever, to resolve the reviewer's concern, we perform additional experiments where the teacher model of our approach is replaced by ResNet-18.\nFor ImageNet 5\\%, the variant results in 55.0\\% and 78.9\\% for top-1 and top-5 accuracy, respectively.\nThus, even though ResNet-18 needs only 13.98\\% of the parameters of RegNetY-160, our approach still surpasses the performance of DeiT (34.8\\%/57.8\\%) with performance gaps of 58\\% and 37\\%.\n\nLimitations: Lacking discussion of the potential limitations.\n\nEven with its training efficiency, the loss term $L_{att}$ could feed the inductive bias of the CNN model into the ViT model, which results in unnecessary regularization when sufficient training data is given.\nWe note that our approach shows slightly lower performance than ConViT because of this issue when the full ImageNet dataset is considered.\nEven though we alleviate the issue by employing a decay rate for $\\lambda$, additional adaptation is required to effectively control the power of the inductive bias from the CNN model.\nWe are planning to solve this problem in our future work.\n\nThe current framework can only handle one teacher model, which is limited to a CNN. \nIn contrast, if we could integrate multiple teacher models trained on various datasets, generalization performance would be further improved.\nIn addition, the current model cannot consider ViT-based teacher models, which limits the variety of teachers.\nGiven the simplicity of this work, we expect that employing multiple teacher models is a reasonable approach, so we will extend this algorithm to multiple teacher models, even including transformer-based ones.\nIf we have a chance to submit the camera-ready version, we will add these limitations to the paper.", " We appreciate your comments that our approach is simple but effective. As the reviewer mentioned, we expect that the community can employ our approach to improve the training stability and efficiency of ViT-based models. Furthermore, the positive responses to our analysis and results have encouraged us to perform additional experiments. As the reviewer commented, we focus on the problem of ViT-based models in a low-data regime, and we show reasonable results that address the issue by using the attention maps of CNN models.
We will add the discussion about limitations in the camera-ready version.\n\nW1. I think the comparison table should take that into account and report FLOPS or runtime for a complete picture.\n\n| | DeiT-S | ConViT-S | AAL |\n|---|---|---|---|\n| Params | 22.4M | 27.8M | 22.5M |\n| FLOPs | 4.27G | 5.35G | 4.27G |\n| Runtime | 0.40 | 1.23 | 0.36 |\n| Teacher | RegNetY_160 | - | EfficientNet_B3 |\n| Params | 83.6 M | - | 12.2 M |\n| FLOPs | 15.9G | - | 0.98 G |\n\nAccording to the reviewer's comment, we observed the training runtime and inference FLOPS.\nWe measure the runtime by measuring the time for processing a batch with batch size 128 at the training phase. \nDeiT-S and AAL shares similar value in the number of parameters and FLOPs due to the minimal design of the attention augmentation module.\nAs shown in table above, our framework shows the fastest runtime among the three comparisons including DeiT and ConViT, which validates the training efficiency of our framework. \nFurthermore, our approach needs 20.2\\% fewer inference FLOPS than ConViT even though they have similar performance.\n\nW2. I wonder if authors have considered evaluating on additional task.\n\nAccording to the reviewer's comment, we additionally evaluated our framework through Weakly Supervised Object Localization (WSOL), which is frequently used to show space awareness~[1]. WSOL trains the network model to classify the input image and evaluates the localization of target objects. \nWe determine the position of the target objects by averaging the entire attention maps of Eq.3.\nWe measure the localization performance by using the Intersection of Union (IoU) in CUB-200-2011. The results is shown in Table 10 of supplementary materials.\nWe refer [2] for the evaluation method of WSOL.\nCompared to DeiT, although the proposed algorithm shows only 1\\% performance improvement, its localization accuracy is 53% higher than that of DeiT with IoU threshold 0.5, which validates the space awareness of our knowledge distillation scheme.\n\n[1] \"Learning Deep Features for Discriminative Localization\", Zhou et al.\n[2] \"Evaluating Weakly Supervised Object Localization Methods Right\", Choe et al.", " Q8. Clarify whether the baseline model in Table 4 has been trained with a teacher model or not. If not, the comparison may not be fair.\n\nA8. Among the compared algorithms, DeiT is trained with a teacher model of RegNetY-16GF, which is a CNN model showing better performance than our teacher models such as EfficientNet-b3 and ResNet-34.\nEven though the base model of ConViT does not use the teacher model, their supplementary material reports that the hard knowledge distillation for ConViT shows an ignorable improvement in performance compared to accuracy gain in DeiT.\n\nIn addition, we show comparision table for baseline models.\n\n| | DeiT-S | ConViT-S | AAL |\n|---|---|---|---|\n| Params | 22.4M | 27.8M | 22.5M |\n| FLOPs | 4.27G | 5.35G | 4.27G |\n| Runtime | 0.40 | 1.23 | 0.36 |\n| Teacher | RegNetY_160 | - | EfficientNet_B3 |\n| Params | 83.6 M | - | 12.2 M |\n| FLOPs | 15.9G | - | 0.98 G |\n\nWe measure the runtime by measuring the time for processing a batch with batch size 128 at the training phase. \nDeiT-S and AAL shares similar value in the number of parameters and FLOPs due to the minimal design of the attention augmentation module.\nAs shown in table above, our framework shows the fastest runtime at training phase which includes the inference of teacher CNN among the three comparisons including DeiT and ConViT. 
These measurements additionally validate the training efficiency of our framework. \nFurthermore, our approach needs 20.2\\% fewer inference FLOPs than ConViT even though they have similar performance.\n\nQ9. It seems the performance cannot outperform the ConViT with the full train set, and therefore the generality to large-scale data is in doubt.\n\nA9.\nAs we mentioned in the introduction section, our main objective is to improve the data efficiency and reduce the training load of ViT models.\nAccordingly, we successfully show an impressive gain in performance when insufficient data is given for training, as shown in the first rows of Table 2.\nWe show the experimental results using large-scale data to verify that the proposed algorithm is not harmful even for large-scale data, which is important for its generality across various scenarios.\nInterestingly, our model shows similar performance to ConViT even though its model size is 22.3\\% smaller than that of ConViT, which also verifies its training efficiency.", " Q5. First describe the dataset and different experiment settings, and then explain the results one by one.\n\nA5. We appreciate the recommendation, and we have entirely rewritten and reorganized the experiment section accordingly. The reviewer can refer to the red-colored lines of the submitted revised paper.\n\nQ6. I don't think Figure 2 is sufficient to support the two hypotheses in 4.1.\n\nA6. For a detailed interpretation of the augmented attention links, we perform four additional experiments.\n\nFirst, we empirically validate the effectiveness of the selective links, including the $\\alpha$-link and $\\beta$-link, in various scenarios in comparison with the full link. The $\\alpha$-link showed 1.5\\% better Top-1 accuracy than the $\\beta$-link on ImageNet 5\\%.\nAs shown in Figure 4 of the supplementary material, when the entire ImageNet dataset is used for training, the $\\beta$-link shows faster convergence than the full link, while their final accuracies become similar.\nThis result is reasonable because the connections to the high-level heads in the full link can be disconnected through the updates.\nThese empirical results verify the hypotheses since the $\\alpha$-link and $\\beta$-link are based on the two hypotheses.\n\nSecond, we additionally evaluate our framework through Weakly Supervised Object Localization (WSOL), which is frequently used to show spatial awareness [1, 2].\nWSOL trains the network model to classify the input image and evaluates the localization of target objects.\nWe determine the position of the target objects by averaging all the attention maps of Eq. 3.\nWe measure the localization performance by using the Intersection over Union (IoU) on CUB-200-2011, and the results are given in Table 10 of the supplementary material. \nWe refer to [2] for the evaluation method of WSOL.\nCompared to DeiT, although the proposed algorithm shows only a 1\\% performance improvement in Table 9 of the supplementary material, its localization accuracy is 53\\% higher than that of DeiT with IoU threshold 0.5 in Table 10 of the supplementary material, which validates the superiority of the attention link for spatial awareness.\n\nThird, in Figure 4 of the supplementary material, we compare the learning curve of our approach with those of DeiT and ConViT. \nWe can see that our knowledge distillation achieves 70\\% accuracy at about 50 epochs while DeiT needs 120 epochs to reach the same accuracy.
This result validates the rapid convergence of our approach, which stems from the correlation between the attention links posited in our hypothesis.\n\nFinally, we visualize how the attention weights change over the training epochs.\nAs shown in Figure 5 of the supplementary material, the attention weights, which are initially spread out randomly, gradually become correlated between similar depth levels of the ViT and CNN models.\nFurthermore, after 300 epochs of training, the attention weights connecting the CNN models to the heads at the last depth levels of the ViT models decrease, which reflects the reduced influence of CNN activation maps on the high-level ViT attention maps.\nWe added the visualization and the discussion to the revised manuscript and the supplementary material.\n\n[1] "Learning Deep Features for Discriminative Localization", Zhou et al.\n[2] "Evaluating Weakly Supervised Object Localization Methods Right", Choe et al.\n\nQ7. The selective attention link should be discussed in more detail, e.g. why it can achieve better results than the full link.\n\nA7. The weights of the full link are initialized with random values, so training needs initial computation to align these noisy weights before the augmented attention maps become similar to the CNN activation maps.\nIn contrast, when we utilize the selective attention link, the noisy links can be ignored at the initial training phase, which results in reduced computation and stable training.\nThis leads to the improved performance of the selective attention link, which also validates the analyzed correlation between the CNN activation maps and the ViT attention maps.\n\nIn addition, we provide the pseudo-code for searching the selective attention link in the supplementary material.", " We appreciate the reviewer's comment that our method is interesting and its motivation is reasonable. We respond to every comment with additional experiments, and we revised the paper and the supplementary material accordingly.\n\nQ1. Is $w_c$ in Eq.4 end-to-end trainable with Eq.6?\n\nA1. Yes, the adaptive attention link $w_c$ in Eq.4 is trained end-to-end with Eq.6.\nSince $A_c^+$ is differentiable with respect to $w_c$, $L_{att}$ of Eq.5 directly updates $w_c$ to reduce the L2 distance between the CNN activation map and the augmented attention map.\nMeanwhile, $L_{ce}$ also indirectly affects the update of $w_c$ due to its influence on the ViT attention map ($A$) in Eq.4.\n\nQ2. Do the attention map ($A_c$) and activation map ($B_c$) in Eq.5 have the same shape?\n\nA2. From Eq.3, the spatial size of the ViT attention map ($A_c$) is $(P\\times P)$, where $P^2$ is the number of initial input patches.\nThus, we resize each CNN activation map ($B_c$) to $(P\\times P)$ using bi-cubic interpolation.\nThe resizing steps are described in lines 179-181, and we revise the sentence to mention the bi-cubic interpolation.\n\nQ3. The computation complexity of Eq.4 is also a concern. Since $w_c$ is posed for each pixel in $A_(m,n)$, the complexity of $w_c$ will grow exponentially.\n\nA3. 
The sub-indices $(m,n)$ of the attention link $w_c$ and the attention maps $A_c$ index the self-attention head and the depth level, respectively.\nThus, the size of $w_c$ is determined by the channel depth of the CNN model and the number of self-attention heads in the ViT model, and is independent of the spatial size of the attention map.\nWe found that the missing definitions of $M$ and $N$ cause ambiguity, so we add the definitions below Eq.4.\nFurthermore, as we mentioned in line 298, $w_c$ needs only 0.068M additional parameters, which is quite small compared to the parameter size of conventional ViT models.\n\nQ4. Please include the initialization value of $\\lambda$ in Eq.6 and justify the value choice as it is an important hyper-parameter.\n\nA4. We set $\\lambda$ to $2000$ in every experiment scenario, and we apologize for the missing value of $\\lambda$.\nTo justify this choice, we perform ablation studies setting $\\lambda$ to $1500$, $2000$, and $2500$ on ImageNet 10\\%.\nOn ImageNet 10\\%, we obtain top-1 accuracies of 56.5, 64.7, and 64.6 for $\\lambda = 1500$, $2000$, and $2500$, respectively, so the best performance is achieved when $\\lambda$ is set to $2000$.\nSince $\\lambda$ controls the scale of the regularization loss term, its value strongly correlates with overfitting and underfitting of the target model.\nAs a result, $\\lambda$ should be set to avoid both overfitting and underfitting, and we empirically found that $2000$ is a proper choice.\nDue to the lack of time, we could not test more values, but we will supplement the ablation study with additional values of $\\lambda$ in the final version.", " This paper proposes a loss regularization method for training Vision Transformer (ViT) through matching the attention maps of ViT with the activation map of a pre-trained CNN. The authors aggregate ViT's attention maps from different layers into the number of CNN's activation maps and then use their L2 norm as the additional loss regularization of training ViT. It is shown that the proposed method can outperform the selected baseline on ImageNet when the number of training samples is small. Strengths\nThe idea of matching attention maps of ViT with the activation maps looks interesting, and the motivation behind this seems reasonable.\n\nWeakness\n1. How do the authors learn the w_c in Eq. 4, is it end-to-end trainable with Eq. 6 or learned separately? Besides, do the attention map (A_c) and activation map (B_c) in Eq. 5 have the same shape? If not, how do you address it?\n2. The computation complexity of Eq. 4 is also a concern. Since w_c is posed for each pixel in A_(m,n), the complexity of w_c will grow exponentially as P or the number of layers in ViT grows. How will it perform when facing larger or deeper ViT models?\n3. I have checked the manuscript, but I cannot find the initialization value of λ in Eq. 6. Please include it and justify the value choice as it is an important hyper-parameter.\n4. The current description of the experimental setting and results, in my opinion, is difficult for the readers to understand. Please re-organize it - first describe the dataset and different experiment settings, and then explain the results one by one. \n5. I don't think Figure 2 is sufficient to support the two hypotheses in 4.1. Besides, the selective attention link should be discussed in more detail, e.g. why it can achieve better results than the full link.\n6. 
Please clarify whether the baseline model in Table 4 has been trained with a teacher model or not. If not, the comparison may not be fair. Besides, it seems the performance cannot outperform the ConViT with the full train set, and therefore the generality to large-scale data is in doubt.\n\nOverall, although the idea of this paper is interesting, the methodology is not explained or designed properly, the descriptions of the current experimental settings and results are difficult to read, and also the achieved performance is not convincing. I don't think this paper can be accepted, and I tend to vote for rejection. See weaknesses above. I didn't check but I don't think this work will lead to a potential negative societal impact", " The paper introduces a novel regularisation method to train ViT. Specifically, they rely on the activation features from a convolutional neural network to transfer the knowledge and augment the training of the vision transformer. Furthermore, they discuss how to select which links to keep during training in order to make the learning more efficient. Finally, they ablate and evaluate the approach on image datasets such as ImageNet and CIFAR. **Strengths**:\n\n- S1. The approach is simple and effective. Authors join the advantages of CNNs and ViT with their augmentation procedure in order to make learning more efficient. I really like the simplicity of the approach, which also would make it more likely to be used by the community.\n\n- S2. Analysing the most important links is also one extra step in terms of efficiency. The paper does a good job introducing the overall intuition and the general method and then going into link selection.\n\n- S3. The results show how the approach can be specially useful for low data regimes.\n\n- S4. Most of previous works using transformers specially rely on very large datasets while this paper tackles the problem of training in a low data regime.\n\n- S5. Authors provide sufficient experimental evidence to answer the main questions in the paper. The ablation study is complete and useful for the reader.\n\n**Weaknesses**\n- W1. In my opinion the paper avoid the issue of the additional overhead of running the CNN alongside the transformer training. I think the comparison table should take that into account and report FLOPS or runtime for a complete picture.\n\n- W2. The methodology has only been used for classification. However, it seems like other task which require for awarness of the space would be a better fit for such a paper. I wonder if authors have considered evaluating on additional task.\n\n- W3. Authors evaluate on CIFAR and ImageNet, which are very saturated dataset at this point. Although I think the point of the paper is valid, I would appreciate additional evaluations.\n\n- W4. I am missing some qualitative comparison on *why* the techniques improve data efficiency. What is different in the learning curves? How do the attention weights change? \n\n- W5. Have the authors played a bit with the CNN architecture? I wonder if they could safe computation by running a very basic model. My concerns are listed in the weaknesses section. I would expect authors to discuss:\n\n- Include additional metrics in the table for a fair comparison with the baselines. What is the effect of the cost of running the CNN? \n\n- Have the authors considered other task beyond classification? 
\n\n- Have the authors considered using other datasets?\n\n- Have the authors considered including more qualitative analysis of the learned model compared to the baseline? Although in the form authors claim that Section 6 discussed limitations, I do think it would be necessary to extend this discussion for the paper to be accepted. I see Section 6 as a conclusion but no specific mention to the limitation is done.", " This paper introduces a new method to increase data efficiency for Vision Transformers (VTs). The idea of the method is that Adaptive Attention Links (AAL) between the convolutional filters of a CNN and between the attention heads of the VT are trainable parameters which connect these feature maps between the teacher CNN and the student VT, with only a small parameter increase. This way, the locality inductive bias can be learned from the teacher model, decreasing training time. The paper confirms previous results that higher layer attention maps in a VT contain more high-level semantic information than a CNN, but that other layers roughly correspond to the same type of hierarchical information. Extensive experiments show that the method improves on previous distillation approaches between a CNN and a VT. S1: The idea of the paper is novel, to the best of my knowledge.\nS2: The idea is clear and conceptually appealing.\nS3: The paper presents extensive experimentation to support its conclusions (a number of different datasets and variants of models). \nS4: The paper promotes better data efficiency for VTs, which is a well-known issue.\nS5: The paper presents ablations and one example of repeated runs which show stable results (this seems reasonable in order to save computation).\nS6: the method is flexible with regards to the VT student architecture, any model containing hierarchical attention maps can be used, as well as wrt the teacher model: any CNN containing hierarchical feature maps can be used. The down-stream task of the teacher model does not matter.\n\nW1 The main weakness of the paper, in my view, is its clarity. This can be straightforwardly fixed by doing a more thorough language check, for example by using Grammarly or similar. Many sentences are hard to parse, eg lines 320-321, 234-239. Furthermore, the captions to tables and figures should be more informative which would help the reading a lot, right now they are not possible to get the gist from without finding their reference in the text. Also, captions should not be boldfaced, but that is a detail of course. \n\nW2, W3: Other weaknesses are lacking mention of how the hyper parameters lambda were chosen, and a lacking discussion of the potential weaknesses of the method— this should be easily amendable.\n\nW4: No code is released. 1. How was lambda = 0.99 and 0.98 chosen? (L288)\n2. In lines 37-43, do you mean source dataset instead of target dataset?\n3. L175-176, did you find an improvement when using channel-wise attention maps compared to some aggregate version of attention maps?\n4. What is your interpretation of the augmented attention links, what do they represent? Is it mainly about their increased flexibility of representing the CNN maps?\n5. Are the X/Y results in the tables always teacher/student?\n6. What does attention maps “at a similar level” mean? At a similar layer depth?\n7. L266, when you say remarkable results, which results are you referring to then? If you say a strong word such as remarkable it should be clear what you mean, apart from that the result should be strong too, of course. 
\n8. L327-332: do you compare self-supervised approaches to your classically supervised approach here? Or do you make some kind of SS version of your approach? This is not clear\n The choice of the hyper parameter lambda could be more accounted for (did it include tuning, etc). The authors do not state any potential limitations of their work, which seems a little worrying, perhaps, in terms of transparency. There are always limitations. However, if this can be added the work is sound.", " This paper proposes a new knowledge distillation method to improve the training efficiency of ViT. More concretely, the weighted average of the attention maps in the student ViT model is regularized to approach the activation maps of the CNN teacher model, which is the so-called attention link. Based on the empirical observations, the paper further makes the link selective by only building relations between CNN activation maps and ViT attention maps at similar levels and excluding the attention maps at the high levels. Empirically, the proposed method is compared with DeiT, ConViT and other knowledge distillation methods on ImageNet or CIFAR10. In the setting of small-sized data, the attention link is shown to improve the test accuracy with shorter training time and less computation cost. Strengths:\n1. The writing of Sections 1-3 is clear and easy to follow.\n2. The idea of regularizing the global token’s attention maps with the CNN activation maps is novel.\n3. In the setting of small-sized data, the improvements in accuracy and training time are impressive.\n\nWeaknesses:\n(Please refer to the Questions section for more details.)\n1. Some of the tables and figures lack elaboration in the caption.\n2. More empirical evidence is needed to support the superiority of the attention link.\n 1. In Natural Language Processing, knowledge distillation has also been widely adopted in transformer-based models to enhance training efficiency. In particular, MiniLM (Wang et al., 2020) and TinyBert (Jiao et al., 2019) also focus on the regularization of attention maps. I think it’s worth including this line of literature in Related Work.\n2. In Table 1, each CNN block level is associated with different numbers of ViT layer level. And beside the ViT layers at high level, the way how others are assigned to CNN block levels is also different between the alpha-link and the beta-link. I would appreciate it if there is any ablation study or verbal explanation on such a “non-intuitive” design. \n3. In Table 2, could you please explain what the two numbers in each cell stand for? (as it’s not clearly written in the caption or the main text.)\n4. In Figure 2, if my guess about the x-axis corresponding to the ViT layers is correct, I’m wondering how the weights of different heads within each layer are aggregated to produce the plot. Besides, as it’s hard to tell that the block diagonal is generally darker than the side in Figure 2(b) and (c), would you mind elaborating more on how to reach the conclusion that “the ViT attention maps are highly related to the CNN activation maps located at a similar level”? \n5. I would appreciate it if the authors could explain more about the argument in line 239 - “the inductive bias is hard to be trained without a large dataset”.\n6. As stated in Question 1, there are other knowledge distillations methods. For example, they may use a transformer as the teacher model instead of a CNN and also utilize the attention map regularization. 
I’m wondering whether it would be better to add the empirical comparison with these methods. Please refer to the Weakness and Questions section for details." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "qlNe7hkPzf", "i-O8m0K5Z9AL", "cW8NYNyP02d", "pTJg_iFaJTM", "bhz8LzRV8q0", "Xg7j2IIn4y", "Xg7j2IIn4y", "4iP7wmbEuuK", "pTJg_iFaJTM", "bhz8LzRV8q0", "56HfaXRneGe", "Swgcg0vQvD0", "Swgcg0vQvD0", "Swgcg0vQvD0", "56HfaXRneGe", "56HfaXRneGe", "fQMhc4cObUA", "pTJg_iFaJTM", "bhz8LzRV8q0", "bhz8LzRV8q0", "bhz8LzRV8q0", "nips_2022_jAL8Rt7HqB", "nips_2022_jAL8Rt7HqB", "nips_2022_jAL8Rt7HqB", "nips_2022_jAL8Rt7HqB" ]
nips_2022_cj6K4IWVomU
Rotation-Equivariant Conditional Spherical Neural Fields for Learning a Natural Illumination Prior
Inverse rendering is an ill-posed problem. Previous work has sought to resolve this by focussing on priors for object or scene shape or appearance. In this work, we instead focus on a prior for natural illuminations. Current methods rely on spherical harmonic lighting or other generic representations and, at best, a simplistic prior on the parameters. We propose a conditional neural field representation based on a variational auto-decoder with a SIREN network and, extending Vector Neurons, build equivariance directly into the network. Using this, we develop a rotation-equivariant, high dynamic range (HDR) neural illumination model that is compact and able to express complex, high-frequency features of natural environment maps. Training our model on a curated dataset of 1.6K HDR environment maps of natural scenes, we compare it against traditional representations, demonstrate its applicability for an inverse rendering task and show environment map completion from partial observations.
Accept
The paper introduces rotation-equivariant conditional spherical neural fields for illumination priors. Reviewers mostly like the novelty of the proposed approach, its fit for the considered task of illumination priors, its technical soundness, and its experimental evaluation, which is thorough and shows the merits of the approach. The rebuttals to the reviewers were also thorough and addressed the reviewers' concerns well. All in all, this is a conceptually and experimentally solid and interesting paper that merits publication at NeurIPS.
train
[ "5Nh617Vxq6F", "BLyyRGmBm6X", "m91U7bas5X", "QPRoW-lEu4w", "QCN8QTSpJRD", "TJ2XkGmm22", "RkkH4MQVTUP", "ofI1LGV2GQc", "KK88f4KUEtB", "XwS44e70ky", "gijX96DtHC2", "0vnYhBVD64m" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response and the updated manuscript. The rebuttal clearly addresses my concerns, including ablation without equivariance (at different latent code dimensions as well), and implementation details about the choice of latent code dimensions and resolution of environment maps that can help reproduce the paper. The reference format is improved, but it still needs to be fixed (e.g., “et al” in [18], unnecessary {Neurips}, _eprint, and so on). In any way, thank you for the thorough response, my major concerns are clearly addressed, so I would keep my rating.", " We are pleased that our previous replies helped to clarify and promote our contribution and hope our further responses help address the reviewer's remaining concerns.\n\n***\n\n### Similarity to Code-SLAM\n\nCode-SLAM learns a generative model (decoder) of depth maps for indoor scenes. It does so using a convolutional architecture. Hence, a given latent code defines a complete depth map of fixed resolution via the series of convolutional layers. There is no invariance or equivariance built into the representation. On the other hand, our generator is coordinate-based (more precisely, direction-based). For any direction and conditioning latent code, it outputs the value of the signal in that direction. Using a coordinate-based representation means we do not have to choose or fix a specific image resolution in advance (enabling multiresolution training using the same single network). Also, it is this construction that enables the rotation-equivariant representation. So, we see the relevance of Code-SLAM but also argue that our contribution is significantly different.\n\n***\n\n### Reliance on vector neurons\n\nWe want to take this opportunity to further emphasise our difference with vector neurons. Their approach was designed and envisaged for point cloud data. For classification or segmentation architectures, their input is a variable length list of 3D points, and they build upon either PointNet or DCGNN architectures. For implicit neural reconstruction, they build an occupancy network conditioned on a vector neuron latent code such that an occupancy probability is outputted for any point in 3D space. Our set-up is quite different but inspired by the rotation equivariance of the vector neuron representation. We represent a continuous spherical (directional) signal (i.e. an image), seek a different (SO(2)) rotation equivariance, and our coordinate inputs are unit vectors instead of points in 3D space. We do not see reliance on the vector neuron representation as a concern. Instead, we see it as a novel extension of a clever idea that has not been widely picked up by the community yet. Another way to view this is that every other neural network based method uses scalar neurons, but we do not consider this alone as a reason to cite a lack of novelty between methods.\n\n***\n\n### Lack of real evaluation\n\nWe have shown that our model can better represent natural illumination environments using the same parameters as the most widely used representations (spherical harmonics and spherical Gaussians). We have also shown that the latent space is well behaved when optimising for illumination, either to fit the model directly to a partial or complete environment map or when solving an inverse rendering task with other parameters known. We believe that these two conclusions already demonstrate that replacing SH or SG lighting with RENI in an inverse rendering problem will improve performance. 
The model's capability to better represent lighting means that the potential accuracy of the estimated lighting is higher. Due to the entangled nature of the inverse rendering problem, improving any one of geometry, material properties, or lighting means that the estimation of the other two quantities will also be more accurate. While we agree that improving the reconstruction of high-frequency details remains an open problem, even without this, our lighting approximation is better than SH or SG, so we believe our conclusion already holds. We also agree that including further, more challenging inverse rendering problems would be a good addition to our evaluation, but within the 9-page limit we did not feel this was possible while still doing justice to explaining our method (and note that another reviewer felt we should have included more description of supporting background methods).\n", " Thank you very much for your very thorough response to my concerns. I read all the responses from the authors and the other reviewers' comments carefully and have become more positive about this work. Especially, the comparison of SO(2) over SO(3) equivariance was very helpful because it gave more clarity on the value of providing rotational equivariance to the generative model of the illumination map. I agree that the presence or absence of the gravity direction is not a fundamental issue either.\n\nI have to apologize that I incorrectly mentioned CNN-SLAM, but what I should have mentioned was Code-SLAM (Bloesch2018). This method does not regress depth values pixel-wise from the image, but rather generates only plausible depth maps from compact latent variables by optimizing them with a pre-trained decoder, which I believe has some similarities in terms of methodology and motivation with the proposed method. Also, the fact that the proposed method relies strongly on vector neurons, though the domain is not a point cloud, is still a concern.\n\nBut more than novelty, my major concern still lies in the lack of real evaluation. Though the information given by the authors about other inverse rendering tasks was promisingly useful, I still wish it had been properly verified in the submitted paper. It is often said that what works in theory can often not work in reality. Of course, it is not wrong to emphasize the clarity of the storyline, but I believe evaluation on real data is also important, especially for a learning-based approach, which easily overfits to a specific domain of data. The discussion about the high-frequency components and the benefit of the natural illumination prior is somewhat persuasive, but I also think it should have been actually verified.\n\nIn any case, the authors have given me great clarity on many of my concerns. Thank you very much for your very thoughtful response. I will carefully discuss in the post-rebuttal phase whether or not to change my evaluation. ", " We sincerely thank the reviewer for the constructive feedback. We hope our responses address the reviewer's questions and concerns.\n\n***\n\n## 1. Ablation without rotation equivariance.\n\nHere we provide results for models with SO(2), SO(3) and no equivariance at three sizes of latent code dimension $D$. For the model with no equivariance, we augmented the dataset with rotations of the images at increments of $0.785$ rad, yielding a training dataset of 13384 images. 
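\nTo make the augmentation concrete: rotating an equirectangular environment map about the vertical axis corresponds to a circular shift along the longitude (width) axis. The following is a minimal NumPy sketch of this idea; the function and variable names are illustrative assumptions, not our actual training code.\n\n```python\nimport numpy as np\n\ndef rotate_equirect(img, angle_rad):\n    # img: (H, W, 3) equirectangular map. Rotation about the\n    # vertical (gravity) axis is a circular shift along width.\n    w = img.shape[1]\n    shift = int(round(w * angle_rad / (2.0 * np.pi)))\n    return np.roll(img, shift, axis=1)\n\n# e.g., augment with rotations at increments of 0.785 rad:\n# rotations = [rotate_equirect(img, k * 0.785) for k in range(8)]\n```\n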
The SO(2) case performs best for all latent code sizes, and both the SO(2) and SO(3) variants outperform the model trained purely using augmentation whilst using significantly less data.\n\n**Table-1** *The mean PSNR on the test set for models with varying levels of equivariance. Error calculated in LDR sRGB space.*\n\n| Equivariance | D = 27 | D = 108 | D = 147 |\n| ------------------- | -------------- | -------------- | -------------- |\n| None | 11.32 | 15.85 | 14.64 |\n| SO(2) | **17.02** | **19.58** | **19.97** |\n| SO(3) | 14.00 | 18.27 | 17.45 |\n\n***\n\n## 2. More preliminaries.\n\nWe agree this would be a good addition and will look to add this to the supplementary. We have open-sourced the code and models and hope this will allow the easier development of derivative works and enable RENI to function as a plug-in replacement for SH and SG in many inverse rendering pipelines.\n\n***\n\n## 3. Mistakes in references.\n\nThank you for highlighting this issue. This has now been fixed: all references now show the full list of authors' names, and any that were missing the name of the conference or journal have been updated.\n\n***\n\n## 4. Choice of latent code dimensions in Figure 1 and Figure 3.\n\nThe choice of latent code dimension of $3 \\times 20$ in Figure 1 was an aesthetic one, balancing a high number of vectors in the plot of the latent code against over-crowding. We did not include the results for this latent dimension size in Table 1 because it is not an exact size match for SH, the nearest of which would be a 4th-order SH with $N = 25$. The choice not to include the $N = 100$ case in Figure 3 was made to reduce the size of the figure.\n\n***\n\n## 5. Resolution of environment maps.\n\nThe full resolution of all the environment maps used during training and testing is $64 \\times 128$. However, during training, we use a progressive training regime, incrementally increasing the resolution of the images from $16 \\times 32$ up to the final full resolution. This multi-resolution training enables the network to quickly learn the low-frequency content of the images early in training and progressively fit to higher-frequency content. ", " We sincerely thank the reviewer for the constructive feedback. We hope our responses address the reviewer's questions and concerns.\n\n***\n\n### 1. More suited to a graphics or vision venue.\n\nWhile we agree the work could potentially appeal to a graphics or vision audience, we feel the impact of a rotation-equivariant generative model for spherical signals may lie beyond just graphical applications. In addition, we believe there is machine learning methodological novelty in our framework. Finally, much of the recent work on distributed representations for inverse rendering has been published at NeurIPS.\n\n***\n\n### 2. Requires studying the background literature.\n\nWe agree with the reviewer and hope that our sharing of the source code and models enables easier development of derivative works and enables RENI to function as a plug-in replacement for SH and SG in many inverse rendering pipelines.\n\n***\n\n### 3. The contribution is to some extent incremental.\n\nWe agree with the characterisation that a generative model for spherical domains is an interesting problem class, but believe that this does amount to methodological novelty in machine learning. No one has used the vector neurons framework for representing domains other than point clouds to the best of our knowledge. 
We are, therefore, the first to do so for image data (specifically spherical images using a directional neural field) and propose the more complex variant to handle SO(2) equivariance. \n\n***\n\n### 4. Additions required to use in a more complex inverse rendering scenario.\n\nOcclusions and inter-reflection effects are not handled by the illumination model itself, but by the way it is applied in the scene, i.e. whether global or local illumination is rendered. Both are possible using RENI, and for complex rendering we would also need explicit modelling of materials, etc. While this is not yet the case in many frameworks, we are convinced that our model will be a key component for going in this direction, since specularities are a particular weakness of SH and SG.", " ### 6. Ablation of equivariance.\n\nHere we provide results for models with SO(2), SO(3) and no equivariance at three sizes of latent code dimension $D$. For the model with no equivariance, we augmented the dataset with rotations of the images at increments of $0.785$ rad, for a training dataset size of 13384 images. The SO(2) case performs best for all latent code sizes, and both the SO(2) and SO(3) variants outperform the model trained purely using augmentation whilst using significantly less data.\n\n**Table-1** *The mean PSNR on the test set for models with varying levels of equivariance. Error calculated in LDR sRGB space.*\n\n| Equivariance | D = 27 | D = 108 | D = 147 |\n| ------------------- | -------------- | -------------- | -------------- |\n| None | 11.32 | 15.85 | 14.64 |\n| SO(2) | **17.02** | **19.58** | **19.97** |\n| SO(3) | 14.00 | 18.27 | 17.45 |\n\n***\n\n### 7. Mistakes in references.\n\nThank you for highlighting this issue. This has now been fixed: all references now show the full list of authors' names, and any that were missing the name of the conference or journal have been updated.\n\n***\n\n### 8. Please explain more clearly what new ideas in the paper could be useful in the community.\n\nThis work is the first example of a rotation-equivariant generative model for spherical signals. No one has used the vector neurons framework for representing domains other than point clouds to the best of our knowledge. We are, therefore, the first to do so for image data (specifically spherical images using a directional neural field), and we propose the more complex variant to handle SO(2) equivariance. The other reviewers seem to agree with our view that this model is useful for the community.\n\nFor inverse rendering, our primary requirement is a low-dimensional parametric representation (since it is these parameters we will have to optimise at test time) that leads to renderings with low error. We do not necessarily need a perfect reconstruction of the environment image. Our generative model for illumination environments can also be used to quickly generate realistic synthetic data, which can be used to make other models more robust to varying illumination conditions. More generally, the same equivariant spherical generative model could find application in modelling any other spherical signal, such as 360 video, geospatial data, distributions of directional data, and so on.\n\n***\n\n### 9. Please clarify more how to represent the high-frequency lighting information with the proposed method. Also, please provide more theoretical evidence of how useful individual components in the proposed method (i.e. 
latent representation and rotational invariance) are in inverse rendering in practice.\n\nThere are several approaches we see to enable RENI to capture higher-frequency lighting information. For example, with a larger dataset, we expect that using a FiLM-Conditioned SIREN could enable a more expressive model. We also envisage using our rotation-equivariant conditional spherical neural fields as the generator in a GAN framework and expect this to enable the reproduction of high-frequency illumination components. \n\nThere are many applications in inverse rendering where our latent representation and rotational invariance can be used. For example, our latent representation of natural illuminations could help resolve albedo-illumination ambiguities. When estimating a person's skin albedo from a single image, there is ambiguity between the contributions of skin colour and illumination. This ambiguity is largely unaddressed in current research, with practically all current models biasing strongly towards light skin colours. This bias results in dark skin tones being estimated as lighter skin under dark illumination. Our prior for natural illuminations could potentially alleviate this issue. At inference time, our prior would restrict the search space of illuminations by providing an expressive illumination model that encodes only natural illuminations in a small number of latent parameters, thus enabling models to converge quickly on the most likely explanations. Alternatively, at training time, our model could be used to produce large amounts of synthetic training data, where the rotation-equivariance of our model would enable simple rotations of those illumination conditions. Furthermore, as shown in Table 1, our equivariant models also produce more expressive latent spaces with significantly less data and time required to train. \n\n***\n\n[1] Taco S. Cohen, Mario Geiger, Jonas Köhler, and Max Welling. Spherical CNNs. In International Conference on Learning Representations, 2018.\n\n[2] Wenqi Xian, Zhengqi Li, Matthew Fisher, Jonathan Eisenmann, Eli Shechtman, and Noah Snavely. UprightNet: Geometry-aware camera orientation estimation from single images. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019.", " We sincerely thank the reviewer for the constructive feedback. We tried our best to address the reviewer's concerns and questions and hope the reviewer finds our responses satisfactory. \n\n***\n\n### 1. The novelty of the idea in relation to StyleNeRF and CNN-SLAM.\n\nWe could not see the relation of CNN-SLAM to generative models. If the reviewer can add an additional explanation, we are happy to discuss it further. However, taking the general point that building a generative model of illumination environments is no different from building generative models of other domains such as images or NeRFs, we slightly disagree. Illumination environments are spherical and have a high dynamic range. These two properties require special handling. A Spherical CNN [1] based GAN with an HDR loss would, to the best of our knowledge, be a novel solution in itself. However, we took a neural fields based approach because it enables rotation equivariance in the representation. It also avoids having to choose the sample grid in advance, and therefore 1. has the potential to more compactly represent the spherical signal by assigning network capacity adaptively and 2. allows multiresolution training with a single network.\n\n***\n\n### 2. 
Lack of work done to address the loss of high-frequency components.\n\nWe did implement and test a variant of RENI that used FiLM-Conditioning in the hopes that this model would be capable of capturing higher frequency signals. However, we found this not to be the case due, we believe, to the small size of our dataset. An alternative we are interested in trying in future work will be to implement this rotation-equivariant spherical neural field in a GAN framework. Using a spherical CNN as the discriminator will help the model capture higher frequency, more detailed images. \n\n***\n\n### 3. Our claim of a natural illumination prior.\n\nWe have an explicit prior over natural illumination since we assume the elements of $\\mathbf{Z}$ are normally distributed (line 143), and our decoder learns the nonlinear mapping from this latent space to the spherical signal value. We can therefore sample from this prior distribution to generate realistic illumination environments or use a prior loss on an estimated latent code to regularise inverse tasks.\n\nWhilst other generative models, e.g. StyleGAN, when trained on images of the natural environment, would also inherently capture statistical regularities of the illuminations present in those images, these would require large amounts of data, something prohibitively challenging to obtain for HDR equirectangular environment maps, and would not be rotation-equivariant. \n\nBy training on HDR equirectangular images and designing our model to be rotationally equivariant, our model can easily replace SH and SG as the representation of distant illumination in many inverse rendering tasks. Furthermore, when sampled, our model will only produce plausible environment maps, which is highly useful for resolving albedo-illumination ambiguities in inverse rendering. We feel this combination of benefits warrants using the term 'natural illumination prior'.\n\n***\n\n### 4. Negative consequences of rotational invariance.\n\nThe reviewer raises a valid point that not all digital cameras have a gyro sensor; however, we believe that in such a case, this could be compensated for via external estimation of the gravity vector using a method similar to [2] or via explicit optimisation of the rotation angle during inverse rendering. Our key motivation though is that the captured environments themselves do have a canonical up direction and that by respecting this we learn a more parsimonious and expressive model. \n\n***\n\n### 5. Lack of real-world experiments.\n\nThe core proposition of our paper was to introduce a new generative model for spherical signals and that a demonstration of its use in a simple inverse rendering problem would suffice to show its value in this domain. We assume that RENI will perform comparably or superiorly in any framework where SH or SG is used (assuming that the illumination is natural). We spent quite some time deciding how to tell the story in this paper and thought it would be most effective if isolated from other potential extensions and applications. This aids in keeping the story clean and not diluted in further complexity and choices of frameworks that RENI could be combined with. We do plan in future work to implement RENI in more substantial inverse rendering applications with real-world 'in-the-wild' images.\n", " We sincerely thank the reviewer for their constructive feedback and are pleased they see the potential of our work. We hope this response answers the reviewer's remaining questions and concerns.\n\n***\n\n### 1. 
An ablation of neural field sizes.\n\nWe have run an ablation of different model sizes, increasing and decreasing the number of layers in the network. With the smaller network, reconstruction quality suffers because the representational power of the network is reduced. The larger networks, on the other hand, over-fit on the training data, and optimising latent codes to fit unseen images becomes more challenging, perhaps due to the small size of the dataset.\n\n**Table-1** *The mean PSNR on the test set for varying network and latent sizes. Error calculated in LDR sRGB space.*\n\n| # of Hidden Layers | D = 27 | D = 108 | D = 147 |\n| --------------------------------- | ------------- | -------------- | -------------- |\n| 3 Layers | 16.25 | 18.29 | 18.57 |\n| 5 Layers | **17.02** | **19.58** | **19.97** |\n| 7 Layers | 16.38 | 18.13 | 18.15 |\n\n***\n\n### 2. Drop in performance in SH from 147 to 300.\n\nThis issue was due to a bug in the generation of the SH representation. Line 146 of the hdri\\_dataset.py file applies the sine-weighting function to the equirectangular image; however, this was applied again in the function getCoefficientsFromImage() called on line 149 of hdri\\_dataset.py. All results affected by this bug have been corrected in the updated version of the paper. No conclusions are altered by these changes.\n\n***\n\n### 3. Reddish colours in Figure 4.\n\nThe reviewer is correct and we agree this is unusual. We think this may be due to our use of HDR images. The source and particularly the target have very bright sun regions; even though our loss is in log space, this likely dominates the reconstruction, meaning slight colour tone errors in the darker regions are not penalised. Including the cosine loss function during training might help resolve this. We have moved this example to the supplementary as a failure case and replaced it in the main paper with a different example that does not have an artefact.\n\n***\n\n### 4. Include SG in inverse rendering comparison.\n\nWe agree that this would be a good addition to the paper and have included this as Figure 6 in the updated supplementary. \n\n***\n\n### 5. Small performance gains for size of latent code and limiting factors in higher frequency details and approaches to solving these.\n\nFirst, we believe that this observation demonstrates a strength of our model. It already saturates generalisation ability with a relatively low dimensional latent space due to its nonlinear and rotation equivariant representation. In other words, our model is efficient in learning common low frequency shared components but, due to the limited size of the training set, it cannot learn any higher frequency components even with a higher dimensional latent space. \n\nHowever, we agree that some consideration of how to improve on this front is important and we have already begun to explore two ideas. First, as mentioned in the paper, we tried a FiLM-Conditioned SIREN and believe this should lead to better reconstruction of high frequency details (though likely still requiring a larger training dataset). Second, we believe that the introduction of a discriminator applied to the images generated by our model would help it to learn high frequency features in order to increase realism. This would require the design of a rotation-invariant discriminator, probably in the form of a spherical CNN [1].\n\n***\n\n[1] Taco S. Cohen, Mario Geiger, Jonas Köhler, and Max Welling. Spherical CNNs. 
In International Conference on Learning Representations, 2018", " The paper proposes a neural representation based on spherical neural fields \nfor natural illuminations that are compact and rotation equivariant. \nIt proposes a method based on variational auto-encoders to learn such a \nrepresentation. The learned model can act as a statistical prior for natural \nilluminations. The experiment results show that the proposed neural\nrepresentation can better represent natural illuminations than traditional \nspherical lighting representations such as spherical harmonics and spherical\ngaussian. The paper also shows that the learned model can be incorporated in\ninverse renderings tasks as illumination priors and leads to better lighting\nestimation. \n Strengths\n\n1. The paper proposes a novel solution to an important problem. The idea of \nusing spherical neural fields to represent natural illuminations is inspiring,\nand the design of the input to make the represent rotation equivariant is \nalso technically sound. The paper also proposes to apply variational\nauto-encoders to learn such a model, which also makes sense to me.\n\n2. The paper performs a thorough evaluation against baseline methods to show the\nadvantages of the proposed representation. It also performs different ablation\nstudies to validate the capability of the representation and different design\nchoices. \n\n3. Such a representation has great potential in many applications, especially in \ninverse rendering tasks. The paper does experiments which show that the learned\npriors help estimate more accurate lighting.\n\nWeakness:\n\n1. The paper evaluates the performance of the model with different latent code\nsize. I am also wondering how the performance changes with different sizes of \nthe spherical neural field network. This is an important factor because ideally,\nwe would like the network to be as small as possible so that the cost of\nevaluating such a network would be minimal, especially when using it in inverse\nrendering tasks.\n\n2. In Table 1, the performance of SH drops when D increase from 147 to 300. What is\nthe reason for it?\n\n3. In the third row of Figure 4, why there will be reddish colors that do not\nexist in the two source environment maps?\n\n4. In the inverse rendering tasks, the paper only compares to spherical\nharmonics. It's also worth adding comparisons to spherical gaussians as they are also \ncommonly used in inverse rendering tasks as in [60].\n\n5. When fitting the environment maps in Figure 3, it seems that the performance \nof the proposed method does not improve much with the increase of the coding\nsize, while SG and SH can better approximate the shape of the lighting. What \nare the limiting factors that prevent the method from further reconstructing more\naccurate environment maps? How such a problem can be tackled?\n\n\nOverall, I like the idea of the paper, and the evaluations are also thorough. I\nbelieve that such an illumination prior can be useful in many tasks such as\ninverse rendering and is worth introducing to the community. \n See above. The limitations are well discussed.", " This paper presents the neural illumination model based on the variational auto-decoder. By introducing the rotational invariance to the latent variable, the model can represent plausible environment map that could be useful for the inverse rendering task. The evaluation showed that the proposed model has better representation ability with a compact latent code. 
Strengths:\n\n- The attempt to represent light sources in a generative model is very interesting. Adding rotational invariance to the latent space in a generative model is also an interesting attempt.\n- The authors properly addressed potential limitations in the method.\n- The description of the proposed method is clear and almost no ambiguity exists.\n\nWeakness:\n\nThis research consists of two elements: defining the lighting with a generative model and making the model rotationally invariant (around the gravity axis). Therefore, I would like to address concerns about each of these factors. \n\n- As for the first part, instead of directly optimizing the pointwise function, learning a latent space (i.e. generative model) for a target domain from external data in advance and optimizing its latent variables to generate an instance that satisfies specific criteria have already been done in many computer vision tasks such as CNN-SLAM and StyleNeRF. Therefore, essentially using a similar strategy to generate an environment map doesn’t seem to be a very novel idea. Although an embedded lighting space can indeed represent a wide range of environment maps with fewer parameters, the spaces generated in this way are generally limited to the low-to-mid-frequency range as was mentioned in the paper. Nevertheless, there is little effort to express high-frequency components, as has been done in recent generative models such as StyleGAN. Furthermore, I feel that claiming a contribution for the learnt latent space as “natural illumination prior” is somewhat of an overclaim, as it is true of all data-driven models including other than generative models.\n\n- As for the rotational invariance, I recognize the advantage of being able to learn a latent space with a small number of data, including overlap due to rotation, but I feel that the negative effects of this constraint are not small. As authors have properly identified this in the paper, in real inverse rendering applications, it is very unusual that the camera is parallel to the ground, and if we want to use the environment map generated by the proposed method, we need to compensate for the gravity direction for the coordinate system transformation. But not all digital cameras unfortunately have a gyro sensor. In addition, the realization of rotational invariance is almost entirely based on existing methods, except for the transfer of coordinates to angle representation, so there seems to be little novelty in itself though it seems novel that this invariance is plugged in the generative mocel.\n\n- Due to the lack of real experimental results about the inverse rendering task, it is not clear how useful this model in the real applications (e.g., how the reconstruction accuracies of other attributes such as shape and reflectance improve). In addition to synthetic examples as presented in the paper, there should have been results showing the superiority of the proposed method in real experiments.\n\n- Authors claimed that the O(n) trick proposed in [15] can resolve the scalability issue but have not been verified in the paper. The paper also didn’t show the comparison between SO(2) and SO(3) invariance.\n\n- There are too many mistakes in references. - Please explain more clearly what new ideas in the paper could be useful in the community. \n- Please clarify more how to represent the high-frequency lighting information with the proposed method. 
Also, please provide more theoretical evidence of how useful individual components in the proposed method (i.e. latent representation and rotational invariance) are in inverse rendering in practice.\n- If there is a misunderstanding, please correct me on the concerns pointed in weakness.\n The authors describe limitations and negative societal impact.", " The paper proposes a generative model for illumination (spherical incident radiance fields of incoming light from sources at infinite distance, not interacting with the scene). It uses a vector-neuron-based network (encoding the illumination function in MLPs with SIREN-activations), which provides SO(3)-equivariance (of which the SO(2)-part is used/relevant in practice) which is trained as a variational \"auto-decoder\", a variant of a VAE that does not require the encoding part (which is harder to implement in an equivariant way). The model is trained on a set of HDR environment maps and can later be used for inference task. Applications demonstrated include inverse rendering of objects with local illumination models.\n\nThe main contribution seems to be the design of a network representing a statistical illumination prior that is also equivariant, i.e., can learn lighting scenarios under various rotations without (cumbersome) data augmentation. According to the paper (I am not familiar enough with the literature to confirm this), only few pieces of prior work have considered priors on illumination conditions but had less sophisticated solutions in terms of representation and handling large dynamic range. **Strength:**\n- The paper proposes a sophisticated system with many complex parts that are carefully tuned to fit together. In particular, maintaining equivariance at all stages involves some non-straightforward and recent techniques.\n- Handling SO(3)-invariance is hard (and group-theoretic \"brute-force\" solutions based on Wigner-Matrices easily become conceptually very challenging), and the proposed approach appears to navigate around the difficulties well, providing a practical solution, while still remaining expressive and accurate.\n- The method apparently does not have fixed resolution limits that linear representations would be prone to (even if just serving as output layer of a more complex network). Functional complexity is still limited by scaling issues (Gram matrix).\n- Results are good, surpassing linear SH and non-linear SG base-lines numerically. Visual results are also plausible/convincing.\n- The paper is very well written.\n\n**Weaknesses:**\n- The paper is mostly a \"systems\" approach to solve a specific application problem. While very well executed, the broader impact (methodological novelty) to machine learning is limited (although non-negligible, as generative models on spherical domains might be a relevant class of problems). One could debate whether a graphics or vision venue would be a better fit for this topic.\n- Quantitative evaluations are limited to SH/SG base-line methods; however, it is probably difficult to find better comparisons due to limited prior work and complexity of the overall setting.\n- Despite good writing, the compositional nature of the contribution requires studying the background literature (in particular, vector neurons and auto-decoders) in depth to get a full picture. 
Again, this is probably hard to avoid.\n- The contribution is to some extend incremental (but solid, nonetheless).\n If used in an inverse-rendering scenario (say, reconstruction from multi-view photography), what would be required (in addition) to employ the method)? As far as I could see, occlusion and interreflection effects are not modeled (which is fine, but it might be interesting to understand if the method is directly applicable to more challenging tasks in practice). I have not found major limitations that are not discussed adequately. The discussion of broader social impact also appears adequate to me.", " This paper proposes a generative model for natural illumination using a neural field representation based on a variational auto-decoder. The key idea is that natural illumination is highly structured (e.g., lighting comes from above), which has a prior that can be learned and represented well by a generative model. Also, natural illumination generally has a geometric symmetry that a rotation with respect to the vertical axis (i.e., gravity axis) is equally likely, which can be used to restrict the possible illumination space. The proposed method is based on this property and is rotation-equivariant by adding a rotation-invariant transformation on the input direction and the feature vector before the network. The method is demonstrated by extensive evaluations both qualitatively and quantitatively. ### Strengths\n- This paper is well written. It is a good primer for understanding the structure of natural illuminations and their representation including Spherical Harmonic (SH), Spherical Gaussian (SG). The motivation of rotation invariance and equivariance is well presented, and the core idea of SO(2) equivariance is described intuitively and clearly.\n- Promising results. The proposed method shows promising results both qualitatively and quantitatively as shown in Table 1 and Figures 3-5. Figure 3 shows that the proposed method RENI captures high-frequency detail such as the primary light source (i.e., sun) more accurately compared to SH and SG where the sun is more blurry. The interpolation and inpainting results in Figures 3 and 4 are also promising and interesting.\n- Extensive evaluation. The paper provides extensive evaluations including interpolation, inpainting, and inverse rendering, as well as the comparison to other methods (e.g., SH, SG). The experiments are also done with different dimensions and comparisons are conducted accordingly with various latent dimensions.\n\n### Weaknesses\n- Ablation study is missing. While the paper provides extensive evaluations of a range of tasks, an ablation study is missing. A comparison to the baseline model without rotation-equivariance would help show the efficiency of the proposed method in restricting the space of natural illuminations.\n- Some preliminaries can be explained more in detail. As the proposed method has some overlap with Vector Neurons (e.g., invariant layer), adding preliminaries regarding Vector Neurons would help readers better understand the proposed method.\n- Reference format. Although it is not directly related to the proposed method, the reference section should be fixed. Most references miss the name of the conference or journal, and the names of authors are replaced by “et al”, which should be fixed to the full list of the authors. - Figure 1 shows the result with 3 x 20 latent code, but all the results used in Section 4 (Evaluation) has different latent dimensions (i.e., N = 9, 36, 49, 100). 
Is there a special reason why Figure 1 has a different dimension? Also, Figure 3 does not show the case of N = 100, which is in the quantitative comparison in Table 1.\n- What is the resolution of the environment maps that are used in the training? Do all the images have the same resolution? If not, how is it sampled in the training? The limitations and impact are well described including the memory footprint of the gram matrix, and the possible misalignment of the y-axis in the image. Broader impact and possible bias in the illumination data (i.e., captured in Europe) are also addressed properly." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 2, 4 ]
[ "QPRoW-lEu4w", "m91U7bas5X", "TJ2XkGmm22", "0vnYhBVD64m", "gijX96DtHC2", "RkkH4MQVTUP", "XwS44e70ky", "KK88f4KUEtB", "nips_2022_cj6K4IWVomU", "nips_2022_cj6K4IWVomU", "nips_2022_cj6K4IWVomU", "nips_2022_cj6K4IWVomU" ]
nips_2022_8LE06pFhqsW
E-MAPP: Efficient Multi-Agent Reinforcement Learning with Parallel Program Guidance
A critical challenge in multi-agent reinforcement learning (MARL) is for multiple agents to efficiently accomplish complex, long-horizon tasks. The agents often have difficulties in cooperating on common goals, dividing complex tasks, and planning through several stages to make progress. We propose to address these challenges by guiding agents with programs designed for parallelization, since programs as a representation contain rich structural and semantic information, and are widely used as abstractions for long-horizon tasks. Specifically, we introduce Efficient Multi-Agent Reinforcement Learning with Parallel Program Guidance (E-MAPP), a novel framework that leverages parallel programs to guide multiple agents to efficiently accomplish goals that require planning over $10+$ stages. E-MAPP integrates the structural information from a parallel program, promotes the cooperative behaviors grounded in program semantics, and improves the time efficiency via a task allocator. We conduct extensive experiments on a series of challenging, long-horizon cooperative tasks in the Overcooked environment. Results show that E-MAPP outperforms strong baselines in terms of the completion rate, time efficiency, and zero-shot generalization ability by a large margin.
Accept
This paper deals with complex long-horizon tasks with multi-agent RL. The authors propose the E-MAPP method, which leverages parallel programs to guide multiple agents with goals to accomplish the task jointly. Generally, this paper presents an interesting idea and has sound technical contributions. The presentation is a bonus point of this paper. The rebuttal mostly eases the concerns of the reviewers. As a result, all the reviewers vote for an acceptance of this paper. The major weakness of the proposed method lies in the inconvenience of applying E-MAPP to a new environment or task, since it requires a huge amount of work. Perhaps for this reason, the experiments are conducted on the Overcooked v2 environment only. In sum, I think this is an interesting paper tackling a type of challenging task and thus recommend an acceptance of this paper.
train
[ "ABHKPGESPQ", "RfikRszirVV", "N_hNxASIfKS", "aBEEmOx2aZy", "hPjFyilPyv2", "S1fHcsbsbg", "dlNHsV0sXK", "C_iJXUI9WuU", "JEtDjQWkHy2", "4d39UPGM5rZ", "JmjhIdXayWV", "hJBx41_JTt9", "MGMt1ghKVa" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your extensive rebuttal and additional experimentations; I especially appreciate the results showing performance in the partially-observed setting. As several of my proposed weaknesses have been addressed I will increase my score to a 6. I struggle to go above this score for many of the same reasons addressed by Reviewer KHSD: it seems like applying E-MAPP to a new setting will require a huge amount of work (e.g. defining primitives, pretraining modules, and determining which assumptions can be leveraged to lower subtask allocation time). ", " We would like to first thank you again for your constructive comments and helpful suggestions. Since we are near the end of the discussion phase, we would like to post a follow-up response.\nIn our previous response and our revision, we have addressed your following concerns:\n\n- We have conducted additional experiments and added additional analysis to show that our method can scale to more complex environments. (e.g., partially-observable environments, environments with larger number of agents)\n- We further refined the paper by fixing the typos and clarifying some facts.\n\nDo you find our responses satisfactory? Or if you have any other suggestions on further improving the manuscript, it would be great if you can post them during the discussion phase. We are more than happy to add them to our paper and submit a new revision before the discussion phase ends. Thank you!\n", " Thank you for your extensive rebuttal!\n\nMy concerns have been sufficiently addressed to warrant bumping this up to a 6. \n\nI still think this approach is limited by knowing the correct subtask primitives, and look forward to further work addressing this. ", " Q: Lines 156-159 Things are quite vague, give an example.\nA: Here is an example explaining the program executor’s rule. If the program pointer points to “Serve(Onion)”, but a dish of tomato is served(wrong subtask is completed!), the program executor will terminate the program and end the current episode.\n```\nIf (is_ordered(Onion)):\n Serve(Onion) < - - - - - - - - - - - - - - -\nElse:\n Serve(Tomato)\n```\n\nQ: Lines 181-182 An ablation of this choice would be nice. (Do you also use self-imitation learning with the MAPPO baseline? If not, what is the impact of self-imitation learning alone?)\n\nA: Yes. We use self-imitation in the MAPPO agent and the NL-guided agent. We will add it to the paper.\n\nQ: Why apply log to feasibility and reachability and not cost-to-go?\n\nA: Intuitively, an unfeasible subtask should not be allocated to any agent. The feasibility and reachability ranges from 0 to 1. Applying log to the feasibility function and the reachability function can bring a huge cost to an unfeasible subtask, thus preventing it from allocating to any agent.\n\nQ: What's the definition of \"novel\" here?\n\nA: The maps in the testing environments are unseen during training, thus requiring the agent to generalize to new maps.\n\nQ: Table 3 The error bars for the average scores seem huge?\n\nA: Yes. The testing maps are diversified(i.e., object positions are random), leading to a huge variance among scores on different maps.\n\nMisc: We have fixed the typos and the improper phrasing you mentioned.\n", " Dear reviewer GMQT, thank you for your detailed and thorough review. 
We seek to address each of your concerns with the following responses:\n\nQ: This work appears to depend on a very large number of pretrained components: perception modules, single-agent policies, and each of the three auxiliary functions (reachability, feasibility, and cost-to-go). That these all must be trained in different phases with separate datasets and different training techniques (e.g. self-imitation learning) seems to make it quite challenging to E-MAPP in new settings.\n\nA: Although our framework is modular, the training process is natural. The perception module and the auxiliary functions are all trained in a supervised manner, which is efficient (within several hours) and accurate (with validation accuracy over 90%). The training techniques such as self-imitation used in E-MAPP policy learning are also commonly used in standard multi-agent RL algorithms including MAPPO.\n\nQ: The current strategy for subtask allocation, from my understanding, requires searching over a search space that grows exponentially in the number of tasks and agents (M^N possible allocations of N agents to M tasks). This appears to limit the applicability of E-MAPP to tasks with larger numbers of agents.\n\nA: With some assumptions, we can obtain a polynomial-time subtask allocator that scales well. Please refer to the \"Computational complexity of subtask allocation\" section in the general response for further details.\n\nQ: Fully observed?\n\nA: Although the experiments are conducted in a fully observable setting, E-MAPP can also be applied to partially observable environments. Please refer to the \"Why fully observability\" section in the general response.\n\nQ: The need to train many separate components to high levels of accuracy suggests that E-MAPP may not work well in complex, high-dimensional, environments (e.g., learning from pixels or learning in partially observed environments), where even training the perception modules may be very challenging.\n\nA: While E-MAPP does require a few pretrained components, we argue that it can scale well even in the face of complicated tasks. Specifically \n\n1) For branching subroutines (e.g., IsOnFire()), we keep a memory to track and constantly check whether the previous branching output is correct. For example, if the perception module erroneously outputs True for IsOnFire() under some unseen states, in the next few steps, it will correct its own error with high probability and guide the agent to the correct branch promptly.\n2) We use a hyperparameter “maximum timesteps of tries” in the subtask allocator to prevent agents from attempting unfeasible subtasks. This reduces the impact of inaccurate feasibility/reachability functions.\n\nQ: Explain the continuous control experiment.\n\nA: The continuous control task requires two arms to concurrently stack two piles of blocks. We include this task for showing that parallel programs can also be extended to continuous control, and are useful tools in other domains. A program example is\n```\nparallel:\n stack(yellow, green) # stack yellow block on green block\n stack(red, blue)\n```\nQ: The inputs given to the various baseline models (MAPPO and the model with NL guidance)?\n\nA: The MAPPO agent is trained separately for each task. The natural language-guided model takes in a goal described in natural language and an observation. Then, it learns a goal-conditioned policy based on MAPPO. The goal is encoded with a pretrained sequence encoder(BERT) and a learnable MLP that shares structure with that in E-MAPP. 
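In sketch form, this goal encoder could look as follows (a hypothetical reconstruction: the [CLS] pooling, layer sizes, and freezing of BERT are our assumptions, not the exact released implementation):\n```\nimport torch.nn as nn\nfrom transformers import BertModel, BertTokenizer\n\nclass GoalEncoder(nn.Module):\n    def __init__(self, hidden_dim=256):\n        super().__init__()\n        self.tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\n        self.bert = BertModel.from_pretrained('bert-base-uncased')\n        for p in self.bert.parameters():  # keep the pretrained encoder frozen\n            p.requires_grad = False\n        self.mlp = nn.Sequential(\n            nn.Linear(768, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, hidden_dim))\n\n    def forward(self, goal_text):\n        tokens = self.tokenizer(goal_text, return_tensors='pt')\n        cls = self.bert(**tokens).last_hidden_state[:, 0]  # [CLS] token embedding\n        return self.mlp(cls)  # goal embedding for the goal-conditioned policy\n```\n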
We will add a clearer description of the NL model in the paper.\n\nQ: Several figures take up quite a lot of space (e.g. Fig 2/3/5) while the heart of the paper, namely the description of how exactly the E-MAPP framework works (e.g. pointer allocation and when/how the auxiliary functions are used), is largely relegated to the appendix.\n\nA: We have adjusted the layouts of the paper in the revised version.\n\nQ: Line 33 \"different feasibility\" sounds odd.\n\nA: We have replaced it with “different abilities”.\n\nQ: Line 64 Factorization approaches have also been investigated on the policy side, see e.g. \"A Cordial Sync: Going Beyond Marginal ...\" Jain et al. (ECCV'20)\n\nA: Thanks for your suggestion for the reference. We have cited and discussed these papers in the revised version.\n\nQ: Highlight imitation learning, and related ideas, as an approach for solving long-horizon tasks and the sparse reward problem. E.g. generative adversarial imitation learning.\n\nA: Thanks for your suggestion for the reference. We have cited and discussed these papers in the revised version.\n", " Q: What does the Program Executor actually suggest, and how is this synthesized? Can you provide examples of the programs devised?\n\nA: We have added another section, \"Examples of Programs Used in Evaluation\", in Appendix A.11 to show more program examples. An example is as follows:\n```\nparallel:\n  1: if is_ordered(“Onion”):\n       Merge(“ChoppedOnion”, “Plate”)\n       Serve(“ChoppedOnion”)\n  2: if is_ordered(“Tomato”):\n       Merge(“ChoppedTomato”, “Plate”)\n       Serve(“ChoppedTomato”)\n  3: while (True):\n       if (IsOnFire()):\n         PutOffFire()\n```\n\nQ: Analysis of queries asked and performance of the state encoder on producing correct answers would be useful to understand how well the Program Executor/State Encoder is working.\n\nA: The accuracy of the answers to the queries is about 99.2%. We will add this to the paper.\n\nQ: Understanding the number of unfeasible tasks produced would be useful.\n\nA: The subtask allocator takes 3-5 subtasks as input, and about 50% of them are unfeasible on average.\n\nQ: Why does E-MAPP perform imperfectly in the easy environment? Can the authors add some comments on this?\n\nA: The MAPPO algorithm is not a goal-conditioned RL algorithm; we train it from scratch on each of the tasks. This training procedure gives MAPPO the advantage of overfitting to short-horizon tasks, while E-MAPP uses a single goal-conditioned policy for all the tasks. However, when the complexity of the tasks grows large, MAPPO fails to learn meaningful behaviors even with this advantage.\n\nQ: Ablation analysis of E-MAPP (as it has many components) in general would be appreciated; it's unclear where to attribute performance gains to currently.\n\nA: We have conducted ablation experiments on the key components in Section 5.3, showing that both the parallel program executor and the auxiliary functions (feasibility, reachability, cost-to-go) for subtask allocation are important. The perception module and the policy module are commonly used modules in the program-guided-agent framework, which are also indispensable components.\n\nQ: It would be useful to discuss limitations in computation resources and scalability of this approach. Overcooked is neither a game of many players nor many coordination conventions. \n\nA: With some assumptions, we can obtain a polynomial-time subtask allocator that scales well to the number of agents and subtasks. E-MAPP can also be adapted to new domains. 
Please refer to section\"Computational complexity of subtask allocation\" and section \"How to scale to new domain\" in the general response for further details.\n", " Q: The criteria for subtask allocation. Surely simple reward maximizing would lead to this criteria naturally arising within the Task Allocator. \n\nA: Our approach aims at discovering solutions for generalizable and composable tasks instead of a single task. The proposed criteria for subtask allocation rules out the unfeasible subtasks and greedily choose the subtask with smallest cost conditioned on current states. It is a general criteria for tasks with any subtask structure. In contrast, an end-to-end reward maximization approach may fail to discover general policies for different tasks. This is supported by the experiment on NL agents, in which the complete task is embedded as a goal and the agents are required to maximize the cumulative reward in a complete episode. The results show that end-to-end reward maximizing may fail to learn efficient task-planners for multi-tasks. \n\nQ: Computationally complexity or feasibility of this approach versus baselines. In comparison to the baselines this seems to contain significantly more networks, a search for legal subtask allocation which scales O(n!) to the number of subtasks and finally a curriculum which requires pre-training a state on ground truth sub-goals, training n agents separately and finally then training the joint n-agents. \n\nA: With some assumptions, we can obtain a polynomial-time subtask allocator that scales well. Please refer to the \"Computational complexity of subtask allocation\" section in the general response.\n\nQ: Explain the baseline of “Natural language–guided model”. A comparison of the goals provided by the Natural Language (or vocab size) vs the Task Allocator (and number of primitives) would be useful.\n\nA: The natural language-guided model takes in a goal described in natural language and an observation. Then, it learns a goal-conditioned policy based on MAPPO. The goal is encoded with a pretrained sequence encoder (BERT) and a learnable MLP that shares structure with that in E-MAPP. We havel added a clearer description of the NL model in the paper in Appendix A.9. The vocab has a size of 30, containing task description words and conjunctive words (e.g., if, then, while). The number of possible primitives is 24, including compositions of behavior types and subjects. Here is an example\n| task in program | task in natural language |\n| :-------------------------: | :---------------------------------: |\n| if IsOnFire(): PutOutFire() | If there is fire, put out the fire. |\n\nQ: E-MAPP fundamentally is a centralized execution algorithm (a joint policy is learnt) - so … a much better comparison would be a centralized agent. Such a comparison would help justify the claim of “time efficiency” presented in the abstract.\n\nA: The MAPPO agents share the same observation in our setting, so cooperative behaviors are likely to emerge. We add another experiment to compare E-MAPP with centralized agents. We implement a centralized PPO where joint policy is directly produced by a centralized network. The results indicate that the centralized PPO suffers from the high-dimensionality of the joint action space and fails to learn the cooperation and the coordination. 
\n| model | score |\n| :------------------: | :---------: |\n| E-MAPP | 1.58 ± 0.60 |\n| MAPPO (decentralized) | 0.59 ± 0.27 |\n| MAPPO (centralized) | 0.48 ± 0.21 |\n\n\nQ: Significance of E-MAPP’s performance vs. MAPPO or other methods on discovering winning strategies for harder tasks. \n\nA: E-MAPP addresses two types of fundamental challenges in multi-agent settings: solving long-horizon tasks efficiently and performing compositional generalization. As a trade-off, we devise a domain-specific language to describe the structures of the tasks. The DSL is very natural to design from task knowledge. For example, we only hint the agents with a $\textit{parallel}$ keyword from the DSL, while the agents learn to find an efficient way to execute the programs automatically. Hence, considering the performance gain, we argue that the introduced inductive biases are reasonable and worth their cost. \n\nWe believe that learning more inductive biases from videos or narrations is a promising direction. Our work opens this welcoming avenue in the multi-agent setting. \n\nQ: The choice of rewards (0.2 for subtask, 1 for task) provided in the “Average Scores” seems arbitrary. Please provide reasoning for these.\n\nA: We empirically use a smaller value for the dense reward and a larger value for the final goal. We note that the baselines (MAPPO and NL-guided MAPPO) also used the reward 0.2 for subtask completion and 1 for whole-task completion. \n\n\n", " Dear reviewer KHSD, thank you for your detailed and thorough review. We seek to address each of your concerns with the following responses:\n\nQ: The approach seems original (in the context of MultiAgent systems); however, I cannot comment on how novelly it extends Program use within Single Agent RL.\n\nA: E-MAPP is more than a simple extension of program use with single-agent RL. We propose original approaches to tackle the following unique challenges: 1) Subtask structure discovery. We combine parallelization keywords and the feasibility function to automatically infer the parallelizable subtasks, while single-agent RL can only strictly follow the order of program subroutines. 2) Allocating subtasks to agents. We design three auxiliary functions as the criteria for subtask allocation. However, in program-guided single-agent RL settings, the sequential program provides only one subtask at a time for the agent, so there is no need to match different subtasks to different agents. 3) Cooperative policies. We design a lead-assist framework for cooperative subtasks to address the resource racing problem, which is also a non-existent problem in single-agent RL settings.\n\nQ: Information hardcoded into the program executor (what decides the perception and behavior primitives), and the design of the Possible Subroutine Set.\n\nA: We have listed the DSL used in the Overcooked environment in Appendix A.1. The list involves all the domain-specific information. The elements of the possible subroutine set are primitives listed in the DSL. There is no additional information encoded other than what is mentioned above.\n\nQ: Hardcoding such components reduces the ability for this approach to be applied to any novel task. Thus this approach does not help discover/explore novel solutions to the game of Overcooked.\n\nA: The DSL describes only the basic operations and necessary procedures in the game. The low-level policies and the subtask planning are all learned. 
Discovering novel solutions, such as good cooperative policies (e.g., one agent helping another by delivering tools) and time-efficient task planners (e.g., raising the priority of subtasks that are preconditions of other subtasks), will not be hindered.\n\nQ: To further this, the additional auxiliary tasks used to shape the policy network (feasibility, reachability, cost-to-go) are also hardcoded heuristics to evaluate sub-agent fitness for a task. In new problem domains it is unclear how you can devise this.\n\nA: We respectfully disagree that the auxiliary functions are specially devised heuristics for the game Overcooked. The feasibility function filters out the subtasks with uncompleted predecessor subtasks, while the reachability function denotes whether a subtask is cooperative or non-cooperative. The cost-to-go function can be trained to fit any cost function in the new domain. These functions and the training process are general for different tasks, and can all be transferred to new domains without extra effort. For example, in the multi-stacking environment mentioned in Appendix A.10, the reachability function refers to whether the target block is within the arm’s reach, while the feasibility function refers to whether the precondition (e.g., B is in the right position) is satisfied for a subtask (e.g., stack A on B).\n\n", " Dear reviewer uuPM, thank you for your detailed and thorough review. We seek to address each of your concerns with the following responses:\n\nQ: While not explicitly stated in the paper, this work seems to assume the agents operate in a fully observable environment and the same observation is shared across all agents. This could fundamentally limit applying the proposed framework to more realistic domains.\n\nA: E-MAPP can also be applied to partially observable environments. Please refer to the \"Why assuming full observability\" section in the general response for further details.\n\nQ: There are very few task (program) examples given in the main paper and the supplementary material. It is tough to judge the performance of the proposed framework and the baselines without knowing the tasks used to evaluate them. How easy are the easy tasks? How hard are the hard tasks? How different are the tasks used for learning and the tasks used for evaluating zero-shot generalization?\n\nA: The easy tasks involve only one perception primitive and one behavior primitive. An example is as follows.\n```\nif IsOnFire():\n    PutOutFire()\n```\nThe hard tasks involve concurrently preparing more than one dish and contain at least 5 subroutines. An example is as follows.\n```\nparallel:\n  1: if is_ordered(“Onion”):\n       Merge(“ChoppedOnion”, “Plate”)\n       Serve(“ChoppedOnion”)\n  2: if is_ordered(“Tomato”):\n       Merge(“ChoppedTomato”, “Plate”)\n       Serve(“ChoppedTomato”)\n  3: while (True):\n       if (IsOnFire()):\n         PutOffFire()\n```\nThe tasks used for learning and for evaluating zero-shot generalization are different compositions of the subtasks in the DSL. For example, the hard tasks used for learning involve preparing the dishes $\textit{SingleOnion}$ and $\textit{SingleTomato}$, but for the zero-shot generalization test, a new dish $\textit{OnionTomato}$ is added to the evaluation. We have added another section, \"Examples of Programs Used in Evaluation\", in Appendix A.11 to show more task examples.\n\nQ: How to obtain tasks/programs?\n\nA: Tasks/programs in a new domain can be devised with program synthesis approaches. 
Please refer to the \"How to scale to new domain\" section in the general response for further details.\n", " We thank the reviewers for their insightful feedback! We address common concerns here and will reply to each reviewer separately to address the remaining concerns.\n\n### Why assuming full observability \n\nThe goal of our work is to solve long-horizon cooperative tasks with rich subtask structures (e.g., video games, multi-drone delivery) where a central controller or inter-agent communication exists. Therefore we follow the original game $\\textit{Overcooked}$ and assume full observability.\n\nDespite this, we add an experiment to show that E-MAPP can still outperform other methods in a partially observable environment. In the new setting, the observation of each agent is only part of the map within reach. The results are as follows. Under the new setting, E-MAPP can still learn to allocate sub-tasks to agents and accomplish the tasks efficiently. \n\n| model | score | completion rate |\n| :-----: | :-----: | :-----: |\n| E-MAPP(original) | 1.58±0.60 | 56.3% |\n| E-Mapp(partial obs) | 1.01±0.38 | 27.1% |\n| MAPPO | 0.59± 0.27 | 0.0% | \n\n### Computational complexity of subtask allocation\n\nIn an environment with $M$ subtasks and $N$ agents, the brute force search for an optimal allocation indeed has a complexity of $O(M^N)$. However, the practical complexity is much smaller than it. The reasons are as follows: \n1) In a certain stage of a long-horizon task, only a small amount of subtasks are feasible. Thus, the subtask amount $M$ can be pruned into a smaller number $L$ by checking the feasibility function $O(M\\times N)$ times. \n2) The engaging $N$ agents can be classified into $C$ roles. The agents sharing the same role have the same reachability functions. $C$ is often a property of the task that does not scale with $N$. For example, in the overcooked environment, $C$ can be the number of connected components of the map. Note that, in E-MAPP, the assistive agents aim to increase the reachability of the leading agents. We define $C\\times L$ new subtasks by pairs $(\\tau, c)$, where $\\tau$ comes from the $L$ feasible subtasks and $c$ comes from the $C$ roles. The goal of each new subtask $(\\tau, c)$ is to help agents with role $c$ to gain reachability on subtask $\\tau$. We can obtain a new subtask set of size $O(C\\times L)$ by extending the original subtask set with these newly defined subtasks. Assume that the number of agents is smaller than the number of feasible subtasks (otherwise, idle agents will inevitably emerge). Under this assumption, each agent will choose to either complete a subtask alone or assist a certain group of agents with the same role, and each subtask in the new subtask set is allocated to at most one agent to avoid conflict. Then the task allocation problem turns into finding the best matching of $N$ agents and $O(C\\times L)$ subtasks with the smallest total cost, which can be solved by the Hungarian algorithm. The computational complexity is $O((N+CL)^3)\\leq O((N+CM)^3)$ that scales well.\n\nWe have added an additional experiment with a doubled number of agents to evaluate our algorithm. 
The results show that E-MAPP can scale to environments with more agents and further boost the time efficiency by parallelization.\n| model | score | completion rate |\n| :--------------: | :--------: | :-------------: |\n| E-MAPP(original) | 0.99±0.22 | 43.7% |\n| E-MAPP(larger) | 1.13± 0.31 | 46.3 % |\n\n### How to scale to new domain\n\nE-MAPP can also be applied to various domains. When it comes to a new domain, we can devise new perception primitives and behavior primitives based on object properties and interactions among objects. These primitives along with the branching and parallelization keywords compose the DSL. Previous approaches on program synthesis can be applied to synthesize programs for tasks. For example, we can synthesize programs from diverse video demonstrations. The activities (subtasks) of a task in a video can be segmented out as a subroutine for program extraction. By summarizing the chronological order of subtask completions, we can obtain the dependence of subtasks and put the possibly parallelizable subtasks in one parallel subroutine. We have added this section to Appendix A.14.\n", " This paper addresses the problem of learning to fulfill a task described by programs designed for parallelization with multiple agents. To this end, the paper proposes a framework that can infer the structure of parallelism from programs and efficiently allocate subtasks by enforcing cooperation and division of labor among agents. The experiments on the overcooked domain, where agents need to collaborate to make dishes, show that the proposed framework outperforms baselines and achieves higher task completion rates and better generalization. Ablation studies suggest that the proposed feasibility predictor, reachability predictor, and cost-to-go predictor all contribute to the improved performance. I believe this work studies a promising problem and presents an interesting framework with sufficient evaluation. Yet, I still have some concerns detailed in the following section. ## Paper strengths and contributions\n**Motivation and intuition**\nI believe learning to fulfill a task described by programs designed for parallelization with multiple agents is an important problem and has a wide range of applications.\n\n**Novelty**\nTo the best of my knowledge, this is the first work researching following program instructions with multiple agents.\n\n**Technical contribution**\n- The DSL used in this work seems like a reasonably good DSL for describing the overcooked domains.\n- The proposed task allocator learning reachability, feasibility, and cost-to-go seems effective.\n\n**Clarity**\nThe overall writing is clear.\n\n**Ablation study**\nAblation studies are helpful for understanding thecontributionsn of each component (i.e. reachability, feasibility, and cost-to-go) of learning to allocate tasks.\n\n**Experimental results**\nThe experimental results show that the proposed framework outperforms a state-of-the-art multi-agent reinforcement learning baseline (MAPPO) and a baseline that is instructed by natural language description.\n\n## Paper weaknesses and questions\n\n**Fully observability**\nWhile not explicitly stated in the paper, this work seems to assume the agents operate in a fully observable environment and the same observation is shared across all agents. This could fundamentally limit applying the proposed framework to more realistic domains.\n\n**Task example**\nThere are very few task (program) examples given in the main paper and the supplementary material. 
It is tough to judge the performance of the proposed framework and the baselines without knowing the tasks used to evaluate them. How easy are the easy tasks? How hard are the hard tasks? How different are the tasks used for learning and the tasks used for evaluating zero-shot generalization?\n\n**How to obtain tasks/programs**\nThis work studies how to fulfill tasks described by programs. It would make more sense to shed some light on how such programs can be obtained in the first place to motivate the problem. I suggest the authors include a discussion on program synthesis works that aim to produce such programs, such as RobustFill: Neural Program Learning under Noisy I/O (ICML 2017), Neural Program Synthesis from Diverse Demonstration Videos (ICML 2018), Execution-Guided Neural Program Synthesis (ICLR 2019), Learning to Describe Scenes with Programs (ICLR 2019), Latent Execution for Neural Program Synthesis (NeurIPS 2021), etc.\n Described in Strengths And Weaknesses section. Described in Strengths And Weaknesses section.", " The paper introduces a new method, “E-MAPP”, a framework which utilizes a centralized program to issue subtasks to agents within a team to achieve tasks which require long-term planning. E-MAPP contains a central programmatic controller which issues goals to a team of subagents. These agents are goal-conditioned policies that are trained specifically to solve these subgoals. The paper evaluates their claims on the Overcooked environment and evaluates against baselines such as MAPPO. Another notable addition is the increased complexity added to the game dynamics within the Overcooked environment.\n The paper is well written and the diagrams are exceptionally useful to articulate the approach being taken. The approach seems original (in the context of MultiAgent systems); however, I cannot comment on how novelly it extends Program use within Single Agent RL.\n \nMy main criticism of this work is the sheer complexity of the proposed approach, the limited analysis of its improvement over baselines, and its overall usefulness. \n\n1. It is unclear how much information is arbitrarily being hardcoded into the program executor (what decides the perception and behavior primitives), nor is it clear how you devise the Possible Subroutine Set. Hardcoding such components reduces the ability for this approach to be applied to any novel task. Thus this approach does not help discover/explore novel solutions to the game of Overcooked.\n\n2. To further this, the additional auxiliary tasks used to shape the policy network (feasibility, reachability, cost-to-go) are also hardcoded heuristics to evaluate sub-agent fitness for a task. In new problem domains it is unclear how you can devise this. In particular the Criteria for subtask allocation confuses me. Surely simple reward maximizing would lead to this criteria naturally arising within the Task Allocator. \n\n3. There is no discussion on the computational complexity or feasibility of this approach versus baselines. In comparison to the baselines this seems to contain significantly more networks, a search for legal subtask allocation which scales as O(n!) in the number of subtasks, and finally a curriculum which requires pre-training a state encoder on ground-truth sub-goals, training n agents separately and then finally training the joint n agents. \n\n4. Given the complexity of E-MAPP it is unclear where to attribute performance gains to. In particular the baseline of “Natural language–guided model” is never explained. 
A comparison of the goals provided by the Natural Language (or vocab size) vs the Task Allocator (and number of primitives) would be useful.\n\n5. E-MAPP fundamentally is a centralised execution algorithm (a joint policy is learnt) - so to compare to decentralised execution algorithms such as MAPPO seems disingenuous. A much better comparison would be a centralised agent. Such a comparison would help justify the claim of “time efficiency” presented in the abstract.\n\n6. Given the sheer amount of biases and knowledge bestowed into E-MAPP it seems trivial to outperform MAPPO or other methods on discovering winning strategies for harder tasks. \n\n7. The choice of rewards (0.2 for subtask, 1 for task) provided in the “Average Scores” seems arbitrary. Please provide reasoning for these.\n \nMostly stated above in the weaknesses section:\n\n1. I find it very hard to understand what the Program Executor actually suggests or how this is synthesised. Can you provide examples of the programs devised?\n\n2. Analysis of queries asked and performance of the state encoder on producing correct answers would be useful to understand how well the Program Executor/State Encoder is working.\n\n3. Understanding the number of unfeasible tasks produced would be useful.\n\n4. It is unclear why E-MAPP performs imperfectly in the easy environment. Can the authors add some comments on this. \n\n5. Ablation analysis of E-MAPP (as it has many components) in general would be appreciated; it's unclear where to attribute performance gains to currently.\n It would be useful to discuss limitations in computation resources and scalability of this approach. Overcooked is neither a game of many players nor many coordination conventions. \n", " This work introduces Efficient Multi-Agent Reinforcement Learning with Parallel Program Guidance (E-MAPP), a methodology designed to enable multi-agent, long-horizon task completion. Toward this goal, E-MAPP structures tasks as (parallelizable) programs using a hand-designed domain-specific language (DSL) which specifies behavior subtasks (e.g. chop a tomato) and \"perception primitives\" (e.g. is anything on fire); a top-level controller then uses these perceptual primitives, along with the given program, to assign agents to subtasks so as to efficiently complete the overall task. E-MAPP is evaluated using a novel extension of the Overcooked gridworld from prior work (in Overcooked, multiple agents must work together to compose dishes). Compared to competing baselines (MAPPO and a variant of MAPPO with language guidance), E-MAPP produces agents that are substantially more competent at solving challenging, long-horizon tasks in the Overcooked environment even when those tasks were not seen during training.\n ## Strengths\n\nThis paper studies an interesting and challenging problem (long-horizon planning in the cooperative multi-agent setting). The proposed E-MAPP framework makes several non-trivial extensions to the idea of program-guided single-agent task completion introduced in prior work. The presented empirical results suggest that E-MAPP can be quite effective, especially when compared to standard \"end-to-end\" multi-agent RL methods (e.g. MAPPO) which do not use programs as an inductive bias. 
The paper is well-written grammatically although, as I note below, some work can be done to improve overall clarity.\n\n## Weaknesses\n\nWhile I very much appreciate the core goal of this paper, I do believe there are a few weaknesses that temper my enthusiasm (described below).\n\n- Many pretrained components\n\nThis work appears to depend on a very large number of pretrained components: perception modules, single-agent policies, and each of the three auxiliary functions (reachability, feasibility, and cost-to-go). That these all must be trained in different phases with separate datasets and different training techniques (e.g. self-imitation learning) seems to make it quite challenging to E-MAPP in new settings.\n\n- Efficiency of subtask allocation\n\nThe current strategy for subtask allocation, from my understanding, requires searching over a search space that grows exponentially in the number of tasks and agents (M^N possible allocations of N agents to M tasks). This appears to limit the applicability of E-MAPP to tasks with larger numbers of agents.\n\n- Fully observed and low-dimensional\n\nThe need to train many separate components to high levels of accuracy suggests that E-MAPP may not work well in complex, high-dimensional, environments (e.g. learning from pixels or learning in partially observed environments) where even training the perception modules may be very challenging.\n\n- Clarity\n\nWhile the paper is quite easy to read from a grammatical standpoint, I found several parts of the paper to be needlessly vague. For instance:\n\n1. The continuous control experiment is essentially completely unexplained, no results are even mentioned in the main paper and the appendix provides little additional detail.\n2. The inputs given to the various baseline models (MAPPO and the model with NL guidance) is unclear in the main paper. Even reading the appendix I'm unsure how the goal is encoded and given to the MAPPO agent.\n3. Several figures take up quite a lot of space (e.g. Fig 2/3/5) while the heart of the paper, namely the description of how exactly the E-MAPP framework works (e.g. pointer allocation and when/how the auxiliary functions are used) is largely relegated to the appendix.\n\n## Line-by-line notes\n\nHere are a few minor line-by-line notes:\n\n- Line 33\n * \"different feasibility\" sounds odd.\n\n- Line 64\n * Factorization approaches have also been investigated on the policy side, see e.g. \"A Cordial Sync: Going Beyond Marginal ...\" Jain et al. (ECCV'20)\n \n- Line 65\n * It seems a bit strange to not highlight imitation learning, and related ideas, as an approach for solving long-horizon tasks and the sparse reward problem. E.g. generative adversarial imitation learning.\n\n- Line 152-153\n * \"...to a subtask that requires to be...\" -> \"...to a subtask that must be...\"\n \n- Line 155\n * \"a perceptive query is responded\" -> \"a response to a perceptive query is received\"\n \n- Lines 156-159\n * Things are quite vague, give an example.\n\n- Lines 181-182\n * An ablation of this choice would be nice.\n \n- Line 208\n * \"reducing the binary\" -> \"minimizing the binary\"\n\n- Lines 227-228\n * Why apply log to feasibility and reachability and not cost-to-go?\n\n- Lines 247-249\n * The phrasing here is a bit confusing, it almost seems like your extension is a whole new environment. 
You should rephrase to make more clear that you didn't build a new gridworld but you adapted an existing one.\n \n- Line 259\n * What's the definition of \"novel\" here?\n\n- Line 266\n * \"accumulative\" -> \"cumulative\"\n \n- Line 292\n * \"find removing\" -> \"find that removing\"\n\n- Table 3\n * The error bars for the average scores seem huge?\n \n- Appendix Lines 562-567\n * Do you also use self-imitation learning with the MAPPO baseline? If not, what is the impact of self-imitation learning alone?\n \n- Appendix Lines 624\n - \"sclae\" -> \"scale\"\n Currently, I lean towards rejection as the weaknesses I've listed above seem quite substantial when considering applying E-MAPP to a new domain. I would be more than happy to increase my rating if strong arguments could be given as to why I might be mistaken. I would be particularly interested in hearing responses to my concerns regarding the number of pretrained components and the requirements for full observability / low dimensional states. I have also included a few, more minor, questions in my line-by-line notes above.\n Assuming the weaknesses I've flagged above are true weaknesses, it would be great to be upfront and add these to the limitations section. I don't believe there are any substantive ethical concerns for this work.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "RfikRszirVV", "MGMt1ghKVa", "C_iJXUI9WuU", "MGMt1ghKVa", "MGMt1ghKVa", "hJBx41_JTt9", "hJBx41_JTt9", "hJBx41_JTt9", "JmjhIdXayWV", "nips_2022_8LE06pFhqsW", "nips_2022_8LE06pFhqsW", "nips_2022_8LE06pFhqsW", "nips_2022_8LE06pFhqsW" ]
nips_2022_htM1WJZVB2I
Vision GNN: An Image is Worth Graph of Nodes
Network architecture plays a key role in the deep learning-based computer vision system. The widely-used convolutional neural network and transformer treat the image as a grid or sequence structure, which is not flexible enough to capture irregular and complex objects. In this paper, we propose to represent the image as a graph structure and introduce a new \emph{Vision GNN} (ViG) architecture to extract graph-level features for visual tasks. We first split the image into a number of patches which are viewed as nodes, and construct a graph by connecting the nearest neighbors. Based on the graph representation of images, we build our ViG model to transform and exchange information among all the nodes. ViG consists of two basic modules: the Grapher module with graph convolution for aggregating and updating graph information, and the FFN module with two linear layers for node feature transformation. Both isotropic and pyramid architectures of ViG are built with different model sizes. Extensive experiments on image recognition and object detection tasks demonstrate the superiority of our ViG architecture. We hope this pioneering study of GNN on general visual tasks will provide useful inspiration and experience for future research. The PyTorch code is available at \url{https://github.com/huawei-noah/Efficient-AI-Backbones} and the MindSpore code is available at \url{https://gitee.com/mindspore/models}.
Accept
This paper proposes to explore the graph structure of images by considering patches as nodes, where the graph is constructed by connecting nearest neighbors. Extensive experiments on various visual tasks, i.e., image recognition and object detection have demonstrated the effectiveness of the proposed ViG. All the reviewers agree on the inspiring and promising exploration. The paper is also well-written and the experimental results are impressive.
train
[ "DhbzYk698Y", "fuc26t8aOb", "oeiiz8mSuXR", "0Y-VognFRHL", "T3VlqR7vIJX", "AVpHDIZwe61", "2zpOJaJjYLU", "i6uOZ2fKwUH", "_tIwhPuP2_o" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " After seeing other reviewers' comments and the received freedback from the authors, I keep my score as is. The paper is clear and has its clear novelty beyond vision transformer.", " Thanks for the valuable comments. We respond to weaknesses and questions in the following.\n\n> **Q1:**\nBased on my experience, the inference speed of GNN is not as fast as CNN. It’s a general issue for the researchers to explore for mobile applications of Vision GNN.\n\n- **A1:**\nWith similar parameters and FLOPs, the GPU latency of ViG has no advantage over convolutional network or Transformer. The main parameters and FLOPs are occupied by the fully-connected layers and graph convolution layers which are common operations. The model compression and acceleration of GNN is an important topic for future research.\n\n> **Q2:**\nTypos: In line41: GCN->Grapher. In supplemental material, the output channel of self.fc1 in FFN should be hidden_channels rather than in_channels.\n\n- **A2:**\nThanks. We'll correct the typos and improve the writing.\n\n> **Q3:**\nIn the default setting for ImageNet, there are 196 nodes for an image. How does the number of nodes affects the performance of ViG?\n\n- **A3:**\nWe evalute the effect of the number of nodes. The number 196 is the proper one for visual recognition, as larger number leads to more computational cost and smaller number works not so well. Thus, we empirically use 196 nodes for an 224x224 image.\n\n|#nodes|49|196|784|\n|-|-|-|-|\n|FLOPs|0.4G|1.3G|4.8G|\n|Top-1|67.7|73.9|73.2|\n\n> **Q4:**\nIn the Mask-RCNN experiment, the overall AP of Pyramid ViG is better than Swin but the AP_75. Could you explain this phenomenon?\n\n- **A4:**\nThe overall AP (IoU=.50:.05:.95) of ViG is higher than that of Swin, which denotes that ViG has better detection performance under most of IoU thresholds. The AP_75 of ViG is slightly lower than that of Swin, which denotes the localization ability at 0.75 IoU. Averaging over IoUs rewards detectors with better localization. In COCO competition, AP (IoU=.50:.05:.95) is considered the single most important metric when considering performance on COCO.\n", " Thanks for the valuable comments. We respond to weaknesses and questions in the following.\n\n> **Q1:**\nIn different layers, will the constructed graph structure be updated?\n\n- **A1:**\nYes, the constructed graph structure will be updated in different layers. After aggregation and transformation of nodes, the node features change and the graph edges should also be reconstructed.\n\n> **Q2:**\nHow ViG is used in object detection is not described in detail. Please include the implementation details.\n\n- **A2:**\nThe ViG based object detection models are implemented using MMDetection [1]. We utilize the ImageNet pretrained Pyramid ViG-S as the backbone of RetinaNet and Mask R-CNN. To process inputs with different size, the position encoding is resized to match different inputs. The output features of all 4 stages are fed into FPN. The models are trained in the commonly-used “1x” scheduler. We'll include the details in the paper.\n\n[1] Chen, Kai, et al. \"MMDetection: Open mmlab detection toolbox and benchmark.\" arXiv preprint arXiv:1906.07155 (2019).\n\n> **Q3:**\nUsing graph neural networks as a component (although not as the backbone) for image recognition has been investigated in some previous works [1,2]. Please discuss with these works.\n\n- **A3:**\nML-GCN [2] utilizes GCN to process label dependency based on the outputs of CNN. 
It builds a directed graph over the object labels (nodes), and a GCN is learned to map this label graph into a set of inter-dependent object classifiers. SGGNN [3] extracts features of person images using a CNN and utilizes a GNN to update the pairwise relationships between probe-gallery pairs. These previous works of GNN for visual tasks mainly utilize GNN as a post component to model relationships between objects, which is much different from the backbone network ViG. We'll include this discussion in the final version.\n\n[2] Chen, Zhao-Min, et al. \"Multi-label image recognition with graph convolutional networks.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.\n\n[3] Shen, Yantao, et al. \"Person re-identification with deep similarity-guided graph neural network.\" Proceedings of the European Conference on Computer Vision (ECCV). 2018.", " Thanks for the valuable comments. We respond to weaknesses and questions in the following.\n\n> **Q1:** The idea of a graph neural network for visual recognition is appealing, but it seems it would be great to exploit image structures to adjust deep network structures. However, it seems that ViG just takes the simplest setting of graph neural network. It looks like a complex version of the vision transformer. The hierarchical structures involved are in a Swin Transformer style. Since there are a lot of techniques from ViT used, such as multi-head attention, the feed-forward network, etc., I am a little confused about whether ViG is just a simplified version of Swin Transformer.\n\n- **A1:**\nViG is much different from Swin Transformer:\n1. To capture spatial information, ViG utilizes graph convolution to aggregate nodes, while Swin Transformer utilizes self-attention among tokens.\n2. Swin Transformer introduces the shifted window for a locality inductive bias, while ViG needs less inductive bias.\n3. Swin Transformer represents the image feature as a sequence structure, while ViG constructs a graph structure for the image feature.\n4. ViG is a graph neural network, while Swin is a self-attention model.\n\n> **Q2:** The feature dimensions, resolutions and other settings listed in Table 2 are very similar to those of Swin Transformer. So when there are $224 \times 224$ inputs, ViG outputs $7 \times 7$ feature maps. But in Table 5, I don't find an obvious advantage of ViG when compared with Swin Transformer.\n\n- **A2:**\nOur work is a pioneering exploration of graph neural networks for general visual recognition. It reveals that GNNs can also work well for visual tasks, and GNN provides another alternative beyond CNN and Transformer.\nWith similar architecture settings, and without specific designs like the shifted window, Pyramid ViG models can be competitive and even better than Swin Transformer (Pyramid ViG-S 82.1% vs. Swin-T 81.3%).\n\n> **Q3:** Another big problem is that Table 3 lists a lot of tricks used in ViG. According to the MAE paper (Masked Autoencoders Are Scalable Vision Learners, CVPR 2022), a vanilla ViT-B model (86M parameters) can get 82.3% top-1 ImageNet accuracy with the Exponential Moving Average (EMA) trick. So the proposed ViG doesn't show any advantages compared with the ViT-B model (in Table 1).\n\n- **A3:** In all the experiments of our paper, we used the same training setting as in Swin Transformer. Here we provide the comparison under the different supervised training settings of the DeiT, Swin and MAE papers. 
We can see that under the same training setting, ViG-B consistently outperforms ViT-B by about 0.5%.\n\n|Model|training setting|\#parameters|top-1|\n|-|-|-|-|\n|ViT-B|DeiT|86.4M|81.8|\n|ViG-B|DeiT|86.8M|82.4|\n|ViT-B|Swin|86.4M|81.9|\n|ViG-B|Swin|86.8M|82.3|\n|ViT-B|MAE|86.4M|82.3|\n|ViG-B|MAE|86.8M|82.7|\n\n\n> **Q4:** It seems that when compared with their ViT-B and Swin-B competitors, both ViG-B and Pyramid ViG-B show obvious advantages on ImageNet classification results. I am curious about whether it's the training tricks rather than the algorithm itself that brings the performance gains.\n\n- **A4:** For ViT-B vs. ViG-B, we compare them fairly under different training settings in the above table. For Swin-B vs. Pyramid ViG-B, their training settings are exactly the same, so the comparison is fair. These experimental results show that it's ViG itself that brings the performance gains.\n\n> **Q5:** I suggest the authors check whether it's possible to use image intrinsic structures proposed in GraphFPN (GraphFPN: Graph Feature Pyramid Network for Object Detection) to guide the feature learning of ViG. It will be very interesting then.\n\n- **A5:**\nWithout complex design, ViG, simply using a uniform division of the image, can obtain competitive performance.\n\nGraphFPN is a \"CNN backbone + GNN head\" network for object detection built on a superpixel hierarchy. It's a good proposal to use image intrinsic structures to guide the feature learning of ViG. Nevertheless, there are still several problems to overcome:\n\n1. For each input image, the COB algorithm is applied to obtain a hierarchical segmentation. Superpixel segmentation algorithms, including COB, introduce large latency, which will be a burden for ViG.\n2. For training, the obtained superpixels have various sizes. How to transform different-size superpixels into same-size vectors as inputs of ViG is an open problem.\n3. Adding a segmentation process before ViG will break the end-to-end training pipeline.\n\nThese topics will be good directions for future research.\n", " Thanks for the valuable comments. We respond to weaknesses and questions in the following.\n\n>**Q1:** Some parts of the manuscript are unclear. For example, how to initialize a graph of an image is unclear. It is said the graph is built based on K nearest neighbors, but how to compute the KNN is unclear. Is the KNN constructed based on the position or the similarity of the feature?\n- **A1:**\nFor the construction of the graph, each node is connected with its K nearest neighbors. The KNN is based on the Euclidean distance between node features. The position information is introduced by the position encoding. We'll improve the presentation of these parts.\n\n>**Q2:** Does the method to compute the KNN influence the performance?\n- **A2:**\nThanks for the question. We compare different distance metrics between node features, including Euclidean distance (our default), Manhattan distance and dot product. From the results, we can see that the method to compute the KNN slightly influences the performance.\n\n|KNN metric|Euclidean distance|Manhattan distance|Dot product|\n|-|-|-|-|\n|Top-1|73.9|73.6|73.8|
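\n\nFor concreteness, here is a minimal sketch of such a KNN graph construction with a pluggable metric (a simplified illustration; the function and argument names are ours, not the released code):\n```\nimport torch\n\ndef knn_graph(x, k=9, metric='euclidean'):\n    # x: (num_nodes, dim) node features, e.g. 196 patch embeddings.\n    if metric == 'euclidean':\n        dist = torch.cdist(x, x)           # pairwise L2 distances\n    elif metric == 'manhattan':\n        dist = torch.cdist(x, x, p=1)\n    else:                                  # dot product: higher similarity = closer\n        dist = -x @ x.t()\n    dist.fill_diagonal_(float('inf'))      # exclude trivial self matches\n    return dist.topk(k, largest=False).indices  # (num_nodes, k) neighbor indices\n```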
\n\n>**Q3:** It would be great if the authors can show more visualization cases of the constructed graph under a more complicated scenario in which several objects are involved.\n- **A3:**\nThanks for the suggestion. We provide visualization examples for images with more objects in the anonymous links: xxx and xxx. We can see that given an anchor node, the nodes with the same semantic content will be connected, since the training objective is to recognize the image category.\n\n>**Q4:** For Tab. 8, what’s the meaning of \"9 to 18\" in the last column?\n- **A4:**\nSorry for the unclear notation. The number in the first row of Table 8 means the value of `K` in KNN. \"9 to 18\" denotes that the value of `K` increases from 9 to 18 as the layers go deeper.\n\n>**Q5:** The authors compare ViG with several Transformer-based methods in Tab. 4 in terms of both parameters and FLOPs. What about the real inference time of the proposed method on a standard GPU platform?\n- **A5:**\nWith similar parameters and FLOPs, the GPU latency of ViG has no advantage over convolutional networks or Transformers. The main parameters and FLOPs are occupied by the fully-connected layers and graph convolution layers, which are common operations. The model compression and acceleration of GNN is an important topic for future research.\n", " In this paper, the authors proposed to represent the image as a graph structure and introduce a graph neural network (ViG) architecture to extract graph-level features for visual tasks. The graph neural network can be aligned with standard vision transformers with many shared micro designs. Images are split into patches as nodes in graphs. Each node is connected with its neighborhoods. The ViG network is in a hierarchical feature extraction style like that of Swin Transformer. The authors conducted extensive experiments on image recognition and object detection and achieved comparable performance with other state-of-the-art methods. ***Strength***\n1. This paper is well-organized and can be easily understood by readers. The technical details are introduced clearly.\n2. The authors conducted extensive experiments on multiple benchmarks to investigate the effectiveness of different modules and designs in this paper.\n\n***Weakness***\n1. The idea of a graph neural network for visual recognition is appealing, but it seems it would be great to exploit image structures to adjust deep network structures. However, it seems that ViG just takes the simplest setting of graph neural network. It looks like a complex version of the vision transformer. The hierarchical structures involved are in a Swin Transformer style. Since there are a lot of techniques from ViT used, such as multi-head attention, the feed-forward network, etc., I am a little confused about whether ViG is just a simplified version of Swin Transformer. \n2. The feature dimensions, resolutions and other settings listed in Table 2 are very similar to those of Swin Transformer. So when there are $224 \times 224$ inputs, ViG outputs $7 \times 7$ feature maps. But in Table 5, I don't find an obvious advantage of ViG when compared with Swin Transformer.\n3. Another big problem is that Table 3 lists a lot of tricks used in ViG. According to the MAE paper (Masked Autoencoders Are Scalable Vision Learners, CVPR 2022), a vanilla ViT-B model (86M parameters) can get 82.3% top-1 ImageNet accuracy with the Exponential Moving Average (EMA) trick. So the proposed ViG doesn't show any advantages compared with the ViT-B model (in Table 1).\n It seems that when compared with their ViT-B and Swin-B competitors, both ViG-B and Pyramid ViG-B show obvious advantages on ImageNet classification results. I am curious about whether it's the training tricks rather than the algorithm itself that brings the performance gains. 
I suggest the authors check whether it's possible to use image intrinsic structures proposed in GraphFPN (GraphFPN: Graph Feature Pyramid Network for Object Detection) to guide the feature learning of ViG. It will be very interesting then.", " This manuscript proposes a new kind of backbone named ViG, which represents the image as a graph and extracts graph-level features for vision tasks. Specifically, the input image is separated into patches that serve as nodes in a graph. A Grapher module and an FFN module are used to aggregate the information among nodes and transform the feature space. Isotropic and pyramid architectures are proposed to build models of different sizes. ViG is compared with other SOTA backbones on both the image classification task and the object detection task. Pros:\n\n- Representing an image as a graph is novel and interesting.\n\n- The proposed Grapher module and FFN are sound. Also, the visualization of the graph structure indeed shows that the proposed model has learned meaningful relationships among image patches.\n\n- The ablation studies and experiments on several vision tasks are good. Both isotropic and pyramid architectures show the effectiveness of the proposed ViG model.\n\nCons:\n\n- Some parts of the manuscript are unclear. For example, how to initialize a graph of an image is unclear. It is said the graph is built based on K nearest neighbors, but how to compute the KNN is unclear. Is the KNN constructed based on the position or the similarity of the feature?\n\n- Does the method to compute the KNN influence the performance?\n\n- It would be great if the authors can show more visualization cases of the constructed graph under a more complicated scenario where several objects are involved.\n - For Tab.8, what's the meaning of \"9 to 18\" in the last column?\n\n- The authors compare the ViG with several Transformer-based methods in Tab. 4 in terms of both parameters and FLOPs. What about the real inference time of the proposed method in a standard GPU platform?\n Yes", " Recently, ConvNets and transformers have achieved state-of-the-art results on various visual recognition tasks. This paper explores graph neural networks (GNN) for visual tasks. By constructing a graph structure from the input image, the paper discusses the differences and advantages of the graph structure over grid and sequence structures. A GNN is applied on the graph data, and an FFN is introduced to improve the feature diversity. The obtained ViG backbone can achieve comparable and even better performance than the SOTA ConvNets and transformers. Strengths\n\nThe paper is easy to follow and well-written.\nThe pioneering exploration of GNN as a vision backbone is inspiring for future works. This work will greatly appeal to the community.\nThe experiments on image classification and object detection show the effectiveness of the vision GNN.\n\nWeaknesses\n\n- In different layers, will the constructed graph structure be updated?\n- How ViG is used in object detection is not described in detail. Please include the implementation details.\n- Using graph neural networks as a component (although not as the backbone) for image recognition has been investigated in some previous works [1,2]. Please discuss these works.\n\n[1] Chen, Zhao-Min, et al. "Multi-label image recognition with graph convolutional networks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.\n[2] Shen, Yantao, et al. 
\"Person re-identification with deep similarity-guided graph neural network.\" Proceedings of the European conference on computer vision (ECCV). 2018. See questions in the weaknesses part. The limitations and potential negative societal impact were addressed in the paper.", " This paper introduces a GNN-based backbone model for image tasks that is mainly built on the GNN layers and FC layers. Basically, it splits an image into patches and takes each patch as a node to construct the graph structure. A GNN-based architecture is utilized to the graph for visual representation learning. To address the degradation of feature diversity, more transformation of feature is added in the network. Extensive experiments of image classification and object detection show that the proposed Vision GNN can outperform representative convolutional networks and transformers with similar number of parameters. Strengths:\n+ This work is clearly motivated and well written\nThe background of the research, the motivation of graph representation for images and the related work are all clearly stated and summarized.\n+ The first GNN-based backbone for visual tasks\nThe simple yet effective Vision GNN is introduced by adapting the GNN with reasonable modification by patch-based graph construction and adding more node feature transformations.\n+ Extensive evaluation and impressive results\nThis paper demonstrates the power of the Vision GNN model on ImageNet and COCO datasets, outperforming representative CNN and transformer models. The results are impressive and interesting.\n\nWeaknesses:\n- Based on my experience, the inference speed of GNN is not as fast as CNN. It’s a general issue for the researchers to explore for mobile applications of Vision GNN.\n- Typos: In line41: GCN->Grapher. In supplemental material, the output channel of self.fc1 in FFN should be `hidden_channels` rather than `in_channels`.\n 1. In the default setting for ImageNet, there are 196 nodes for an image. How does the number of nodes affects the performance of ViG?\n2. In the Mask-RCNN experiment, the overall AP of Pyramid ViG is better than Swin but the AP_75. Could you explain this phenomenon?\n\n The limitations are addressed." ]
[ -1, -1, -1, -1, -1, 4, 7, 8, 8 ]
[ -1, -1, -1, -1, -1, 4, 5, 5, 4 ]
[ "fuc26t8aOb", "_tIwhPuP2_o", "i6uOZ2fKwUH", "AVpHDIZwe61", "2zpOJaJjYLU", "nips_2022_htM1WJZVB2I", "nips_2022_htM1WJZVB2I", "nips_2022_htM1WJZVB2I", "nips_2022_htM1WJZVB2I" ]
nips_2022_7a2IgJ7V4W
Semi-supervised Vision Transformers at Scale
We study semi-supervised learning (SSL) for vision transformers (ViT), an under-explored topic despite the wide adoption of the ViT architectures in different tasks. To tackle this problem, we use an SSL pipeline consisting of first un/self-supervised pre-training, followed by supervised fine-tuning, and finally semi-supervised fine-tuning. At the semi-supervised fine-tuning stage, we adopt an exponential moving average (EMA)-Teacher framework instead of the popular FixMatch, since the former is more stable and delivers higher accuracy for semi-supervised vision transformers. In addition, we propose a probabilistic pseudo mixup mechanism to interpolate unlabeled samples and their pseudo labels for improved regularization, which is important for training ViTs with weak inductive bias. Our proposed method, dubbed Semi-ViT, achieves comparable or better performance than the CNN counterparts in the semi-supervised classification setting. Semi-ViT also enjoys the scalability benefits of ViTs and can be readily scaled up to large-size models with increasing accuracies. For example, Semi-ViT-Huge achieves an impressive 80\% top-1 accuracy on ImageNet using only 1\% labels, which is comparable with Inception-v4 using 100\% ImageNet labels. The code is available at https://github.com/amazon-research/semi-vit.
Accept
This paper explores Semi-ViT, a semi-supervised learning approach for vision transformers. Semi-ViT builds on a three-stage pipeline such as SimCLRv2's. The authors introduce a probabilistic mixup for the semi-supervised finetuning stage, which gives consistent experimental improvements. Semi-ViT shows strong empirical results: it achieves 80% top-1 accuracy on ImageNet using only 1% of labels, which is comparable with Inception-v4 using 100% of ImageNet labels. Demonstrating that ViT plus semi-supervised training can reach 80% top-1 accuracy on 1% ImageNet is novel and of potential interest to the SSL community. I therefore recommend acceptance. However, I would encourage the authors to clarify that the three-stage pipeline is not a contribution of the paper and to focus the novelty claims on the probabilistic mixup and the experimental study.
test
[ "3pxHGebqe3V", "6gUqPKzFzMM", "8syA9Negenm", "xZo1QIywRB5", "porVH-KIsTa", "VN2h-VVB27ZA", "u7LZRmqPEg9", "qsCJJ1nS7X4", "vRNddoCQ22W", "sKpQmAYiYK1", "J8xafR-ISSs", "Vt1HdGSp6u1" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer, thanks for taking your time to read our responses. We have tried our best to answer your questions and address your concerns. Is there still any further confusion or concern we can help you to address? If it is still about the technical novelty, we appreciate if the reviewer could also read the other reviews to have a comprehensive evaluation. The code will also be provided to reproduce the results.", " Dear Reviewer,\n\nThanks for these valuable suggestions! The responses are as below.\n\n* For confidence threshold $\\tau$, we totally agree this comment. We are running the experiments for other thresholds, e.g. $\\tau=0$, and also on the baselines without using probabilistic pseudo mixup. We believe, without probabilistic pseudo mixup, the model will be much less robust to $\\tau$.\n* We will also run the experiments beyond 0.99, e.g. 0.9 or even 0.0, and also on 1% labels. We agree that the experiments on 1% will be less robust on $m$, since the training on 1% labels is less stable.\n* We think the best time to start semi-supervised finetuning is when the performances saturate at the stage of supervised finetuning, e.g. 100 (50) epochs for small/base (large/huge) models. But it is possible that semi-supervised finetuning could start earlier. Due to the limited time, we could only run the experiments for 50, 100, 200 epochs. We will run the experiments for 1 or 10 epochs of supervised finetuning.\n\nSince it is expensive to extensively run these ablation studies, we can't finish all these experiments during the rebuttal period. We will try out best to finish them as soon as possible. And these ablation experiments and discussion will be presented in the final version.\n\nPlease let us know if you have further questions.", " Thank authors for replying the review! However, the rebuttal does not change my understanding of the work and its results. ", " Dear authors,\n\nThanks for additional ablations and discussions. Please find below some thoughts and suggestions I would like to see in the final revision:\n- (regarding ablation on $\\tau$). Thanks for these quick experiments! Very well that algorithm is robust. Still the most fair experiment (**also to justify more strongly proposed probabilistic mixup**) will be having $\\tau=0$ both for the baseline and Semi-ViT. I suspect that probabilistic mixup is doing proper regularization here and will perform much better than the baselines. Thus it can be viewed as alternative to uncertainty estimation / confidence filtering / weighting methods which people use in SSL to be robust to noise in pseudo-labels. That is why I wanna you to confirm that probabilistic mixup can be viewed even wider as done right now in the paper and we can remove one hyperparameter (simplification, which is good). However, it is not critical for my decision on the paper at current stage of discussion.\n- (regarding your ablations on the EMA decay factor) Add details when EMA accumulation starts (is it before first batch of unlabeled data is used or right away from this first batch?). Your experiments show that model should change not too slow and probably not too fast. I wonder to what limit you can push EMA decay, say 0.9, 0.1? I believe in 1% sup. data scenario it could be less stable and larger decay factor is needed compared to 10%.\n- (regarding how many supervised epoch we do before pseudo-labels are involved) Thanks for quick ablations. With respect to 1% setting: seems that we tends to overfit to the labeled set. 
Do you know what is the best supervised baseline we can have here for both the 1% and 10% settings (to understand where we start pseudo-label training)? I suggest adding the supervised model quality for these 50, 100, 200 epochs. This could give a hint for future research on how the quality of the model at the start of pseudo-labeling correlates with the final performance. Could you extend this table a bit in the final revision to include 1 and 10 epochs too? I wonder to what limit we can push this and still be able to bootstrap the model with pseudo-labels.", " Thanks for the constructive review! We provide detailed responses to each question below.\n\n**Q1: It will be good to show more ablation studies over some hyper-parameters, such as the momentum decay and confidence score.**\n\nA: Thanks for the suggestion. We are adding more ablation studies. \n\n| Method | data | $\\tau$=0.3 | $\\tau$=0.4 | $\\tau$=0.5 | $\\tau$=0.6 |\n|:---------------------:|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|\n| Semi-ViT-Base | 10% | 79.5 | 79.7 | 79.7 | 79.6 |\n| Semi-ViT-Base | 1% | 71.4 | 71.3 | 71.3 | 71.0 |\n\nFirst, we show the ablation on the confidence threshold $\\tau$ in the above table. It can be found that our Semi-ViT is quite robust to the filtering threshold. One possible reason is that we use probabilistic pseudo-mixup: even when some samples are filtered out by the threshold, they can still contribute to the final loss, and their contributions depend on their confidence scores, so the low-confidence samples won't hijack the training. In the submission, we used $\\tau$=0.5 ($\\tau$=0.6) for Semi-ViT-Base on 10% (1%) labels.\n\n| Method | data | $m$=0.99 | $m$=0.999 | $m$=0.9999 | $m$=0.99999 |\n|:---------------------:|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|\n| Semi-ViT-Base | 10% | 79.6 | 79.8 | 79.7 | 79.0 |\n\nNext, we show the ablation on the momentum decay rate $m$ in the above table. We can find that our Semi-ViT is also quite robust to the momentum decay rate; anything between 0.99 and 0.9999 works fine. (A rough sketch of the EMA update and the confidence-weighted mixup appears after the reviews below.)\n\nThese two experiments support that our Semi-ViT is robust to hyperparameters.\n\n**Q2: Under a more general semi-supervised learning set-up, how does the proposed method work when labelled data and unlabelled data are from different datasets?**\n\nA: It is an interesting idea to have unlabelled data from a different dataset. However, this is a more challenging task. It requires a more robust filtering mechanism, since using another dataset will introduce the problems of domain gap and category differences. Thus, new algorithms probably need to be developed. In fact, this is a subtask of SSL, and some papers specifically work on it. Due to the limited time, we are unable to come up with a new algorithm for this, but it remains interesting future work.\n\n**Q3: The large-scale self-supervised pre-training (MAE) may generate more carbon emissions.**\n\nA: Yes, the self-supervised pretraining is usually expensive. However, we simply reused the pretrained models provided by those works. So in our experiments, we didn't generate the carbon emissions caused by self-supervised pretraining.", " Thanks for the valuable review! We provide detailed responses to each question below.\n\n**Q1: The proposed training pipeline is not new compared with former works, such as [14].**\n\nA: The current popular pipelines for SSL are somewhat similar, as we described in Section 2.1. 
However, our pipeline does have some differences from [14], which uses knowledge distillation in its final stage, instead of the semi-supervised fine-tuning we use. We are not claiming that the three-stage pipeline is technically novel, and the goal of our paper is not to argue which pipeline is better. We just found that our three-stage pipeline enables stable training of Semi-ViT and reduces the hyperparameter tuning. It is an important recipe for stable training and good results with semi-supervised ViT. The extra ablation studies we provide to other reviewers support that.\n\n**Q2: The improvements are based on existing works (i.e. EMA-Teacher) that are easy to come up with in the semi-supervised domain.**\n\nA: Perhaps in hindsight it may seem like an easy change. The choice does not seem as easy when faced with hundreds of options for techniques to apply to SSL and the prospect of having to try them exhaustively. This stands in contrast with other choices, for instance SemiFormer's use of FixMatch [60], which yields disappointing performance. We view the simplicity of the resulting method, and its use of established techniques, as a strength rather than a limitation. \n\nWhat we do claim novelty for is the probabilistic pseudo mixup, which has some appealing properties. For example, although some samples don't pass the confidence threshold, they can still contribute to the final loss, and their contributions depend on their confidence scores, so the low-confidence samples won't hijack the training. As a result, our Semi-ViT is robust to the confidence threshold, as also pointed out in the response to Q2 of Reviewer-mtWa.\n\n**Q3: What are the main differences between ViTs and CNNs that require the community to pay special attention to achieve good SSL results?**\n\nA: Our paper shows that three aspects are key to achieving results such as 80% top-1 accuracy with 1% of ImageNet labels: the training pipeline, the SSL framework, and the data augmentation. And we have provided good recipes for them, which could ease the overhead of future efforts in this direction.\n\n**Q4: Does the proposed method also benefit CNN models?**\n\nA: Yes, as shown in our Table 5 (e.g. ConvNeXt). Those experiments illustrate the generalization properties of the proposed techniques in our paper.", " Thanks for this constructive review and the recognition of our work! We provide detailed responses to each question below.\n\n**Q1: Absence of some recent literature on theoretical justification of pseudo-labeling and similar EMA study and stability in other domains**\n\nA: We thank the reviewer for providing pointers to those papers, which are in a domain outside of our expertise. We will address them correctly in our camera-ready. \n\n**Q2: Absence of investigation of to what extent filtering is important in the pipeline (regarding that mixup with filtered data helps) - this could be another baseline for probabilistic mixup justification**\n\nA: Does the reviewer mean what the effect of filtering is in our pipeline? The filtering depends on the confidence threshold, and we are adding more ablation studies on the filtering threshold $\\tau$ below. It can be found that our Semi-ViT is quite robust to the filtering threshold. One possible reason is that we use probabilistic pseudo-mixup: even when some samples are filtered out by the threshold, they can still contribute to the final loss, and their contributions depend on their confidence scores, so the low-confidence samples won't hijack the training. 
In fact, when $\\tau$=0.3, 99% of unlabeled samples pass the threshold, and we can almost say there is no filtering in this case. This observation is somewhat aligned with the observation in the speech domain mentioned by the reviewer. This is quite interesting, and it may be possible to remove filtering in our Semi-ViT as well, but more experiments are needed to draw that conclusion. In the submission, we used $\\tau$=0.5 ($\\tau$=0.6) for Semi-ViT-Base on 10% (1%) labels.\n\n| Method | data | $\\tau$=0.3 | $\\tau$=0.4 | $\\tau$=0.5 | $\\tau$=0.6 |\n|:---------------------:|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|\n| Semi-ViT-Base | 10% | 79.5 | 79.7 | 79.7 | 79.6 |\n| Semi-ViT-Base | 1% | 71.4 | 71.3 | 71.3 | 71.0 |\n\n**Q3: Absence of a study on how many epochs of supervised training / supervised finetuning are needed before starting the EMA pseudo-labeling process.**\n\nA: Thanks for the suggestion; we are adding these experiments. The table below shows the accuracies of supervised finetuning. The default setting of Semi-ViT uses 100 supervised training epochs. We find that when training for fewer epochs, the finetuning results are slightly worse for 10% labels, but much worse for 1% labels. When training longer, the accuracies usually show minor drops.\n\n| Method | data | supervised epoch=50 | supervised epoch=100 | supervised epoch=200 |\n|:---------------------:|:----------------:|:----------------:|:----------------:|:----------------:|\n| ViT-Base | 10% | 72.9 | 73.7 | 73.2 |\n| ViT-Base | 1% | 53.6 | 57.4 | 56.9 |\n\nNext, we show the accuracies of Semi-ViT starting from different numbers of epochs of supervised finetuning. We find that Semi-ViT is robust (with only minor differences), although the supervised finetuning accuracies have some substantial differences, e.g. for 1% labels. These experiments show the robustness of our three-stage pipeline.\n\n| Method | data | supervised epoch=50 | supervised epoch=100 | supervised epoch=200 |\n|:---------------------:|:----------------:|:----------------:|:----------------:|:----------------:|\n| Semi-ViT-Base | 10% | 79.6 | 79.7 | 79.6 |\n| Semi-ViT-Base | 1% | 70.4 | 71.0 | 70.9 |\n\n**Q4: Could the authors sort the references, so that their numbers appear in the order they are first mentioned in the paper?**\n\nA: Our references are sorted by the last names of the authors, which is the most common practice in the computer vision community. We will check the requirements of NeurIPS and comply with them.\n\n**Q5: Missing references**\n\nA: Please see Q1\n\n**Q6: Extra observations on divergence and instability of FixMatch**\n\nA: It is interesting to know that the same observations have also been seen in other domains, e.g. speech. These extra observations do provide additional support that FixMatch is unstable in some scenarios, and EMA-Teacher could be a better choice in general. We will add some discussion and references on speech too, as suggested in this comment.\n\n**Q7: line 119 - use set difference not subtraction**\n\nA: Thanks for pointing this out. Will fix it.\n\n**Q8: reference to ImageNet**\n\nA: The ImageNet reference is [46]\n\n**Q9: how does the filtering really affect the training?**\n\nA: See Q2.\n\n**Q10: I wonder how the number of epochs of supervised finetuning influences the overall EMA pseudo-labeling convergence?**\n\nA: See Q3.", " Thanks for the valuable review! 
We provide detailed responses to each question below.\n\n**Q1: All the experimental results are as expected. I did not learn much new here.**\n\nA: To the best of our knowledge, ours is the first paper that shows that a pure ViT can achieve comparable or better results than a CNN for semi-supervised learning. Specifically, we provide three novel insights: 1) Semi-ViT can achieve SOTA results in SSL, 2) the popular FixMatch is not stable for semi-supervised ViT, 3) probabilistic pseudo mixup can bring significant gains for SSL as an effective regularization.\n\n**Q2: Lack of novelty. The proposed three-stage training, EMA teacher, and the probabilistic Pseudo Mixup are all well-known techniques.**\n\nA: We are not claiming that the three-stage pipeline or the EMA-Teacher are novel per se. What is novel is their use in achieving stable training in SSL, which is decisive for achieving state-of-the-art results with ViT. This stands in contrast with other choices, for instance SemiFormer's use of FixMatch [60], which yields disappointing performance. The fact that our improvements are obtained using known techniques makes the method simpler to understand and use, sparing others extensive experimentation to achieve stable training.\n\nWhat we do claim novelty for is the probabilistic pseudo mixup, which has shown nontrivial improvements over standard Pseudo Mixup under different scenarios. It has some appealing properties that are not present in Pseudo Mixup. For example, although some samples don't pass the confidence threshold, they can still contribute to the final loss, and their contributions depend on their confidence scores, so the low-confidence samples won't hijack the training. As a result, our Semi-ViT is robust to the confidence threshold, as also pointed out in the response to Q2 of Reviewer-mtWa.\n\n**Q3: Unfair comparison with SimCLRv2, PAWS, EMAN**\n\nA: We have described their pipelines and our differences from them in Section 2.1. SimCLRv2 also has a three-stage pipeline, but it uses knowledge distillation in its final stage. It would not be appropriate for us to change their approach for the purpose of comparison. In addition, the comparison with SimCLRv2 is fair, as the computations for the two pipelines are almost the same. Besides the direct comparison of final results, our three-stage pipeline enables stable training for Semi-ViT and reduces the hyperparameter tuning, as seen in the extra ablation studies we provide in our responses to other reviewers.\n\n**Q4: The results with 100% data for Semi-ViT**\n\nA: First, we would like to highlight that training an SSL method on 100% of the data is not a common procedure and is not commonly reported in SSL papers. Instead, the community reports the performance of standard finetuning techniques on 100% of the data. Following the community, we report that upper bound in Tables 1 and 8. \n\nThat said, we ran the experiments that the reviewer requested (table below). Due to the limited time available, we did not have the opportunity to fully exploit hyperparameter tuning (i.e., results can get better), but still achieved an encouraging preliminary result: about 0.5 points higher than finetuning on 100% of the data. Note that the improvement is not brought by longer training, because finetuning for 200 epochs won't increase the accuracy. 
This experiment does show the robustness/generalization of our Semi-ViT.\n\n| Method | 1% | 10% | 100% |\n|:---------------------:|:----------------:|:----------------:|:----------------:|\n| Finetune | 57.4 | 73.1 | 83.7 |\n| Semi-ViT-Base | 71.0 | 79.7 | 84.2 |", " This paper proposed Semi-ViT, a semi-supervised learning approach for vision transformers. The proposed method consists of three stages: first un/self-supervised pre-training, followed by supervised fine-tuning, and finally semi-supervised fine-tuning.\n\nAt the semi-supervised fine-tuning stage, Semi-ViT adopts two techniques to improve performance: an exponential moving average (EMA)-Teacher framework and a probabilistic pseudo mixup mechanism.\n\nSemi-ViT achieves comparable or better performance than the CNN counterparts in the semi-supervised classification setting.\n\nThe authors also show promising scaling-up experiments; for example, Semi-ViT-Huge achieves an impressive 80% top-1 accuracy on ImageNet using only 1% labels, which is comparable with Inception-v4 using 100% ImageNet labels.\n Strength:\nThe proposed method is clearly written. It can be understood easily.\n\nWeakness:\nLack of novelty. The proposed three-stage training, EMA teacher, and the probabilistic Pseudo Mixup are all well-known techniques. (The specific techniques in probabilistic Pseudo Mixup are new, but Pseudo Mixup is very natural.) All the experimental results are as expected. I did not learn much new here.\nFor comparisons in Figure 1 (a,b), I'm not sure whether the baseline methods (SimCLRv2, PAWS, EMAN) are also trained in a three-stage manner (e.g., the third semi-supervised stage for SimCLRv2 can be just standard semi-supervised learning with an EMA teacher). If not, merging the three techniques together in Semi-ViT makes this comparison unfair to other methods.\nThe results with 100% data for Semi-ViT in Table 1 should be reported. Whether they are better than, equal to, or worse than the baseline, this is a valuable point of fair comparison with the baselines (the MAE paper only reports the 100% data results). Similarly, 100% data results should be reported in Table 8. As in \"Strengths And Weaknesses\" yes. ", " Recently, pseudo-labeling demonstrated powerful results in many domains, including object detection, speech and image recognition, NLP and others. The current paper continues a series of works on pseudo-labeling in the context of the ViT architecture, studying different aspects of a successful pipeline for ViT models with respect to scaling and reduced supervised data. First, the authors propose probabilistic mixup, which allows using filtered pseudo-labeled data to augment non-filtered pseudo-labeled data: the mixup weights are not sampled from a beta distribution but are defined by the pseudo-label scores. This scheme is shown, with many experiments and ablations, to be very effective and to give consistent, significant improvements. Second, the authors confirm that FixMatch is an unstable training scheme, both with and without self-supervised pretraining, in the regime of low supervision (1% or 10% of ImageNet used as labeled data). Third, the authors demonstrate that self-supervised pretraining is complementary to pseudo-labeling, and their combination improves results substantially, especially in the 1% labeled-data setting (this result was shown in several domains too, e.g. speech recognition). 
Finally, the authors show the great scalability of pseudo-labeling for ViT models (with self-supervised pretraining, supervised finetuning and then EMA pseudo-labeling finetuning) and reach impressive results with only 1% of ImageNet labeled data compared to supervised ImageNet baselines. **Strengths**\n- A very clearly written paper with all necessary details and deep explanations\n- Comprehensive experimental study of pseudo-labeling for ViT and proper ablations showing consistent results across the board\n- New idea of probabilistic mixup, which gives consistent experimental improvements across the board for different scenarios and pipelines\n- Ablations on FixMatch confirming training instability in the low-supervision setting\n- Ablations showing the complementary property of pseudo-labeling and self-supervised pretraining\n- Impressive results with only 1% labeled data\n- Demonstration of the scaling property of pseudo-labeling for the ViT architecture\n\n**Weaknesses**\n- [not important] Absence of some recent literature on theoretical justification of pseudo-labeling and similar EMA study and stability in other domains (see the Questions section for more details)\n- Absence of investigation of to what extent filtering is important in the pipeline (regarding that mixup with filtered data helps) - this could be another baseline for probabilistic mixup justification\n- [maybe future work?] Absence of a study on how many epochs of supervised training / supervised finetuning are needed before starting the EMA pseudo-labeling process.\n\n The paper is very well written with strong results and a clear explanation of experiments and settings. I have very few suggestions for improving the paper:\n- Could the authors sort the references, so that their numbers appear in the order they are first mentioned in the paper? In this case it is much simpler to find works from the introduction section.\n- References are great, thanks for this! It would only be great to have extra references to other domains, like speech recognition and NLP, where pseudo-labeling is actively developed too, including EMA and stabilization of training for online pseudo-labeling (where the model is trained with a continuously updated teacher). Let me know if you need particular pointers, as I am familiar with these works. There are also 2 recent theoretical papers on explaining why pseudo-labeling works (both teacher-student and online versions) [1,2].\n- Extra observations on divergence and instability of FixMatch are very important to have for future pseudo-labeling development. I like the details provided in Sec 2.2, which gives proper combined references to prior works and observations. I would also like to see references to several speech recognition works here, to give an even stronger overview of the training instability. Yes, it is a slightly different variant of FixMatch that people use in speech, but the same observation on using this type of teacher holds: people observe divergence and huge instability, especially with very little supervision (if teacher and student share weights; see Fig. 1 of [3]), and they propose either a history cache [3] or EMA [4,5] to stabilize the training.\n- line 119 - use set difference not subtraction\n- Maybe I missed it, but I didn't notice a reference to ImageNet\n- The authors did comprehensive experiments showing the contribution of self-supervised pretraining and probabilistic mixup across different scenarios, different models, and different pretraining. Only one thing remains a question for me: how does the filtering really affect the training? 
Could it be that with probabilistic mixup we can even avoid it and reduce the number of hyperparameters (e.g., have an ablation in Table 3 where filtering is not used)? (Just for context: in speech recognition the absence of filtering works very well, and filtering techniques so far show only marginal gains for CTC models, where there is no problem with filtering long/short sequences as with seq2seq-type losses.) I also didn't find the final filtering threshold in the Appendix, only the set which was grid searched.\n- [Extra question]. I wonder how the number of epochs of supervised finetuning influences the overall EMA pseudo-labeling convergence? Are there different scenarios for 1% and 10% labeled data? (See e.g. such studies in [3, 4, 5] for speech.)\n\nOverall, thanks to the authors for the deep investigation!\n\n[1] Zhang, S., Wang, M., Liu, S., Chen, P.Y. and Xiong, J., 2022. How does unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis. ICLR 2022.\n\n[2] He, H., Yan, H. and Tan, V.Y., 2021. Information-theoretic generalization bounds for iterative semi-supervised learning. arXiv preprint arXiv:2110.00926.\n\n[3] Likhomanenko, T., Xu, Q., Kahn, J., Synnaeve, G. and Collobert, R., 2020. slimIPL: Language-model-free iterative pseudo-labeling. Interspeech 2021. \n\n[4] Manohar, V., Likhomanenko, T., Xu, Q., Hsu, W.N., Collobert, R., Saraf, Y., Zweig, G. and Mohamed, A., 2021, December. Kaizen: Continuously improving teacher using exponential moving average for semi-supervised speech recognition. In 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU) (pp. 518-525). IEEE.\n\n[5] Higuchi, Y., Moritz, N., Roux, J.L. and Hori, T., 2021. Momentum pseudo-labeling for semi-supervised speech recognition. Interspeech 2021.\n Limitations are listed in the conclusion section.", " This work proposes a three-step training framework for pure ViTs, including un/self-supervised pre-training, followed by supervised fine-tuning, and finally semi-supervised fine-tuning. EMA and a probabilistic pseudo mixup mechanism are used, and the results are competitive. + The paper is well written and easy to follow. \n+ First to use pure ViT for SSL.\n\n- The proposed training pipeline is not new compared with former works, such as [14].\n- The improvements are based on existing works (i.e. EMA-Teacher) that are easy to come up with in the semi-supervised domain. Apart from the fact that ViTs are sometimes data hungry, what are the main differences between ViTs and CNNs that require the community to pay special attention to achieve good SSL results? Does the proposed method also benefit CNN models? The necessity of designing special methods for SSL on ViTs needs to be clarified.", " This paper proposes a semi-supervised framework for vision transformers, in which the authors introduce two techniques to improve the robustness and performance of ViT in semi-supervised learning: 1. the EMA-teacher network update, which is the moving average of the student network; 2. Probabilistic Pseudo Mixup, which is a novel mix-up method under a pseudo-labelling-based SSL framework. Strengths:\n\n1. This paper is well written, and the core idea is easy to understand. The proposed method and formulation are clean, straightforward, and easy to re-implement. \n\n2. The proposed method effectively improves the semi-supervised training for ViT. 
Compared to the baseline, both EMA-teacher updating and Probabilistic Pseudo Mixup achieve significant improvements.\n\n3. The Probabilistic Pseudo Mixup is novel, which provides a new direction for employing mix-up in ViT under a pseudo-labelling-based SSL framework.\n\n4. The experimental results are remarkable. Compared to fully supervised finetuning after MAE, this paper is only 2% lower with only 10% of ImageNet data. In addition, the proposed method works well under various self-supervised pretraining pipelines.\n\nWeaknesses:\n\n1. It will be good to show more ablation studies over some hyper-parameters, such as the momentum decay and confidence score.\n 1. Under a more general semi-supervised learning set-up, how does the proposed method work when labelled data and unlabelled data are from different datasets? 1. The large-scale self-supervised pre-training (MAE) may generate more carbon emissions. " ]
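The two techniques at the center of the discussion above can be summarized in a few lines of code. The sketch below assumes PyTorch; the EMA rule is standard, while the mixing rule is only our paraphrase of how the reviews describe probabilistic pseudo mixup (coefficients derived from pseudo-label confidences instead of a Beta draw), not the paper's exact formulation, and the function names are ours.

```python
import torch

@torch.no_grad()
def ema_update(teacher, student, m: float = 0.999):
    # teacher <- m * teacher + (1 - m) * student, the EMA-Teacher rule
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps, alpha=1.0 - m)

def probabilistic_pseudo_mixup(x: torch.Tensor, probs: torch.Tensor):
    """x: (B, C, H, W) unlabeled images; probs: (B, K) teacher soft pseudo-labels."""
    conf = probs.max(dim=1).values            # per-sample pseudo-label confidence
    perm = torch.randperm(x.size(0))
    # mixing weight from confidences: the more confident sample dominates the mix
    lam = conf / (conf + conf[perm] + 1e-8)
    x_mix = lam.view(-1, 1, 1, 1) * x + (1.0 - lam).view(-1, 1, 1, 1) * x[perm]
    y_mix = lam.unsqueeze(1) * probs + (1.0 - lam).unsqueeze(1) * probs[perm]
    return x_mix, y_mix
```

Because `lam` never collapses a pair to a single sample, even inputs below the confidence threshold $\tau$ keep a down-weighted influence on the loss, which matches the robustness-to-$\tau$ argument the authors make above.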
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 8, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 4 ]
[ "8syA9Negenm", "xZo1QIywRB5", "qsCJJ1nS7X4", "u7LZRmqPEg9", "Vt1HdGSp6u1", "J8xafR-ISSs", "sKpQmAYiYK1", "vRNddoCQ22W", "nips_2022_7a2IgJ7V4W", "nips_2022_7a2IgJ7V4W", "nips_2022_7a2IgJ7V4W", "nips_2022_7a2IgJ7V4W" ]
nips_2022_gtCPWaY5bNh
Deep Model Reassembly
In this paper, we explore a novel knowledge-transfer task, termed Deep Model Reassembly (DeRy), for general-purpose model reuse. Given a collection of heterogeneous models pre-trained from distinct sources and with diverse architectures, the goal of DeRy, as its name implies, is to first dissect each model into distinctive building blocks, and then selectively reassemble the derived blocks to produce customized networks under both hardware resource and performance constraints. The ambitious nature of DeRy inevitably imposes significant challenges, including, in the first place, the feasibility of its solution. We strive to showcase that, through a dedicated paradigm proposed in this paper, DeRy can be made not only possible but practically efficient. Specifically, we conduct the partition of all pre-trained networks jointly via a cover set optimization, and derive a number of equivalence sets, within each of which the network blocks are treated as functionally equivalent and hence interchangeable. The equivalence sets learned in this way, in turn, enable picking and assembling blocks to customize networks subject to certain constraints, which is achieved via solving an integer program backed up with a training-free proxy to estimate the task performance. The reassembled models give rise to gratifying performances with the user-specified constraints satisfied. We demonstrate that on ImageNet, the best reassembled model achieves 78.6% top-1 accuracy without fine-tuning, which could be further elevated to 83.2% with end-to-end fine-tuning. Our code is available at https://github.com/Adamdad/DeRy.
Accept
This paper proposes an interesting new way to think about how to use a model zoo of pre-trained models: extract modular building blocks that are swappable from the networks and then stitch them together. To do the former, a cover set optimization method is proposed, and the blocks can then be combined in a way that respects various resource and performance constraints. The idea is both interesting and ambitious, and has the potential to open up various avenues of research if done well. The paper lives up to the task: it is well-written (k5P1, 3qgQ, q2yK), conducts experiments to validate whether such stitched networks can do well, and proposes an intuitive, principled method to extract the blocks (3qgQ, k5P1). The reviewers did express some concerns about scalability/generalizability to other tasks (k5P1, 3qgQ), larger zoos (k5P1, 3qgQ), other architectures (all reviewers), and computation (q2yK), as well as several other potential issues such as limited performance improvements. The authors provided strong rebuttals to these, including some new experiments. At the end of the process, the reviewers were all satisfied with most of the concerns, and the overall consensus on the paper is positive, with high scores. Given the potentially high-impact, novel perspective as well as the solid execution, I highly recommend this paper for acceptance.
test
[ "bI9ScNdvKn", "E4_iG2sao3v", "mxPVFQ22pM", "VdiJ8ooV10j", "foaAVaWbzdG", "EYsFvLP2T6Y", "szRgeO3AtEF", "r1mhNcHzmJv", "3jWusBXyE_I", "kZjksFZKLP2", "vD--DKTNL-L", "1o4P2MmYS8a", "6Uw9xMIQA-g", "dwlyT1q7-9k", "XjfgzI9s2s8", "CwuGsPiWqjA" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors and appreciate their effort in improving the manuscript. To summarize, all of my concerns are now well addressed by the authors and hence I am increasing my initial score to Strong Accept.", " Thank you for the detailed response that resolves most of my concerns. As the first effort toward reusing/resembling pretrained models, the paper presents promising results. I’d like to raise the score to weak accept (6). The authors are encouraged to include the rebuttal in the final paper to provide more context about the task and the solution. From the discussion, there seem to be open problems down the road, which could be explored in future research. ", " `>>> Q8` **Manuscript Update**\n\n`>>> A8` We sincerely appreciate the reviewer for the constructive feedback. As advised, we have further updated the manuscript. Specifically, we add the experimental results on the homogeneous model zoo in `Supplementary Section 5.1` and add the ablation study for partition number $K$ in `Supplementary Section 5.2`. We also provide in-depth discussions on the potential applications in `Supplementary Section 4`, the limitation on model bias in `Supplementary Section 3`, the extension to other tasks in `Supplementary Section 6`, and the node definition in `Supplementary Section 7.3`. \n\n`>>> Q9` **Model Bias Elimination**\n\n`>>> A9` \nYes, the reviewer's comment is well taken. It is indeed possible that DeRy transfers the biased knowledge from the predecessors to the reassembled model. To address this problem, in our `Supplementary Section 3`, we discuss how to resolve this issue. Specifically, two techniques could be incorporated into the DeRy framework to mitigate the model bias.\n\n**First**, we can expand the model zoo size and limit the block size for each model. It ensures that no single block is dominant in the reassembled model, which largely rules out the possibility of a large bias from each individual network.\n\n**It is also possible** to increase the diversity among the reassembled blocks instead of blindly optimizing the target performance. A diversity regularization term could be added to Equation 8 to promote unbiased predictions. \n\nWe will extend our study to those fields to eliminate the bias introduced by DeRy in future work.", " I thank the authors for their response and effort on the new experiments. While most of my concerns have been addressed by the reviewers, I personally think it would be much better if authors could fix the problems I mentioned and update the manuscript accordingly, rather than just making a promise that they will add a discussion in the main paper. The new experiments and discussion presented here should be either included in the main paper or supplementary to further strengthen the proposed work.\n\nMoreover, given that DeRy combines different models, what about the bias of the resulting model? Is there any way to control the biases of the combined models so that they won't be reflected on the final model? While this is beyond the scope of the current paper, a discussion on limitations wrt to biases would be interesting.", " Dear Reviewer,\n\nWe would like to thank you again for your constructive comments and kind effort in reviewing our submission. Please do let us know if our response has addressed your concerns, and we are more than happy to address any further comments.\n\nThanks!", " Dear Reviewer,\n\nWe would like to thank you again for your constructive comments and kind effort in reviewing our submission. 
Please do let us know if our response has addressed your concerns, and we are more than happy to address any further comments.\n\nThanks!", " Dear Reviewer,\n\nWe would like to thank you again for your constructive comments and kind effort in reviewing our submission. Please do let us know if our response has addressed your concerns, and we are more than happy to address any further comments.\n\nThanks!", " We would like to thank the reviewer for the insightful feedback and comments. We are encouraged that the reviewer found the paper well written and the problem setup interesting and ambitious. We address the reviewer's comments below and will include all the feedback in the revised version of the manuscript.\n\n`>>> Q1` **Path Graph**\n\n`>>> A1` Admittedly, as the first endeavor towards general network reassembly, we focus on path-graph-based models for illustrating our strategy; heavy skip-based architectures are, in fact, *intentionally* excluded from this pilot study, so as to make this investigation more self-contained and logically clear.\n \nNevertheless, we are not the only ones who made such assumptions. The path-based hypothesis is indeed prevalent for pilot studies among other research tasks: for cell-based neural architecture search [A], network stitching [12], and the Gaussian-process understanding of DNNs [B], pioneering attempts all rely on the assumption of path-graph models, based on which variants and extensions thrive in the following work.\n\nIn fact, DeRy can be extended to network architectures with skip connections by applying node duplication and introducing a novel feature similarity estimation: node duplication aims to remove the skip connections of the original graph topology, while the similarity estimation facilitates the network partition. For example, given a four-layer UNet $A$ with architecture `L1->L2->L3->L4` and skip-connection `L1->L4`, we can replicate node `L1` into two identical layers `L'1` and `L''1`; this results in a transformed graph with two paths `L'1->L2->L3->L4` and `L''1->L4`, where `L4` is the joint node. As a result, the transformed network has no skip-connections but a multi-branch structure. For any subgraph $A'$ from $A$ with a line graph structure, we can conduct the network partition as described in our manuscript. For subgraphs $A'$ with multiple inputs or multiple outputs, we may define a new multiple-to-multiple representation similarity `s*()` to carry out the partition. Specifically, `s*()` takes two sets of features and produces a similarity score. The original one-to-one representation similarity is then replaced by `s*()` to further cluster and partition the network. As such, we can assemble the derived blocks with any graph topology. We would like to provide a more comprehensive study on this in our future work.\n\n[A] Liu H, Simonyan K, Yang Y. Darts: Differentiable architecture search[J]. arXiv preprint arXiv:1806.09055, 2018.\n\n[B] Lee J, Bahri Y, Novak R, et al. Deep neural networks as gaussian processes[J]. arXiv preprint arXiv:1711.00165, 2017.\n\n`>>> Q2` **Granularity of Partition K = 4**\n\n`>>> A2` We thank the reviewer for the comment. In fact, the granularity of partition `K` cannot be set too large, since small networks like ResNet18 contain only 8 nodes, while ViT-T/S/B contain only 12 nodes. 
As such, `K=4` is indeed a reasonable choice, as the number of nodes in various networks is mostly a multiple of 4, and most manually designed networks have 4 stages.\n\nTo see the impact of `K`, here we show the experimental results with different `K` settings, i.e., `K=4/5/6`, in the table below. We set the configuration to `DeRy(K, 30, 6)`. The reassembled network is trained on ImageNet for 100 epochs. All other experimental settings are kept the same as those in the manuscript. We find that, as the partition number `K` increases, the performance of the reassembled model remains quite stable. (A minimal sketch of the linear CKA representation similarity that underlies these partitions follows this comment.)\n\n| K | \# Param | GFLOPs | Top1 Acc | Top5 Acc |\n| - | -------- | ------ | ----------------- | ----------------- |\n| 4 | 24.89 | 4.47 | 79.62 | 94.83 |\n| 5 | 21.14 | 5.53 | 79.68 | 94.89 |\n| 6 | 23.38 | 5.39 | 79.83 | 95.02 |
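Since several of the exchanges in this thread hinge on the representation similarity s(,) used for partitioning (stated later in the thread to be linear CKA), here is a minimal NumPy sketch of that measure. It is a generic linear-CKA implementation under our own naming, not the repository's mini-batch variant in `similarity/compute_sim.py`.

```python
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA between features x: (n, d1) and y: (n, d2) of two network
    blocks evaluated on the same n probe inputs (spatial dims flattened)."""
    x = x - x.mean(axis=0, keepdims=True)   # center each feature dimension
    y = y - y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(y.T @ x, "fro") ** 2
    norm_x = np.linalg.norm(x.T @ x, "fro")
    norm_y = np.linalg.norm(y.T @ y, "fro")
    return float(cross / (norm_x * norm_y))

# toy check: identical features give similarity 1.0
f = np.random.randn(256, 64)
assert abs(linear_cka(f, f) - 1.0) < 1e-6
```

Precomputing this score once, offline, for every pair of candidate blocks is what makes the lookup-table speed-up described later in the thread possible.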
The hyper-parameter search space is described in `Supplementary Material Line 164-184`.\n\n`>>> Q6` **Computation for Similarity**\n\n`>>> A6`Thanks for the comment. Indeed, the similarity computation is heavy, as discussed in `Section 2.4 of the supplementary material`. However, in practice, this estimation can be pre-computed offline and saved to a lookup table, which, in turn, greatly reduces the computation time. In fact, this gives us a **speed up of more than 20 times** as compared to the computational overhead listed by the reviewer, which applies to online computation.\n\nSpecifically, we may repeat the similarity calculation for the same pair of networks over iterations. In our implementation, all similarity tables are built before partitioning using an offline manner. Hence, the computation of `s(,)` is independent of $K$ and $R$. Assume that, for $B$ batches of data, we pass it through each of the $N$ networks once for $N \\times B$ times, and collect all intermediate features and save them into local files. This process is fast, which takes no more than 20 mins on an 8x3090 server for all 28 models. Next, we conduct `s(,)` computation for $\\sum_{i, j=1,\\dots, N} L_i * L_j$ times for each pair of layers. This is where the large computation overhead comes from since there is a large matrix multiplication for each `s(,)`. We implement a mini-batch version full CKA algorithm to save memory and do multi-thresholding. As such, the computation for each pair of networks reduces to around 1~2 min. For a zoo of 28 models, we need around $28\\times 28 \\times 1 \\sim 28\\times 28 \\times 2$ minutes, with about 12-24 hour in total. Once the similarity computation is done, the partition and reassembly steps can be done without hassle. Please see our code at `similarity/get_rep.py` and `similarity/compute_sim.py` on the implementation details.\n\nNotably, this large computational overhead is attributed to estimating representation similarity itself, which is in fact orthogonal to the proposed DeRy pipeline. When other efficient representation-similarity estimations are available, we may readily adopt them to DeRy and accelerate the overall process.", " `>>> Q7` **Trivial Solution of Partition Algorithm and Its Necessity**\n\n`>>> A7` Thanks for the comment. Our partition results are indeed not trivial solutions. We start the partition by treating all the blocks equally without any prior knowledge, bias, or assumptions, and then carry out the proposed representation-based clustering. It turns out that, encouragingly, the obtained clusters mostly align with our human intuitions, where network blocks at similar depths have similar functionality.\n\nIn other words, we would consider this coincidence as an interesting observation that follows our intuitive hypothesis, which, without the proposed partition, cannot be validated. In fact, through our experiment, we did observe some exceptional cases where the network blocks are not grouped by the stage ID, indicating the necessity of network partition.\n\n`>>> Q8` **Minor issues**\n\n`>>> A8` We truly appreciate the reviewer for the comment. We have corrected the typo and uploaded the new version for the reviewer's reference.\n\n\n\n`>>> Q9` **i) What is used as the s(,) metric?**\n\n`>>> A9` Yes, we use Linear CKA for representation calculation. It has been mentioned in `Line 238-239`.\n\n`>>> Q10` **ii) The overhead (complexity) of building the s(,) index, partition, and stitching. Table 2 shows the GPU time. 
I wonder if it includes building the s(,) index.**\n\n`>>> A10` We thank the reviewer for the comment. The computation for building the s(,) index is described above. The partition computation is fast, with around 5 seconds per run and we run 200 times to ensure convergence. For the reassembly stage, we need to evaluate 500 candidates, each with a 5-batch average to estimate the NASWOT score and. Total stitching time is less than 5 hours. We do not include the similarity calculation time in `Table 2`, since it is a pre-processing step. We only account for the partition and reassembly search time. Even if we include it in `Table 2`, only 1 GPU day is accumulated to the first row, which is still faster and more efficient compared to Row 2 without partition.\n\n\n`>>> Q11` **iii) I wonder if the s(,) table needs to be re-built for each dataset?**\n\n`>>> A11` Representation similarity s(,) is only built on ImageNet and then it is kept fixed for all downstream datasets. As mentioned above, 1 GPU day is needed to compute a representation similarity table.\n\n`>>> Q12` **iv) I wonder if the partition and stitching need to be re-run for each dataset?**\n\n`>>> A12` No, we do not re-run partitions and stitching on each dataset. In our paper, we run the DeRy on ImageNet and apply it to all downstream tasks, which saves the cumbersome search cost on every task. In fact, this is the common practice for other tasks, such as NAS, to search on proxy data and then train on target ones.\n\n`>>> Q13` **v) The list of the 28 base models used**\n\n`>>> A13` Thanks for the comment. In total, there are 21 architectures and 28 pre-trained weights.\n\n1. Swin-T/S/B/L sup in1k(4), Swin-B sup in21k(1),\n \n2. ResNet18/50/101 sup in1k(3), ResNet50 MoCov2/SimCLR/BYOL in1k(3), ResNet50 sup iNatualist (1),\n \n3. RegNetY-800MF/1.6GF/3.2GF/8GF/16GF/32GF sup in1k(6),\n \n4. ViT-T/S/B/L sup in1k(4), ViT-S MoCov3 in1k(1), ViT-B MAE in1k(1),\n \n5. MobilenetV3 Large 1.0/0.75 sup in1k(2),\n \n6. ResNeXt 50 32x4d/ResNeXt 101 32x8d sup in1k(2).\n \nPlease see `Table 1 and Table 2 of the Supplementary` and code `blocklize/block_meta.py` for more details.\n\n`>>> Q14` **vi) Models shown in Fig. 11**\n\n`>>> A14` Thanks for the comment. We would like to include as many models as possible in `Fig 11`; however, some of the models are excluded for better visualization quality. Instead, all the detailed results are shown in `Table 1 and Table 2 of the Supplementary`. Specifically, we exclude extra-large (e.g. ViT-L, Swin-L) or models with extremely poor performance (e.g. Swin-T only gets an accuracy of 5.3% on Flower) because they are far from the majority of data points.\n\n`>>> Q15` **vii) The detailed architectures of the obtained DeRy models.**\n\n`>>> A15` Thanks for the comment. Since we only search once on ImageNet, the detailed architecture is already mentioned in `Figure 8`. The stitching layer structure is also provided in `Supplementary Table 3`.\n\n\n\n", " We would like to thank the reviewer for their insightful feedback and interesting observations. We are encouraged that the reviewer found the problem setup novel, the experiments thorough, the proposed method well-designed, and the paper well-written and easy to follow. We thank the reviewer for the support of our work. We address the reviewer’s comments below and will include all the feedback in the revised version of the manuscript.\n\n`>>> Q1:` **Heterogeneous Models VS Homogeneous Models**\n\n`>>> A1:` We thank the reviewer for the question. 
Yes, DeRy can indeed be applied for homogeneous models, since, in fact, a homogeneous model zoo is a particular and simplified case for heterogeneous models. In our manuscript, we have focused on the more challenging heterogeneous models. As advised by the reviewer, here we adopt DeRy on homogeneous models and compare the results with those of [54] on CIFAR-100, AirCraft, and Cars, using the same homogeneous model zoo settings as in [54]. The approaches of [55, 59] are, however, explicitly designed for federal learning and hence do not fit our goal; the approach of [14], on the other hand, had no open-sourced implementation online. \n\nNote that we do not further pre-train DeRy on ImageNet to make sure the comparison is fair. As shown below, we indeed outperform [54] significantly with the same experimental setup.\n\n| | # Param |CIFAR-100 | AirCraft| Cars|\n|--|--|--|-- |-- |\n|Zoo Tuning [ICML 2021] | 23.71 |83.39 |85.51 | 89.73|\n|DeRy(4, 30, 6)|24.89 | 84.05(**+0.66**) |88.86(**+3.35**) |93.86(**+4.13**) |\n\n`>>> Q2:` **Model Zoo Size and DeRy Scalability**\n\n`>>> A2:` We truly thank the reviewer for the comment. Due to the computational constraints, we did not compare the performance of different model hub sizes. We will extend our experiment in that direction in future work.\n\nBut a quick thought experiment is that, since our method partitions and reassembles the model zoo in an AutoML style, scaling up the model zoo is truly nothing beyond increasing the feasible set size. Therefore, the global optimal reassembly could ideally perform better. The only problem is that scaling up the feasible set makes the optimization harder, which requires transversing more candidates to reach the optimality. As our zero-shot proxy is very cheap in terms of computation, we believe our method generalizes well to large model zoos.\n\n`>>> Q3:` **Potential Applications and Reducing the Training Cost**\n\n`>>> A3:` Thanks for the question. DeRy indeed has a wide range of potential application scenarios. Taking the foundation model as an example, instead of training a network from scratch, we can take several pre-trained small models, partition them into building blocks, and assemble them into the large model as an efficient method for network initialization. As suggested in the paper, reassembling pre-trained models provides faster convergence and reduces the training cost.\n\nAnother example is multi-task learning. Given a bunch of trained single-task models, we can bring up a method to aggregate their capacities into a reassembled model. For example, assemble a new model with a shared backbone and multiple task prediction components, each taken from a single task. As such, we reassemble a multi-task model at a very low cost using DeRy.\n\nAs suggested by the reviewer, we will add a discussion in the main paper.", " \n`>>> Q4:` **Atomic Node Definition**\n\n`>>> A4:` We thank the reviewer for the comment. In our study, not every operation in the DNN can be treated as an atomic node. Consider a `Conv->ReLU` with skip-connection; we cannot make the single convolution layer as our node because a skip-connection breaks the line graph assumption. DeRy, in this current form, can not cut off multiple parallel paths at the same time. Therefore, we need to specify the node in each network. For example, a ViT-B contains 12 transformer blocks and hence has 12 nodes, and a ResNet-18 has 8 residual blocks and hence has 8 nodes. 
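As a concrete illustration, such a node specification can be as simple as the following sketch (a hypothetical example for exposition only — the names and format here are ours, not the exact contents of our metadata file):

```python
# Hypothetical sketch of an atomic-node table: each entry lists, in
# forward order, the module paths that act as indivisible nodes of one
# pre-trained network, so that any cut point between two consecutive
# nodes respects the line-graph assumption (no skip-connection is cut).
ATOMIC_NODES = {
    # ResNet-18: 8 residual blocks -> 8 nodes (torchvision naming)
    "resnet18": [f"layer{s}.{b}" for s in range(1, 5) for b in range(2)],
    # ViT-B: 12 transformer blocks -> 12 nodes (timm naming)
    "vit_base": [f"blocks.{i}" for i in range(12)],
}
```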
The detailed definition of atomic nodes is listed in the source code `blocklize/block_meta.py`.\n\n`>>> Q5:` **Partition Number, Performance and Complexity**\n\n`>>> A5:` We repeat the network partition and reassembly step with `K=5` and `K=6` to see how our method performs with different partition granularities. We set the configuration to `DeRy(K, 30, 6)`. The network is trained on ImageNet for 100 epochs. All other experimental settings are kept the same as those used in the manuscript. As illustrated in the table below, we observe that the partition number `K` has little effect on the model performance. As for the computational complexity, a large `K` does increase the search time with less than 10% time growth.\n\n| K | \\# Param | GFLOPs | Top1 Acc | Top5 Acc |\n| - | -------- | ------ | ----------------- | ----------------- |\n| 4 | 24.89 | 4.47 | 79.63 | 94.81 |\n| 5 | 21.14 | 5.53 | 79.68 | 94.89 |\n| 6 | 23.38 | 5.39 | 79.83 | 95.02 |\n\n`>>> Q6:` **CLAFusion Comparison**\n\n`>>> A6:` We thank the reviewer for pointing out this interesting and inspiring reference [A]. Though Cross-Layer Alignment Fusion (CLAFusion) and our paper share similar high-level motivations, the problem setup and the solution are quite different. We have cited the paper in our revision in [72] and provided a discussion.\n\n1. **Problem setup**: CLAFusion also aims to fuse heterogeneous neural networks, but it is restricted to the case where two networks come from the same architecture family, with the same input and output dimension but different numbers of hidden layers. DeRy, by contrast, does not impose any assumption on the network structures, where CNN, MLP, and transformer can be reassembled in a unified framework.\n \n2. **Solution**: CLAFusion first solves a layer assignment problem between two networks, and then transforms them to the same depth through either adding or merging layers. As such, their fusion is `task-agnostic`: for any two networks, the fusion does not depend on the final task or data. We, instead, partition the networks into building blocks and then reassemble them to maximize the target performance. Hence, our method is `target-related`: the reassembled model is searched on a specific target task.\n \n3. **Extensibility**: CLAFusion is originally designed for fusing two networks. It is hard to extend CLAFusion to multiple models. To fuse K networks, the solution in the CLAFusion paper needs to run the pairwise fusion for K times. DeRy is, on the other hand, more scalable, since we only need to run the algorithm once, regardless of the number of models involved.\n \n[A] Nguyen, Dang, et al. \"Model Fusion of Heterogeneous Neural Networks via Cross-Layer Alignment.\" arXiv preprint arXiv:2110.15538 (2021).\n \n\n`>>> Q7:` **Extension to Other Vision Tasks**\n\n`>>> A7:` Yes, DeRy can be indeed applied to other vision tasks. There are several advantages of DeRy when it is applied to downstream tasks. \n1. **First**, as DeRy directly searches for a general backbone, we may readily apply the same network to other tasks without any hassle. \n2. **Second**, the training-free proxy of NASWOT does not depend on the ground-truth label and is therefore label-agnostic. It enables us to assemble new networks on any task and any modality of input. \n3. **Third**, DeRy is highly computationally efficient; it only requires several hours to search for the optimal structure on a large-sized dataset. 
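To make the second point concrete, the label-agnostic proxy we rely on can be sketched as follows. This is a minimal NASWOT-style scoring routine in the spirit of the original paper, assuming a PyTorch model whose nonlinearities are `nn.ReLU` modules; it is an illustrative approximation rather than our exact implementation:

```python
import torch

@torch.no_grad()
def naswot_score(model, images):
    """Training-free, label-free score: log-determinant of the kernel
    of binary ReLU activation patterns on one mini-batch (higher is
    better). No ground-truth labels are needed at any point."""
    codes = []  # one binary activation code per ReLU layer, per sample

    def hook(_module, _inputs, output):
        codes.append((output.flatten(1) > 0).float())

    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, torch.nn.ReLU)]
    model.eval()
    model(images)
    for h in handles:
        h.remove()

    c = torch.cat(codes, dim=1)            # (batch, total_relu_units)
    k = c @ c.t() + (1 - c) @ (1 - c).t()  # activation agreements between samples
    return torch.slogdet(k)[1].item()      # log|K|
```

Because it needs no labels and only a single forward pass per candidate, scoring hundreds of reassembled candidates remains cheap on any target task or input modality.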
\nWe indeed look forward to extending DeRy to other vision tasks in our future work.", " We thank the reviewer for the constructive comments and would like to address them as follows. \n\n`>>> Q1` **Over-complicated Pipeline and Scalability**\n\n`>>> A1` Thanks for the question. In fact, DeRy introduces an acceptable complexity and a low computational cost; also, it can be easily extended to more complicated vision tasks other than classification. The reason lies in the following aspects. First, DeRy searches for a general vision backbone, so it's easy to apply the same network to a handful of tasks. Second, NASWOT, which is adopted in DeRy, is a task-agnostic proxy, and it allows us to build customized networks on arbitrary tasks or input modalities. Lastly, DeRy has a low computational overhead: the search time is no longer than 0.5 GPU day. Hopefully, we will be able to extend DeRy to other applications in the future.\n\n`>>> Q2` **Training Time**\n\n`>>> A2` Thank the reviewer for the question. For a fair comparison, we use a training schedule of 100~300 epochs in accordance with the setting in other works. In fact, one advantage of DeRy is that it converges faster than randomly initialized networks. As demonstrated in Table 9 and Figure 10, our model is superior to baselines with faster convergence, where the loss function decreases rapidly. In section 4.1 `Similarity, Position and Reassembly-ability` section, we also experiment with training the reassembled model for 20 epochs. The best-reassembled network (ResNet50 with stage 3 replaced by ResNet101) also archives a competitive top-1 accuracy of around 78% on ImageNet. These results indeed demonstrate that DeRy enjoys faster convergence.\n\nWe sincerely hope that the reassembled model may perform well with very little training time (less than 10 epochs or 1 epoch) or even zero-shot reassembly since all network blocks have previously been trained. However, we are still far from achieving this ambitious goal. We look forward to handling this issue in future work.\n\n`>>> Q3` **Diverse Pre-training Tasks and Datasets**\n\n`>>> A3` Thanks for the comment. In the long run, we would like to reassemble models in a larger, more diverse model zoo. In spite of this, we are not able to test our method on a larger scale due to computational limitations. Secondly, most of the model zoos available online are based on ImageNet, which is quite homogeneous and difficult to use. To eliminate data bias, we will explore collecting a more diverse model zoo in the future.\n\n`>>> Q4` **Comparison between DeRy and NAS**\n\n`>>> A4` There is a significant difference between DeRy and NAS\n\n1. **Partition Step**. Initially, DeRy subdivides a group of neural networks into blocks and then reassembles them into a customized network. The NAS, on the other hand, assumes that the search space has been predefined, so partitioning is not needed.\n \n2. **Distinct Objective**. While DeRy searches for the architecture and weights jointly, NAS concerns only the network architecture.\n\nWe truly thank the reviewer for the suggestion. We will consider adding a section on NAS and its connection with our study.\n\n`>>> Q5` **Functional Similarity and Swap-ability**\n\n`>>> A5` Thank the reviewer for the comment. Here's a quick thought experiment. Consider the *Linear Regression* as our similarity measurement. A large functional similarity of the two blocks indicates that both input $X, X’$, and output features $B(X), B’(X’)$ are highly similar. 
Consequently, a simple linear layer $F$ is able to transform one input into another input $X’\\approx F(X)$. The same applies to the outputs $B(X) \\approx F’(B’(X’) )$ with another linear transform $F'$. Therefore, we can replace $B \\approx F\\circ B’ \\circ F’$, where $\\circ$ stands for network stacking. According to the kernel functions employed in the similarity computations, different stitching layer structures could be specified to satisfy the swap-ability requirement. Hope our answer addresses the reviewer's question.\n\n`>>> Q6` **Writing Problems**\n\n`>>> A6` We truly appreciate the reviewer for the proofreading. We have fixed these typos in the revision.", " This paper brings up an ambitious and pioneering paradigm, termed Deep Model Reassembly (DeRy), that reuses the pretrained neural networks as building blocks for new model construction to address the transfer learning problem. A two-stage solution is proposed to jointly search for the optimal model architecture and weight for the reassembled model. DeRy first partitions the pre-trained networks jointly via a cover set optimization, and then assemble blocks to customize networks subject to hard constraints via solving an integer program backed up with a training-free proxy. Experimental results on ImageNet and 8 down-stream tasks validate that the reassembled model can achieve higher performance than any candidate models in the model zoo. [Strengths]\n- Overall, this is a well-written paper with an interesting and extensible idea at its core: considering the knowledge transfer as finding optimal layer-wise stacking of pre-trained blocks (In Definition 1). Comparing to the typical NAS problem with a fixed search space like operation or topology, the DeRy involves a dynamic search space (The partition of the pre-trained networks is unknown beforehand), which is much more challenging.\n\n- New Findings: This study strives to point out that, arbitrary trained networks are largely reassembleable, even though the models may have diverse architecture and source tasks. The grafted pre-trained models potentially could provide satisfying performance. \n\n- Overall, I am convinced with the intuition of the method: first dissecting the networks into swappable equivalence blocks, then constructing new models with the best performance. The dedicated solution draws inspiration from the conventional discrete optimization community, thus with a good convergence guarantee and low search complexity. I personally favor the current solution.\n\n- Evaluation is sufficient to support the argument of this paper. Section 4.1 validates the pipeline designs and lists several intriguing findings for partitioned blocks\n\n[Weaknesses]\n- If my understanding is correct, the current methodology may result in an over-complicated pipeline for the model reassembly, which may not be scalable to more sophisticated tasks (detection, segmentation and generation) or model zoo setting. I am not expecting the authors to resolve this issue in this paper, but it would be great to have a discussion over this matter in future work part.\n\n- The current solution still needs a long training time, like 100~300 epochs for ImageNet. This is OK since this is the first attempt along the line, but again it would be great if the authors can provide a discussion on this issue. 
\n\n- Despite the authors already including quite a significant number of experiments, it would be better if more pretraining tasks and datasets be included in the evaluations.\n - Add more literature review on NAS. Given that there is a focus on joint search for architecture and model weights with zero-cost proxy in this paper, the literature review should be extended beyond a handful of papers currently listed. What is the main difference between the proposed DeRy and the traditional NAS problem?\n\n- Although the staged solution largely eliminates the combinational search for a search space, however, the authors pose strong heuristics in the model partition stage. The blocks with high ``functional similarity'' are grouped and the partition should maximize the overall group-ability. Why the ``functional similarity'' introduced in this paper gives rise to swap-ability? \n\n- Typos:\n1. Line 170: a path graphs -> a path graph\n2. Line 179 and Eq (3): What does N_g stand for? Is it the number of equivalence sets? Should it be the same as K?\n3. Line 207: process -> possess\n4. Line 229: architecture -> architectures\n\n Yes, limitations have been discussed. I do not find the potential negative societal impact of this work.", " This paper presents a novel knowledge-transfer task, termed as Deep Model Reassembly (DeRy), for general-purpose model reuse. Specifically, given a collection of heterogeneous models pre-trained from distinct sources and with diverse architectures, authors first dissect each model into distinctive building blocks, and then selectively reassemble the derived blocks to produce customized networks under both the hardware resource and performance constraints. Experiments on ImageNet dataset show the promising performance of the proposed method under different settings. Strengths:\n\n* The paper is very well written and easy to follow.\n\n* The problem of model reassembly is interesting with many practical use-cases.\n\n* The proposed method is novel and the two step procedure makes sense.\n\n* Empirically demonstrated that the proposed method obtains competitive performance on ImageNet dataset.\n\nWeaknesses:\n\nOverall, I liked the problem definition and proposed solution to address the interesting model reassembly task. However, I would like the authors to address my following concerns/questions to further improve the quality of the work.\n\n* Why the proposed approach focuses on fusing heterogeneous models, can it be applied to homogenous models? If yes, how does that compare to the existing model fusion methods that work only for fusing homogenous models, e.g., [54, 14, 55, 59]. I would suggest the authors to include experiments for this to strengthen the paper.\n\n* How does the size of model zoo effect the performance of the proposed method? In particular, experiments and discussion along the direction of scalability of the proposed method wrt to number of models in the model zoo would be interesting.\n\n* Can model reassembly help in reducing the training cost of the large models? A discussion on potential applications of model reassembly besides knowledge transfer would be good for demonstrating a broader impact of the work.\n\n* Authors mention that \"We manually identify the atomic node to satisfy our line graph assumption. Each network is therefore a line graph composed of atomic nodes.\" This is not clear from the current descriptions. Can authors explain this?\n\n* What is the effect of partition number on the final performance? 
How does it increase the complexity of the method?\n\n* How is the proposed method comparable to Model Fusion of Heterogeneous Neural Networks via Cross-Layer Alignment? A comparison and discussion should be included in the paper to verify the advantage of the method over similar methods.\n\n* Can the proposed approach DeRy be applied to other computer vision tasks beyond image classification? Address weaknesses. I would like the authors to address my questions regarding experiments and comparisons as described above. Given that DeRy combines different models, what about the bias of the resulting model? Is there any way to control the biases of the combined models so that they won't be reflected on the final model? While this is beyond the scope of the current paper, a discussion on limitations wrt to biases would be interesting.", " The paper proposes to reuse the building blocks of pretrained neural networks for new tasks by resembling them under given computation constraints. The proposed approach, DeRy, first learns to partition the layers of the base networks jointly into equivalent sets via a cover set optimization, then selects and stitchs the optimal blocks into a new network by solving an integer programming problem. DeRy is evaluated on ImageNet1K and 9 other transfer-learning benchmarks via linear probing and finetuning. Strengths\n\ni) the paper introduces a new task, deep model reassembly. The task is ambitious!\n\nii) it is interesting to learn that the task, when defined under specific assumptions, has a feasible solution that works to a certain extent.\n\niii) the writing is clear in general\n\n\nWeaknesses\n\nThe formulation is not as general as I would expect. I have concerns about the practical usefulness of the method and whether it could scale.\n\ni) The model space is small. DeRy assumes the base model forms a path graph (L170-171), which excludes architectures with heavy skip connections, e.g. UNets. The granularity of the partition is small, i.e. K = 4. The number of complexity levels, i.e. 5 param/gflop levels, is also small, which excludes efficient model architectures (e.g. < 1 gflops) that are of great interest to the community. \n\nii) Performance: on ImageNet1k, DeRy shows no advantage. On the transfer-learning datasets, DeRy shows marginal advantages with further pretraining on ImageNet1k. From L311-312, the hyperparameters for DeRy were tuned for each model-task combination, I wonder if the hyperparameters for the base models were also tuned in this way?\n\niii) Overhead. I would guess building the s(*,*) table involves computation of R * (N * N * K * K)/20 epochs (from L237, 1/20 training samples are used). Even with a small granularity (N=28, K=4, R=200), the computation looks huge?\n\niv) Insights: from Fig.1 in the suppl. , the partition tends to group blocks of the same stage id, with the current K (i.e. K=4), it seems the partition algorithm obtains a trivial solution? If this is the case, do we need the partition step at all?\n\n\n\nMinor issues\n\ni) Typos, L73 trasnfer → transfer L110 (k) → (l), L170 graphs → graph, L248 ImagNet → ImageNet, L308 testure –> texture, L89 (suppl): #praram → #param\nii) L121 “It is clear that no single model universally dominants in transfer evaluations. It builds up our primary motivation to reassemble trained models rather than trust the “best” candidate. ” Arguably, DeRy does not provide a universally dominant model either.\n i) What is used as the s(*,*) metric. 
From the paper, I would guess it is linear CKA [33]?\n\nii) The overhead (complexity) of building the s(*,*) index, partition, and stitching. Table 2 shows the GPU time. I wonder if it includes building the s(*,*) index.\n\niii) I wonder if the s(*,*) table needs to be re-built for each dataset? If not, I wonder what is the main dataset used for computing the s(*,*) table (e.g. the GPU time in Table 2 is for what dataset)\n\niv) I wonder if the partition and stitching need to be re-run for each dataset? (e.g. if the DeRy models in Table 7 and Fig 11 are the same).\n\nv) the list of the 28 base models used\n\nvi) It seems Fig. 11 only shows a subset of the models (less than 28 models). I wonder what are the models shown in Fig. 11.\n\nvii) the detailed architectures of the obtained DeRy models\t\n\t\t\t The limitations and the potential negative societal impact are NOT discussed in the submission." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "mxPVFQ22pM", "szRgeO3AtEF", "VdiJ8ooV10j", "EYsFvLP2T6Y", "6Uw9xMIQA-g", "1o4P2MmYS8a", "kZjksFZKLP2", "CwuGsPiWqjA", "CwuGsPiWqjA", "CwuGsPiWqjA", "XjfgzI9s2s8", "XjfgzI9s2s8", "dwlyT1q7-9k", "nips_2022_gtCPWaY5bNh", "nips_2022_gtCPWaY5bNh", "nips_2022_gtCPWaY5bNh" ]
nips_2022_ebuR5LWzkk0
Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks
An off-the-shelf model offered as a commercial service can be stolen by model stealing attacks, posing great threats to the rights of the model owner. Model fingerprinting aims to verify whether a suspect model is stolen from the victim model, and has gained increasing attention in recent years. Previous methods typically leverage transferable adversarial examples as the model fingerprint, which is sensitive to adversarial defense or transfer learning scenarios. To address this issue, we consider the pairwise relationship between samples instead and propose a novel yet simple model stealing detection method based on SAmple Correlation (SAC). Specifically, we present SAC-w, which selects wrongly classified normal samples as model inputs and calculates the mean correlation among their model outputs. To reduce the training time, we further develop SAC-m, which selects CutMix-augmented samples as model inputs, without the need for training surrogate models or generating adversarial examples. Extensive results validate that SAC successfully defends against various model stealing attacks, even including adversarial training or transfer learning, and detects the stolen models with the best performance in terms of AUC across different datasets and model architectures. The code is available at https://github.com/guanjiyang/SAC.
Accept
The reviewers agreed that the proposed method and validation overall are a good contribution. We urge the authors to update their paper to reflect the discussed clarifications, e.g., regarding the threat models in use.
train
[ "SemNRKkXLA-", "MqcMfWeGMx2", "nZXw-maTc0E", "vdr-fHSs4S5", "o8qWSaS0Tym", "925tQrtiS2", "DZq7fLYPBn", "wnByfUKTWT4", "qPmjf9ft3EP", "QPq-D_G_fT", "_eCKDlIUmTn", "zrEL33sttL", "sXRdtMW-5-Y", "kGN31Bvybdd", "NqpsASREIvR" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for your support, the detailed reviews, and the suggestions for improvement!", " Thank you very much for your efforts in addressing these concerns. I maintain my rating and lean to accept this paper.", " Dear reviewer, \n\nThanks again for your thoughtful review. Does our response address your questions? We would appreciate the opportunity to engage further if needed. We also kindly ask you to consider stronger support for the paper if your concerns have been addressed. Thanks!", " Thanks for your support! We will include the clarification and new results.", " The response addresses my questions. I hope the clarification and new results will be included in the paper revision.", " Dear reviewer,\n\nThank you again for your thoughtful review. Does our response address your questions? We would appreciate the opportunity to engage further if needed.", " **Q6.**\nThe proposed method does not work well in the Transfer-A attack (Table 4). It would be great if the paper could provide some explanations on it.\n\n**A6.**\nTransfer learning (A) represents training the source model in all the layers on a new dataset. Compared with transfer learning in the last layers (transfer learning (L)), the feature layers in the transfer learning (A) model have a much larger difference from the source model, which makes it more difficult for the defenders to detect transfer learning (A) than transfer learning (L). Furthermore, because SAC-w and SAC-m use the images from the original datasets as the model's inputs, the transferred model on the new datasets generates a more soft and uncertain output. Thus, the transferred model may output [0.1, 0.1, 0.4, 0.3] and [0.1, 0.1, 0.3, 0.4] on two images belonging to the same class in the source model datasets. Although their output probabilities are similar, indicating they are more likely to belong to the same class, their outputs' hard labels are different, and it may cause SAC to fail under hard label setting with transfer learning (A). We have also explained it in the **revised paper**.\n", " **Q3.**\nIt is unclear why the misclassified samples help to identify the stolen model. How many misclassified samples are necessary for fingerprinting?\n \n **A3.**\nAs we stated around line 57, correctly classified samples hold common knowledge which most models will hold, and this will cause both the irrelevant models and the stealing models to produce similar right output on these samples, which will cause a problem to distinguish the stealing models and the irrelevant models. Furthermore, this phenomenon is also found in adversarial example based fingerprinting methods, such as CAE [6, ICLR2021], and IPGuard [19, ICCCS2021]. In our experiments, we use 472 misclassified samples in SAC-w and 50 normal samples to do data augmentation in SAC-m in our CIFAR10 experiments in Table 1 in our paper. 
Furthermore, to verify SAC's effectiveness with different numbers of samples, we conduct experiments with SAC-w on CIFAR10 (the same setting as Table 1 in our paper), and the results are as follows:\n\n| Samples Used |Finetune-A | Finetune-L| Pruning |Extract-L | Extract-P | Extract-Adv| Finetune-10C |Transfer-A |Transfer-L |\n| ------- | ------- |------- | ------- |------- | ------- |------- | ------- |------- |------- |\n| 472 |1.00 | 1.00| 1.00 |1.00 |1.00 | 1.00| 1.00 |1.00 |1.00 |\n| 300 |1.00 | 1.00| 1.00 |1.00 |1.00 | 1.00| 1.00 |1.00 |1.00|\n| 200 |1.00 | 1.00| 1.00 |1.00 |1.00 | 1.00| 1.00 |1.00 |1.00 |\n| 100 |1.00 | 1.00| 1.00 |1.00 |1.00 | 0.99| 1.00 |1.00 |1.00 |\n| 50 |1.00 | 1.00| 1.00 |0.99 |1.00 | 0.98| 1.00 |1.00 |1.00 |\n| 25 |1.00 | 1.00| 1.00 |0.81 |1.00 | 0.88| 1.00 |1.00 |1.00 |\n\nThe above experiment result demonstrates that our SAC method is robust even under the few-shot setting. And only with 50 samples, SAC can successfully detect different kinds of model stealing attacks.\n\n**Q4.**\nThe threshold d in Equation (4) is critical to the success of model stealing detection. However, it is unclear how to select an appropriate value of d.\n\n **A4.**\nTo evaluate the performance of detecting model stealing attacks, AUC (Area under the ROC Curve) is the most widely-used evaluation metric in this field, which does not rely on the threshold value d.\nFor example, both CAE [6, ICLR2021] and IPGuard [19, ICCCS2021] employ AUC to measure the detection performance of different stealing model detection methods. \nWhen the defenders want to determine the optimal threshold d in some situations, we can use the validation set to find the best threshold. For example, the defenders can use the average of the means of the correlation scores of the irrelevant models and the adversarial extracted models as d on the validation set. We will also leave it for our future work.\n\n**Q5.**\nAs shown in Tables 1 and 2, the existing detection methods can achieve 100% AUC in many attack categories (e.g., fine-tune, pruning). The proposed method only outperforms the existing methods in model extraction attacks.\n\n**A5.**\nThe different model stealing attacks listed in our paper are based on the setting of the former papers CAE [6, ICLR2021], IPGuard [19, ICCCS2021], and EWE [4, USENIX2021]. Thus, there are some model stealing attacks which can be easily detected by all the model IP protection methods, such as fine-tuning. Our SAC method outperforms former model IP protection methods such as CAE [6, ICLR2021], IPGuard [19, ICCCS2021], and EWE [4, USENIX2021], on both different kinds of model extraction attacks and transfer learning attacks. All the three compared methods CAE, IPGuard, and EWE fail when there is a transfer learning attack because of the label space change after transfer learning. Furthermore, because the defenders do not know exactly how the attackers steal their models, such as transfer learning or model extraction, the attacker can evade the defender's detection with these methods, causing a great threat. Our method, on the contrary, can detect all these model stealing attacks with a high AUC. Furthermore, our method does not need to be involved in the training process like EWE, and thus our method will not harm the source model's accuracy. \n", " Thank you for the detailed reviews, as well as the suggestions for improvement. And we hope to resolve some of your concerns in the following comments.\n\n**Q1.**\nThe threat model is not well-defined in the paper. 
What are the attacker's capabilities in each category of stealing attacks? Does the attacker have access to training data and model architectures?\n\n**A1.**\nWe share the same setting as most fingerprinting methods, e.g. CAE [6, ICLR2021]. The attacker and the defender hold different datasets, the attacker's dataset, and the defender's dataset. In our experiments, we divide the training set of CIFAR10, CIFAR100, FashionMNIST, and Tiny-ImageNet into two equal pieces as the attacker's dataset and the defender's dataset. Furthermore, we list five model stealing attacks in our paper, fine-tuning, pruning, transfer learning, model extraction, and adversarial model extraction. Attackers for fine-tuning, pruning, transfer learning have access to their own datasets (the attacker's dataset) and the source model (including the model architecture and the inner parameters). On the other hand, attackers for model extraction and adversarial model extraction do not have access to the source model inner parameters or the source model architecture. They only use their own unlabeled dataset (the attacker's dataset without labels) and the output from the source model to train their own models.\n\n \n**Q2.**\nThe implementation of the model stealing attacks is unclear. What is the threat model in each attack category?\n\n**A2.**\nIn line 208 to line 235 in this paper, we list all the implementation details about model stealing attacks. In our paper, all the IP protection methods are conducted experiments based on the same attacker's setting and try to detect these model stealing attacks. The different model stealing attacks listed in our paper are based on the setting of the former papers CAE [6, ICLR2021], IPGuard [19, ICCCS2021], and EWE [4, USENIX2021]. We list five model stealing attacks in our paper and show their implementation below:\n* **Fine-tuning:** The attackers can have access to the source model (including the source model's inner parameters) and try to avoid the model owner's detection by fine-tuning the model on their own datasets, the attacker's dataset. In our experiments, we assume the attacker fine-tunes the source model for 30 epochs with SGD optimizer with lr=5e-4 in either all the layers (Finetune-A) or the last layer (Finetune-L).\n* **Pruning:** The attackers can have access to the source model (including the source model's inner parameters) and try to avoid the model owner's detection by pruning the model on their own datasets, the attacker's dataset. The attackers in our experiments use Fine Pruning to avoid detection. Fine Pruning prunes the neurons in the order according to their activation and fine-tunes the model each time after pruning 50 neurons to maintain the model's accuracy.\n* **Transfer Learning:** The attackers can have access to the source model (including the source model's inner parameters) and a new dataset to which they want to transfer the source model. Then they train the well-trained source model to other datasets, e.g. CIFAR10 model to CIFAR10C (we choose snow as the corruption) or CIFAR100. We assume that the attacker retrains the source model on the new dataset with all images from the attacker's dataset for 30 Epoches with lr=5e-4.\n* **Model Extraction and Adversarial Model Extraction:** There are three common model extraction methods listed in our paper, label-based model extraction, probability-based model extraction, and adversarial model extraction. 
During model extraction, the attacker can not have access to the source model's inner parameters, and what they can have access to is the unlabeled dataset (attacker's dataset, but without labels) and the source models' output on these unlabeled data. Label-based model extraction can only have access to the source model's output labels and use them with their unlabeled data to train their models. Probability-based model extraction can have access to the source model's output probability and use Equation 9 in our paper to train their own models. Furthermore, based on label-based extracted models, the attacker can further avoid adversarial-example based model fingerprinting by adversarial training on the extracted models, with Equation 6 in our paper. Various model architectures are verified for the model extraction methods, including VGG, ResNet, DenseNet, and MobileNet to test different model IP protection models' effectiveness and robustness.", " Thank you for the detailed reviews, as well as the suggestions for improvement. And we hope to resolve some of your concerns in the following comments.\n\n**Q1.**\nSAC does not seem to work under transfer learning (A) in the hard label setting (Table 4). Is there a specific reason for why SAC fails under hard label setting with transfer learning?\n\n**A1.**\nTransfer learning (A) represents training the source model in all the layers on a new dataset. Compared with transfer learning in the last layers (transfer learning (L)), the feature layers in the transfer learning (A) model have a much larger difference from the source model, which makes it more difficult for the defenders to detect transfer learning (A) than transfer learning (L). Furthermore, because SAC-w and SAC-m use the images from the original datasets as the model's inputs, the transferred model on the new datasets generates a more soft and uncertain output. Thus, the transferred model may output [0.1, 0.1, 0.4, 0.3] and [0.1, 0.1, 0.3, 0.4] on two images belonging to the same class in the source model datasets. Although their output probabilities are similar, indicating they are more likely to belong to the same class, their outputs' hard labels are different, and it may cause SAC to fail under hard label setting with transfer learning (A). We have also explained it in the **revised paper**.\n\n**Q2.**\nThere is a recent work that proposes a similar solution of measuring pairwise responses to fingerprint models: https://arxiv.org/pdf/2106.12478.pdf. It would be good to cite this paper and point out differences compared to the proposed approach.\n\n**A2.**\nThanks for the advice and we have cited and compared this paper in the **revised paper**. This paper [C] proposes to generate the image pairs using the matching of the representation layers. Then it calculates the ratio of image pairs to be in the same class in the suspect model as a measurement to judge whether it transfers from the teacher model. It performs well against transferring attacks under different settings. Different from Teacher Model Fingerprinting, our method does not use the paired data (two images with similar features), and we use dozens or hundreds of samples, which belong to different labels, to form the correlation matrix. Our samples do not need to have similar features in the representation layers and thus, we do not need to know which part of the model is frozen and reused by the transferred model. 
Furthermore, we use the misclassified samples or data augmented samples as the model input and our framework can be applied more generally against different model stealing attacks, such as model extraction, pruning, adversarial training, fine-tuning, and transfer learning.\n\n[C] Chen et al. Teacher Model Fingerprinting Attacks Against Transfer Learning. USENIX 2022.", " **Q2.** Comparisons with some related works are missing. This paper claims that the proposed method outperforms previous methods. However, it lacks comparisons with two related works [A,B]. Therefore, it is hard to say if the proposed method achieves SOTA performance.\n\n[A] Li et al. Defending against model stealing via verifying embedded external features. AAAI 2022.\n\n[B] Chen et al. Copy, right? a testing framework for copyright protection of deep learning models. IEEE S&P 2022.\n\n**A2.** Thanks for the advice and we have added the discussions about both papers in the **revised paper**. VEF [A] proposes to detect the stolen models based on their gradients on the specific style-transfer samples with an MLP classifier. VEF, similar to watermarking, needs to involve in the training process, and it needs the white box access to the suspect model, including the suspect model's architecture and its gradients on the specific style-transfer samples. Although our experiments are based on a black-box setting, VEF can only be conducted in a white box setting, and thus we do VEF experiments in a white box setting. In general, there are two settings for the model IP protection, the white-box setting and the black-box setting. In the black-box setting, the defender can only get access to the suspect models' output, which can be applied more easily and widely. Furthermore, DeepJudge [B] proposes to use the robustness distance (RobD) and the neuron output distance to fingerprint the source model. And, in DeepJudge, there are two settings, the black-box setting, and the white-box setting. We conduct experiments using DeepJudge in a black-box setting. According to the paper, in the black-box setting of RobD, same as IPGuard [19, ICCCS2021], DeepJudge generates the adversarial examples and detects the stolen models using adversarial examples' attack success rate. Under the same setting, we reuse our experiment results of IPGuard for the black-box setting DeepJudge. Then we show the results of our method and these compared methods as follows:\n\n| IP protection |Finetune-A | Finetune-L| Pruning |Extract-L | Extract-P | Extract-Adv| Transfer-A |Transfer-L |\n| ------- | ------- |------- | ------- |------- | ------- |------- | ------- |------- |\n| VEF [A] |1.0 | 1.0| -- |0.86 | 0.68 | 0.86| x |x |\n| DeepJudge(RobD) [B] |1.0 |1.0| 1.0 |0.81 | 0.80 | 0.52| x |x |\n| SAC-w |1.0 |1.0| 1.0 |1.0 | 1.0 | 1.0 |1.0 | 1.0 |\n| SAC-m |1.0 |1.0| 1.0 |0.99 | 1.0 | 0.92 |1.0 | 1.0 |\n\nwhere x represents the model IP protection methods can not detect this kind of model stealing attacks, and -- represents because of the lack of part of the watermark injecting code on Github, we did not do this experiment (Only part of the code is not released, our experiments of VEF are based on the official code from Github).\n\nAll the experiment results mentioned above are based on CIFAR10 with the same setting as our experiments in Table 1. 
\nThe only difference is, in VEF [A], we use WideResNet as the source and the stolen models' architectures to directly use the well-trained source and classification models in the source code on Github, where the source model is based on WRN28 and the extracted models are based on WRN16. Both of our methods---SAC-w and SAC-m---only need the output (black-box) access to the victim model and do not need to be involved in the training process, to achieve the SOTA performance.\n\n**Q3.**\nTable 3 is hard to read. The title of Table 3 mentions \"accuracies under different source models\", but it seems there is only one source model. Therefore, this method's generalizability to different source models is also not clear.\n\n**A3.**\nThanks for your advice and we have clarified the unclear statement of the name of Table 3 in the **revised paper**. Besides, we have considered different source model situations in our supplementary materials (Table 6 in Appendix). We considered both VGG and ResNet as the source model, and the experiments demonstrated that our sample correlation based fingerprinting method (SAC) performs well among different source model architectures or datasets.", " Thank you for the detailed reviews, as well as the suggestions for improvement. And we hope to resolve some of your concerns in the following comments.\n\n**Q1.** This paper does not provide solid explanations about why the proposed method works. More discussion or analysis would make the intuition behind the proposed method more convincing.\n\n**A1.**\nIntuitively, samples with similar outputs in the source model are more likely to have similar outputs in the stolen models. Sample correlation (SAC), as a matrix to calculate the correlation of the model's outputs on the specific input samples, can well depict the model's behavior on these samples, and thus can be a unique characteristic of the model. In particular, we can employ the correlation difference between the source model and the suspect model as the indicator to detect the stolen model. The experiment also validates that sample correlation can be well preserved under different model stealing attacks. Although SAC has high effectiveness and good performance, the performance of SAC is still affected by the common knowledge shared by the models on the same task. In other words, the outputs for different models, including the source model, the irrelevant models, and the stolen models on the correctly classified samples are similar, and this will affect the defender to identify the stolen models. Thus, to get rid of the influence of the common knowledge, we propose to use the wrongly classified samples as the model input and calculate the samples' correlation. To avoid the attacker escaping our detection with adversarial training or adversarial extraction, we choose to use wrongly classified normal samples or data augmented samples (CutMix augmented samples) as the model input. Without the common knowledge shared by most models, SAC can be an effective indicator, and SAC-w and SAC-m outperform former model IP protection methods, including IPGuard [19, ICCCS2021], CAE [6, ICLR2021], and EWE [4, USENIX2021].", " This paper proposes a method to defend against model stealing attacks. This\nmethod is based on the mean correlation among the selected samples. It provides\ntwo ways to select samples: 1. Finding wrongly classified normal samples. 2.\nSelecting mixed samples via CutMix. 
Experiments on four datasets demonstrate the effectiveness of the proposed method.\n #Strengths\n\n* Interesting topic and good motivation.\n\n* SAC-m has high efficiency.\n\n#Weaknesses\n\n* It lacks solid explanations about why the proposed method works.\n\n* Generalizability to different source models is not clear.\n\n* Comparisons with some related works are missing.\n\n#Detailed comments\n\n* This paper does not provide solid explanations about why the proposed method\nworks. More discussion or analysis would make the intuition behind the proposed\nmethod more convincing.\n\n* Comparisons with some related works are missing. This paper claims that the\nproposed method outperforms previous methods. However, it lacks comparisons with\ntwo related works [1,2]. Therefore, it is hard to say if the proposed method\nachieves SOTA performance.\n\n* Table 3 is hard to read. The title of Table 3 mentions \"accuracies under\ndifferent source models\", but it seems there is only one source model.\nTherefore, this method's generalizability to different source models is also not\nclear.\n\n[1] Li et al. Defending against model stealing via verifying embedded external features. AAAI 2022.\n\n[2] Chen et al. Copy, right? a testing framework for copyright protection of deep learning models. IEEE S&P 2022.\n * Could you provide more discussion or analysis about why the proposed method\nworks? This paper has discussed the limitations and potential negative impacts.\n", " Model fingerprinting allows a model owner to claim ownership of a stolen model. Prior works on fingerprinting typically use transferable adversarial examples to perform fingerprinting. Such techniques have two key shortcomings: 1. They don’t work in the presence of defenses like adversarial training 2. They cannot be used when the stolen model is used for transfer learning as the output label space is different from the original model. To solve these issues, the authors propose to use pair-wise correlation of the model’s output of wrongly classified (SAC-w) or mixed (SAC-m) samples to perform fingerprinting. Using wrongly classified/mixed inputs (instead of adversarial examples) allows the technique to be used in the presence of defenses like adversarial training. Using pair-wise relationships (instead of point-wise predictions) allows fingerprinting to be performed when the stolen model is used for transfer learning. Evaluations show the the proposed technique can detect stolen models with high accuracy and can outperform prior works. ## Strengths\n\n1. The paper proposes a technique to perform fingerprinting in the presence of transfer learning, which is a new and interesting sub-problem in model fingerprinting that has received limited attention from prior works.\n2. In addition to the soft-label setting, the paper proposes a method to convert hard labels to soft labels, which enables fingerprinting when only the hard labels are available.\n3. The proposed fingerprinting techniques have been evaluated against several categories of stealing attacks: fine-tuning, pruning, transfer learning, model extraction and adversarial model extraction. The proposed techniques shows high detection performance for fingerprinting and outperforms prior works. \n4. The paper is well-written and easy to follow.\n\n## Weakness\n\n1. SAC does not seem to work under transfer learning (A) in the hard label setting (Table 4). 1. Is there a specific reason for why SAC fails under hard label setting with transfer learning? \n2. 
There is a recent work that proposes a similar solution of measuring pairwise responses to fingerprint models: https://arxiv.org/pdf/2106.12478.pdf. It would be good to cite this paper and point out differences compared to the proposed approach. The authors have addressed the limitations and societal impact of their work.", " This paper proposes a model stealing detection method based on sample correlation. The proposed method calculates the correlation among the model outputs for the misclassified samples. The CutMix approach is used to generate more effective sample inputs. The experiments show that the proposed method outperforms prior methods in different attack scenarios, such as fine-tuning, transfer learning, and adversarial training. Strengths:\n\n1. The paper considers many realistic attack scenarios in model stealing attacks, such as fine-tuning, pruning, adversarial training, and transfer learning, which have not been widely explored. \n\n2. It is nice to see that the protection method considers the label-only cases.\n\n3. Leveraging CutMix to augment data for fingerprinting is novel. \n\n\nWeaknesses:\n\n1. The threat model is not well-defined in the paper. What are the attacker’s capabilities in each category of stealing attacks? Does the attacker have access to training data and model architectures?\n\n2. The implementation of the model stealing attacks is unclear. \n\n3. It is unclear why the misclassified samples help to identify the stolen model. How many misclassified samples are necessary for fingerprinting? \n\n4. The threshold d in Equation (4) is critical to the success of model stealing detection. However, it is unclear how to select an appropriate value of d.\n\n5. As shown in Tables 1 and 2, the existing detection methods can achieve 100% AUC in many attack categories (e.g., fine-tune, pruning). The proposed method only outperforms the existing methods in model extraction attacks.\n\n6. The proposed method does not work well in the Transfer-A attack (Table 4). It would be great if the paper could provide some explanations on it.\n 1. What is the threat model in each attack category?\n\n2. Why can the misclassified samples help to identify the stolen model? The authors have discussed potential societal impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "MqcMfWeGMx2", "_eCKDlIUmTn", "sXRdtMW-5-Y", "o8qWSaS0Tym", "925tQrtiS2", "NqpsASREIvR", "wnByfUKTWT4", "qPmjf9ft3EP", "NqpsASREIvR", "kGN31Bvybdd", "zrEL33sttL", "sXRdtMW-5-Y", "nips_2022_ebuR5LWzkk0", "nips_2022_ebuR5LWzkk0", "nips_2022_ebuR5LWzkk0" ]
nips_2022_xL8sFkkAkw
Towards Theoretically Inspired Neural Initialization Optimization
Automated machine learning has been widely explored to reduce human efforts in designing neural architectures and looking for proper hyperparameters. In the domain of neural initialization, however, similar automated techniques have rarely been studied. Most existing initialization methods are handcrafted and highly dependent on specific architectures. In this paper, we propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network. Specifically, GradCosine is the cosine similarity of sample-wise gradients with respect to the initialized parameters. By analyzing the sample-wise optimization landscape, we show that both the training and test performance of a network can be improved by maximizing GradCosine under a gradient norm constraint. Based on this observation, we further propose the neural initialization optimization (NIO) algorithm. Generalized from the sample-wise analysis into the real batch setting, NIO is able to automatically look for a better initialization with negligible cost compared with the training time. With NIO, we improve the classification performance of a variety of neural architectures on CIFAR-10, CIFAR-100, and ImageNet. Moreover, we find that our method can even help to train a large vision Transformer architecture without warmup.
Accept
The paper introduces a new procedure to initialize the optimization in the training process of DNN models, including the recent ViT architecture. All the reviewers recommend acceptance and appreciate the promising empirical results backed by strong theoretical foundations. The AC recommends acceptance as well.
test
[ "ZGxmr3h_hns", "6rfdopQ8um4", "Lk1BmkA5QMc", "mIrezaM4l_I", "KsM-p-pn3_t", "5sWj80CTSh", "9mHbJHbrK-y", "2gd-f-2IYgh", "AgKuBcv1BFf", "fMDsKy4Jfc", "Ns4bRDT3ySS", "PSlLuTudKeb", "Om0dU8txmM-", "hHD-oBKDM5_", "a8HrRb-465T", "Txm4DVqI58A", "I_Zls_qdBc6", "URG-X4GXfRX" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for appreciating our work. We are sorry that there are only a few hours left before the discussion deadline. We will be in a hurry for the revision version because we also need to consider how to fit in 9 pages after the revision and adding more results and discussions. But we would like to summarize the revisions that remain to be done as follows:\n\n(1) We will add the standard deviations for the results in Tables 4 and 5. We will also include the results of GradInit on Swin Transformer with and without warmup in Table 5. The sentence that \"These results indicate that NIO is able to produce a better initialization that benefits model performance agnostic of architecture and dataset\" (Lines 258-259) will be removed into the end of Sec. 6.2.\n \n(2)\tAs responded to Reviewer JX6Q, we may include the simple experiment of calculating the Kendall score to show that our Eq. (2) is more preferable to evaluate initialization quality. \n\n(3)\tAs responded to Reiviewer ugxF, we will make clearer definitions for “optimization path” and “optimization path consistency”. We will also include the discussions about the rationality and necessity of the approximation in Eq. (3). \n\n(4)\tAs suggested by Reviewer ugxF, we will rephrase the sentence when describing MetaInit and GradInit in Line 28 to avoid misunderstanding. We will also add more explanations for Figure 1 to avoid confusion. \n\n(5)\tAs suggested by Reviewer u98c, we think it is better to itemize the novelties over GradSign in the Related Work section. We will also include the ablation studies on the hyper-parameter $\\\\gamma$ and the number of iterations, and compare with GradInit when both are performed for the same iterations. \n\n(6)\tAll typos will be fixed. \n\nOther constructive suggestions and discussions will also be considered by us. \n", " Thank you for the clarifications and extra evaluation. If possible, I would also appreciate taking a look at a revised version, if you can push it before the revision deadline. However, this is a weak request and will fully understand if authors do not find time for that..", " We are glad to know that our response has addressed your concerns. We would like to thank you for appreciating our work and the constructive suggestions. We will include the discussions and results in our revised paper according to your suggestions. ", " We are glad to know that our response has addressed most of your concerns. We would like to thank you for appreciating our work and the constructive suggestions. \n\nBecause sample-wise optima are intractable, we rely on the approximation in Eq. (3) to construct a tractable optimization objective. If we use Eq. (3) to optimize for $\\\\theta^*_i$, the bias may cause an inaccurate solution. But our aim is to look for an initialization. Eq. (3) approximates $\\\\theta_i^*-\\\\theta_0$ as $-\\\\eta g_i$, so turns the objective from optimization path consistency into gradient consistency. It does not deviate from our goal for an initialization. Besides, prior studies also indicate that low gradient variance at initialization is favorable. \n\nWe will discuss it and revise our paper according to your suggestions. ", " Thanks for your response.\n\nThe explanation addresses most of my concerns. I would like to raise the score to 6. 
I hope the authors could clarify the definition of optimization path and the definition of consistency in the next version.\n\nNevertheless, I am still a little worried about the first-order approximation in Eq.3, though it is a commonly-used technique in bi-level optimization methods. While, such an approximation can be biased for the initialized weights. I hope the authors could further discuss it in the next version (or list it in the limitation).", " Thanks for your reply. \n\nSample-wise optimization refers to training the model only on each single sample. So, the optimization path for sample $i$ can be characterized by $\\\\theta^*_i-\\\\theta_0$, where $\\\\theta_0$ is the initialized point, and $\\\\theta^*_i$ is the converged optimum of only training on sample $i$. Because all sample-wise optimization paths have the same starting point $\\\\theta_0$, we use the averaged Cosine similarity to measure their consistency. The consistency of sample-wise optimization path can be formulated as:\n$\\\\frac{1}{n^2}\\\\sum_\\{ij\\}\\\\cos\\\\angle(\\\\theta^*_i-\\\\theta_0, \\\\theta^*_j-\\\\theta_0)$. So, if the angle between the paths from initialization $\\\\theta_0$ to each local optimum $\\\\theta^*_i$ is small, we will have more consistent sample-wise optimizaiton paths. It is approximated by our GradCosine in Line 6 in Algorithm1. \n\nIt is shown that our proposed Eq. (2) reflects the optimization path consistency and is a function of initialization. Our aim is to look for a $\\theta_0$ that minimizes Eq. (2). \n\n\nPlease let us know if anything still confuses you. We will make these definitions clearer in the revised paper. \n", " Thanks for your response.\n\nI am still confused about the claim that smaller angle between initialization and optima indicates more consistent sample-wise optimization paths. I wonder what is the definition of the consistency of sample-wise optimization path and could you express the relationship between it and the angle in mathematics? Or could you intuitively explain the relationship?\n\nOverall, the rebuttal resolves most of my concerns, except the above one. I tend to maintain my original score and would like to raise the score if the authors could address my confusion.", " Thanks for the responses to my concerns. The authors mostly clarified my concerns on novelty, effect of hyper-parameters, number of runs to report results, and results with aligned number of iterations. I am fine to increase score and tend to accept, and suggest that the revised paper should include these discussions and new results.", " Dear Reviewer,\n\nThanks for your valuable comments. \n\n+ W1: Compare NIO with baseline (GradInit) on Swin Transformer without warmup. \n\nGradInit tests on a language Transformer instead of vison Transformer in their paper, so we did not compare with GradInit in Table 5. We adopt their initialization settings for ResNet-50 on ImageNet to perform GradInit on SwinTransformer and then train the model with and without warmup. The results are shown as follows:\n\n| | Kaiming | TruncNormal | GradInit | NIO (ours) | \n| --- | --- | --- | --- | --- |\n| w/ warmup | 79.4 | 81.3 | 80.4 | 81.3 | \n| w/o warmup| fail | fail | 79.9 | 80.9 |\n\nWe observe that GradInit also successfully trains SwinTransformer without warmup, but has a lower performance than ours. We will add this result in our revised version.\n\n+ W2: Table 4 does not report standard deviations like prior results. 
\n\nUsually, experiments on small datasets such as CIFAR are sensitive to randomness and have fluctuating results. Repeated experiments on ImageNet classification usually have close performances with a small deviation. So, we did not report the standard deviations in Table 4. In order to relieve your concern, we report the means and standard deviations of GradInit and NIO results in Table 4 as follows:\n\n| Method | Accuracy (%)|\n| --- | --- |\n| GradInit | 76.50$\\\\pm$ 0.05|\n| NIO | 76.71$\\\\pm$ 0.07|\n\n+ W3: Typos and writing suggestions\n\nThanks for rectifying us. We will correct the typos and modify our paper according to your suggestions. \n\n+ Q1: Memory consumption of computing GradCosine\n\nWhen the input image/sentence size is large, the memory consumption of computing GradCosine is a problem. Our simple solution is to adopt a small batch size when performing NIO. The GradCosine quantity can be optimized easily. It usually gets saturated (>0.9) after tens of iterations. So, we can use a small batch size and train for more iterations. It nearly does not harm the initialization quality and the final performance. \n\nThanks for your guidance and for reminding us of the references [1,2] that could potentially improve our method. Using the low-rankness of one-sample gradients w.r.t. weight matrices to reduce memory overhead is interesting. If we can efficiently get the decomposed matrices, the memory overhead will be reduced greatly, and the pair-wise inner products of gradients can also be computed efficiently. We think there are some issues that need to be considered. \n\nWe notice that [1] adopts decomposed matrices as a proxy to optimize the editor parameters, and thus reduces the computational and memory cost. However, our method requires the second-order gradient to optimize our objective in the same neural network (instead of another network as adopted in [1]). In this case, it is unclear whether optimizing the similarity between the sample-wise gradients decomposed into (1, in_channels) and (1, out_channels) is still a good approximation. \n\nBesides, it is unclear whether the checkpointing technique (PyTorch implementation of [2], please refer to https://pytorch.org/docs/stable/checkpoint.html) can be easily introduced into our optimization algorithm, since the second-order gradient requires the storage of both gradients and activations and has more complicated dependencies, leading to a complex topological structure in which it is hard to free memory on the fly. \n\nAs future work, we will think carefully about how to improve our method from both the theoretical (e.g. [1]) and systematic (e.g. [2]) perspectives. \n\n----\nReferences\n\n[1] Fast Model Editing at Scale, Mitchell et al., ICLR 2022.\n\n[2] Training Deep Nets with Sublinear Memory Cost, Chen et al., arXiv:1604.06174v2.", " + W3: Mean and variance of the performance should be more insightful to compare the different methods. \n\nWe indeed run the same task multiple times with different seeds. Each model is trained four times, as described in Line 250 in our paper. We report the means and standard deviations of results on CIFAR-10 and CIFAR-100; see Tables 1 and 2 in our paper. For experiments on ImageNet, the randomness is not as notable as that on CIFAR. Multiple experiments have close performances. So, we did not report the standard deviation in Tables 4 and 5. 
In order to relieve your concern, we report the means and standard deviations of GradInit and NIO results in Table 4 as follows:\n\n| Method | Accuracy (%)|\n| --- | --- |\n| GradInit | 76.50$\\\\pm$ 0.05|\n| NIO | 76.71$\\\\pm$ 0.07|\n\nThe means and standard deviations of Swin Transformer results in Table 5 are shown as follows:\n\n| | Kaiming | TruncNormal | NIO (ours) | \n| --- | --- | --- | --- |\n| w/ warmup | 79.44$\\\\pm$0.03 | 81.28$\\\\pm$0.03 | 81.30$\\\\pm$0.05| \n| w/o warmup| fail | fail | 80.91$\\\\pm$0.06|\n\n\n\n+ W4: How about the comparisons if all methods are aligned in number of running iterations for initialization?\n\n\nIn Table 4, our method is performed for 100 iterations, while GradInit requires 2000 iterations as suggested by their paper. We rerun our method for 2000 iterations, and GradInit for 100 iterations. The results of performance and speed are shown as follows:\n\n| Init. Time | 100-iter | 2000-iter |\n|---|---|---|\n| GradInit | 0.01 | 0.21 |\n| NIO | 0.03 | 0.6 |\n\n| Top-1 Acc. | 100-iter | 2000-iter |\n|---|---|---|\n| GradInit | 75.96 | 76.50 |\n| NIO | 76.71 | 76.68 |\n\nIt is shown that our method performs similarly for 100 and 2000 iterations. In contrast, GradInit has decreased performance when run for only 100 iterations. Although our method is slower than GradInit per iteration, we have faster convergence and require fewer iterations for initialization. \n
The other parts of our work, including our GradCosine quantity, the objective of our neural initialization optimization, and the algorithm to solve it, are original and have no overlap with [52]. \n\nSo, we think it is not fair and reasonable to deny our novelty and contribution just because both our work and [52] have a metric that can be the training and generalization error bound. They are studies for different purposes. \n\n+ W2: How is the performance affected by the hyper-parameters, including $\\\\lambda$ and the number of iterations?\n\nOur algorithm introduces three hyperparameters, the number of sub-batches $D$, the overlap ratio $r$, and the upper bound constraint of gradient norm $\\\\gamma$. We do not have a hyperparameter denoted as $\\\\lambda$ in our paper. We think what you refer to is $\\\\gamma$. We have made ablation studies for the hyperparameters $D$ and $r$, see Table 3 in our paper. For $\\\\gamma$, we just follow the choices suggested by GradInit [53] because it plays a similar role in our work and GradInit [53]. It ensures that the gradient norm at initialization for training will not be too large or too small. We list the performances of NIO using ResNet-50 on ImageNet with different $\\\\gamma$ as follows:\n\n| $\\\\gamma$ | 1 | 5 | 10 | 15 | 20 | 30 | \n|---|---|---|---|---|---| --- |\n|NIO| 76.25 | 76.57 | 76.71 | 76.76 | 76.63 |76.68 |\n\nWe run NIO for 100 iterations on ImageNet because we find that more iterations do not bring significant improvement. The effect of iteration is shown as follows:\n\n| iteration | 50 | 75 | 100 | 150 | 200 | 500 |\n|---|---|---|---|---|---|---|\n|NIO | 76.44 | 76.67 | 76.71 | 76.70 | 76.74 | 76.65 |\n\nIt is shown that as long as the hyperparameter $\\\\gamma$ or the number of iterations is within a proper range, the final performance will not deviate too much. \n\n----\nReference\n\n[28] L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, and J. Han. On the variance of the adaptive learning rate and beyond. In ICLR, 2020.\n\n[52] Z. Zhang and Z. Jia. Gradsign: Model performance inference with theoretical insights. In ICLR, 2022.\n\n[53] C. Zhu, R. Ni, Z. Xu, K. Kong, W. R. Huang, and T. Goldstein. Gradinit: Learning to initialize neural networks for stable and efficient training. In NeurIPS, 2021.\n\n[54] J. Zhuang, T. Tang, Y. Ding, S. C. Tatikonda, N. Dvornek, X. Papademetris, and J. Duncan. Adabelief optimizer: Adapting stepsizes by the belief in observed gradients. In NeurIPS, 2020.\n", " Dear Reviewer:\n\nThanks for your valuable comments. \n\n+ W1: Both MetaInit and GradInit adopt gradient descent method. \n\nYes, the objective of MetaInit is also a function of the model gradient. The objective of GradInit requires to perform a step of gradient descent. Their adopted gradient is the full gradient of the whole CE loss. As a comparison, we focus on the similarity of sample-wise gradients, which decompose the whole CE loss into each sample. Our objective has theoretical supports to be related to model performance. We will rephrase the sentence in Line 28 to make the comparison clear and avoid confusion. \n\n+ W2: Sec 3.2 is a little redundant and can be put into the supplementary. \n\n$\\\\Psi$ in Eq. (1) is agnostic of initialization. It is approximated by a non-differentiable quantity (counting gradient signs) in GradSign [1] to rank and search neural architectures. In contrast, our proposed $\\\\Theta$ in Eq. 
(2) reflects the optimization path consistency, which is a differentiable function of initialization and thus can serve our initialization optimization purpose. Actually, we do not compare the tightness between $\\\\Theta$ and $\\\\Psi$. We want to show that both $\\\\Theta$ and $\\\\Psi$ can be upper bounds of the training and generalization error and are related to model performance. But our metric $\\\\Theta$ is suitable for initialization optimization, while $\\\\Psi$ cannot serve this purpose. \n\nWe will follow your suggestion and adjust the organization to avoid confusion. \n\n+ W3: Typo in line 6.\n\nThanks for rectifying us. We will fix the typo. \n\n+ Q1: The initial point with a larger angle is closer to the global optimum. \n\nIn Figure 1 (c) and (d), it seems that an initialization with a smaller cosine similarity (larger angle) is closer to the global optimum of the two samples. But we should note that:\n\n(1)\tThe optimum of a given network is not unique. It is dependent on its initialization. The model with different initialization points will converge to different optimal parameters, even though their performances may be close. So, what we do here is look for an initialization whose sample-wise optimization paths are more consistent (smaller angle), instead of the inverse way -- looking for a point closer to the global optimum of some given sample-wise optima and landscapes, which is impossible for a real network. Larger optimization path consistency induces a smaller gradient variance, whose benefits have been supported by prior studies [2,3]. See more details in Lines 174-184 in our paper. \n\n(2)\tThe illustration in Figure 1 is a toy example with only two samples and simple landscapes. In a real network, the landscapes are highly non-convex in a high dimension, and there is a larger number of training samples. In this case, the global optimum does not necessarily lie in the convex hull. The point with a larger angle to sample-wise optima is not ensured to be closer to the global optimum. \n\nIn conclusion, an initialization with more consistent sample-wise optimization paths is more favorable. What we do is look for such an initialization, instead of the global optimum of given sample-wise landscapes. Figure 1 (c) and (d) are toy examples that are only used to show that the metric Eq. (1) is agnostic of initialization, while ours, Eq. (2), reflects the optimization path consistency and is a function of initialization. We will add more explanation for Figure 1 in our revised version to avoid confusion. \n\n\n+ Q2: Discuss the rationality of the approximation in Eq. (3). \n\t\nNote that in Eq. (3) $\\\\theta^*_i$ is not the global optimum. It is the local optimum for sample $i$. If we optimize a model on only one training sample, it is very easy to finish the training. Only several iterations are needed to attain a zero loss. So, we make the first-order approximation for the sample-wise optimization, i.e., the sample-wise optimum can be reached via only one step of gradient descent. \n\n\n----\nReference\n\n[1] Z. Zhang and Z. Jia. Gradsign: Model performance inference with theoretical insights. In ICLR, 2022.\n\n[2] L. Liu, H. Jiang, P. He, W. Chen, X. Liu, J. Gao, and J. Han. On the variance of the adaptive learning rate and beyond. In ICLR, 2020.\n\n[3] J. Zhuang, T. Tang, Y. Ding, S. C. Tatikonda, N. Dvornek, X. Papademetris, and J. Duncan. Adabelief optimizer: Adapting stepsizes by the belief in observed gradients. 
In NeurIPS, 2020.\n\n", " + Q4: Can we repeatedly use it during training for better performance? \n\nUsing our method during training does not contribute to the final performance. When we start training, the training and generalization abilities are mainly decided by the training loss and optimizer. Our objective is used to look for a better initialization. It just optimizes the starting point. During training, the parameter update direction should come from the signal of a loss function. But our objective of initialization is not a loss function. Besides, out method only tunes the variance scaling ($M$ in our paper line 196) of the parameters. Changing the scaling of parameters in each layer during training will surely deviate the original optimization. ", " Dear Reviewer:\n\nThanks for your valuable comments. \n\n+ W1 and Q1: Can we have experiments to show the proposed metric is better?\n\nThe density of sample-wise local optima $\\\\Psi$ in Eq. (1) is agnostic of initialization. It is approximated by a non-differentiable quantity (counting gradient signs) in GradSign [1] to rank neural architectures without training for neural architecture search purpose. As a comparison, our proposed cosine similarity of optimization paths $\\Theta$ in Eq. (2) reflects the optimization path consistency (cosine similarity between $\\\\theta_i^*-\\\\theta_0$ and $\\\\theta_j^*-\\\\theta_0$). It is a differentiable function of initialization $\\\\theta_0$, and thus can serve for our initialization optimization purpose. Although $\\\\Psi$ in Eq. (1) is also related to model performance as adopted in [1], it is used to evaluate different architectures agnostic of initialization and cannot reflect initialization quality. Our $\\\\Theta$ in Eq. (2) can evaluate a model under different initializations. \n\nIn order to quantitatively compare the metric $\\\\Psi$ in Eq. (1) with our proposed $\\\\Theta$ in Eq. (2), we train ResNet-110 on CIFAR-100 with 10 different initializations $\\\\theta_0^{(1)},\\\\cdots,\\\\theta_0^{(10)}$, so we have 10 trained models $M^{(1)},\\\\cdots,M^{(10)}$, and their accuracies $Acc^{(1)}, \\\\cdots, Acc^{(10)}$. For each model $M^{(i)}$, we select out the wrongly classified training samples, and finetune the model on each of these samples until the sample is correctly predicted. So, we have the sample-wise optimal models $\\\\{M^{(i)}_j\\\\}$, $j=1,…,\\\\mathcal{J}_i$, where $\\\\mathcal{J}_i$ is the number of wrongly classified training samples of model $M^{(i)}$. And then we calculate the quantities $\\\\Psi$ and $\\\\Theta$ using these sample-wise optimal model parameters according to Eq. (1) and Eq. (2) ($\\\\mathcal{H}$ is set as 1), respectively. We get the estimated quantities $\\\\Psi^{(1)},\\\\cdots,\\\\Psi^{(10)}$ and $\\\\Theta^{(1)},\\\\cdots,\\\\Theta^{(10)}$. Finally, we calculate Kendall score $\\\\tau$ between $\\\\{Acc^{(i)}\\\\}$ and $\\\\{\\\\Psi^{(i)}\\\\}$, and between $\\\\{Acc^{(i)}\\\\}$ and $\\\\{\\\\Theta^{(i)}\\\\}$. The score ranges from -1 to 1 and is able to evaluate rank correlation of data pairs. $\\\\tau=1$ when the rankings are identical, and $\\\\tau=-1$ when the rankings are reversed. If the rankings have a low correlation, $\\\\tau$ is near 0. We have their values as follows:\n\n| Metric | Kendall $\\\\tau$ |\n| --- | --- |\n| $\\\\Psi$ (Eq. (1)) | -0.28|\n| $\\\\Theta$ (Eq. 
(2)) | -0.73|\n\nIt reveals that $\\\\{Acc^{(i)}\\\\}$ are significantly inversely correlated with $\\\\{\\\\Theta^{(i)}\\\\}$, while $\\\\{Acc^{(i)}\\\\}$ and $\\\\{\\\\Psi^{(i)}\\\\}$ show a low correlation. So, $\\\\Theta$ is preferable for evaluating a model under different initializations. \n\n+ W2: The citation in Appendix A for the proof of Lemma 1 is wrong.\n\nYes, thanks for rectifying us. We will correct the wrong citation. \n\n+ Q2: Does the initialization framework work for other optimization methods, like sharpness-aware-minimization?\n\nBoth SGD and Adam can be adopted to solve our initialization optimization problem (Eq. (8)). The advanced technique sharpness-aware-minimization is applied on top of a loss function to induce better generalization. However, our objective in Eq. (8) is specified by the gradient cosine and gradient norm quantities. It is not a loss function that measures the error of a model. So, it has no access to the landscape of a loss function, let alone the sharpness. Our goal is to look for a better initialization, while generalization is more affected by the loss function and the optimizer for training. \n\n+ Q3: Can we have the same benefits if we initialize the networks with a different dataset? \n\nThanks for the interesting question. Our method is indeed aware of architecture and dataset. We perform our NIO with CIFAR-100 on a ResNet-110 and train the initialized model (with a new fc classification layer due to different numbers of classes) on CIFAR-10. We observe no significant benefit. The final performance is around the baseline result without NIO. We guess it is because the gradient patterns on different datasets are different. Our objective of initialization is supervised. Its optimization on CIFAR-100 would be invalid when training on CIFAR-10 with a different label space. The exact dependency of our method on the dataset needs more exploration. Future work such as unsupervised learning-based initialization may relieve the dependence on the dataset. \n\nDespite the dependence, we think it does not impede the practical implementation. We only need a small portion of the dataset. The cost of our initialization method is also modest. So, it is not necessary to initialize with a small dataset and train on a large dataset. \n\n----\nReference\n\n[1] Z. Zhang and Z. Jia. Gradsign: Model performance inference with theoretical insights. In ICLR, 2022.\n", " Authors study the problem of finding (learning) the best neural network initialization, and propose GradCosine - a measure of fitness for neural network initialization, based on the similarity between individual sample gradients. This measure can then be optimized using an iterative procedure denoted as NIO (neural initialization optimization). Authors explain how GradCosine relates to model training and show the relation between this quantity and the density of sample-wise local optima ([52] in the paper).\nFrom a theoretical point of view, the paper demonstrates that minimizing GradCosine results in favourable bounds on the training loss. For practical applications, authors offer a sub-batch version of GradCosine that can be computed more efficiently. To the best of my understanding, the theoretical analysis appears sound, based on reasonable assumptions. [That said, I am no expert in the area of learned initialization.] The main experiments look reasonable, though I would recommend making some extra comparisons.\n\n1. In Table 5, you evaluate how NIO trains SWIN transformer without warmup. 
At least one baseline (GradInit[53]) also claims to work in that setting. Perhaps it would be best to compare NIO against that.\n\n2. Table 4 does not report standard deviations, while all prior experiments do. It might be useful to include them - or explain why they are missing.\n\n\nOn an unrelated note, I must applaud authors for supplying a Dockerfile in their supplementary code. Publishing the corresponding docker container will make it easier for future researchers to reproduce this work and build on it, even if the required libraries break compatibilities.\n\n\n### Typos / nitpicking\n\n> L215 making the sub-batches overlapped with each other **stables** the optimization\n\nperhaps a typo? \"stables\" -> \"stabilizes\"\n\n\n> L250 Each model is trained **for** four times with different seeds. \n\nconsider removing \"for\"\n\n\n> NIO is able to produce a better initialization that benefits model performance agnostic of architecture and dataset.\n\n[nit] this is concluded at an early section, where the only evidence is training cnn-only models on CIFAR-10/100. As a weak suggestion, I would recommend making this conclusion later, once you demonstrate NIO performance for transformers and ImageNet.\n\n Note: the question / suggestion below bears little significance. If authors are constrained by response time, please ignore it.\n\nTo the best of my understanding, even batched GradCosine currently requires more GPU memory than the objectives of MetaInit or GradInit (per model parameter). This could be a limitation when applying NIO for large models, e.g. transformer language models.\nI wonder if it is possible to reduce the memory usage algorithmically.\n\nSuppose that most of that memory is contributed by gradients w.r.t. weight matrices in linear layers, whether in convolutions or transformer projections.\n\nFor linear layers, one can observe that __one-sample gradients w.r.t. weight matrices are low-rank__.\nIn the simplest case, an MLP linear layer with batch size 1 will always receive rank-1 gradient - due to the fact that the gradient w.r.t. weight matrix is a product of two matrices of shape (1 x in_features) and (out_features x 1), where both \"1\"s correspond to batch size.\nFor convolution and attention layers, this will similarly result in low-rank gradients.\n\nThis trick is described in more detail in [1], though i believe that it was invented prior to that work.\n\nPut simply, if your gradient w.r.t. weight matrix was computed with batch size 1, you can compute the pairwise products for GradCosine without computing the gradients w.r.t. weight, using only the gradients w.r.t. activations. Furthermore, you could avoid storing activations for all layers by re-computing them on the fly[2].\n\n\n[1] https://arxiv.org/abs/2110.11309\n\n[2] https://arxiv.org/abs/1604.06174v2\n\n\n To the best of my knowledge, authors have sufficiently addressed the limitations of their work.\nAs for the societal impact, this specific paper contains fundamental research in deep learning, thus it is hard to foresee its societal impact.", " The paper first introduces a quantity, the cosine similarity of sample-wise local optima to evaluate the model performance at the initialization. They theoretically proved that their proposed quantity is the upper bound of both the training and generalization error under certain assumptions. Based on this theoretical finding, they approximate the sample-wise optimum with the first-order approximation to make the quantity differentiable and tractable. 
As a result, they simplify the upper bound quantity and achieve the initialization by maximizing the quantity with the gradient-based method. Their empirical results show that they can achieve better performance on various datasets and network structures compared to other initialization methods. Strengths:\nThe paper is well organized and easy to follow. The theoretical analysis seems correct and motivated. Their empirical results are quite good, especially on CIFAR datasets. \n\nWeaknesses:\nIt will be better if we can have some experiments to show that the proposed cosine similarity of sample-wise local optima is useful or better than the previous methods. Since the initialization method is motivated by this quantity, it’s better to make the usefulness of this quantity clear. \n\nThe citation in Appendix A for the proof of Lemma 1 seems wrong. \n 1. Can we have some experiments to show that the proposed cosine similarity of sample-wise local optima is more suitable for evaluating the initialization quality?\n\n2. Does the initialization framework work for multiple optimization methods? For example, like sharpness-aware-minimization?\n\n3. It seems the method is dataset-dependent, will we have the same benefits if we initialize the parameter of the networks with a different dataset? For example, training on cifar-10 but initialized with cifar100 or ImageNet?\n\n4. Does this only work for initialization? Can we repeatedly use it during the training and will that be better?\n The authors fairly addressed the limitations and potential negative societal impact of their work.", " This work proposes a new initialization method for neural network. The authors first use Fig. 1 to illustrate drawbacks of the sample-wise local optima density $\\Phi_{S,l}$ that adopts Manhattan distance between the pair-wise local optima, and lead to a Cosine similarity of sample-wise local optima $\\Theta_{S,l}$. Then the authors introduce to approximate local optimum by one step gradient descent and approximate $\\Theta_{S,l}$ by Eq. 4 (GradCosine). Finally, the initialized weights $\\theta_0$ are obtained by maximizing GradCosine and GradNorm. Extensive experiments are conducted to verify the efficacy of this approach, showing that GradCosine surpasses MetaInit and GradInit on multiple datasets in both accuracy and speed. Strengths:\n1) The writing is logical and fluent.\n2) The experimental results are good.\n3) The motivation is interesting. Nevertheless, I have one question about it, please see the first point in Question.\n\nWeakness (Question):\n1) In Line 28, the authors claim that “these methods merely use the first order training dynamic as the main optimization objective”. However, MetaInit[8] and GradInit[53] both adopts gradient descent method.\n2) It seems that the theoretical analysis in Sec. 3.2 cannot show that $\\Theta_{S,l}$ has a tighter bound than $\\Phi_{S,l}$. Then I think Sec. 3.2 is a little redundant, which can be put into the supplementary.\n3) Typo: ‘GradCoisne’ in Line 6.\n 1. The authors claim that (Line 114-115): The optimization path from the initialization to the local optima of the two samples in Fig. 1(d) are more consistent compared to that in Fig. 1(c). However, I notice that $\\theta_0$ in Fig. 1(c) is closer to $\\theta_1^*, \\theta_2^*$ than $\\theta_0$ in Fig.1 (d), resulting in a larger angle $\\angle(\\theta_1^*-\\theta_0, \\theta_2^*-\\theta_0)$. 
My point is that the angle cannot reflect the quality of optimization path since an initial point that is closer to the optimal point might as a larger angle.\n2. I have one concern on the approximation in Eq. 3. For initialized weights $\\theta_0$, such an approximation seems not reasonable. I hope the authors could discuss the rationality of it.\n The limitations are addressed.", " This paper works on the initialization of neural network parameter based on theoretically inspired optimization algorithm. It is to substitute the previous network initialization algorithm, such as Kaiming's method, [52], etc. The proposed approach is closely related to [52] by introducing the cosine similarity of sample-wise gradient for optimizing the initial network parameter. The proposed approach is applied to Resnet, DenseNet, WideResNet, and transformer. The results show that the proposed initialization method achieved better final network training results than the Kaiming's method, and the other compared learning-based methods. Strength:\n\n1. The paper theoretically dig into the network initialization based on the difference of network parameter initialization and sample-wise local minima, and derive a quantity to approximate the upper-bound of training and test error bounds. The paper further approximate and minimize the bound with constraint on gradient magnitude. The proposed method is reasonble and inspired by theoretical analysis.\n\n2. The proposed method is evaluated on network training and show improved results with the proposed network parameter initialization method.\n\nWeakness:\n\nMy major questions and concerns are on the novelty over [52] and insufficient evaluation details. \n\n1. The theoretical analysis of this work is similar to the analysis of [52], though the conclusion of this paper is different measuring by the cosine similarity of sample-wise gradients. \n\n2. The proposed approach is subject to an optimization problem of (6) adaptive to mini-batch based implementation. For feasibility in implementation, it makes some assumption and introduced the hyper-parameters, e.g., \\lambda. How is the performence affected by the hyper-parameters of the optimization algorithm, including, e.g., \\lambda, and number of iterations, etc ?\n\n3. Due to the randomness, e.g., the mini-batch, the performance should be reported by running a same task in several times, and the mean the variance of the performance should be more insightful to compare the different methods. \n\n4. Because of the introduced additional computational cost in optimization, the approach is reported by running 100 iterations. How about the comparisons (speed and performance) if all the compared methods are aligned in number of running iterations for initialization? Please see my questions on the novelty, experimental justifications, etc., in the weakness. The paper clearly state the limitations of this approach. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "6rfdopQ8um4", "AgKuBcv1BFf", "2gd-f-2IYgh", "KsM-p-pn3_t", "5sWj80CTSh", "9mHbJHbrK-y", "PSlLuTudKeb", "fMDsKy4Jfc", "a8HrRb-465T", "Ns4bRDT3ySS", "URG-X4GXfRX", "I_Zls_qdBc6", "hHD-oBKDM5_", "Txm4DVqI58A", "nips_2022_xL8sFkkAkw", "nips_2022_xL8sFkkAkw", "nips_2022_xL8sFkkAkw", "nips_2022_xL8sFkkAkw" ]
nips_2022_QrK0WDLVHZt
Optimal Gradient Sliding and its Application to Optimal Distributed Optimization Under Similarity
We study structured convex optimization problems, with additive objective $r:=p + q$, where $r$ is ($\mu$-strongly) convex, $q$ is $L_q$-smooth and convex, and $p$ is $L_p$-smooth, possibly nonconvex. For such a class of problems, we propose an inexact accelerated gradient sliding method that can skip the gradient computation for one of these components while still achieving optimal complexity of gradient calls of $p$ and $q$, that is, $\mathcal{O}(\sqrt{L_p/\mu})$ and $\mathcal{O}(\sqrt{L_q/\mu})$, respectively. This result is much sharper than the classic black-box complexity $\mathcal{O}(\sqrt{(L_p+L_q)/\mu})$, especially when the difference between $L_p$ and $L_q$ is large. We then apply the proposed method to solve distributed optimization problems over master-worker architectures, under agents' function similarity, due to statistical data similarity or otherwise. The distributed algorithm matches, for the first time, the lower complexity bounds on both communication and local gradient calls, with the former having been a long-standing open problem. Finally, the method is extended to distributed saddle-point problems (under function similarity) by means of solving a class of variational inequalities, achieving the lower communication and computation complexity bounds.
Accept
The paper extends gradient sliding to the situation where both functions are smooth and the sum is strongly convex. The resulting algorithm is then applied to distributed optimization settings under similarity assumptions, where it jointly achieves optimal gradient-evaluation and communication complexities, improving on prior complexity bounds by logarithmic factors. Initially, the reviewers were unclear about the motivation and construction of the algorithm, as well as the significance of the theoretical results. However, through extensive discussion, most of the issues were clarified to the satisfaction of the reviewers. Consequently, I recommend acceptance of the paper and urge the authors to carefully incorporate all the clarifications in their rebuttal into the camera-ready paper. In addition, please provide an accurate answer (either yes or no) to question 3a in the reproducibility checklist.
train
[ "jbp5JCocAB", "YAw2pncJ2k0", "PWS13jW8hrW", "doy3gPHkLK", "eOKzjRUkgzO", "IC_zi4a9IAK", "S6uwaEK-To", "mz4HIW79FZ6", "34z5ZUuKTpS", "LtobKKzTxEy", "5Fhzr9oYh87T", "-9eSCIFk4_z", "oE2AxN_wQD", "MqKLv3GFmCo", "XFPWSS9RP9", "e2eWwStdvc7", "Wl-m5TaTRup", "n_zdf4pi6t-", "3252QHkc7NS", "b_mc53LVoAk", "VpSttydyUfp", "E93NpK-sYol", "-ryUTQbKmAhx", "qNHVaZv3kNi", "aqFyn9WBywqT", "KrL9Kg2Glzd", "y-8DB4NSnRl", "mjOgK8ooWop", "uy-zmzhBKW3", "Lvk8v68M8oc", "rh9efOzqPbr", "F7iPpsR-9-" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We greatly thank Reviewer **21CM** for the response, important comments, and positive final feedback!", " Thanks for the detailed reply. I do not have any more questions. I would like to raise my score. ", " Thank you for the response!\n\nAt the moment we are discussing this with Reviewer **disq**.\nPlease, read \"Response to Reviewer disq (part 1)\", \"Reply (part 1)\" by Reviewer **disq**, \"Response to Reviewer disq. Reply (part 1)\".\n\nFor convenience, we duplicate it here.\n\n1) Reviewer **disq** noted that we use inexact proximal calculations (see line 5 go Algorithm 1), but we call it \"sliding\" (and not only us - see Section 1.2 of our paper), often other varieties of sliding that can be found for different types of problems do similar things (inexact proximal calculations). This technique is most likely called \"sliding\" because of the physicality of the name, because we \"slide\" only one function while the gradient of the other function is fixed.\n\n2) Sliding/inexact proximal calculations is used in all the papers on similarity. Reviewer can open the papers from Table 1, all papers solve proximal subproblems. For convenience, we give the basic idea (idea of DANE, 2014, see Table 1) that most works use (DANE, DANE-HB, DANE-LS, AIDE, SONATA, SPAG, AccSONATA). This idea use the proximal gradient descent with the Bregman divergence (Reviewer can read it in Section 1.1 of [1]):\n$$\nx^{k+1} = \\arg \\min_x \\left( \\langle \\nabla r (x^k), x - x^k\\rangle + \\frac{1}{\\eta} D_{\\varphi} (x, x^k) \\right),\n$$\nwhere $D_{\\varphi} (x, x^k)$ is the Bregman divergence:\n$$\nD_{\\varphi} (x, x^k) = \\varphi(x) - \\varphi(x^k) - \\langle \\nabla \\varphi(x^k), x - x^k \\rangle.\n$$\nThe key is to use \n$$\n\\varphi(x) = f_1(x) + \\frac{\\delta}{2} || x ||^2,~~~ ~~~ \\eta = 1.\n$$\nThen, we have the following iteration of the proximal gradient descent\n$$\nx^{k+1} = \\arg \\min_x \\left( f_1(x) + \\langle \\nabla (r - f_1) (x^k), x \\rangle + f_1(x) + \\frac{\\delta}{2} || x - x^k||^2 \\right)\n$$\nIf we rewrite it in terms of $q = f_1$ and $p = f - f_1$, we get\n$$\nx^{k+1} = \\arg \\min_x \\left( q(x) + \\langle \\nabla p (x^k), x \\rangle + \\frac{\\delta}{2} || x - x^k||^2 \\right)\n$$\nFor the method we just described, one can only prove the following estimates:\n$$\nO\\left( \\frac{\\delta}{\\mu} \\log 1/e\\right),\n$$\nwhich is not optimal. \n\nThere is a reasonable question, but whether it is possible to speed up this approach. For example, there are accelerated versions of the proximal method. \n\nBut the literature since 2014 has been unable to add direct acceleration to the proximal gradient method for the similarity problem and to obtain optimal rates. This is due to the fact that the analysis of accelerated methods requires convexity of both the function $p$ and the function $q$. This is not the case for the similarity problem (see (3), our problem is \"non-convex+concave=strongly convex\"). Analysis for the non-accelerated version (we described) were invented back around 2014, but the acceleration has become a challenge for the community. It seems to us that any team taking on the similarity task has tried to accelerate the idea described above (we have also tried).\n\nThere were attempts to accelerate with a heavy ball (DANE-HB), but they only give the optimal rate for quadratic problems. There were attempts to accelerate with Catalyst envelope (AIDE, SPAG, AccSONATA), but they work only for some cases or don't meet lower bounds. 
\n\nA reasonable question is whether it is possible to speed up this approach. For example, there are accelerated versions of the proximal method. \n\nBut the literature since 2014 has been unable to add direct acceleration to the proximal gradient method for the similarity problem and to obtain optimal rates. This is due to the fact that the analysis of accelerated methods requires convexity of both the function $p$ and the function $q$. This is not the case for the similarity problem (see (3); our problem is \"non-convex + convex = strongly convex\"). The analysis for the non-accelerated version (described above) was worked out back around 2014, but the acceleration has become a challenge for the community. It seems to us that any team taking on the similarity task has tried to accelerate the idea described above (we have also tried).\n\nThere were attempts to accelerate with a heavy ball (DANE-HB), but they only give the optimal rate for quadratic problems. There were attempts to accelerate with a Catalyst envelope (AIDE, SPAG, AccSONATA), but they work only for some cases or don't meet lower bounds. Moreover, direct acceleration is a more practical trick; acceleration with Catalyst-type envelopes often performs worse in practice and gives suboptimal rates.\n\n3) We can summarize the above as follows:\n\nproximal gradient descent + direct acceleration = no results, problems in analysis (need convexity of both functions $p$ and $q$)\n\nproximal gradient descent + heavy ball = optimal only for quadratic problems \n\nproximal gradient descent + Catalyst = no optimal rates\n\n4) Our work is not based on gradient descent. It is based on the extragradient, adds sliding (proximal calculations) to it, and then accelerates it. Please read \"Response to Reviewer disq (part 1)\" for more details about the whole idea of our method.\n\nWe hope we were able to explain!\n\n[1] Hadrien Hendrikx, Lin Xiao, Sebastien Bubeck, Francis Bach, and Laurent Massoulie. Statistically preconditioned accelerated gradient method for distributed optimization.", " Thank you for the explanation. Here I am wondering:\n\n1. whether gradient sliding techniques have been applied previously in distributed optimization?\n\n2. If yes to question 1, then could you kindly give more intuition why those cannot achieve optimal results while yours can, as in the current work you claimed to solve a long-standing problem. Thanks.", " For the synthetic dataset $L \\approx 10^4$, $\\lambda \\approx 0.1$, $\\delta \\approx 50$.\n\nWe added Table 2 with all values of $L$, $\\mu = \\lambda$ and $\\sigma$ - see Appendix C.2 in the revision.\n\nIn lines 271-273 we described that we use parameters from theory:\n\n\"The settings of the methods are made as described in the original papers. For algorithms that assume an absolutely accurate solution of local problems (DANE, SPAG, AccSONATA), we use AcGD with an accuracy of $10^{-12}$ as a subsolver.\"
This idea use the proximal gradient descent with the Bregman divergence (Reviewer can read it in Section 1.1 of [1]):\n$$\nx^{k+1} = \\arg \\min_x \\left( \\langle \\nabla r (x^k), x - x^k\\rangle + \\frac{1}{\\eta} D_{\\varphi} (x, x^k) \\right),\n$$\nwhere $D_{\\varphi} (x, x^k)$ is the Bregman divergence:\n$$\nD_{\\varphi} (x, x^k) = \\varphi(x) - \\varphi(x^k) - \\langle \\nabla \\varphi(x^k), x - x^k \\rangle.\n$$\nThe key is to use \n$$\n\\varphi(x) = f_1(x) + \\frac{\\delta}{2} || x ||^2,~~~ ~~~ \\eta = 1.\n$$\nThen, we have the following iteration of the proximal gradient descent\n$$\nx^{k+1} = \\arg \\min_x \\left( f_1(x) + \\langle \\nabla (r - f_1) (x^k), x \\rangle + f_1(x) + \\frac{\\delta}{2} || x - x^k||^2 \\right)\n$$\nIf we rewrite it in terms of $q = f_1$ and $p = f - f_1$, we get\n$$\nx^{k+1} = \\arg \\min_x \\left( q(x) + \\langle \\nabla p (x^k), x \\rangle + \\frac{\\delta}{2} || x - x^k||^2 \\right)\n$$\nThat's exactly what Reviewer wrote in line 5 of the response.\nFor the method we just described, one can only prove the following estimates:\n$$\nO\\left( \\frac{\\delta}{\\mu} \\log 1/e\\right),\n$$\nwhich is not optimal. \n\nThere is a reasonable question, but whether it is possible to speed up this approach. For example, there are accelerated versions of the proximal method (APGM or others). \n\nBut the literature since 2014 has been unable to add direct acceleration to the proximal gradient method for the similarity problem and to obtain optimal rates. This is due to the fact that the analysis of accelerated methods requires convexity of both the function $p$ and the function $q$. This is not the case for the similarity problem (see below in the rebuttal that our problem is \"non-convex+concave=strongly convex\"). Analysis for the non-accelerated version (we described) were invented back around 2014, but the acceleration has become a challenge for the community. It seems to us that any team taking on the similarity task has tried to accelerate the idea described above (we have also tried).\n\nThere were attempts to accelerate with a heavy ball (DANE-HB), but they only give the optimal rate for quadratic problems. There were attempts to accelerate with Catalyst envelope (AIDE, SPAG, AccSONATA), but they work only for some cases or don't meet lower bounds. Moreover, direct acceleration is a more practical trick; acceleration with Catalyst-type envelops often performs worse in practice and gives suboptimal rates.\n\n2) We can finalize the above as follows:\n\nthe proximal gradient descent + direct acceleration (Nesterov's acceleration or Tseng's variant) = no results, problems in analysis (need convexity of both functions $p$ and $q$)\n\nthe proximal gradient descent + heavy ball = optimal only for quadratic problems \n\nthe proximal gradient descentd + Catalyst = no optimal rates\n\n3) Our work is not based on gradient descent. It is based on the extragradient, adds sliding (proximal calculations) to it, and then accelerate it. For more details, see our rebuttal above.\n\nWe hope we were able to explain!\n\nP.S. Reviewer is right that we use inexact proximal calculations, but we call it \"sliding\" (and not only us - see Section 1.2 of our paper), often other varieties of sliding that can be found for different types of problems do similar things (inexact proximal calculations). 
This technique is most likely called \"sliding\" because of the physicality of the name, because we \"slide\" only one function while the gradient of the other function is fixed.\n\n\n\n[1] Hadrien Hendrikx, Lin Xiao, Sebastien Bubeck, Francis Bach, and Laurent Massoulie. Statistically preconditioned accelerated gradient method for distributed optimization.", " Thanks for the details! I am satisfied with most of the answers. I think experimental section is still confusing. For example, in the the updated paper, I am still not clear what values were used for lambda of synthetic experiments. I suggest providing a table of the problem and algorithm parameters used for different settings.\n", " I am satisfied with the most of the above responses. Please summarize them in the manuscript to improve the presentation. \n\nIt would great if the authors can point out which paper does the following\n> Some of the papers have puzzled over this issue and indicated what method, with what accuracy and with what parameters we need to use to solve the local subproblems.\n\n> Reviewer suggests including estimates for accelerated gradient descent in Table 1, but this is not really fair to other competitors; we do not know how the authors' method will behave if we add inaccuracy to their method, especially this issue concerns 2nd-order method\n\nAcc SONATA is not a 2nd-order method. Separately, I feel there might a trivial analysis of it to allow for approximate proximal point operators. I don't understand what authors mean by \"this is not really fair to other competitors\". Scientific progress is not supposed to be a game :). Essence of my question was whether there is a simple analysis of approximate Acc SONATA. If yes, what would be the rate. If not, why is this analysis hard.", " I really appreciate that the authors provided the motivation for their problem and some details about the origin the algorithm. \n\nHowever, I still don't understand the step 5 and 6 of Algorithm 1. Specifically, I am hoping to understand how q(x) is used as an approximately proximable function step 5, but its gradient is directly used in algorithms 1. And why this works out.\n\nLet me ask a counter question to understand the algorithm better. Could you have obtained the same results using the following approximate version of the accelerated proximal gradient descent method [Algorithm 1, APGM]? \n\n4. $y_{k} = (1 - \\tau_k) x_k + \\tau_k z_k$\n\n5. $z_{k+1} \\approx \\arg\\min_z p(y_k) + \\langle \\nabla p(y_k), z - y_k\\rangle + \\frac{\\gamma_k}2 \\|\\|z- y_k\\|\\|^2 + q(x)$ (similar to step 5 of current algorithm)\n\n6. $x_{k+1} = (1 - \\tau_k) x_k + \\tau_k z_{k+1}$\n\nI feel the above algorithm is more common and easier to understand than step 5 and 6 of this paper's algorithm. Can the authors explain what is the advantage of their algorithm over the above approximate APGM?\n\n[APGM] Tseng, Paul. \"On accelerated proximal gradient methods for convex-concave optimization.\" submitted to SIAM Journal on Optimization 2.3 (2008). https://www.mit.edu/~dimitrib/PTseng/papers/apgm.pdf", " With this message, we would just like to kindly remind Reviewers that we would be happy if Reviewers would participate in the rebuttal discussion process. We are looking forward to hearing from Reviewers **xSX2**, **disq** and **21CM**. 
We thank Reviewer **3MZq** for the response to the rebuttal.", " Thanks so much for the review and time!\n\nWe greatly appreciate Reviewer's careful attention to our paper.\n\nIf Reviewer feel that he/she are not an expert on the subject, we politely ask to let AC know about it and to take part in a discussion with other Reviewers. Please keep our paper in mind. Reviewer is the only one who rejects our paper and gives \"fair\" score of the contribution. We are very worried about our work, as probably Reviewer is about his/her submits.\n\nP.S. All our algorithms contain 2 loops: the first (main one) by iterator $k$, the second one by inexact calculation of the prox.", " > **To avoid naming collisions within the community, I would not call it extragradient, since that is a different method, and just call it accelerated nesterov, or something like that.**\n\nAccelerated Nesterov is the same method as Accelerated Gradient Descent. This is a method / acceleration technique invented by Yuri Nesterov in the second half of the 20th century. This does not fully reflect the idea of ​​our method. Recall that our method is \"extragradient + sliding for a composite problem + Nesterov acceleration\".\n\nIn our algorithm's name, we wanted to emphasize that we accelerate the extragradient method. We thought about how we can highlight the fact that inside extragradient there is also sliding. But stopped on the current naming. \n\nWe are absolutely open to changing the names of our methods.\n\n> **In point 3, does the VI framework also follow a strongly convex = convex + possibly nonconvex form? If so that would help unify the two stories; otherwise, they appear quite disparate.**\n\nMinimization problems are a special case of VIs (see line 200, Example 1). Therefore, yes, Algorithm 2 (for VIs) and point 3 of our response above is applicable for \"strongly convex = convex + possibly nonconvex\" minimization problems. \n\nThen, at first glance, it seems that we can consider the VIs and not consider the minimization problems. But it is not so. VIs are a broader class of problems than minimization problems, it is more complicated, in particular, it includes saddle point problems (min-max). Minimization problems can be solved faster than general VI problems. See e.g. Table 2 and compare our estimates and lower bounds for minimization and saddle problems:\n\n$$\n\\text{For minimization: } \\sqrt{\\frac{\\delta}{\\mu}} \\cdot \\log 1/\\varepsilon ~~~~ \\text{ and } ~~~~ \\sqrt{\\frac{L}{\\mu}} \\cdot \\log 1/\\varepsilon.\n$$ \n\n$$\n\\text{For saddles: } \\frac{\\delta}{\\mu} \\cdot \\log 1/\\varepsilon ~~~~ \\text{ and } ~~~~ \\frac{L}{\\mu} \\cdot \\log 1/\\varepsilon.\n$$\n\n$$\\sqrt{\\frac{\\delta}{\\mu}} \\leq \\frac{\\delta}{\\mu} ~~~~ \\text{ and } ~~~~ \\sqrt{\\frac{L}{\\mu}} \\leq \\frac{L}{\\mu}$$.\n\nEstimates for minimization problems are better, so we consider them separately. ", " Based on your responses, it seems that this paper may be trying to solve a seminal problem that I am less familiar with, and trying to achieve bounds, which your framework fits.\n\nTo clarify where my questions/concerns come from, from a practical implementation point of view, I think it is still not clear to me that the usual flavor of machine learning problems are well-solved using this approach (which it seems now has 3 loops if you account for the approximate prox calculation, and a pre-decided $T$, which from my experience these techniques tend to be pessimistic). Figure 1 is a nice inclusion, however. 
\n\nHowever, if the goal of the paper is to give a method that achieves a lower bound, which is a long-standing problem, that is a nice theoretical contribution. Since this is not my problem space, I would defer to the reviews of others more embedded in this problem area. ", " Thanks for reply, hope we understand now!\n\nIn the main paper, we focus on strongly-convex/strongly-monotone problems when the parameter $\\mu > 0$ (Assumptions 1 and 7). In Table 2 we present a comparison of the results in this case only. Algorithm 1 and Algorithm 2 are also algorithms for strongly-convex/strongly-monotone problems.\n\nAlgorithm 3 and 4 (Appendix) are algorithms for the convex problem ($\\mu = 0$). They differ slightly from Algorithms 1 and 2 because, for example, Nesterov acceleration is done differently for the strongly convex and convex cases. In the main article, we present the results in the convex case for completeness (to emphasize that we also have them along with the strongly-convex/strongly-monotone case).", " This description was helpful. To avoid naming collisions within the community, I would not call it extragradient, since that is a different method, and just call it accelerated nesterov, or something like that.\n\nIn point 3, does the VI framework also follow a strongly convex = convex + possibly nonconvex form? If so that would help unify the two stories; otherwise, they appear quite disparate.", " In general these two comments are about what goes in the main paper and what goes in the appendix. My general rule of thumb is that I do not need the appendix to decide whether a paper warrants acceptance, and only use it to check for correctness, or if there are extra curiosities that are interesting but not essential. If an algorithm is claimed to be a major contribution, it needs to be in the main paper. If the claim is that numerical results are not a key contribution, and have moved them to the appendix, that is acceptable. ", " We thank Reviewer **3MZq** for review, time and the insightful comments, which will help to improve our work.\n\n> **It is a bit unwieldy to have 2 theorem boxes**, which are referred to in the paper, in the appendix. In general I have a hard time understanding why each one is contributing something different, and something like this should be discussed more clearly in the main text.\n\nSorry, we don’t quite understand what Reviewer meant in that comment. Please explain, in more detail, it seems that the problem is not too big and we can calmly solve/explain it.\n\n> **The second set of numerical results are extremely inconclusive**, and does not show consistent advantage at all.\n\nWe moved them to Appendix (see the revision). Instead, we added more datasets for the first group of experiments in the main part.\n", " \n> **Why is it called extragradient method?** Extragradient usually implies taking two gradient calls and then using the second one, in order to get a \"midpoint discretization\" scheme and improve performance, but at a 2x overhead. None of the algorithms seem to be doing this.\n\nTo understand the intuitions of this method, let us try to lay out a complete picture of how it came about.\n\n1) The problem of optimal distributed methods under similarity assumption has been known for a long time. It has been an open problem for 8 years. Works from Table 1 have been mostly published at leading conferences such as NeurIPS, ICML, AISTATS, ICLR. But these methods did not reach the lower bounds. These papers consider different basic ideas. 
We send Reviewer to Section 1.1 and 2 of [1] (Hadrien Hendrikx et al). There, one of the basic approaches to similarity methods is clearly outlined, as well as the history of the issue. It has become clear to us that we need to look at the problem of similarity methods differently. The idea of a composite problem seemed interesting. This is exactly what we describe in the beginning of the paper:\n$$\nr (x) = f_1(x) + \\frac{1}{n} \\sum_{i=1}^n [f_i(x) - f_1(x)]\n$$\nwhere $q(x) = f_1(x)$ is convex, and $p(x) = \\frac{1}{n} \\sum_{i=1}^n [f_i(x) - f_1(x)]$ may be non-convex.\n\n2) The problem above is a composite problem. To obtain optimal estimates for such problems, sliding algorithms are usually used (see Section 1.2 of our work). At the highest level, the idea of sliding can be stated as follows (in theory it will be more difficult, but it gives an intuition), let us do a gradient descent as follows:\n$$\nx^{k+1} = x^k - \\gamma (\\nabla q (x^k) + \\nabla p(w^k)),\n$$\nwhere we update the point $w^k$ quite rarely, for example, once every $t$ iterations $w^k = x^k$, otherwise it is just taken from the previous iteration $w^k = w^{k-1}$. This allows $\\nabla p$ to be called very rarely. \nBut for the problem \"strongly convex = convex + possibly non-convex\" no sliding was invented.\n\n3) In parallel with this (without additional thoughts about similarity), we thought about sliding for composite variational inequalities (equation (10) in our paper):\n$$\nR(x) = Q(x) + P(x).\n$$\nTo quickly understand the intuitions of variational inequalities, simply change to the operator on the gradient of the function: $R \\to \\nabla r$, $Q \\to \\nabla q$, $P \\to \\nabla p$.\nThe issue is that different methods are used for variational inequalities compared to minimization problems. Gradient descent for variational inequalities is not optimal. But the extragradient method is optimal:\n$$\nu^k = x^k - \\gamma R(x^k), ~~~~ x^{k+1} = x^k - \\gamma R(u^k).\n$$\nAs a result, we got Algorithm 2 (see our paper). It combines the ideas of extragradient and sliding: in line 4, when we compute \\tilde u^k we make sliding, because $P(x^k)$ is fixed, but $Q$ is changing. In line 5, we make an extra step with $R(u^k)$.\nAlgorithm 2 has two interesting and important features: a) (12) – a rare stopping criterion for monotone variational inequalities, such a criterion is used only in one very recent paper (see Theorem 8 in our paper), b) in theoretical analysis it is not necessary to assume the monotonicity of one of the operators (monotonicity for minimization problems means convexity). The second feature is exactly what we needed for the method with similarity.\n\n4) Then Algorithm 2 was accelerated using the Nesterov acceleration to produce Algorithm 1. Therefore, Algorithm 1 is called the accelerated extragradient.\n\nHere we have tried to outline very basic ideas and intuitions, it is easy to see that Algorithms 1 and 2 do not look as simple as we described them here. But hopefully we have given some insight into our thoughts. We would be happy to include some of these intuitions in the paper if we have extra space.\n\n\n[1] Hadrien Hendrikx, Lin Xiao, Sebastien Bubeck, Francis Bach, and Laurent Massoulie. Statisti- cally preconditioned accelerated gradient method for distributed optimization.\n", " \n> **It seems like the method mainly focuses on an outer loop / inner loop approach**, where the outer loop achieves linear convergence and the inner loop borrows from other SOTA methods to get good guarantees. 
", " \n> **It seems like the method mainly focuses on an outer loop / inner loop approach**, where the outer loop achieves linear convergence and the inner loop borrows from other SOTA methods to get good guarantees. This seems appropriate for optimal rates, but I don't see the clear benefit to distributed optimization. Is this a noise variance controlling technique, or straggler mitigation technique? I guess what I'm asking is, if A = gradient sliding and B = optimal method, is this paper just A + B (arbitrary mix and match), or are there deeper ramifications for using this combination?\n\n1) Note that all of the methods in Table 1 follow the “outer loop / inner loop” approach.\nTo put it in the Reviewer's terms, the community has been trying to solve $A + B$ since 2014 but has not obtained the optimal result during those 8 years. As we noted earlier, most of the papers in Table 1 were presented at A* conferences, and among the authors of these papers (often the first and second authors) are very famous and highly cited scientists.\n\n2) In answering the question (*Why is it called the extragradient method?*) we tried to convey that the creation of $A$ required a great deal of effort.\n\n3) *Is this a noise variance controlling technique, or straggler mitigation technique?* Neither; we explained the intuition of our method above. One can look for connections of our methods with variance control or straggler mitigation, but that would be artificial; we did not build these connections in when we created the methods. Perhaps the Reviewer sees something.\n\n> **There is the big question also of what is hidden in the constants.** For example, the scale of delta may be big in practice. More to the point, if delta is small in practice, to the point where the rates are tight, wouldn't the competitor methods work pretty well? A more extensive numerical experiments comparison would show if this is really a practical method.\n\n1) In the paragraph starting at line 43, we explain why similarity is interesting to consider. This is primarily due to the fact that it is natural and has a good theoretical background. In particular, one can prove that if the data are uniformly distributed among the devices, then\n$$\n||\nabla^2 f_i (x) - \nabla^2 f_j (x) || \sim \frac{L}{\sqrt{m}},\n$$\nwhere $L$ is the Lipschitz constant of the gradients and $m$ is the size of the local dataset on each device. What is more, this notion of similarity is popular in the literature. \n\n2) *A more extensive numerical experiments comparison would show if this is really a practical method.* We have added more datasets for the first group of experiments. In the future we plan to add experiments for other practical problems, but for now, due to the time limit, we can only provide this.\n\n> **What's the big idea behind using the prox operations in step 5 of alg 1?** Is it just to achieve optimal rates? I ask because it seems prox operations are used, but aren't really discussed in terms of key benefits and numerical insights here, and the standard is usually just gradient descent.\n\n1) We have probably partially clarified the answer to this question in our responses above.\n\n2) It is important to note that the subproblem/prox operator in line 5 is solved by an additional optimizer. For example, one can use gradient descent, accelerated gradient descent, or other methods here; the main thing is to find $x^{k+1}_f$ with the necessary accuracy. But the accuracy of the solution is governed by the unusual condition (4) (see Theorem 1), which is why we use a rather rarely used method (see Theorem 2): it gives exactly the guarantees we need.\n\n> **In (8), I see a dependency of error on 1/T^2. 
However, in (9), T is a constant.** This doesn't seem logical, since later inner iterations should require higher precision (and thus larger T) than initial ones. This of course affects the main final results (Thm 5).\n\n$T$ is fixed; it does not depend in any way on the iteration number or on the desired accuracy $\varepsilon$. The only thing we need to guarantee when solving the problem from line 5 of Algorithm 1 is that condition (4) from Theorem 1 is satisfied. And that condition is satisfied with the fixed $T$ from (8).\n\n> **Theorems 4 and 10 seem to better match the inner / outer loop issues, but they are kind of terrible rates then.** How do they compare against competitors?\n\nTheorems 4, 6, 10, and 12 are given for the case when $\mu = 0$; this is the convex case for minimization and the monotone case for variational inequalities. In these cases our algorithm is also optimal and outperforms competitors in the application to the distributed problem under the similarity assumption.\n\nWe do not quite understand what the Reviewer meant by *Theorems 4 and 10 seem to better match the inner / outer loop issues*. Please explain.", " We thank Reviewer **21CM** for the review, the time, and the insightful comments, which will help to improve our work.\n\nNext, we answer the questions asked by the Reviewer.\n\n> **Presentation needs serious improvements. See questions below.**\n\nWe have improved it. Please see the revision and the answers below.\n\n> **The numerical results are relatively less informative. On real datasets, the proposed method only achieves comparable performance with the state-of-the-art method.**\n\nWe added more real datasets in the revision. Our method outperforms the competitors on these datasets.\n\n> **Line 23, \"a network of $m$ agents.\" For Equation (2), is it $1/m$ or $1/n$? See also Lines 45, 47, and 56.**\n\nThe correct option in line 23 is “$n$ agents”. We fixed it, thanks! Lines 45, 47, and 56 are correct.\n\n> **Line 44, why one would use the second order information for similarity notation?** What would happen if one uses first/zeroth-order information for similarity notation. How to verify that function similarity holds.\n\n1) After line 44, we give an explanation of why second-order similarity is interesting to consider. It is primarily that this notion is natural and has a good theoretical background. In particular, one can prove that if the data are uniformly distributed among the devices, then\n$$\n||\nabla^2 f_i (x) - \nabla^2 f_j (x) || \sim \frac{L}{\sqrt{m}},\n$$\nwhere $L$ is the Lipschitz constant of the gradients and $m$ is the size of the local dataset on each device.\nSecond-order similarity is also popular in the literature. All of the papers in Table 1 address it; most of them were presented at NeurIPS, ICML, ICLR, and AISTATS. But the most interesting thing is that for 8 years, none of these papers could reach the lower bounds. This was a challenge for us.\n\n2) If we consider first-order similarity, this, too, has its place in the literature, but it is a different problem setting with a different background and different competing works. Moreover, for first-order similarity we do not know analogues of the facts that emerge naturally for second-order similarity, i.e., theorems in the spirit of \"if the data are uniformly distributed over the devices, then the constant of first-order similarity is...\". 
The only thing we can prove from second-order similarity is the following:\nif $||\nabla^2 f_i (x) - \nabla^2 f_j (x) || \leq \delta$ for all $x$, then the function $(f_i - f_j)$ has a $\delta$-Lipschitz gradient:\n$$\n|| \nabla f_i (x) - \nabla f_j (x) - \nabla f_i (y) + \nabla f_j (y)|| \leq \delta || x - y||.\n$$\nFrom this inequality we have that\n$$\n|| \nabla f_i (x) - \nabla f_j (x)|| - || \nabla f_i (y) - \nabla f_j (y) || \leq || \nabla f_i (x) - \nabla f_j (x) - \nabla f_i (y) + \nabla f_j (y)|| \leq \delta || x - y||\n$$\nand\n$$\n|| \nabla f_i (x) - \nabla f_j (x)|| \leq \delta || x - y|| + || \nabla f_i (y) - \nabla f_j (y) ||.\n$$\nLet us fix some $y = y_0$ and assume that $|| \nabla f_i (y_0) - \nabla f_j (y_0) || \leq G$; this means that\n$$\n|| \nabla f_i (x) - \nabla f_j (x)|| \leq \delta || x - y_0|| + G.\n$$\nIf we solve the problem on $R^d$, this means that the first-order similarity parameter can be very large (because $|| x - y_0||$ is unbounded).\n\n> **If there is similarity, does it reduce back to the stochastic setting?** For example, if using first-order similarity notation, one may rewrite the problem as a stochastic optimization with a discrete distribution and bounded variance of the first-order information.\n\nIt seems possible to consider that we work with stochastic Hessians (using exactly the same reasoning that the Reviewer gave for gradients). This raises a number of issues:\n\n1) Most methods for similarity, including ours, use only gradients. Methods that use Hessians are quite expensive, especially in a distributed setup, where Hessians need to be sent to the server. Moreover, Newton-type methods are good for local convergence, while we solve a global minimization problem. \n\n2) We also ask the Reviewer to pay attention to Table 1: most of the competing methods that use Hessians (see the “Order” column) give bad rates (and some do not converge at all).\n\n\n> **For second-order similarity, how to evaluate it numerically?**\n\n1) If we can guarantee that the data are uniformly distributed among the computing devices, then the estimate we discussed above is valid: $\delta \sim L/\sqrt{m}$. For example, this holds if one solves a problem on a computing cluster and distributes the data independently and uniformly.\n\n2) If we do not know how the data are distributed over the devices, then we can only measure this similarity numerically. The easiest way is to send the Hessian to the server a couple of times, but this is expensive, so we can use practical tricks: for example, send only the diagonal elements of the Hessian, or compress it (send the top 10% of the largest Hessian values).\n", " \n> **The proposed method combines acceleration, extragradient, and gradient sliding. It is unclear which component improves the complexity bound, compared to the existing literature. Please specify with a more detailed comparison with existing literature on the methodology.**\n\nIf we are talking about our results for the distributed problem under similarity conditions, then the idea is based on 4 main things:\n\n1) considering the distributed problem as a composite problem, where one function is convex and the other is possibly non-convex,\n\n2) a special sliding for composite variational inequalities based on the extragradient,\n\n3) the feature of this sliding that it is applicable to composite problems with monotone + non-monotone operators (convex + non-convex functions),\n\n4) acceleration of this sliding.\n\nNow, in detail. 
To understand the full intuition behind our methods, let us try to lay out a complete picture of how they came about.\n\n1) The problem of designing optimal distributed methods under the similarity assumption (as we stated above) has been known for a long time; it has been open for 8 years. The works from Table 1 have mostly been published at leading conferences such as NeurIPS, ICML, AISTATS, and ICLR, but these methods did not reach the lower bounds. These papers consider different basic ideas. We refer the Reviewer to Sections 1.1 and 2 of [1] (Hadrien Hendrikx et al.). There, one of the basic approaches to similarity methods is clearly outlined, as well as the history of the issue. It became clear to us that we needed to look at the problem of similarity methods differently. The idea of a composite problem seemed interesting. This is exactly what we describe at the beginning of the paper:\n$$\nr (x) = f_1(x) + \frac{1}{n} \sum_{i=1}^n [f_i(x) - f_1(x)],\n$$\nwhere $q(x) = f_1(x)$ is convex, and $p(x) = \frac{1}{n} \sum_{i=1}^n [f_i(x) - f_1(x)]$ may be non-convex.\n\n2) The problem above is a composite problem. To obtain optimal estimates for such problems, sliding algorithms are usually used (see Section 1.2 of our work). At the highest level, the idea of sliding can be stated as follows (in theory it is more involved, but this gives the intuition): let us run a gradient descent of the form\n$$\nx^{k+1} = x^k - \gamma (\nabla q (x^k) + \nabla p(w^k)),\n$$\nwhere we update the point $w^k$ quite rarely, e.g., once every $t$ iterations we set $w^k = x^k$; otherwise it is taken from the previous iteration, $w^k = w^{k-1}$. This allows $\nabla p$ to be called very rarely.\nBut for the problem \"strongly convex = convex + possibly non-convex\" no sliding had been invented.\n\n3) In parallel with this (without additional thoughts about similarity), we thought about sliding for composite variational inequalities (equation (10) in our paper):\n$$\nR(x) = Q(x) + P(x).\n$$\nTo quickly grasp the intuition for variational inequalities, simply substitute the gradient of a function for each operator: $R \to \nabla r$, $Q \to \nabla q$, $P \to \nabla p$.\nThe issue is that different methods are used for variational inequalities than for minimization problems. Gradient descent is not optimal for variational inequalities, but the extragradient method is:\n$$\nu^k = x^k - \gamma R(x^k), ~~~~ x^{k+1} = x^k - \gamma R(u^k).\n$$\nAs a result, we got Algorithm 2 (see our paper). It combines the ideas of extragradient and sliding: in line 4, when we compute $\tilde u^k$, we perform sliding, because $P(x^k)$ is fixed while $Q$ is changing; in line 5, we take an extra step with $R(u^k)$.\nAlgorithm 2 has two interesting and important features: a) (12) is a rare stopping criterion for monotone variational inequalities; such a criterion is used in only one very recent paper (see Theorem 8 in our paper); b) in the theoretical analysis it is not necessary to assume monotonicity of one of the operators (for minimization problems, monotonicity means convexity). The second feature is exactly what we needed for the method with similarity.\n\n4) Then Algorithm 2 was accelerated using Nesterov acceleration to produce Algorithm 1. Therefore, Algorithm 1 is called the accelerated extragradient.\n\nHere we have tried to outline only the very basic ideas and intuitions; it is easy to see that Algorithms 1 and 2 do not look as simple as we described them here. But hopefully we have given some insight into our thoughts. 
We would be happy to include some of these intuitions in the paper if we have extra space.\n\n\n[1] Hadrien Hendrikx, Lin Xiao, Sebastien Bubeck, Francis Bach, and Laurent Massoulie. Statistically preconditioned accelerated gradient method for distributed optimization.\n", " > **Line 6, 7, missing bracket.**\n\nFixed, thanks!\n\n> **Line 23, is it $m$ or $n$?**\n\n$n$. Fixed!\n\n> **Line 29, there are m agents or m samples?**\n\n$m$ samples.\n\n> **Line 30, what does it mean by mismatch between parameter and sample?**\n\nThanks! We changed it to avoid misunderstandings. Now it reads “the loss of the model $x$ on the sample $z^j_i$”.\n\n> **Assumption 1 with $\mu=0$ is normally treated as convexity while $\mu<0$ is weakly convex in the literature. It would be nice to stay consistent.**\n\nWe deleted “weakly”. Thanks!\n\n> **Example 2 Line 201, is it $\min_z$ or $\max_z$ as it is a saddle point problem.** \n\n$\max_z$. Fixed, thanks!\n\n> **Line 114, 120, missing bracket**\n\nFixed, thanks!\n", " We thank Reviewer **disq** for the review, the time, and the insightful comments, which will help to improve our work.\n\nNext, we answer the questions asked by the Reviewer.\n\n**Paper does not explain the intuition behind the algorithm. The exposition should also make it clear how they are related to known ideas. These help the community assimilate the results and build on top of them to produce future scientific output.**\n\nTo explain the intuition behind this method, let us try to lay out a complete picture of how it came about.\n\n1) The problem of designing optimal distributed methods under the similarity assumption has been known for a long time; it has been open for 8 years. The works from Table 1 have mostly been published at leading conferences such as NeurIPS, ICML, AISTATS, and ICLR, but these methods did not reach the lower bounds. These papers consider different basic ideas. We refer the Reviewer to Sections 1.1 and 2 of [1] (Hadrien Hendrikx et al.). There, one of the basic approaches to similarity methods is clearly outlined, as well as the history of the issue. It became clear to us that we needed to look at the problem of similarity methods differently. The idea of a composite problem seemed interesting. This is exactly what we describe at the beginning of the paper:\n$$\nr (x) = f_1(x) + \frac{1}{n} \sum_{i=1}^n [f_i(x) - f_1(x)],\n$$\nwhere $q(x) = f_1(x)$ is convex, and $p(x) = \frac{1}{n} \sum_{i=1}^n [f_i(x) - f_1(x)]$ may be non-convex.\n\n2) The problem above is a composite problem. To obtain optimal estimates for such problems, sliding algorithms are usually used (see Section 1.2 of our work). At the highest level, the idea of sliding can be stated as follows (in theory it is more involved, but this gives the intuition): let us run a gradient descent of the form\n$$\nx^{k+1} = x^k - \gamma (\nabla q (x^k) + \nabla p(w^k)),\n$$\nwhere we update the point $w^k$ quite rarely, e.g., once every $t$ iterations we set $w^k = x^k$; otherwise it is taken from the previous iteration, $w^k = w^{k-1}$. This allows $\nabla p$ to be called very rarely; a toy sketch of this idea is given right below. 
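As a side illustration, here is a minimal NumPy sketch of the plain sliding idea for minimization, in which the gradient of the (possibly non-convex) part $p$ is refreshed only once every $t$ iterations. The objective, step size, and refresh period are illustrative assumptions for a toy problem, not the actual algorithm from the paper.

```python
import numpy as np

# Toy composite objective r(x) = q(x) + p(x) on R^2 (illustrative assumption):
# q(x) = ||x||^2 is the cheap convex part, p(x) = -cos(x_1) the non-convex part.
grad_q = lambda x: 2.0 * x
grad_p = lambda x: np.array([np.sin(x[0]), 0.0])

def gradient_sliding(x0, gamma=0.05, iters=300, t=10):
    """Sliding intuition: call the expensive grad_p only every t iterations."""
    x = x0.copy()
    gp = grad_p(x)
    for k in range(iters):
        if k % t == 0:
            gp = grad_p(x)  # rare call to the expensive gradient
        x = x - gamma * (grad_q(x) + gp)
    return x

print(gradient_sliding(np.array([1.0, 1.0])))  # approaches the minimizer (0, 0)
```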
\nBut for the problem \"strongly convex = convex + possibly non-convex\" no sliding had been invented.\n\n3) In parallel with this (without additional thoughts about similarity), we thought about sliding for composite variational inequalities (equation (10) in our paper):\n$$\nR(x) = Q(x) + P(x).\n$$\nTo quickly grasp the intuition for variational inequalities, simply substitute the gradient of a function for each operator: $R \to \nabla r$, $Q \to \nabla q$, $P \to \nabla p$.\nThe issue is that different methods are used for variational inequalities than for minimization problems. Gradient descent is not optimal for variational inequalities, but the extragradient method is:\n$$\nu^k = x^k - \gamma R(x^k), ~~~~ x^{k+1} = x^k - \gamma R(u^k).\n$$\nAs a result, we got Algorithm 2 (see our paper). It combines the ideas of extragradient and sliding: in line 4, when we compute $\tilde u^k$, we perform sliding, because $P(x^k)$ is fixed while $Q$ is changing; in line 5, we take an extra step with $R(u^k)$.\nAlgorithm 2 has two interesting and important features: a) (12) is a rare stopping criterion for monotone variational inequalities; such a criterion is used in only one very recent paper (see Theorem 8 in our paper); b) in the theoretical analysis it is not necessary to assume monotonicity of one of the operators (for minimization problems, monotonicity means convexity). The second feature is exactly what we needed for the method with similarity.\n\n4) Then Algorithm 2 was accelerated using Nesterov acceleration to produce Algorithm 1. Therefore, Algorithm 1 is called the accelerated extragradient.\n\nHere we have tried to outline only the very basic ideas and intuitions; it is easy to see that Algorithms 1 and 2 do not look as simple as we described them here. But hopefully we have given some insight into our thoughts. We would be happy to include some of these intuitions in the paper if we have extra space.\n\n\n[1] Hadrien Hendrikx, Lin Xiao, Sebastien Bubeck, Francis Bach, and Laurent Massoulie. Statistically preconditioned accelerated gradient method for distributed optimization.\n", " \n> **No proper examples are provided for the settings studied in the paper.**\n\nIf we understand correctly, this point is elaborated by the following question (if we are wrong, please clarify).\n\n> **What are examples where r is mu-strongly convex, p is nonconvex and q is convex? It looks like the one in simulations doesn’t satisfy this. If there are no such examples, can step 5 of algorithm be solved using accelerated smooth strongly convex optimization?**\n\n1) First of all, the problem for which our sliding was invented fits these assumptions. This problem is the distributed minimization problem under similarity; see (3) in our paper:\n$$\nr (x) = f_1(x) + \frac{1}{n} \sum_{i=1}^n [f_i(x) - f_1(x)],\n$$\nwhere $q(x) = f_1(x)$ is convex, and $p(x) = \frac{1}{n} \sum_{i=1}^n [f_i(x) - f_1(x)]$ may be non-convex.\nMore specifically, consider our experimental setup: $r(x)$ is a distributed regression problem with a regularizer (strongly convex), $q(x) = f_1(x)$ is the server loss function (it is actually strongly convex, but convexity is enough for the theory), and $p(x) = \frac{1}{n} \sum_{i=1}^n [f_i(x) - f_1(x)]$ (non-convex).\n\n2) In fact, sliding can also be applied to “convex + convex” problems (such problems satisfy the \"convex + not necessarily convex\" assumption). Such problems are quite common. 
One interesting and popular example at the moment is so-called personalized federated learning; see [2] and [3].\n\n[2] Filip Hanzely, Peter Richtárik. Federated Learning of a Mixture of Global and Local Models\n\n[3] Filip Hanzely, Slavomír Hanzely, Samuel Horváth, Peter Richtárik. Lower Bounds and Optimal Algorithms for Personalized Federated Learning\n\n3) *Can step 5 of algorithm be solved using accelerated smooth strongly convex optimization?* \nThe answer is yes in any case, because the problem in line 5 is always convex (its convexity depends on the function $q$, which is convex). But we do not use the accelerated gradient method, because we need to guarantee condition (4) (see Theorem 1) when solving the inner problem; this is important for obtaining optimality. For the accelerated gradient method, we did not find such convergence results. \n\n> **Not enough experiment details are provided and there seem to be missing baselines**\n\nWe expanded the experimental section in the revision. Please indicate which details the Reviewer would like to see more of.\n\n\n**It is not clear whether the missing entries in the “Local gradient complexity” column of the table actually mean these oracles are hard to solve. The best complexity for solving the oracle sub-problems should be mentioned here.**\n\nDashes in Table 1 mean that the authors assume they can solve any local subproblem with accuracy $\varepsilon = 0$. That is why we did not include such results in Table 1: we do not know of practical algorithms that can solve minimization problems absolutely precisely. The Reviewer suggests including estimates for accelerated gradient descent in Table 1, but this is not really fair to the other competitors; we do not know how those authors' methods would behave if we added inexactness to them, and this issue especially concerns 2nd-order methods. Some of the papers have puzzled over this issue and indicated which method, with what accuracy and with what parameters, needs to be used to solve the local subproblems.\n\nThe question of stopping the inner method for local subproblems is also important for us because we use a rather unusual stopping criterion; this turned out to be one of the key places in our theoretical analysis and an important stepping stone to achieving optimality in local computations.\n", " \n> **Why does the algorithm work? Why is this called “accelerated extragradient”?**\n\nWe answered this question above (*Paper does not explain the intuition behind the algorithm.*)\n\n> **What are examples where r is mu-strongly convex, p is nonconvex and q is convex? It looks like the one in simulations doesn’t satisfy this. If there are no such examples, can step 5 of algorithm be solved using accelerated smooth strongly convex optimization?**\n\nWe answered this question above (*No proper examples are provided for the settings studied in the paper.*)\n\n> **Function similarity assumption and Assumption 6 are different (notice j in the former). What is the relation between the two and why does the paper use a different assumption?**\n\nThanks! We fixed Assumption 6 in the revision. It is easy to prove that if $||\nabla^2 f_j (x) - \nabla^2 f_i (x)|| \leq \delta$ for all $i,j$ and $x$, then\n$$\n|| \nabla^2 f (x) - \nabla^2 f_j (x)|| = || \frac{1}{n} \sum_{i=1}^n [\nabla^2 f_i (x) - \nabla^2 f_j (x) ] || \leq \frac{1}{n} \sum_{i=1}^n || \nabla^2 f_i (x) - \nabla^2 f_j (x)|| \leq \delta.\n$$\n\n> **Acc SONATA [44], what is the complexity to solve proximal steps? 
Is it O(\\sqrt{L_i/\\mu} \\log(1/eps)) because of strong convexity? If yes, this should be noted in the table as well.**\n\nWe answered this question above (*It is not clear whether the missing entries in the “Local gradient complexity”*).\n\n> **In experiments, what are the r, p and q, and Lp, Lq and \\mu?**\n\nIn experiments we use Algorithm 1 with $r,p,q$ as described in (3) and in Section 3. We added details about $L,\\mu,\\delta$ (see the revision). \n\n> **In experimental section what does lambda=0,1 mean? What was used for the experiments and why were these choices made?**\n\nThanks! That is an inaccuracy. The regularization $\\lambda = 0.1$ from (17) is correct only for synthetic dataset. We corrected this in the revision. The full selection of the constant is described as follows:\n\n\"In Section C.2 we explain how the parameters $L$ and $\\delta$ are estimated.\nFor the synthetic dataset we choose the noise level and the regularization parameter such that $L/\\delta = 200$ and $L/\\lambda = 10^5$. \nFor the real datasets the regularization parameter is chosen such that $L/\\lambda = 10^6$.\"\n\n> **Please provide comparison to AccSONATA which has the SOTA communication complexity. The sub problems should be easy to compute if they are convex or strongly-convex.** \n\nWe added AccSONATA (see revision). But it does not give good results. This is most likely due to the fact that it uses envelope/Catalyst acceleration, which in practice works worse than Nesterov's direct acceleration.\n\n> **In Synthetic minimization dataset how much noise was added?**\n\nAs we noted earlier, the full selection of the constant is described as follows:\n\n\"In Section C.2 we explain how the parameters $L$ and $\\delta$ are estimated.\nFor the synthetic dataset we choose the noise level and the regularization parameter such that $L/\\delta = 200$ and $L/\\lambda = 10^5$. \nFor the real datasets the regularization parameter is chosen such that $L/\\lambda = 10^6$.\"\n\n\n> **Lq >= Lp >= mu assumption should be more prominant**\n\nSorry, we don't quite understand what was meant. In fact, we can get the same results as in Theorem 3 for the case when $L_q < L_p$ - see Line 161. But from the point of view of the similarity application we are interested only in the case when $L_p \\leq L_q$, i.e. $\\delta \\leq L$.\n\n> **“Weakly convex” function also has another meaning of mu < 0. May be use “convex”?**\n\nThanks, we fixed it - see the revision.\n", " We thank Reviewer **xSX2** for review, time and the insightful comments, which will help to improve our work.\n\nNext, we answer the questions asked by Reviewer.\n\n> **I am wondering what is the author's thoughts on the comparison between the proposed methods and the so-called local SGD studied in [1]**\n\nLet us highlight some of the differences:\n\n1) In works about local methods (in particular [1]) all devices work in parallel, in our method only server works, other devices rarely send required gradients.\n\n2) In local methods, for example in Local SGD, when local steps happen, in fact local function on device f_i is minimized. In our case the local subproblem looks different (see line 5 in Algorithm 1 of our paper).\n\n3) Our method is primarily sharpened to use similarity (although it can be used for any problem), local methods were presented for general problems. Therefore our method works better (optimal) for problems under hessian similarity.\n\n[1] Stich, Sebastian U. 
\"Local SGD converges fast and communicates little.\" arXiv preprint arXiv:1805.09767 (2018). \n\n> **any thoughts on the proposed methods for deep learning**\n\nThis is a very interesting area of research. We are busy with it now and do not want to reveal all the details, but we tell some ideas:\n\nIn theoretical works it is often assumed that neural networks have a $L$-Lipschitz gradient. One can also recall that the learning rate/step in theoretical analysis is usually $\\gamma \\sim 1/L$. Combining these two facts and knowing some classical values of learning rate $\\gamma_{classical}$ for training e.g. ResNet on ImageNet, one can find out the estimate of the Lipschitz constant of gradients for the ResNet training problem on ImageNet: $L_{est} \\sim 1/\\gamma_{classical}$. If we train our model in a distributed manner and divide the data uniformly, there is a similarity $\\delta$ between workers. Again from theory this $\\delta$ can be estimated as $\\delta_{est} \\sim L_{est}/\\sqrt{m}$, where $m$ is a local data size. It turns out that some modifications of our algorithms (from the current paper under review) work for distributed training of neural networks (if we use $\\delta_{est} \\sim L_{est}/\\sqrt{m}$ as a parameter).\n", " Dear Reviewers, Area Chairs and Senior Area Chairs!\n\nWe published a revision of our paper in which we have tried to solve most of the issues. Сhanges are highlighted in blue. What is new:\n\n1) We deleted the word “weakly”, now if $\\mu = 0$, then the function is “convex” (without “weakly”). Thanks Reviewer **disq**, **21CM**!\n\n2) We changed Assumptions 6 and 12 a bit to have the same definition of similarity in the introduction and in the main part. Thanks Reviewer **disq**! It does not affect the results. \n\n3) We fixed typos. Thanks Reviewer **21CM**!\n\n4) We made changes to the experimental section:\n\na) added two more datasets in the first group of experiments (asked by Reviewer **3MZq**),\n\nb) added AccSONATA (asked by Reviewer **disq**),\n\nc) added more details about constant $L,\\mu, \\delta$ choice and estimates (asked by Reviewer **disq**),\n\nd) moved the second set of experiments to Appendix C.1, because of space limit.\n\nThank you very much for your work! You really helped make our paper better.\n", " The author studied a convex optimization problem, where the objective function can be decomposed into a smooth convex function and a possible nonconvex function. There are three major contributions: 1. the author proposed gradient sliding algorithm which achieves optimal complexity of gradient calls of each component functions; 2. the author proved the algorithm achieves optimal communication complexity under $\\delta$-similarity; 3. the author applied the proposed method to a class of distributed saddle-point optimization problem. The paper is a good one with strong theoretical results and clear presentation. I am wondering what is the author's thoughts on the comparison between the proposed methods and the so-called local SGD studied in [1],\nand any thoughts on the proposed methods for deep learning.\n\n[1] Stich, Sebastian U. \"Local SGD converges fast and communicates little.\" arXiv preprint arXiv:1805.09767 (2018). the authors have adequately addressed the limitations ", " Paper considers convex minimization of sum of smooth functions r(x) = p(x) + q(x) with Lipschtiz-smoothness constants Lp and Lq (Lp < Lq). Here they assume that p can be non-convex and q is convex. 
This paper provides a gradient sliding algorithm which only uses O(\sqrt{Lp}) gradients of p and O(\sqrt{Lq}) gradients of q. This kind of separation is useful when the first-order oracle of p is much costlier than that of q. The paper provides these results when r is convex or strongly convex.\n\nThis algorithm is then used to obtain optimal communication and gradient complexities for distributed finite-sum optimization under the Hessian similarity assumption. The paper also extends the sliding scheme to the setting of solving a monotone variational inequality given by a sum of two Lipschitz continuous operators. Strengths\n- Gradient sliding results seem novel\n- Results seem to pass quick checking of logic\n- Paper provides results for two different settings and applies them to distributed optimization to obtain optimal guarantees\n- Experiments section\n\nWeakness\n- Paper does not explain the intuition behind the algorithm. The exposition should also make it clear how the ideas are related to known ones. These help the community assimilate the results and build on top of them to produce future scientific output.\n- No proper examples are provided for the settings studied in the paper.\n- Not enough experiment details are provided and there seem to be missing baselines\n- It is not clear whether the missing entries in the “Local gradient complexity” column of the table actually mean these oracles are hard to solve. The best complexity for solving the oracle sub-problems should be mentioned here.\n\nAfter rebuttal\n- Authors addressed almost all of my concerns. Assuming authors will take action on my suggestions, including the new ones about the intuition and rewording the claim, I am increasing my score. 1. Why does the algorithm work? Why is this called “accelerated extragradient”?\n2. What are examples where r is mu-strongly convex, p is nonconvex and q is convex? It looks like the one in simulations doesn’t satisfy this. If there are no such examples, can step 5 of algorithm be solved using accelerated smooth strongly convex optimization?\n3. The function similarity assumption and Assumption 6 are different (notice j in the former). What is the relation between the two and why does the paper use a different assumption?\n4. Acc SONATA [44], what is the complexity to solve proximal steps? Is it $O(\sqrt{L_i/\mu} \log(1/\varepsilon))$ because of strong convexity? If yes, this should be noted in the table as well.\n\n5. In experiments, what are the r, p and q, and Lp, Lq and \mu?\n6. In the experimental section, what does lambda=0,1 mean? What was used for the experiments and why were these choices made?\n7. Please provide comparison to AccSONATA which has the SOTA communication complexity. The sub-problems should be easy to compute if they are convex or strongly convex.\n8. In the synthetic minimization dataset, how much noise was added?\n\nMinor comments\n\n9. The Lq >= Lp >= mu assumption should be more prominent\n10. “Weakly convex” function also has another meaning of mu < 0. Maybe use “convex”?\n No limitations are provided. Authors may discuss the challenges of extending the results to nonconvex settings", " The paper combines extra gradient, gradient sliding, and acceleration to propose an accelerated extragradient sliding method for convex composite optimization that achieves improved gradient complexity. When applied to distributed optimization problems under agents' function similarity, the method achieves lower bounds on communication and gradient complexities. 
The method is further extended to the variational inequality setting. ## Strengths\n\n1. The paper combines extra gradient, gradient sliding, and acceleration, achieving $O(\sqrt{L_p/\mu})$ and $O(\sqrt{L_q/\mu})$ gradient calls for $\nabla p$ and $\nabla q$, respectively, improving over the condition number for the minimization problem. It suits unbalanced problems well.\n2. The method further extends to obtain optimal gradient sliding for VIs.\n3. When applied to distributed optimization and distributed minimax optimization under the similarity assumption, the proposed methods achieve the lower bounds on both communication and gradient complexities.\n\n\n## Weaknesses\n\n1. Presentation needs serious improvements. See questions below.\n\n2. The numerical results are relatively less informative. On real datasets, the proposed method only achieves comparable performance with the state-of-the-art method. \n\n 1. Line 23, \"a network of $m$ agents.\" For Equation (2), is it $1/m$ or $1/n$? See also Lines 45, 47, and 56.\n\n2. Line 44, why one would use the second order information for similarity notation? What would happen if one uses first/zeroth-order information for similarity notation. How to verify that function similarity holds.\n\n3. If there is similarity, does it reduce back to the stochastic setting? For example, if using first-order similarity notation, one may rewrite the problem as a stochastic optimization with a discrete distribution and bounded variance of the first-order information. \n\n4. For second-order similarity, how to evaluate it numerically?\n\n5. The proposed method combines acceleration, extragradient, and gradient sliding. It is unclear which component improves the complexity bound, compared to the existing literature. Please specify with a more detailed comparison with existing literature on the methodology. \n\n## Minor Comments:\n1. Line 6, 7, missing bracket. \n2. Line 23, is it $m$ or $n$?\n3. Line 29, there are $m$ agents or $m$ samples? \n4. Line 30, what does it mean by mismatch between parameter and sample?\n5. Assumption 1 with $\mu=0$ is normally treated as convexity while $\mu<0$ is weakly convex in the literature. It would be nice to stay consistent. \n6. Example 2 Line 201, is it $\min_z$ or $\max_z$ as it is a saddle point problem. \n7. Line 114, 120, missing bracket. N.A. ", " This paper considers a gradient sliding approach for distributed optimization. The idea is that by interleaving the computations between past and future iterates, the distributed optimization system is more robust to noise and convergence can be accelerated. Strengths\n - The paper does a pretty thorough related-works review, and seems to be integrating sophisticated methods.\n\nWeaknesses\n - It is a bit unwieldy to have 2 theorem boxes, which are referred to in the paper, in the appendix. In general I have a hard time understanding why each one is contributing something different, and something like this should be discussed more clearly in the main text.\n\n - The second set of numerical results are extremely inconclusive, and do not show a consistent advantage at all. It is possible that all the rest of the issues are misunderstandings, and if we can clarify them (and discuss how they can be clarified in the paper), I can increase my score.\n\nBroad strokes questions\n\n - Why is it called the extragradient method? 
Extragradient usually implies taking two gradient calls and then using the second one, in order to get a \"midpoint discretization\" scheme and improve performance, but at a 2x overhead. None of the algorithms seem to be doing this.\n\n - It seems like the method mainly focuses on an outer loop / inner loop approach, where the outer loop achieves linear convergence and the inner loop borrows from other SOTA methods to get good guarantees. This seems appropriate for optimal rates, but I don't see the clear benefit to *distributed* optimization. Is this a noise variance controlling technique, or straggler mitigation technique? I guess what I'm asking is, if A = gradient sliding and B = optimal method, is this paper just A + B (arbitrary mix and match), or are there deeper ramifications for using this combination?\n\n - There is the big question also of what is hidden in the constants. For example, the scale of delta may be big in practice. More to the point, if delta is small in practice, to the point where the rates are tight, wouldn't the competitor methods work pretty well? A more extensive numerical experiments comparison would show if this is really a practical method.\n\n - What's the big idea behind using the prox operations in step 5 of alg 1? Is it just to achieve optimal rates? I ask because it seems prox operations are used, but aren't really discussed in terms of key benefits and numerical insights here, and the standard is usually just gradient descent. \n\nTechnical questions\n - In (8), I see a dependency of error on 1/T^2. However, in (9), T is a constant. This doesn't seem logical, since later inner iterations should require higher precision (and thus larger T) than initial ones. This of course affects the main final results (Thm 5)\n\n - Theorems 4 and 10 seem to better match the inner / outer loop issues, but they are kind of terrible rates then. How do they compare against competitors?\n n/a" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 2 ]
[ "YAw2pncJ2k0", "rh9efOzqPbr", "doy3gPHkLK", "y-8DB4NSnRl", "mz4HIW79FZ6", "34z5ZUuKTpS", "LtobKKzTxEy", "KrL9Kg2Glzd", "aqFyn9WBywqT", "qNHVaZv3kNi", "nips_2022_QrK0WDLVHZt", "MqKLv3GFmCo", "e2eWwStdvc7", "b_mc53LVoAk", "Wl-m5TaTRup", "3252QHkc7NS", "n_zdf4pi6t-", "F7iPpsR-9-", "F7iPpsR-9-", "F7iPpsR-9-", "rh9efOzqPbr", "rh9efOzqPbr", "rh9efOzqPbr", "Lvk8v68M8oc", "Lvk8v68M8oc", "Lvk8v68M8oc", "uy-zmzhBKW3", "nips_2022_QrK0WDLVHZt", "nips_2022_QrK0WDLVHZt", "nips_2022_QrK0WDLVHZt", "nips_2022_QrK0WDLVHZt", "nips_2022_QrK0WDLVHZt" ]
nips_2022_Y4vT7m4e3d
Decentralized Local Stochastic Extra-Gradient for Variational Inequalities
We consider distributed stochastic variational inequalities (VIs) on unbounded domains where the problem data is heterogeneous (non-IID) and distributed across many devices. We make a very general assumption on the computational network that, in particular, covers the settings of fully decentralized calculations with time-varying networks and centralized topologies commonly used in Federated Learning. Moreover, multiple local updates on the workers can be made to reduce the communication frequency between the workers. We extend the stochastic extragradient method to this very general setting and theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone (when a Minty solution exists) settings. The provided rates explicitly exhibit the dependence on network characteristics (e.g., mixing time), iteration counter, data heterogeneity, variance, number of devices, and other standard parameters. As a special case, our method and analysis apply to distributed stochastic saddle-point problems (SPP), e.g., to the training of Deep Generative Adversarial Networks (GANs), for which decentralized training has been reported to be extremely challenging. In experiments on the decentralized training of GANs, we demonstrate the effectiveness of our proposed approach.
Accept
The paper studies decentralized local stochastic extra-gradient for variational inequalities. An extra-gradient method is developed for this problem. Theoretical results are established and complemented by simulations. While there were some concerns about the novelty of the work in the initial review, the authors adequately addressed these comments in their response. While a number of typos were present in the paper, I believe that these can be addressed as a minor revision in the final version. I do encourage the authors to carefully proofread their camera ready submission. The work is of interest to a part of the conference audience and should be accepted.
val
[ "ge54pU1J13", "Qa17JRmBc4J", "qdRDmLAGMY", "uObtIVRaBoT", "XJ7DSdAmDfbT", "S8tFexfaKXyw", "7eZ1zkwJlg", "eqAhRRstvfL", "S5Z8vNsCscv", "TdI3wL1pUfe", "WOl2gleeOGh", "w2ckSbpKWc", "hQtcddfGG8", "zl6qj2PXtaT", "UMI4QsA_nSE" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are very grateful to Reviewer **dmBR** for the response! We are especially grateful for the careful handling of our text! Using Reviewer's response, we tried to make our paper better.\n\n> **The current algorithm includes diffusion strategies on the clients. It has been known diffusion strategies work in distributed learning. Please include a discussion on the differences with the cited papers from the aspects of algorithm design.**\n\nWe added this. See the revision of our paper (lines 117 - 124). Unfortunately, due to space limitation, we cannot describe the difference in detail. We give the basic idea. For minimization problems a combination of diffusions and gradient descent is usually used. But since we work with VIs and saddle point problems, we change the gradient descent to the classical method for VIs and saddle point problems - the extragradient method. Accordingly, in contrast to the works mentioned by Reviewer, we consider a combination of the extragradient method and diffusions.\n\n> **I list where the presentation needs to be improved in only the first page as examples**\n\nWe did another proofreading of the paper and fixed the typos found by Reviewer and some others. We hope that the text of our paper has become better. We have planned a few more proofreadings to finally make sure the quality of the text is good. We thank Reviewer once again for the careful work with the text.\n", " We are grateful for raising the score! Thanks for the review and response!", " I would thank authors for their detailed explanation. Now my concern has been resolved, and I increased my score by 1.", " Thank you for your response. I agree that this paper has made considerable contributions in the theory now. However, some concerns remain.\n\n- The current algorithm includes diffusion strategies on the clients. It has been known diffusion strategies work in distributed learning. Please include a discussion on the differences with the cited papers from the aspects of algorithm design.\n\n- I list where the presentation needs to be improved in only the first page as examples:\n\n 1. Line 1, Abstract: \"(w)e consider distributed stochastic variational inequalities (VIs) on unbounded domains with the problem data being heterogeneous (non-IID) and distributed across many devices.\" ... with the problem data being ... is a bit weird.\n\n 2. Line 9, Abstract: \"strongly monotone, monotone, and non-monotone setting\" -> \"... settings\";\n\n 3. Line 10, Abstract: \"(t)he provided rates have explicit dependence on network characteristics and how it varies with time, data heterogeneity, variance, number of devices, and other standard parameters.\" What are the \"characteristics\" here? What does \"it\" refer to?\n\n 4. Line 18, Introduction: \"(i)n large scale machine learning (ML) scenarios the training data is often is split over many client devices, such as e.g. geo-distributed datacenters or mobile devices\", \"large scale -> large-scale\"; \n \"is often is split over\" -> \"is often split over\"; \n \"such as e.g.\" -> \"such as\";\n\n 5. Line 21, Introduction: \"non-centralized\" -> \"decentralized\";\n\n 6. Line 31, Introduction: \"or personalization\" -> \"and personalization\";\n\n 7. Line 28 says advances in \"development, design, and understand\", while line 31 says \"all these methods\". It is a mismatch;\n\n 8. Line 32, Introduction: \"single objective loss functions (minimization objective)\", \"minimization objective\" -> \"minimization objectives\";\n\n 9. 
Line 32, Introduction: \"generator and discriminator objective\", \"objective\" -> \"objectives\";\n\n These are only for page 1. Many more problems exist in the rest of the paper. It definitely needs a thorough revision.", " With this message, we would just like to kindly remind the Reviewers that we would be happy if they would participate in the rebuttal discussion process. We are looking forward to hearing from Reviewers **zZ6M**, **n86C**, and **dmBR**. We thank Reviewer **nETb** for the responses to the rebuttal.", " We thank the Reviewer again for the review, the time, and the positive feedback on our work!", " I would like to thank the authors for the clarification. I have increased my score accordingly.", " We thank Reviewer **nETb** for the work! We are glad that, on the whole, the paper received a positive reaction from the Reviewer. Next, we try to resolve the issues that the Reviewer noted.\n\n> **On the downside, the proposed algorithm can only partially handle the case of non-monotone operators with Minty solutions.** In fact, both $\sigma$, the standard deviation of the noise, and $D$, the variability among nodes, appear in the bounds (9) and (10) in terms that do not depend on $K$. Even if $D=0$, one still needs to increase the batch size to get better accuracy. This is probably a problem that already needs to be addressed in the centralized case.\n\nWe agree with the Reviewer. Note that the method from the competing work [1] has the same disadvantage.\n\n> **I also feel that it is not clear what the authors want to convey through section 5.2.** We can probably also get the same types of results with other decentralized algorithms that are mentioned in related work.\n\nAmong the papers on VIs and saddle-point problems, there are no works that consider decentralized methods with local updates. In Section 5.2 we need exactly such methods, because we use different decentralized networks and communicate only once per epoch (or less frequently). This means that once per epoch (or less frequently) we have some connection network (a full graph or clusters), while in the other iterations the connection graph is empty. We check how reducing the edges in the connection graph (full graph vs. clusters) or increasing the number of local iterations (full graph at each epoch vs. full graph at every 5th epoch) affects the learning process.\n\n> **In (7) and (8) the supremum is outside the expectation** while for stochastic VIs it is more common to have the supremum inside the expectation, and there is a standard process to achieve this in the centralized case. Actually, it is more meaningful to have the supremum inside the expectation because the gap function should be used as a measure for every realization. Therefore, I am wondering if we can get a result of this kind here or if there is any technical difficulty that makes the authors present results like this. \n\n1) We agree that the criterion that the Reviewer suggested is better than the one we use. Unfortunately, we could not obtain this result without additional tricks. This is an interesting question for future work.\n\n2) Note that it is in fact possible to obtain convergence for monotone inequalities in the argument ($|| z^k - z^*||^2$). Such guarantees are even better than the discussed supremum criterion. To do this, we can use the classical regularization trick, which usually makes convex problems strongly convex. 
The idea is that the regularization parameter is so small ($\sim \varepsilon$, the accuracy of the solution) that it almost does not spoil the quality of the solution. In detail, the monotone operator $F$ is replaced with the strongly monotone $F + e_k T$, where $T$ is a strongly monotone operator and $e_k > 0$ is a regularization parameter (which depends on the iteration number $k$). If we denote by $z^k$ the solution of the regularized VI, then it is possible to prove [1] that $z^k$ converges to $z^*$ as $e_k \to 0$. A toy sketch of this trick is given below.\n
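For illustration, here is a minimal NumPy sketch of the regularization trick just described, on an assumed toy monotone (bilinear) operator. The operator, the choice $T(z) = z$, the schedule $e_k = 1/k$, and all step sizes are illustrative assumptions; see [1] for the actual theory.

```python
import numpy as np

# Toy monotone (not strongly monotone) operator: F(z) = B z, with B a rotation.
B = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda z: B @ z
T = lambda z: z  # an assumed strongly monotone regularizing operator

def regularized_extragradient(z0, gamma=0.1, outer=30, inner=200):
    """Solve a sequence of strongly monotone VIs with F_k = F + e_k * T,
    where the regularization parameter e_k decreases with k."""
    z = z0.copy()
    for k in range(1, outer + 1):
        e_k = 1.0 / k  # decreasing regularization parameter
        F_k = lambda w, e=e_k: F(w) + e * T(w)  # strongly monotone surrogate
        for _ in range(inner):  # extragradient steps on the surrogate VI
            u = z - gamma * F_k(z)
            z = z - gamma * F_k(u)
    return z

print(regularized_extragradient(np.array([1.0, 1.0])))  # tends to z* = 0
```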
\n> **In the experiments, the authors observe linear convergence in the bilinear case when there is no noise.** I understand why this should be the case in the centralized case. However, in the decentralized case with data heterogeneity, the presence of network error should prevent the iterates from converging, as we see in the strongly monotone setup. Could you please explain why this is not observed in the bilinear case?\n\n1) If we understand correctly, this question is related to Figure 1, where in the left plot, in the case of zero noise, the method stopped converging after reaching some accuracy (this result is expected and understood by the Reviewer). In the right plot it seems that the method converged and did not stop. In fact, the stopping accuracy was simply cut off in the plot; it is very much tied to the randomness of the problem generation. In the revised version we will insert a more informative plot to avoid misunderstandings. \n\n2) We also ask the Reviewer to pay attention to Figure 6 (Appendix A.2), where we compare convergence for the bilinear problem with constant and decreasing steps. In the case of zero noise, the constant-step method reaches a certain accuracy and stops (this is due to heterogeneity), while the method with the decreasing step converges more slowly, but to a much lower error. \n\n[1] A.B. Bakushinskii and B.T. Polyak. On the solution of variational inequalities\n", " We thank Reviewer **dmBR** for the work! We are glad that the Reviewer highlighted 4 pros of our paper. Meanwhile, the Reviewer identified 2 cons. Next, we try to address them.\n\n> **Please clarify the difference and advantages from the following papers**\n\nHundreds of papers on decentralized algorithms have been published so far. Comparing our results with the results of all these articles is quite problematic. Therefore, let us limit the scope of the paper to surveying only strongly related works.\n\n1) In our paper we are interested in distributed **variational inequalities** (and **saddle point problems** as a special case); this is a more general and broad class of problems than minimization problems. Moreover, in terms of theory, methods for minimization are not optimal for saddle-point problems and variational inequalities [6] (Sections 7.2 and 8.2). \n\n2) **The communication network** over which the devices are distributed can be **time-varying**. Sometimes the network can even be empty; in particular, such a setting includes methods that do **local updates** without communication.\n\n3) We give **convergence rates** in three cases: strongly monotone, monotone, and non-monotone.\n\nWhen we wrote the literature survey in our paper, we included only methods for distributed variational inequalities and saddle-point problems, and moreover only decentralized methods or methods that support local updates. \n\nNone of the papers that the Reviewer gives are devoted to variational inequalities or saddle-point problems. Moreover, some of the papers do not provide the rate of convergence of the methods, but only guarantee that the method converges (but how fast?). In particular,\n\n[1] - only minimization problems are considered (not saddle-point problems and variational inequalities), only fixed networks (not time-varying, without local updates), and convergence rates are not given;\n\n[2] - only minimization problems are considered (not saddle-point problems and variational inequalities), and convergence rates are not given;\n\n[3] - only linear regression is considered (not saddle-point problems and variational inequalities), only fixed networks (not time-varying, without local updates), and convergence rates are not given;\n\n[4] - only minimization problems are considered (not saddle-point problems and variational inequalities), only fixed networks (not time-varying, without local updates);\n\n[5] - only linear regression is considered (not saddle-point problems and variational inequalities), and convergence rates are not given.\n\nWe do not quite understand why these works should be compared against ours and why we should consider them. These works are more from signal processing than from theoretical optimization.\n\n\n> **The paper seems written in a rush. Many typos exist.**\n\nWe kindly disagree with this. We guarantee that the paper was not written in a rush; moreover, we did proofread it. We would appreciate it if the Reviewer would point out the typos; it would help make our paper better.\n\n\n\n\n[1] A. H. Sayed, Diffusion adaptation over networks, vol. 3, pp. 323-453, 2014.\n\n[2] A. H. Sayed, S. Y. Tu, J. Chen, et al, Diffusion strategies for adaptation and learning over networks: an examination of distributed strategies and network behavior, IEEE Signal Processing Magazine, vol. 30, no. 3, pp. 155-171, 2013.\n\n[3] S. Y. Xie, L. Guo, Analysis of distributed adaptive filters based on diffusion strategies over sensor networks, IEEE Transactions on Automatic Control, vol. 63, no. 11, pp. 3643-3648, 2018.\n\n[4] S.A. Alghunaim, K. Yuan, A unified and refined convergence analysis for non-convex decentralized learning. IEEE Transactions on Signal Processing. 2022.\n\n[5] A.S. Matveev, M. Almodarresi, R. Ortega, A. Pyrkin, S. Xie. Diffusion-based Distributed Parameter Estimation Through Directed Graphs with Switching Topology: Application of Dynamic Regressor Extension and Mixing. IEEE Transactions on Automatic Control. 2021.\n\n[6] I. Goodfellow, NIPS 2016 Tutorial: Generative Adversarial Networks\n", " We thank Reviewer **n86C** for the work! We are glad that on the whole the paper received a positive reaction from the Reviewer. Next, we try to resolve the issue that the Reviewer noted.\n\n> **My main concern is that the authors assume an unbounded parameter domain, but in the monotone setting, we have to assume the parameters are bounded to get a meaningful convergence bound**, i.e., in equation (7), they have to assume $\max_{z,z'}||z-z'||$ is bounded. I understand that adding projection may collapse the existing proof, but assuming a bounded domain without a projection operator looks unreasonable to me.\n\nIt seems that there is a misunderstanding: we do not assume a bounded domain. (7) holds **for any bounded set** $\mathcal{C}$ containing the solution $z^*$. $\mathcal{C}$ is not a feasible set; it is just used for the convergence criterion. This is a standard device when working with VIs on unbounded domains, in particular on $R^d$. This trick was first used by Y. Nesterov in [1]. 
We can also give a link to a newer paper that explains this in detail [2] (see 2.5a and Lemma 1). In our work, we also indicate that the set $\\mathcal{C}$ is not feasible - see line 268. Thus, there is no contradiction between this assumption and the unbounded domain. It is sufficient to assume that the solution is bounded.\n\n[1] Yurii Nesterov. Dual extrapolation and its applications to solving variational inequalities and related problems. Mathematical Programming\n\n[2] Kimon Antonakopoulos, Veronica Belmega, and Panayotis Mertikopoulos. An adaptive mirror-prox method for variational inequalities with singular operators. In Advances in Neural Information Processing Systems 32 (NeurIPS)\n", " We thank Reviewer **zZ6M** for the work! \n\nThe Reviewer was absolutely correct in noting that we present a decentralized modification of extra-gradient with local steps for solving stochastic variational inequalities and saddle point problems. The theoretical analysis is given in the strongly monotone, monotone, and non-monotone cases. \n\nWe would be grateful if the Reviewer gave us more detailed comments that explain the score and give us an opportunity to improve the paper.\n", " This paper focuses on decentralized local stochastic extra-gradient for variational inequalities.\nThis paper uses extra-gradient as the main tool to solve saddle point problems which satisfy variational inequalities.\nThe results of this paper are not surprising but still have their own value. The techniques in this paper are standard and the results of this paper are not surprising.\nBut the results of this paper have their own value. No. Yes", " This paper proposed a decentralized extra-gradient method with intermittent communications to solve distributed variational inequality problems, and one important scenario is federated GAN training or federated adversarial training. They consider the setting where clients are connected over a decentralized network and clients only synchronize with their neighborhoods. They provide the convergence rate for the strongly monotone, monotone, and non-monotone cases. The rates make sense. They also provide experiments validating their algorithm's effectiveness and efficiency. Strengths:\nAs far as I know, this paper is the first to analyze the extragradient method, a celebrated algorithm in minimax/variational inequality optimization, in decentralized and intermittent communication settings. Due to the rise of federated GANs and adversarial robust training, distributed minimax problems have received increasing attention from the optimization and ML communities. Hence, the theoretical results in this paper have their own value.\n\nWeakness:\nMy main concern is that the authors assume an unbounded parameter domain, but in the monotone setting, we have to assume the parameters are bounded to get a meaningful convergence bound, i.e., in equation (7), they have to assume max_{z,z'}||z-z' || is bounded. I understand that adding projection may collapse the existing proof, but assuming a bounded domain without a projection operator looks unreasonable to me. Please refer to the weakness part. This is a pure theory paper so I did not see any negative societal impact.", " This paper studies distributed stochastic variational inequality (VI) problems on unbounded domains while the data on different devices might be heterogeneous (non-IID). The authors propose a new algorithm in which (1) the gossip matrix is drawn from some distribution in every communication and (2) multiple updates are made on every device between two communications. 
The authors prove that the proposed method achieves speedups in all of the strongly monotone, monotone, and non-monotone cases. Comprehensive experiments show that the proposed method converges faster in solving distributed stochastic saddle-point problems (SPPs), e.g., for training deep Generative Adversarial Networks (GANs). Pros:\n+ The idea makes sense. Multiple diffusions on devices between two communications can certainly speed up the training.\n+ The experiments are nice for a theory-intensive paper. The results are in agreement with the theory.\n+ The proofs are comprehensive and clear. I do not find any major flaws.\n+ The authors provide the source code of the experiments.\n\nCons:\n- The motivations are quite straightforward. I am concerned it is not novel. Please clarify the difference and advantages from the following papers:\n\nA. H. Sayed, Diffusion adaptation over networks, vol. 3, pp. 323-453, 2014.\n\nA. H. Sayed, S. Y. Tu, J. Chen, et al, Diffusion strategies for adaptation and learning over networks: an examination of distributed strategies and network behavior, IEEE Signal Processing Magazine, vol. 30, no. 3, pp. 155-171, 2013.\n\nS. Y. Xie, L. Guo, Analysis of distributed adaptive filters based on diffusion strategies over sensor networks, IEEE Transactions on Automatic Control, vol. 63, no. 11, pp. 3643-3648, 2018.\n\nS.A. Alghunaim, K. Yuan, A unified and refined convergence analysis for non-convex decentralized learning. IEEE Transactions on Signal Processing. 2022.\n\nA.S. Matveev, M. Almodarresi, R. Ortega, A. Pyrkin, S. Xie. Diffusion-based Distributed Parameter Estimation Through Directed Graphs with Switching Topology: Application of Dynamic Regressor Extension and Mixing. IEEE Transactions on Automatic Control. 2021.\n\n- The paper seems written in a rush. Many typos exist. Please clarify the difference and advantages from the papers listed above. The authors did not discuss the limitations and potential negative societal impact of their work, though no significant issues are identified.", " In this work, the authors address the problem of solving Minty variational inequalities in decentralized optimization. They extend extra-gradient to this setup and prove convergence results under different assumptions on the operator (strong monotonicity, monotonicity, existence of a Minty solution) and a weak assumption on the network topology (expected multi-step contraction). These theoretical results are further complemented by experiments on a synthetic problem and GANs. 
This work nicely complements the existing literature by providing an exhaustive analysis of decentralized extra-gradient for solving VI problems. On the positive side, the assumption made in this work about the network allows us to take into account various communication schemes. Moreover, their convergence rates feature explicit dependence on different constants of the problem and the network. \n\nOn the downside, the proposed algorithm can only partially handle the case of non-monotone operators with Minty solutions. In fact, both $\\sigma$, the standard deviation of the noise, and $D$, the variability among nodes, appear in the bounds (9) and (10) in terms that do not depend on $K$. Even if $D=0$, one still needs to increase the batch size to get better accuracy. This is probably a problem that already needs to be addressed in the centralized case.\n\nI also feel that it is not clear what the authors want to convey through Section 5.2. We can probably also get the same types of results with other decentralized algorithms that are mentioned in the related work. 1. In (7) and (8) the supremum is outside the expectation, while for stochastic VIs it is more common to have the supremum inside the expectation, and there is a standard process to achieve this in the centralized case. Actually, it is more meaningful to have the supremum inside the expectation because the gap function should be used as a measure for every realization. Therefore, I am wondering if we can get a result of this kind here or if there is any technical difficulty that makes the authors present results like this.\n\n2. In the experiments, the authors observe linear convergence in the bilinear case when there is no noise. I understand why this should be the case in the centralized case. However, in the decentralized case with data heterogeneity, the presence of network error should prevent the iterates from converging, as we see in the strongly monotone setup. Could you please explain why this is not observed in the bilinear case? Fine
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 3 ]
[ "uObtIVRaBoT", "qdRDmLAGMY", "TdI3wL1pUfe", "S5Z8vNsCscv", "nips_2022_Y4vT7m4e3d", "7eZ1zkwJlg", "eqAhRRstvfL", "UMI4QsA_nSE", "zl6qj2PXtaT", "hQtcddfGG8", "w2ckSbpKWc", "nips_2022_Y4vT7m4e3d", "nips_2022_Y4vT7m4e3d", "nips_2022_Y4vT7m4e3d", "nips_2022_Y4vT7m4e3d" ]
nips_2022_CZNFw38dDDS
P2P: Tuning Pre-trained Image Models for Point Cloud Analysis with Point-to-Pixel Prompting
Nowadays, pre-training big models on large-scale datasets has become a crucial topic in deep learning. Pre-trained models with high representation ability and transferability achieve great success and dominate many downstream tasks in natural language processing and 2D vision. However, it is non-trivial to promote such a pretraining-tuning paradigm to 3D vision, given the limited training data that are relatively inconvenient to collect. In this paper, we provide a new perspective of leveraging pre-trained 2D knowledge in the 3D domain to tackle this problem, tuning pre-trained image models with the novel Point-to-Pixel prompting for point cloud analysis at a minor parameter cost. Following the principle of prompt engineering, we transform point clouds into colorful images with geometry-preserved projection and geometry-aware coloring to adapt to pre-trained image models, whose weights are kept frozen during the end-to-end optimization of point cloud analysis tasks. We conduct extensive experiments to demonstrate that, cooperating with our proposed Point-to-Pixel Prompting, a better pre-trained image model leads to consistently better performance in 3D vision. Enjoying the prosperous development of the image pre-training field, our method attains 89.3% accuracy on the hardest setting of ScanObjectNN, surpassing conventional point cloud models with far fewer trainable parameters. Our framework also exhibits very competitive performance on ModelNet classification and ShapeNet Part Segmentation. Code is available at https://github.com/wangzy22/P2P.
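To make the pipeline concrete, below is a minimal, illustrative PyTorch sketch of the projection-and-coloring idea described in the abstract, reconstructed from details given later in this thread (sum pooling of point features that fall into the same pixel, zero features for empty pixels, and $3\\times 3$ convolutions in the coloring module). All class and argument names and shapes here are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn


class PointToPixelSketch(nn.Module):
    """Illustrative sketch of P2P's geometry-preserved projection + coloring.

    Assumptions (not the official implementation): per-point features are
    scatter-ADDED into an HxW grid (sum pooling, so pixels hit by several
    points accumulate them; empty pixels stay zero), and a small stack of
    3x3 convolutions predicts a 3-channel color image for the frozen 2D model.
    """

    def __init__(self, feat_dim: int, img_size: int = 224):
        super().__init__()
        self.img_size = img_size
        self.coloring = nn.Sequential(
            nn.Conv2d(feat_dim, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, 3, kernel_size=3, padding=1),
        )

    def forward(self, point_feats: torch.Tensor, pixel_idx: torch.Tensor) -> torch.Tensor:
        # point_feats: (B, N, C) per-point features
        # pixel_idx:   (B, N) long tensor, flattened pixel index of each projected point
        B, N, C = point_feats.shape
        grid = point_feats.new_zeros(B, self.img_size * self.img_size, C)
        # sum pooling: features of all points falling into the same pixel are added
        grid.scatter_add_(1, pixel_idx.unsqueeze(-1).expand(-1, -1, C), point_feats)
        grid = grid.view(B, self.img_size, self.img_size, C).permute(0, 3, 1, 2)
        return self.coloring(grid)  # (B, 3, H, W) "colorful image" for the 2D backbone
```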
Accept
The paper presents a method of prompt tuning to transfer 2D pre-trained weights to tackle 3D understanding problems. All reviewers are positive about the novelty of the method. Reviewer xwSJ still expects higher performance when large 2D pretrained models are used, which is also a reasonable comment. Other 3D understanding tasks, such as segmentation and detection of outdoor scenes, are strongly encouraged, as they reflect the true needs of industry.
train
[ "wfj_ioPAs5Gl", "sqxz9DtUOVt", "5otA25gZuV0", "wXY8HOp-0c4", "eO6Fqr4bZFp", "kqe3QaivO0Y", "6q_BZXt02KxE", "YZmrhk1BSG", "ubT1Uqey4z", "3IImbi3g0GB", "s2-jiHxiMt3", "w3Lv1dxR95Z", "GoAQDbVo5CE", "wggKVfpXQ5I", "GvGKoJFSsb0", "Nj0LjORO4JM", "YqQZsJe3YaT", "Gt9U0hrzKne", "Dyyhuw-Vbo" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for upgrading your score and providing valuable feedback. We will update our revised paper according to our discussions. Thank you again for your insightful and constructive suggestions that improve paper quality!", " Thanks for your responses, which I believe are reasonable. Considering that these important discussions will be included in the final camera ready version (there is an extra page for such additions) I have increased my score to 6.", " Dear reviewer 2HDY,\n\nDoes our response address all your concerns? Please feel free to let us know if you have any further questions. \nTo view the comment, please click here: https://openreview.net/forum?id=CZNFw38dDDS&noteId=YZmrhk1BSG\n\nBest wishes!", " Dear reviewer xwSJ,\n\nDoes our response address all your concerns? Please feel free to let us know if you have any further questions.\nTo view the comment, please click here: https://openreview.net/forum?id=CZNFw38dDDS&noteId=3IImbi3g0GB\n\nBest wishes!", " Dear reviewer RwM1,\n\nDoes our response address all your concerns? Please feel free to let us know if you have any further questions. \nTo view the comment, please click here: https://openreview.net/forum?id=CZNFw38dDDS&noteId=GoAQDbVo5CE\n\nBest wishes!", " Thank you so much for your response to our comments! We wish the following response could address your concerns.\n\n#### **1. About the results of P2P with ResNet-101 on ScanObjectNN.**\n\nThe reason is that the transferability of different image models to 3D domain are different. But we can see that the scaling-up trend of the transferability **within the same architecture** is consistent. The scaling-up trend in 2D domain is the property that a larger model and more data are guaranteed to produce higher performance. This is important since it can leverage the development in hardware and datasets. As our P2P also shows such property in 3D domain, we are optimistic that with our P2P framework, 3D point cloud analysis can still benefit from future development in 2D pre-training models. We will include this analysis in our revised paper.\n\n#### **2. About the pooling strategy ablation.**\n\nWe agree that fine-grained geometric features in classification are not much important compared with other dense prediction tasks, as the final prediction is dependent on a single global feature. However, recent work like RepSurf[1] finds that the encoding of detailed local geometry can bring a positive effect on 3D models. But we still agree that pooling strategy ablations on dense prediction tasks like part segmentation that requires per-point comprehension will be more powerful and convincing than ablations on classification. We will conduct additional experiments on part-segmentation to further verify the conclusion we get in the last response and include it in our revised paper. Thanks for your suggestion!\n\n#### **3. About the limitation of P2P.**\n\nThanks for your advice! We will add a more thorough limitation discussion of P2P in our revised paper. We think that P2P may have difficulty in performing 3D tasks that concentrates on modality-dependent geometry analysis like completion, reconstruction, or upsampling. This is because P2P exploits and transfers the shared visual semantic knowledge between 2D and 3D domains, but these low-level tasks focus more on 3D domain-specific information. 
Apart from that, even though our P2P framework only requires a few trainable parameters to leverage pre-trained 2D knowledge and obtain high performance, its overall parameter count and FLOPs are still large when the image model is large. We will investigate this problem in future work.\n\n### Reference\n[1] Ran, Haoxi, Jun Liu, and Chengjie Wang. \"Surface Representation for Point Clouds.\" CVPR. 2022.", " Hi,\n\nThanks so much for your responses to my comments and questions. \n- The explanation about the data starvation problem (the diversity of categories of large-scale 3D datasets being limited compared to ImageNet-1k and 21k) makes sense. Thanks.\n- One follow-up question I have is whether there is an intuitive explanation behind why P2P with ResNet-101 on ScanObjectNN has the best performance, despite ResNet-101 having relatively lower accuracy (IN acc. = 77.4 in Table 1a). This result seems somewhat surprising and would benefit from an explanation.\n- I don't agree with the conclusion about the pooling strategy ablation that it definitively shows that geometry information is preserved by the sum-pooling point feature aggregation step. The difference between the three pooling strategies seems negligible. It seems more likely that the considered task with the largest demonstrated improvement (object classification) doesn't need fine-grained geometric features. Can you think of any 3D point cloud processing tasks which are more dependent on geometric information than object classification on which P2P might not show improvements? It would be helpful to have a discussion about this when discussing limitations of P2P (which are still missing from the paper...)\n- Are there any other limitations of P2P? (Will a discussion of this be added to a new revision of the paper?)", " Thanks for your careful review and comments! We hope the following responses answer your questions.\n\n### **1. About the updated experiment results.**\n\n> My main concern is the experiment result. Apparently, the proposed design does not improve the performance.\n\nWe implement different image models in Table 1 in our supplementary material, where P2P with ConvNeXt-L as the image model achieves 87.1% on the ScanObjectNN dataset, surpassing previous literature by a large margin. Sorry for not including it in our main paper. \n\nWe further update more comprehensive results in Table 1 in \"Response to All Reviewers\". From the quantitative results and the accuracy curve, we can conclude that with our proposed P2P prompting method, better 2D pre-trained models in one family will result in better 3D classification performance. \n\nTo compare with previous literature, the updated results are shown in Table 2 in \"Response to All Reviewers\". From the updated results we can conclude that with our proposed P2P prompting method, we achieve state-of-the-art performance on the ScanObjectNN dataset, surpassing previous best works such as PointMLP by a large margin. \n\nFor part segmentation experiments, the updated results are shown in Table 3 in \"Response to All Reviewers\". With ConvNeXt-L as the image model and UPerNet as the segmentation head, P2P also surpasses PointMLP and KPConv on instance mIoU. \n\nWe hope these updated experiment results will address your concern about P2P's performance.\n\n### **2. About the computation cost and model size for the prompting procedure.**\n\n> What are the computation cost and model size for the prompting procedure?\n\nThe FLOPs of the prompting module are 4.2G. 
The prompting module has 81.7k parameters.", " ### **3. About multi-view fusion in part segmentation.**\n\n> How to obtain point predictions for part segmentation is unclear. It seems not very reasonable to simply add the multi-view predictions. Actually, the multi-view fusion is not clearly stated in the paper.\n\nSorry for the unclear statement about the multi-view fusion. During evaluation, we re-project pixel-level predictions back to points according to the Point-to-Pixel projection correspondences. Segmentation probabilities for each point from multiple views are added together for majority voting. More specifically, let $c^k_i \\in \\mathbb{R}^{N}$ denote the segmentation prediction of point $i$ from view $k$, where $N$ is the total number of classes. Then the aggregated prediction over all $K$ views is $c_i=\\sum^K_{k=1}c^k_i$, and the final part segmentation result for point $i$ is $n=\\mathrm{arg\\,max}_{1 \\le n \\le N} (c_i)_n$.\n\nThe reason why we use a multi-view prediction summation is that one pixel may correspond to multiple points from one projection view. This causes two problems. Firstly, point-wise segmentation boundaries are blurred, since multiple points in a local region are predicted to be the same class. Secondly, segmentation confidences for points would be less distinguishable. For example, if three points $p_1, p_2, p_3$ belonging to different classes $c_1, c_2, c_3$ are projected onto the same pixel $i_a$ and the multi-hot segmentation confidence for $i_a$ is $[1/3, 1/3, 1/3, 0]$, then the argmax operation cannot decide which classes $p_1, p_2, p_3$ belong to. However, suppose that from another view, $p_1, p_4$ belonging to $c_1, c_4$ are projected onto the same pixel $i_b$ with segmentation confidence $[1/2, 0, 0, 1/2]$; then the summed segmentation probability for $p_1$ would be $[5/6, 1/3, 1/3, 1/2]$. Under this condition, the argmax operation can correctly predict that $p_1$ belongs to $c_1$.\n\n### **4. About scene level point cloud understanding tasks.**\n\n> More results on scene-level point cloud understanding with datasets like ScanNet or S3DIS are expected to illustrate the effectiveness of the prompt-tuning pipeline.\n\nThanks for your constructive suggestion. However, our main concern in this paper is using object-level experiments to demonstrate that migrating pre-trained knowledge from the 2D domain to 3D tasks is a novel and feasible learning paradigm for 3D development. We also show the potential of P2P in dense prediction tasks with experiments on part segmentation. For more complex scene-level detection and segmentation, we hope we can study them more thoroughly in future work.", " Thanks for your careful review and comments! We hope the following responses answer your questions.\n\n### **1. About the experiment results.**\n\n> Although the method leverages extra 2D image knowledge, it does not show clear performance or speed advantages over previous 3D networks on either classification or part segmentation. The parameters that need to be trained are fewer but the whole model is larger. The 2D prior knowledge is not fully exploited in this method.\n\nWe implement different image models in Table 1 in our submitted supplementary material. Sorry for not including it in our main paper. 
From the quantitative results and the accuracy curve, we can conclude that with our proposed P2P prompting method, larger 2D pre-trained models in one family will result in better 3D classification performance. We hope that this scaling trend will address your concern about how much we exploit the 2D pre-trained knowledge.\n\nTo compare with previous literature, the updated results are shown in Table 2 in \"Response to All Reviewers\". From the updated results we can conclude that with our proposed P2P prompting method, we achieve state-of-the-art performance on the ScanObjectNN dataset, surpassing previous best works such as PointMLP by a large margin. For part segmentation experiments, the updated results are shown in Table 3 in \"Response to All Reviewers\". With ConvNeXt-L as the image model and UPerNet as the segmentation head, P2P also surpasses PointMLP and KPConv on instance mIoU. We hope these updated experiment results will address your concern about P2P's performance.\n\n### **2. About the projection process.**\n\n#### **(A) Adding features for points in one pixel.**\n\n> The design of simply adding the point features in the same pixel seems trivial, and even with the explanations in Lines 190-197, I don't really think it preserves geometry. Also, no more experiments are conducted to analyze these design choices.\n\nThanks for pointing out the lack of ablation on how to aggregate features of multiple points in one pixel. We conduct ablations on max pooling, averaging, and summation, shown in the following table. We implement ViT-B as our image model on the ModelNet40 dataset, pre-trained on the ImageNet-1k dataset with supervised classification. \n\n| Method | Accuracy | \n| :-------: | :------: | \n| max | 92.2 |\n| mean | 92.3 |\n| sum | 92.7 |\n\nAs shown in the table above, the quantitative results show that the summation design is the best choice. What's more, according to the visualization results of projected and colored images in Figure 1 in our paper, the objects appear to be semi-transparent, which to some extent demonstrates the preservation of geometric information from 3D point clouds in 2D images.\n\n#### **(B) Features for pixels without points.**\n\n> For the projection from 3D to 2D, what are the pixel features for those pixels without points?\n\nSorry for the unclear statement. Features for pixels without points are initialized as zeros.\n\n#### **(C) Processing of the empty pixels in the coloring module.**\n\n> How are the empty pixels processed in the coloring module? The visualization results look very clean but actually not very smooth, and I wonder if the empty pixels are filtered out in the coloring module.\n\nFeatures for pixels without points are initialized as zeros. The convolution layers in the coloring module are applied to the whole projected image to predict a color for each pixel. We don't explicitly filter out empty pixels in the coloring module, as the learnable bias parameters in the convolution layers will predict the same color (greenish gray in our visualization) for pixels with zero value. As for the smoothness problem, it is caused by the sparsity of the point cloud. We try to solve it with the convolution layers in the coloring module with $3\\times 3$ kernel size.", " ### **2. About the flow between pixel and points.**\n\n> To my understanding, the flow is unidirectional; pre-trained image features are being used to learn a better representation for points.\n\nSorry for the unclear statement in the introduction. 
We further discuss the bidirectional knowledge flow in Section 3.1 (L141-149 in the original paper, L119-127 in the revised paper). The flow from point to pixel is more direct than the opposite: the output color of each pixel is influenced by the point features, since the pixel features are obtained from point features according to the projection correspondences. Therefore, pixel colors embrace geometry information from point clouds.\n\n### **3. About the result comparison with Point-BERT.**\n\n> P2P’s largest model achieves the same performance as Point-BERT.\n\n> Similarly, claims of “superiority” of P2P (L64, L370) are clearly not supported by the accuracy results in the experiments.\n\nSorry for the claims that were not rigorous in the first version. However, with our updated experiment results, our P2P obtains state-of-the-art performance on the ScanObjectNN dataset and surpasses previous literature by a large margin, as shown in Table 2. For the ModelNet40 dataset, we surpass Point-BERT and PointMLP-elite.\n\n### **4. About the ablation on pooling strategy in the projection process.**\n\n> Instead of taking a sum of point features in each cell (L189), did you try max pooling? It would be great to see a sensitivity analysis comparing {max,mean,sum} pooling for this.\n\nThanks for pointing out the lack of ablation studies on point feature aggregation in each pixel. The ablations on max/mean/sum pooling are shown in the following table. We implement ViT-B as our image model on the ModelNet40 dataset, pre-trained on the ImageNet-1k dataset with supervised classification. \n\n| Method | Accuracy | \n| :-------: | :------: | \n| max | 92.2 |\n| mean | 92.3 |\n| sum | 92.7 |\n\nAccording to the results, the summation operation is better than max pooling or mean pooling, which is consistent with what we discussed in Section 3.2.2 (L189-197 in the original paper, L167-175 in the revised paper). On the one hand, the max pooling operation drops much geometric information in one pixel. On the other hand, the mean pooling operation neglects the density information from the 3D domain, which also undermines the geometric knowledge in the projected images. \n\n> ..., since images of semi-transparent objects (which are unrealistic) would seemingly be out-of-distribution for pre-trained image models on ImageNet?\n\nAs for the out-of-distribution problem, we agree that the semi-transparent objects are not similar in **texture** to objects in realistic images. However, the **shape** information of the objects is hardly affected by the semi-transparent appearance. Given that the image model relies on both shape and texture for classification[1], the domain gap in texture is a trade-off choice, as it won't be a decisive factor. What's more, we try to solve this problem by tuning the normalization layers of the image model. There are two supporting reasons. Firstly, the normalization parameters affect the image texture style, according to StyleGAN[2]. Secondly, some early literature such as AdaBN[3] proposed matching normalization parameters for domain adaptation.\n\n### **References**\n\n[1] Tuli, Shikhar, et al. \"Are Convolutional Neural Networks or Transformers more like human vision?\" arXiv preprint arXiv:2105.07197. 2021. \n[2] Karras, Tero, Samuli Laine, and Timo Aila. \"A style-based generator architecture for generative adversarial networks.\" CVPR. 2019. \n[3] Li, Yanghao, et al. 
\"Adaptive batch normalization for practical domain adaptation.\" Pattern Recognition 80 (2018): 109-117.", " Thanks for your careful review and detailed comments! Hopefully the following contents could answer your questions.\n\n### **1. About the unclear motivation, problem and significance.**\n\n#### **(A) The data starvation problem.**\n\n> However, the data starvation problem seems to only exist for specific object-centric datasets such as ShapeNet. By contrast, consider the large Scannet and Waymo datasets. Moreover, recent advances in 3D rendering (e.g., NeRF) suggests that highly lifelike synthetic 3D data may soon become available. Therefore, scarcity of large datasets does not appear to be a fundamental concern.\n\nSorry for the misunderstanding caused by our motivation claims. We agree that there are large-scale scene-level datasets ScanNet and Waymo, and that recent advances in 3D rendering is promising to produce more synthetic 3D data. However, our main concern is that the scale and generalizability of these 3D datasets are relatively weaker than their counterparts in 2D domain. For example, there are ImageNet-1k that contains 1.2M images from 1000 categories, not to mention the larger ImageNet-21k dataset. After all, it is much easier to obtain various images from the Internet. On the contrary, there are only 1513 scenes containing 20 categories in indoor dataset ScanNet, while outdoor dataset Waymo also contains no more than 30 categories. Therefore, their diversity and volume lag behind 2D pre-training datasets. Another advantage of 2D pre-training is that it can consistently scale-up. In other words, larger model size and larger data size will consistently produce higher performance. Numerous mature pre-training methods based on ImageNet are proposed based on this property and show promising performances on both classification and downstream tasks.\n\nTherefore, it would be great if the abundant datasets and outstanding pre-training mechanism in 2D domain could help 3D development, since they both illustrate the visual world and share many similarities. And this doesn't contradict with more data in 3D domain. Our main contribution is proposing a new learning paradigm to leverage 2D pre-training knowledge to 3D domain at a low trainable parameter cost. We implement different image models and the results are included in our submitted supplementary material. We further update more comprehensive results in Table 1 in \"Response to All Reviewers\". From the quantitative results and the accuracy curve, we can conclude that with our proposed P2P prompting method, the scaling-up property in 2D domain is successfully kept, as larger 2D pre-trained models in one family will result in better 3D classification performance. This demonstrates the feasibility of leveraging the abundant 2D datasets and the development in 2D pre-training to 3D domain.\n\n#### **(B) Results comparisons with pre-trained point cloud Transformers.** \n\n> Moreover, point B) seems plainly false since recent methods like Point-BERT work just as well as P2P on, e.g., ModelNet40.\n\nTo compare with previous literature, the updated results are shown in Table 2 in \"Response to All Reviewers\". From the updated results we can conclude that with our proposed P2P prompting method, we achieve the state-of-the-art performance on ScanObjectNN dataset, surpassing previous best works such as Point-BERT, PointMLP by a large margin. 
On the ModelNet40 dataset, we also surpass Point-BERT and PointMLP-elite.\n\nFor part segmentation experiments, the updated results are shown in Table 3 in \"Response to All Reviewers\". With ConvNeXt-L as the image model and UPerNet as the segmentation head, P2P also surpasses Point-BERT, PointMLP, and KPConv on instance mIoU. We hope these updated experiment results will address your concern about P2P's performance.\n\n#### **(C) Benefits of the proposed P2P prompting.** \n\n> As a result, it is unclear what the actual problem is that is being addressed here and why this prompting method is needed at all. The main benefit of P2P seems to be in the use of fewer model parameters, but it's unclear why this is important.\n\nThere are three benefits of our proposed P2P. **Firstly, high performance.** According to Table 2, our P2P obtains state-of-the-art performance on the ScanObjectNN dataset and surpasses previous literature by a large margin. **Secondly, a consistent scaling-up trend.** According to Table 1, our proposed prompting mechanism can largely benefit from the remarkable progress in 2D pre-training, as larger-scale image models in one family will consistently result in better 3D performance. **Thirdly, low prompting cost.** Prompting is an important mechanism to transfer pre-trained knowledge to downstream tasks at a low tuning cost. It would benefit from recent advances in foundation models and would contribute to future unified-model research, as different input modalities and different output tasks can share the same large-scale foundation model and only require a light-weight prompting module for adaptation.\n\n", " Thanks for your careful review and comments! We hope the following responses answer your questions.\n\n### **1. About the ablations on improved backbone architecture and added compute cost.**\n\n> How much of the improvement is from the improved backbone architecture and added compute cost? It would be great to show an ablation study on that.\n\nThanks for your insightful suggestion about ablations on the improved backbone architecture. Actually, we've conducted similar ablation studies on image models pre-trained on the ImageNet-21k[6] dataset in Table 1 in our submitted supplementary material. Sorry for not including it in our main paper. We further update more comprehensive results in Table 1 in \"Response to All Reviewers\". From the quantitative results, we can conclude that with our proposed P2P prompting method, larger 2D pre-trained models from one family will result in better 3D classification performance. \n\nAs for the added computation cost, since we use the same prompting module for different image models, the FLOPs from P2P prompting remain the same: **4.2G**. The FLOPs of the improved image models are referenced from their original papers and shown in the following table.\n\n| ResNet | FLOPs | ViT | FLOPs | Swin | FLOPs | ConvNeXt | FLOPs |\n| :-------: | :---: | :---: | :---: | :--: | :---: | :------: | :---: |\n| ResNet-18 | 1.8G | ViT-T | 1.1G | Swin-T | 4.5G | ConvNeXt-T | 4.5G |\n| ResNet-50 | 3.8G | ViT-S | 4.6G | Swin-S | 8.7G | ConvNeXt-S | 8.7G |\n| ResNet-101 | 7.6G | ViT-B | 17.5G | Swin-B | 15.4G | ConvNeXt-B | 15.4G |\n| | | | | | | ConvNeXt-L | 34.4G\n\n### **2. About the potential of P2P on segmentation tasks.**\n\n> The proposed approach has some limitations on the tasks it can be applied to. 
For example, it may not work if we want to conduct a 3D segmentation task.\n\nTo illustrate the potential of P2P to be applied to tasks other than classification, we conduct experiments on part segmentation, and our improved results with improved image models are shown in Table 3 in \"Response to All Reviewers\". The part segmentation results on instance mIoU surpass previous outstanding works like KPConv, which demonstrates the potential of our P2P to perform segmentation tasks.\n\n> Related to the question above, the comparison in Table 4 is not fair due to lack of model complexity analysis.\n\nThanks for pointing out the lack of complexity analysis in Table 4. The trainable parameters for each model are listed below. \n\n| Model | Trainable Parameters |\n| :---- | :--------------------: |\n| PointNet++ | 1.4 M |\n| DGCNN | 1.8 M |\n| Point-BERT | 21.1 M |\n| PointMLP | 12.6 M |\n| KPConv | 15.2 M |\n| *P2P (ViT-B-MAE-SFPN, original)* | 0.3 M |\n| P2P (ConvNeXt-B-SFPN) | 6.1 M |\n| P2P (ConvNeXt-L-UPer) | 71.7 M |\n\nWith the ConvNeXt-B-SFPN setting, under a relatively low trainable-parameter cost, we achieve competitive performance. With the stronger ConvNeXt-L-UPer setting, we use more parameters to obtain higher segmentation performance. The extra trainable parameters mainly come from the UPerNet segmentation head. We agree that the trainable parameters for the ConvNeXt-L-UPer setting are relatively large, but we want to emphasize that the scaling-up property of our proposed P2P framework is crucial. In future work, we plan to analyze P2P in segmentation more thoroughly for a better performance-parameter balance. ", " \n### **3. Segmentation results comparisons with previous literature.**\nWe show the updated part segmentation results below. The base version uses ConvNeXt-B as the image model with SemanticFPN[7] as the segmentation head. The advanced version uses ConvNeXt-L as the image model with UPerNet[8] as the segmentation head. We also show the results of the P2P (ViT-B-MAE-SFPN) setting in our original paper. This table corresponds to Table 4 in our revised paper. We report the mean IoU across all part categories with mIoU_C and the mean IoU across all instances with mIoU_I.\n#### **Table 3. Part Segmentation Results**\n| Model | mIoU_C | mIoU_I |\n| :---------- | :----: | :----: |\n| DGCNN | 82.3 | 85.2 |\n| Point-BERT | 84.1 | 85.6 |\n| PointMLP | 84.6 | 86.1 |\n| KPConv | 85.1 | 86.4 |\n| *P2P (ViT-B-MAE-SFPN, original)* | 81.7 | 85.0 |\n| P2P (ConvNeXt-B-SFPN) | 82.5 | 85.7 |\n| P2P (ConvNeXt-L-UPer) | 84.1 | 86.5 |\n\n### **References**\n[1] He, Kaiming, et al. \"Deep residual learning for image recognition.\" CVPR. 2016. \n[2] Liu, Zhuang, et al. \"A convnet for the 2020s.\" CVPR. 2022. \n[3] Dosovitskiy, Alexey, et al. \"An image is worth 16x16 words: Transformers for image recognition at scale.\" arXiv preprint arXiv:2010.11929. 2020. \n[4] Liu, Ze, et al. \"Swin transformer: Hierarchical vision transformer using shifted windows.\" ICCV. 2021. \n[5] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. \"Imagenet classification with deep convolutional neural networks.\" NeurIPS. 2012. \n[6] Ridnik, Tal, et al. \"Imagenet-21k pretraining for the masses.\" arXiv preprint arXiv:2104.10972. 2021. \n[7] Kirillov, Alexander, et al. \"Panoptic feature pyramid networks.\" CVPR. 2019. \n[8] Xiao, Tete, et al. \"Unified perceptual parsing for scene understanding.\" ECCV. 2018.", " We would like to thank all reviewers for their careful review and insightful feedback! 
We are excited that they found our proposed idea to be \"novel\"[R1,R2,R3,R4] and \"interesting\"[R2,R3,R4] and our proposed framework to be \"elegant\"[R1,R2].\n\nWe also appreciate their suggestions to make our work better. We notice that many reviewers have concerns about the experiment results. To further demonstrate the effectiveness of our proposed P2P framework, we've updated some experiment results as listed below. We also update these experiment results in our revised paper, where our major revisions are marked in blue. \n\n### **1. P2P variants with different image models.** \nThanks to Reviewer RwM1's suggestion, we implement different scales of the convolution-based ResNet[1] and ConvNeXt[2] and the attention-based Vision Transformer[3] and Swin Transformer[4] as the image model in our P2P framework. These image models are pre-trained on the ImageNet-1k[5] dataset with supervised classification. We report classification accuracy on the ModelNet40 (MN Acc.) and ScanObjectNN (SN Acc.) datasets. We also report the classification accuracy of the image model on ImageNet (IN Acc.). We report the trainable parameters of each P2P framework with Tr. Param. Note that we've conducted similar ablation studies on image models pre-trained on the ImageNet-21k[6] dataset in Table 1 in our submitted **supplementary material**. Here we make this ablation more comprehensive. This table corresponds to Table 1 in our revised paper.\n#### **Table 1. Classification Results of P2P Variants with Different Image Models.**\n#### *(a) ResNet.*\n| Image Model | IN Acc. | Tr. Param. | MN Acc. | SN Acc. |\n| :---------- | :----: | :--------: | :-----: | :-----: |\n| ResNet-18 | 69.8 | 109 K | 91.6 | 82.6 |\n| ResNet-50 | 76.1 | 206 K | 92.5 | 85.8 |\n| ResNet-101 | 77.4 | 257 K | 93.1 | 87.4 |\n#### *(b) Vision Transformer.*\n| Image Model | IN Acc. | Tr. Param. | MN Acc. | SN Acc. |\n| :---------- | :----: | :--------: | :-----: | :-----: |\n| ViT-T | 72.2 | 99 K | 91.5 | 79.7 |\n| ViT-S | 79.8 | 116 K | 91.8 | 81.6 |\n| ViT-B | 81.8 | 150 K | 92.7 | 83.4 |\n#### *(c) Swin Transformer.*\n| Image Model | IN Acc. | Tr. Param. | MN Acc. | SN Acc. |\n| :---------- | :----: | :--------: | :-----: | :-----: |\n| Swin-T | 81.3 | 136 K | 92.1 | 82.9 |\n| Swin-S | 83.0 | 154 K | 92.5 | 83.8 |\n| Swin-B | 83.5 | 178 K | 92.6 | 84.6 |\n#### *(d) ConvNeXt.*\n| Image Model | IN Acc. | Tr. Param. | MN Acc. | SN Acc. |\n| :---------- | :----: | :--------: | :-----: | :-----: |\n| ConvNeXt-T | 82.1 | 126 K | 92.6 | 84.9 |\n| ConvNeXt-S | 83.1 | 140 K | 92.8 | 85.3 |\n| ConvNeXt-B | 83.8 | 159 K | 93.0 | 85.7 |\n| ConvNeXt-L | 84.3 | 198 K | 93.2 | 86.2 |\n\n### **2. Classification results comparisons with previous literature.**\nWhen comparing with previous literature, we show a base version and an advanced version of P2P. In the base version, we implement ResNet-101 as the image model and use a simple fully connected layer as the classification head. In the advanced version, we implement ConvNeXt-L pre-trained on the ImageNet-21k[6] dataset as our image model and replace the fc classification head with a multi-layer perceptron (MLP). We also show the results of the P2P (ViT-B-MAE) setting in our original paper. This table corresponds to Table 2 in our revised paper. We report the classification accuracy on ModelNet40 (MN) and ScanObjectNN (SN). We also report the trainable parameters with Tr. Param. and the pre-training type for each method.\n#### **Table 2. Classification Results on ModelNet40 and ScanObjectNN**\n| Method | Pre-train | Tr. Param. | MN Acc. | SN Acc. 
|\n| :---------- | :------: | ---------: | :-----: | :-----: |\n| PointNet++ | N/A | 1.4 M | 90.7 | 77.9 |\n| DGCNN | N/A | 1.8 M | 92.9 | 78.1 |\n| MVTN | N/A | 14.0 M | N/A | 82.8 |\n| PointMLP-elite | N/A | 0.68 M | 93.6 | 83.8 |\n| PointMLP | N/A | 12.6 M | 94.1 | 85.4 |\n| DGCNN-OcCo | 3D | 1.8 M | 93.0 | N/A |\n| Point-BERT | 3D | 21.1 M | 93.2 | 83.1 |\n| *P2P (ViT-B-MAE, original)* | 2D | 0.17 M | 93.2 | 84.5 |\n| P2P (ResNet-101) | 2D | 0.25 M | 93.1 | 87.4 |\n| P2P (ConvNeXt-L-21k-mlp) | 2D | 1.0 M | 93.7 | 87.6 |\n\n", " This paper proposes a new model architecture for 3D problems which leverages powerful backbones pretrained on 2D tasks. The idea is straightforward. The input point cloud is projected into 2D pixels using an encoder model, then the 2D pixels are colored by a coloring module, and the colored images are fed into a pretrained ViT backbone, and then predictions are made by the task-specific heads. The overall approach provides an elegant solution to leverage the representations of 2D models. The experimental results demonstrate superior performance on public benchmarks including the ModelNet40 and ShapeNetPart datasets. 1. Novel model architecture to utilize pretrained 2D models. To my knowledge, the idea of using projection into 2D and a coloring module is new. \n2. The idea is simple yet effective. The pretrained 2D models are easy to get and the results are promising. How much of the improvement is from the improved backbone architecture and added compute cost? It would be great to show an ablation study on that. The proposed approach has some limitations on the tasks it can be applied to. For example, it may not work if we want to conduct a 3D segmentation task. \nRelated to the question above, the comparison in Table 4 is not fair due to lack of model complexity analysis.", " In this paper, the authors introduce point-to-pixel prompting (P2P), a learning framework for leveraging pre-trained image transformers for 3D tasks. P2P learns a geometry-preserving transformation from point cloud to 2D grid, and then a projection to prepare the 2D grid data to be processed by a pre-trained image transformer expecting image tokens. The main benefit of P2P, to my understanding, is the ability to achieve comparable accuracy to other 3D models with many fewer parameters that need to be trained with 3D data. This is validated on two tasks, 3D object classification and 3D part segmentation. Strengths\n\n-----------------\n- the question of whether knowledge can be transferred from large pre-trained image models for use with 3D domains is interesting\n- the point-to-pixel prompt pipeline, which is nicely visualized in Figure 2, appears to be novel and is simple and elegant\n- this work is a nice demonstration of ideas from NLP transferring successfully to other domains (in this case, to 3D point cloud processing)\n- the paper is well-written and easy to read\n\n\nWeaknesses\n\n----------------- \n- $\\text{\\textbf{Unclear motivation, problem, and significance}}$: The current set of claims in the introduction are that A) there is a data starvation problem in the 3D domain (L34-35) and B) pre-training point cloud transformers suffers from an imbalance between the number of trainable parameters and limited training data, leading to insufficient optimization and overfitting (L40-41). However, the data starvation problem seems to only exist for specific object-centric datasets such as ShapeNet. 
By contrast, consider the large ScanNet and Waymo datasets. Moreover, recent advances in 3D rendering (e.g., NeRF) suggest that highly lifelike synthetic 3D data may soon become available. Therefore, scarcity of large datasets does not appear to be a fundamental concern. Moreover, point B) seems plainly false since recent methods like Point-BERT work just as well as P2P on, e.g., ModelNet40.\n- As a result, it is unclear what the actual problem is that is being addressed here and *why* this prompting method is needed at all. The main benefit of P2P seems to be in the use of fewer model parameters, but it's unclear why this is important.\n- $\\text{\\textbf{Multiple unsubstantiated claims}}$. These can be addressed with careful editing. \n - (L54) “The end-to-end optimization pipeline and the strategy of freezing the pre-trained image model promote the *bidirectional* knowledge flow between points and pixels”. To my understanding, the flow is *unidirectional*; pre-trained image features are being used to learn a better representation for points.\n - (L271) “Firstly, our P2P outperforms traditional 3D pretraining methods” (on ModelNet40). P2P’s largest model achieves the same performance as Point-BERT.\n - Similarly, claims of “superiority” of P2P (L64, L370) are clearly not supported by the accuracy results in the experiments.\n\n\nReferences\n\n-----------------\n- Dai, Angela, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. \"Scannet: Richly-annotated 3d reconstructions of indoor scenes.\" In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 5828-5839. 2017. \n- Sun, Pei, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo et al. \"Scalability in perception for autonomous driving: Waymo open dataset.\" In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 2446-2454. Instead of taking a sum of point features in each cell (L189), did you try max pooling? It would be great to see a sensitivity analysis comparing {max,mean,sum} pooling for this, since images of semi-transparent objects (which are unrealistic) would seemingly be out-of-distribution for pre-trained image models on ImageNet?\n\nCurrently, the strengths and weaknesses of the paper roughly balance each other out in my opinion, and I believe the work as it stands is borderline. I would be interested to read the authors' response to my feedback. Thanks!\n\n============================\nAfter reading the authors' responses to my questions and concerns, I have increased my score to reflect that I feel the strengths now outweigh the weaknesses.\n\n No. Limitations of the P2P framework should be discussed in the main text (e.g., in the conclusions section). ", " The paper proposes point-to-pixel prompting to leverage 2D pre-trained models to help 3D point cloud recognition tasks. The main modules include a geometry-preserved projection and a geometry-aware coloring, which fill the gap between 3D point clouds and 2D images. The experiments on ModelNet40 and ScanObjectNN show that P2P achieves comparable performance on classification tasks with only a few trainable parameters. Strengths:\n1. The paper is the first to propose a prompt-tuning method to adapt 2D pre-trained parameters to 3D, which is an interesting and novel exploration.\n2. With P2P prompting, the model can achieve competitive results on the shape classification task with far fewer trainable parameters.\n\nWeaknesses:\n1. 
Although the method leverages extra 2D image knowledge, it does not show clear performance or speed advantages over previous 3D networks on either classification or part segmentation. The parameters that need to be trained are fewer, but the whole model is larger. The 2D prior knowledge is not fully exploited in this method. \n2. The design of simply adding the point features in the same pixel seems trivial, and even with the explanations in Lines 190-197, I don't really think it preserves geometry. Also, no more experiments are conducted to analyze these design choices. \n3. More results on scene-level point cloud understanding with datasets like ScanNet or S3DIS are expected to illustrate the effectiveness of the prompt-tuning pipeline. Please refer to the weaknesses. Some more questions are listed below:\n1. For the projection from 3D to 2D, what are the pixel features for those pixels without points?\n2. How to obtain point predictions for part segmentation is unclear. It seems not very reasonable to simply add the multi-view predictions. Actually, the multi-view fusion is not clearly stated in the paper. \n3. How are the empty pixels processed in the coloring module? The visualization results look very clean but actually not very smooth, and I wonder if the empty pixels are filtered out in the coloring module. The limitation is discussed in Sec 4.3. ", " In this paper, the authors propose to leverage pretrained image models for point cloud downstream tasks. Specifically, they introduce Point-to-Pixel Prompting to transform a point cloud into the corresponding image. Strengths\n1) The paper is well written with clear motivation and good organization.\n2) Leveraging 2D pretraining for 3D tasks is an interesting topic.\n3) Point-to-Pixel Prompting is novel.\n\nWeakness\n1) My main concern is the experiment result. Apparently, the proposed design does not improve the performance.\n2) What are the computation cost and model size of the prompting procedure? See Strengths And Weaknesses See Strengths And Weaknesses" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 5, 3 ]
[ "sqxz9DtUOVt", "kqe3QaivO0Y", "YZmrhk1BSG", "ubT1Uqey4z", "GoAQDbVo5CE", "6q_BZXt02KxE", "s2-jiHxiMt3", "Dyyhuw-Vbo", "3IImbi3g0GB", "Gt9U0hrzKne", "w3Lv1dxR95Z", "YqQZsJe3YaT", "Nj0LjORO4JM", "GvGKoJFSsb0", "nips_2022_CZNFw38dDDS", "nips_2022_CZNFw38dDDS", "nips_2022_CZNFw38dDDS", "nips_2022_CZNFw38dDDS", "nips_2022_CZNFw38dDDS" ]
nips_2022_NQFFNdsOGD
Your Transformer May Not be as Powerful as You Expect
Relative Positional Encoding (RPE), which encodes the relative distance between any pair of tokens, is one of the most successful modifications to the original Transformer. As far as we know, the theoretical understanding of RPE-based Transformers is largely unexplored. In this work, we mathematically analyze the power of RPE-based Transformers regarding whether the model is capable of approximating any continuous sequence-to-sequence function. One may naturally assume the answer is in the affirmative---that RPE-based Transformers are universal function approximators. However, we present a negative result by showing that there exist continuous sequence-to-sequence functions that RPE-based Transformers cannot approximate no matter how deep and wide the neural network is. One key reason lies in the fact that most RPEs are placed in the softmax attention, which always generates a right stochastic matrix. This restricts the network from capturing positional information in the RPEs and limits its capacity. To overcome the problem and make the model more powerful, we first present sufficient conditions for RPE-based Transformers to achieve universal function approximation. With this theoretical guidance, we develop a novel attention module, called Universal RPE-based (URPE) Attention, which satisfies the conditions. Therefore, the corresponding URPE-based Transformers become universal function approximators. Extensive experiments covering typical architectures and tasks demonstrate that our model is parameter-efficient and can achieve superior performance to strong baselines in a wide range of applications. The code will be made publicly available at https://github.com/lsj2408/URPE.
Accept
This paper studies relative positional encoding (RPE)-based Transformers. The authors present a negative result that there exist continuous sequence-to-sequence functions that RPE-based Transformers cannot approximate (irrespective of the depth and width of the network). The authors then propose a novel attention module, called Universal RPE-based (URPE) Attention, which resolves this problem and shows superior performance on a wide range of applications. There is a strong consensus amongst the reviewers that the paper is technically solid, novel, well-motivated and has good practical applications. I agree with the reviewers and recommend acceptance.
train
[ "kmIhJGXk3B", "YIlkYrGKH4z", "VBORkb5LxR7", "1aY_Bc4QiP1", "KfWrSZHaD-", "gJ0GvGNF17dh", "GQcMMKhC0q3", "JI7fG4WY77L", "5yosMZcm_W", "mUZQkCHm4YI", "Dxl0lYEnx_z", "lCVBF1aW-_Y", "pdA4B0WIkU", "MDPWUjwa6DE", "UDqgijb7kqQ", "KXCso_8veZB" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank you for your appreciation of our work! Your feedback is insightful to help us improve our paper. Thanks!", " Thanks for the authors' responses. Overall, I think this is a practical method with good theoretical proof. The paper writing, mathematical analysis, and experiments on kinds of modality make this paper solid and reliable. I choose to maintain my original positive rating.", " We are encouraged and delighted to know that our response has addressed your concerns. As you advised, we have updated our paper and provided an ablation study on the length of input sequences to make our paper more concrete. Please refer to Section C.1 in the Appendix. We sincerely thank you again for your valuable feedback!", " We sincerely thank all the reviewers and the area chair for their efforts in reviewing our paper. The comments have enlightened us to ponder how to improve the quality of our submission. As you advice, we add new experimental results and discussions to our paper, including:\n\n- The Runtime and Memory Usage Evaluation of our URPE-based Transformer (Section C.4 in the Appendix)\n- The performance of our URPE-based Transformers of different model sizes (Section C.5 in the Appendix)\n- The comparison between our URPE-based Transformers and Transformers with both APE and RPE (Section C.5 in the Appendix)\n- Ablation Study on the length of input sequences (Section C.1 in the Appendix).\n\nPlease let us know if you have any further concerns and we are willing to answer any further questions you have on our paper. Thank you again for your insightful feedback.\n\nThanks!\n\nPaper 1737 Authors", " Hi authors,\n\nthank you for your response for my comments.\n\nI misunderstood the vocabulary size, yes, it is clear that when increasing the number of vocabulary candidates, the problem will be harder and it is impressive that URPE showed good performance even though the number increases.\n\nFor sequence length evaluation, I think it can show the main problems on RPE and URPE solved that clearly, so adding the results can make this paper more concrete.", " Thank you very much for supporting our work! We appreciate your advice on the experiments. Here are our responses to your questions:\n\n**Regarding the performance improvements.** It is worth noting that all the improvements are obtained with negligible more parameters compared to the backbone Transformers. For the Language Modeling task, there are only 4K newly introduced parameters while we obtain 0.8 lower test perplexity score. Besides, we can see that our URPE-based Attention enables the Graphormer to reduce more than 40\\% relative MAE on the ZINC dataset, which is a significantly large performance gain. On PCQM4M dataset, our URPE-based Attention improves the performance of the Graphormer with 12.5M parameters to match the performance of the Graphormer with 48.3M parameters. Under the quantum chemistry precision, this improvement should be considered to be significant.\n\n**Regarding the model architectures and sizes.** As stated in Line 216 to 218, the principles of our experimental design include covering typical RPE-based Transformer architectures and sizes, which indeed aligns with your advice. Briefly, we choose three different architectures: 1) Transformer with T5-style RPE (in Section 5.1); 2) Transformer-XL (in Section 5.2); 3) Graphormer (in Section 5.3). Besides, the model sizes also vary from 12.5M to 151M, covering Transformer-Tiny, Transformer-Small, and Transformer-Base. 
We follow your advice and conducted experiments on the Language Modeling task with 4-layer and 8-layer Transformer-XL models. The results on the wikitext-103 dataset are presented in Table 1. Due to time limitations and restricted computational resources, we will conduct experiments on larger models and on vision tasks and add the new results to the next version of our paper.\n\n| **Valid PPL** | **L=4** | **L=8** |\n| :------------------------ | :------ | :------ |\n| RPE-based Transformer-XL | 29.61 | 25.98 |\n| URPE-based Transformer-XL | 28.72 | 25.15 |\n\nTable 1. Validation perplexity of the RPE-based and URPE-based Transformer-XL with different numbers of layers.", " Thank you very much for supporting our work! We respond to your questions below.\n\n**Regarding comparisons between RPEs and absolute positional encodings (APEs) in the synthetic experiment.**\nIt is correct that the APE-based Transformer can perfectly solve the designed synthetic tasks, and we had empirically verified this before the submission. As this experiment is a bit beyond the scope of our work (studying the capacity of RPE-based models, since in many practical scenarios, e.g., long sequences, images, and graphs, APE is not straightforward to apply), we purposely removed it from the paper to avoid confusion. We are willing to add it back if the reviewer feels it can strengthen our work. \n\n**Regarding combinations (and comparisons) of RPEs and APEs in real experiments.**\nThanks for the question. Following your suggestion, we conducted experiments on language pre-training to test different PE strategies. We chose this task since we noticed that in some competitive pre-training methods like UniLMv2 [1], APE and RPE have already been used together. We mainly test three model variants: the APE+RPE Transformer, the RPE Transformer, and our URPE Transformer. For all the models, RPE is set to the T5 version, following UniLMv2. We keep the number of parameters of the different models roughly the same and train the models in the BERT-base setting using the same hyper-parameters. \n\nDue to the tight schedule of the rebuttal period, we only obtained the validation loss in the pre-training stage (masked language modeling loss after 1M iterations on a hold-out validation set). We observed that the validation losses of the APE+RPE/RPE/URPE Transformers are 1.86/1.94/1.87, respectively. The results show that the URPE Transformer is competitive with the APE+RPE Transformer and much better than the RPE Transformer. \n\nTogether with the above observations on the synthetic dataset, we can see that URPE is competitive with or superior to previous APE/RPE schemes or their combinations. We will add those results in the next version of the paper. \n\n**Regarding whether the improvement comes from the increased expressiveness or not.**\nThanks for the question. Showing where the improvement comes from in a rigorous way is challenging. For the synthetic dataset, the task is designed to be theoretically difficult for the RPE Transformer, so we believe the improvement comes from better expressiveness. For the language pre-training task, the community usually observes that models with larger capacity get better results (e.g., GPT-2 vs. GPT-3, BERT-base vs. BERT-large). Therefore, in the experiment above, we think the improvement from RPE to URPE may also come from better expressiveness. 
We agree it is an important question and will investigate it further.\n\n**Regarding why the community uses APE.**\nThanks for the question. We think the language pre-training experiment above can answer it. It can be seen from the results that using RPE only is worse than using APE+RPE, which suggests that RPE may not be powerful enough to replace the APE module entirely. We agree with the reviewer that APE may not be a perfect way to model sequential behavior. Our work can be considered an initial exploration that investigates the disadvantages of RPE and addresses its limitations. \n", " \n**Regarding the computational cost.** We further conducted memory and time cost profiling experiments on our URPE-based Transformers. We chose the vanilla Transformer as the backbone model. The number of layers and the hidden dimension are set to 12 and 768, respectively. The number of attention heads is set to 12. The batch size is set to 32. We vary the sequence length over [128, 256, 512]. We run profiling of all the models on a 16GB NVIDIA Tesla V100. Following Combiner [2], we compare the inference speed and memory costs of the vanilla Transformer with RPE and with our URPE. The results are presented in Tables 1 and 2, which show that our URPE only adds minor computational costs.\n\n| **Inference Runtime (ms in log base 2)** | **128** | **256** | **512** |\n| :--------------------------------------- | :------ | :------ | :------ |\n| RPE-based Transformer | 4.55 | 5.60 | 6.79 |\n| URPE-based Transformer | 4.59 | 5.66 | 6.91 |\n\nTable 1. Inference runtime (ms in log base 2) of the RPE-based and URPE-based Transformer with different sequence lengths.\n\n| **Memory (GB)** | **128** | **256** | **512** |\n| :---------------------- | :------ | :------ | :------ |\n| RPE-based Transformer | 0.96 | 1.12 | 1.86 |\n| URPE-based Transformer | 0.97 | 1.17 | 2.04 |\n\nTable 2. Peak memory usage (GB) of the RPE-based and URPE-based Transformer with different sequence lengths.\n\n[1] Bao, Hangbo, et al. "Unilmv2: Pseudo-masked language models for unified language model pre-training." International Conference on Machine Learning. PMLR, 2020.\n\n[2] Ren, Hongyu, et al. "Combiner: Full attention transformer with sparse computation cost." Advances in Neural Information Processing Systems 34 (2021): 22470-22482. https://openreview.net/forum?id=MQQeeDiO5vv&noteId=-h5HnwArwV-", " Thank you very much for the careful review! We would like to point out that the Longformer/Big Bird models you mentioned and our work study and improve different aspects of the Transformer model. \n\nLongformer, Big Bird, and many other seminal works, including Sparse Transformer [1], Linformer [2], Reformer [3], Performer [4], Random Feature Attention [5], and Transformer-XL [6], belong to the family called "Efficient Transformers" [7]. As the name suggests, all the above models target improving inference efficiency and reducing the computational/memory cost of the self-attention module, particularly for long-sequence understanding and generation tasks. However, our work investigates model capacity by studying whether the Transformer model can approximate continuous functions well.\n\nAs those models and ours target different issues (i.e., efficiency vs. capacity) in the Transformer architecture, they can be well combined. 
This is precisely what we try to deliver in the URPE-based Transformer-XL experiment: the Transformer-XL model can be improved with URPE-based attention for long-sequence generation tasks. We hope our explanation can address your concerns, and we will conduct more experiments combining URPE-based attention with other efficient models.\n\n[1] Child, Rewon, et al. "Generating long sequences with sparse transformers." ICML 2018. \n\n[2] Wang, Sinong, et al. "Linformer: Self-attention with linear complexity." arXiv preprint 2020.\n\n[3] Kitaev, Nikita, Łukasz Kaiser, and Anselm Levskaya. "Reformer: The efficient transformer." ICLR 2020.\n\n[4] Choromanski, Krzysztof, et al. "Rethinking attention with performers." ICLR 2021.\n\n[5] Peng, Hao, et al. "Random feature attention." ICLR 2021.\n\n[6] Dai, Zihang, et al. "Transformer-xl: Attentive language models beyond a fixed-length context." ACL 2019.\n\n[7] Tay, Yi, et al. "Efficient transformers: A survey." ACM Computing Surveys (CSUR) (2020).", " Thanks very much for your appreciation of our work! \n\nThe difference among the groups in Figure 1 is the vocabulary size, not the sequence length. For all the synthetic data experiments, we set the sequence length to 128 and varied the token vocabulary size over [10, 1000, 10000]. See Lines 231-233 for the experimental setting. Thus, Figure 1 indicates that even when the vocabulary size grows (i.e., the task is more difficult), our URPE-based model can still approximate the target functions. \n\nIt is apparent that RPE-based Transformers will get higher accuracy on synthetic tasks with shorter sequence lengths. However, unlike our URPE-based model, they still cannot reach perfect performance, according to Theorem 2. We are willing to add such experiments in the next version of the paper if you think those empirical results can improve the quality of the paper.", " Thank you very much for supporting our work! Due to space limitations, in our submission we put the visualizations of the learned positional encodings in Appendix C.1 (see the supplementary material). It can be seen from the figures that the matrices $B$ and $C$ in the URPE capture different aspects of the positional information. We will consider moving these visualizations to the main body in the next version of the paper. \n\nWe hope our responses above can address your questions and concerns, and we sincerely hope the reviewer can reevaluate our paper based on our responses.\n", " This paper presents URPE, a new universal relative position embedding for stronger Transformer architectures. Starting from "absolute position embedding-based transformers are universal approximators of continuous sequence-to-sequence functions on a compact domain," the authors mathematically analyze the approximation power of relative position embeddings and conclude that RPE-based Transformers are not universal approximators. The authors then propose a new relative position embedding with a simple but effective improvement to achieve universal representations. Experiments on both language and graph datasets show the URPE-based Transformer's advantages. Detailed ablation experiments and analyses make the improvements of URPE reliable. ### Strengths\n1. The motivation and method of this paper are both proven mathematically.\n2. The proposed URPE is easy to implement and can be adapted to any RPE-based Transformer for further improvements. It only introduces a few extra parameters over the original RPE.\n3. This paper is well written and organized. 
Motivations, methods, and concerns in experiments are all very clear and easy to follow.\n\n### Weaknesses\n1. Improvements seem to be marginal.\n2. Experiments in each table are conducted under a single Transformer architecture (e.g., Transformer-Base in Tab.). It's interesting to see how much gain URPE can obtain with all kinds of Transformer architectures (e.g., Transformer-Tiny/Small/Large, etc.).\n3. Experiments on Vision Transformers. Relative Position Embedding is also widely used in many vision Transformers (e.g., Swin-Transformer). The results of a URPE-based vision Transformer are important to prove URPE's generalization to other modalities. See Weaknesses. The authors briefly discussed their limitations in L348-350.", " The paper presents an analysis which shows that Transformers with relative positional encoding are not universal sequence-to-sequence function approximators. This is shown rigorously but also intuitively, based on the fact that traditional attention will sum to 1, hence given the same inputs the output is always going to be the same. The paper further shows that if an attention function satisfies two conditions then the resulting Transformer model is a universal approximator for sequence-to-sequence functions. Based on this analysis the authors propose a simple and easy-to-understand modification to the attention function and show experimentally that 1) the resulting Transformer can identify absolute positions in the sequence and 2) it improves upon the traditional attention in a variety of real-world tasks. Strengths\n------------\n\n- The theoretical assessment of the limitations of RPE is novel, interesting, and very simple to follow\n- The proposed universal RPE is also interesting and simple to follow\n- I particularly enjoyed both synthetic tasks that clearly showcase that RPE cannot calculate the absolute positions of tokens while universal RPE can\n- The experimental evaluation showcases that this simple proposed improvement can improve the results in various tasks\n\nWeaknesses\n---------------\n\n- There is very little comparison to using both RPE and absolute positional encoding. For instance, the synthetic tasks could all be solved using absolute positional encoding, and although I understand that the point of these experiments is to showcase the universality of URPE, the question remains for other tasks, like language modeling for instance.\n- Similarly, there is little intuition and understanding that can be drawn from the paper regarding the need for absolute positions. For instance, why would absolute positions be needed for language modeling? All things being equal, does it change anything if a certain subdocument is between positions 100-150 vs 50-100? How can we know if the improvement actually comes from the increased expressivity or improved learning dynamics?\n- Although the number of parameters is increased only marginally, there is little mention with respect to the possible increase in computational and memory cost. Element-wise multiplication is cheap, but it does require another activation map the same size as the attention to be kept in the accelerator's memory. My questions, as mentioned in the weaknesses section above, are mostly regarding the comparison to combining an absolute positional encoding and traditional RPE, or even fixed RPE like ALiBi. Why would this approach be preferred? 
Have you experimented with adding APE together with RPE in your baselines?\n\nFinally, I would also like to know about the possible memory and computational cost increase with URPE. There were no obvious limitations that needed to be addressed.", " In the paper, the authors first present a theoretical proof that Relative Positional Encoding (RPE) based Transformers are not universal function approximators, unlike the originally designed Absolute Positional Encoding (APE) based Transformers. One primary reason for this is that most RPEs are placed inside the softmax in the attention module. The softmax operator always generates a right stochastic matrix. This prevents the network from capturing positional information in the RPEs and limits its capacity. The paper also conducts synthetic experiments to support this claim empirically. To overcome this limitation, the authors provide two sufficient conditions for RPE-based Transformers to achieve universal function approximation. With these results, the authors propose a new attention module called Universal RPE-based (URPE) Attention. Transformers with URPE-based Attention, called URPE-based Transformers, are universal function approximators. Finally, they present experimental results to demonstrate the effectiveness of Transformers with the proposed URPE-based Attention. Strengths:\nAs the authors noted, RPE-based Transformers generalize better on longer sequences compared to their APE-based counterparts. Transformers with RPE can achieve strong performance in language understanding and language generation tasks. RPEs are also popularly used in other domains to encode translation/rotation-invariant structural signals. Also, RPE allows the Transformer to be easily extended to other data modalities, such as images and graphs, as the relative distance naturally preserves invariant properties under several important transformations like rotation and translation. Hence, there are many advantages to using RPEs, and these encodings have therefore become increasingly popular. But the authors pointed out one major limitation of RPE-based Transformers: Transformers with RPE are not universal function approximators. That is, there exist continuous sequence-to-sequence functions that RPE-based Transformers cannot approximate no matter how deep and wide the neural network is. This paper not only points out this limitation, but also proposes Universal RPE-based (URPE) Attention to overcome this drawback. The authors present theoretical proofs and experimental results to support their claims. Models such as Longformer and Big Bird try to solve the sequence length limitation of APE-based Transformers from a different angle. Is it possible to provide a comparison of the URPE-based Transformers with these architectures in terms of model size and performance? I found the paper well rounded and a good addition to the Transformer literature. A more thorough comparison can help boost confidence in this work.", " This paper shows that the Transformer with Relative Positional Encoding (RPE) is not a universal function approximator, theoretically and empirically. Thereafter, it presents sufficient conditions for being a universal function approximator and proposes a new method that achieves them, Universal RPE (URPE). It shows that URPE-based Transformers predict positions more accurately than Transformers with RPE and outperform Transformers with other RPEs on a wide range of applications.\n\n============================\n\nGreat paper, and my concerns were addressed well. 
I'd like to maintain my score. Strengths:\n- It is shown theoretically why the Transformer with RPE is not a universal approximator, even when $d=1$.\n- It presents sufficient conditions for being a universal function approximator.\n- It proposes URPE. The URPE-based Transformer satisfies the conditions suggested for being a universal function approximator.\n- It evaluates URPE-based Transformers on a variety of tasks and Transformer types (e.g., TrXL, Graphormer), and they show better performance than the Transformers without URPE.\n\nWeaknesses:\n- I couldn't find a critical weakness even though I tried, reading the manuscript several times along with the references.\n - In Figure 1, it is odd that RPE-based Transformers fail to predict the position when the number of tokens is 10. I expected that when the context length is short, RPE and URPE work fine, and as the length increases, RPE fails. Could you explain this?\n It proves its argument (the RPE-based Transformer is not a universal function approximator) and validates that its model, the URPE-based Transformer, outperforms the RPE-based Transformer on synthetic tasks, language modeling, and graph learning. I think they addressed what they needed to consider, so I don't have any remaining concerns about the limitations.", " The authors present an analysis showing that Transformers with Relative Position Encodings (RPE) are not universal function approximators and provide an alternative formulation, namely URPE, that satisfies the conditions for universal function approximation. The authors further perform a small empirical study showing that the proposed method outperforms baseline RPE both on synthetic tasks and on real-world datasets. Transformers form the foundation of many modern applications in both the research and the industrial community. As such, improving the theoretical understanding and underpinning of this popular architecture is of great importance to the research community. \n\nI appreciate that the analysis is presented in a clear and concise manner, which makes it easy for the reader to follow. \n\nI cannot come up with obvious weaknesses, however, with the caveat that I am not very familiar with the underlying theory. Would it be possible for the authors to visualize the learned relative position embeddings? This often tends to be a good indicator to get an impression of what the position embeddings are actually learning. As far as I can tell, the authors have not addressed potential limitations, but no limitations are apparent." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 8, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3, 1 ]
[ "YIlkYrGKH4z", "gJ0GvGNF17dh", "KfWrSZHaD-", "nips_2022_NQFFNdsOGD", "mUZQkCHm4YI", "lCVBF1aW-_Y", "pdA4B0WIkU", "pdA4B0WIkU", "MDPWUjwa6DE", "UDqgijb7kqQ", "KXCso_8veZB", "nips_2022_NQFFNdsOGD", "nips_2022_NQFFNdsOGD", "nips_2022_NQFFNdsOGD", "nips_2022_NQFFNdsOGD", "nips_2022_NQFFNdsOGD" ]
nips_2022_2ktj0977QGO
Multi-Instance Causal Representation Learning for Instance Label Prediction and Out-of-Distribution Generalization
Multi-instance learning (MIL) deals with objects represented as bags of instances and can predict instance labels from bag-level supervision. However, significant performance gaps exist between instance-level MIL algorithms and supervised learners since the instance labels are unavailable in MIL. Most existing MIL algorithms tackle the problem by treating multi-instance bags as harmful ambiguities and predicting instance labels by reducing the supervision inexactness. This work studies MIL from a new perspective by considering bags as auxiliary information, and utilizes them to identify instance-level causal representations from bag-level weak supervision. We propose the CausalMIL algorithm, which not only excels at instance label prediction but also provides robustness to distribution change by synergistically integrating MIL with the identifiable variational autoencoder. Our approach is based on a practical and general assumption: the prior distribution over the instance latent representations belongs to the non-factorized exponential family conditioning on the multi-instance bags. Experiments on synthetic and real-world datasets demonstrate that our approach significantly outperforms various baselines on instance label prediction and out-of-distribution generalization tasks.
Accept
The paper studies multiple instance learning (MIL) by treating bags as auxiliary information, aiming to identify invariant causal representations using only the bag labels available in the MIL setting. To achieve identifiability, it is assumed that the prior distribution over the instance latent variables belongs to the non-factorized exponential family conditioning on the bags. This allows disentanglement of the causal and non-causal factors, where only the causal ones are supposed to contribute to the instance labels (while the bag-level labels are used in the proposed objective function in Eq. 8 to accommodate the MIL setting). Experiments are conducted on multiple datasets to demonstrate the instance prediction and out-of-distribution generalization performance of the proposed TargetedMIL algorithm. The perspective of learning invariant causal representations is new in the context of multiple instance learning. Reviewers have acknowledged this interesting aspect of the proposed work. Authors and reviewers engaged in a detailed discussion, and the authors' rebuttal helped to address some major confusions, which further improved the quality of the paper. The authors are encouraged to more clearly highlight the key differences from two important references in the final version of the paper, namely identifiable VAEs and the multiple instance VAE, which are relevant to the proposed work. The causal inference related assumptions could also be further clarified, as suggested by one reviewer.
val
[ "JuI44NupTW4", "jjanCzuIkU", "0CU0jl98tcR", "BeYw_NUCHdF", "Bci25bSSIe2", "XLyWgb7mYlX", "gHC_YfPibF", "FavpOhwP8xm", "d0DKQ9Bb7F8", "-9_g5jGXKNJ", "MeuO5FtuWtI", "jsvfpG9LnTq", "N5hQmfQi3AZ", "bvTLqovApN_", "v36CVg-7opp", "md4bbzox3t8", "yTq-NdOq5CM", "ZpFNe_I7OY" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for reading our responses and raising the score! \n\nWe will certainly incorporate the discussions into the manuscript. And also, thanks very much for the dataset recommendations; we will run experiments with the suggested datasets in the future.\n\n**Q: Did you tune the hyperparameters for the baseline approaches as well?**\n\n**A:** Yes, we did tune the baseline approaches. For the results reported in the paper, we extensively tuned parameters using the suggested parameter ranges in their paper and also expanded the search range. For the new results reported in the response, we took the performances reported in the relevant paper (due to the limited rebuttal timeframe).", " I thank the authors for writing a detailed response! The intuition provided helps clarify the motivation and I suggest the authors consider including something similar in the paper.\nI appreciate the authors providing additional experimental results, however, (i) that is not the purpose of these discussions. I am reviewing the paper as submitted. (ii) There are better instance labeled datasets available the authors should consider. For example, there are instance labeled versions of SIVAL datasets, as well as birdsong datasets with instances labeled. Here, the bag structure is not artificial and is inherent to the problem. \nInterpreting the results for real datasets is critical to understanding whether the approach is actually doing what is claimed.\nDid you tune the hyperparameters for the baseline approaches as well?\n\nI have a better insight into the work as a result of your responses, and I agree there is a reasonable contribution. But, there are still weaknesses in novelty over prior work and empirical results and interpretation. Given this, I am not enthusiastic, but I am ok with accepting. Thanks again to the authors for your responses.", " Thanks very much for the constructive comments and recommendation. We will continue revising the manuscript to incorporate the results and discussions.", " Below we further discuss the raised concerns:\n\n**Guaranteeing the correctness of the path directions**\n\nThis is an interesting problem that is important for causality-based learning algorithms. In the following, we explain our reasoning and argument regarding why it is necessary to have these assumptions even if they are theoretically often unverifiable, and discuss the reasons why they work well empirically.\n\nTheoretically, every causal inference or causality-based machine learning algorithm must depend on certain unverifiable assumptions [1]. For example, causal structure learning algorithms assume three unverifiable assumptions: Markovian, faithfulness, and causal sufficiency assumptions; causal effect estimation algorithms often assume a known DAG if they build upon do-calculus, or assume unconfoundedness, overlap, and stable unit assumptions if they follow the potential outcome framework. In the same sense, the *DAG assumption* and the *invariant generation mechanism assumption* are the two necessary assumptions adopted by TargetedMIL. Therefore, we argue that although conditions (b) and (c) are unverifiable, relying upon these widely-accepted assumptions is not really a weakness of the causality-based TargetedMIL algorithm.\n\nEmpirically, there are much evidence that a VAE-based model works sufficiently well in learning generative models by maximizing the evidence lower bound (ELBO) derived from its underlying generative process. 
Our experimental results also show that TargetedMIL effectively identifies the causal factors. Because TargetedMIL identifies the latent causal factor $\mathbf{z}^c$, it empirically outperforms MIL algorithms on instance label prediction tasks and performs better than supervised learning algorithms on OOD generalization tasks. Furthermore, as demonstrated by the recent successes of causality-based learning algorithms that build upon VAEs [2,3,4], maximizing the evidence lower bound has also been shown to be an effective way of learning causal representations.\n\n[1] Judea Pearl. Causality. Cambridge University Press. 2009.\n\n[2] Yu Yao, Tongliang Liu, Mingming Gong, Bo Han, Gang Niu, Kun Zhang. Instance-dependent Label-noise Learning under a Structural Causal Model. NeurIPS 2021.\n\n[3] Mengyue Yang, Furui Liu, Zhitang Chen, Xinwei Shen, Jianye Hao, Jun Wang. CausalVAE: Disentangled Representation Learning via Neural Structural Causal Models. CVPR 2021.\n\n[4] Chaochao Lu, Yuhuai Wu, José Miguel Hernández-Lobato, Bernhard Schölkopf. Invariant Causal Representation Learning for Out-of-Distribution Generalization. ICLR 2022.\n\n**Relationship between instance label prediction and OOD generalization tasks**\n\nThe motivation for designing the causality-based TargetedMIL algorithm is that we want to utilize the bag information in MIL to learn causal representations for the instances. By achieving this goal, TargetedMIL naturally performs well on both instance label prediction (because the causal representation excludes transformations that are difficult to learn under weak supervision) and OOD generalization tasks (because the causal representation is invariant across distributions).\n\nWe believe that an algorithm performing well on two tasks should not be viewed as a weakness, but rather as a strength that validates the motivation and the effectiveness of TargetedMIL.\n\n**Computational complexity**\n\nThanks for raising the concern regarding the computational cost. To further address this, we empirically evaluated the computation time of five representative deep learning-based MIL algorithms: mi-Net, AttnMIL, KSA-MIL, MIVAE, and TargetedMIL. The training time is obtained from running 200 epochs on the ColonCancer dataset. To ensure a fair comparison, all feedforward algorithms (mi-Net, AttnMIL, KSA-MIL) use the same convolutional network structure, and encoder-decoder methods (MIVAE and TargetedMIL) use the same convolutional encoder and a corresponding deconvolutional decoder. The experiments are conducted using a single Nvidia RTX 3090 GPU.\n\n| | FLOPs (M) | Training time (s) |\n|-------------|-----------|-------------------|\n| mi-Net | 691 | 289.94 |\n| AttnMIL | 704 | 295.72 |\n| KSA-MIL | 788 | 323.64 |\n| MIVAE | 2046 | 750.10 |\n| TargetedMIL | 1139 | 446.81 |\n\nFrom the results, we can see that the computation time of TargetedMIL is not significantly different from that of the other deep MIL algorithms. Feedforward algorithms are slightly faster because they do not have a decoder and do not sample from distributions. TargetedMIL is faster than MIVAE because it only explicitly models one latent variable. However, their running times are all of the same magnitude. We hope this comparison could address the reviewer's concern about the computational complexity. ", " Thanks for your responses.\n\nI've read the other reviews and the authors' responses. But I respectfully disagree with your responses about the assumptions. 
Specifically, although the DAG is a common assumption in causal inference, it is hard to guarantee the correctness of the path directions between the generated latent factors z and the other factors (as well as their invariant generation mechanisms) during the learning process, which makes the theoretical and empirical results less convincing. Besides, the two seemingly unrelated tasks in this paper leave readers confused about the goal of designing the novel causality-based learning method. My concern about the time efficiency was also not well addressed, which would limit its use in practical applications. \n\nFor the above reasons, I still vote to reject the paper.\n", " I think the idea of using the bag information to prove the identifiability of the latent factors is quite novel and promising. This idea provides a new direction for multi-instance learning research, and the essence of this idea could also work in other weakly supervised learning settings, which may further broaden the impact of this work. \n\nThe discussions in the rebuttals provide helpful intuitions for understanding the identifiability assumptions of Theorem 1 under the multi-instance learning setting, and the result on classical MIL datasets is also a nice addition. On the comments regarding conditions (b) and (c) in Assumption 1 (*raised by Reviewer A4sn*), my understanding is that these conditions are ubiquitously adopted in the causal inference and causal representation learning literature. Therefore, they should not raise concerns about the soundness of the algorithm. The authors can add the discussions and clarifications to the camera-ready version. \n\nI recommend strong acceptance for this paper.\n\n", " Thank you for raising your score. We will incorporate the discussed changes in the manuscript.", " I had a chance to read through the authors' rebuttals and other reviewers' comments. The responses have answered my questions well, and my concerns have been adequately resolved. Overall, I think this is a novel and well-motivated work that explores the synergy between deep generative multi-instance learning and causal inference, an area that deserves more attention. Furthermore, the superior OOD generalization performance shown over standard supervised learning algorithms has the potential to broaden the impact of multi-instance learning research.\n\nI increased my score to acceptance.\n\nSoundness: 3, good\nContribution: 3, good\nRating: 7: Accept\n", " Dear reviewers, \n\nWe follow the setting in the MIVAE paper [1] and report additional experiments on instance label prediction performance using the multi-instance 20 Newsgroup dataset [2]. 
\n\n| | miSVM | KI-SVM | GPMIL | DPMIL | VGPMIL | MIVAE | TargetedMIL |\n|--------------------------|-------|--------|-------|----------|----------|----------------|----------------|\n| alt.atheism | 0.53 | 0.68 | 0.44 | 0.67 | 0.70 | 0.745±.030 | **0.803±.021** |\n| comp.graphics | 0.65 | 0.47 | 0.49 | 0.79 | 0.79 | 0.800±.042 | **0.809±.038** |\n| comp.os.ms-windows.misc | 0.42 | 0.38 | 0.36 | 0.51 | 0.52 | **0.548±.038** | 0.545±.035 |\n| comp.sys.ibm.pc.hardware | 0.57 | 0.31 | 0.35 | 0.67 | 0.70 | **0.711±.034** | 0.679±.038 |\n| comp.sys.mac.hardware | 0.56 | 0.39 | 0.54 | 0.76 | 0.79 | 0.783±.035 | **0.810±.033** |\n| comp.windows.x | 0.56 | 0.37 | 0.36 | 0.73 | 0.69 | 0.754±.032 | **0.802±.025** |\n| misc.forsale | 0.31 | 0.29 | 0.33 | 0.45 | 0.54 | 0.553±.334 | **0.615±.036** |\n| rec.autos | 0.51 | 0.45 | 0.38 | **0.76** | 0.71 | 0.720±.024 | 0.731±.035 |\n| rec.motorcycles | 0.09 | 0.52 | 0.46 | 0.69 | 0.76 | 0.766±.029 | **0.812±.023** |\n| rec.sport.baseball | 0.18 | 0.52 | 0.38 | 0.74 | 0.76 | 0.764±.036 | **0.802±.028** |\n| rec.sport.hockey | 0.27 | 0.66 | 0.43 | 0.91 | **0.94** | 0.925±.020 | 0.938±.018 |\n| sci.crypt | 0.57 | 0.47 | 0.31 | 0.68 | 0.82 | 0.773±.036 | **0.843±.021** |\n| sci.electronics | 0.83 | 0.42 | 0.71 | 0.90 | 0.92 | **0.928±.020** | 0.901±.029 |\n| sci.med | 0.37 | 0.55 | 0.32 | 0.73 | 0.73 | 0.745±.025 | **0.800±.026** |\n| sci.space | 0.46 | 0.51 | 0.32 | 0.70 | 0.74 | 0.748±.027 | **0.786±.026** |\n| soc.religion.christian | 0.05 | 0.53 | 0.45 | 0.72 | 0.73 | 0.753±.035 | **0.761±.034** |\n| talk.politics.guns | 0.57 | 0.43 | 0.38 | 0.64 | 0.72 | 0.714±.038 | **0.759±.026** |\n| talk.politics.mideast | 0.77 | 0.60 | 0.46 | 0.80 | 0.87 | 0.840±.020 | **0.884±.021** |\n| talk.politics.misc | 0.61 | 0.50 | 0.29 | 0.60 | 0.64 | 0.650±.044 | **0.750±.036** |\n| talk.religion.misc | 0.08 | 0.32 | 0.32 | 0.51 | 0.49 | 0.525±.035 | **0.673±.031** |\n\nThe reported results are the AUC-PR scores on the test sets; results with the highest score are highlighted in bold font. Parameters are selected using the "alt.atheism" dataset, and the same parameters are used for the rest of the datasets.\n\n[1] W. Zhang. Non-I.I.D. multi-instance learning for predicting instance and bag labels using variational autoencoder. IJCAI 2021, pages 3377-3383.\n\n[2] Z.-H. Zhou, Y.-Y. Sun, and Y.-F. Li. Multi-instance learning by treating instances as non-I.I.D. samples. ICML 2009, pages 1249-1256.", " **Q3. Intuitions behind assumptions in Theorem 1.**\n\n**A:** Thank you for the question. *Intuitively*, the auxiliary information should "break the symmetry" in the space of representations the model could learn. An analogy is inferring an object's shape from its shadow: if we only observe one shadow of the object, it is difficult to know its shape; however, if we observe multiple objects under similar lighting conditions (from a bag of instances), we may identify the lighting; if we observe an object under different lighting conditions (from many bags of instances), we may identify the underlying shape. Another analogy is to consider the bags as H&E stained histopathology images and the instances as image patches: within one bag, we observe cells under the same staining, which provides information regarding the staining; across different bags, we observe cancerous cells under different stainings, which makes it possible to infer the causal representations of cancerous cells. Such information is available in MIL but not in a standard learning setting. 
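For concreteness, this variability requirement can be written compactly (our paraphrase, not the paper's exact statement) as a rank condition on the matrix of differences of the sufficient-statistics parameters across $k+1$ distinct bags:

$$L = \big[\, \boldsymbol{\lambda}(\mathbf{B}_1, y) - \boldsymbol{\lambda}(\mathbf{B}_0, y), \; \cdots, \; \boldsymbol{\lambda}(\mathbf{B}_k, y) - \boldsymbol{\lambda}(\mathbf{B}_0, y) \,\big], \qquad L \text{ has full column rank},$$

which is equivalent to the linear independence of the $k$ difference vectors.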
*Technically*, assumption (iv) can be stated as requiring that the vectors $(\mathbf{\lambda}(\mathbf{B}_1, y) - \mathbf{\lambda}(\mathbf{B}_0, y)$, $\cdots$, $\mathbf{\lambda}(\mathbf{B}_k, y) - \mathbf{\lambda}(\mathbf{B}_0, y))$ are linearly independent. Furthermore, $y$ is unnecessary if there exist $k+1$ distinct bags such that \n$(\mathbf{\lambda}(\mathbf{B}_1) - \mathbf{\lambda}(\mathbf{B}_0)$, $\cdots$, $\mathbf{\lambda}(\mathbf{B}_k) - \mathbf{\lambda}(\mathbf{B}_0))$ are linearly independent. \n\nBecause bags are independent (the instances within bags do not need to be), (iv) can be satisfied.\nAssumption (ii) is necessary since, if $\mathbf{f}$ is not injective, some information of $\mathbf{z}$ would be unrecoverable from $\mathbf{x}$.\nThe necessity of Assumptions (i) and (iii) is technical: (i) ensures that $\phi_\varepsilon$ is non-zero, and (iii) guarantees that the Jacobian of $T_{\mathbf{f}}$ exists and has full column rank.\n\n**Q4. Weighting hyper-parameter alpha in Eq (8)?**\n\n**A:** The $\alpha$ should appear in the term as $\alpha \log p_{\mathbf{\omega}}(Y \vert \mathbf{z})$.\n\n**Q5. The maximum operator of Eq (8).**\n\n**A:** We will remove this paragraph to make space for the new discussions.\n\n**Q6. Additional experiments**\n\n**A:** Thanks for this constructive comment. We followed the setting in MIVAE and experimented with the 20 Newsgroup datasets. *Please refer to "Additional experiments (Response to Reviewer WkwK and RjbB)."* We also conducted OOD experiments with MIVAE and report the test results here:\n\n| | ColoredMNIST | ColoredFashionMNIST |\n|-------------|--------------|---------------------|\n| ERM | 0.105±.007 | 0.225±.007 |\n| ERM1 | 0.109±.005 | 0.333±.089 |\n| ERM2 | 0.101±.002 | 0.132±.008 |\n| MinMax | 0.152±.025 | 0.292±.086 |\n| IRM | 0.628±.096 | 0.534±.194 |\n| IRM GAME | 0.599±.027 | 0.702±.015 |\n| iCaRL | 0.688±.007 | 0.617±.360 |\n| MIVAE | 0.156±.003 | 0.284±.120 |\n| TargetedMIL | 0.925±.004 | 0.866±.009 |\n\n**Q7. Parameter tuning.**\n\n**A:** Thanks for catching this. The parameters are tuned by grid search using the evidence lower bound. Three parameters are involved in the tuning: the learning rate {1e-2, 1e-3, 1e-4}, the weighting parameter $\alpha$ {1,10,100}, and the latent dimensionality {8,16,24,32}. Furthermore, parameters are only tuned per dataset; e.g., the 10 FashionMNIST-bags experiments all used the same parameters.\n\n**Q8. Comparison and differences with MIVAE.**\n\n**A**: Please kindly refer to the responses to Q1 and Q5.\n\n**Q9. Clinically validating the causal interpretations on the clinical datasets.**\n\n**A:** Thank you for asking. Unfortunately, we do not currently have collaborators to analyze the representations pathologically (the cancerous representations do seem different from the normal ones). We would love to reach out to collaborators in the future.\n\n**Q10. Limitations.**\n\n**A:** Thanks for the comment. We will add a new discussion and expand the discussion of the standard MIL assumption: (1) our generative model in Figure 1 only applies to the *causal prediction* task, i.e., predicting the effect $y$ from the cause $\mathbf{z}^c$, as discussed in [2]. The *anti-causal prediction* task, i.e., predicting the cause from the effect, is currently unexplored and worth investigating in MIL.\n(2) The standard MIL assumption can be viewed as a simplification of the collective MIL assumption, which assumes that the bag label is determined by instances belonging to more than one concept [3]. 
For complex vision tasks, e.g., classifying an image of a *beach*, where the model must capture latent factors corresponding to both *water* and *sand*, the current approach is not sufficient.\n\n[2] B. Schölkopf et al. On causal and anticausal learning. ICML 2012, pages 459-466.\n\n[3] X.-C. Li, et al. Deep multiple instance selection. Sci. China Inf. Sci. 64: 130102 (2021).\n", " **Q1. Regarding typos.**\n\nResponse: Thanks for your careful reading! We will continue to proofread the paper. All discussed changes will be incorporated into the revised version.\n\n**Q1: Regarding comparison with MIVAE, the distinction between factorized/non-factorized priors, and why MIVAE is not identifiable.**\n\n**Response:** Thank you for this constructive comment. \n\n*We first summarize the four differences between TargetedMIL and MIVAE:*\n\n(1) The first difference lies in the generative models. MIVAE explicitly infers two latents: an instance-level $\mathbf{z}^I$ specific to each instance and a bag-level $\mathbf{z}^B$ shared by all instances in a bag. TargetedMIL only models one instance-specific latent $\mathbf{z}$, which is then decomposed into causal factors $\mathbf{z}^c$ and non-causal ones $\mathbf{z}^e$. In TargetedMIL, the bag information is used for conditioning the prior latent distribution instead of being explicitly modeled as a latent.\n\n(2) The second difference lies in the prior distributions for the latents, which are crucial for model identifiability. MIVAE assumes that the latents follow unconditional Gaussian priors, which is unidentifiable. However, in TargetedMIL, the prior distribution for the latent is a conditional Gaussian that depends on the bag information, allowing for identifiability.\n\n(3) The third difference is how the algorithms utilize supervision. In MIVAE, the bag label is predicted from both instance-level and bag-level factors, which is not best suited for instance label prediction. Because all instances in the same bag share the same bag-level factor, the bag factor is not useful for predicting the labels of individual instances. Furthermore, as the bag-level factor $\mathbf{z}^B$ varies across bags, using $\mathbf{z}^B$ in prediction makes MIVAE susceptible to distribution change. TargetedMIL predicts the bag label using only the causal factor $\mathbf{z}^c$. This reduces the difficulty of predicting instance labels from bag supervision and gives the model out-of-distribution generalization capability, because the biases incorporated in $\mathbf{z}^e$ are excluded.\n\n(4) The fourth difference lies in the construction of the bag prior $p(\mathbf{B})$ in TargetedMIL versus the learning of the bag factor $\mathbf{z}^{B}$ in MIVAE. In MIVAE, bag information is modeled as the means of Gaussians, which has limited expressive power. However, in TargetedMIL, we model the bag information using a permutation-invariant set function parameterized by neural networks that can universally approximate set functions.\n\n*Then, we discuss the necessity of obtaining identifiability results from non-factorized prior distributions.*\n\nYou are right that our identifiability results mainly build upon the analytical techniques introduced by iVAE. 
However, we argue that our main contribution is not in theoretically advancing iVAE but rather in synergistically integrating iVAE with MIL: allowing for non-factorized priors is important for MIL because dependency among instances is a characteristic intrinsic to many MIL problems [1].\n\n*Lastly, we discuss why MIVAE is not identifiable and why identifiability is important for instance label prediction performance and out-of-distribution generalizability.*\n\nThe identifiability of the latent variables comes from conditional priors, i.e., the latents $\mathbf{z}$ in the generative model are conditioned on auxiliary information $\mathbf{u}$. Unconditional Gaussians are unidentifiable because there will always exist some transformation that changes the value of $\mathbf{z}$ but not its distribution. For example, applying a rotation to a spherical Gaussian distribution $p(\mathbf{z})$ does not change $p(\mathbf{x})$ but changes $p(\mathbf{z}|\mathbf{x})$, and the two distributions are indistinguishable. In MIVAE, the prior distributions of the two latent variables are unconditional: the bag-level factor $\mathbf{z}^B$ and the instance-level factor $\mathbf{z}^I$ are independently modeled as unconditional Gaussian distributions; this is the reason why MIVAE is not identifiable.\n\nIdentifiability is particularly important for instance label prediction in MIL because the bag-level supervision is weaker than the instance-level prediction task. An algorithm can still learn useful representations without identifiability, and it may perform well when the supervision matches the task or the training/test distributions remain unchanged. However, unidentifiable representations often contain noisy transformations and spurious features, making it harder to infer instance labels from bag supervision and more susceptible to distribution change. Identifiability is also attractive for MIL because the auxiliary information can be provided by the readily available multi-instance bags, whereas additional supervision is often required under standard supervised learning settings.\n\n[1] Z.-H. Zhou, Y.-Y. Sun, and Y.-F. Li. Multi-instance learning by treating instances as non-I.I.D. samples. ICML 2009, pages 1249-1256.", " **Question: How to ensure conditions (b) and (c) in Assumption 1 always hold?**\n\n**A:** Thanks for your question. Condition (b) of Assumption 1 states that the causal graph is a directed acyclic graph (DAG), a standard assumption adopted by almost all work related to causal inference. The DAG assumption generally holds for most machine learning application scenarios [1]. Condition (c) is similar to the stable generating mechanism assumption in [2,3]. However, it is easier to satisfy in our setting because TargetedMIL only requires that $p(\mathbf{x}\_{ij}|\mathbf{z}\_{ij})$ is invariant across different bags (whereas under the standard supervised learning setting it is required to be invariant across different environments). You are right that some of these assumptions are difficult to verify; however, rigorous identifiability results cannot be obtained without assumptions. In reality, some of the assumptions can be relaxed. For example, (a) may be relaxed to the collective MIL assumption using the work mentioned by *Reviewer RjbB*; (c) may be relaxed to a weaker version where $p(\mathbf{x}\_{ij}|\mathbf{z}\_{ij})$ is subject to small perturbations. 
Furthermore, the experiments on histopathology images and text classification (please refer to "Additional experiments (Response to Reviewer WkwK, RjbB, A4sn)") validate the effectiveness of TargetedMIL in real-world applications.\n\n[1] J. Pearl. Causality. Cambridge University Press, 2009.\n\n[2] B. Schölkopf, et al. Toward causal representation learning. Proceedings of the IEEE, 109(5):612–634, 2021.\n\n[3] P. Cui and S. Athey. Stable learning establishes some common ground between causal inference and machine learning. Nature Machine Intelligence, 4(2):110–115, 2022.\n\n**Comment 1: The conditions (b) and (c) in Assumption 1 may not hold in real-world applications.**\n\n**R1:** Please refer to the response above.\n\n**Comment 2: Lack of theoretical analyses about estimation errors using Equations (7) and (8), while Theorem 1 only holds in an ideal environment.**\n\n**R2:** Thanks for this comment. Firstly, Eq (7) can represent any permutation-invariant set function [1]. Secondly, for Eq (8), the approximation error comes from the fact that VAE-based methods conduct amortized inference and the ELBO is a lower bound of the log-likelihood. To our knowledge, no consensus exists regarding the approximation errors of VAE models. However, many empirical results have shown that VAE-based models perform well in a wide range of applications, e.g., vision tasks such as image classification and face recognition, and biology tasks such as protein structure and binding prediction.\n\nWe now discuss the assumptions in Theorem 1. The requirements of *assumptions (i) and (iii)* are technical: (i) ensures that $\phi_\varepsilon$ is non-zero, and (iii) guarantees that the Jacobian of $T_{\mathbf{f}}$ exists and has full column rank. *Assumption (ii)* is necessary since, if $\mathbf{f}$ is not injective, some information of $\mathbf{z}$ would be unrecoverable. The above three assumptions are required by most identifiability results. Furthermore, *assumption (iv)* is quite easily satisfied in MIL: (iv) can be stated as requiring that $(\mathbf{\lambda}(\mathbf{B}_1, y) - \mathbf{\lambda}(\mathbf{B}_0, y)$, $\cdots$, $\mathbf{\lambda}(\mathbf{B}_k, y) - \mathbf{\lambda}(\mathbf{B}_0, y))$ are linearly independent. Furthermore, $y$ is unnecessary if there exist $k+1$ distinct bags such that \n$(\mathbf{\lambda}(\mathbf{B}_1) - \mathbf{\lambda}(\mathbf{B}_0)$, $\cdots$, $\mathbf{\lambda}(\mathbf{B}_k) - \mathbf{\lambda}(\mathbf{B}_0))$ are linearly independent. \n\nBecause bags are independent (the instances within bags do not need to be), (iv) can be easily satisfied.\n\n**Comment 3: Lack of analysis of computational complexity.**\n\n**R3:** We did not conduct a computational complexity analysis because the exact cost of TargetedMIL depends on the encoder-decoder networks required for the task. However, a major computational advantage of TargetedMIL over existing MIL algorithms is that TargetedMIL can be trained with mini-batches containing multiple bags. In contrast, other deep MIL approaches, such as Attention-based MIL, must be trained with mini-batches containing only one bag.\n\n**Comment 4: The relationship between the two tasks (instance label prediction and OOD generalization) is unclear.**\n\n**R4:** They do seem unrelated, as instance label prediction is a multi-instance learning task while OOD generalization is a supervised learning task. TargetedMIL excels on both tasks because it identifies the causal factor $\mathbf{z}^c$. 
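As a side illustration, below is a minimal sketch (written by us for this discussion, not taken from the paper's implementation; the classifier head and dimensions are hypothetical) of how bag-level supervision reaches individual instances through $\mathbf{z}^c$ under the standard MIL assumption:

```python
# Minimal sketch (ours, not the authors' code): under the standard MIL
# assumption a bag is positive iff at least one instance is positive, so the
# bag-level loss can be driven by the max instance score computed on z^c.
import torch
import torch.nn.functional as F

def bag_loss(z_c: torch.Tensor, bag_label: torch.Tensor, clf: torch.nn.Module):
    """z_c: (m, k) causal factors of the m instances in one bag."""
    instance_logits = clf(z_c).squeeze(-1)   # (m,) per-instance scores
    bag_logit = instance_logits.max()        # max pooling: standard assumption
    return F.binary_cross_entropy_with_logits(bag_logit, bag_label)

clf = torch.nn.Linear(8, 1)                  # hypothetical classifier head on z^c
loss = bag_loss(torch.randn(5, 8), torch.tensor(1.0), clf)
loss.backward()                              # gradient flows to the max-scoring instance
```

Because only the highest-scoring instance drives the bag logit, accurate instance-level scores and accurate bag-level predictions are tightly coupled.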
For instance label prediction, TargetedMIL performs significantly better than existing MIL algorithms because focusing on $\mathbf{z}^c$ and excluding $\mathbf{z}^e$ makes it easier to infer instance labels from bag supervision. For OOD generalization, TargetedMIL performs well because the causal factor $\mathbf{z}^c$ is invariant across distribution changes and because TargetedMIL accurately predicts instance labels.\n", " **Comment: The utilized datasets are slightly simple. MNIST, FaMNIST, and KuzushijiMNIST are simple datasets that are not so representative to verify the effectiveness of the proposed method. More complex datasets should be explored.**\n\n**Response:** Thanks for this constructive comment. Besides the Colon Cancer results reported in the manuscript, we also report experiments with the multi-instance 20 Newsgroup datasets used in [3] to further verify TargetedMIL. *Please refer to "Additional experiments (Response to Reviewer WkwK and RjbB)."*\n\n**Question: Does this result apply to the collective MIL assumption proposed in [1]? The collective assumption assumes that several instances work together to determine the bag label. This assumption is explored in a related work that generalizes Attn-MIL [2]. Hence, could this work generalize to this MIL assumption?**\n\n**Answer:** Thanks for bringing up this insightful question and important reference. [2] provides a viable way to extend our work to the collective MIL assumption. It utilizes Gumbel softmax and Gumbel top-k for the standard and collective MIL assumptions, respectively. As the Gumbel reparameterization trick is in synergy with VAE-based methods [4], using Gumbel top-k with our proposed algorithm is likely to work under the collective MIL assumption. This would be an interesting direction for future exploration. We will add the discussion and reference to the revised manuscript.\n\n[3] Z.-H. Zhou, Y.-Y. Sun, and Y.-F. Li. Multi-instance learning by treating instances as non-I.I.D. samples. *ICML 2009*, pages 1249-1256.\n\n[4] E. Jang, S. Gu, and B. Poole. Categorical Reparameterization with Gumbel-Softmax. *ICLR 2017*.", " **Q1: What is the scope of domains for the proposed causal graph in Figure 1? Is it applicable to weakly-supervised image classification problems? Discussing some practical problems for which this causal graph is suitable would be preferable.**\n\n**A1:** The causal graph in Figure 1 is suitable for a wide range of weakly supervised tasks where *the bag labels are determined by the labels of their instances*, such as sound event detection, object detection, and medical image analysis. For example, in histopathology medical image analysis, a whole-slide image is represented by a bag, and the cells are represented by instances. Supervision is only available at the image level, while whether a patch is cancerous or normal is unknown; however, patch-level predictions are crucial for interpretability in medical applications. TargetedMIL is suitable because it accurately predicts instance labels by identifying the underlying causal factors of the cancerous cells.\n\n**Q2: At the high level, how does the proposed VAE-based MIL method compare to the methods that are based mainly on attention, such as [16] and its follow-up works? 
As VAE-based MIL algorithms are very different from the current trend of attention-based MIL algorithms, what are the considerations when choosing one over the other?**\n\n**A2:** [16] utilizes the attention mechanism in a feedforward network to aggregate each instance's contribution to the bag label. Because the attention mechanism assigns continuous weights to both positive and negative instances in positive bags, it is not best suited for instance label prediction under the standard multi-instance assumption.\n\nThe proposed TargetedMIL algorithm integrates max-pooling with the evidence lower bound to learn an encoder-decoder model with identifiable causal representations, and the identified causal representations make instance label prediction easier while benefiting model robustness.\n\nIn summary, our proposed algorithm should be preferred when the task is instance label prediction or when distribution change exists. Attention-based MIL algorithms are more suitable for bag classification tasks where the training and test datasets follow the same distribution.\n\n**Comment regarding minor text improvements.**\n\n**A:** Thanks for helping us improve the paper and correcting the typos. We addressed these issues and will further proofread the manuscript. All discussed changes will be included in the revised manuscript.\n\n", " The authors investigate a generative model for MIL data where instances are generated from bag-level and instance-level latent factors. They develop variational autoencoders to model this generative process and show an identifiability condition. Experimental results are presented on some synthetic MI datasets and one real-world problem, which show advances on the state of the art.\n \n+ The paper attempts to explore the use of causal models in MIL, which is a nice, not well explored direction.\n+ The experimental results are very impressive and it seems the approach can work well in the real-world domain investigated.\n\n- The paper is dense and hard to read. There are many typos. \n- The paper borrows heavily from two prior papers, one on identifiable VAEs and one on a multiple instance VAE. It was not clear to me that the advances made beyond these were significant in algorithmic or theoretical terms. While the identifiability result is nice, it was not clear if it required analytical tools that were novel beyond the known result for iVAE. The distinction between factorized/non-factorized seems minor; it is not clear how significant this is. There was also very little comparison to MIVAE, although the graphical models are extremely similar. It is stated that the reason this approach could do better than MIVAE was identifiability, but why that is and why MIVAE was not identifiable was not clear to me. In general, I found the discussion of and comparison to prior work lacking, both in the theory and in the experiments. This is one significant area for improvement.\n- It is not clear if the assumptions in Theorem 1 make sense for MIL, especially (iv). Not enough intuition is provided to help understand the necessity/sufficiency of the assumptions.\n- In Eq (8), where is the "weighting hyper-parameter alpha"?\n- "The maximum operator of Equation 8 is fundamentally different from the pooling operators used"---this was not very convincing to me.\n- Only one experiment is done with an instance-labeled MI dataset. There are many MIL datasets available with instance labels. 
The authors should do additional experiments with actual MI datasets rather than synthetic data such as digit recognition. \n- From the appendix, it is not clear how hyperparameters were chosen. There did not seem to be a common tuning process for all algorithms.\n- As mentioned above, the comparison with MIVAE was not convincing to me. The ablation study in the appendix is helpful but the claim \"this is caused by not having identifiability\" lacks justification, and does not really explain why. Further explanation is needed to explain the differences between these approaches. \n- No information is given about causal interpretations on the clinical dataset. Are the causal relationships found clinically validated?\n- Limitations are not really included except one line about the standard MIL assumption, which is not a sensible limitation in as much as every model must make some such assumption. \n\nTo summarize, I found the approach promising, but the paper has significant room for improvement.\n see above see above This work solves multi-instance learning (MIL) by utilizing bag-level weak supervision. Specifically, the authors propose a general graphical model which disentangles each instance as generated from instance-specific factors and bag-inherited factors. The proposed TargetedMIL algorithm is solved by an identifiable variational autoencoder (iVAE). Pros:\n1. The graphical model seems novel and rational. Under the standard assumption of MIL, instance labels are not observed and their labels determine the bag-level label. Hence, decoupling instance labels into both instance-specific factors and bag-level factors seems reasonable.\n2. A novel TargetedMIL algorithm is proposed. TargetedMIL takes advantage of permutation-invariant set transformation networks to identify the latent factors and learn instance feature representations.\n3. The paper is well organized and clear to understand. Theoretical results are provided.\n\nCons:\n1. The utilized datasets are slightly simple. MNIST, FaMNIST, and KuzushijiMNIST are simple datasets that are not so representative to verify the effectiveness of the proposed method. More complex datasets should be explored.\n 1. Does this result apply to the collective MIL assumption proposed in [1]? The collective assumption assumes that several instances work together to determine the bag label. This assumption is explored in a related work that generalizes Attn-MIL [2]. Hence, could this work generalize to this MIL assumption?\n\n[1] Multiple instance learning: A survey of problem characteristics and applications. Pattern Recognition.\n[2] Deep multiple instance selection. Sci. China Inf. Sci.\n Yes", " This paper synergizes identifiable variational autoencoder with multi-instance learning, an important weakly supervised learning problem. By utilizing the instances in the multi-instance bags as auxiliary information, the proposed method provably identifies the latent factors of the positive instances up to affine transformations. The proposed method then utilizes the inferred latents and achieves significantly better results in downstream tasks such as instance label prediction and out-of-distribution generalization. Empirical evaluations of qualitative latent reconstructions support the identifiability claim. Quantitative results on several benchmark datasets also show that the proposed method is significantly better at predicting instance labels and out-of-distribution generalization than the baselines.
Strengths:\n--\tThis paper presents a novel method for integrating multi-instance learning with identifiable latent representation learning.\n--\tThe paper is well written and nicely presented.\n--\tThe qualitative latent reconstruction results are novel and interesting, and the quantitative results show significant improvement over the baselines.\n--\tThe out-of-distribution generalizability brought by the identified latent factors broadens the impact of MIL methods to other machine learning subfields.\n\nOverall, the strength of this paper is in the formulation of multi-instance learning as an identifiable VAE with auxiliary information problem. This provides a novel perspective for MIL and motivates better methods not only for instance label prediction but also for out-of-distribution generalization. The focus on multi-instance learning is important as it has been much overlooked compared to how prevalent it is in important real-world applications such as whole-slide medical imaging and fine-grained prediction, and the focus on identifiability has been shown to be useful to further the MIL methodology. In terms of writing, it nicely addresses the challenges of MIL and identifiability and builds up its case coherently. Identifiability is first analyzed under the correct graphical model, then relaxed to accommodate the multi-instance assumption. The model assumptions are clearly stated as well as the assumptions about the data generating process.\n\nWeakness:\n--\tSome minor issues in the text could be improved with further proofreading. Please see the detailed comments below.\n\nMinor Points:\nLine 147, “In Equation 2”, -> “In Equation 2,”\nLine 160, “can then be written as” -> “can be written as”\nLine 226, “and thus ensures identifiability” -> “and to ensure identifiability”\nLine 263, “Appendix” -> “Appendices”\nIn Table 2, “TargetedMIL” should be in bold font.\nLine 628 (in the Appendix), “Table ??” should be “Table 3”.\n 1.\tWhat is the scope of domains for the proposed causal graph in Figure 1? Is it applicable to weakly-supervised image classification problems? Discussing some practical problems for which this causal graph is suitable would be preferable.\n2.\tAt the high level, how does the proposed VAE-based MIL method compare to the methods that are based mainly on attention, such as [16] and its follow-up works? As VAE-based MIL algorithms are very different from the current trend of attention-based MIL algorithms, what are the considerations when choosing one over another?\n Yes. The authors have discussed that the method only applies to the standard multi-instance assumption. Solving the identifiability problem under other MIL assumptions remains to be explored.\n\n", " This paper proposes a causal representation learning algorithm for multi-instance learning, called TargetedMIL, which aims to identify invariant causal representations of instances from bag-level weak supervision. TargetedMIL separates each instance into a causal and a non-causal part and estimates labels for each instance. Experiments on synthetic and real-world datasets demonstrate the effectiveness of the proposed algorithm for two tasks. Strengths:\n\n1.\tThis work is well-motivated with theoretical analyses. \n\n2.\tThe assumptions used are described in detail.\n\n3.\tThe experiments are conducted on real-world datasets and the empirical results outperform some classical methods.\n\nWeaknesses:\n\n1.\tThe conditions (b) and (c) in Assumption 1 may not hold in real-world applications.
\n\n2.\tLack of theoretical analyses about estimation errors using Equations (7) and (8), while Theorem 1 only holds in an ideal environment.\n\n3.\tLack of analysis of computational complexity.\n\n4.\tThe relationship between the two tasks (instance label prediction and OOD generalization) is unclear.\n How to ensure conditions (b) and (c) in Assumption 1 always hold? My major concern is about the reasonability of the proposed algorithms. The conditions in the assumptions used in this paper seem very strict and may be violated in the real world. Furthermore, the latent factors identified by the proposed algorithm are hard to evaluate, which may violate the assumptions." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 8, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5, 4 ]
[ "jjanCzuIkU", "-9_g5jGXKNJ", "XLyWgb7mYlX", "Bci25bSSIe2", "jsvfpG9LnTq", "bvTLqovApN_", "FavpOhwP8xm", "N5hQmfQi3AZ", "nips_2022_2ktj0977QGO", "v36CVg-7opp", "v36CVg-7opp", "ZpFNe_I7OY", "md4bbzox3t8", "yTq-NdOq5CM", "nips_2022_2ktj0977QGO", "nips_2022_2ktj0977QGO", "nips_2022_2ktj0977QGO", "nips_2022_2ktj0977QGO" ]
nips_2022_Siv3nHYHheI
Online Training Through Time for Spiking Neural Networks
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models. Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency. Particularly, backpropagation through time (BPTT) with surrogate gradients (SG) is popularly used to enable models to achieve high performance in a very small number of time steps. However, it is at the cost of large memory consumption for training, lack of theoretical clarity for optimization, and inconsistency with the online property of biological learning rules and rules on neuromorphic hardware. Other works connect the spike representations of SNNs with equivalent artificial neural network formulation and train SNNs by gradients from equivalent mappings to ensure descent directions. But they fail to achieve low latency and are also not online. In this work, we propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning by tracking presynaptic activities and leveraging instantaneous loss and gradients. Meanwhile, we theoretically analyze and prove that the gradients of OTTT can provide a similar descent direction for optimization as gradients from equivalent mapping between spike representations under both feedforward and recurrent conditions. OTTT only requires constant training memory costs agnostic to time steps, avoiding the significant memory costs of BPTT for GPU training. Furthermore, the update rule of OTTT is in the form of three-factor Hebbian learning, which could pave a path for online on-chip learning. With OTTT, it is the first time that the two mainstream supervised SNN training methods, BPTT with SG and spike representation-based training, are connected, and meanwhile it is in a biologically plausible form. Experiments on CIFAR-10, CIFAR-100, ImageNet, and CIFAR10-DVS demonstrate the superior performance of our method on large-scale static and neuromorphic datasets in a small number of time steps. Our code is available at https://github.com/pkuxmq/OTTT-SNN.
Accept
The authors propose an online training algorithm (OTTT) for spiking neural networks (SNNs) using eligibility traces and instantaneous loss values. They show empirically that this method performs better than previous ones in feed-forward spiking neural networks. All reviewers agree that the empirical results are impressive and that the method is interesting for neuromorphic hardware. The authors also provide a mathematical analysis of the learning method.\n\nWeaknesses:\n- Networks are mostly applied to static tasks, while more temporal tasks are potentially more interesting for SNNs\n- Comparison to previously proposed methods is missing\n\nIn general, a very interesting and strong paper. I propose acceptance.
test
[ "oLKVV80Nozw", "thmBtJmnP2n", "x1cpu2CSdHK", "wA3T4kFjAuV", "VitYfTW-Vxe", "-w21uEDkDN7", "B_LH-4NCNx", "DgINWVYhXZ", "rT7sttFiw-", "tv0b3LzKyUq", "U3BosS7lzm2", "2DoPlOk4XNe", "SzWqUlzjrF", "xLDk7a-OJDu", "PQNAUilIClb", "EvOlY615l4W", "WgB_bPmVz-a", "JejHTho56Vh", "evzCJ1zUn5W" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the valuable suggestion and we will clarify this in the following revision. Yes, for each input sample, the network is reset at time step 0, and at each discrete time step $t$ the input at time step $t$ is passed to the network, with total $T$ time steps. For static images, the input at all time steps is the real-valued image pixel value, which is similar to previous work and can be regarded as the input current. For dynamic inputs, the input at each time step is different. This is the setting in many previous works and their realization of BPTT is to unroll over these discrete time steps. We follow this setting and we will add the clarification.", " Thank you for the valuable comments.\n\nWe agree that the outputs of the classification task are static over time, and it would be interesting future work to consider applications such as reinforcement learning. Many SNN models do not explicitly model long-term memory and additional efforts on the model (e.g. specifically designed architecture) are required for these tasks. We will consider extending the training method to these scenarios in future work.\n\nAs for the regression task, our method is also applicable. For SNNs, regression is typically done with the firing rate as well. So the output and the loss are similar to that of the classification task, i.e. the loss is $L=\\mathcal{L}(\\frac{1}{T}\\sum_{t=1}^T\\mathbf{s}^N[t], \\mathbf{y})$, where $\\mathbf{y}$ would be the regression target, and the total loss of our instantons loss is still an upper bound of this loss. Also, note that the output of the model does not need to be restricted to 0 or 1. In practice, we assume that the neurons of the last output layer will not spike or reset and do classification based on the accumulated membrane potential, which is similar to previous works (see Appendix C.2.2). So the output is $\\mathbf{u}^N[t]$, which is not restricted to binary output, and the loss is calculated between $\\mathbf{u}^N[t]$ and $y$.\n", " Thank you for the explanations.\n\nRe 7: So, with \"time steps\" you mean the total run-time of the network, where it is reset at time step 0 and you present (constant?) inputs for the mentioned number of time steps? And you assume that BPTT is unrolled fully over time? In my opinion this requires some clarification in the text.\n\nI remain at \"accept\" as my evaluation, I think the current paper presents an interesting avenue for SNN learning, which is in my eyes not yet a solved issue.", " Thank you for the clarifications. \n\nRe 2: To clarify my point: Although the inputs for tasks come from DVS cameras, the outputs are classifications, which are static over time. Patterns relevant for classification can be discernable on quite short time scales, hence not requiring any long-term memory. I do realize that time and space for the paper are limited, and application to e.g. reinforcement learning tasks can be left to future work. \n\nRe 4: if I understand correctly now, it depends on a spike occurring at time t or not. It then seems to me that this method is mostly applicable to classification tasks, where typically one neuron needs to be 1 and the rest 0. 
\n", " Thank you for the explanations.\n\nRe 7: So, with \"time steps\" you mean the total run-time of the network, where it is reset at time step 0 and you present (constant?) inputs for the mentioned number of time steps? And you assume that BPTT is unrolled fully over time? In my opinion this requires some clarification in the text.\n\nI remain at \"accept\" as my evaluation; I think the current paper presents an interesting avenue for SNN learning, which is in my eyes not yet a solved issue.", " Thank you for the clarifications. \n\nRe 2: To clarify my point: Although the inputs for tasks come from DVS cameras, the outputs are classifications, which are static over time. Patterns relevant for classification can be discernible on quite short time scales, hence not requiring any long-term memory. I do realize that time and space for the paper are limited, and application to e.g. reinforcement learning tasks can be left to future work. \n\nRe 4: If I understand correctly now, it depends on a spike occurring at time t or not. It then seems to me that this method is mostly applicable to classification tasks, where typically one neuron needs to be 1 and the rest 0. Can the method also be applied to regression?\n", " Thank you for the responses, and apologies for the delay in replying.\nThe earlier weaknesses were important, but the authors have addressed them rather well.\nI am updating the rating.\nI would recommend that the authors reflect in the main text, not only the appendix, some of the important new clarifications and results. ", " Dear Reviewer AZJQ,\n\nThank you again for your review. We have tried our best to address your concerns in the response / the updated paper with very detailed discussions and additional results. Since the author-reviewer discussion deadline is approaching, could you please take a look at our response and re-evaluate our paper? We are willing to answer questions if you have other concerns. Thank you for your consideration and we are looking forward to hearing from you.\n\nSincerely,\n\nAuthors of Paper 1731 ", " We thank all reviewers for their valuable comments and suggestions. We have uploaded an updated version of our paper based on the reviews. Revisions are marked in blue in the text. The updates are summarized as follows:\n\n1. We add the citation and discussion of the recent related work mentioned by reviewers in Section 2.\n\n2. Following the suggestions from reviewers, we supplement several experiments (provided in the response) in Appendices D.2, D.3, and D.4. Due to the limited space, the supplemented results are currently in the Appendix. \n\n3. Following the suggestions from Reviewer bMnW, we re-organize the order of Section 3.2 and modify several descriptions in the Abstract and Introduction.\n\n4. In response to Reviewer Mcuq, we clarify the description of the instantaneous loss in Section 4.1.\n\n5. Following the suggestions from Reviewer AZJQ, we clarify the term \"forward-in-time\" in Section 1.\n\nWe will continue to revise the paper based on the suggestions. Thanks for the valuable comments again.", " Thank you for your comments. We try our best to address your concerns as follows.\n\n1. About the related work Yin et al. (2021) and Bohnstingl et al. (2022), and the difference between our work and theirs.\n\nWe provide a very detailed discussion below, and we have added the citation and discussion in the revised paper.\n\nYin et al. (2021) is a recent work that directly leverages the RNN training method named forward propagation through time (FPTT) [1] to train spiking neural networks with the help of surrogate gradients (SG). Moreover, they propose a new liquid spiking neuron whose time constant depends on the input and previous membrane potentials, and show that FPTT should be combined with this neuron for good results.\n\nBohnstingl et al. (2022) is a recent work that proposes an online learning method OSTL for recurrent and spiking neural networks, and for non-differentiable SNNs, they also use surrogate gradients.\n\nOur work is different from them in three main aspects.\n\n(1) **Our training method is simpler and more efficient than these methods.** \n\nFor FPTT, the original FPTT [1] trains recurrent neural networks by dynamically regularizing weights. It calculates gradients at each time step based on the current state, and regularizes the update of weights by a penalty loss which is based on the running average of previous weights and the previous gradient. Yin et al. (2021) directly apply this method to SNNs and require heavy computation to regularize the update of parameters.
As a comparison, we calculate gradients based on the tracked pre-synaptic activities and only need to update parameters according to simple rules (a minimal sketch of this update is given at the end of this response), which is computationally efficient and could be easier to implement, e.g. on neuromorphic hardware.\n\nFor OSTL, they seek the exact equivalence with BPTT with SG, so their tracked eligibility traces have a much larger memory overhead than our tracked pre-synaptic activities. Particularly, the memory complexity of their method is $O(n^2)$, while ours is only $O(n)$, where $n$ is the number of neurons in a layer (the complexity for BPTT is $O(Tn)$, where $T$ is the number of time steps). This is because they consider the derivative of the reset operation so that they have to maintain a large tensor for eligibility traces, while we do not consider this, so the derivation can be simplified to Eq. (4) in the paper, and therefore we only need to track pre-synaptic activities for each neuron. So our method is much simpler and incurs lower costs (the training memory advantage has been demonstrated in Fig. 2, which shows that we reduce the $O(Tn)$ complexity of BPTT to $O(n)$, rather than increasing it to $O(n^2)$), and our tracked trace could be easier to implement on neuromorphic hardware.\n\n(2) **Our method has a more solid theoretical grounding for optimization.** \n\nThe major obstacle to training SNNs is that the spiking operation is discrete and non-differentiable. Therefore, directly applying RNN training methods to SNNs is problematic as the derivative of the Heaviside step function is 0 almost everywhere. Previous works that apply learning methods of RNNs to SNNs (including Yin et al. (2021)) or seek the exact equivalence with these learning methods (including Bohnstingl et al. (2022)) use \"surrogate gradients\" (SG) to handle this problem, which substitutes the derivative of the step function with continuous approximations. However, gradient descent with such a method in the context of RNN-like training typically lacks theoretical clarity for optimization, since it is not the true gradient of the actual function and the descent direction is not guaranteed. \n\nUnlike these works, we provide a more solid theoretical grounding from a new perspective. We do not try to seek the exact equivalence to gradient calculation by BPTT (or similar methods for RNN) with SG. Instead, we connect OTTT with another branch of SNN training methods, i.e. methods based on spike representation, which is better suited for theoretical analysis of optimization. This branch of methods builds the connection between the spike representations (e.g. the (weighted) firing rate or spiking time) of neurons in an ANN-like closed form that is sub-differentiable. So gradients can be calculated through the spike representation and are well defined. We prove that gradients of OTTT can provide a similar descent direction as these gradients based on spike representation and therefore provide a theoretical grounding for optimization in the context of the optimization problem formulated by spike representation. This also provides a connection between the two mainstream SNN training methods, i.e. BPTT with SG and spike representation-based methods. \n\n(3) **Our experiments can scale to large-scale tasks** including ImageNet classification and are based on the common LIF neuron, while Yin et al. (2021) and Bohnstingl et al. (2022) do not consider the large-scale problem, and Yin et al. (2021) require a more complex neuron model.
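To make the contrast with FPTT/OSTL concrete, here is a minimal numpy sketch of the kind of update rule described in point (1): a tracked pre-synaptic trace combined with an instantaneous error signal. All sizes, the spike statistics, and the random error signal are hypothetical placeholders standing in for the spatial backpropagation of $L[t]$; this is an editorial illustration, not the authors' released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out, T, lam = 5, 3, 6, 0.5       # toy sizes; lam ~ LIF leak factor

W = rng.normal(scale=0.5, size=(n_out, n_in))
trace = np.zeros(n_in)                   # tracked pre-synaptic activities: O(n) memory
dW = np.zeros_like(W)

for t in range(T):
    s_pre = (rng.random(n_in) < 0.3).astype(float)  # incoming spikes at step t
    trace = lam * trace + s_pre          # running form of sum_{tau<=t} lam^(t-tau) s[tau]
    g_t = rng.normal(size=n_out)         # placeholder instantaneous error at the layer output
    dW += np.outer(g_t, trace)           # three-factor-style contribution at step t

W -= 0.01 * dW                           # the update could equally be applied at every step
```

Note that nothing from earlier time steps is stored except the single trace vector, which is where the constant-in-time memory cost comes from.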
\"Does not provide experimental comparisons to such methods as [1-3], but only to BPTT.\"\n\nAll these methods do not scale to large-scale tasks such as ImageNet classification, and for experiments in our work, they hardly have results to compare. Currently, BPTT with SG is the only direct training method that can scale to such large-scale tasks. Therefore, we mainly compare with BPTT and other methods that can achieve high performance on these tasks in the paper. Yin et al. (2021) conduct an experiment on the moderate-scale task DVS-CIFAR10 that is also used in our work. Their result is not better than ours, as listed below: \n\n| Method | Accuracy |\n| :----: | :----: |\n| FPTT (Yin et al., 2021) | 72.3\\% |\n| OTTT$_A$ (ours) | 76.27$\\pm$0.15\\% |\n| OTTT$_O$ (ours) | 76.63$\\pm$0.34\\% |\n\nThey also conduct an experiment on DVS128-Gesture, on which we supplement our results in the following response to question 5. We compare their results in that part. \n\n3. \"If any advantage - theoretical or other - exists compared to these prior methods, the paper does not summarise clearly what element in the new theoretical derivation enables that advantage.\"\n\nAs discussed in the detailed response to question 1, our method is simpler and more efficient to implement, provides a more solid theoretical grounding, and performs better even on large-scale datasets. Our new theoretical derivation is unique and enables a clearer explanation of the optimization. In detail, we do not try to seek the exact equivalence to gradients calculation by BPTT with SG which is unclear for optimization considering the non-differentiability, but we connect OTTT with another branch of SNN training methods, i.e. methods based on spike representation which is theoretically more clear for optimization, for theoretical analysis and guarantee. \n\n4. \"The work does not aim to achieve something brand new, as successful SNN training has already been possible. Therefore its impact is expected to be limited.\"\n\nWe respectfully disagree with this statement. The training method for SNNs is still an important open problem, especially if we want to consider more properties that are suitable for on-chip learning on neuromorphic hardware. As for existing training methods, while they may be successful regarding the performance (e.g. ANN-SNN or BPTT with SG, and the direct training methods only scale to large-scale datasets very recently [1,2,3,4] and may still have a large improvement space), they all have important limitations as we have introduced in the paper, e.g. BPTT with SG suffers from large training memory costs and lack of theoretical clarity for optimization, and they are inconsistent with the online property of rules on hardware. As for existing online training methods, as discussed in the response to the first question, from the theoretical perspective, they lack solid theoretical grounding for optimizing non-differentiable SNNs, and from the experimental perspective, none of them scale to large-scale tasks with large networks structures as BPTT with SG do. And some of these methods require more memory costs and could be more complex to be implemented than our method, e.g. on neuromorphic hardware. So it is still an important problem to study proper SNN training methods.\n\nOur method and analysis should make important contributions from both theoretical and practical perspectives. 
As discussed in the detailed response to question 1, our method provides a more solid theoretical grounding for optimizing non-differentiable SNNs, and our online method can scale to large-scale tasks with more efficient computation and lower costs, which could also pave a path for online on-chip training of SNNs. Note that Reviewer Mcuq also pointed out in \"Strengths\" that \"This article can form a substantial step in that direction (for online learning on neuromorphic hardware).\"\n", " 5. \"does not show results on sequential tasks.\"\n\nFirst, we would like to point out that our experiment on the neuromorphic dataset DVS-CIFAR10 is **indeed** sequential. The dataset contains sequences of dynamic inputs produced by DVS cameras, which is commonly used to measure the neuromorphic computing of SNNs.\nSecond, our experiments follow a large number of previous works on SNNs [1,2,3,4,5,6,7,8] that focus on the most commonly studied static and neuromorphic datasets.\n\nAdditionally, we supplement an experiment on another commonly used neuromorphic dataset, DVS128-Gesture [9], which contains 11 kinds of hand gestures recorded by a DVS camera. These neuromorphic data are also sequential. The results are below:\n\n| Method | Network structure | Time steps | Accuracy |\n| :----: | :----: | :----: | :----: |\n| SLAYER [10] | 8-layer CNN | 300 | 93.64$\\pm$0.49\\% |\n| DECOLLE [11] | 3-layer CNN | 1800 | 95.54$\\pm$0.16\\% |\n| BPTT [7] | 8-layer CNN (PLIF, BN) | 20 | 97.57\\% |\n| BPTT [7] | 8-layer CNN (LIF, BN) | 20 | 96.88\\% |\n| FPTT (Yin et al., 2021) | 8-layer CNN (LTC-SNN) | 20 | 97.22\\% |\n| BPTT | VGG (sWS) | 20 | 96.88\\% |\n| OTTT$_A$ (ours) | VGG (sWS) | 20 | 96.88\\% |\n\nIt shows that our method can achieve the same high performance as BPTT does. The SOTA result [7] incorporates additional techniques to learn the membrane time constant and Yin et al. (2021) leverage a more complex neuron model, while we do not dive into such techniques (actually there are only 288 test samples and the 0.69\\% accuracy gap corresponds to 2 samples). \n\nWe note that some other works [12,13,14] conduct experiments on other sequential tasks such as speech recognition. However, they show that such tasks require specially designed neuron models and architectures to achieve better results [12, 13]. Given the limited time, we are unable to thoroughly dive into them, and it would be important future work. The recurrence in our work is instead verified by introducing feedback connections to improve performance on image classification tasks, which is supported by previous works [8, 15].\n\n6. \"does not compare experimentally with non-spiking networks, in terms of accuracy, memory, or computational efficiency.\"\n\nWe supplement the results on CIFAR-10 and CIFAR-100 below. The non-spiking ANN models are based on the ReLU activation instead of spiking neurons.\n\nResults on CIFAR-10 (the last line is ANN):\n\n| Method | Network structure | Params | Time steps | Accuracy |\n| :----: | :----: | :----: | :----: | :----: |\n| ANN-SNN | VGG-16 | 40M | 16 | (92.29\\%) |\n| BPTT | ResNet-19 (tdBN) | 14.5M | 6 | (93.16\\%) |\n| BPTT | 9-layer CNN (PLIF, BN) | 36M | 8 | (93.50\\%) |\n| BPTT | VGG (sWS) | 9.2M | 6 | 92.78$\\pm$0.34\\% (93.23\\%) |\n| OTTT$_A$ (ours) | VGG (sWS) | 9.2M | 6 | 93.52$\\pm$0.06\\% (93.58\\%) |\n| OTTT$_O$ (ours) | VGG (sWS) | 9.2M | 6 | 93.49$\\pm$0.17\\% (93.73\\%) |\n| **ANN** | VGG (sWS) | 9.2M | N.A. | (94.43\\%) |
\n\nResults on CIFAR-100 (the last line is ANN):\n\n| Method | Network structure | Params | Time steps | Accuracy |\n| :----: | :----: | :----: | :----: | :----: |\n| ANN-SNN | VGG-16 | 40M | 400-600 | (70.55\\%) |\n| Hybrid Training | VGG-11 | 36M | 125 | (67.87\\%) |\n| DIET-SNN | VGG-16 | 40M | 5 | (69.67\\%) |\n| BPTT | VGG (sWS) | 9.3M | 6 | 69.06$\\pm$0.07\\% (69.15\\%) |\n| OTTT$_A$ (ours) | VGG (sWS) | 9.3M | 6 | 71.05$\\pm$0.04\\% (71.11\\%) |\n| OTTT$_O$ (ours) | VGG (sWS) | 9.3M | 6 | 71.05$\\pm$0.06\\% (71.11\\%) |\n| **ANN** | VGG (sWS) | 9.3M | N.A. | (73.19\\%) |\n\nDue to the limited time, we are unable to provide ImageNet results. For the neuromorphic dataset DVS-CIFAR10, the equivalent feedforward non-spiking ANNs may not directly handle the dynamic inputs. So we do not consider it. Usually, SNNs with a very small number of time steps do not reach the performance of equivalent ANNs due to the information propagation with discrete spikes rather than floating-point numbers. The results of our model with 6 time steps are acceptable.\n\nOur training memory cost is $O(n)$ (where $n$ is the number of neurons) and is the same as for non-spiking ANNs. As for computational efficiency, it is common to compare the energy efficiency between SNNs with spike-based operations and ANNs with floating-point calculations. As has been demonstrated in Section 5.6, with 6 time steps each neuron in our trained model generates 1.1 spikes on average. Therefore the total synaptic operations of our SNN model would be about the same as the FLOP operations of the ANN. Since the cost of a synaptic operation is much lower than that of a FLOP operation (this depends on the neuromorphic hardware; some can achieve one to two orders of improvement), our SNN model would require much less energy consumption than non-spiking ANNs. Moreover, we can also flexibly reduce the time steps to achieve a trade-off between accuracy and energy consumption, as discussed in Section 5.5 and Appendix D.\n", " 7. Suggestions for the writing.\n\nThank you for your suggestions. We will carefully refine our presentation. The term \"forward-in-time\" means we only need to do online calculations through time without computing backward through time. This term is similarly used in the literature [16] and we have clarified it in the revision.\n\n8. Limitation. We have discussed the limitations of the work in Appendix E.\n\n[1] Zheng et al. Going deeper with directly-trained larger spiking neural networks. AAAI, 2021.\n\n[2] Li et al. Differentiable spike: Rethinking gradient-descent for training spiking neural networks. NeurIPS, 2021.\n\n[3] Fang et al. Deep residual learning in spiking neural networks. NeurIPS, 2021.\n\n[4] Deng et al. Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting. ICLR, 2022.\n\n[5] Wu et al. Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience, 2018.\n\n[6] Zhang and Li. Temporal spike sequence learning via backpropagation for deep spiking neural networks. NeurIPS, 2020.\n\n[7] Fang et al. Incorporating learnable membrane time constant to enhance learning of spiking neural networks. ICCV, 2021.\n\n[8] Xiao et al. Training feedback spiking neural networks by implicit differentiation on the equilibrium state. NeurIPS, 2021.\n\n[9] Amir et al. A low power, fully event-based gesture recognition system. CVPR, 2017.\n\n[10] Shrestha and Orchard. Slayer: Spike layer error reassignment in time. NeurIPS, 2018.\n\n[11] Kaiser et al. Synaptic plasticity dynamics for deep continuous local learning (DECOLLE). Frontiers in Neuroscience, 2020.
\n\n[12] Bellec et al. Long short-term memory and learning-to-learn in networks of spiking neurons. NeurIPS, 2018.\n\n[13] Bellec et al. A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications, 2020.\n\n[14] Bohnstingl et al. Online spatio-temporal learning in deep neural networks. IEEE Transactions on Neural Networks and Learning Systems, 2022.\n\n[15] Kim et al. Neural architecture search for spiking neural networks. arXiv preprint arXiv:2201.10355.\n\n[16] Neftci et al. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine, 2019.\n", " Thank you very much for appreciating our work. We respond to your valuable comments as follows.\n\n1. About the related work Yin et al. (2021), and the difference between our work and theirs.\n\nThanks for the reference. Yin et al. (2021) is a recent work that directly leverages the RNN training method named forward propagation through time (FPTT) [1] to train spiking neural networks with the help of surrogate gradients. Moreover, they propose a new liquid spiking neuron whose time constant depends on the input and previous membrane potentials and show that FPTT should be combined with this neuron for good results.\n\nOur work is different from theirs in three main aspects.\n\n(1) **Our training method is simpler and more efficient than FPTT.** \n\nThe original FPTT [1] trains recurrent neural networks by dynamically regularizing weights. It calculates gradients at each time step based on the current state and regularizes the update of weights by a penalty loss based on the running average of previous weights and the previous gradient. Yin et al. (2021) directly apply this method to SNNs and require heavy computation to regularize the update of parameters. As a comparison, we calculate gradients based on the tracked pre-synaptic activities and only need to update parameters according to simple rules, which is computationally efficient and could be easier to implement, e.g. on neuromorphic hardware. \n\n(2) **Our method has a more solid theoretical grounding for optimization.** \n\nThe major obstacle to training SNNs is that the spiking operation is discrete and non-differentiable. Therefore, directly applying RNN training methods to SNNs is problematic as the derivative of the Heaviside step function is 0 almost everywhere. Previous works that apply learning methods of RNNs to SNNs (including Yin et al. (2021)) use \"surrogate gradients\" (SG) to handle this problem, which substitutes the derivative of the step function with continuous approximations. However, gradient descent with such a method in the context of RNN-like training typically lacks theoretical clarity for optimization, since it is not the true gradient of the actual function, and the descent direction is not guaranteed. \n\nUnlike these works, we provide a more solid theoretical grounding from a new perspective. We do not try to seek the exact equivalence to gradient calculation by BPTT (or similar methods for RNN) with SG. Instead, we connect OTTT with another branch of SNN training methods, i.e., methods based on spike representation, which is better suited for theoretical analysis of optimization.
This branch of methods builds the connection between the spike representations (e.g., the (weighted) firing rate or spiking time) of neurons in an ANN-like closed form that is sub-differentiable. So gradients can be calculated through the spike representation and are well defined. We prove that gradients of OTTT can provide a similar descent direction as these gradients based on spike representation and therefore provide a theoretical grounding for optimization in the context of the optimization problem formulated by spike representation. This also provides a connection between the two mainstream SNN training methods, i.e., BPTT with SG and spike representation-based methods.\n\n(3) **Our experiments can scale to large-scale tasks** including ImageNet classification and are based on the common LIF neuron, while Yin et al. (2021) do not consider the large-scale problem and require a more complex neuron model.\n\nWe have added the citation and discussion in the revised paper.\n", " 2. \"The tasks are quite \"static\", and do not immediately seem to need much memory. The approach would be less valid for real-world, time-varying data.\"\n\nIn our experiments, the inputs of the image classification tasks including CIFAR-10, CIFAR-100, and ImageNet are static, while the inputs of the neuromorphic dataset DVS-CIFAR10, which is converted from CIFAR-10 by DVS cameras, are dynamic. Our theorems apply to convergent inputs (i.e., the weighted average of the input sequence converges), so our method can work well on time-varying but convergent data such as DVS-CIFAR10. \n\nFurthermore, we supplement an additional experiment on DVS128-Gesture [3], which contains 11 kinds of hand gestures recorded by a DVS camera and is more real-world and time-varying. The results are below:\n\n| Method | Network structure | Time steps | Accuracy |\n| :----: | :----: | :----: | :----: |\n| SLAYER [4] | 8-layer CNN | 300 | 93.64$\\pm$0.49\\% |\n| DECOLLE [5] | 3-layer CNN | 1800 | 95.54$\\pm$0.16\\% |\n| BPTT [6] | 8-layer CNN (PLIF, BN) | 20 | 97.57\\% |\n| BPTT [6] | 8-layer CNN (LIF, BN) | 20 | 96.88\\% |\n| BPTT | VGG (sWS) | 20 | 96.88\\% |\n| OTTT$_A$ (ours) | VGG (sWS) | 20 | 96.88\\% |\n\nIt shows that our method is also applicable to such time-varying data and achieves the same high performance as BPTT. The SOTA result [6] incorporates additional techniques to learn the membrane time constant, while we do not dive into such techniques (actually there are only 288 test samples and the 0.69\\% accuracy gap corresponds to 2 samples). While our theorems mainly consider convergent inputs, it would be interesting future work to further consider the theoretical grounding for time-varying non-convergent inputs.\n\nBesides, the memory costs in our comparison are not for inputs but for intermediate variables of unfolded SNNs during training. Even for static inputs, BPTT for SNNs must maintain the computational graph unfolded along the simulation time, while our OTTT does not, and this also holds for dynamic inputs.\n\n3. About \"the effects of the assumptions on the proof\".\n\n(1) *\"The reset moment is ignored\"*. This is a part of our method rather than an assumption. The reset operation is ignored so that we can track the pre-synaptic activities of each neuron to decouple the temporal dependency. This further enables our gradients to be aligned with gradients by spike representation, which supports the proof of the descent direction.
Note that training non-differentiable SNNs by BPTT with SG is theoretically unclear, so we do not seek the exact equivalence with the form of BPTT but build the connection with gradients based on spike representation and prove the descent guarantee for the optimization problem (as explained in the paper and the second point in the above response to question 1). \n\n(2) *\"Does the equilibrium condition depend on constant inputs? And why can only equilibria happen and not limit cycles?\"* The equilibrium condition depends on constant or convergent inputs, i.e. the weighted average inputs $\\mathbf{\\overline{x}}[t]=\\frac{\\sum_{\\tau=0}^t \\lambda^{t-\\tau}\\mathbf{x}[\\tau]}{\\sum_{\\tau=0}^t \\lambda^{t-\\tau}}$ converge through time, $\\mathbf{\\overline{x}}[t]\\rightarrow \\mathbf{x^*}$ (line 142 in the paper). The equilibrium (but not a limit cycle) would happen with contractive recurrent connections. This is proven in [2] according to the contractive mapping theorem, and we also consider contractive recurrent connections.\n\n4. About the instantaneous loss.\n\nWe would like to clarify that the instantaneous loss at time $t$ is $L[t]=\\frac{1}{T}\\mathcal{L}\\left(\\mathbf{s}^N[t], \\mathbf{y}\\right)$, and considering all time steps the total loss is $L\\coloneqq\\sum_{t=1}^TL[t]$. So the instantaneous loss can be computed independently at each time step, while the loss based on the firing rate depends on all time steps and does not support online gradients. We have clarified this in the revision.\n\n5. \"Intuitively, what is remembered in the trace that allows to bypass remembering all activations over time?\"\n\nIntuitively, the tracked pre-synaptic activities $\\hat{\\mathbf{a}}^l[t] = \\sum_{\\tau \\leq t}\\lambda^{t-\\tau}\\mathbf{s}^l[\\tau]$ maintain the previous spikes with coefficients related to the time constant of the LIF neuron, so propagation through the trace can account for previous spikes.
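For concreteness, here is a small numpy check of the two quantities discussed in answers 4 and 5, the running pre-synaptic trace and the per-step instantaneous loss. The spike trains, target, and mean-squared loss are hypothetical placeholders added for illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
T, lam = 8, 0.5
s = (rng.random((T, 4)) < 0.4).astype(float)  # spike trains of 4 neurons over T steps

# Online trace update vs. the closed-form sum it implements.
trace = np.zeros(4)
for t in range(T):
    trace = lam * trace + s[t]                # a_hat[t] = lam * a_hat[t-1] + s[t]
closed_form = sum(lam ** (T - 1 - tau) * s[tau] for tau in range(T))
assert np.allclose(trace, closed_form)

# Instantaneous losses: each L[t] uses only the step-t output,
# and the total loss is simply their sum.
y = np.array([0.0, 1.0, 0.0, 0.0])            # hypothetical target
L_t = [np.mean((s[t] - y) ** 2) / T for t in range(T)]
total_loss = sum(L_t)
```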
\n", " 6. \"What are the gradients of the \"spike representation\"?\"\n\nThe gradients of the spike representation are gradients calculated through the closed-form transformation between the spike representations of neurons in different layers, which are not the gradients from BPTT. In this work, we consider the weighted firing rate as the spike representation: $\\mathbf{a}[t]=\\frac{\\sum_{\\tau=0}^t \\lambda^{t-\\tau}\\mathbf{s}[\\tau]}{\\sum_{\\tau=0}^t \\lambda^{t-\\tau}}$, and the closed-form transformation is $\\mathbf{a}^{l+1}[T] \\approx \\sigma\\left(\\frac{1}{V_{t h}}\\left(\\mathbf{W}^{l} \\mathbf{a}^{l}[T]+\\mathbf{b}^{l+1}\\right)\\right)$. The gradients of the spike representation are calculated as $\\frac{\\partial L}{\\partial \\mathbf{W}^l}=\\frac{\\partial L}{\\partial \\mathbf{a}^N[T]}\\prod_{i=N-1}^{l+1}\\frac{\\partial \\mathbf{a}^{i+1}[T]}{\\partial \\mathbf{a}^i[T]}\\frac{\\partial \\mathbf{a}^{l+1}[T]}{\\partial \\mathbf{W}^l}$ (refer to the \"Spike Representation\" paragraph in Section 3.2). As explained in the paper and the second point in the above response to question 1, direct gradients from BPTT for non-differentiable SNNs are problematic, and BPTT with surrogate gradients is not theoretically clear for optimization. We build the connection between gradients of OTTT and gradients based on spike representation (which is theoretically clearer) to prove the descent guarantee for the optimization problem.\n\n7. \"What is meant with \"time steps\" in Fig 2?\"\n\n\"Time Steps\" are the discrete time steps used to simulate SNNs. The simulation of forward SNN dynamics is discretized and unfolded over time, and this applies to SNN models regardless of the training method.\nFig. 2 shows the memory cost comparison between BPTT and OTTT under different settings of the simulation time steps of SNNs.\n\n[1] Kag and Saligrama. Training recurrent neural networks via forward propagation through time. ICML, 2021.\n\n[2] Xiao et al. Training feedback spiking neural networks by implicit differentiation on the equilibrium state. NeurIPS, 2021.\n\n[3] Amir et al. A low power, fully event-based gesture recognition system. CVPR, 2017.\n\n[4] Shrestha and Orchard. Slayer: Spike layer error reassignment in time. NeurIPS, 2018.\n\n[5] Kaiser et al. Synaptic plasticity dynamics for deep continuous local learning (DECOLLE). Frontiers in Neuroscience, 2020.\n\n[6] Fang et al. Incorporating learnable membrane time constant to enhance learning of spiking neural networks. ICCV, 2021.\n", " Thank you very much for appreciating our work. We respond to your valuable comments as follows.\n\n1. \"The results of OTTT with feedback connections and with smaller batch sizes need more baselines.\"\n\nThank you for the suggestion. We follow your advice and conduct additional experiments to compare our method with the BPTT baseline. The results with feedback connections are below:\n\n| Network structure | Method | Accuracy |\n| :----: | :----: | :----: |\n| VGG | OTTT$_O$ | 71.05$\\pm$0.06\\% (71.11\\%) |\n| VGG-F | OTTT$_O$ | 72.63$\\pm$0.23\\% (72.94\\%) |\n| VGG | BPTT | 69.06$\\pm$0.07\\% (69.15\\%) |\n| VGG-F | BPTT | (69.49\\%) |\n\nFirst, it can be seen from the table above that using feedback connections improves performance when trained by BPTT with surrogate gradients. Indeed, this is a known fact demonstrated in previous works [1]. Second, we can see that in different settings, compared with BPTT, OTTT consistently achieves higher performance, and the improvement of OTTT from feedback connections is more significant than that of BPTT.\n\nThe results of training with batch size 1 are below:\n\n| Method | Batch Size | Accuracy |\n| :----: | :----: | :----: |\n| OTTT$_A$ / OTTT$_O$ | 128 | 88.20\\% / 88.62\\% |\n| OTTT$_A$ / OTTT$_O$ | 1 | 88.07\\% / 88.50\\% |\n| BPTT | 1 | 87.51\\% |\n\nIt shows that a good model can be obtained by BPTT/OTTT with batch size 1, and OTTT performs better. This is because we do not use batch normalization (as explained in Section 4.4), so the model is less sensitive to the batch size. \nCombined with the online-in-time property of OTTT, which corresponds to the temporally local property of biological learning rules and rules on neuromorphic hardware (BPTT does not have this property), such a training scenario could pave a path for online on-chip learning. \n\n2. \"It would also have been interesting to see results with fully recurrent architectures.\"\n\nThanks for the suggestion again. We conduct an experiment with a recurrent spiking neural network on the Fashion-MNIST classification task. The input is flattened into a 784-dimensional vector and is connected to 400 spiking neurons with recurrent connections. The outputs of the neurons are then connected to a readout layer for classification. We compare BPTT, OTTT$_A$, and OTTT$_O$ with the results in [2] and [3].
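For reference, here is a minimal numpy sketch of the forward pass of the recurrent architecture just described (784-dimensional input, 400 recurrent spiking neurons, non-spiking readout). The weight scales, leak, and threshold are hypothetical placeholders, and no training is shown; this is an editorial illustration, not the experiment's actual code.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_cls, T = 784, 400, 10, 5
lam, v_th = 0.5, 1.0

W_in  = rng.normal(scale=0.05, size=(n_hid, n_in))
W_rec = rng.normal(scale=0.05, size=(n_hid, n_hid))
W_out = rng.normal(scale=0.05, size=(n_cls, n_hid))

x = rng.random(n_in)                          # a flattened input image (placeholder)
v = np.zeros(n_hid)
spikes = np.zeros(n_hid)
readout = np.zeros(n_cls)

for t in range(T):
    v = lam * v + W_in @ x + W_rec @ spikes   # leaky integration with recurrence
    spikes = (v >= v_th).astype(float)
    v -= v_th * spikes                        # soft reset after spiking
    readout += W_out @ spikes                 # non-spiking readout accumulates

prediction = readout.argmax()                 # class decision from accumulated readout
```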
The results are below:\n\n| Method | Time steps | Accuracy |\n| :----: | :----: | :----: |\n| ST-RSBP [2] | 400 | 90.00$\\pm$0.14\\% (90.13\\%) |\n| IDE [3] | 5 | 90.07$\\pm$0.10\\% (90.25\\%) |\n| BPTT | 5 | (90.58\\%) |\n| OTTT$_A$ (ours) | 5 | (90.36\\%) |\n| OTTT$_O$ (ours) | 5 | (90.40\\%) |\n\nFrom the table, we can see that for this relatively simple model, the results of OTTT and BPTT are very similar and BPTT performs slightly better. \n\n3. \"Does $s[\\tau]$ in line 141 refer to the output spikes of each layer?\"\n\nYes, the notation here considers a group of neurons that can refer to each layer, and $s[\\tau]$ represents the output spikes of the neurons at time $\\tau$.\n\n4. \"Can OTTT also be applied to non-spiking neural network architectures?\"\n\nCurrently, OTTT is designed for SNNs. The derivation is based on the spiking neuron dynamics and spike signals, and it is designed to handle the optimization of non-differentiable SNNs with a theoretical guarantee. \n\n5. About suggestions for writing and organization.\n\nThank you very much for your valuable suggestions. We have carefully considered them and revised the paper. In this revision, we re-organize the order of Section 3.2 and modify some descriptions. \nResponses to several questions are below:\n\n(1) \"It's not clear why BPTT with SG leads to extremely low latency\". \n\nPrevious works empirically show that training models by BPTT with SG can achieve high performance with a very small number of time steps compared with other methods. This is why we call it \"with extremely low latency\". We have made the description more precise in this revision. \n\n(2) \"What does it mean for online learning to be a learning rule\".\n\nWe mean that the learning rule is temporally local and has the online property. We modify the description in the revision as \"... inconsistent with the online property of biological learning rules and rules on neuromorphic hardware\".\n\n6. \"Related work that uses eligibility traces should be cited\".\n\nThank you for pointing it out. We have already cited Bellec et al. (2020) in the originally submitted paper (lines 109 \\& 196 in the paper). Murray (2019) uses eligibility traces to train non-spiking recurrent networks. We have added the references in the revision.\n", " 7. \"Comparison with equivalent non-spiking architecture trained with BPTT as an (upper) baseline\".\n\nThank you for the suggestion. Models in Table 1 are feed-forward networks and therefore the equivalent non-spiking architectures will not be unfolded through time. So they will just be trained by BP. We supplement the results on CIFAR-10 and CIFAR-100 below. The ANN models are based on the ReLU activation instead of spiking neurons.\n\nResults on CIFAR-10 (the last line is ANN):\n\n| Method | Network structure | Params | Time steps | Accuracy |\n| :----: | :----: | :----: | :----: | :----: |\n| ANN-SNN | VGG-16 | 40M | 16 | (92.29\\%) |\n| BPTT | ResNet-19 (tdBN) | 14.5M | 6 | (93.16\\%) |\n| BPTT | 9-layer CNN (PLIF, BN) | 36M | 8 | (93.50\\%) |\n| BPTT | VGG (sWS) | 9.2M | 6 | 92.78$\\pm$0.34\\% (93.23\\%) |\n| OTTT$_A$ (ours) | VGG (sWS) | 9.2M | 6 | 93.52$\\pm$0.06\\% (93.58\\%) |\n| OTTT$_O$ (ours) | VGG (sWS) | 9.2M | 6 | 93.49$\\pm$0.17\\% (93.73\\%) |\n| **ANN** | VGG (sWS) | 9.2M | N.A. | (94.43\\%) |
\n\nResults on CIFAR-100 (the last line is ANN):\n\n| Method | Network structure | Params | Time steps | Accuracy |\n| :----: | :----: | :----: | :----: | :----: |\n| ANN-SNN | VGG-16 | 40M | 400-600 | (70.55\\%) |\n| Hybrid Training | VGG-11 | 36M | 125 | (67.87\\%) |\n| DIET-SNN | VGG-16 | 40M | 5 | (69.67\\%) |\n| BPTT | VGG (sWS) | 9.3M | 6 | 69.06$\\pm$0.07\\% (69.15\\%) |\n| OTTT$_A$ (ours) | VGG (sWS) | 9.3M | 6 | 71.05$\\pm$0.04\\% (71.11\\%) |\n| OTTT$_O$ (ours) | VGG (sWS) | 9.3M | 6 | 71.05$\\pm$0.06\\% (71.11\\%) |\n| **ANN** | VGG (sWS) | 9.3M | N.A. | (73.19\\%) |\n\nDue to the limited time, we are unable to provide ImageNet results. For the neuromorphic dataset DVS-CIFAR10, the equivalent feedforward non-spiking ANNs may not directly handle the dynamic inputs. So we do not consider them. Usually, SNNs with a very small number of time steps do not reach the performance of equivalent ANNs due to the information propagation with discrete spikes rather than floating-point numbers. The results of our model with 6 time steps are acceptable.\n\n8. Limitations. We have discussed the limitations of the work in Appendix E.\n\n[1] Kim et al. Neural architecture search for spiking neural networks. arXiv preprint arXiv:2201.10355.\n\n[2] Zhang and Li. Spike-train level backpropagation for training deep recurrent spiking neural networks. NeurIPS, 2019.\n\n[3] Xiao et al. Training feedback spiking neural networks by implicit differentiation on the equilibrium state. NeurIPS, 2021.\n", " In this paper, the authors propose an online learning algorithm (OTTT) that's applicable to spiking neural networks (SNNs). OTTT uses a combination of eligibility traces (a trace of the past activity of the neuron) and instantaneous loss values to achieve an online algorithm. The authors show its connection to spike representation-based methods as well as the three-factor learning rule and demonstrate empirically that this method works better than existing methods for training feedforward spiking neural networks.\n ## Strengths\n\nThe paper derives a novel online learning rule, primarily applied to feed-forward SNNs. The connection to spike representation derived in the paper is very interesting, and connects these seemingly disparate methods. \n\nThe empirical results are impressive, and strongly support the utility of OTTT. The fact that OTTT works with a batch size of 1 is particularly impressive, and makes it of strong practical interest for neuromorphic hardware. \n\nThe paper is also well written and the exposition is clear and easy to understand.\n\n## Weaknesses\n\nThe results of OTTT with feedback connections and with smaller batch sizes need more baselines, esp. compared to BPTT, to understand how it stands in comparison to conventional methods, i.e. to understand if the advantage there is because of OTTT or other reasons.\n\nIt would also have been interesting to see results with fully recurrent architectures.\n\nMinor: The connection with spike representation methods is certainly interesting, but might be a bit over-emphasized and could be confined to a single section so that it doesn't interrupt the flow of OTTT. Specifically, the Spike Representation heading in Sec. 3.2 threw me off a bit, since I was looking for a connection with Sec. 4.1.
## Questions\n\n* Does $s[\\tau]$ in line 141 refer to the output spikes of each layer?\n* Can OTTT also be applied to non-spiking neural network architectures?\n\n## Suggestions\n\n* In the abstract:\n * l.5: it's not clear why BPTT with SG leads to extremely low latency. Rephrase?\n * l.6: theoretical unclarity -> lack of theoretical clarity.\n * l.11: what does it mean for online learning to be a learning rule? Rephrase?\n* l.32: The supervised training of SNNs is challenging not because of its \"complex neuron model\" but because of non-differentiability in the neuron model\n* Sec 2. I would suggest giving brief descriptions of individual references rather than grouping large numbers of references together with vague sentences, which doesn't convey much useful information about related work.\n* Related work that uses eligibility traces should be cited: e.g. (Murray 2019; Bellec et al. 2020)\n* l.142: A few details of how this can be proved would be useful.\n* Table 1: It would be helpful to see a comparison with the equivalent non-spiking architecture trained with BPTT as an (upper) baseline to put the performance of OTTT in perspective.\n\nBellec, G., Scherr, F., Subramoney, A., Hajek, E., Salaj, D., Legenstein, R., Maass, W., 2020. A solution to the learning dilemma for recurrent networks of spiking neurons. Nature Communications 11, 3625.\n\nMurray, J.M., 2019. Local online learning in recurrent networks with random feedback. eLife 8, e43299. The authors do not discuss the limitations of the work. The authors introduce a new approach to training spiking neural networks, online training through time (OTTT). The approach only requires constant memory since it does not need to backpropagate through time. Instead, it tracks presynaptic activities and leverages instantaneous loss and gradients. The proposed approach provides good results on typical benchmarks such as CIFAR-10 and 100.\n\n Strengths:\n* How to best train SNNs is still an open problem, especially if one aims for online learning on neuromorphic hardware. This article can form a substantial step in that direction.\n* If I understand the mathematical proof well, the authors show that their method's gradient points in the same direction as the BPTT one, without needing to track many variables in memory.\n* The results of the approach seem good.\n\nWeaknesses:\n* The work currently does not discuss other SNN approaches that also do not backpropagate gradients through time, such as: Accurate online training of dynamical spiking neural networks through Forward Propagation Through Time, B Yin, F Corradi, SM Bohte - arXiv preprint arXiv:2112.11231, 2021. https://arxiv.org/pdf/2112.11231.pdf\n* Some of the main intuitions behind not needing to backpropagate through time remain unclear (see questions).\n* The tasks to which the approach is applied are quite \"static\", and do not immediately seem to need much memory. Moreover, the approach seems to rely on neurons moving to an equilibrium state, which would be less valid for real-world, time-varying data. 1. How does the authors' work relate to other approaches that implement forward propagation through time?\n\n2. What are the effects of the assumptions on the proof? For instance, in line 167 the reset moment is ignored. Line 147-150: Does the equilibrium condition depend on constant inputs? And why can only equilibria happen and not limit cycles? \n\n3. The \"instantaneous\" loss is defined as a sum over time (line 184). How is that instantaneous and the definition higher up (line 182) not?
They both sum over time, although 1/T is inside the loss function higher up.\n\n4. Intuitively, what is remembered in the trace that allows to bypass remembering all activations over time? \n\n5. What are the gradients of the \"spike representation\"? Are these the gradients from BPTT?\n\n6. What is meant with \"time steps\" in Fig 2? The time steps BPTT goes back in time? How does this apply to OTTT?\n N/A", " The paper presents online training through time (OTTT) for SNNs, which is derived from BPTT to enable learning without backward passes through time. The goal compared to other approaches is to improve memory consumption, biological plausibility, or latency. Theoretical analysis shows that the gradients of OTTT are in a direction that is suitable for optimization. Experiments show that, compared to BPTT, OTTT has an advantage in terms of memory consumption, especially as training sequence lengths increase. The paper addresses a topic that has attracted a lot of interest from the community. It also includes significant theoretical analysis and has put effort into explaining the derivations.\n\nOn the other hand, the paper:\n- does not even mention some recent methods (FPTT, OSTL) [1, 2] that have similar goals, let alone explain the differences. Therefore the extent of the advance in this paper cannot be evaluated. \n- does not provide experimental comparisons to such methods as [1-3], but only to BPTT\n- if any advantage - theoretical or other - exists compared to these prior methods, the paper does not summarise clearly what element in the new theoretical derivation enables that advantage.\n- the work does not aim to achieve something brand new, as successful SNN training has already been possible. Therefore its impact is expected to be limited.\n- does not show results on sequential tasks, but only on image classification, even though the learning rule concerns recurrent networks\n- does not compare experimentally with non-spiking networks, in terms of accuracy, memory, or computational efficiency.\n- could improve its writing, by removing redundant repetitions, providing a sketch/summary of the derivation before the derivation, and by clarifying terms such as forward-in-time\n\n[1] Yin, Bojian, Federico Corradi, and Sander M. Bohte. \"Accurate online training of dynamical spiking neural networks through Forward Propagation Through Time.\" arXiv preprint arXiv:2112.11231 (2021).\n[2] Bohnstingl, Thomas, et al. \"Online spatio-temporal learning in deep neural networks.\" IEEE Transactions on Neural Networks and Learning Systems (2022).\n[3] Bellec, Guillaume, et al. \"A solution to the learning dilemma for recurrent networks of spiking neurons.\" Nature Communications 11.1 (2020): 1-15. Could the authors explain what their expectations are from the comparisons and analyses that are missing, if they were performed?\nCould these actually be performed? There is no discussion of limitations by the authors." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "x1cpu2CSdHK", "wA3T4kFjAuV", "xLDk7a-OJDu", "SzWqUlzjrF", "-w21uEDkDN7", "evzCJ1zUn5W", "nips_2022_Siv3nHYHheI", "evzCJ1zUn5W", "evzCJ1zUn5W", "evzCJ1zUn5W", "evzCJ1zUn5W", "JejHTho56Vh", "JejHTho56Vh", "JejHTho56Vh", "WgB_bPmVz-a", "WgB_bPmVz-a", "nips_2022_Siv3nHYHheI", "nips_2022_Siv3nHYHheI", "nips_2022_Siv3nHYHheI" ]
nips_2022_y5ziOXtKybL
Asymptotic Properties for Bayesian Neural Network in Besov Space
Neural networks have shown great predictive power when dealing with various unstructured data such as images and natural languages. The Bayesian neural network captures the uncertainty of prediction by putting a prior distribution for the parameter of the model and computing the posterior distribution. In this paper, we show that the Bayesian neural network using spike-and-slab prior has consistency with nearly minimax convergence rate when the true regression function is in the Besov space. Even when the smoothness of the regression function is unknown the same posterior convergence rate holds and thus the spike-and-slab prior is adaptive to the smoothness of the regression function. We also consider the shrinkage prior, which is more feasible than other priors, and show that it has the same convergence rate. In other words, we propose a practical Bayesian neural network with guaranteed asymptotic properties.
Accept
This work conducts a novel study and extends the results on asymptotic convergence of Bayesian ReLU networks from the Hölder space to the more general Besov space. The reviewers consider it "a strong theoretical results closing a gap for posterior contraction of BNN in Besov spaces". The authors' feedback addressed a few concerns in the initial reviews, including the lack of clarity, the question on the technical challenges of extending to a Besov space, and an error in the proof. During the author-reviewer discussion period, the authors also corrected a constant which determines the complexity of the neural network model. As a result, the condition of the theory became "harsh" (see authors' response to reviewer x58D), and the numerical results did not satisfy the theory's requirement any more. The authors provided an updated version of the paper to include the change and moved the experiments to the appendix. Nonetheless, the reviewers did not think that change decreased its theoretical value and still considered it above the threshold for acceptance due to its novelty. The remaining concerns from the reviewers are the lack of evaluations for the non-smoothness assumption and its usage in some real applications, and the possible difference between a purposely designed fixed prior and a learnable prior.
train
[ "MU0W4dh2stK", "X1xWcqGvks", "v43t0jJ07vJ", "gusvobF1w4a", "C9EXjBLuqSj", "AIMZ-WrVGHS", "i3wTSVlBbWN", "75nQhJ6gQL8", "S8Ougl3jOfM", "hdXKYNRtVYh", "UU1Amylx77", "Y5z3ztdZWOM" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for this clarification. It is clear now. ", " Thanks for your constructive questions.\n\n## Q. \n\n> The question about the learnable hyperparameters is about the prior derived in the paper, which seems to be a fixed prior. However, we normally set a learnable prior to practice. Will this learnable prior be better than your derived one?\n\nWe think it would be nice to consider a learnable prior that estimates the hyperparameters of the prior as you said. For instance, we can assume the half-Cauchy distribution of the standard deviation of the parameters. However, there is a problem in that the amount of computation increases, and in the current submission, good enough results were obtained with a simpler fixed prior. We will consider the theoretical study and actual comparison of the learnable prior. Thanks for the suggestion.\n\n## Q. \n\n> All in all, I suggest the authors to do more experiments on real datasets to show potential applications.\n\nThank you for your good suggestions. We will consider experiments that can apply the models we proposed to real data.", " Thanks for your responses! \n\nThe question about the learnable hyperparameters is about the prior derived in the paper, which seems to be a fixed prior. However, we normally set a learnable prior to practice. Will this learnable prior be better than your derived one? \n\nNeural network-based method may also implicitly assume an underlying smooth function, so we can draw the conclusion that the better performance of neural network-based methods indicates the underlying non-smooth function. The authors may compare your methods with the old one (within the Besov space and Hölder space separately) on the real-world datasets in a fair setting to see the different performances. If the method within Besov space can achieve better performance in some datasets, we may know the non-smooth function approximation may be beneficial. \n\nAll in all, I suggest the authors to do more experiments on real datasets to show potential applications. \n\n", " Thank you for your sharp point on the part. In short, we presented numerical examples as one of the numerical evidence to support our theory.\n\nOur research deals with sufficient conditions to achieve (nearly) optimal posterior consistency. In other words, it was shown that the model satisfying the conditions of the paper had posterior consistency. As mentioned in the revised submission, our conditions are harsh as we have to consider a fairly complex model. For instance, a sufficient conditions for estimating the functions in the experiments are that, according to Theorem, the depth $L_n$ of the neural network model should be greater than 30, the width $W_n$ greater than 400, and proper Gaussian mixture prior. We tried to show through numerical experiments that a suitable neural network has sufficiently good theoretical properties even if the theoretical conditions are partially satisfied. In experiments, we showed that valid inference is possible by using the prior as it is, and considering the simple models which have a depth of 5 and a width of 200. In numerical experiments, we wanted to explain the tendency of theoretical results and to show the meaning of the word 'practical' in our research.", " Thank you very much for your detailed answer. You didn't have to go through answering my specific comments, but thank you that you did. \n\nCould you please elaborate a bit on \"it does not satisfy the conditions of the paper\"? 
Does this mean that the numerical examples are just a proxy now and there is no numerical evidence that supports your theory that satisfies all the conditions? Please note that this is fine if this is the case, I just want to know whether this is the case or not.", " Dear reviewers,\n\nThanks for the constructive and detailed feedback. Based on your review, we have revised the paper as follows:\n\n* The value of the constant $c_{(d, m)}$, which determines the complexity of the neural network model, was corrected by checking the references. Accordingly, the numerical experiments in the submitted paper are also modified. The results related to the numerical experiments were moved to the appendix based on the advice of reviewer x58D.\n* We changed the algorithm in the numerical examples from the variational approximation to the NUTS algorithm.\n* As pointed out by reviewer T5q1, we have corrected the proof of Theorem 1. We confirmed that there were no problems with the result of the theorem.\n* Based on all of your helpful feedback, we have corrected ambiguous and incorrect expressions in spelling, notation, and the proofs. In particular, we rewrote the abstract and conclusion more clearly.", " \nThanks for your comments and suggestions.\n\n## Q. \n\n> The authors did not highlight the challenges or difficulties in how to extend the results from the Hölder space to the more general Besov space.\n\n> Can you highlight the challenges of the proposed extensions?\n\nThe difficult points when considering the Besov space compared to the Hölder space were the following.\n\n1. The shape of the space was not intuitive. We could easily see that it was a larger (more general) space, including the Hölder space \"by the formula\", but it was not that easy to think of which functions were \"actually\" included. For this reason, we have mentioned in the paper example functions (including $f_1$ and $f_2$) that are easy to understand for the Besov space.\n2. To show posterior consistency using the Lemma in Ghosal and Van Der Vaart (2007), it was necessary to prove the upper bound of entropy and the lower bound of the prior mass. Of these, the latter is affected by the enlarged function space. From Suzuki (2018), we found an empirical minimizer close enough to the true function which is in the model space, and computed the lower bound of the prior mass using norm inequalities. In fact, it was difficult to find the necessary conditions of priors for the theorems. We checked the related works and found the conditions mentioned in the paper: (1) enough mass around zero, (2) a tail thick enough to sample the true function, (3) but not too much.\n\n## Q. \n\n> It would be better to compare with other options, like iid Gaussians, spike-and-slab, and the Gaussian priors with learnable hyperparameters.\n\n> In practice, we normally learn the parameters of priors from the data as well rather than just fixing them while using BNNs for data modelling. Is there any effect from such learnable priors?\n\nWe added the result of using the Gaussian prior and the Gaussian mixture prior proposed in the revised submission for simple synthetic data. 
Both models give similar results, but the Gaussian prior at $f_1$ and the Gaussian mixture prior at $f_2$ give slightly better results.\n\nNote: As replied to Reviewer x58D, it is an experimental result for a smaller model which does not satisfy the conditions of the paper.\n\n## Q.\n\n> Since the advantage of Besov space is its ability to include non-differentiable functions, can you find one or some real data that needs the functions from Besov space rather than Hölder space or even L^p space?\n\nWhen we first planned the study, we were interested in 'why does an artificial neural network model (a machine learning model) outperform a (traditional) statistical model?'. One of the answers was 'what if the relationship between explanatory and response data is in the form of a non-differentiable function?'. Of course, it is difficult to verify this in practice. This is because it is almost impossible to confirm the form of specific functions of explanatory variables and response variables in real data. However, empirically, we know that there are fields in which decision-tree-based models such as random forests or neural network models perform better than models that find smooth functional relationships such as kernel regression. For these data, it is natural that there would be a non-smooth relationship between the explanatory variable and the response variable. For this reason, we think that it is essential to compare the relationship between other models and deep learning models in follow-up studies.\n\nIn short, we speculate that there is a non-smooth relationship in the recent machine learning problems (image processing, natural language processing, etc.) where neural network models perform better than traditional statistical models.\n\n## Q. \n\n> Some minor problems include: the symbols should be explained, like "A" in (9), any conflict between "L_1" in Line 108 and "L_n" in (10), Line 158, "a expectation" and Line 185, "acontinuous"\n\nThanks for your detailed comments. We have corrected everything you mentioned in the revised submission.", " Thanks for your comments and suggestions. Overall, there were a few problems in the mathematical proof, but the results were unchanged. We found some issues caused by the ambiguous notation, and fixed all of them.\n\n## Q. \n\n> in appendix C1, proof of lemma 3 (35): can you quickly justify how you lower bound the norm?\n\nAre you asking about the process of calculating the **upper bound** in the inequality of Lemma 3 (35)? 
\n\nSince there were notations to be fixed in the existing submission, it has been corrected as follows.\n\n$$\lVert A_k^+(f)(x) \rVert_{\infty} \leq \max_j \lVert W_{j,:}^{(k-1)} \rVert_1 \lVert A_{k-1}^+(f)(x) \rVert_{\infty} + \lVert b^{(k-1)} \rVert_{\infty}$$\n\n$$\leq WB\lVert A_{k-1}^+(f)(x) \rVert_{\infty} + B$$\n\n$$\leq (W+1)(B \vee 1) \lVert A_{k-1}^+(f)(x) \rVert_{\infty} $$\n\n$$\leq (W+1)^{k-1}(B \vee 1)^{k-1},$$\n\nfor all $x$, where $A_{j, :}$ is the $j$-th row of the matrix $A$.\n\nIf your question is about this part, we obtained the above inequality as follows: \n\nFirst, from the triangle inequality,\n\n$$ \lVert Ax + b \rVert_\infty \leq \lVert Ax \rVert_\infty + \lVert b \rVert_\infty.$$\n\nNext, by the definition of the matrix norm and the properties of the infinity norm of the matrix, we obtain \n$$\lVert Ax\rVert_\infty \leq \lVert A \rVert_\infty \lVert x \rVert_\infty = \max_j \lVert A_{j,:}\rVert_1 \lVert x \rVert_\infty.$$\n\nAfter the second inequality, we used the conditions of the parameter space, and mathematical induction for the last inequality.\n\n## Q. \n\n> in the proof of theorem 1 (and subsequently in the paper for the other analog proofs), it is not clear how the prior on the connectivity pattern (i.e. on $\gamma$) appears in the proof of the prior thickness result (43)-(44)-(45), should there be a $\binom{T_n}{S_n}$ factor somewhere or am I making a mistake?\n\nYou are right. It was our mistake. We corrected the proof in the revised paper. We confirmed that the results of the theorem did not change. Sorry for the confusion and thank you for correcting the proof.\n\n## Q. \n\n> It could be interesting to, at least, discuss why you are only presenting an adaptive result for the first prior.\n\nIn Theorem 3, we mentioned the conditions that the shrinkage prior must satisfy. As you can see, conditions such as the tail probability depend on the parameters $L_n, ~W_n$, and $B_n$. For adaptive estimation like Theorem 2, it is necessary to propose a general shrinkage prior distribution that satisfies all conditions even when $L_n, ~W_n$, and $B_n$ are varying, but this has not been solved yet. We have planned this as future work.", " \nThanks for your comments and suggestions. First of all, we are sorry that our exposition was unclear. \n\n## Q.\n\n> Wouldn't MCMC methods with true posterior be more appropriate for the purpose of the numerical example?\n\n> There are several claims that the paper proposes a "practical" model, however, there is no empirical evidence on that. Some discussion on actual computational time used and how feasible the inference with the considered priors would be appreciated\n\nThank you for your suggestion. We changed the algorithm in the numerical examples from the variational approximation to the NUTS algorithm (a Hamiltonian Monte Carlo algorithm).\n\nIn addition, as mentioned in the revised submission, we confirmed that some calculations of the model parameter proposed in the submitted paper were wrong and corrected them. The complexity of the model required for the experiments has grown, and it is difficult to implement it numerically. Thus, we replaced the numerical experiments with a smaller model, which shows interesting results even though **it does not satisfy the conditions of the paper**. \n\nWe plan to use SGHMC (Stochastic gradient Hamiltonian Monte Carlo) as the algorithm for the real data examples; a minimal sketch of the NUTS setup mentioned above is given below. 
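For readers who want to reproduce this kind of sampler, here is a minimal NUTS setup for a small ReLU network. NumPyro is our own choice for the illustration, and the width, prior scales, noise level, and data names are illustrative assumptions, not the configuration used in the paper.

```python
import jax.random as random
import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def bnn(x, y=None, width=32):
    # One-hidden-layer ReLU network with iid Gaussian priors on all parameters.
    d = x.shape[-1]
    w1 = numpyro.sample("w1", dist.Normal(0.0, 1.0).expand([d, width]).to_event(2))
    b1 = numpyro.sample("b1", dist.Normal(0.0, 1.0).expand([width]).to_event(1))
    w2 = numpyro.sample("w2", dist.Normal(0.0, 1.0).expand([width, 1]).to_event(2))
    mu = (jnp.maximum(x @ w1 + b1, 0.0) @ w2).squeeze(-1)
    numpyro.sample("y", dist.Normal(mu, 0.1), obs=y)  # assumed fixed noise scale

mcmc = MCMC(NUTS(bnn), num_warmup=500, num_samples=500)
# mcmc.run(random.PRNGKey(0), x_train, y=y_train)  # x_train/y_train: your own data
```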
In fact, we already checked that the following papers suggest methods for extracting MCMC samples from the spike-and-slab prior distribution through the SGHMC algorithm.\n\n* Sun, Y., Song, Q., & Liang, F. (2022). Learning sparse deep neural networks with a spike-and-slab prior. Statistics & Probability Letters, 180, 109246.\n* Song, Q., Sun, Y., Ye, M., & Liang, F. (2020). Extended stochastic gradient Markov chain Monte Carlo for large-scale Bayesian variable selection. Biometrika, 107(4), 997-1004.\n\nSince we use the weight and bias parameters of the network model estimated by the frequentist method as the initial value of the NUTS algorithm, it will be possible to obtain more efficient MCMC samples than the above papers. This method would be applicable to large models such as ResNet. Unfortunately, due to lack of time, the experimental results were not obtained for this submission. We would like to propose the inference algorithms together in follow-up studies.\n\n## Q. \n\n> Line 10, "Posterior consistency", and more:\n\nThanks for your detailed comments. In the revised submission, we have corrected the ambiguous or insufficient descriptions.\n\n## Q. \n\n> Line 96, "u \\in {0, 1, …, d}"?\n\nThe part you pointed out is about 'with what component to differentiate', and the existing notation is correct. For instance, $D^u f(x)$ with $u=(1, 1, 1, ..., 1)$ means \n$$ \\frac{\\partial^d f(x)}{\\partial x_1 \\partial x_2 \\cdots \\partial x_d}.$$\n\n## Q.\n\n> $D^u,~\\delta$, $N$ for the normal distribution, and more:\n\nThanks for your detailed comments. In the revised submission, we have corrected notations that may cause confusion.\n\n## Q.\n\n> Posterior consistency, $\\eta$ in Eq. (7), and more:\n\nThanks for your detailed comments. In the revised submission, we have added explanations for the undefined terms and symbols you mentioned.\n\n## Q. \n\n> Theorem 1, M_n is not defined, how does it depend on n?\n\n$M_n$ means any sequence that is sufficiently large and increasing indefinitely as $n$ grows. Anything is fine as long as it goes to infinity.\n\n## Q. \n\n> Figures 2 and 3 can also lose the histogram part to save some space. (The histogram part can go into the appendix also)\n> Figures 2-3. Are dots training data points? Left column plots can use "x" and "y" labels for axis, also subplots without labels look too strange for me. It seems they may have "(a)", "(b)", … labels\n\nAs you advised, we moved the figures to the appendix and wrote the description in more detail than before. Thank you for your helpful advice.\n\n## Q. \n\n> Line 218. "shrinkage prior" - "spike-and-slab prior" was meant?\n\nAs you pointed out, that was our mistake. Thanks for the detailed comments. We have corrected the text you mentioned in the revised submission.\n\n## Q. \n\n> Lines 229-230, "Experiments … will also be of great help…" - not clear, is it suggested as future work? It would indeed be nice to have some comparison in the current submission.\n\nWhat you pointed out is correct. We are planning to compare with statistical models (in a narrow sense) such as the Bayesian LARK B-spline model and the Bayesian neural network model as a follow-up study. From a theoretical point of view, it has been shown that the models to be compared are (nearly) optimal, but more research is needed in implementation to make comparisons with real data.\n\n## Q. \n\n> Minor:\n\nThanks for your detailed comments. 
We have corrected everything you mentioned in the revised submission.", " This work extends the results of Polson and Rocková [2018] on the minimax contraction rate of Bayesian ReLU networks from the Hölder space to the more general Besov space. Such a result brings more freedom to the underlying functional form behind the data with a similar minimax contraction rate guarantee. Besides the spike-and-slab prior, the result with a shrinkage prior is also given, with more computational efficiency. The strong point of this work is the ability to extend the functional form modelled by the Bayesian ReLU network and its posterior contraction rate guarantee. The motivation is clear, and the idea is well presented.\n\nHowever, as the authors said, this is an extension of the existing work of Polson and Rocková [2018] with similar backgrounds, settings, and results, and the derivation looks a little straightforward, so the innovation is limited. The authors did not highlight the challenges or difficulties in how to extend the results from the Hölder space to the more general Besov space. \n\nAnother main weakness is the evaluation. Only one simple example is given to show the ability of the designed prior for BNN regression. It would be better to compare with other options, like iid Gaussians, spike-and-slab, and the Gaussian priors with learnable hyperparameters. In practice, we normally learn the parameters of priors from the data as well rather than just fixing them while using BNNs for data modelling. Is there any effect from such learnable priors? \n\nThe underlying functions are usually assumed to be continuous when using BNNs for real-data modelling. Since the advantage of Besov space is its ability to include non-differentiable functions, can you find one or some real data that needs the functions from Besov space rather than Hölder space or even L^p space? \n\nSome minor problems include: the symbols should be explained, like “A” in (9), any conflict between “L_1” in Line 108 and “L_n” in (10), Line 158, “a expectation” and Line 185, “acontinuous”\n The underlying functions are usually assumed to be continuous when using BNNs for real-data modelling. Since the advantage of Besov space is its ability to include non-differentiable functions, can you find one or some real data that needs the functions from Besov space rather than Hölder space or even L^p space? \n\nCan you highlight the challenges of the proposed extensions? N/A", " This paper presents a proof of a Bayesian posterior contraction rate in Besov spaces for Bayesian neural network priors for the regression problem on $[0,1]^d$. More specifically, two priors are presented: spike and slab and shrinkage (the latter being easier to implement). Under the frequentist assumption that the true regression function $f_0$ belongs to a Besov space $B_{p,q }^s$, it is shown that the two priors achieve minimax rates of convergence (up to log terms) with a (non-adaptive) choice of prior parameters (which includes a choice of depth, width & sparsity of the network). Moreover, it is shown that an adaptive rate of convergence over all smoothness indices $s>0$ is possible for a spike and slab prior with an additional prior on the architecture. This work extends previous results for Bayesian posterior contraction in Hölder spaces and frequentist minimax theory in Besov spaces for neural networks. This paper presents a strong theoretical result, closing a gap for posterior contraction of BNNs in Besov spaces. 
Concerning the priors, it presents a new (computationally more efficient) shrinkage prior that is shown to lead to good posterior contraction rates, which is (to the best of my knowledge) a new result even for BNNs in Hölder spaces.\nMoreover, I think the paper is overall very clear and well written. However, I have two questions:\n\n- in appendix C1, proof of lemma 3 (35): can you quickly justify how you lower bound the norm?\n\n- in the proof of theorem 1 (and subsequently in the paper for the other analog proofs), it is not clear how the prior on the connectivity pattern (i.e. on $\gamma$) appears in the proof of the prior thickness result (43)-(44)-(45), should there be a $\binom{T_n}{S_n}$ factor somewhere or am I making a mistake? From my point of view the main limitation of the paper is that the adaptive result is only proven in the case of the spike and slab prior, and that the numerical experiments are conducted on toy examples with known smoothness levels. It could be interesting to, at least, discuss why you are only presenting an adaptive result for the first prior.", " The paper proposes theoretical results for Bayesian neural networks in the Besov space. It extends the results from the previous work in Hölder space into the Besov space. It shows that Bayesian neural networks with spike-and-slab priors converge to the true regression function in the Besov space, and further relaxes this to just shrinkage priors. **Update after rebuttal:** I would like to thank the authors for their incredible job during the rebuttal and for improving the submission very much. I believe that the clarity of the paper has drastically improved and therefore I am raising my score. Even though, after the correction of the theory, the numerical example no longer satisfies the theory, I believe the submission provides an interesting step towards better theoretical guarantees of BNNs.\n\n=================================\n\n**Originality:** \n*Strengths:* The main results appear to be novel and an interesting extension of the previous results in Hölder space into the Besov space. \n\n**Quality:** \n*Strengths:* I am sorry but I have to admit that I cannot assess the actual theoretical contribution of the paper as it is beyond my comfort zone, but also because in contrast to some people who prefer formulae over word description I would prefer the latter to better understand the formulae. Please see comments on this below in the Clarity section.\n\n*Weaknesses:* Though I appreciate that the main contribution of the paper is theoretical, the numerical example provided is not too satisfying. \n\n* The method is only used on 2 synthetic examples with analytical functions\n* The main theoretical results are about the posterior, but the numerical experiment uses Bayes by Backprop for inference, which is an implementation of variational inference, i.e. an approximation of the true posterior. Wouldn’t MCMC methods with the true posterior be more appropriate for the purpose of the numerical example? \n* There are several claims that the paper proposes a “practical” model, however, there is no empirical evidence on that. Some discussion on actual computational time used and how feasible the inference with the considered priors is would be appreciated\n\n\n**Clarity:** \n*Strengths:* The text where available is mostly well written and easy to follow. 
The paper tries to tell the full story with a proper introduction rather than jumping straight into it as if a reader has just finished reading the previous work and doesn't require any introduction - the approach used in many recent papers. \n\n*Weaknesses:* There is not so much text in the paper but a lot of formulae. I appreciate there are people who prefer formulae over words, but there are others who read words much better than formulae. Word descriptions of the formulae would be much appreciated. Also, the notation is not very careful, which further hurts the readability of all this maths. \nIf space is an issue, some general paragraphs from the introduction can easily be reduced or removed to save some space. Figures 2 and 3 can also lose the histogram part to save some space. (The histogram part can go into the appendix also)\n\n**Significance:** \n*Strengths:* The paper addresses a very important problem of finding theoretical guarantees for deep learning, Bayesian deep learning in particular in this case. It does seem to make a step further in extending those guarantees (by providing them in a bigger Besov space). \n\n**Summary:** Readability of the paper, namely the scarcity of words relative to maths, is the main issue with this submission in my opinion. So much so that it is difficult to assess the quality of the paper. It seems the paper has significant and original results but it should also be easier to understand these results while reading the paper. Moreover, the numerical example can be largely improved. \n\nSpecific comments/suggestions:\n1. Line 10. “In other words” - a bit inappropriate use here as nothing has been said before about practicality, only about asymptotic properties\n2. “Posterior consistency” is never properly defined in the paper\n3. Line 96, “u \\in {0, 1, …, d}”?\n4. D^u is used for u-th derivative, and then D_n is used for a dataset. It is a bit confusing though technically it is not overlapping notation. It is probably better to use another letter for a dataset\n5. Equation between lines 120 and 121. Both D^u and d^u is used. What is the difference?\n6. Eq. (7) - \\eta is not defined\n7. Line 152, \\delta is already used for the Dirac function\n8. Theorem 1, M_n is not defined, how does it depend on n?\n9. Line 204, N is already used, probably it is better to use \\mathcal{N} for the normal distribution \n10. Figures 2-3. Are dots training data points? Left column plots can use “x” and “y” labels for axis, also subplots without labels look too strange for me. It seems they may have “(a)”, “(b)”, … labels\n11. Line 218. “shrinkage prior” - “spike-and-slab prior” was meant?\n12. Lines 229-230, “Experiments … will also be of great help…” - not clear, is it suggested as future work? It would indeed be nice to have some comparison in the current submission.\n\n\nMinor:\n1. Line 21, “Before observing the data, A” - lower case for “A”\n2. Line 36, “are” -> “were”\n3. Line 69, “than THE spike-and-slab prior”\n4. Line 82. “GP” - acronym is not introduced\n5. Line 86, “to solve THE optimization problem”\n6. Line 88, “with THE (nearly) optimal convergence rate”\n7. Line 148, “Let” - lower case for “L”\n8. Line 211, “were” -> “are” My main concern with the paper is its clarity and I don't have questions on that. 
\n\nThe other smaller concern is regarding the numerical example, so maybe the authors can answer questions about it, however, as this is not the major concern, the authors should be free to give priority to questions to other reviewers in their rebuttal if they would help to resolve some major concerns from the other reviewers.\n\n1. Wouldn’t MCMC methods with true posterior be more appropriate for the purpose of the numerical example? \n2. What can the authors say about the claims about practicality of the method?\n There is no discussion on this, but since the work is theoretical it is not the major issue" ]
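As a purely schematic illustration of the spike-and-slab prior that runs through this whole thread (the mixing probability and slab scale below are invented, not the paper's hyperparameters), a draw of one sparse weight matrix could look like:

```python
import numpy as np

rng = np.random.default_rng(1)

def spike_and_slab(shape, p_slab=0.1, slab_scale=1.0):
    # Dirac spike at zero mixed with a Gaussian slab; p_slab controls sparsity.
    z = rng.random(shape) < p_slab
    return np.where(z, rng.normal(scale=slab_scale, size=shape), 0.0)

w = spike_and_slab((200, 200))  # e.g., one weight matrix of a width-200 layer
print(f"nonzero fraction: {np.mean(w != 0.0):.3f}")
```

Most entries are exactly zero, which is what makes the network sparsity adapt to the unknown smoothness in the theorems discussed above.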
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "gusvobF1w4a", "v43t0jJ07vJ", "i3wTSVlBbWN", "C9EXjBLuqSj", "S8Ougl3jOfM", "nips_2022_y5ziOXtKybL", "hdXKYNRtVYh", "UU1Amylx77", "Y5z3ztdZWOM", "nips_2022_y5ziOXtKybL", "nips_2022_y5ziOXtKybL", "nips_2022_y5ziOXtKybL" ]
nips_2022_iKKfdIm81Jt
Planning for Sample Efficient Imitation Learning
Imitation learning is a class of promising policy learning algorithms that is free from many practical issues with reinforcement learning, such as the reward design issue and the exploration hardness. However, the current imitation algorithm struggles to achieve both high performance and high in-environment sample efficiency simultaneously. Behavioral Cloning~(BC) does not need in-environment interactions, but it suffers from the covariate shift problem which harms its performance. Adversarial Imitation Learning~(AIL) turns imitation learning into a distribution matching problem. It can achieve better performance on some tasks but it requires a large number of in-environment interactions. Inspired by the recent success of EfficientZero in RL, we propose EfficientImitate~(EI), a planning-based imitation learning method that can achieve high in-environment sample efficiency and performance simultaneously. Our algorithmic contribution in this paper is two-fold. First, we extend AIL into the MCTS-based RL. Second, we show the seemingly incompatible two classes of imitation algorithms (BC and AIL) can be naturally unified under our framework, enjoying the benefits of both. We benchmark our method not only on the state-based DeepMind Control Suite, but also on the image version which many previous works find highly challenging. Experimental results show that EI achieves state-of-the-art results in performance and sample efficiency. EI shows over 4x gain in performance in the limited sample setting on state-based and image-based tasks and can solve challenging problems like Humanoid, where previous methods fail with small amount of interactions.
Accept
This paper introduces a simple approach that improves the sample efficiency of model-based RL for continuous control tasks. The proposed approach, EfficientImitate, builds on EfficientZero and uses a hybrid BC-AIL training scheme. The contribution is relatively simple and is shown through satisfactory experiments to give a substantial sample efficiency boost. The paper is clear and appropriately contextualizes its contribution. All reviewers found the paper to be clear, novel, technically sound, and empirically well validated. The results show that the innovations constitute a meaningful contribution to a fairly general problem class. In initial reviews, two of the three reviewers mentioned that the method is not shown on discrete-action problems. Personally, I wouldn't have seen this as a major concern, since continuous control problems are a large problem class. However, the authors replied with a comment indicating that for LunarLander (a discrete-action Gym environment), the method works. It is unclear if this addition was cherry-picked among discrete environments and it is also unclear to me if the authors will add this to the paper. Nevertheless, as noted, I don't find this to be a major gap in the paper as it currently stands. Given the sufficiently positive reviews, level of review agreement, and my own reading, I endorse this paper for acceptance.
train
[ "0svlHjnEb3b", "BL9UN01uWR", "5jQIJlNAe_k", "VZbL-4wEEf", "0NQ-VCh5Bex", "kOqy2Fzo9Ds", "Mj13WOQOia", "VbPH5LO8aVC" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response.\n\nYour explanations make sense. I would encourage you to propagate them to the paper in order to help the readers also to build up the intuitions that you have.\n\nThank you for conducting additional experiments.", " Thanks for the response. The additional discussions and visualizations have addressed my questions. Nevertheless, as discussed in my original review, I am still concerned if a good reward / discriminator can be learned in tasks with long-planning horizon and sparse reward. \n\nOverall, I think this is a good paper and I would vote for poster acceptance.", " Thank you for reviewing our paper! We are glad that you find our paper valuable. We address your questions below.\n\n> Q1. What are the failure modes of the proposed method?\n\nSo far, we have not identified failure modes for the state-based tasks. (Though our method gets 0.74 expert performance at 500k env samples for the Humanoid task, it can actually get 0.92 expert performance at 800k, at which all the baselines still learn nothing (< 0.20 expert performance)). In the image-based tasks, however, we find that the proposed method can still fail when the discriminator overfits to wrong, spurious features in the image (Line 265-268). For example, it would over-penalize the agent when it does not match some irrelevant details. However, this is a common problem for adversarial imitation learning, and solving it is orthogonal to our main contribution.\n\n> Q2. Can the learned reward signal and state embeddings be interpreted in some way?\n\nYes. The reward signal can suggest whether the agent’s behavior looks like the expert's behavior during training. When the agent chooses actions that more resemble the expert’s actions, it will receive a higher reward. For the state embeddings, one approach to interpret it is by the t-SNE plot. We use the image-based Walker experiment as an example (see Appendix E). We use the trained model at 100k env steps to generate the state embeddings of one expert trajectory, and in environment trajectory at 0k, 25k, 50k, 75k, and 100k steps. Then we use t-SNE to visualize the embeddings on the 2D plane. As is shown in the figure, the agent’s trajectory gradually matches expert’s trajectory (blue) during training. Moreover, the expert’s trajectory has a circle structure, which represents the periodic pattern of the Walker’s walking behavior. Therefore, our model can represent the environment in a meaningful way. \n\n\n> Q3. How are the reward (logits), value function (dis.), and action distribution represented in the MCTS? There are different implementations for Sample Muzero. Some of them are using distributional rewards and values, with discretized action spaces, while others are using continuous actions, e.g., Gaussians, and searching with sampled actions. In addition, how do these design choices affect the final performance of EI?\n\nThe reward is represented by logits with sigmoid (same as the GAIL). The value function is represented by a discretized categorical distribution as in MuZero. The action distribution is represented by a Gaussian distribution followed by a tanh function. The policy neural network outputs the mean and diagonal std of the Gaussian distribution. We follow the design choices in EfficientZero and Sampled Muzero. We hypothesize that the different design choices in our case may have a similar effect to that in Sampled MuZero. \n", " Thank you for reviewing our paper! We address your questions and concerns below.\n\n> Q1. 
The base method, MuZero and EfficientZero, also works well in discrete action space. How is the performance of EI in discrete action space?\n\nWe have just carried out experiments in the LunarLander-v2 environment provided by the OpenAI gym, where the agent uses four actions to land a spacecraft. We collect 5 expert trajectories for imitation learning. We follow the task setup in SQIL. Concretely, in the expert trajectories, the spacecraft (expert agent) is initialized at a fixed initial position. But during training and evaluation, the initial position of the agent is perturbed so that the agent should learn to deal with different circumstances. We provide the agent with 5 expert trajectories, and allow 50k in-env samples for training. We find that our method could solve this task successfully. The results and comparisons are as follows. \n| Method | Performance |\n| ------- | ---- | \n| BC | 0.76 | \n| DAC* | 0.32 ± 0.05 |\n| SQIL | 0.80 ± 0.03 |\n| ValueDICE | 0.71 ± 0.04 |\n| Ours | **0.90 ± 0.03** |\n*Note that DAC is based on TD3, which is not designed for the discrete action space. We implemented DAC's extension to the discrete action space ourselves.\n\nTherefore, our method is also effective in the discrete action space. Due to the limited resources and time during the rebuttal phase, we do not study more complex discrete tasks here. However, we believe that our method can also work well in those tasks.\n\nFor the implementation of EI in this case, we first let the policy network (and the BC policy network) output a categorical distribution, which represents the probability of taking each discrete action. During recurrent inference, we turn the discrete action into a one-hot vector, concatenate it with the current state, and feed them to the dynamics network and discriminator to calculate the next state and the AIL reward. The other parts remain the same; a rough sketch of this adaptation appears at the end of this reply.\n\n> Q2. EI uses the Reanalyzed algorithm for offline training, and requires all the samples should be reanalyzed. What is the motivation for using the Reanalyzed algorithm here, and how significant does it contribute to the sample-efficiency of EI? Have you used the Reanalyzed algorithm for other baselines?\n\nThanks for the question. Since the Reanalyze algorithm is an optional add-on component proposed by the original MuZero algorithm, we mention the use of the Reanalyze algorithm to highlight that it has now been used as a standard component in the recent EfficientZero algorithm and here. In MuZero/EfficientZero, the motivation of the Reanalyze algorithm is to improve the performance and sample efficiency by providing a more accurate value target and policy target for the past visited states with MCTS. However, the sample efficiency of EI also comes from the BC component besides the Reanalyze algorithm. BC is very crucial here because it directly points out potentially correct actions in high-dimensional action spaces, without which even extensive MCTS can be inefficient. After we remove the action provided by BC, the sample efficiency of EI can drop by half. EI gets stuck in local minima in challenging tasks without BC (Figure 6). Therefore, Reanalyze only partly contributes to the sample efficiency of EI.\n\nSince the Reanalyze algorithm is coupled with MuZero-style MCTS, it cannot be used for the other model-based imitation baselines directly (they are not based on MCTS). However, we believe that the idea of Reanalyze (using planning for policy improvement) can be extended to other model-based methods.
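The sketch promised above: a hypothetical rendering of the discrete-action recurrent-inference step described in Q1. The module names, shapes, and the plain concatenation are our guesses at the design, not the released EI code.

```python
import torch
import torch.nn.functional as F

def recurrent_inference(dynamics_net, discriminator, state, action_idx, num_actions):
    # Discrete action -> one-hot vector, concatenated with the current latent state.
    a = F.one_hot(torch.as_tensor(action_idx), num_classes=num_actions).float()
    x = torch.cat([state, a], dim=-1)
    next_state = dynamics_net(x)     # predicted next latent state
    ail_reward = discriminator(x)    # AIL reward for this (state, action) pair
    return next_state, ail_reward
```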
\n\n> Q3. How to choose the balancing weights $λ_d$ and $λ_{bc}$ in Eq. (7)?\n\nThe balancing weights are determined by a hyperparameter search. We find that $λ_d$ = 0.1 and $λ_{bc}$ = 0.01 work quite well across the tasks.\n", " Thanks for the thoughtful review. We address your concerns below. We are happy to discuss any additional questions or concerns.\n\n> Q1. Why is it important to use EfficientZero? Can we use any other model-based method?\n\nThanks for the question. One important reason we use EfficientZero here is that it comes with an MCTS planning component, which leads to a natural approach to unifying BC and AIL. We notice that MCTS planning has a unique benefit. It enables our method to use BC actions in a clever way: it will choose BC actions when they are right (leading to long-term distribution matching) and discard them when they are wrong. Model-free methods do not have such a capability, since they cannot compute the long-term effects of BC actions via extensive search. Our experiments (Line 301-310) also show that when we reduce the MCTS search, the performance will degrade significantly. Moreover, the other model-based methods like Dreamer also do not have such a benefit. Take Dreamer as an example: although it also builds a model of the environment, it still uses model-free methods to learn the policy function with the model. Thus, other model-based approaches like Dreamer cannot incorporate BC as we do. In conclusion, what makes EfficientZero (MCTS planning) unique and important here is that it can evaluate the effect of BC actions, and use them to speed up learning. Such a feature is not present in model-free methods or the other latest model-based methods.\n\n> Q2. Can it be that the main benefit comes from using EfficientZero as the algorithm? What if we use this algorithm in other model-based imitation learning baselines?\n\nWe agree that EfficientZero does offer some benefits: compared with other model-based imitation learning baselines, its extra MCTS planning component can ensure more effective policy and value updates. However, the benefit also comes from BC. The BC component can provide a very good initial solution to EfficientZero to speed up the learning process, and help the policy escape from local minima in challenging tasks with high dimensional states. Since the other model-based imitation learning baselines are not based on MCTS, we cannot implement EfficientImitate in the other baselines directly. \n\n> Q3. Why did the author choose a specific number of trajectories? How does the method & baselines behave when different number of trajectories are considered?\n\nThanks for pointing this out. We have just carried out experiments with fewer trajectories to increase the difficulty. We reduce the number of expert trajectories from 5 to 2 in the state-based experiments, and from 20 to 10 in the image-based experiments. The results are shown below. \n\n| Method | Cheetah (State) | Walker (State) | Cheetah (Image) | Walker (Image) |\n| ---- | ---- | ---- | ---- | ---- |\n| BC | 0.50 | 0.12 | 0.33 | 0.11 |\n| DAC | 0.18 ± 0.02 | 0.22 ± 0.03 | 0.04 ± 0.01 | 0.10 ± 0.02 |\n| SQIL | 0.04 ± 0.01 | 0.10 ± 0.03 | 0.05 ± 0.00 | 0.27 ± 0.04 |\n| ValueDICE | 0.43 ± 0.05 | 0.50 ± 0.06 | 0.04 ± 0.01 | 0.06 ± 0.01 |\n| VMAIL | N/A | N/A | 0.12 ± 0.03 | 0.22 ± 0.06 |\n| Ours | **0.94 ± 0.03** | **0.97 ± 0.02** | **0.91 ± 0.02** | **0.93 ± 0.03** |\n\nThe performance of our method only degrades a little bit and remains at approximately the same level. 
It still outperforms the baselines by a large margin.\n\n> Q4. Can you provide more intuition / evidence on why each component of the algorithm is important? (i.e., GAIL, EfficientZero, BC)\n\nImportance of EfficientZero: It provides an MCTS-based RL framework, over which we can unify the two classes of imitation learning algorithms, BC and AIL, with planning. \n\nImportance of BC: It improves the sample efficiency, and helps the policy to escape from local minima by giving good candidate actions when we use MCTS to solve the imitation problem. \n\nImportance of GAIL: It defines the goal of MCTS planning, which is to ensure long-term distribution matching. We do agree that GAIL can be replaced with its improved versions (Line 179-183). One reason we use GAIL here is that it is the vanilla AIL algorithm. Using the improved AIL/IRL algorithms here might further improve the performance, and this is left for future work.\n\n", " The paper proposes a method to improve the sample efficiency of model-based imitation learning. The method consists of combining three components:\n* Efficient-Zero algorithm\n* GAIL-style imitation learning approach which will learn a discriminator and provide it to the Efficient-Zero algorithm to specify the reward\n* BC algorithm to learn a policy which will regularize the acting distribution of Efficient-Zero\n\nThe method demonstrates improved data efficiency (in terms of online rollouts) compared to the baseline, and is conceptually simple. Strengths:\n* A method which combines different tricks and achieves a competitive performance in a model-based imitation learning setting\n* Conceptual simplicity of the method\n* The paper studies an important problem - model-based imitation learning\n\nWeaknesses:\n* The paper made multiple somewhat arbitrary choices which led to quite an efficient algorithm, but lacks an intuition on why each of the choices was important. For example, the proposed technique (GAIL & BC component) can be plugged into any other model-based method as well as non-model-based methods. Is the EfficientZero algorithm really crucial for this technique to work? The paper would benefit from more evidence on why each of the proposed components is important. What would happen if, instead of a model-based algorithm, we use a model-free algorithm with a similar acting distribution (mixture between acting and BC policies), having an additional term to optimize for the GAIL-learned reward function?\n* Can it be that baseline methods can be significantly improved if the EfficientZero algorithm were used in them? Can it be that the main benefit in this work comes from using EfficientZero as the main RL algorithm?\n* The authors somewhat arbitrarily choose the number of expert trajectories. Why exactly these numbers? How do baseline methods perform when more or fewer expert demonstrations are available?\n * Why is it important to use EfficientZero? Can we use any other model-based method?\n* Can it be that the main benefit comes from using EfficientZero as the algorithm? What if we use this algorithm in other model-based imitation learning baselines?\n* Why did the author choose a specific number of trajectories? How does the method & baselines behave when different number of trajectories are considered?\n* Can you provide more intuition / evidence on why each component of the algorithm is important? 
(I.e., GAIL, EfficientZero, BC)\n I think the authors adequately presented the limitations.", " This paper proposes a new IL method called EfficientImitate (EI), a planning-based imitation learning method that can achieve high sample efficiency and performance. EI extends AIL to a model-based setting and solves it with MCTS with high sample efficiency. EI can benefit from BC by using the BC actions in the search. EI provides a natural way to unify the two classes of existing IL methods, BC and AIL. EI uses BC actions as candidates in MCTS. The performance of EI in both state-based and image-based control tasks is amazingly good. ==Originality==\n\nThe proposed EI is novel and interesting. The main idea is to extend AIL to a model-based setting with multi-step losses under the MCTS-based RL framework. Thanks to the planning component of the algorithm, EI naturally unifies two types of previous imitation learning methods (BC and AIL), and shows a significant performance boost. The connections and differences of this work and previous work are well discussed and the related works are cited in the paper.\n\n==Quality==\n\nThe proposed method is technically sound. The experiments are conducted in diverse settings with ablations, and the results well support the claims. Although there are some limitations, this work is a complete piece of work in planning-based IL.\n\n==Clarity==\n\nThis paper is clearly written and well organized. The problem is well formulated. The method is clearly motivated and introduced. The limitations and potential impacts are also discussed.\n\n==Significance==\n\nThis paper provides a novel sample-efficient IL method that extends AIL into MCTS-based RL. 1. The base method, MuZero and EfficientZero, also works well in discrete action space. How is the performance of EI in discrete action space?\n\n2. EI uses the Reanalyzed algorithm for offline training, and requires that all the samples be reanalyzed. What is the motivation for using the Reanalyzed algorithm here, and how significantly does it contribute to the sample-efficiency of EI? Have you used the Reanalyzed algorithm for other baselines?\n\n3. How to choose the balancing weights $\\lambda_d$ and $\\lambda_{bc}$ in Eq. (7)? 1. The performance of EI with discrete action space is unknown.\n\n2. There is no discussion on limitations and potential social impacts.", " This paper presents EfficientImitate (EI), which combines Adversarial Imitation Learning (AIL) with model-based planning with MCTS, for sample-efficient imitation learning. During training, an expert buffer is maintained in addition to the standard replay buffer used in EfficientZero. A discriminator is used to predict rewards by contrasting expert demonstrations and agent replays. Next, MCTS is performed at each time step using the predicted reward from AIL. In addition, EI unifies BC with AIL by directly including BC actions in the MCTS, which significantly improves the performance. Experiments suggest that EI has achieved significantly better performance than the SOTA methods in various continuous control tasks. \n Strengths: \nOverall, the paper is well-written and easy to follow. Solid experiments have been conducted to support the claims made by the paper. The proposed idea has achieved a very strong performance in experiments, outperforming the state-of-the-art (SOTA) methods by a large margin, with much fewer environment interactions. \n\nThere are some minor weaknesses of the paper. 
\n\n1/ The work mainly focuses on the continuous control domains, which have relatively dense rewards and limited planning horizons. However, MCTS is naturally designed for tasks with discrete action spaces, e.g., the game of Go or Atari games, and they are not considered during the experiments. My feeling is that with a sparse reward and long planning horizon, learning the reward from AIL might not be sufficient and may cause extra issues for MCTS. In that case, balancing BC and AIL will be a tricky issue.\n\n2/ The proposed idea is relatively straightforward: it is a simple combination of AIL and MCTS. I think it would be better to dig deeper into the proposed framework. \n 1/ What are the failure modes of the proposed method?\n\n2/ Can the learned reward signal and state embeddings be interpreted in some way? \n\n3/ How are the reward, value function, and action distribution represented in the MCTS? There are different implementations for Sampled MuZero. Some of them are using distributional rewards and values, with discretized action spaces, while others are using continuous actions, e.g., Gaussians, and searching with sampled actions. In addition, how do these design choices affect the final performance of EI? I think one limitation of the work is that for lots of continuous control tasks, inference speed is critical. However, MCTS is a slow process. Although it achieves a very good performance during evaluation, it might not be appropriate for real-world situations, where inference speed is critical. \n\nOverall, I still think this is a valuable paper and would like to vote for acceptance." ]
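To make the Q3 exchange above concrete, the continuous-action representation the authors describe (a Gaussian whose sample is squashed by tanh) is commonly implemented along these lines; the clamping range and layer sizes below are generic choices of ours, not EI's actual settings:

```python
import torch
import torch.nn as nn

class TanhGaussianHead(nn.Module):
    # Generic stand-in for the policy head described in the author responses above.
    def __init__(self, hidden_dim, action_dim):
        super().__init__()
        self.mu = nn.Linear(hidden_dim, action_dim)
        self.log_std = nn.Linear(hidden_dim, action_dim)

    def forward(self, h):
        mu = self.mu(h)
        log_std = self.log_std(h).clamp(-5.0, 2.0)   # keep the std in a sane range
        dist = torch.distributions.Normal(mu, log_std.exp())
        return torch.tanh(dist.rsample()), dist      # squash samples into [-1, 1]
```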
[ -1, -1, -1, -1, -1, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "0NQ-VCh5Bex", "5jQIJlNAe_k", "VbPH5LO8aVC", "Mj13WOQOia", "kOqy2Fzo9Ds", "nips_2022_iKKfdIm81Jt", "nips_2022_iKKfdIm81Jt", "nips_2022_iKKfdIm81Jt" ]
nips_2022_nE8IJLT7nW-
Peripheral Vision Transformer
Human vision possesses a special type of visual processing systems called peripheral vision. Partitioning the entire visual field into multiple contour regions based on the distance to the center of our gaze, the peripheral vision provides us the ability to perceive various visual features at different regions. In this work, we take a biologically inspired approach and explore to model peripheral vision in deep neural networks for visual recognition. We propose to incorporate peripheral position encoding to the multi-head self-attention layers to let the network learn to partition the visual field into diverse peripheral regions given training data. We evaluate the proposed network, dubbed PerViT, on ImageNet-1K and systematically investigate the inner workings of the model for machine perception, showing that the network learns to perceive visual data similarly to the way that human vision does. The performance improvements in image classification over the baselines across different model sizes demonstrate the efficacy of the proposed method.
Accept
The paper proposes a transformer architecture that models human-like peripheral vision. Experiment results show it achieves good performance. All the reviewers consider the paper above the bar. They like the novelty and the strong empirical performance. The AC finds no reason to object.
train
[ "gRhmhLejDe", "VAzHPQyi5kA", "FCn4erTKtVh", "qmWH2AiC4jA", "uhQLGTn-nMA", "8-2Gr-7BCBJ", "tirRpvy69aM", "zjit52bYJpz", "HNLttS6A7GO", "P6VF4FyZ0fv", "LPSF9yFx0To", "oUWfi5FLbL0", "MvNbBKrlnw0u", "NVwVNq3niaB", "tFa302Plyji", "fVUXM17N8A6", "lRp_nQ9Ggtc", "5dV3nUbi2-G", "GEALQhg2Ay", "B13llK6vyqX" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We truly appreciate the positive evaluations and will do our best to reflect the comments as much as possible.", " Thank you authors for your message and flagging the missing score change. I just upgraded my score to reflect the changes.\n\nBest wishes.", " We again thank the reviewer for the professional, insightful suggestions and positive evaluations on our work. Based on the comments, we have updated the draft (pdf) for better presentation of peripheral vision (toning down rough narrative on peripheral vision & discussing relevant previous work) and will further spend a fair amount of time to polish the writing even after the rebuttal with exploration of more recent literature on peripheral vision.", " We again thank the reviewer for the motivating feedback and are glad to hear that reviewer 5MyL's most relevant concerns are addressed by our rebuttal. We find, however, the rating remains as before and thus will be truly appreciated if the recommendation is updated. Thank you in advance!", " We thank the reviewers for reading our answers in the rebuttal and upgrading the final recommendation (mNPh and 5MyL). We appreciate reviewers NKKB, mNPh, and XfL6 for positive evaluations on our manuscript and are delighted to find that the rebuttal resolved most of the concerns for reviewer 5MyL. We have undergone a revision of the manuscript and will further spend a fair amount of time to polish the writing (with additional experiments if possible) after the rebuttal, e.g., exploration on recent work of peripheral vision, toning down rough narrative, and experiments on datasets like iNaturalist-18. In the revised manuscript, the revised/added texts & experiments are colored in blue. Due to the 9-page limit for the rebuttal revision (https://nips.cc/Conferences/2022/PaperInformation/NeurIPS-FAQ), we leave out some original texts (colored in red) in the current revised manuscript; we will include the texts in our final manuscript (10-page limit) if accepted. Even after the rebuttal, we will be highly appreciated for further suggestion and feedback to make our manuscript more reliable and stronger.\n\nWe replace the original manuscript with revised one (pdf).\nOur orignal main/supplementary manuscripts and submitted code can be found in the supplementary material (zip).", " Dear authors,\n\nThank you for the detailed and informative rebuttal. After reading the other reviews and the additional results in the rebuttal my most relevant concerns are resolved. \n\nBest wishes.\n", " **[Segmentation results]**\nIn the rebuttal, we promised to evaluate our model on other downstream tasks such as object detection and semantic segmentation. However, we found that the quadratic complexity of our model makes the evaluations on time infeasible; these downstream tasks typically take input images of higher resolutions (approx. $800 \\times 1000$ images) than classification, being usually performed only by the networks that process inputs within feasible memory and time complexity, e.g, linear complexity with respect to the number of input tokens as in XCiT [16]. For example, some ViT-based work in Tab.3 [16, 23, 31, 54] introduce efficient self-attention techniques such as cross-covariance attention [16], squeezed convolutional projection [23], and window-wise attentions [31, 53], thus being able to evaluate their models on detection/segmentation. 
Meanwhile, the relevant works of [3, 15, 18, 25, 55] (as well as ours) with ‘quadratic complexity’ do not perform such evaluations, as they typically generate out-of-memory (OOM) errors given high-resolution input images. Nevertheless, to address the reviewer’s concern, we explored a number of variants of PerViT during the rebuttal to provide the reviewers with a ‘memory-efficient PerViT’, but this exploration was constrained by the limited time of the rebuttal; the ideation & exploration of a memory-efficient $\\Phi_p$, the implementation, pre-training on ImageNet-1K, and downstream task evaluation all demand a fair amount of time and resources for the evaluations to be complete. Although we were not able to fully address the concerns, we appreciate the reviewer’s feedback, which helped us discover why the current implementation of PerViT is limited for tasks with high-resolution inputs. To make our manuscript more self-contained, we will discuss this point in the limitation section (Sec.5) in our final manuscript.", " 
", " \n**[Minor points]**\nWe thank the reviewer for detailed feedback and will revise them all accordingly.\n\n**[Discussion on biologically-inspired models]**\nWe appreciate the suggested references [F, G, H]. Following suggestions from the reviewers XfL6 and mNPH, we will extend the related work section to include recent literature on peripheral vision, biologically-inspired machine vision methods [B-H], and a discussion of how peripheral vision affects model behavior in a human-like way.\n\n\n\n\n\n[A] He et al. (2022). Masked Autoencoders Are Scalable Vision Learners.\n\n[B] Rosenholtz R. (2016). Capabilities and Limitations of Peripheral Vision.\n\n[C] Balas et al. (2009). A summary-statistic representation in peripheral vision explains visual crowding.\n\n[D] Deza et al. (2020). Emergent Properties of Foveated Perceptual Systems.\n\n[E] Deza et al. (2016). Can Peripheral Representations Improve Clutter Metrics on Complex Scenes?\n\n[F] Wang et al. (2017). Central and peripheral vision for scene recognition: A neurocomputational modeling exploration.\n\n[G] Reddy et al. (2020). Biologically inspired mechanisms for adversarial robustness.\n\n[H] Jonnalagadda et al. (2021). Foveater: Foveated transformer for image classification.", " **[Clarification on our method, e.g., peripheral projections]**\nThe proposed position-based attention $\\Phi_{p}$ extends the idea of previous relative positional (RP) encoding work [10, 15, 22, 37, 38, 50] which all use a single-layer linear projection, i.e., neural networks. In our work, we adopt a multi-layered design to provide torus-shaped attentions (top-right of Fig.2) and peripheral projections (PP) to break rotational symmetric properties (middle of Fig.2) to model peripheral vision. Specifically, given relative positions as inputs $\\mathbf{R} \\in \\mathbb{R}^{HW \\times HW \\times D_r}$, PP with $\\mathbf{W} \\in \\mathbb{R}^{K^2 \\times D_r \\times D_{\\text{hid}}}$ is formally defined as follows: $PP(\\mathbf{R}, \\mathbf{W})_{\\mathbf{q}, \\mathbf{k}, :} \\coloneqq \\sum\\_{\\mathbf{m} \\in \\mathcal{N}(\\mathbf{k})} \\mathbf{R}\\_{\\mathbf{q}, \\mathbf{m}, :} \\mathbf{W}\\_{\\mathbf{m} - \\mathbf{k}, :, :}$ where the neighbor function $\\mathcal{N}$ provides a set of neighbors around given input key position $\\mathbf{k}$ and is formally defined as $\\mathcal{N}(\\mathbf{k}) \\coloneqq \\left[ \\mathbf{k}-[\\frac{K}{2}], \\dots, \\mathbf{k}+[\\frac{K}{2}] \\right] \\times \\left[ \\mathbf{k}-[\\frac{K}{2}], \\dots, \\mathbf{k}+[\\frac{K}{2}] \\right]$. We use $K=3$ for all layers and heads of PerViT in our experiments as $K > 3$ hardly brought improvements (Supp. L201). We will rephrase this part (with a visual illustration if possible) in our final manuscript.\n\n**[Answers to questions]**\n\n1/6: In this work, we consider each query location $\\mathbf{q}$ of MPA, e.g., the position of a feature we want to transform, as a focal point, assuming each MPA in PerViT simultaneously processes $H \\times W$ pixel locations with $H \\times W$ different focal points given input feature size of $H \\times W$. This assumption provides ring-shaped attentions if a query is located at the center of the feature map (Fig.1). While we have developed our narrative in the context of images (2D), we agree on the reviewer’s point that this assumption deviates from the reality when considering FOV of a physical eyeball (3D). 
We will discuss this point in our final manuscript.\n\n2: For PerViT-T, we experimented with $D_r \\in$ {16, 64, 256} but increasing its size hardly brought improvements, thus setting $D_r = 16$ for the Tiny model (32 and 48 for Small and Medium models resp.).\n\n3/9/10: Please refer to our answers above in section **Clarification on our method, e.g., peripheral projections**.\n\n4: We appreciate the suggestion and will use the term “columnar”.\n\n5: Note that $w_r$ where $r \\in [D_{r}]$ is a learnable parameter that weighs input Euclidean distances in $D_r$ different ways: $\\mathbf{R}\\_{\\mathbf{q}, \\mathbf{k}, :} \\coloneqq \\text{concat}\\_{r \\in D_{r}} \\[w_{r} \\cdot \\mathbf{R}^{\\text{euc}}_{\\mathbf{q}, \\mathbf{k}}\\] \\in \\mathbb{R}^{D_r}$ (L125-127). Since $\\mathbf{R}$ is an input to the peripheral projection in Eq.7, $w_r$ is also a part of peripheral projection, thus being utilized for the peripheral initialization in Eq.9.\n\n\n7: The proposed Peripheral Positional Encoding (PPE) refers to $\\Phi_p$ in Eq.7, i.e., position information injected to self-attention matrix $\\Phi_c$. The Convolutional Positional Encoding (CPE) refers to a depth-wise convolution (L192); the term CPE is originally used in the work of [7].\n\n\n8: We agree on the reviewer’s point that our claim of state of the art in the current manuscript needs to be revised. We will revise the claim in L11 and L69 accordingly as follows: “The performance improvements in image classification task over the columnar Transformer baselines, e.g., DeiT, across different model sizes demonstrate the efficacy of the proposed method.”\n\n\n\n[A] He et al. (2022). Masked Autoencoders Are Scalable Vision Learners.\n\n[B] Rosenholtz R. (2016). Capabilities and Limitations of Peripheral Vision.\n\n[C] Balas et al. (2009). A summary-statistic representation in peripheral vision explains visual crowding.\n\n[D] Deza et al. (2020). Emergent Properties of Foveated Perceptual Systems.\n\n[E] Deza et al. (2016). Can Peripheral Representations Improve Clutter Metrics on Complex Scenes?\n\n[F] Wang et al. (2017). Central and peripheral vision for scene recognition: A neurocomputational modeling exploration.\n\n[G] Reddy et al. (2020). Biologically inspired mechanisms for adversarial robustness.\n\n[H] Jonnalagadda et al. (2021). Foveater: Foveated transformer for image classification.\n", " We thank all the reviewers for their insightful comments and suggestions. We are glad to see that the reviewers found our work has \"novel direction and motivation with extensive qualitative results (NKKB)\", \"interesting, technically sound, well-motivated approach with good performance (mNPh)\", \"concise idea with convincing and reasonable performance evaluation (XfL6)\", and \"complete ablation study with plenty of useful figures (5MyL)\". Nevertheless, the reviewers also point out important comments stating that:\n\n1. the method section writing can be improved for clarity,\n2. the proposed method requires additional ablation study,\n3. further evaluations on other datasets/tasks are missing,\n4. the paper oversimplifies/oversells peripheral vision and needs to extend related work.\n\nIn the rebuttal, we clarify the method section, e.g., peripehral projection, perform additional ablation study to further verify usefulness of our approach, provide experimental results on other datasets, and promise to tone down current narrative about peripheral vision. 
In our final manuscript, we will do our best to reflect all the comments from the reviewers as well as additional comments given in author-reviewer discussion period.", " We thank reviewer 5MyL for constructive comments and suggestions and will revise our paper by reflecting them as much as possible.\n\n**[Experiments on other datasets]**\nTo further verify the robustness of the proposed method, we compare the PerViT-M with baseline models [11, 15] on different transfer learning task with ImageNet pre-training in **Tab.R6**. We finetune trained PerViT-M on CIFAR-10, CIFAR-100, and iNaturalist-19, following the same training recipes of DeiT [11]. Even with significantly lower complexity than [11, 15], our method surpasses baselines by approx. 1%p on CIFAR-100 and iNaturalist19 while performing on par with [11] on CIFAR-10. Please excuse the absence of results on Flowers, Cars, and iNaturalist-18 datasets due to the tight submission deadline of the rebuttal. We will do our best to include them all in our final manuscript.\n\n**Table R6**. Transfer learning results on CIFAR-10, CIFAR-100, and iNAT-19.\n| Model | Size | GFLOPs | CIFAR-10 | CIFAR-100 | iNAT-19 | ImageNet-1K |\n|:---------------:|:----:|:------:|:--------:|:---------:|:-------:|:-----------:|\n| ViT-L/16 [15] | 307 | 117 | 97.9 | 86.4 | - | 76.5 |\n| DeiT-B [11] | 86 | 18 | 99.1 | 90.8 | 77.7 | 81.8 |\n| PerViT-M (ours) | 44 | 9 | 99.1 | 91.4 | 78.5 | 82.9 |\n\n**[The claim of state of the art]**\nWe agree on the reviewer’s point that our claim of state of the art in the current manuscript needs to be revised. We will revise the claim in L11 and L69 accordingly as follows: “The performance improvements in image classification task over the columnar Transformer baselines, e.g., DeiT, across different model sizes demonstrate the efficacy of the proposed method.”\n\n\n**[Segmentation results]**\nWe truly appreciate the reviewer’s motivating feedback that our method could shine more on downstream tasks like semantic segmentation. However, please excuse the absence of the segmentation results in this rebuttal due to its tight submission deadline; the implementation and experiments demanded more time than we expected. Nevertheless, we will do our best to include the requested segmentation results during the author-reviewer discussion period to further improve our manuscript. Meanwhile, we’d like to note that some work [16, 31, 43, 48, 53, 54] indeed report segmentation results but most of the relevant work [6, 8, 9, 11, 15, 37, 42, 45, 49, 51, 55, 57] did not include them, focusing on in-depth analyses and ablations of their methods from theoretical perspectives. In similar manner, a significant part of our current draft focuses on the systematic investigation on the inner workings of PerViT, e.g., how it learns to model peripheral vision, the impact of attentions, the role of locality, exploration of possible design choices of $\\Phi_p$, and rigorous proof that makes our results reasonable and interpretable, all of which encourage an inspiring direction in computer vision literature.\n", " We thank reviewer mNPh for professional comments and suggestions on recent peripheral vision literature and will revise the narrative accordingly to make our paper more reliable.\n\n**[Oversimplified peripheral vision]**\nOur manuscript in current form primarily focuses on developing self-attention for modeling peripheral vision but indeed oversimplifies its concept. 
We truly appreciate the suggested references [B, F, G, H] by reviewers RmNPh and RXfL6. With a careful review of the work and more exploration of recent literature on peripheral vision, we will revise the manuscript by updating the current (rough) narrative on peripheral vision and correcting misleading definitive terms (L47, 56) accordingly, reflecting the reviewers’ comments as much as possible. We will also extend related work in the revised manuscript, discussing recent work [B-E] on peripheral vision and how our method differs from previous efforts [F-H].\n\n**[Training sample efficiency experiments]**\nWe investigate the training sample efficiency of PerViT-S by subsampling ImageNet training data by fractions of 50% and 25% and compare the results with DeiT [11] in **Tab.R4**. For each subsamples, we increase the number of epochs so the models are presented with a fixed number of images. Our model consistently surpasses the baseline [11] for all subsampled datasets, showing its robustness under limited training data. We will include the results in our final manuscript.\n\n**Table R4**. ImageNet top-1 accuracy comparisons under different subsampling ratios.\n| Subsampling ratio | DeiT-S top-1 | PerViT-S top-1 |\n|:----:|:------------:|:--------------:|\n| 100% | 79.9 | 82.1 |\n| 50% | 74.6 | 77.4 |\n| 25% | 61.8 | 67.5 |\n\n**[Additional ablation experiments]**\nWe summarize the requested ablation on PerViT-T/S/M in **Tab.R5**; without $\\Phi_p$, the top-1 accuracy consistently drops for all the three models. Comparing (b) with (c), we observe that C-Stem and CPE are less effective for large models, bringing 1.3%p and 0.1%p gains for Small and Medium respectively whereas they improve the Tiny model by 5.1%p. In contrast, the effectiveness of $\\Phi_p$ is consistent across different model sizes, bringing ~1%p gains for all the three models. The effectiveness of $\\Phi_p$ for larger models, we hypothesize, is due to its flexibility in modeling local/global spatial attentions (Fig.7) while C-Stem/CPE perform only locally.\n\n**Table R5**. Ablation on PerViT-T/S/M: effect of $\\Phi_p$, C-Stem, and CPE.\n| Model | $\\Phi_p$ | C-Stem & CPE | Tiny | Small | Medium |\n|:---------------------------------------------:|:--------:|:------------:|:----:|:-----:|:------:|\n| (a) PerViT (ours) | o | o | 78.8 | 82.1 | 82.9 |\n| (b) without $\\Phi_p$ | x | o | 77.3 | 81.1 | 81.9 |\n| (c) without $\\Phi_p$, C-Stem, CPE (DeiT [11]) | x | x | 72.2 | 79.8 | 81.8 |\n\n\n**[Experiments on other datasets]**\nTo further verify the robustness of the proposed method, we compare the PerViT-M with baseline models [11, 15] on different transfer learning task with ImageNet pre-training in **Tab.R6**. We finetune trained PerViT-M on CIFAR-10, CIFAR-100, and iNaturalist-19, following the same training recipes of DeiT [11]. Even with significantly lower complexity than [11, 15], our method surpasses baselines by approx. 1%p on CIFAR-100 and iNaturalist19 while performing on par with [11] on CIFAR-10. Please excuse the absence of results on Flowers, Cars, and iNaturalist-18 datasets due to the tight submission deadline of the rebuttal. We will do our best to include them all in our final manuscript.\n\n**Table R6**. 
Transfer learning results on CIFAR-10, CIFAR-100, and iNAT-19.\n| Model | Size | GFLOPs | CIFAR-10 | CIFAR-100 | iNAT-19 | ImageNet-1K |\n|:---------------:|:----:|:------:|:--------:|:---------:|:-------:|:-----------:|\n| ViT-L/16 [15] | 307 | 117 | 97.9 | 86.4 | - | 76.5 |\n| DeiT-B [11] | 86 | 18 | 99.1 | 90.8 | 77.7 | 81.8 |\n| PerViT-M (ours) | 44 | 9 | 99.1 | 91.4 | 78.5 | 82.9 |\n\n**[Minor points / typos]**\nWe appreciate the comments and will correct the typos accordingly.\n", " We thank reviewer NKKB for constructive comments and suggestions and will revise our paper by reflecting them as much as possible.\n\n**[Clarification on peripheral projections]**\nThe proposed position-based attention $\\Phi_{p}$ extends the idea of previous relative positional (RP) encoding work [10, 15, 22, 37, 38, 50] which all use a single-layer linear projection, i.e., neural networks. In our work, we adopt a multi-layered design to provide torus-shaped attentions (top-right of Fig.2) and peripheral projections (PP) to break rotational symmetric properties (middle of Fig.2) to model peripheral vision. Specifically, given relative positions as inputs $\\mathbf{R} \\in \\mathbb{R}^{HW \\times HW \\times D_r}$, PP with $\\mathbf{W} \\in \\mathbb{R}^{K^2 \\times D_r \\times D_{\\text{hid}}}$ is formally defined as follows: $PP(\\mathbf{R}, \\mathbf{W})_{\\mathbf{q}, \\mathbf{k}, :} \\coloneqq \\sum\\_{\\mathbf{m} \\in \\mathcal{N}(\\mathbf{k})} \\mathbf{R}\\_{\\mathbf{q}, \\mathbf{m}, :} \\mathbf{W}\\_{\\mathbf{m} - \\mathbf{k}, :, :}$ where the neighbor function $\\mathcal{N}$ provides a set of neighbors around given input key position $\\mathbf{k}$ and is formally defined as $\\mathcal{N}(\\mathbf{k}) \\coloneqq \\left[ \\mathbf{k}-[\\frac{K}{2}], \\dots, \\mathbf{k}+[\\frac{K}{2}] \\right] \\times \\left[ \\mathbf{k}-[\\frac{K}{2}], \\dots, \\mathbf{k}+[\\frac{K}{2}] \\right]$. We use $K=3$ for all layers and heads of PerViT in our experiments as $K > 3$ hardly brought improvements (Supp. L201). We will clarify this in our revised manuscript.\n\n**[Experiments with different ViT implementation, e.g., MAE [A]]**\nWe'd like to note that the ViT implementation in MAE are designed for large ViT models, e.g., ViT-Base/Large/Huge, not for small ViTs, e.g., ViT-Tiny/Small/Medium, that our paper adopts. According to the MAE paper [A], “ViT-L is very big and tends to overfit” (Sec.4 of [A]) and “the training is unstable with NaN frequently observed during training” (Appendix.2 of [A]) so it explores different training recipes for the large ViTs (Tab.11 of [A]) with strong regularization which brings ~5%p improvements (from 76.5 to 82.5 as seen in Sec.4 of [A]) for ViT-L. Therefore, we adopt ViT implementations of DeiT [11] not those of MAE because our paper explores small ViT models, e.g., PerViT-T/S/M; we compare the models sizes in **Tab.R1**. To see how MAE's ViT implementation affects the performance of our (small) models, we conduct experiments with PerViT-T/S/M using MAE's ViT implementations and summarize the results in **Tab.R2**; strong regularization of [A] severely damages performance for all three models. Meanwhile, we observe that PerViT-S/M less suffer compared to PerViT-T as the implementations of MAE are optimal for large ViT models.\n\n**Table R1**. Model size, e.g., the number of parameters (M), comparison.\n| PerViT-T | PerViT-S | PerViT-M | MAE-B | MAE-L | MAE-H |\n|:--------:|:--------:|:--------:|:-----:|:-----:|:-----:|\n| 7.6 | 21 | 44 | 86 | 304 | 632 |\n\n**Table R2**. 
ImageNet top-1 accuracy comparisons using different implementations of DeiT and MAE [A].\n| Impl. | Tiny | Small | Medium |\n|:-----------:|:----:|:-----:|:------:|\n| DeiT (ours) | 78.8 | 82.1 | 82.9 |\n| MAE [A] | 74.9 | 81.1 | 82.8 |\n\n\n\n**[Additional baselines: $\\Phi_p$ with fixed parameters]**\nTo highlight the benefits of learning $\\Phi_p$ using NN, we conduct additional baseline experiments using PerViT-Tiny with parameters of $\\Phi_p$ fixed during training under three different initialization methods (L279-283 & top sec. of Tab.4) and summarize the results in a **Tab.R3**. Fixing $\\Phi_p$ damages for all three intializations which verify the efficacy of learning diverse position-based attentions across different layers and heads. We also observe that conv and rand inits perform poorly compared to PerViT-T without $\\Phi_p$, e.g., model (b) of Tab.1; we suspect that $\\Phi_p$ with fixed conv & rand inits only provides local and noisy attentions respectively for all layers while the model without $\\Phi_p$ has no such strong restrictions. We will include the results and discussion in our final manuscript.\n\n**Table R3**. ImageNet top-1 accuracy comparisons with fixed $\\Phi_p$ under three different initializations.\n| Model | peri (ours) | conv | rand |\n|:--------------:|:-----------:|:----:|:----:|\n| Ours | 78.8 | 78.6 | 78.5 |\n| Fixed $\\Phi_p$ | 77.8 | 77.1 | 75.8 |\n| Without $\\Phi_p$ | 77.3 | 77.3 | 77.3 |\n\n\n[A] He et al., Masked Autoencoders Are Scalable Vision Learners.", " We thank reviewer XfL6 for insightful comments and positive evaluation on our work. We will revise the paper by reflecting them as much as possible.\n\n**[Experiments on other tasks & datasets]**\nWe truly appreciate the reviewer’s motivating feedback that our method could benefit computer vision models in different tasks like semantic segmentation which is suggested by reviewer 5MyL as well. However, please excuse the absence of results on other tasks in this rebuttal due to its tight submission deadline; the implementation and experiments for the task of segmentation demanded more time than we expected. Nevertheless, we will do our best to include the requested results during the author-reviewer discussion period to further improve our manuscript. Instead, we evaluate our model, e.g., PerViT-M, on different transfer learning task with ImageNet pre-training and compare the results with baseline models of [11, 15] in **Tab.R6**. We finetune trained PerViT-M on CIFAR-10, CIFAR-100, and iNaturalist-19, following the same training recipes of DeiT [11]. Even with significantly lower complexity than [11, 15], our method surpasses baselines by approx. 1%p on CIFAR-100 and iNaturalist19 while performing on par with [11] on CIFAR-10. Please excuse the absence of results on Flowers, Cars, and iNaturalist-18 datasets due to the tight submission deadline of the rebuttal. We will do our best to include them all in our final manuscript.\n\n**Table R6**. 
Transfer learning results on CIFAR-10, CIFAR-100, and iNAT-19.\n| Model | Size | GFLOPs | CIFAR-10 | CIFAR-100 | iNAT-19 | ImageNet-1K |\n|:---------------:|:----:|:------:|:--------:|:---------:|:-------:|:-----------:|\n| ViT-L/16 [15] | 307 | 117 | 97.9 | 86.4 | - | 76.5 |\n| DeiT-B [11] | 86 | 18 | 99.1 | 90.8 | 77.7 | 81.8 |\n| PerViT-M (ours) | 44 | 9 | 99.1 | 91.4 | 78.5 | 82.9 |\n\n\n\n\n\n**[Clarification on our method, e.g., peripheral projections]**\nThe proposed position-based attention $\\Phi_{p}$ extends the idea of previous relative positional (RP) encoding work [10, 15, 22, 37, 38, 50] which all use a single-layer linear projection, i.e., neural networks. In our work, we adopt a multi-layered design to provide torus-shaped attentions (top-right of Fig.2) and peripheral projections (PP) to break rotational symmetric properties (middle of Fig.2) to model peripheral vision. Specifically, given relative positions as inputs $\\mathbf{R} \\in \\mathbb{R}^{HW \\times HW \\times D_r}$, PP with $\\mathbf{W} \\in \\mathbb{R}^{K^2 \\times D_r \\times D_{\\text{hid}}}$ is formally defined as follows: $PP(\\mathbf{R}, \\mathbf{W})_{\\mathbf{q}, \\mathbf{k}, :} \\coloneqq \\sum\\_{\\mathbf{m} \\in \\mathcal{N}(\\mathbf{k})} \\mathbf{R}\\_{\\mathbf{q}, \\mathbf{m}, :} \\mathbf{W}\\_{\\mathbf{m} - \\mathbf{k}, :, :}$ where the neighbor function $\\mathcal{N}$ provides a set of neighbors around given input key position $\\mathbf{k}$ and is formally defined as $\\mathcal{N}(\\mathbf{k}) \\coloneqq \\left[ \\mathbf{k}-[\\frac{K}{2}], \\dots, \\mathbf{k}+[\\frac{K}{2}] \\right] \\times \\left[ \\mathbf{k}-[\\frac{K}{2}], \\dots, \\mathbf{k}+[\\frac{K}{2}] \\right]$. We use $K=3$ for all layers and heads of PerViT in our experiments as $K > 3$ hardly brought improvements (Supp. L201). We will rephrase this part (with a visual illustration if possible) in our final manuscript.\n\n**[Answers to questions]**\n\n- Peripheral projections: please refer to our response above.\n\n- $\\mathcal{N}(\\cdot)$: please refer to our response above.\n\n- Discontinuity in the learned attention maps: In page 8 of supp., we explore different network designs of $\\Phi_p$ and perform qualitative comparisons. Comparing (1, 2) with (3, 4) of Fig.S6, we observe that $3 \\times 3$ kernel in $\\mathcal{N}(\\cdot)$ creates vertical/horizontal discontinuities in attentions, capturing vertical/horizontal relationship of visual features while the multi-layer design gives more diversity in shapes.\n\n- The necessity of ML and $\\mathcal{N}$ for $\\Phi_p$: As demonstrated in page 8 of supp., modeling effective peripheral vision demands both designs of ML and $\\mathcal{N}$; ML helps the model in forming (torus-shaped) peripheral regions while $\\mathcal{N}$ gently breaks the rotational symmetric property (L135-139).\n\n\n", " The paper proposes a transformer architecture which incorporates human-like peripheral vision. They modify the multi-head attention with peripheral attention, which consists of content and position attention. Content attention is the same as scaled dot product attention. Position attention is learned using a neural network which takes a fixed relative position embedding as input and is independent of the input image. The position-based attention learned by the proposed approach seems to divide the image into distinct regions similar to peripheral vision. Finally, they present quantitative and qualitative results to validate the effectiveness of their approach. 
Strengths: \n* The paper aims to design neural networks which see in a similar way to humans. The direction and the motivations are novel.\n* Extensive qualitative results help in understanding the proposed approach.\n\nWeakness:\n* The motivation behind using neural networks for learning position-based attention is not explained clearly.\n* It is possibly difficult to disentengle the performance gained by the particular change (like the peripheral encodings in thie case). There is huge variation withing different implementations of ViT itself. For instance, the implementation of ViT in [A], significantly outperforms the original model. ViT implementation in [A] without autoencoder fine tuning achieves 82% top1 on imageNet, bringing over 5% improvement on the original model (76.5%). It is hence natural to question approaches which bring minor gains (less than a percent). The change can clearly be achived by simpler engineering corrections. \n* A more solid approach would be to use the optimally trained ViT and then base the experiments on it. \n\n[A] He et al. Masked Autoencoders Are Scalable Vision Learners. CVPR 2022 * What is the formulation of neighbour function, and how does it impact the learned position-based attention?\n\n* There should be a baseline which uses fixed position embedding varying by layer to highlight the benefits of learning position embedding using NN. Did the authors try such a baseline? Would appreciate a discussion on that in the author's response.\n\n* Can you take the ViT implementation of [A] and apply the peripheral positional encoding on it? Yes, the limitations are addressed adequately.", " This paper takes inspiration from the foveated nature of human vision, in which visual scenes are processed with decreasing resolution and increased information compression as eccentricity to the center of the fovea increases.\nIt notes that recent vision transformers (ViTs), which use multi-headed self-attention, require a lot of data to learn useful patterns of self-attention at different layers of the feature hierarchy.\nWhile various methods have been proposed to address this, often relying in one way or another on re-asserting locality into the processing e.g. explicitly via convolution or by introducing pyramidal structure or more structured attention, the paper here proposes adding a position-based attention mechanism into a standard data-efficient image transformer (DeIT).\nIn the MPA, learned position-based attention maps are combined with the (original) learned content-based attention maps to produce a \"Multi-head Peripheral Attention\" (MPA) map.\nExperiments on ImageNet-1k show that their Peripheral ViT (PerViT) model which uses MPA uses this additional structure in the attention to perform better at the classification task than without, to a level on par with recent pyramidal architectures, across three different approximate model sizes.\nFurther experiments look at the qualitative nature of the learned peripheral maps, the trade off of influence between the position-based and content-based maps, the locality of the attention at different layers, the benefit of careful initialization, and an ablation of the various model additions. 
[Paper Strengths]\n* the proposed multi-head peripheral attention approach seem to be new, technically sound, and interesting \n* the experiments are well-motivated, with carefully chosen baselines and good performance\n* code is available with promise of reproducible experiments (although I did not try)\n\n[Paper Weaknesses]\nOverall I think the paper presents an interesting approach backed up by some good empirical results, but there are a few reasons why I remain borderline:\n* I think that the paper somewhat oversimplifies peripheral vision and oversells its involvement in the design of the model. There is only one reference to peripheral vision in the main text, and it is a bit reductive -- I encourage the authors to explore more recent literature on peripheral vision, e.g. https://visxvision.files.wordpress.com/2017/08/ruth_peripheral.pdf in order to better relate their work to peripheral vision literature. I think it is fair to be inspired by the notion of peripheral vision, but using definitive terms like \"peripheral inductive bias\" (L56) may be misleading. Claims like L47 \"According to recent study on inner workings of vision transformers [11, 15, 36, 42, 51], their behavior is in fact closely related to how the peripheral vision functions\" are not supported by the cited works (which make no mention of peripheral vision), and are arguably more speculation than scientific \"fact\". I believe the paper will be better served by toning down this aspect; it is ok to be inspired loosely by a notion of how peripheral vision might work, and this model might indeed model peripheral vision to a better extent than previous works, but as far as I can tell, we do not know and cannot be certain.\n* Secondly, although the experiments are reasonably extensive, a few experiments are missing which I think are important for a reader to understand the value MPA. (i) One of the main motivations to impose structure into the attention (I think) is to improve training sample efficiency. Have any experiments been conducted to explore this (e.g. performance vs training set proportion on ImageNet)? (ii) My understanding is that of the three components (phi_p, C-Stem, CPE), phi_p is the main novelty of this work, while C-Stem and CPE have been adopted from prior work. I would like to see the ablation on the -S and -M models to understand the impact with/without only phi_p in those cases. (iii) Experiments on other datasets (e.g. ones which DeIT supports already like iNaturalist) would help to further validate the usefulness of the proposed method.\n* Although I found the paper to have good general coverage of recent work in this area, I found the related work section to be somewhat brief, and would appreciate a more explicit explanation of why the proposed work differs from previous efforts to improve the efficacy of the self-attention mechanism for vision transformers. \n\nEDIT: post-rebuttal, upgraded Rating from 5 to 7 (accept).\n\n[Minor Points / typos]\n* L9 \"large-scale ImageNet dataset\" -- \"large-scale\" here may mislead readers to think of the full ImageNet dataset; in reality the 1K dataset is used in the paper\n* some references are not ordered numerically e.g. L30, 53, 82\n* L56 - we proposes \n* Fig 3 - bases on -> is based on the \n* Sec 3 - please define new variables when they are introduced (e.g. D_h), for better clarity. 
\n* Eq 3 has an unwanted comma\n* L119-21 - break up the sentence for better clarity\n* L123 - Eucliden\n* L134 non-linearlity\n* L142 the they\n* please check grammar in 4.1\n* L252 - double negative should be single negative?\n* Please fix table ordering in 4.2 so it follows text In addition to the points mentioned above,\n1. I think Figure 1 could be explained more clearly to inform a new reader - could you explain how does this learned attention map relate to peripheral vision? \n2. How was the D_r chosen - and what is the impact of varying it?\n3. L142 what is K?\n4. L179 as far as I'm aware, \"cylindrical\" is not a standard term. What is meant by the term here? Do you mean \"columnar\", in the sense as described by e.g. https://arxiv.org/pdf/2102.12122.pdf ? Columnar is strictly better, because cylindrical implies a circle in some plane?\n5. I'm confused why w_r is appearing in Eq 9, when it doesn't appear to be part of the peripheral projection? (based on my reading of L122-L132). Is this just describing all of the parameter settings for all experiments?\n6. What does classification of visual regions add in Fig 5? It seems somewhat arbitrary, since \"angle\" doesn't have an obvious notion in an image taken from an unknown camera. In Supp Sec B, there is some explanation of visual fields broken down by angle, but is it fair to assume that images correspond to a particular FOV? Peripheral vision only makes sense in the context of a physical eyeball which has a focal point; how does that relate to an image from an unknown camera? An alternative might be to make a scatter plot of average attention distances vs layers, in a similar vein to Fig 3 of [36]? \n7. What, if any, is the distinction between the proposed Peripheral Positional Encoding (PPE) (Table 2) and the used Convolutional Positional Encoding (CPE) (Table 1)? \n8. Table 5 of the DeIT paper https://arxiv.org/pdf/2012.12877.pdf shows that using the distillation method during training yields stronger performance (e.g. DeIT-B gives 83.4 Top-1, and higher when trained for longer). Although this is not necessarily a fair comparison, I think it could be mentioned at least. The abstract and contributions claim unconditionally (L11, L69) that the PerViT yields \"state of the art performance in image classification task\" but I think this is not technically true and ought to be revised?\n9. parts of S3.1 could be rewritten to improve readability, in particular the explanation of Peripheral Projections from L141. \"By referring neighboring relative distances around the keys\" does not make sense to me - could it be rewritten? \n10. In Eq 8 - the PP function does not appear to be explicitly defined until the supplementary. Perhaps some of this section could be represented graphically, similar to Fig 1 in [50], to improve clarity? Yes", " This paper proposed a novel visual transformer model that incorporates the inductive bias of peripheral vision into the relative positional encoding. Specifically, the authors define a multi-head peripheral attention (MPA) module, where content-based and position-based attention scores are first calculated independently and then combined through an element-wise product. The design of the position-based attention allows pairs of locations ($q$,$k$) with the same Euclidean distance to have similar attention scores. The peripheral projection introduces the diversity, while the peripheral initialization ensembles the increasing spatial receptive fields. 
The results show better performance on ImageNet classification task than models with similar capacity. The ablation studies suggest the major contribution of the performance improvement comes from the position-based peripheral attention in the MPA. Strength:\n- This paper proposed a concise idea to incorporate peripheral vision into the transformer-based computer vision models. The definition of relative positional encoding is straightforward to apply and understand and is well-aligned with the distance-based sampling in the human visual system.\n- The performance evaluation and ablation studies convincingly suggest that peripheral attention is the critical module that contributes to the improvement, and it has a reasonable and interpretable behavior (e.g., the position bias is mainly seen in early layers; larger models have farther peripheral information).\n\nWeakness:\n- The evaluation of ImageNet classification may not be the best task to show the benefit of adding a peripheral mechanism to the computer vision model. It would be great if the authors could demonstrate if this mechanism significantly increases the efficiency and robustness of the model.\n- The presentation of the methods and results can be improved for clarity. Major:\n- The \"peripheral projections\" part in Section 3.1 seems interesting and important, but it's hard to fully understand its intuition and computation from the description. Does this peripheral projection allow the attention to be calculated not from the key location itself, but from a small surrounding area (local context) around the key location? Does $PP$ in equation 8 stand for the computation of $\\Phi_p$ in equation 7? It will be helpful if the authors may rephrase this part in a clearer and more intuitive way, or maybe use a visual illustration if it's helpful. I want to understand the \"peripheral projection\" better since it has the most contribution according to the results in Table 4.\n- How is the $\\mathcal{N}(\\cdot)$ function defined in equation 7? Is it fixed for different layers/heads?\n- Why there is a vertical discontinuity in the attention patterns after peripheral projections (Fig. 2, middle row)?\n- From Table 4, it seems that adding ML (multi-layer design) to Euc doesn't help but adding $\\mathcal{N}$ (peripheral projection) to Euc+ML improves the performance significantly. Is the multi-layer design an essential component or do we only need the diversity from $\\mathcal{N}$?\n\n\nMinor:\n- The tables are not referenced in the order as they appear\n- Some discussion of how peripheral vision affects model behavior in a human-like way may be inspiring when sketching future directions.\n- It may be worth including a few early works that design biologically-inspired peripheral vision models with CNNs by polar transformation or using non-uniform sampling to generate foveated images, for example:\n 1. Wang, P., & Cottrell, G. W. (2017). Central and peripheral vision for scene recognition: A neurocomputational modeling exploration. Journal of vision, 17(4), 9-9.\n 2. Reddy, M. V., Banburski, A., Pant, N., & Poggio, T. (2020). Biologically inspired mechanisms for adversarial robustness. arXiv preprint arXiv:2006\n 3. Jonnalagadda, A., Wang, W., & Eckstein, M. P. (2021). Foveater: Foveated transformer for image classification. arXiv preprint arXiv:2105.14173.\n\n No negative societal impact.", " In this paper, the authors introduce a novel transformer biologically inspired transformer architecture. 
They introduce a change on the attention mechanism which enable models to split the visual field into different peripheral regions. Authors compare their proposal to the baseline on classification, showing improvement.\n **Strengths**:\n- S1. The paper proposes an interesting and useful addition to standard transformer attention. State-of-the-art methods typically lack of the ability to partition the visual field in peripheral regions, which can help focusing on the most important content for the model and make training more efficient.\n\n- S2. The paper is well written and motivated. I think authors described well the goal of the paper and the model, as well as illustrated well the idea with plenty of useful figures.\n\n- S3. The ablation study in the paper is very complete, going over all posible design decisions that shaped the final architecture and model. I specially like the results in Table 1, where the reader can easily understand the importance of the different elements. \n\n** Weaknesses**: \nIn my opinion the main weakness of the paper is the evaluation. I think authors fail to show that their proposed method is better than other proposals in the transformer area. My main concerns are:\n\n- W1. Authors only evaluate on one ImageNet, and do not provide additional results on other datasets. In my opinion for the paper to be convincing about the usefulness of the methods, author should provide results on other datasets as well.\n\n- W2. Focal-S performs better than the proposed model with similar number of flops and model size. As the ImageNet evaluation is the only data point we have to evaluate the model performance, it's hard to convince the reader with this evidence that the model is pushing the state-of-the-art. \n\n- W3. Given the main innovation of the model, I believe that an object centric dataset such as ImageNet is probably not the best dataset for evaluation. I think the proposed model is good at focusing to the different objects in the image in an efficient manner, while in ImageNet typically a single object is present.\n\n- W4. Following up with W3, I think this model could actually shine more on tasks such as segmentation, where fine detail of the input image is important to produce a quality output.\n\n- W5. Have the authors considered evaluating the segmentation masks generated by the attention? (Figure 6)\n\n\n\n------------------------------\n* After rebuttal*: The author's response addressed most of my concerns and I updated my review score recommending acceptance. My concerns are listed in the weaknesses section. My main questions are:\n\n- Have the authors considered other tasks/datasets for evaluation? In my opinion the evaluation is very limited.\n\n- How would the authors argue that the paper is still contributing to pushing the SOTA if Focal-S performs better than the proposed model.\n\n- Have the authors evaluated the quality of the segmentation in segmentation data? Yes, I think the discussion of limitations is correct. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "VAzHPQyi5kA", "qmWH2AiC4jA", "HNLttS6A7GO", "8-2Gr-7BCBJ", "nips_2022_nE8IJLT7nW-", "MvNbBKrlnw0u", "P6VF4FyZ0fv", "MvNbBKrlnw0u", "LPSF9yFx0To", "fVUXM17N8A6", "NVwVNq3niaB", "nips_2022_nE8IJLT7nW-", "B13llK6vyqX", "5dV3nUbi2-G", "lRp_nQ9Ggtc", "GEALQhg2Ay", "nips_2022_nE8IJLT7nW-", "nips_2022_nE8IJLT7nW-", "nips_2022_nE8IJLT7nW-", "nips_2022_nE8IJLT7nW-" ]
nips_2022_QzFJmwwBMd
ZARTS: On Zero-order Optimization for Neural Architecture Search
Differentiable architecture search (DARTS) has been a popular one-shot paradigm for NAS due to its high efficiency. It introduces trainable architecture parameters to represent the importance of candidate operations and proposes first/second-order approximation to estimate their gradients, making it possible to solve NAS by gradient descent algorithm. However, our in-depth empirical results show that the approximation often distorts the loss landscape, leading to the biased objective to optimize and, in turn, inaccurate gradient estimation for architecture parameters. This work turns to zero-order optimization and proposes a novel NAS scheme, called ZARTS, to search without enforcing the above approximation. Specifically, three representative zero-order optimization methods are introduced: RS, MGS, and GLD, among which MGS performs best by balancing the accuracy and speed. Moreover, we explore the connections between RS/MGS and gradient descent algorithm and show that our ZARTS can be seen as a robust gradient-free counterpart to DARTS. Extensive experiments on multiple datasets and search spaces show the remarkable performance of our method. In particular, results on 12 benchmarks verify the outstanding robustness of ZARTS, where the performance of DARTS collapses due to its known instability issue. Also, we search on the search space of DARTS to compare with peer methods, and our discovered architecture achieves 97.54\% accuracy on CIFAR-10 and 75.7\% top-1 accuracy on ImageNet. Finally, we combine our ZARTS with three orthogonal variants of DARTS for faster search speed and better performance. Source code will be made publicly available at: \url{https://github.com/vicFigure/ZARTS}.
Accept
This paper aims to solve the instability issues of differentiable architecture search (DARTS) using zero-order optimization. Three different optimization techniques are proposed and their efficacy is demonstrated successfully on several benchmark datasets and different variants of DARTS. Although there are some concerns regarding the computational complexity of zero-order optimization, the reviewers have found the contribution of this submission significant for acceptance at NeurIPS. Given this, we are happy to recommend acceptance.
train
[ "FCfSK2BccN9", "Z3ijPUw5-y3b", "lJ2fbTsOUL", "JHcxZ1CrWCW", "_0kY_GlMofa", "OoCAViMMr9d", "CSQ8Vju7qbE", "J0RqUeAMpG" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your detailed response, most of my concerns are addressed. I have raised my rating to borderline accept.", " Thank you for your thorough and valuable comments. We answer your questions as follows in the hope of resolving your concerns.\n\n**Q1: ZARTS is more time-consuming than other DARTS-based methods.**\n\nA1: We agree that the search cost of ZARTS is higher than some recent works aiming to speed up the search process, but it is more efficient than other methods that aim to stabilize DARTS, such as RDARTS, S-DARTS, and Amended-DARTS (as compared in Table 3). This work focuses on revealing the neglected harm of inaccurate approximation for optimal network weights $\\boldsymbol{\\omega^*(\\boldsymbol{\\alpha})}$ in DARTS and proposes to address the bi-level optimization problem in NAS by zero-order optimization methods.\n\nMoreover, our method is orthogonal to the prior works aiming to speed up the search process, such as P-DARTS, GDAS and MergeNAS. Sec. 5.3 and Table 5 in the submission show that ZARTS can be sped up by combining with these variants.\n\n\n**Q2: What properties for zero-order optimization methods are necessary to apply to NAS task?**\n\nA2: NAS requires zero-order optimization methods with efficient sampling strategies. We apply three zero-order optimization methods, including Multi-point estimator (RS), maximum-likelihood guided parameter search (MGS), and gradientLess descent (GLD), with progressive computational complexity and performance. Table 2 compares the three presented zero-order solvers (RS, MGS, and GLD), showing that sampling strategy significantly affects the performance. RS randomly samples candidates $\\boldsymbol{u}_i$ and thus has the least computational complexity, while GLD has to sample on spheres with various radii (line 173) and thus has the most computational complexity but achieves the best in Table 2. MGS, however, can trade accuracy off against speed. We hope the empirical results and analysis in this work could shed light on the frontier of NAS, and we will leave the exploration of new zero-order optimization methods as our future work.\n\n\n**Q3: Why does ZARTS converge better than DARTS?**\n\nA3: The main reason lies in that ZARTS circumvents the 1st-order approximation of $\\boldsymbol{\\omega}^*(\\boldsymbol\\alpha)$ and can search on the actual loss landscape. In contrast, DARTS adopts the 1st-order approximation, which distorts the loss landscape and optimum (please see Fig. 1 in the submission). Fig. 1c further verifies our analysis. We apply DARTS and ZARTS by starting from the same initial point and updating $\\boldsymbol{\\alpha}$ 10 times. ZARTS converges to the optimum, but DARTS does not.", " Thank you for your time and constructive feedback, we answer your questions as follows, which we hope will resolve your concerns.\n\n**Q1: Discussion on new zero-order optimization methods for NAS.**\n\nA1: Thanks for your suggestion. Inventing a new zero-order solver is intellectually attractive, yet our work is more focused on understanding the reason (so far unclear) why zero-order optimization outperforms first-order gradient descent for the (non-differentiable) bi-level optimization problem in NAS task, which mainly lies in an accurate estimation for $\\boldsymbol{\\omega}^*(\\boldsymbol{\\alpha})$ (please see discussion in Sec. 3 and Fig. 1). 
Results in Table 2 indicate that even the vanilla multi-point estimator (RS) surpasses DARTS, empirically verifying our analysis on the superiority of zero-order optimization over gradient-based methods in the NAS task.\n\nBy comparing the three presented zero-order solvers (RS, MGS, and GLD), we observe that the sampling strategy significantly affects the performance. RS randomly samples candidates $\\boldsymbol{u}_i$ and thus has the lowest computational complexity. In contrast, GLD has to sample on spheres with various radii (line 173) and thus has the highest computational complexity, but it achieves the best results in Table 2. MGS, however, can trade accuracy off against speed. We hope the experimental results and analysis in this work could shed light on the frontier of NAS. We will leave the exploration of new zero-order optimization methods as our future work.\n\n**Q2: Discussion on the search cost of ZARTS.**\n\nA2: ZARTS and DARTS-2nd have a similar search cost (1.0 GPU-day), but ZARTS performs much better due to its accurate estimation of $\\boldsymbol{\\omega}^*(\\boldsymbol{\\alpha})$. Though ZARTS is twice as slow as DARTS-1st, it can be sped up by combining it with other orthogonal methods, such as GDAS, P-DARTS, and MergeNAS. Sec. 5.3 and Table 5 in the submission show the results of these ZARTS variants. Specifically, GZAS (ZARTS+GDAS) achieves 97.34\\% average performance with only 0.3 GPU-day; P-ZARTS (ZARTS+P-DARTS) achieves 97.59\\% average performance with 0.4 GPU-day; MergeZARTS (ZARTS+MergeNAS) achieves 97.64\\% average performance with 0.5 GPU-day.\n\n**Q3: Can ZARTS directly search on ImageNet?**\n\nA3: Yes. ZARTS has the same memory cost as DARTS, and it can be reduced by combining ZARTS with other orthogonal variants, such as MergeNAS (please see Table 5 in the submission). ZARTS can also directly search on ImageNet on a single NVIDIA 3090 GPU with 24G memory. Specifically, we train a supernet with 8 cells and 16 initial channels for 50 epochs with batch size 128. For MergeZARTS, the memory-efficient variant of ZARTS introduced in Sec. 5.3, we can train the supernet with batch size 256. To reduce the search time, we randomly sample 25\\% of the samples from the training set of ImageNet and divide them into two subsets to train the weights and the architecture parameters, respectively. The performance of the discovered architectures and the search cost are shown in Table 4 in the supplementary material. We also briefly list our results here.\n\n| Method | Params (M) | Top-1 Error (%) | Search Cost (GPU-day) |\n|:-------------|:----------:|:---------------:|:---------------------:|\n| SPOS | 3.5 | 25.6 | 12 |\n| ProxylessNAS | 7.1 | 24.9 | 8.3 |\n| FBNet-C | 5.5 | 25.1 | 9 |\n| ZARTS | 5.2 | 24.4 | 2.6 |\n| MergeZARTS | 5.5 | 24.3 | 0.7 |\n", " **Q4: Does ZARTS have the same memory cost as DARTS? Can ZARTS directly search on ImageNet?**\n\nA4: ZARTS has the same memory cost as DARTS, and it can be reduced by combining ZARTS with other orthogonal variants, such as MergeNAS (please see Table 5 in the submission). ZARTS can also directly search on ImageNet on a single NVIDIA 3090 GPU with 24G memory. Specifically, we train a supernet with 8 cells and 16 initial channels for 50 epochs with batch size 128. For MergeZARTS, the memory-efficient variant of ZARTS introduced in Sec. 5.3, we can train the supernet with batch size 256. To reduce the search time, we randomly sample 25\\% of the samples from the training set of ImageNet and divide them into two subsets to train the weights and the architecture parameters, respectively. 
The performance of the discovered architectures and the search cost are shown in Table 3 in the supplementary material. We also briefly list our results here.\n\n| Method | Params (M) | Top-1 Error (%) | Search Cost (GPU-day) |\n|:-------------|:----------:|:---------------:|:---------------------:|\n| SPOS | 3.5 | 25.6 | 12 |\n| ProxylessNAS | 7.1 | 24.9 | 8.3 |\n| FBNet-C | 5.5 | 25.1 | 9 |\n| **ZARTS** | 5.2 | 24.4 | 2.6 |\n| **MergeZARTS** | 5.5 | 24.3 | 0.7 |\n\n**Q5: Will the zero-order optimization still be accurate when searching on a huge search space?**\n\nA5: ZARTS is a general search method like DARTS and can be transferred to other search spaces, and our estimation of $\\boldsymbol{\\omega}^*(\\boldsymbol{\\alpha})$ will still be more accurate than that of DARTS. The primary concern about searching on a huge search space lies in the vast GPU memory required to build a supernet, which is a pervasive issue for all one-shot-based NAS methods. Fortunately, our ZARTS can be easily combined with other memory-efficient methods, such as GDAS and MergeNAS, which can reduce the GPU memory requirement by more than half (please see Table 5 in the submission).", " Thank you for your valuable comments. We answer the questions as follows in the hope of resolving your concerns.\n\n**Q1: The search cost of ZARTS is similar to DARTS (2nd), which may be a major limitation of this work.**\n\nA1: Though ZARTS has a search cost similar to DARTS (2nd), ZARTS considers $M=10$ steps of gradient descent to estimate the optimal network weights $\\boldsymbol{\\omega}^*(\\boldsymbol{\\alpha})$, while DARTS (2nd) only considers one step of gradient descent for $\\boldsymbol{\\omega}$. Therefore, ZARTS is more efficient than DARTS.\nMoreover, we can simply speed up ZARTS by reducing $M$. In particular, when $M=2$, ZARTS achieves 97.38\\% accuracy, still outperforming DARTS (2nd) with a lower search cost (0.3 GPU-day), as shown in Table 1 (right) in the supplementary material. We also briefly list our results here.\n| Model | ZARTS(M=2) | ZARTS(M=5) | ZARTS(M=8) | ZARTS(M=10) | DARTS(1st) | DARTS(2nd) |\n|:---------------|:----------:|:----------:|:----------:|:-----------:|:----------:|:----------:|\n| Error (%) | 2.62 | 2.60 | 2.57 | 2.54 | 3.00 | 2.76 |\n| Cost (GPU-day) | 0.3 | 0.6 | 0.8 | 1.0 | 0.4 | 1.0 |\n\n**Q2: Many variants adopt the first-order approximation instead of the second-order approximation for faster search, though it has inaccurate gradient estimations.**\n\nA2: The performance of these variants can be further improved by combining them with ZARTS to refine the inaccurate gradient estimations, as verified by Table 5 in the submission. Specifically, in Sec. 5.3, we combine ZARTS with three variants of DARTS and derive GZAS, P-ZARTS, and MergeZARTS, which achieve better performance than the original methods within at most 0.5 GPU-day, showing that their fast search speed can also be maintained.\n\n**Q3: Suggestion about searching on NAS-Bench-201 to show the accuracy curves of architectures during the search procedure.**\n\nA3: Thanks for your suggestion. On the one hand, Fig. 2 in the submission shows the accuracy curves of architectures searched on DARTS's search space. Specifically, we first train the supernet for 200 epochs by DARTS and ZARTS and obtain the discovered architectures every 25 epochs. Then we train those discovered architectures from scratch for 600 epochs in the same experimental settings. 
Please see Line 254-267 in the submission and Sec. 2.5 in the supplementary material for details of the experimental settings. We believe that Fig. 2 demonstrates the superiority of ZARTS in architecture optimization. We observe that the architectures searched by ZARTS perform stably well (around 97.40\\% accuracy), while the performance\nof those searched by DARTS gradually drops. Moreover, the parameter number of the architectures searched by DARTS decreases significantly after 50 epochs, indicating that parameterless operations dominate the topology and the instability issue occurs.\n\nOn the other hand, we search on NAS-Bench-201 and report the results in Table 4 in the supplementary material. We also briefly list our results here. Specifically, we adopt the hyperparameters in NAS-Bench-201 for a fair comparison. The results are averaged over three independent runs. Our method outperforms DARTS and GDAS on all three datasets. The accuracy curve is also plotted in Fig. 4 in the supplementary material, showing that the search process of ZARTS is quite stable.\n| Method | CIFAR-10(valid) | CIFAR-10(test) | CIFAR-100(valid) | CIFAR-100(test) | ImageNet16-120(valid) | ImageNet16-120(test) |\n|:---------- | ---------------:|:--------------:| ---------------- | --------------- | --------------------- | -------------------- |\n| DARTS(1st) | $39.77\\pm0.00$ | $54.30\\pm0.00$ | $15.03\\pm0.00$ | $15.61\\pm0.00$ | $16.43\\pm0.00$ | $16.32\\pm0.00$ |\n| DARTS(2nd) | $39.77\\pm0.00$ | $54.30\\pm0.00$ | $15.03\\pm0.00$ | $15.61\\pm0.00$ | $16.43\\pm0.00$ | $16.32\\pm0.00$ |\n| GDAS | $89.89\\pm0.08$ | $93.61\\pm0.09$ | $71.34\\pm0.04$ | $70.70\\pm0.30$ | $41.59\\pm1.33$ | $41.71\\pm0.98$ |\n| **ZARTS** | $91.23\\pm0.24$ | $93.98\\pm0.27$ | $71.64\\pm1.31$ | $71.67\\pm1.30$ | $44.46\\pm1.36$ | $45.06\\pm0.97$ |\n", " This paper proposes a DARTS variant named ZARTS, which replaces the first-order and second-order approximations with zero-order approximations. Experiments on various datasets are conducted to show the superiority of ZARTS. Strengths:\n1. The inaccurate gradient estimation in the first-order approximation can result in severe performance collapse and instability problems; this should be highlighted in the community. This paper proposes a zero-order approximation method, which seems to achieve better architecture optimization according to the paper's visualizations.\n2. According to Table 5, ZARTS applies to various DARTS variants, and can bring consistent improvements to them.\n\nWeaknesses:\n1. The search cost of ZARTS is similar to DARTS (2nd). Many variants adopt the first-order approximation instead of the second-order approximation for faster search, though it has inaccurate gradient estimations; the proposed zero-order approximation has a similar cost to the second-order one, which may be a major limitation of this work.\n2. Figure 1 is not sufficient to show the superiority of ZARTS in architecture optimization. I suggest the authors conduct experiments on NAS benchmarks (e.g., NAS-Bench-201) to show the accuracy curves of architectures during the search procedure. 1. Does ZARTS have the same memory cost as DARTS? Can ZARTS directly search on ImageNet?\n\n2. Will the zero-order optimization still be accurate when searching on a huge search space? Discussion of limitations was provided.", " The authors reexamine the bi-level optimization of NAS and reveal that the differentiable assumption in DARTS can mislead the search process. 
To verify the analysis, they plot the loss landscapes of DARTS and show that the first-order approximation of the network weights distorts the landscape and drifts the optimum. To tackle this issue, the authors propose ZARTS, which utilizes zero-order optimization for the architecture parameters. Specifically, they adopt three different zero-order optimizers, and sufficient empirical evaluation demonstrates that zero-order optimization consistently outperforms DARTS. Besides, the authors provide extensive convergence experiments to show the stability of ZARTS. In general, ZARTS provides a new insight for the NAS task and achieves good performance on multiple search spaces and datasets. Strengths:\n1. The paper is well-written and the motivation is clear. The authors first reexamine the bi-level optimization of NAS, then analyze the fundamental limitations of DARTS, leading to the introduction of ZARTS.\n2. The illustration in Fig. 1 is convincing, providing enough evidence of the drawback of the differentiable assumption of DARTS and the advantage of ZARTS over DARTS.\n3. Apart from applying three zero-order optimization methods to NAS, the authors also theoretically show that ZARTS degrades to DARTS under the differentiable assumption.\n4. The experimental results are impressive, showing that ZARTS outperforms DARTS and its variants on multiple search spaces and datasets. Besides, the authors conduct a sufficient evaluation of the stability of ZARTS.\n\nWeaknesses:\n1. The authors base their zero-order search methods on three existing zero-order optimization algorithms. Of course this is fine given the current excellent analysis and nontrivial technical adaptation to the NAS problem, but it would be more impressive if the authors could propose their own zero-order algorithm tailored to NAS. I know proposing a new zero-order optimizer itself could be a prominent paper, so at least the authors could provide more discussion on this point.\n2. The search cost of ZARTS seems to be longer than that of other variants of DARTS (in Table 3). Can the authors provide more detailed discussion on the search cost?\n\nOverall, the motivation is clear and the evaluation is sufficient. The major concern is the search cost of ZARTS, and I hope the authors can provide more analysis in the rebuttal.\n Please see the weaknesses. Besides, I notice that the variants of ZARTS, i.e., MergeZARTS and GZAS, require less GPU memory, so my question is: can ZARTS directly search on ImageNet? The limitations were discussed. ", " This paper proposes a new neural architecture search method, namely ZARTS, which solves the bi-level optimization through zero-order optimization. It provides interesting analysis of the differentiable assumption in DARTS and shows that the assumption can distort the loss landscape and mislead the search target in Fig. 1. This work then discards the differentiable assumption and turns to zero-order optimization techniques which, to my knowledge, have not been used in the neural architecture search (NAS) literature. This work also shows the connection between ZARTS and DARTS and points out that ZARTS degrades to the first/second-order DARTS when the iteration number M=0 (for first-order DARTS) or M=1 (for second-order DARTS). The well-designed experiments on the search spaces of DARTS and RobustDARTS verify the effectiveness of ZARTS, which is implemented in three variants trading off efficiency and efficacy. The paper is technically clear and well written. 
Strengths:\n\n++ This paper provides new and interesting analysis of the limitations of the differentiable assumption in DARTS, showing that it can distort the loss landscape, resulting in the severe instability issue (in Fig. 1a). \n\n++ Given the demonstrated drawbacks of the differentiable assumption, it is a good idea to adopt zero-order optimization methods to solve the bi-level optimization problem in NAS.\n\n++ The theoretical analysis of the connection between ZARTS and DARTS is worth publishing; specifically, when the iteration number M is set to 0/1, ZARTS degrades to the first/second-order DARTS.\n\n++ Experiments verify the efficacy of ZARTS, showing that zero-order optimization can effectively alleviate the instability issue of DARTS.\n\n++ This paper provides three variants of ZARTS by combining it with three variants of DARTS. Experiments indicate that these variants can speed up the search process and further improve the performance, showing the potential of zero-order optimization methods for NAS.\n\nWeaknesses:\n\n-- Zero-order optimization methods are known to be less efficient than gradient descent methods, so ZARTS requires 1 GPU-day to search, which is more time-consuming than other DARTS-based methods.\n 1. This work applies three zero-order optimization methods to NAS. I wonder what properties zero-order optimization methods need in order to be applied to the NAS task? This question is also pertinent to the novelty of the paper, for better motivating the adoption of the three specific zero-order optimization techniques presented in the paper.\n2. Zero-order optimization methods are known to be less efficient than gradient descent methods. It would be better if the authors could provide more discussion on why ZARTS converges better than DARTS.\n I think there is no particular limitation of this work." ]
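To make the multi-point zero-order update debated in the ZARTS record above concrete, here is a minimal, self-contained sketch of an RS-style search on a toy bilevel problem. Everything in it, including the quadratic stand-in losses, the step sizes and the candidate count, is an illustrative assumption and not the ZARTS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bilevel problem standing in for NAS: "architecture" alpha and "weights" w.
# The losses are simple quadratics so the example runs in milliseconds; in ZARTS
# these would be supernet losses evaluated on two data splits.
def loss_train(w, alpha):
    return np.sum((w - alpha) ** 2) + 0.1 * np.sum(w ** 2)

def loss_val(w, alpha):
    target = np.ones_like(alpha)
    return np.sum((w - target) ** 2) + 0.01 * np.sum(alpha ** 2)

def adapt_weights(w, alpha, steps=10, lr=0.2):
    """Approximate w*(alpha) with a finite number of gradient steps on the
    training loss (the inner loop the rebuttal calls M)."""
    w = w.copy()
    for _ in range(steps):
        grad_w = 2.0 * (w - alpha) + 0.2 * w  # analytic gradient of loss_train
        w -= lr * grad_w
    return w

def rs_step(w, alpha, n_candidates=8, sigma=0.3, inner_steps=10):
    """One random-search (RS) zero-order update of alpha: sample candidates
    u_i around alpha, score each by the validation loss of its adapted
    weights, and keep the best. No gradient w.r.t. alpha is ever formed."""
    best_alpha = alpha
    best_loss = loss_val(adapt_weights(w, alpha, inner_steps), alpha)
    for _ in range(n_candidates):
        cand = alpha + sigma * rng.standard_normal(alpha.shape)
        cand_loss = loss_val(adapt_weights(w, cand, inner_steps), cand)
        if cand_loss < best_loss:
            best_alpha, best_loss = cand, cand_loss
    return best_alpha, best_loss

w = rng.standard_normal(4)
alpha = rng.standard_normal(4)
for step in range(30):
    alpha, val = rs_step(w, alpha)
    w = adapt_weights(w, alpha)  # alternate weight / architecture updates
print("final validation loss:", round(float(val), 4))
```

The design point the sketch tries to surface is the one argued in the rebuttal: the architecture variable is updated purely by sampling and comparing validation losses of finite-step-adapted weights, so no differentiability assumption on the inner solution is needed.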
[ -1, -1, -1, -1, -1, 5, 7, 8 ]
[ -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "JHcxZ1CrWCW", "J0RqUeAMpG", "CSQ8Vju7qbE", "_0kY_GlMofa", "OoCAViMMr9d", "nips_2022_QzFJmwwBMd", "nips_2022_QzFJmwwBMd", "nips_2022_QzFJmwwBMd" ]
nips_2022_IvnoGKQuXi
Class-Dependent Label-Noise Learning with Cycle-Consistency Regularization
In label-noise learning, estimating the transition matrix plays an important role in building statistically consistent classifiers. The current state-of-the-art consistent estimator for the transition matrix has been developed under the newly proposed sufficiently scattered assumption, by incorporating a minimum-volume constraint on the transition matrix T into label-noise learning. Computing the volume of T heavily relies on the estimated noisy class posterior. However, the estimation error of the noisy class posterior can be large, as deep learning methods tend to easily overfit the noisy labels. Directly minimizing the volume of a T obtained in this way can therefore lead to a poorly estimated transition matrix. How to reduce the side effects of the inaccurate noisy class posterior has thus become the bottleneck of such methods. In this paper, we propose to estimate the transition matrix under a forward-backward cycle-consistency regularization, which greatly reduces the dependency of the estimation of the transition matrix T on the noisy class posterior. We show that the cycle-consistency regularization helps to minimize the volume of the transition matrix T indirectly, without exploiting the estimated noisy class posterior, which further encourages the estimated transition matrix T to converge to its optimal solution. Extensive experimental results consistently justify the effectiveness of the proposed method in reducing the estimation error of the transition matrix and in greatly boosting the classification performance.
Accept
This work addresses the problem of estimating the transition matrix under class-dependent noisy labels by using a forward-backward cycle-consistency regularization. There is merit in this work, as the proposed method might encourage the estimated transition matrix to converge to its optimal solution without explicitly estimating the noisy class posterior probability. Therefore, it could help to build better statistically consistent classifiers. It is shown theoretically that the proposed method is superior to the compared methods, and the effectiveness of the method is demonstrated on several different datasets. There was a lively discussion between the reviewers and the authors. Although there remain some open questions about how the hyperparameter values were chosen, I think this paper should be accepted.
train
[ "q-jl8H2tSdq", "5PfNLmDU34", "EHO_NmhT1sw", "uQWd1q8gVR", "KlZL5zUjWWb", "GuAwOXlqVcu", "xUf9OzGCPZ1", "OPQyVvltZ9T", "V6U8iKy2Bj2", "TvMG3-bAo1", "MIOejKv3wp", "cjdR3bgSbpN", "2fxE9Lc2-JJ", "_Qyy2D1kmnW", "fu_ImERHzeN", "mBOiYivWveG", "c4jY7sh1pan", "yrYAgR20Wj5", "PuN27q2z9E" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The response well addresses my concern and I would keep my score.", " Dear reviewer 3zVp,\n\nIt seems we have addressed all your major concerns. Can you kindly reconsider the recommendation? Thanks very much.\n\nBest", " Thank you very much for your quick responses. Your comments have greatly helped improve the quality of our paper. Here, we would like to answer the remaining questions.\n\n1、In the appendix, we have corrected the typo in equation 1, and we have updated the supplementary material. Thank you very much for pointing out this typo.\n\n2、For the questions of how the hyper-parameters were chosen, we divide the hyper-parameters we have used in our paper into two categories, to answer this question. \n\nThe first one is the most important trade-off hyper-parameter $\\lambda$ in Eq.(5). We use the greedy search strategy to choose its best value, where the search interval is [0.0, 1.0], and the search step is 0.1. As shown in Figure 2(m) to (p), we evaluate different values of $\\lambda$ on the CIFAR-10 and CIFAR-100 datasets, with noise rate 0.4 under the ''pair flipping'' noisy type. We have found that we can obtain the best performances on both datasets when $\\lambda$ is 0.3. Therefore, we set $\\lambda$ as 0.3 in all our experiments including the two synthetic and two real-world datasets.\n\nThe second category is about the parameters used during model training, including the network architecture, optimization method, batchsize, momentum, weight decay, learning rate, number of training epochs, etc.. All these parameters are set the same as our compared method VolMinNet [13], which provided the source code and the corresponding best parameters. When we combine our method with the DivideMix [11] method, these optimization parameters are set the same as DivideMix [11], which also provided the source code.\n\nWe sincerely hope that our responses can address all the remaining concerns. Thank you again for your great help and many good questions and suggestions, which largely help improve the quality of our paper. We would like to clarify if you have further concerns. We really hope that our paper could be reconsidered by the reviewer.", " Thanks for the detailed reply and new results. The authors have addressed my concerns well and I would keep my recommendation.", " Thank you for the prompt response. Regarding the second point, my question was about how the hyperparameter values were chosen, and not what the exact values are. I saw the paragraph in the main text listing the hyperparameter configuration, but this does not clarify how the authors arrived at these values.\n\nMinor remark: I think there is a typo in eq 1 in the appendix; the off-diagonal terms are probably \\eps/(C-1)?", " Thank you very much for your hard work and quick response for our reply. Here, we would like to answer the remaining questions to further address the reviewer's concerns.\n\n1、We have updated the supplementary material of our paper, which include the code and appendix. In the appendix, we have clearly described the definition of the noise model, and the true transition matrix $T$ for three commonly used noisy types : symmetry flipping, asymmetry flipping, and pair flipping. We highly encourage the reviewer to check our supplementary material to find these details, and thank you again for you hard work. \n\n2、In the experiment part, we have provided the implementation details, including all the hyper-parameter settings of our proposed method. 
In the ablation study part, we have analyzed the important parameter $\\lambda$ to explore its effect on model performance. Besides, we also provide the code in the supplementary material to help readers reproduce and evaluate our work. \n\nSpecifically, the details of how the hyperparameters are chosen can also be described here. ''For CIFAR-10 and CIFAR-100, the backbone we use is ResNet-34. We train the classification network $f(\\textbf{x}_i;\\textbf{w})$ and the transition matrices $T$ and $T^{b}$ with the SGD strategy, with a batch size of 128, momentum 0.9, weight decay $10^{-3}$, and learning rate $10^{-2}$. For CIFAR-10, the algorithm runs for 60 epochs and the learning rate is divided by 10 after the 30-$th$ epoch. For CIFAR-100, the algorithm runs for 80 epochs and the learning rate is divided by 10 after the 30-$th$ and 60-$th$ epochs.\nFor Clothing 1M and Food-101N, the backbone we use is ResNet-50 pre-trained on ImageNet. We also train the classification network $f(\\textbf{x}_i;\\textbf{w})$ and the transition matrices $T$ and $T^{b}$ with the SGD strategy, with a batch size of 32, momentum 0.9, weight decay $10^{-3}$, and learning rate $2 \\times 10^{-3}$. The algorithm runs for 80 epochs and the learning rate is divided by 10 every 30 epochs. Before training, we warm up on all noisy data with the early stopping technique, training for 10, 10, 1 and 1 epochs on the CIFAR-10, CIFAR-100, Clothing 1M and Food 101N datasets, respectively.''\n\nAs for the parameters of the baseline methods, our reported results are based on the public code provided by the authors, and each number in all the tables is the mean of five runs. We compared all the methods under the same experimental settings for a fair comparison, including the baseline network architecture, noise types, etc. All the hyper-parameter setting strategies can be found in the corresponding references or code, and we have further checked all the important references in our paper to help readers assess the experimental evaluation.\n\n3、Thank you very much for pointing out some typos and mistakes in our paper. Over the last few days, we have carefully proofread the manuscript several times, and we will update the manuscript if needed in the future. We also found that these few typos do not prevent readers from understanding our paper clearly, since the other three reviewers gave us positive comments and one reviewer pointed out that our paper is well-written. As for the unusual structure of this paper (e.g., the introduction contains the proof of the ''theoretical result''), we aim to help readers understand our method in more depth and more clearly, since the theoretical result is not complex to prove. \n\nWe sincerely hope that our responses address all the remaining concerns. Above all, we really hope that our paper can be reconsidered and evaluated. \n", " I would like to thank the authors for their detailed rebuttal. While the authors' response answers many of my questions, a few important points of my review have remained unaddressed. In particular, despite the good results reported on some datasets, I believe the manuscript, in its current form, fails to meet the requirements for effective scientific communication.\n\n- Important experimental details are missing (from either the main text or the appendix -- in fact, the paper does not have an appendix at all), e.g. 
what are the exact noise models used for the experiments?\n\n- It is impossible to assess the experimental evaluation since the paper does not provide details on how the hyperparameters of the baselines were chosen, or how the hyperparameters of the proposed method were chosen.\n\n- Finally, the paper is made difficult to read by the many typos and mistakes, but also by the unusual structure (e.g. the introduction contains the proof of the “theoretical result”, as the paper notes on line 188).\n\nFor these reasons I decide to maintain my score and suggest that the authors improve the manuscript before it can be ready for acceptance.", " Dear reviewer 3zVp,\n\nThank you very much for reviewing our paper and giving us some good questions. \n\nWe have tried our best to answer all the questions according to the comments. We sincerely hope that our responses address all your concerns. Is there anything you would like us to clarify further regarding the given concerns?\n\nThank you again for your hard work.\n", " Dear AC and all the reviewers, \n\nThanks for handling our manuscript.\n\nWe have tried our best to answer all of the reviewers' questions about our paper. We wonder whether our responses address all the concerns.\n\nThanks, all!", " 8、It is unclear from the results reported in the tables (in particular Tables 2 and 3) when the proposed method breaks and what exactly are the assumptions that it requires to perform well. \n\nThe exact requirement of the assumption is to utilize as many confident examples as possible during model training. In order to reveal what happens when the assumption breaks for the proposed method, we designed the following experiment to make a detailed comparison between our proposed method and the baseline method VolMinNet. In the experiment, we first train a classifier with clean labels on the Cifar-10 dataset. Then we utilize this model to remove the 50% most confident examples of every class from the synthetic noisy dataset. After that, we train our proposed method and VolMinNet on the same synthetic noisy subset. Finally, we test these two models on the same test dataset. The experimental results are listed below; we can clearly see that our algorithm is more robust than the baseline method VolMinNet.\n\n| Dataset | Cifar-10 (Synthetic Noisy Subset) |\n| Method | Sym-20 | Sym-40 | Sym-60 |\n| :-----------: | ----: | ----: | ----: |\n| VolMinNet [13] | 85.34$\\pm$0.08 | 80.33$\\pm$0.12 | 61.48$\\pm$0.58 |\n| ours | 86.06$\\pm$0.06 | 81.40$\\pm$0.23 | 71.53$\\pm$0.29 |\n\n9、Instead of selecting the 3 fixed noise rates reported in the paper, I suggest a plot in which the noise rate is varied on the Ox axis, while the Oy axis indicates the test accuracy. \n\nIn reality, selecting 3 fixed noise rates is the common experimental setting in the label-noise learning research field, and most research works follow the experimental setting reported in our paper. In order to give a more detailed analysis of the experimental results as the reviewer suggested, we have also run more experiments with different noise ratios. 
Given the time constraints, we only ran experiments on the Cifar-10 dataset with 6 noise ratios, and the experimental results demonstrate the same conclusion as reported in the paper.\n\n| Dataset | Cifar-10 |\n| Method | Sym-10 | Sym-20 | Sym-30 | Sym-40 | Sym-50 | Sym-60 |\n| :-----------: | ----: | ----: | ----: | ----: | ----: | ----: |\n| ours | 91.54$\\pm$0.13 | 90.44$\\pm$0.19 | 88.98$\\pm$0.15 | 87.30$\\pm$0.25 | 84.18$\\pm$0.12 | 81.01$\\pm$0.25 |\n", " 4、How does a simple baseline that uses early stopping regularization compare to the proposed method? \n\nWe compared our method with another recently proposed early stopping method for label-noise learning on the real-world dataset Clothing1M, since the two referenced methods do not provide suitable code for reproduction. We can clearly see that our method also outperforms such methods with early stopping regularization. Furthermore, if we combine our method with these techniques, we believe it could further boost the performance of our method.\n\n| Dataset | Clothing1M |\n| Method | CE | ForwardT[20] | VolMinNet[13] | DivideMix[11] | ERL[15] | EarlyStop | Ours | DivideMix+VolMinNet | DivideMix+Ours |\n| :-----------: | ----: | ----: | :----: | ----: | ----: | :----: | ----: | ----: | :----: |\n| Accuracy | 68.94% | 69.84% | 69.82% | 74.67% | 72.87% | 74.64% | 70.73% | 74.83% | 75.12%$\\pm$0.05 |\n\nERL[15]: Early-Learning Regularization Prevents Memorization of Noisy Labels, NIPS 2020.\n\nEarlyStop: Understanding and Improving Early Stopping for Learning with Noisy Labels, NIPS 2021.\n\n5、I suggest adding more natural label noise datasets to the experimental comparison (e.g., OpenImages, MS-COCO, MegaFace).\n\nMS-COCO is a large image recognition/classification, object detection, segmentation, and captioning dataset; it contains 330K images with more than 2M instances in 80 object categories. It is a multi-label dataset with about 5 objects per image. OpenImages is a dataset of ~9M images annotated with image-level labels, object bounding boxes, object segmentation masks, visual relationships, and localized narratives. Each image also has multiple labels.\n\nHowever, our method mainly focuses on the traditional single-label classification problem at present. MegaFace Challenge 2 (MF2) is a real-world long-tailed noisy dataset: training on 672K identities and 4.7M photos, and testing at million scale. The main challenge of this dataset is to cope with long-tail distributed data with open-set identities. In the future, we will extend our method to these datasets to cope with the multi-label classification problem with noisy labels, and also with the long-tailed data distribution problem.\n\n6、What are the exact noise models? How many classes are assumed to suffer from label noise in the symmetric/asymmetric/pair model? \n\nThe exact noise models follow the traditional experimental settings: we manually corrupt these datasets by the noise transition matrix $T$, where $T_{ij}(\\textbf{x})=P(\\bar{Y}=j|Y=i,X=\\textbf{x})$. This means that the noisy label $\\bar{y}_j$ is flipped from the clean label $y_i$. In all the experimental settings, we assume that all the classes in the dataset suffer from label noise under the symmetry/asymmetry/pair flipping model.\n\nSpecifically, symmetry flipping: with a given probability, the original label of each sample image is replaced by another class label randomly drawn from the class set. 
Asymmetry flipping: according to class similarity, all samples of a class are flipped, with a given probability, to a specific class label similar to it (such as bird <-> airplane, cat <-> dog, deer <-> horse, etc.). Pair flipping: a simulation of fine-grained classification with noisy labels, where labelers may make mistakes only within very similar classes.\n\n7、The method seems to not perform particularly well on datasets with natural noise. Why are only some and not all baselines presented in Tables 4 and 5? What are the confidence intervals for the values in Tables 4 and 5?\n\nAlthough we do not list all previous label-noise learning methods here, we have in fact compared against 14 recent representative works on the Clothing1M dataset in Table 4, which contain most of the representative statistically consistent algorithms and some semi-supervised methods. On the Food-101N dataset, we compared our method with almost all previous methods, including the statistically consistent and inconsistent algorithms, and we obtain state-of-the-art performance. \n\nAs for the confidence intervals of the values in Tables 4 and 5, we have done a detailed survey of the previously reported experimental results on these two datasets; no algorithm has ever reported confidence intervals on these real-world datasets. Because the noise distribution and noise ratio are fixed, all the algorithms just report the final classification accuracy. For the synthetic datasets, however, we always report the confidence interval, since the synthetic noise is randomly generated in each run. \n\nNevertheless, in order to meet the reviewer's request, we also report the mean classification accuracy and the standard deviation computed over five runs on these two real-world datasets as follows.\n\n| Method | Clothing1M | Food-101N |\n| :-----------: | ----: | :----: |\n| DivideMix | 74.58 | 84.37 |\n| DivideMix+ours | 75.12$\\pm$0.05 | 86.11$\\pm$0.03 |\n\n", " 1、Since this method needs a forward-backward training process, does it require more time to train the network? How many extra parameters are introduced in the newly proposed method compared with the previously proposed method VolMinNet[13]?\n\nCompared with the previous representative work VolMinNet[13], our proposed method does not need to directly compute the volume of the transition matrix $T$, i.e., vol($T$). In practice, vol($T$) denotes a measure that is related or proportional to the volume of the simplex formed by the columns of $T$. Given a square matrix $T$, the VolMinNet method adopts the determinant of the matrix as the volume measurement, i.e., vol$(T)=\\det(T)$, where $\\det$ denotes the determinant. Clearly, VolMinNet needs to compute the determinant of the transition matrix $T$ during model training. In contrast, our proposed method needs to optimize the backward transition matrix $T^{b}$ by minimizing another two cross-entropy losses, which introduces an extra $C\\times C$ parameters, where $C$ is the number of classes. However, these extra $C\\times C$ parameters are relatively small compared with the overall parameters of the whole model. Also, both the forward and backward transition matrices $T$ and $T^{b}$ can be optimized in an end-to-end manner simultaneously. 
Therefore, our method requires almost the same training time as the compared method VolMinNet in practice, and only introduces another $C\\times C$ extra parameters.\n\n2) This paper states that it tries to estimate the transition matrix under the sufficiently scattered assumption; what's the difference between this assumption and the previous anchor point assumption?\n\nThe anchor point assumption assumes that there exist some instances belonging to a specific class almost surely. It can be described as follows: for each class $k\\in \\{1,...,C\\}$, if there exists an instance $\\textbf{x}^k\\in X$ with $P(Y=k|X=\\textbf{x}^{k}) = 1$, then we call these examples anchor points. Under the anchor point assumption, transition-matrix-based label-noise learning methods focus on finding anchor points for each class. As for the sufficiently scattered assumption, its definition can be clearly stated following VolMinNet. The difference, or relationship, between them is that the anchor-point assumption is a sufficient but not necessary condition for the sufficiently scattered assumption when $C>2$. That is to say, the anchor-point assumption is a special case of the sufficiently scattered assumption, and the sufficiently scattered assumption is a milder assumption than the anchor-point assumption, which can be theoretically proved. Therefore, the proposed method under the sufficiently scattered assumption can deal with more general and complex noise models. \n\n\n", " 1、Ablation study on just using Eq.(3) to optimize the classification model.\n\nThe overall objective function shown in Eq.(5) contains three terms; the corresponding ablation configurations are: the T-Forward transition matrix module ($T$-For), the T-Backward transition matrix ($T^{b}$), their combination ($T+T^{b}$), and our final proposed method (Ours). We have done a detailed ablation study to reveal how each item contributes to the overall method and the performance improvement, which includes all the intermediate results (Eq.(2), Eq.(3) and Eq.(4)). All the experimental results on two synthetic datasets and two real-world datasets are listed in the following tables. We can clearly see that using just the T-Forward or T-Backward transition matrix obtains comparable experimental results, where T-Forward is slightly better. 
When we combine the above items step by step, a further performance improvement is obtained.\n\n| Dataset | Cifar-10 |\n| Method | Sym-20 | Sym-40 | Sym-60 | Asym-20 | Asym-40 | Asym-60 | Pair-20 | Pair-40 | Pair-60 |\n| :-----------: | ----: | ----: | :----: | ----: | ----: | :----: | ----: | ----: | :----: |\n| T-For ($T$) | 89.53$\\pm$0.11 | 85.38$\\pm$0.13 | 73.01$\\pm$0.54 | 89.46$\\pm$0.21 | 85.74$\\pm$0.11| 74.54$\\pm$0.12 | 90.25$\\pm$0.40 | 88.40$\\pm$0.35 | 74.08$\\pm$1.88 |\n| T-Back ($T^{b}$) | 88.40$\\pm$0.12 | 84.97$\\pm$0.16 | 73.12$\\pm$0.79 | 88.97$\\pm$0.14 | 85.81$\\pm$0.31 | 73.40$\\pm$0.81 | 90.03$\\pm$0.12 | 87.09$\\pm$0.93 | 73.26$\\pm$0.89 |\n| $T+T^{b}$ | 89.64$\\pm$0.16 | 85.47$\\pm$0.32 | 73.39$\\pm$0.40 | 89.62$\\pm$0.24 | 86.25$\\pm$0.03 | 74.80$\\pm$0.21 | 90.67$\\pm$0.27 | 89.35$\\pm$0.49 | 78.62$\\pm$1.40 |\n| ours | 90.44$\\pm$0.19 | 87.30$\\pm$0.25| 81.01$\\pm$0.25 | 90.55$\\pm$0.03 |87.29$\\pm$0.05| 82.58$\\pm$0.24 | 91.36$\\pm$0.13 | 91.08$\\pm$0.08 | 71.63$\\pm$0.39 |\n\n\n| Dataset | Cifar-100 |\n| Method | Sym-20 | Sym-40 | Sym-60 | Asym-20 | Asym-40 | Asym-60 | Pair-20 | Pair-40 | Pair-45 |\n| :-----------: | ----: | ----: | :----: | ----: | ----: | :----: | ----: | ----: | :----: |\n| T-For ($T$) | 64.23$\\pm$0.64 | 56.02$\\pm$0.39 | 40.89$\\pm$0.37 | 65.30$\\pm$0.01 | 56.31$\\pm$0.42 | 42.21$\\pm$0.58 | 69.27$\\pm$0.14 | 44.65$\\pm$0.37 | 39.10$\\pm$0.26 |\n| T-Back ($T^{b}$) | 63.39$\\pm$0.62 | 54.96$\\pm$0.43 | 41.15$\\pm$0.82 | 64.56$\\pm$0.34 | 55.09$\\pm$0.55 | 41.73$\\pm$0.73 | 68.61$\\pm$0.19 | 44.41$\\pm$0.31 | 38.86$\\pm$0.44 |\n| $T+T^{b}$ | 64.95$\\pm$0.91 | 56.36$\\pm$0.51 | 41.94$\\pm$0.43 | 65.52$\\pm$0.28 | 57.10$\\pm$0.20 | 42.72$\\pm$0.29 | 69.50$\\pm$0.53 | 44.79$\\pm$0.65 | 39.16$\\pm$0.58 |\n| ours | 67.74$\\pm$0.17 | 61.71$\\pm$0.20 | 49.30$\\pm$0.82 | 68.34$\\pm$0.24 | 62.64$\\pm$0.49 | 50.29$\\pm$0.24 | 71.63$\\pm$0.39 | 70.87$\\pm$0.14 | 69.18$\\pm$1.30 |\n\n\n2、In line 43-44, why is the anchor point assumption a special case of the sufficiently scattered assumption?\n\nAs described in the previous representative work VolMinNet[14], the anchor-point assumption is a sufficient but not necessary condition for the sufficiently scattered assumption when the number of classes $C>2$. The proof of this proposition can be found in VolMinNet[14]. To briefly answer this question, we first state the definition of the anchor point assumption: for each class $k\\in \\{1,...,C\\}$, if there exists an instance $\\textbf{x}^k\\in X$ with $P(Y=k|X=\\textbf{x}^{k}) = 1$, then we call these examples anchor points. Under the anchor point assumption, transition-matrix-based label-noise learning methods focus on finding anchor points for each class. Intuitively, if the anchor point assumption is satisfied, then there exists a matrix $\\textbf{V}=[P(Y|X=\\textbf{x}^1),...,P(Y|X=\\textbf{x}^C)]=\\textbf{I}$, where $\\textbf{x}^1,...,\\textbf{x}^C$ are anchor points for the different classes and $\\textbf{I}$ is the identity matrix. As illustrated in VolMinNet[14], the convex cone formed by the columns of $V$ is denoted as $cone\\{V\\}$. We can clearly see that $cone\\{V\\}=cone\\{\\textbf{I}\\}$, and $cone\\{V\\}$ can only be enclosed by the convex cone of permutation matrices. This shows that the sufficiently scattered assumption is satisfied. 
Therefore, the anchor point assumption is a special case of the sufficiently scattered assumption, but not vice versa.\n\n3、The authors should clearly define the experiment settings and how they are combined.\n\nDivideMix is a representative semi-supervised method which integrates several techniques to improve model performance. First, it uses a Gaussian mixture model to model the loss distribution of each sample, and then dynamically splits the training data into a clean labeled subset and an unlabeled subset with noisy samples. Finally, it adopts a semi-supervised method to train the model with the labeled and unlabeled data. However, the filtered subset that is considered to be clean still contains many noisy samples. Therefore, when training the model on this selected clean subset with the supervised method, i.e., the cross-entropy loss, we can also integrate our proposed transition matrix estimation module into the DivideMix framework.\n\n", " 1、It needs some model complexity analysis, especially for the training process of the backward and forward transition matrix.\n\nCompared with the previous representative work VolMinNet[13], our proposed method does not need to directly compute the volume of the transition matrix $T$, i.e., vol($T$), during model training. But the proposed method needs to optimize the backward transition matrix $T^{b}$ by minimizing another two cross-entropy losses, which introduces an extra $C\\times C$ parameters, where $C$ is the number of classes. However, these extra $C\\times C$ parameters are relatively small compared with the overall parameters of the whole model. Also, both the forward and backward transition matrices $T$ and $T^{b}$ can be optimized in an end-to-end manner simultaneously. Therefore, our method requires almost the same training time as the compared method VolMinNet in practice, and just introduces another $C\\times C$ extra parameters.\n\n\n2、Some implementation details are missing. How do you combine the proposed transition matrix estimation approach with the traditional DivideMix[11] algorithm?\n\nDivideMix uses a Gaussian mixture model to model the loss distribution of each sample and dynamically splits the training data into a clean labeled subset and an unlabeled subset with noisy samples. Then it adopts a semi-supervised method to train the model with the labeled and unlabeled data. However, the filtered subset that is considered to be clean still contains many noisy samples. Therefore, when training the model on this selected clean subset with the supervised method, i.e., the cross-entropy loss, we can also integrate our proposed transition matrix estimation module into the DivideMix framework. Specifically, we utilize the proposed method to further model the label noise in the filtered clean subset.\n\n\n3、Besides answering the above listed weaknesses, I am also curious about the following question: Why not directly use $T^{-1}$ instead of the learned $T^b$ to optimize the transition matrix?\n\nDifferent from the traditional approach, which estimates the transition matrix $T$ by minimizing the cross-entropy loss between the noisy class-posterior probability $P(\\bar{\\textbf{Y}}|X)$ and the given noisy label $\\bar{y}$ under specific constraints, we also propose to estimate the backward transition matrix $T^{b}$ simultaneously, to act as $T^{-1}$. Then we can also build the consistency regularization. 
However, since the transition matrix models the noisy data generation process, each element in the transition matrix has a physical meaning. Specifically, we always constrain the forward and backward transition matrices ($T$ and $T^b$) to be diagonally dominant column-stochastic matrices. Directly computing $T^{-1}$ cannot satisfy this constraint, so the backward transition matrix $T^b$ will be different from $T^{-1}$. Most importantly, the learned backward transition matrix works as a regularization term aiming to maximize the volume of the clean class posterior probability. Also, through the consistency regularization term, we can make full use of the invertible relationship between the two matrices $T$ and $T^b$. Finally, this encourages the estimated transition matrix to converge to the optimal solution.", " 1、Motivation of the proposed method: 1) The current state-of-the-art consistent estimator for the transition matrix has been developed by incorporating the minimum volume constraint of $T$ into label-noise learning. However, computing the volume of $T$ heavily relies on the inaccurately estimated noisy class posterior $P(\\bar{\\textbf{Y}}|X=\\textbf{x})$, which can lead the transition matrix $T$ to be poorly estimated. Instead, our method theoretically proves that minimizing the volume of $T$ is equivalent to maximizing the volume of the clean class posterior $P(\\textbf{Y}|X=\\textbf{x})$, which reduces the side effects of the inaccurately estimated noisy class posterior. To obtain the clean class posterior with maximum volume, we propose to regularize it through the backward transition matrix $T^{b}$ and the given noisy label $\\bar{y}$. 2) Different from the traditional method, we also propose to estimate the backward transition matrix $T^{b}$ simultaneously, to act as $T^{-1}$. When we obtain the forward and backward transition matrices $T$ and $T^{b}$, we can build the ``indirect'' cycle-consistency by minimizing the approximation error between $P(\\textbf{Y}|X=\\textbf{x})$ and $T^{b}(TP(\\textbf{Y}|X=\\textbf{x}))$, which makes full use of the invertible relationship between the two matrices $T$ and $T^{b}$ indirectly. Based on the above two reasons, it is necessary to learn the backward transition matrix $T^b$, through which we can encourage the estimated $T$ to converge to its optimal solution.\n\n2、Ablation study on just minimizing the second term in Eq.5.\n\nWe have done a detailed ablation study to reveal how each item contributes to the overall method and the performance improvement, which includes all the intermediate results (Eq.(2), Eq.(3) and Eq.(4)). All the experimental results on two synthetic datasets and two real-world datasets are listed in the following tables. We can clearly see that using just the T-Forward or T-Backward transition matrix obtains comparable experimental results, where T-Forward is slightly better. 
When we combine the above items step by step, a further performance improvement is obtained.\n\n| Dataset | Cifar-10 |\n| Method | Sym-20 | Sym-40 | Sym-60 | Asym-20 | Asym-40 | Asym-60 | Pair-20 | Pair-40 | Pair-60 |\n| :-----------: | ----: | ----: | :----: | ----: | ----: | :----: | ----: | ----: | :----: |\n| T-For ($T$) | 89.53$\\pm$0.11 | 85.38$\\pm$0.13 | 73.01$\\pm$0.54 | 89.46$\\pm$0.21 | 85.74$\\pm$0.11| 74.54$\\pm$0.12 | 90.25$\\pm$0.40 | 88.40$\\pm$0.35 | 74.08$\\pm$1.88 |\n| T-Back ($T^{b}$) | 88.40$\\pm$0.12 | 84.97$\\pm$0.16 | 73.12$\\pm$0.79 | 88.97$\\pm$0.14 | 85.81$\\pm$0.31 | 73.40$\\pm$0.81 | 90.03$\\pm$0.12 | 87.09$\\pm$0.93 | 73.26$\\pm$0.89 |\n| $T+T^{b}$ | 89.64$\\pm$0.16 | 85.47$\\pm$0.32 | 73.39$\\pm$0.40 | 89.62$\\pm$0.24 | 86.25$\\pm$0.03 | 74.80$\\pm$0.21 | 90.67$\\pm$0.27 | 89.35$\\pm$0.49 | 78.62$\\pm$1.40 |\n| ours | 90.44$\\pm$0.19 | 87.30$\\pm$0.25| 81.01$\\pm$0.25 | 90.55$\\pm$0.03 |87.29$\\pm$0.05| 82.58$\\pm$0.24 | 91.36$\\pm$0.13 | 91.08$\\pm$0.08 | 71.63$\\pm$0.39 |\n\n| Dataset | Cifar-100 |\n| Method | Sym-20 | Sym-40 | Sym-60 | Asym-20 | Asym-40 | Asym-60 | Pair-20 | Pair-40 | Pair-45 |\n| :-----------: | ----: | ----: | :----: | ----: | ----: | :----: | ----: | ----: | :----: |\n| T-For ($T$) | 64.23$\\pm$0.64 | 56.02$\\pm$0.39 | 40.89$\\pm$0.37 | 65.30$\\pm$0.01 | 56.31$\\pm$0.42 | 42.21$\\pm$0.58 | 69.27$\\pm$0.14 | 44.65$\\pm$0.37 | 39.10$\\pm$0.26 |\n| T-Back ($T^{b}$) | 63.39$\\pm$0.62 | 54.96$\\pm$0.43 | 41.15$\\pm$0.82 | 64.56$\\pm$0.34 | 55.09$\\pm$0.55 | 41.73$\\pm$0.73 | 68.61$\\pm$0.19 | 44.41$\\pm$0.31 | 38.86$\\pm$0.44 |\n| $T+T^{b}$ | 64.95$\\pm$0.91 | 56.36$\\pm$0.51 | 41.94$\\pm$0.43 | 65.52$\\pm$0.28 | 57.10$\\pm$0.20 | 42.72$\\pm$0.29 | 69.50$\\pm$0.53 | 44.79$\\pm$0.65 | 39.16$\\pm$0.58 |\n| ours | 67.74$\\pm$0.17 | 61.71$\\pm$0.20 | 49.30$\\pm$0.82 | 68.34$\\pm$0.24 | 62.64$\\pm$0.49 | 50.29$\\pm$0.24 | 71.63$\\pm$0.39 | 70.87$\\pm$0.14 | 69.18$\\pm$1.30 |\n\n| Method | Clothing1M | Food-101N |\n| :-----------: | ----: | :----: |\n| DivideMix | 74.58 | 84.37 |\n| DivideMix+T-For ($T$) | 74.83 | 85.07 |\n| DivideMix+T-Back ($T^{b}$) | 74.75 | 84.83 |\n| DivideMix+ours | 75.12$\\pm$0.05 | 86.11$\\pm$0.03 |\n\n\n3、What are possible reasons why the proposed method performs worse on natural noise models compared to the synthetic noise instances?\n\nNatural noise models always involve instance-dependent label noise, which is more complex than synthetic label noise with a predefined noise distribution. Besides, there are also some open-set labels in the real-world datasets with noisy labels. As for some other semi-supervised methods, e.g., DivideMix, although it performs better than many transition-matrix-based methods, it is a combination of many existing techniques. Therefore, we also integrate our proposed method into the DivideMix framework to further improve its performance, and finally obtain state-of-the-art results.", " The paper proposes a method to achieve good prediction accuracy under label noise by estimating the transition matrix from clean conditional probabilities to noisy conditional probabilities. 
The paper presents experiments on synthetic noise models with a known transition matrix and natural noise models.\n Strengths:\n\n- Some of the experimental results are promising, especially on the synthetic noise models (Tables 1-3 and Figure 2).\n\nWeaknesses:\n\n- The paper could benefit from some changes regarding its style, which is atypical and unfortunately hurts clarity quite significantly. Here are just a few examples of things that could be improved. The introduction contains heavy notation that is not necessary at this point. The introduction even contains the proof of the “theoretical result”, as the paper notes on line 188. The paper is also not self-contained, and important concepts are not defined at all, e.g. the sufficiently scattered assumption or the anchor point assumption, the sym, asym or pair noise models, the baselines etc. In addition, the writing is at times too colloquial (e.g. “we creatively propose”, “what’s more” etc). There are also numerous typos and grammar mistakes (line 39, 53 etc).\n\n- The paper does not explain how the authors arrived at the procedure described in section 2.2. (see also the Questions section). There is also very little discussion on the impact of the 3 terms in the loss, or why the loss in equation 5 is particularly well suited for this problem.\n\n- It is not possible to review the soundness of the experimental evaluation since too many details are currently missing from the manuscript. \n - What is the motivation for the method proposed in section 2.2? For instance, why is it necessary to learn the second matrix $T^b$? \n\n- What happens if one only minimizes the second term in equation 5? Some of the experiments imply good performance and it would be interesting to understand what is the reason behind the method’s success in those scenarios. A more thorough ablation study would likely reveal what makes the method work well and what is perhaps unnecessary and can be stripped away from the loss.\nIt is difficult to assess the performance of the method because crucial details about the experimental setup are missing. What are the exact noise models? How many classes are assumed to suffer from label noise in the symmetric/asymetric/pair model? What was the procedure used for hyperparameter tuning for the baselines and for the proposed method?\n\n- The method seems to not perform particularly well on datasets with natural noise. Why are only some and not all baselines presented in tables 4 and 5? What are the confidence intervals for the values in tables 4 and 5? What are possible reasons why the proposed method performs worse on natural noise models compared to the synthetic noise instances? I suggest adding to the experimental comparison more settings with noise models other than the ones considered in Tables 1-3, e.g. random flips for all classes with equal probability, other natural label noise datasets (e.g. OpenImages, MS-COCO with the noisy annotations, the dataset of https://github.com/zhongyy/Unequal-Training-for-Deep-Face-Recognition-with-Long-Tailed-Noisy-Data etc).\n\n- How does a simple baseline that uses early stopping regularization (see e.g. https://arxiv.org/pdf/1910.09055.pdf and http://proceedings.mlr.press/v108/li20j/li20j.pdf) compare to the proposed method?\n It is unclear from the results reported in the tables (in particular Tables 2 and 3) when the proposed method breaks and what exactly are the assumptions that it requires to perform well. 
Instead of selecting the 3 fixed noise rates reported in the paper, I suggest a plot in which the noise rate is varied on the Ox axis, while the Oy axis indicates the test accuracy. See also the discussion regarding other noise models in the Questions section.\n", " This paper proposes the idea of using a forward-backward cycle-consistency regularization to estimate the transition matrix T. This method addresses the long-standing problem that estimating the transition matrix depends on the inaccurately estimated noisy class posterior probability. Theoretical analysis and experimental results both show that the proposed method is superior to the compared methods. In particular, this paper seems to be the first to propose estimating the transition matrix T in a backward manner. Strengths:\n1) Novelty: The main contribution of this work is the idea of estimating the transition matrix T with a forward-backward cycle-consistency regularization, under the sufficiently scattered assumption. Perhaps this is the first work to estimate the transition matrix T bidirectionally.\n2) Writing: The paper looks good and is well-written. The introduction and the algorithm part are easy to follow.\n3) Experiments: The proposed method achieves better results on both the synthetic and real-world datasets, and a detailed ablation study is provided.\n\nWeaknesses:\n1) It needs some model complexity analysis, especially for the training process of the backward and forward transition matrix.\n2) Some implementation details are missing. How do you combine the proposed transition matrix estimation approach with the traditional DivideMix[11] algorithm, as shown in Table 4 and Table 5?\n Besides answering the above listed weaknesses, I am also curious about the following question: Why not directly use T^-1 instead of the learned T^b to optimize the transition matrix? Yes.", " This paper proposes a label-noise learning algorithm that estimates the transition matrix T to build a statistically consistent classifier. Specifically, the proposed algorithm tries to estimate the transition matrix under a forward-backward cycle-consistency regularization, which helps to minimize the volume of the transition matrix indirectly without exploiting the estimated noisy class posterior. Theoretical analysis and experimental results justify the effectiveness of the proposed method. Strengths:\n1)\tThe proposed method is technically sound and novel. It develops an algorithm to estimate the transition matrix under the sufficiently scattered assumption by incorporating the proposed cycle-consistency regularization. This method can reduce the dependency of estimating the transition matrix T on the inaccurately estimated noisy class posterior.\n2)\tDetailed theoretical analysis illustrates that the cycle-consistency regularization helps to minimize the volume of the transition matrix T, which encourages the estimated T to converge to its optimal solution.\n3)\tThe experiments are sufficient. They illustrate the effectiveness of the proposed method in terms of both the classification performance and the T estimation error, on both synthetic and real-world noisy datasets.\nWeaknesses:\n1)\tSince this method needs a forward-backward training process, does it require more time to train the network? 
How many extra parameters are introduced in the newly proposed method compared with the previously proposed method VolMinNet[13]?\n2)\tThis paper states that it tries to estimate the transition matrix under the sufficiently scattered assumption; what's the difference between this assumption and the previous anchor point assumption?\n See the weaknesses above. YES", " This paper focuses on class-dependent noisy labels. To address this, the authors propose a cycle-consistency regularization on the estimation of the transition matrix for learning with class-dependent noisy labels. The proposed method can encourage the estimated transition matrix to converge to its optimal solution without explicitly estimating the noisy class posterior probability. Therefore, it can help to build a better statistically consistent classifier. Experimental results on several datasets show the effectiveness of this method in reducing the estimation error of T and boosting the classification performance. Strengths:\n1. Different from the previous representative work VolMinNet, this paper proposes a new strategy to optimize the transition matrix T. The proposed method successfully addresses the problem of how to reduce the side effects of the inaccurate noisy class posterior on estimating the transition matrix T. \n2. The authors conduct experiments on several datasets to show the effectiveness of the proposed method. Moreover, theoretical analysis is also provided to demonstrate the effectiveness of the algorithm.\n\nWeaknesses:\n1. In the overall objective function shown in Eq. (4), I believe we could also obtain experimental results by just optimizing the backward transition matrix T^b. That is to say, we could just minimize Eq. (3) to obtain one intermediate result. However, this paper does not list this result in the ablation study.\n2. In lines 43-44, why is the anchor point assumption a special case of the sufficiently scattered assumption? \n3. The authors claim that the proposed method can be used as a plug-and-play module and integrated into DivideMix. The authors should clearly define the experiment settings and how they are combined. See the weaknesses above. The authors have discussed the limitations of this paper, and there is no negative societal impact." ]
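As a rough illustration of the forward-backward parameterization discussed throughout these reviews and rebuttals, the following PyTorch sketch keeps two learnable $C\times C$ matrices that are diagonally dominant and column-stochastic, and combines a forward fitting loss with a cycle-consistency term. The dominance construction, the loss forms, and the $\lambda=0.3$ weight are illustrative assumptions, not the paper's exact Eqs. (2)-(5).

```python
import torch
import torch.nn.functional as F

C = 10  # number of classes

# Learnable forward / backward transition matrix logits. Sigmoid off-diagonal
# mass plus a fixed large diagonal keeps each matrix diagonally dominant;
# normalising each column to sum to 1 makes it column-stochastic.
t_logits = torch.zeros(C, C, requires_grad=True)
tb_logits = torch.zeros(C, C, requires_grad=True)

def column_stochastic(logits):
    off = torch.sigmoid(logits) * (1.0 - torch.eye(C))  # off-diagonal entries
    mat = off + torch.eye(C) * C                        # dominant diagonal
    return mat / mat.sum(dim=0, keepdim=True)           # columns sum to 1

def losses(clean_posterior, noisy_labels, lam=0.3):
    """clean_posterior: (N, C) softmax output of the classifier f(x; w).
    Returns a forward fitting loss plus a cycle-consistency regulariser."""
    T = column_stochastic(t_logits)
    Tb = column_stochastic(tb_logits)
    noisy_posterior = clean_posterior @ T.T             # T P(Y|x) per sample
    forward_ce = F.nll_loss(torch.log(noisy_posterior + 1e-8), noisy_labels)
    cycle = clean_posterior @ T.T @ Tb.T                # T^b (T P(Y|x))
    consistency = F.mse_loss(cycle, clean_posterior)
    return forward_ce + lam * consistency

# One illustrative optimisation step on random stand-in data; in real training
# the classifier producing `p` would be optimised jointly.
p = torch.softmax(torch.randn(32, C), dim=1)
y_noisy = torch.randint(0, C, (32,))
opt = torch.optim.SGD([t_logits, tb_logits], lr=1e-2)
opt.zero_grad()
losses(p, y_noisy).backward()
opt.step()
```

The point the sketch makes explicit is the one argued in the rebuttal to "Why not directly use T^-1": because both matrices are re-projected to diagonally dominant column-stochastic form, T^b is a constrained stand-in for the inverse, and the cycle term only exploits the invertible relationship indirectly.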
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 5 ]
[ "cjdR3bgSbpN", "EHO_NmhT1sw", "KlZL5zUjWWb", "2fxE9Lc2-JJ", "GuAwOXlqVcu", "xUf9OzGCPZ1", "fu_ImERHzeN", "mBOiYivWveG", "nips_2022_IvnoGKQuXi", "mBOiYivWveG", "mBOiYivWveG", "yrYAgR20Wj5", "PuN27q2z9E", "c4jY7sh1pan", "mBOiYivWveG", "nips_2022_IvnoGKQuXi", "nips_2022_IvnoGKQuXi", "nips_2022_IvnoGKQuXi", "nips_2022_IvnoGKQuXi" ]
nips_2022_V3kqJWsKRu4
InsPro: Propagating Instance Query and Proposal for Online Video Instance Segmentation
Video instance segmentation (VIS) aims at segmenting and tracking objects in videos. Prior methods typically generate frame-level or clip-level object instances first and then associate them by either additional tracking heads or complex instance matching algorithms. This explicit instance association approach increases system complexity and fails to fully exploit temporal cues in videos. In this paper, we design a simple, fast and yet effective query-based framework for online VIS. Relying on an instance query and proposal propagation mechanism with several specially developed components, this framework can perform accurate instance association implicitly. Specifically, we generate frame-level object instances based on a set of instance query-proposal pairs propagated from previous frames. Each instance query-proposal pair is learned to bind with one specific object across frames through carefully developed strategies. When using such a pair to predict an object instance on the current frame, not only is the generated instance automatically associated with its precursors in previous frames, but the model also gets a good prior for predicting the same object. In this way, we naturally achieve implicit instance association in parallel with segmentation and elegantly take advantage of temporal cues in videos. To show the effectiveness of our method InsPro, we evaluate it on two popular VIS benchmarks, i.e., YouTube-VIS 2019 and YouTube-VIS 2021. Without bells and whistles, our InsPro with a ResNet-50 backbone achieves 43.2 AP and 37.6 AP on these two benchmarks respectively, outperforming all other online VIS methods. Code is available at https://github.com/hf1995/InsPro.
Accept
The paper discusses a method for online video instance segmentation. Reviewers appreciated the proposed method but raised concerns regarding differences between the reported results and those of other papers, the method's similarity to prior work, and limited novelty. The rebuttal addressed most of the concerns, prompting reviewers to increase their ratings to an accept recommendation. The AC does not see a reason to overturn a unanimous reviewer recommendation.
val
[ "xaXCO9ap49N", "ZoyjWbtIIJB", "1sghoR7OYE", "Dn6jfb4sIeW", "_RYCw6tA4DD", "GrzJW7Opha", "4iLy3clB0Q3", "6HqW7pJpR-", "6HxE3UIM0gn", "w4T1rKUlF3n", "YIPUF-AjFZ", "fN1LZg_q6gK", "KKJPQeY5aE", "en4ag8a2oYD", "_dfeZn1T65", "e-w-M5WT3Hf", "B1zWW1R5IvR", "3L6ZJA3vGKb", "NFJ7wLIF63t", "i3g4aiT5etZ", "bkTkc0HIqOk", "iBpk8uQqq8E" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Wow! It is very glad to hear that we address most of your concerns. Thank you very much for this kind rating upgrade. We are really delighted to hear this good news. Have a nice one!", " Thanks for your response and I'm feeling sorry for the delayed reply. The revised version of InsPro covers most of my concerns. The latest revised version is good to me and I decide to upgrade my rating. ", " Many thanks for raising the rating, which is really inspiring and joyful!", " The author has addressed my concerns. The rating is updated to weak accept.", " Thanks for your review, which makes our paper more clear and sound. We hope that we have addressed all your concerns. \n\nAs it is the last day of the Reviewer-Author Discussion session, if you have other concerns, please don't hesitate to let us know. ", " It is glad to know that part of your concerns is addressed. \n\nThank you for your careful review, which helps improve our paper greatly, and this kind rating upgrade. It is really encouraging and delightful! If you have any other questions, please let us know.", " Thanks for your review, which helps us clarify our contributions and performance details. Specifically, we add more detailed comparisons between our work and TrackFormer and EfficientVIS to the related work and experiment sections to elucidate our contributions. We also add a table in the supplementary material to display the performance differences between the our base model and QueryInst in terms of COCO instance segmentation.\n\nAs it is the last day of the Reviewer-Author Discussion session, we would like to know if we have solved your concerns and if there are any other concerns we can address for you. Please feel free to let us know.", " Thanks for your review, which helps us spot and resolve the ambiguities. We hope that we have addressed your concerns in this Rebuttal. \n\nAs it is the last day of the Reviewer-Author Discussion session, if you have any other questions, please feel free to let us know.", " Thanks for the updated content. Part of my concern on the paper novelty in object query propagation, and differences to existing methods has been addressed. Thus I decide to raise my score to borderline accept.", " Hi all, we have completed and uploaded a revised version of our paper according to your helpful comments. We also updated the supplementary material accordingly. Please check them out. We managed to include most of the responses to your concerns in the revision (most of them can be found in the colored text of the paper). However, due to the 9-page limit of the revised paper, we couldn’t elaborate on every response. We will add more detailed responses in the camera-ready version (camera-ready version can have 10 pages) if our paper is accepted. Thanks.", " Thanks for the suggestion. \nWe have started revising the paper. Hopefully, we can complete a draft revision within this discussion period.", " Thanks for the response. The authors promise revisions for solving the neglected discussion/comparison with related works in query-based VIS and temporal object propagation. Also, authors provides long answer in the 'common concern' with TransTrack, TrackFormer, MOTR, EfficientVIS. This is not a vey small modification but critical to the paper. \n\nI would appreciate a revised version to help my final rating decision. Only by clearly discussing the differences with related works can the paper novelty/advantage be revealed. ", " We thank the reviewer for appreciating our work and giving helpful advice. 
In what follows, we deal with the concerns. \n#### __Weaknesses__: \n__Q1__: The author may claim the relation and difference between InsPro and TrackFormer. \\\n__A1__: Please refer to the __Common Concern__ (https://openreview.net/forum?id=V3kqJWsKRu4&noteId=B1zWW1R5IvR). \n\n__Q2__: The author may clarify the setting or baseline for these results in ablation studies. \\\n__A2__: The table below lists the experiment settings used for the ablation studies in Table 2. __Table 2(a)__ shows how the proposed instance query-proposal propagation and temporally consistent matching contribute to the performance. __Table 2(a) C__ represents the basic version of InsPro, which achieves an AP of 37.4. __Table 2(c)__ displays the effectiveness of the proposed box deduplication loss (BDL). Equipping the basic InsPro with BDL, the AP score is improved from 37.4 to 38.4. __Table 2(b)__ shows the effectiveness of the intra-query attention. After adding the intra-query attention to the w/BDL method in Table 2(c), the AP score is further improved from 38.4 to 40.2, and we use 40.2 AP as the final InsPro performance for comparison in Table 1(a). \nBesides, __Table 2(d)__ shows the superiority of our proposed temporal propagation strategy over the commonly used 'track-by-detect' paradigm (37.4 AP vs 31.5 AP). To ensure a fair comparison, we restrict the only independent variable in this experiment to be the object tracking method and more details can be found in Sec. 4.4. \nThank you for pointing out this confusion. We will correct it in our new version.\n\nMethods | experimental setting | AP | AP50 | AP75 |\n--- |:----------------------------------------------------:|:--------:|:--------:|:--------:|\n__Tab.2(a) A__ | image instance segmentation baseline | 24.0 | 41.3 | 24.2 |\n__Tab.2(a) B__ | __Tab.2(a) A__ + instance query-proposal propagation | 36.3 | 56.3 | 38.9 |\n__Tab.2(a) C__ | __Tab.2(a) B__ + temporally consistent matching | __37.4__ | 57.6 | 41.1 |\n__Tab.2(c) w/ BDL__ | __Tab.2(a) C__ + box deduplication loss (BDL) | 38.4 | 57.7 | 41.6 |\n__Tab.2(b) T=18__ | __Tab.2(c) w/ BDL__ + intra-query attention | __40.2__ | __62.9__ | __43.1__ |\n__Tab.2(d) Track-by-detect__ | __Tab.2(a) A__ + explicit tracking | 31.5 | 49.3 | 34.1 |\n\n__Q3__: It would be better to have an analysis of the variance of InsPro-lite, e.g. the effects of key frame numbers. \\\n__A3__: Thank you for this suggestion. The table below lists the performances of InsPro-lite using different key frame intervals (GPU: 1 RTX 2080Ti). When k=1, it represents the original InsPro model. In addition, we provide more details about InsPro-lite in Sec. A.1 of the supplementary material.\n\nk | 1 | 5 | 10 | 15 |\n--- |:----:|:----:|:----:|:----:|\nAP | 40.2 | 39.4 | 38.7 | 37.5 |\nFPS | 26.3 | 41.8 | 45.7 | 49.1 |", " Thanks the reviewer for approving of the motivation and performance of our work, and giving useful advice. We address the concerns as follows. \n#### __Weaknesses__:\n__Q1__: Missing discussion of related query-based VIS methods and clip-level VIS methods. \\\n__A1__: We appreciate the reviewer's extensive knowledge of VIS. We will add more discussion of IFC and VisTR in the Query-based methods subsection, and include SeqFormer and Mask2Former in this part too. \n\n__Q2__: The tech differences comparison to PCAN. \\\n__A2__: Thanks for this suggestion. We will add the following discussion of PCAN and performance comparison to our revised version. 
\nPCAN proposes frame- and instance-level prototypical cross-attention modules to leverage rich spatio-temporal information distilled from previous frames to facilitate better segmentation. This generates augmented features for the model to produce better object instances in the current frame, which shares similar benefits with our intra-query attention module. However, PCAN adopts an additional explicit tracking head to complete object association, while our InsPro performs the association implicitly through an instance query and proposal propagation mechanism, which is simpler.\nAs for the performance comparison, under similar experiment settings, PCAN achieves 36.1 AP on the YouTube VIS 2019 validation set, while our InsPro reaches 40.2 AP. \n\n__Q3__: The novelty of InsPro and the discussion among InsPro, TransTrack, and MOTR. \\\n__A3__: Please refer to the __Common Concern__ (https://openreview.net/forum?id=V3kqJWsKRu4&noteId=B1zWW1R5IvR).\n\n__Q4__: What are the typical failure cases of the methods? \\\n__A4__: From our observation, InsPro may fail to segment and track some tiny objects, since we do not make special designs for handling tiny objects.\n\n__Q5__: How to handle new objects with similar appearance? \\\n__A5__: This is a good question. In our system, we propagate not only object queries but also their corresponding proposals. Since those proposals encode the positional priors of the tracked objects, when using such a query-proposal pair to predict an object, it is easy for the network to distinguish objects of similar appearance. Please refer to Figure 3 and Figure 5 in the supplementary material for some examples.\n\n__Q6__: How to handle/correct the accumulated error during the propagation process, for example, when an instance query is wrongly matched at an early inference stage? \\\n__A6__: We guess the reviewer is concerned that an instance query may be matched to a different object in the next frame and this error will be propagated to further frames. If so, we would like to explain that, since we do not perform explicit matching, such an error would not be propagated to further frames, and because we do instance segmentation on each frame based on the queries enhanced with their past ones by intra-query attention, this error can probably be corrected in the next frame. \n\n#### __Limitations__:\n__Q__: There is no limitation or potential negative societal impact discussion in the paper. \\\n__A__: Thanks for this reminder. We have discussed the broader impact and future works in Sec. A.6 of the supplementary material. We will add more limitation and societal impact discussions in the revised version. 
\\\n__A__: The table below lists the performance comparison between our InsPro and QueryInst on the COCO instance segmentation validation set. Except that they have different segmentation head structures, both methods adopt the same ResNet-50 backbone, 100 queries, a training time of 36 epochs, and the same data augmentation. The number of FLOPs is tested with an input image resolution of 640 x 360. It can be seen that our base model actually performs slightly worse than QueryInst. \n\nMethod | AP | AP50 | AP75 | APs | APm | APl | Param (M) | FLOPs (G) | \n--- |:----:|:----:|:----:|:----:|:----:|:----:|:---------:|:---------:|\nQueryInst-R50-100-query-3x | 39.8 | 61.8 | 43.1 | 21.3 | 42.7 | 58.3 | 170.8 | 95.7 |\n__InsPro-R50-100-query-3x__ | 39.4 | 61.8 | 41.9 | 19.7 | 42.9 | 59.3 | 106.1 | 45.5 | ", " To show the superiority of our method, we conduct more experiments. \n\nWe select the most recent MOTR as a representative method. For a fair comparison, we implement the core query propagation module used in MOTR on the same baseline as ours (refer to MOTR's code https://github.com/megvii-research/MOTR). We follow MOTR exactly in setting up the other model and experiment settings. To exclude the influence of other factors, we do not use temporal feature aggregation in either method. The tables below show the comparisons on YouTube-VIS 2019 and ImageNet VID. It can be seen that our InsPro outperforms MOTR on both benchmarks.\n\nmethod | AP | AP50 | AP75 |\n--- |:----:|:----:|:----:|\nMOTR-YTVIS19 | 37.4 | 56.9 | 40.3 |\nInsPro-YTVIS19 | 38.4 | 57.7 | 41.6 |\n\nmethod | AP | AP50 | AP75 |\n--- |:----:|:----:|:----:|\nMOTR-VID | 38.0 | 57.1 | 39.7 |\nInsPro-VID | 39.5 | 57.2 | 42.7 |\n\nFor further comparison, we also implement an online version of EfficientVIS by simply setting the clip length T=1. For a fair comparison, we do not use intra-query attention in our InsPro. \nThe tables below list the comparison results on YouTube-VIS 2019 and ImageNet VID, respectively. Our InsPro surpasses online EfficientVIS by a large margin.\n\nmethod | AP | AP50 | AP75 |\n--- |:----:|:----:|:----:|\nonline EfficientVIS-YTVIS19 | 36.6 | 55.5 | 40.3 |\nInsPro-YTVIS19 | 38.4 | 57.7 | 41.6 |\n\nmethod | AP | AP50 | AP75 |\n--- |:----:|:----:|:----:|\nonline EfficientVIS-VID | 33.2 | 48.8 | 35.5 |\nInsPro-VID | 39.5 | 57.2 | 42.7 |", " __Q__: Relation and difference between InsPro and other query propagation methods (TransTrack, TrackFormer, MOTR, EfficientVIS). \\\n__A__: We first discuss the relation and difference between our InsPro and other query propagation methods to clarify the contributions of our work. As reviewers mentioned, associating objects across frames through a query propagation mechanism has been recently explored in several works, such as TransTrack (arXiv2012.15460), TrackFormer (CVPR2022), MOTR (ECCV2022) and EfficientVIS (CVPR2022). This shows the effectiveness and potential of such a new object linking approach. On the other hand, it indicates that there are still many challenges to be solved to make this approach work well. As we see it, these challenges include: 1) steadily binding one evolving object query or object query-proposal pair to one specific object across frames, 2) accurately detecting and tracking new objects, 3) effectively suppressing duplicate detections or tracklets, and 4) elegantly handling tough scenarios like occlusion. 
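Before elaborating on these four aspects, a minimal sketch of the 'track-by-query_propagation' inference loop under discussion may help make the mechanism concrete (Python-style pseudocode; `model`, its attributes, and all other names are illustrative assumptions, not the actual InsPro implementation):

```python
import torch

@torch.no_grad()
def track_by_query_propagation(model, frames):
    # Learned initial query-proposal pairs are used for the first frame.
    queries = model.init_queries.clone()      # (N, C) content vectors
    proposals = model.init_proposals.clone()  # (N, 4) boxes
    results = []
    for frame in frames:
        feats = model.backbone(frame)
        # Each query-proposal pair predicts one instance in this frame.
        # The i-th slot is trained (via temporally consistent matching)
        # to stay bound to the same object, so identity == slot index
        # and no explicit box/embedding matching is run at inference.
        queries, proposals, masks, scores = model.head(feats, queries, proposals)
        results.append({"masks": masks, "scores": scores})
        # ALL updated pairs are propagated to the next frame; low-scoring
        # slots stay alive as candidates for newly appearing objects.
    return results
```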
In what follows, we elaborate on the similarities and differences between our InsPro and existing query propagation methods in terms of these four aspects.\n\n1) Stably binding query and object is the key to the success of the 'track-by-query_propagation' mechanism. Our InsPro and other query propagation methods share a similar inter-frame query-object binding mechanism, which is realized by a temporally consistent groundtruth-prediction matching in the training process. Here we also want to point out that TransTrack is basically a 'track-by-detect' method, although it propagates object queries. This is because it still needs to explicitly match detection boxes to tracking boxes in each frame, and thus it is unable to perform object association implicitly.\n\n2) Detecting new objects is a big challenge for query-propagation-based tracking methods. \nTransTrack uses a complete new object query set to detect seen and new objects, which is redundant since seen objects can be detected by previous queries. Instead, TrackFormer uses a track query subset selected from the previous frame to detect seen objects and a new object query subset to predict new objects in the current frame. MOTR takes a similar approach to detecting new objects. However, these approaches rely on heuristic rules to build the track query subset, which is not elegant and may harm performance. For example, some track queries with low prediction scores in the previous frame would be removed and not passed to the current frame. However, these track queries may represent objects with heavy occlusion, whose trajectories would break due to the removal. As for EfficientVIS, it does not consider this new-object detection problem, and its performance will probably be greatly impacted if there are new objects in the next clip.\nIn contrast, our InsPro simply propagates all object queries produced in the previous frame to the current frame, which is much simpler and more elegant. Thanks to our proposed Box Deduplication Loss, those unmatched queries that are filtered out in the aforementioned methods are pushed away from matched queries and serve as candidate queries in our method, which can be used to detect new objects (see Figure 5 in the supplementary). \n\n3) Duplicate detections or tracklets are a common problem in query-propagation-based tracking methods. TransTrack relies on a high score threshold to keep fewer track queries to alleviate this problem. Similarly, TrackFormer employs NMS to remove duplicate predictions. MOTR builds a temporal aggregation network to learn more discriminative features to address this problem, while EfficientVIS does not discuss this problem.\nBy contrast, we design a Box Deduplication Loss to suppress duplicates and an intra-query attention module to enhance queries with their predecessors. Our solution avoids heuristic rules and post-processing steps, and is more effective according to the experimental results (please refer to the __Experimental Results of Common Concern__ below). \n\n4) Tough scenarios for tracking include occlusion, motion blur, etc. TransTrack, TrackFormer and MOTR use heuristically selected track queries to associate objects across frames. This may miss objects with heavy occlusion. In comparison, our InsPro keeps all object queries and does not have this concern. Furthermore, we enhance current object queries with their historical ones, which can significantly improve performance in tough scenarios.\n\nOverall, we argue that details make the difference. 
Although those existing methods also take a query propagation approach to object instance association, our method does better at the details. Through analyses and experiments (please refer to the __Experimental Results of Common Concern__ below), we have shown that our method is simpler, more elegant and more effective, and we believe it can provide value to the community. \n", " We thank the reviewer for the positive feedback and helpful comments. We address the questions below.\n#### __Questions__:\n__Q1__: Why are some numbers in Table 1 (a) different from those in other papers? \\\n__A1__: This is because the experimental results of VisTR used in our paper are taken from arXiv:2011.14503v2, and IFC's from arXiv:2106.03299v1, which seem outdated. Thanks for the reviewer's reminder. We have found that both VisTR and IFC have updated their experimental results in their camera-ready versions, which are still inferior to ours. We will check and update the experimental results of all methods in Table 1 in the revised version.\n\n__Q2__: For a fair comparison, the author should also provide the model parameter size after the replacement. \\\n__A2__: Thanks for this suggestion. The parameter size and FLOPs comparisons are shown in the table below. Both methods use ResNet-50 as the backbone and have an input resolution of 640 x 360. \\\nAs shown in the table, our InsPro surpasses the 'track-by-detect' model by a large margin even though InsPro has fewer parameters and FLOPs. We think this is because the 'track-by-detect' model requires an extra tracking head for explicit instance association, which increases the model size.\n\nMethod | AP | AP50 | AP75 | FPS | Param (M) | FLOPs (G)|\n--- |:--------:|:--------:|:--------:|:--------:|:---------:|:--------:|\nTrack-by-detect | 31.5 | 49.3 | 34.1 | 25.4 | 119.9 | 48.3 |\nOurs | __37.4__ | __57.6__ | __41.1__ | __26.3__ | __106.1__ | __45.5__| \n\n", " The paper proposes a framework for the video instance segmentation task. In order to avoid the explicit instance association that is responsible for increased computation overhead, the authors designed an instance query and proposal propagation mechanism.\nThe framework consists of an intra-query attention module, temporally consistent matching, and an additional loss for removing duplicated predictions. [Strengths]\n\n1. The proposed method is simple and reasonable.\n2. The paper is clearly presented and well-organized.\n3. The proposed method outperforms all baselines including current state-of-the-art methods.\n\n[Weaknesses]\nPlease refer to Questions. 1. I notice that some of the numbers in Table 1a are different from those in other papers.\n\nFor example, according to Table 2 in the IFC [10] paper, the numbers of VisTR and IFC are:\n\nName | AP | AP50 | AP75 | AR1 | AR10\n\nVisTR | 35.6 | 56.8 | 37.0 | 35.2 | 40.2\nIFC | 41.2 | 65.1 | 44.6 | 42.3 | 49.6\n\nAlthough that does not necessarily raise a concern about the result, I wonder what could cause the difference.\n\n\n2. In L301-313, the author argues the performance difference results from temporal propagation, yet it could simply be caused by the model size difference. For a fair comparison, the author should also provide the model parameter size after the replacement. Please refer to Questions.", " This paper presents InsPro, a new method for online video instance segmentation by propagating instance queries across frames. 
The proposed InsPro, built on Sparse R-CNN, defines a fixed set of instance queries and proposals to recognize and segment objects, and then propagates the updated queries and proposals to the next frame. A temporal memory bank along with an intra-query attention is adopted to augment the instance queries. The proposed method InsPro achieves good results on YouTube VIS-2019 and YouTube VIS-2021 in terms of online video instance segmentation. ### Strength\n\n- The proposed method based on Sparse R-CNN adopts query-proposal propagation across frames for video instance segmentation and a temporal memory bank to augment queries with past frames.\n- The proposed method is simple and effective, and easy to follow.\n- This paper proposes an effective box de-duplication loss to further remove the duplicates.\n- The overall performance of the proposed InsPro is good and experiments are abundant.\n\n### Weakness\n\n- The core idea of InsPro about using query propagation across frames has been explored in several works [1][2]. TrackFormer [1] and EfficientVIS [2] adopt a similar way to propagate queries across frames and can also be reshaped into an online method. I'm concerned about the novelty of the proposed InsPro and the detailed comparisons among InsPro, TrackFormer, and EfficientVIS.\n- Line269, Sec 4.3: As far as I know, IFC can run near-offline (T=2) inference, see [3].\n\n\n[1] Meinhardt et al. TrackFormer: Multi-Object Tracking with Transformers. \n[2] Wu et al. Efficient Video Instance Segmentation via Tracklet Query and Proposal. \n[3] Hwang et al. Video Instance Segmentation using Inter-Frame Communication Transformers. 1. The proposed method is built based on Sparse R-CNN but applies dynamic/conditional convolution to generate instance masks. I'm concerned about the pretraining performance on COCO. QueryInst [1] also extends Sparse R-CNN for (video) instance segmentation; how about the performance difference between the proposed InsPro and QueryInst in terms of COCO instance segmentation?\n\n [1] Fang et al. Instances as Queries.\n no", " The paper proposes InsPro for online video instance segmentation. Instance queries are propagated and updated from the previous frames to the current frame with implicit object association. Intra-query attention, temporally consistent matching, and a box deduplication loss\nare also proposed. Experiments are conducted on YouTube-VIS 2019 and 2021.\n **Strengths**:\n\n1. The paper has a good motivation to solve the online VIS problem with implicit query matching. The proposed method achieves both good evaluation performance and inference speed.\n\n2. The paper shows the effect of the proposed query/proposal propagation, intra-query attention and BDL with adequate ablation experiments. \n\n3. The paper is organized well with good writing and figures.\n\n**Weakness**:\n\n1. Missing discussion of related works in query-based VIS methods. In the paragraph for Query-based Methods, both IFC [a] and VisTR [b] are neglected. Also, SeqFormer [c] and Mask2Former [d] are query-based VIS methods; although they are arXiv preprints, they are worth mentioning.\n\n2. Missing related works on clip-level VIS methods, where the online VIS method PCAN [e] also proposes online object feature propagation and updating for the target tracklet. The tech differences compared to [e] should be discussed in the related work section, as well as a table results comparison on YouTube-VIS. \n\n[a] Video instance segmentation using inter-frame communication transformers. 
NeurIPS, 2021.\n\n[b] End-to-end video instance segmentation with transformers. CVPR, 2021.\n\n[c] SeqFormer: a Frustratingly Simple Model for Video Instance Segmentation. arXiv:2112.08275\n\n[d] Mask2former for video instance segmentation. arXiv preprint arXiv:2112.10764 (2021)\n\n[e] Prototypical Cross-Attention Networks for Multiple Object Tracking and Segmentation. NeurIPS, 2021.\n\n\n3. The tech contribution of InsPro is limited. The idea of query propagation has been adopted in both TransTrack and MOTR. The frame-level InsPro is a straightforward combination of Sparse R-CNN (for query-based object detection) and CondInst (providing the mask head). Then, the frame-level InsPro is further extended to video by instance query propagation during online inference.\n\n4. What are the typical failure cases of the methods? How to handle new objects with similar appearance? How to handle/correct the accumulated error during the propagation process, for example, when an instance query is wrongly matched at an early inference stage?\n I will consider raising my rating if the concerns in the weakness part can be well addressed. The paper neglects discussion/comparison with related works in query-based VIS and temporal object propagation. The key idea of query propagation has been explored in TransTrack, which influences the paper's novelty a lot. There is no limitation or potential negative societal impact discussion in the paper.", " This paper proposes a query-based framework for online video instance segmentation, which designs an instance query and proposal propagation mechanism to perform instance association implicitly. With such a mechanism, they achieve implicit instance association in parallel with segmentation and elegantly take advantage of temporal clues in videos. Experiments show the effectiveness of this method. Strengths:\n1) This paper proposes a novel query&proposal propagation mechanism for the video instance segmentation task, which is proven effective.\n2) This paper achieves satisfactory results among the online VIS methods.\n3) This paper is well organized.\n\nWeakness:\n1) The key idea of query&proposal propagation in this paper is similar to the method proposed in TrackFormer [28] (CVPR 2022), thus the author may claim the relation and difference between them. \n2) The baseline of the ablation study in Tab. 2 (a)(c)(d) is confusing. The AP score of 37.4 in these tables is mismatched with the AP score of 40.2 in Tab. 1 and Tab. 2(b), thus the author may clarify the setting or baseline for these results. \n3) It would be better to have an analysis of the variance of InsPro-lite, e.g. the effects of key frame numbers. \n As described in Weaknesses. No serious negative societal impact of this work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 5 ]
[ "ZoyjWbtIIJB", "4iLy3clB0Q3", "Dn6jfb4sIeW", "6HqW7pJpR-", "KKJPQeY5aE", "6HxE3UIM0gn", "_dfeZn1T65", "3L6ZJA3vGKb", "YIPUF-AjFZ", "nips_2022_V3kqJWsKRu4", "fN1LZg_q6gK", "en4ag8a2oYD", "iBpk8uQqq8E", "bkTkc0HIqOk", "i3g4aiT5etZ", "B1zWW1R5IvR", "nips_2022_V3kqJWsKRu4", "NFJ7wLIF63t", "nips_2022_V3kqJWsKRu4", "nips_2022_V3kqJWsKRu4", "nips_2022_V3kqJWsKRu4", "nips_2022_V3kqJWsKRu4" ]
nips_2022_aGFQDrNb-KO
Multi-dataset Training of Transformers for Robust Action Recognition
We study the task of learning robust feature representations, aiming to generalize well on multiple datasets for action recognition. We build our method on Transformers for their efficacy. Although we have witnessed great progress in video action recognition in the past decade, it remains a challenging yet valuable problem to train a single model that can perform well across multiple datasets. Here, we propose a novel multi-dataset training paradigm, MultiTrain, with the design of two new loss terms, namely informative loss and projection loss, aiming to learn robust representations for action recognition. In particular, the informative loss maximizes the expressiveness of the feature embedding while the projection loss for each dataset mines the intrinsic relations between classes across datasets. We verify the effectiveness of our method on five challenging datasets: Kinetics-400, Kinetics-700, Moments-in-Time, ActivityNet and Something-Something-v2. Extensive experimental results show that our method can consistently improve state-of-the-art performance. Code and models are released.
Accept
The paper proposes a co-training method for video representation learning, by training video transformers on multiple video datasets. The paper proposes two novel loss terms: informative loss and projection loss. The informative loss encourages the variance of each dimension in the embedding to be large. The projection loss maps predictions from other datasets to the current dataset, to learn the label relation across datasets by using ground-truth action labels to compute a standard cross-entropy loss. Based on the feedback provided by the reviewers, we recommend this paper for publication at NeurIPS 2022. The reviewers had some concerns about the paper. Reviewer YQNQ had concerns that the design of the projection loss and the informative loss did not consider the temporal dynamics, and that the work does not compare with multi-domain methods. Reviewer iVPd recommended considering tasks like detection, segmentation, etc., and discussing these methods in the related work for broader scope. Reviewer U8Wa mentioned that the experimental findings in this paper are quite different from the findings in CoVER, but no explanation is provided. We thank the authors for addressing the comments of the reviewers during the author feedback period. The authors seem to have addressed some of the concerns/feedback from the reviewers with detailed discussions -- it would be good to include these discussions, as much as possible, in the updated paper or supplemental materials.
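To make the two loss terms described above concrete, here is a minimal sketch assuming a VICReg-style hinged variance term and a linear cross-head projection; the paper's exact formulation may differ, and all names below are hypothetical:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def informative_loss(z: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    # Encourage the std of every embedding dimension (over the batch)
    # to be large, hinged at 1 -- one plausible form of a variance term.
    std = torch.sqrt(z.var(dim=0) + eps)
    return F.relu(1.0 - std).mean()

class ProjectionLoss(nn.Module):
    # Map logits predicted by another dataset's head onto the current
    # dataset's label space, then supervise with the usual ground-truth
    # cross-entropy; the learned weights encode cross-dataset relations.
    def __init__(self, num_classes_other: int, num_classes_current: int):
        super().__init__()
        self.proj = nn.Linear(num_classes_other, num_classes_current)

    def forward(self, logits_other: torch.Tensor, labels_current: torch.Tensor) -> torch.Tensor:
        return F.cross_entropy(self.proj(logits_other), labels_current)
```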
train
[ "u8znSkxkW3M", "K5XPJiaOGGQ", "iOuevRk4Ecq", "GfuPRDEfzCr", "xSBxokVttq6", "fBFi2XNKUa_", "jOkuFm850-z", "OFMh8_NpsF_", "a7SyyN7UZKN" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\n Thank you for the clarification. My concerns have been addressed and I don't have further questions.", " Dear Reviewer iVPd, \nThank you very much again for the time and effort put into reviewing our paper. We believe that we have addressed all your concerns in our response. We have also followed your suggestion to improve our paper and have added additional experimental analysis. We kindly remind you that we are approaching the end of the discussion period. We would love to know if there is any further concern, additional experiments, suggestions, or feedback, as we hope to have a chance to reply before the discussion phase ends.", " As suggested, we analyze the cross-dataset projection weights of the K400/312p model in Table 1, listed top 5 concepts for each pair of datasets below.\nWe make two observations:\n1. The top projections are visually similar actions, which confirms our intuition that there are intrinsic relations in the datasets that the model can mine to improve performance. For example, “bending metal” in K400 and “bending” in MIT, “parkour” in K400 and “Capoeira” in Activitynet. Interestingly, “Wiping something off of something” in SSv2 and “cleaning windows” in K400.\n2. The action with the same name may not have the highest weights. In “mit to kinetics”, the “sneezing” action ranks 5th in the projection weights, suggesting that there might be discrepancies of the same concept in different datasets.\n\nThese observations are interesting and one may compare the learned weights with textual semantic relations (like those in ConceptNet). We leave this to future work. We will add this analysis to the revised version.\n\nkinetics to mit\n[('bending metal', 'bending', 0.19999029), ('riding elephant', 'skating', 0.19910285), ('pushing wheelchair', 'swinging', 0.19789538), ('tossing coin', 'tattooing', 0.1961417), ('cleaning toilet', 'plunging', 0.19546732)]\n----------------------------------------\nkinetics to ssv2\n[('playing volleyball', 'Covering something with something', 0.1490408), ('washing hair', 'Moving something and something closer to each other', 0.14768724), ('using computer', 'Pushing something so it spins', 0.1392663), ('faceplanting', 'Picking something up', 0.13809656), ('jogging', 'Lifting something with something on it', 0.13718487)]\n----------------------------------------\nkinetics to activitynet\n[('parkour', 'Capoeira', 0.1922843), ('running on treadmill', 'Walking the dog', 0.17684066), ('whistling', 'Snowboarding', 0.17329654), ('water sliding', 'Kayaking', 0.16401285), ('riding mule', 'Canoeing', 0.16242917)]\n----------------------------------------\nmit to kinetics\n[('gambling', 'bookbinding', 0.1396648), ('autographing', 'eating ice cream', 0.13093388), ('tearing', 'ripping paper', 0.12916225), ('hitchhiking', 'throwing ball', 0.12469659), ('sneezing', 'sneezing', 0.12128164)]\n----------------------------------------\nmit to ssv2\n[('drying', 'Pretending to close something without actually closing it', 0.11436209), ('twisting', 'Lifting up one end of something without letting it drop down', 0.11326913), ('fueling', 'Showing something behind something', 0.11158158), ('trimming', 'Putting something on the edge of something so it is not supported and falls down', 0.10773694), ('hitchhiking', 'Scooping something up with something', 0.10572156)]\n----------------------------------------\nmit to activitynet\n[('fueling', 'Baton twirling', 0.12002337), ('pitching', 'Baton twirling', 0.11702621), ('snapping', 'Using parallel bars', 
0.11015125), ('saluting', 'Baton twirling', 0.10861136), ('frying', 'Waxing skis', 0.10808712)]\n----------------------------------------\nssv2 to kinetics\n[('Wiping something off of something', 'cleaning windows', 0.13989474), ('Throwing something in the air and catching it', 'dining', 0.13834158), ('Pretending or trying and failing to twist something', 'playing poker', 0.13351966), ('Turning the camera upwards while filming something', 'playing poker', 0.13347316), ('Pulling two ends of something so that it separates into two pieces', 'dining', 0.13303888)]\n----------------------------------------\nssv2 to mit\n[('Throwing something in the air and catching it', 'hunting', 0.1650513), ('Turning the camera downwards while filming something', 'flowing', 0.16386871), ('Dropping something into something', 'kneeling', 0.16295567), ('Pulling two ends of something so that it gets stretched', 'pulling', 0.16081172), ('Turning the camera upwards while filming something', 'planting', 0.15848677)]\n----------------------------------------\nssv2 to activitynet\n[('Turning the camera downwards while filming something', 'Applying sunscreen', 0.12933515), ('Approaching something with your camera', 'Mooping floor', 0.12181736), ('Putting something upright on the table', 'Mooping floor', 0.11953914), ('Turning the camera upwards while filming something', 'Doing fencing', 0.118271396), ('Showing a photo of something to the camera', 'Playing flauta', 0.11793875)]\n\n(omitting activitynet’s due to space limit)", " Thank you for your comments and questions. \n\n**Q1: Why does removing the informative loss lead to a complete failure? Would you please provide more insights or analysis?**\n\nThanks for pointing this out. We further investigated why \"-informative Loss\" completely fails while \"Vanilla\" seems to work by running an experiment called \"-informative Loss -projection add\", which means we remove the projected-logits addition in Eq. 5 and directly compute a classification loss on the projected logits. Therefore, we can consider this run as adding additional projection branches to the vanilla architecture. The results are slightly better than \"Vanilla\" on K400 and much better on MiT/SSv2. See Table 1* below. It indicates that adding projected logits to the original branch without the informative loss would prevent the model from converging (the total loss does not go down). We will add this experiment and analysis to the paper.\n\nTable 1*:\n\n| Method | K400 | MiT | SSv2 | ActNet|\n|--------|-------|-------|-------|-------|\n| Vanilla (50ep) | 80.1 | 33.4 | 60.8 | 86.5|\n| Vanilla (200ep) | 80.6 | 35.1 | 56.8 | 86.3|\n| -Informative Loss -projection add | 80.4 | 38.7 | 62.6 | 86.5|\n\n**Q2: Vanilla joint-training compared to CoVER**\n\nThanks for the comment. We believe the failed improvement of the vanilla models is most likely due to the lack of ImageNet pretraining and a smaller model. As we have observed and stated (L270-L272), vanilla training of our model is unstable. In CoVER, a much larger backbone (121.4M parameters vs. our 51.2M parameters), higher-resolution inputs (448x448 vs. our 224x224) and ImageNet-21K pretraining are used across all experiments. In Table 1*, the \"-Informative Loss -projection add\" variant has more parameters than vanilla and the performance is improved. 
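For clarity, the two ways of using the projected logits discussed in Q1 above can be sketched as follows (hypothetical names, not the paper's actual code; `head_k` is dataset k's classifier and `proj_k` projects the other heads' logits into dataset k's label space):

```python
def logits_full_model(feat, head_k, proj_k, logits_other):
    # Full model (Eq. 5-style): projected logits are ADDED to the
    # current head's logits before the classification loss.
    return head_k(feat) + proj_k(logits_other)

def logits_projection_branch(feat, head_k, proj_k, logits_other):
    # "-Informative Loss -projection add": the projection acts as a
    # separate branch with its own cross-entropy loss, i.e. no
    # addition into the original branch.
    return head_k(feat), proj_k(logits_other)
```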
We agree that it is important to see results with ImageNet pretraining and we are doing so in Table 2*.\n\n**Q3: Lack of ImageNet Pretraining.**\n\nAs stated in the response to Reviewer iVPd, training from scratch is about twice as fast as the training recipe with ImageNet-21K pretraining. We did not conduct ImageNet pretraining due to limited resources (and the MViTv2 authors had not released code or pretrained models at the time of our experiments). We have downloaded ImageNet-21K-P [7*] and conducted pretraining experiments on K400 with a smaller 16x4 model (we used the 32x3 model in the paper), as shown in Table 2*. We are still running the CrossRoad method with K400-multi-dataset, but it may take about 10 days to finish (due to a significant budget cut in GPU spending, we are running with 4xV100 GPUs whereas before we had 128). As we can see, ImageNet pretraining does lead to a significant improvement, and we expect to see a further boost with our CrossRoad method. We will add the experiments with 32x3 models in the revised version.\n\nTable 2*: ImageNet pretraining\n\n| Model | K400 |\n|--------|-------|\n| MViTv2 16x4 (from scratch 200 ep) | 78.8 / 93.5 |\n| MViTv2 16x4 (from IN-21K-P 75 ep) | 80.6 / 94.7 |\n", " Thank you for the helpful comments and suggestions. We will add the suggested cross-dataset/multi-dataset training methods to our related work. Here we answer the specific questions below.\n\n**Q1: Clarification on computation cost (from scratch on videos vs. using large-scale image datasets).**\n\nThank you for your comment. The computation cost on L248 we refer to is the inference cost (the \"gFLOPs\" column in Tables 1 & 2), not the training cost. We will clarify this point in the paper.\n\nAs for training cost, the reviewer is right that for MViTv2 or other similar ViT models, the image versions with the same spatial resolution are usually 1/10 or even 1/20 of their video versions in terms of inference FLOPs. Below we compare the training wall time (since this directly converts to money spent on GPU clusters) of different setups.\n\nFor MViTv2, the training schedule for image-initialized models is 300 epochs on ImageNet-1K (or 90 epochs on ImageNet-21K+ImageNet-1K) and then fine-tuning on Kinetics for 100 epochs (or 75 epochs for ImageNet-21K models). Please refer to Appendix B.2 and B.4 in the MViTv2 paper for more details. Previously we did not use ImageNet pretraining mainly because we did not have a copy of the ImageNet data (1.3TB for ImageNet-21K!). We have now downloaded ImageNet-1K and ImageNet-21K-P [7*] (a compressed version of ImageNet-21K) for training cost experiments.\n1. Training time for videos: on our cluster, for Kinetics-400 (240K samples) training with 8 V100 GPUs, we are able to run at 75.2 clip/s, whereas a similar model variant in the MViTv2 paper can achieve 91.0 clip/s (Table A.6 in the MViTv2 paper). The difference may be due to the different I/O performance of the clusters and different video decoding methods (we use decord [8*] while they use torchvision). This means that training 1 epoch of K400 takes about 3200 seconds (6900 seconds for K700) with 8xV100 GPUs.\n\n2. Training time for ImageNet: with the same 8xV100 GPUs, training on ImageNet-1K (1.28M samples) for 1 epoch takes about 1800 seconds, and 11000 seconds on ImageNet-21K-P (11.06M samples). All our experiments have >90% GPU utilization. 
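A quick back-of-the-envelope check of the per-sample cost implied by the figures above (our own arithmetic from the reported numbers, both setups on 8xV100 GPUs):

```python
k400_samples, k400_clips_per_s = 240_000, 75.2
in1k_samples, in1k_epoch_s = 1_280_000, 1800

print(k400_samples / k400_clips_per_s)         # ~3191 s/epoch, i.e. "about 3200"
video_ms = 1000 / k400_clips_per_s             # ~13.3 ms per video clip
image_ms = 1000 * in1k_epoch_s / in1k_samples  # ~1.4 ms per image
print(video_ms / image_ms)                     # ~9.5x, i.e. roughly 1/10 per sample
```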
This roughly translates to 1/10 of the training cost **per sample** compared to video training.\nTherefore, finetuning from the ImageNet-1K recipe takes 34% more time (92% more if from ImageNet-21K-P) than training from scratch on K400, using our code and GPU clusters. From our experience, training from scratch takes less computation overall. However, finetuning from ImageNet leads to a 1-2 point top-1 accuracy boost (see Table 2* in the response to Reviewer U8Wa).\n\n**Q2: MViTv2 without relative positional embedding.**\n\nThanks for pointing this out. We implemented MViTv2 based on their paper and the MViTv1 code [9*]. We were not able to implement the relative positional embedding part due to missing details (symmetric relative or not, etc.) in their paper, so we use decomposed absolute positional embedding. The official code for videos was not released until July [9*]. Therefore we experiment with the \"MViTv2 w/o rel\" setting for all our runs (L137 and footnote). The number for \"MViTv2 w/o rel\" comes from Table A.6 of the MViTv2 paper as a reference (it should be 80.4 instead of 80.1 for MViTv2 w/ abs. pos.; we will fix it in the revised version). However, our baseline can only achieve 79.8 (Table 1) with the same training recipe, which may be due to slight differences in the Kinetics dataset (missing some videos, etc.). We will make this clear in the revised version.\n\n**Q3: \"Vanilla\" cross-dataset training v.s. \"- informative loss\".**\n\nYes, you are right that \"Vanilla\" = CrossEntropy (CE) only, \"- informative loss\" = CE + projection loss. The full model uses both losses. We will revise Table 3 to make it clear.\n\n**Q4: What is \\sigma_k in Eq (7) and L198 and how do you determine the value?**\n\n\\sigma_k is a scalar that works as a weighting term for dataset k. \\sigma is a vector of learnable parameters used to avoid tuning loss weights for different datasets (L196). We will make it clear.\n\n**Q5: ActivityNet dataset construction.**\n\nWe cut the annotated segments of the videos into 10-second-long clips and split the dataset into 107K training and 16K testing clips (L214 - L215). We will make it clear and release the exact data preparation code.\n\n**Q6: Visualization of projection weights.**\n\nThanks for the suggestion. We are in the process of generating the visualization and will update it in the revised version.\n\n[7*] Ridnik, Tal, et al. \"Imagenet-21k pretraining for the masses.\" NeurIPS 2021.\n\n[8*] https://github.com/dmlc/decord\n\n[9*] https://github.com/facebookresearch/SlowFast\n", " Thank you for the valuable and supportive suggestions. We will add all the mentioned related work and address the table caption format problem in the revised version.\n\nIn this paper, our practical goal is to propose a training paradigm for parameter-efficient models across multiple datasets, which could lead to less inference time overall for recognizing the same number of action classes.\nHere we address the specific questions below.\n\n----\n\n**Q1: Comparison with video domain generalization and multi-domain methods.**\n\nA1: Sorry for the confusion. Our work falls under the multi-task learning topic, not video domain generalization (DG). The key distinction is that our goal is to train a single model on multiple related tasks (multiple action datasets) such that the model performs well on the same set of tasks, whereas DG aims to generalize a model to unseen data distributions. 
Please refer to Table 2 of this latest published survey on domain generalization [1*].\n\nIn terms of the datasets and experimental settings of video domain generalization, we have found that most DG methods [2*, 3*, 4*, 5*] are compared on the UCF-HMDB dataset, and they mostly follow the adversarial domain generalization framework [6*]. Specifically, our setting differs in that:\n\n1. the size of the datasets. In video DG, the training and testing datasets usually contain a couple of thousand videos, whereas our setting includes large-scale video datasets of millions of videos.\n\n2. the number of action classes. In video DG, it is usually required that the target action classes are shared in both source and target domains. Hence the action classes involved are usually fewer than 14 (see Table 1 in [5*]). In our setting, which is the same as or similar to that in [27, 45, 1, 11], we aim to train a single model for over a thousand action classes (and the classes need not be shared across datasets).\n\n3. training objectives. In both single-source and multi-source video DG, methods aim to learn a model in such a way that the model can generalize well to any out-of-distribution target domain. We train and test on the same domains. In a way, our setting is easier than video DG (as well as video domain adaptation, where target domain data is sparsely provided) since we have access to the target domain data and labels during training. In terms of model performance, in a recent work [5*], under the DG setting, top-1 accuracy on Kinetics (14 classes) is under 20%, while our method averages 80%+.\n\nWe apologize for using the misleading \"cross-dataset\" term. We will revise our phrasing (changing to \"co-training\", etc.), make the task distinction clear in Sections 1 & 2 and add the aforementioned references. Please also provide specific references if the reviewer thinks we are still missing any.\n\n**Q2: Clarify the novelty of this paper regarding the backbone, loss and temporal modeling.**\n\nA2: We respectfully disagree with the statement that \"this work has no contributions to both the backbone and loss\". We clarify our contributions below.\n\nOur method is closely related to the line of multi-task learning methods [1, 27, 11, 45] for videos. Previous works in this field are scarce, perhaps due to the large demand for computation resources. In [1] (NeurIPS'21) the authors proposed a multi-modal transformer and a novel data augmentation method for training. In [45] and [27] the authors proposed to train with both image and video or other tasks simultaneously to improve performance. 
In [11] (ECCV'20) the authors proposed to train a teacher model to filter webly-labeled data for the final omni-source training.\n\nOur method is the first to propose a simple yet effective way (no multi-stage training, no complex dataset-wise sampling, no dataset-wise hyper-parameter tuning) to capture multi-action relations (L183-L187) and informative representations using SOTA vision transformers (L30-L57).\n\nWe are the first to effectively combine the informative loss (inspired by self-supervised contrastive learning in image recognition [3]) and the multi-task projection loss (built upon multi-task learning in the image domain [22]) to provide a principled way for multi-action-dataset training (Section 3.2).\n\nThe proposed method has been deployed in industrial products, and it is proven to save computation resources in practice.\n\nIn this work, we do not put emphasis on temporal modeling due to the fact that the base model (MViTv2) only has a receptive field of 2-3 seconds temporally. We leave long-term multi-action recognition for future work. We will clarify this part.\n\n[1*] \"Domain generalization: A survey.\" TPAMI 2022.\n\n[2*] \"VideoDG: generalizing temporal relations in videos to novel domains.\" TPAMI 2021.\n\n[3*] \"Temporal attentive alignment for large-scale video domain adaptation.\" ICCV 2019.\n\n[4*] \"Shuffle and attend: Video domain adaptation.\" ECCV 2020.\n\n[5*] \"Dual-Head Contrastive Domain Adaptation for Video Action Recognition.\" WACV 2022.\n\n[6*] \"Adversarial discriminative domain adaptation.\" CVPR 2017.\n", " This paper is the first work to introduce informative representation regularization into cross-dataset action recognition. This work makes full use of existing visual transformer backbones. This method is dedicated to learning robust and informative representations. By combining the projection loss, this work can effectively mine intrinsic class relations. Experiments on different datasets show that the method can achieve better performance and produce state-of-the-art results. Strengths:\n- This work seems to be the first work to introduce informative representation regularization into cross-dataset training for action recognition. It explores how to learn robust representations among multiple video domains. \n- To my knowledge, this work is the first work to bring the informative loss and projection loss into cross-dataset action recognition. This self-supervised loss can bring performance gains. \n- This method may be suitable for any action recognition model. \n\nWeakness:\n- This work seems to be a combination of existing video backbones, the projection loss, and the informative loss. Cross-dataset action recognition is a challenging problem due to the temporal information within the video sequence. This paper does not pay much attention to the temporal information in the sequence. The design of the projection loss and the informative loss did not consider the temporal dynamics. The projection loss and the informative loss should be carefully designed for this specific cross-dataset action recognition task, not directly reused. \n- This work has no contributions to either the backbone or the loss. The novelty of this work should be clarified.\n- The problem this work studies is a multi-domain problem. However, this work did not compare with multi-domain methods. - The table caption should be above the column.\n- Cross-dataset action recognition has taken many forms, such as Video Domain Adaptation and Video Domain Generalization. 
The cross-dataset setting in this paper focuses more on multi-domain learning. The concept of cross-dataset in this paper may be misleading and needs to be clarified. \n- This paper should add multi-domain baselines for a fair comparison with the existing multi-domain methods.\n- The authors should clarify the novelty of this paper as described in Strengths And Weaknesses.\n\n\n======\nI have read the authors' comments. Most of my concerns are clarified. I will increase my score. Yes", " The paper proposes a method to train video transformers on multiple video datasets. Instead of simply applying multiple cross-entropy losses, the authors propose to manipulate the embeddings encoded by an improved multiscale vision transformer to capture the intrinsic relations between classes across different action datasets. Specifically, they first adopt the informative loss from Barlow-Twins to maximize variance and covariance across embedding channels. Second, they propose to perform a directed projection from one dataset's classification head to another, to learn the label relation across datasets. The results show that the full pipeline is superior to vanilla training over multiple datasets and achieves state-of-the-art results. ### Strengths\n\n+ The performance is good considering that CrossRoad is trained purely on video datasets without any image pretraining.\n\n+ The method is simple, and the informative loss borrowed from BarlowTwins seems effective, especially when we want to include projection across different classification heads.\n\n### Weaknesses\n\n- Cross-dataset training is not only a standalone problem in the video domain. There have been a few works on other tasks such as detection [1,2] and segmentation [3]. The authors are suggested to consider discussing these methods in the related work for broader scope.\n\n[1] Zhou, Xingyi, et al. \"Simple multi-dataset detection.\" CVPR 2022.\n[2] Wang, Xudong, et al. \"Towards universal object detection by domain attention.\" CVPR 2019.\n[3] Lambert, John, et al. \"MSeg: A composite dataset for multi-domain semantic segmentation.\" CVPR 2020.\n\n- A few arguments need further justification, including\n - The computation cost comparison between training a video model from scratch v.s. pre-training on an image dataset and then finetuning. (See Q1.)\n- Some settings in the ablation need further clarification, including:\n - The reason for studying MViTv2 without relative position embedding (Q2.).\n - \"Vanilla\" cross-dataset training v.s. \"- informative loss\". I don't fully get the difference. Do you mean \"Vanilla\" = CrossEntropy (CE) only, \"- informative loss\" = CE + projection loss?\n\n- What is $\\sigma_k$ in Eq (7) and L198 and how do you determine the value?\n\n- What do the learned directed class projection weights look like? Some visualization and discussion might be preferred.\n\n- How do you construct the video clips on ActivityNet? Do you uniformly cut the entire video into 10-second clips or only keep the temporal segments annotated as activities? This might be useful for later efforts trying to reproduce the results. 1. \"Note that our model does not use any image training datasets, and our model computation cost is only a fraction of the baselines\" (L247-248). It is true that no pre-training on large image datasets is used. However, training a video model from scratch typically means more epochs for convergence (e.g., MViTv2 uses 200 epochs). 
Since training an image model is often cheaper (about 1/10 of a video model), I would like to see more discussion justifying this argument.\n\n2. MViTv2 without Rel-PE. If I am not mistaken, MViTv2 uses relative positional embedding by default (see their Figure 2). Therefore I do not fully understand the statement \"The “MViTv2 w/o rel” indicates the model without the relative positional embedding in the original paper\" (L234-235).\n\n\n==== \nPost-rebuttal revision:\nMy questions have been addressed. A 34-90% training-time saving over image-based pretraining at the cost of a 1-2% drop in accuracy is fine but not impressive. The learned directed class projection is somewhat interesting. Therefore I will keep the rating of \"5 Borderline accept\". Yes. The authors have discussed the limitations, e.g., co-training is limited to video datasets only. They have also covered potential negative societal impacts such as dataset biases.", " This paper proposes a new co-training paradigm **CrossRoad** for video representation learning. \nIt consists of two novel loss terms, namely the informative loss and the projection loss. \nThe informative loss encourages the variance of each dimension in the embedding to be large.\nThe projection loss maps predictions from other dataset heads to the current dataset's classes and uses ground-truth action labels to compute the standard cross-entropy loss. \nExperiment results show that the two auxiliary losses are helpful in co-training. ## Strength\n1. This work achieves strong recognition results via co-training multiple datasets with a relatively light transformer backbone, MViTv2; the improvement across multiple datasets is around 2% to 4%. \n2. The efficacy of the two novel components is validated via ablation studies.\n\n## Weakness\n1. Large-scale image pre-training is a common practice among video transformers. However, the related results are not provided in this paper. \n2. Though the authors have the ablation study for the two aux losses, there exist few analyses (see Questions). \n3. Some experimental findings in this paper are quite different from the findings in CoVER, but no explanation is provided. 1. Why does removing the informative loss lead to a complete failure? Would you please provide more insights or analysis? \n2. In experiments, with vanilla co-training, the K400 performance improved just a little bit (only 0.3%, and it is the same as the MViTv2 w/o rel entry in Table 1), while the performance on other datasets drops drastically. That finding is, however, quite different from the findings in CoVER. In CoVER, with vanilla joint-training, the performance on all datasets improves. What do you think could be the reason? Is it due to different pre-training or different architectures adopted? Please provide more results (like experiments with IN-21K pretraining) to support your conclusions. Yes." ]
[ -1, -1, -1, -1, -1, -1, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "K5XPJiaOGGQ", "iOuevRk4Ecq", "xSBxokVttq6", "a7SyyN7UZKN", "OFMh8_NpsF_", "jOkuFm850-z", "nips_2022_aGFQDrNb-KO", "nips_2022_aGFQDrNb-KO", "nips_2022_aGFQDrNb-KO" ]
nips_2022_a8qX5RG36jd
LasUIE: Unifying Information Extraction with Latent Adaptive Structure-aware Generative Language Model
Universally modeling all typical information extraction tasks (UIE) with one generative language model (GLM) has revealed great potential in the latest studies, where various IE predictions are unified into a linearized hierarchical expression under a GLM. Syntactic structure information, a type of effective feature which has been extensively utilized in the IE community, should also be beneficial to UIE. In this work, we propose a novel structure-aware GLM, fully unleashing the power of syntactic knowledge for UIE. A heterogeneous structure inductor is explored to unsupervisedly induce rich heterogeneous structural representations by post-training an existing GLM. In particular, a structural broadcaster is devised to compact various latent trees into explicit high-order forests, helping to guide better generation during decoding. We finally introduce a task-oriented structure fine-tuning mechanism, further adjusting the learned structures to best coincide with the end-task's needs. Over 12 IE benchmarks across 7 tasks, our system shows significant improvements over the baseline UIE system. Further in-depth analyses show that our GLM learns rich task-adaptive structural bias that greatly resolves the cruxes of UIE: the long-range dependency issue and boundary identification.
Accept
This paper proposes a latent adaptive structure-aware generative language model (GLM) to leverage syntactic knowledge for information extraction tasks. The proposed model incorporates a latent structure induction module that automatically induces tree-like structures akin to dependency and constituency trees. Experiments on 12 IE benchmarks across 7 tasks showed significant improvements over the baseline. Overall, all reviewers feel positively about this paper, even though they mention some aspects which can be improved in the final version. The conversion of information extraction tasks into a problem solvable by a GLM with three different prediction modules is original and valuable, and the experiments are well designed and generally convincing (although additional experiments on more recent and larger-scale datasets would make the paper stronger). The author response addressed well all the concerns of the reviewers, including the addition of several missing references. I urge the authors to incorporate these in their paper and to report the runtime of their method, to better understand the tradeoff between performance and speed, as well as examples of induced tree structures produced by their method, as suggested by one of the reviewers.
val
[ "uVMGhKHU04q", "eaTT5AoHY81", "HkMUAIRIkgH", "ogNoDsgUuvN", "K96IrfIsejW", "7ZnHCqstb3y", "l7TdtIvr2uU", "8rrYTAX4nV4", "BX_yt1PFy60", "mCjvPcStWp3" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for your acknowledgment. We representatively present the parsing results of the constituency syntax. Following are the experimental results of the grammar induction w.r.t. each tag, as you indicated. The results are the recall rates of the labels that were identified by the model (label recall). \n\n\n| Model | SBAR | NP | VP | PP | ADJP | ADVP |\n| ---- | ---- | ---- | ---- | ---- | ---- | ---- |\n| PRPN | 50.0 | 59.2 | 46.7 | 57.2 | 44.3 | 32.8 |\n| PCFG | 56.1 | 74.7 | 41.7 | 48.8 | 40.4 | 52.5 |\n| DIORA | 53.2 | 65.7 | 45.2 | 59.0 | 40.1 | 29.7 |\n| LasUIE (after post-training)\t | 47.3 | 53.7 | 37.8 | 43.8 | 36.4 | 27.0 |\n| LasUIE (after fine-tuning)\t | 7.5 | 30.2 | 16.5 | 15.6 | 4.7 | 8.2 |\n\nWe see that the fine-tuned LM shows clearly better parsing results on those shorter phrases (e.g., NP, VP, PP), instead of the long expressions (e.g., SBAR, ADJP, ADVP). We can imagine the fine-tuned model learns fine-grained phrases that are coincident with the end tasks' needs.\n", " Thanks for your clarification. I have no other questions.", " Thanks. I think all your responses make sense. For the last question, I think it would be more informative if you could present such numbers per constituency/dependency tags.", " We appreciate that you acknowledge the novelty and impact of our method. All your possible concerns are addressed as follows:\n\n\n----\n\n**Q: The experimented datasets are quite old, it would be better if there are some results on larger scale IE datasets.**\n\n**A:** Actually all the data of IE tasks used in our experiments are the benchmark ones in NLP community, and we think they have well representativeness for proving the advantages of our proposed model. Meanwhile, those IE benchmark datasets have medium sizes. For example, the OntoNote data for NER task comes with no more than 80k sentences, and for other tasks the training sets are even much less (see appendix **C.3.3 Data Specification**). We will search for a larger scale of IE datasets and show the results on them in our revision.\n\n\n\n\n----\n\n**Q: Apart from the dependency and constituency structure, are there any other structures that can be incorporated into the training process ?**\n\n**A:** In NLP community, in addition to the linguistic parsing trees (dependency and constituency structures), yes there are other structures, to name a few: Gumble-tree [1], Binary balanced tree, and also some fixed trees e.g., left-branch tree, right branch tree [2]. We note that, yes from the engineering perspective, all those types of tree structures can be incorporated into the IE systems for task enhancements. However, we believe that all those pattern-fixed structures are not as effective as the dynamically induced latent structure for UIE tasks.\n\n\n[1] Jihun Choi, Kang Min Yoo, Sang-goo Lee. Learning to Compose Task-Specific Tree Structures. In AAAI 2018: 5094-5101.\n\n[2] Haoyue Shi, Hao Zhou, Jiaze Chen, Lei Li. On Tree-Based Neural Sentence Modeling. In EMNLP 2018: 4631-4641.\n\n----\n\n**Q: May need to report the training time/complexity for this method, and the computing resources used.**\n\n**A:** Thank you for this indication. Actually, we showed the analysis of the model efficiency&complexity both theoretically and empirically in appendix **D.2 Efficiency Analysis**. We kindly refer you there for more details.\n\n", " Thank you for the valuable and supportive suggestions. We hereby carefully address your concerns one by one.\n\n\n\n----\n\n**Q: The post-training is a bit unclear to me. 
It seems like you construct trees from the pre-trained language models and treat them as ground truth for training. Am I correct?**\n\n**A:** We kindly note that the post-training does not work as described above. The structure induction process of our LasUIE GLM during the post-training is performed without the involvement of external (either auto-predicted or ground-truth) tree annotations. It is totally automatic, unsupervised learning. So we don’t need to construct trees from pre-trained language models in advance. We will add more details in our revision to make this easier to understand. Please kindly go to appendix **B.1 Three-stage Training Pipeline** for a clearer understanding, with a visual illustration of our proposed three-stage training process.\n\n\n\n----\n\n**Q: The EE result is weird. The reported score of OneIE is around 56 but it's 48.3 in the paper. Also, OneIE is not SOTA anymore.**\n\n**A:** We respectfully note that this work follows the practice of the SoTA baseline of UIE [1], and thus our evaluation metric is also in a fully end-to-end manner (we detailed the evaluation method in appendix **C.3.4 Evaluation**). In other words, the EE performance simultaneously includes the **entity mentions, event triggers, relations, and arguments**, which is a much stricter metric, and this is why the SoTA result of EE is 48.3% F1. The result you indicated above, 56.8% F1 by the OneIE model [2], only measures argument detection. If OneIE measured the EE performance in the same end-to-end manner, the result would be far lower than 48.3% F1. That being said, we will update all the SoTA baselines of the separate IE tasks in the revision.\n\n\n[1] Yaojie Lu, Qing Liu, Dai Dai, Xinyan Xiao, Hongyu Lin, Xianpei Han, Le Sun, and Hua Wu. Unified structure generation for universal information extraction. In ACL, pages 5755–5772, 2022.\n\n[2] Minh Van Nguyen, Viet Dac Lai, Thien Huu Nguyen. Cross-Task Instance Representation Interactions and Label Dependencies for Joint Information Extraction with Graph Convolutional Networks. In NAACL-HLT 2021: 27-38.\n\n----\n\n**Q: What will happen if you fine-tune for each task separately after the post-training (no unified training for every task)?**\n\n**A:** We respectfully guess you may have misunderstood the idea of our three-stage training process. Actually, our model is fine-tuned for each task exclusively after the post-training. So yes, the fine-tuning step is for one specific task only, i.e., **task-specific structure fine-tuning**, because each end task intuitively relies on a different structural bias that can be learned by the task-specific fine-tuning.\n\n\n\n----\n\n**Q: What will happen if you do not use additional corpora (Wikipedia and BooksCorpus) but only use the downstream texts for the post-training?**\n\n**A:** If only the texts from the end task are used for the structure-aware post-training, the structure learning will be badly hurt. The main reason lies in the amount of data. The training sets for the downstream tasks come with no more than 80k sentences (OntoNote for the NER task), and for the other tasks the training sets are even much smaller (see appendix **C.3.3 Data Specification**). We performed an analysis of the influence of post-training data size (see appendix **D.3 Influence of Post-training Data Size**), and we show that when the post-training sentences number over around 800k, our LasUIE can achieve near-to-top performances. 
If using less than 100k data, the performances are severely worsened universally for all end tasks.\n\n\n\n\n----\n\n**Q: Why not train LasUIE with T5-large and directly compare to UIE?**\n\n**A:** It is all about the running cost. T5-large is a very big model, and with T5-large, training our LasUIE will cost too much more days for one same experiment. To cover more experiments and present more results for the NeurIPS submission, we thus take the lighter version of the T5 base, which we think should not influence the experimental conclusions if under fair comparisons that all comparing models use the same T5 base. That being said, we will later publish more results with T5-large version.\n\n\n\n----\n\n**Q: The UIE paper reported the EE (ACE-05) scores as well. You should list them in the Table 1.**\n\n**A:** Although UIE paper reported the EE (ACE-05) scores, they did not show the end-to-end measuring performances; they show the separate results of the detection of triggers and arguments instead. We will consider re-running their model and show the end-to-end EE performances in our Table 1. Thank you for the suggestion.\n\n\n\n\n----\n\n**Q: Some limitations.**\n\n**A:** Thank you for your suggestions, and we will mention the additional use of the corpora, and add the missing relevant references.\n", " Thank you for acknowledging the strengths of our work. Following we show the feedbacks on your concerns and questions.\n\n----\n\n**Q: Prior related work of task-specific latent structure idea is not mentioned in the paper.**\n\n**A:** Thank you for indicating these prior works. In appendix **A.4 Extended Related Work** we mentioned the line of work about the latent structure induction [55,56,23]. We will further add the references in our revision about the idea of constructing task-specific latent structures you pointed out here [1,2,3], and clearly state them in the Related Work part. \n\n[1] Jihun Choi, Kang Min Yoo, Sang-goo Lee. Learning to Compose Task-Specific Tree Structures. In AAAI 2018: 5094-5101.\n\n[2] Andrew Drozdov, Patrick Verga, Mohit Yadav, Mohit Iyyer, Andrew McCallum. Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders. In NAACL-HLT (1) 2019: 1129-1141.\n\n[3] Adina Williams, Andrew Drozdov, Samuel R. Bowman. Do latent tree learning models identify meaningful structure in sentences? TACL 6: 253-267 (2018).\n\n[23] Yoon Kim, Chris Dyer, and Alexander Rush. Compound probabilistic context-free grammars for grammar induction. In ACL, pages 2369–2385, 2019.\n\n[55] Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron C. Courville. Neural language modeling by jointly learning syntax and lexicon. In ICLR, 2018.\n\n[56] Yikang Shen, Shawn Tan, Alessandro Sordoni, and Aaron C. Courville. Ordered neurons: Integrating tree structures into recurrent neural networks. In ICLR, 2019.\n\n----\n\n**Q: The tradeoff between the performance and speed.**\n\n**A:** We in appendix **D.2 Efficiency Analysis** presented the discussion concerning the model inference speed, and made comparisons with the SoTA baselines. We show that our structure-aware GLM that makes use of the latent structures achieves better performances without the sacrifice of running efficiency much. We kindly refer the reviewer to this part for a more detailed analysis.\n\n\n----\n\n**Q: Include significance testing.**\n\n**A:** We in appendix **C.3.4 Evaluation** detailed the specification of experimental evaluation. 
We report the average scores with unbiased standard deviations on 5 runs with different random seeds. In practice, for those re-implemented baselines we actually performed **paired t-test** with $p < 0.05$.\n\n\n----\n\n**Q: A more extensive analysis of the induced tree structures would also be very useful.**\n\n**A:** We in appendix **D.6 Case Study** presented some pieces of empirical visualizations of the induced structures. From the visualizations we also find that our LasUIE GLM has the advantage of explainable prediction.\n\n\n\n----\n\n**Q: How different/similar are the induced trees and the original dependency/constituency trees?**\n\n**A:** To validate this, we in these rebuttal days perform the unsupervised tree (grammar) induction based on the PTB test set, including the Constituency tree and Dependency tree, and make comparisons (Accuracy) with some representative methods of this task. Note that we directly take the induced trees from the attention heads before compacting them into the forest so that we retain the tree topology. We see from the table that our model after post-training of unsupervised structure induction shows the tree induction performances on par with strong-performing systems. After fine-tuning with the end tasks, interestingly, the results of tree induction are rapidly dropped. We assume that there are quite distinctions between the induced trees (structures) and the fixed dependency/constituency trees.\n\n| Model | Constituency | Dependency |\n| - | :-: | :-: |\n| PRPN[1] | 58.3 | / |\n| PCFG[2] | 60.1 | / |\n| DIORA[3] | 56.2 | / |\n| NDMV[4] | / | 67.5 |\n| LasUIE (after post-training) | 53.6 | 64.4 |\n| LasUIE (after fine-tuning) | 17.2 | 25.6 |\n\n\n[1] Yikang Shen, Zhouhan Lin, Chin-Wei Huang, and Aaron C. Courville. Neural language modeling by jointly learning syntax and lexicon. In ICLR, 2018.\n\n[2] Yoon Kim, Chris Dyer, and Alexander Rush. Compound probabilistic context-free grammars for grammar induction. In ACL, pages 2369–2385, 2019.\n\n[3] Andrew Drozdov, Patrick Verga, Mohit Yadav, Mohit Iyyer, Andrew McCallum. Unsupervised Latent Tree Induction with Deep Inside-Outside Recursive Auto-Encoders. In NAACL-HLT (1) 2019: 1129-1141.\n\n[4] Songlin Yang, Yong Jiang, Wenjuan Han, Kewei Tu. Second-Order Unsupervised Neural Dependency Parsing. In COLING 2020: 3911-3924.\n\n", " We thank all reviewers for their time and for giving valuable and supportive comments. Information Extraction is one of the most fundamental tasks in NLP & data mining community. In this paper we follow the latest line of IE under the unified form (i.e., UIE), and investigate a generative language model that is empowered with linguistic(-like) structure knowledge. Both theoretically and empirically, our method shows great potential in solving the key challenges of the IE or UIE, including the long-range dependence issue and boundary identifying, pushing the current state of the art on a wide range of IE datasets.\n\nWe believe our method will spur more follow-up research on the line of structure-aware LM for UIE, and meanwhile show more impacts on NLP community. We will release our codes and metadata upon the acceptance of the work. We will further proofread the article, correct all the typos and double-check the contents according to reviewers’ comments, so as to make it ready to publish.\n\nAdditionally, we would like to draw reviewers’ attention to the [**supplementary material**](https://openreview.net/attachment?id=a8qX5RG36jd&name=supplementary_material) part. 
There we previously uploaded the full version of our paper with **detailed appendix**, including many more model and experimental specifications and extended analyses. We sincerely hope that reviewers could have a read, which may help to build a better understanding of this work.\n", " The paper proposes a latent adaptive structure-aware generative language model for information extraction tasks. The proposed model incorporates a latent structure induction module that automatically induces tree-like structures akin to dependency and constituency trees. Experiments in IE benchmarks show that the proposed model outperforms the state-of-the-art models. Strengths\n* I appreciate the extensiveness of the experiments in Section 5. I personally had questions regarding the difference between constituency and dependency structures as well as the difference between internally learned (latent) and externally predicted structures. These are answered well in the paper.\n* The paper is written well. The motivation behind the use of latent structures and the intuition behind the design of the model look sound.\n\nWeaknesses\n* The idea of using task-specific latent structures started in this paper [1], however this is not mentioned at all in the paper. Moreover, there are many prior work in latent structure induction [2, 3, among others] that were also not mentioned in the paper.\n* It would be very helpful for readers to know the tradeoff between the performance and speed. This is crucial especially since the differences in performance between the proposed model and the state-of-the-art models are rather small.\n* Please also include significance testing in the results.\n* A more extensive analysis on the induced tree structures would also be very useful for readers. For example, are the trees interpretable and useful outside of the model? Can humans make use of such trees to explain predictions? Also, including examples of induced trees would be nice to have.\n\n[1] https://ojs.aaai.org/index.php/AAAI/article/view/11975\n[2] https://arxiv.org/abs/1904.02142\n[3] https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00019/43445/Do-latent-tree-learning-models-identify-meaningful * How different/similar are the induced trees and the original dependency/constituency trees? These can be compared by calculating the tree induction accuracy. * A more extensive analysis on the induced trees would have improved the paper.", " This paper propose a new framework to improve the unified information extraction by considering syntactic knowledge in an unsupervised way. Specifically, they reuse the constituency tree information and dependency tree information learned by pre-trained language models and do further learning. They also provide structural broadcaster and task-oriented fine-tuning to utilize the learned syntactic features. Experimental results on several information extraction tasks show the potential of the propose LasUIE. Strength\n- They show improvements on several types of information extraction tasks and datasets.\n- They provide ablation studies to analyze the influence of each module.\n\nWeakness\n- Some technical details are not clear for me (please see the questions below).\n- The EE result is weird (please see the questions below). - The post-training is a bit unclear to me. It seems like you construct trees from the pre-trained language models and treat them as ground truth for training. Am I correct?\n- The EE result is weird. The reported score of OneIE is around 56 but it's 48.3 in the paper. 
Also, OneIE is not SOTA anymore, here is one reference paper\n - Cross-Task Instance Representation Interactions and Label Dependencies for Joint Information Extraction with Graph Convolutional Networks, NAACL 2021\n- The post-training can be independent of the downstream tasks. What will happen if you fine-tune for each task separately after the post-training (no unified training for every task)?\n- What will happen if you do not use additional corpora (Wikipedia and BooksCorpus) but only use the downstream texts for the post-training?\n- In the paper, you mention that for fair comparison, you reimplement UIE with T5-base. Why not train LasUIE with T5-large and directly compare to UIE?\n- The UIE paper reported the EE (ACE-05) scores as well. You should list them in Table 1. - The proposed method uses additional corpora (Wikipedia and BooksCorpus) to learn the syntactic information. I suggest the authors explicitly mention this in Table 1 and Table 2.\n- Missing related work\n - Structured Prediction as Translation between Augmented Natural Languages, ICLR 2021\n - Cross-Task Instance Representation Interactions and Label Dependencies for Joint Information Extraction with Graph Convolutional Networks, NAACL 2021 This paper proposed a structure-aware GLM, in which they leveraged syntactic knowledge for UIE. A heterogeneous structure inductor is explored to unsupervisedly induce rich heterogeneous structural representations by post-training an existing GLM. The authors did experiments over 12 IE benchmarks across 7 tasks and showed significant improvements over the baseline UIE system. Strength:\n* Converts UIE into a generative LM problem, and designs three modules to do the prediction\n* A structure-aware training stage is added before the fine-tuning stage. \n* Good experiment design.\n\nWeakness:\n* The experimented datasets are quite old; it would be better if there were some results on larger-scale IE datasets. Apart from the dependency and constituency structure, are there any other structures that can be incorporated into the training process? May need to report the training time/complexity for this method, and the computing resources used. " ]
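For readers of the exchange above: the per-tag "label recall" table in the first response is not spelled out anywhere in this thread. Below is a minimal sketch of the usual way this metric is computed in the grammar-induction literature — assuming trees are represented as sets of (label, start, end) constituent spans; the function and representation here are our illustrative assumptions, not the authors' actual evaluation script.

```python
from collections import defaultdict

def label_recall(gold_trees, pred_trees):
    # gold_trees / pred_trees: parallel lists, one set of
    # (label, start, end) constituent spans per sentence.
    hit, total = defaultdict(int), defaultdict(int)
    for gold, pred in zip(gold_trees, pred_trees):
        # Induced trees carry no gold labels, so a gold constituent
        # counts as recalled if its span appears in the prediction.
        pred_spans = {(s, e) for _, s, e in pred}
        for label, s, e in gold:
            total[label] += 1
            if (s, e) in pred_spans:
                hit[label] += 1
    return {lab: hit[lab] / total[lab] for lab in total}
```

Under this reading, the sharp drop after fine-tuning in the table simply means the fine-tuned model's induced spans stop matching gold constituents of the longer phrase types.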
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "HkMUAIRIkgH", "K96IrfIsejW", "7ZnHCqstb3y", "mCjvPcStWp3", "BX_yt1PFy60", "8rrYTAX4nV4", "nips_2022_a8qX5RG36jd", "nips_2022_a8qX5RG36jd", "nips_2022_a8qX5RG36jd", "nips_2022_a8qX5RG36jd" ]
nips_2022_Ojakr9ofova
Scalable Infomin Learning
The task of infomin learning aims to learn a representation with high utility while being uninformative about a specified target, with the latter achieved by minimising the mutual information between the representation and the target. It has broad applications, ranging from training fair prediction models against protected attributes, to unsupervised learning with disentangled representations. Recent works on infomin learning mainly use adversarial training, which involves training a neural network to estimate mutual information or its proxy and thus is slow and difficult to optimise. Drawing on recent advances in slicing techniques, we propose a new infomin learning approach, which uses a novel proxy metric to mutual information. We further derive an accurate and analytically computable approximation to this proxy metric, thereby removing the need of constructing neural network-based mutual information estimators. Compared to baselines, experiments on algorithmic fairness, disentangled representation learning and domain adaptation verify that our method can more effectively remove unwanted information with limited time budget.
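As a reference point for the abstract above: the generic infomin objective it describes is commonly written as

$$\min_{\theta}\; \mathcal{L}_{\text{task}}\big(f_\theta(X),\, Y\big) \;+\; \beta\, \hat{I}\big(f_\theta(X);\, T\big),$$

where $Y$ is the task target, $T$ is the specified attribute to remove, and $\hat{I}$ is an estimate or proxy of mutual information. This notation is ours, not necessarily the paper's; the paper's contribution is to replace the adversarially trained neural estimator $\hat{I}$ with an analytically computable sliced proxy.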
Accept
**Summary**: This paper develops an infomin-based representation method based on the recently-proposed sliced mutual information estimator. Unlike other methods, the proposed approach does not rely on an adversarial objective and provides a tractable proxy metric that eliminates the need for neural estimators of the mutual information. Experiments on independence tests, disentangled representation learning and algorithmic fairness aim to illustrate both improved utility and higher scalability. **Strengths**: Reviewers were overall positively predisposed towards this paper. They noted that this is a well-written paper, with sound and well-motivated theoretical analysis [d477, hqSG]. The proposed method, which derives from canonical correlation analysis, is novel and computationally efficient [d477]. Experiments are sound and satisfactory, with benchmarks that include a good range of datasets and baselines. Reviewer *dNaL* notes good results in terms of both expectation and variance, in addition to improved computational efficiency relative to adversarial methods. **Weaknesses**: Reviewers also noted limitations. Reviewer *d477* noted a missing reference to CLUB (Chen et al., ICML, 2020) which would be a strong neural baseline. Several reviewers found that scalability claims are not strongly supported and that larger-scale experiments might strengthen the paper in this context [d477, hqSG]. More generally, reviewers were concerned that the submission lacks certain important implementation details [d477, dNaL]. In terms of the experiments, reviewers had a number of suggestions, including a comparison of sliced MI to the analytically calculated MI for some toy example [d477], a comparison to simpler adversarial cross-entropy-based methods for the fairness experiments (e.g. DANN/LAFTR) [dNaL], a comparison to LieGroupVAE [hqSG], and reporting of disentanglement metrics such as MIG and the FactorVAE score [hqSG]. **Reviewer Author Discussion**: While the authors were not able to carry out larger-scale experiments, they provided an ablation study to further support claims of scalability. They also added a discussion of CLUB, clarified that DANN/LAFTR are similar to the Neural TC baselines, and clarified that disentanglement scores cannot be computed for the vector-valued quantities that are under consideration for this paper. Reviewer *d477* raised their score 5->6, reviewer *hqSG* raised their score 6->7. **Reviewer AC Discussion**: Reviewers unfortunately did not respond to the AC during the discussion phase. The AC takes this as a signal that reviewers do not object to acceptance, but also do not champion it. **Overall Recommendation**: This submission is just about above the bar for an accept, though the lack of a clear champion among reviewers somewhat limits confidence.
train
[ "cn_PKrwR5xw", "hvoVdfYiwbY", "GFWg_R7mW0Z", "2bwJkktn7nN", "dPFdeme7-AF", "Isg1gxFA7H8", "l4HbSze2QC-", "2ibZBv8ZAGc", "KBp8iWkiAAJ", "0yCXVwNjE9Y", "3dWgIzjhqtE", "MzPMjm83zUp" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the authors' response. \n\nI am glad to see that the presentation of the paper significantly improves, and the authors add a pseudo algo comparison with adversarial learning based approaches as well as the CLUB baseline. I am mostly satisfied with the answers. Thus I update the score from 5 to 6.\n", " The author's answer and updated work address my concerns. The disentanglement evaluation setup based on vector rather than scalars seems sensible - and it seems that traditional metric could be adapted into this setup. The new ablation study and author's reply addressed and clarified my scalability question.", " *Reviewer hqSG*: Could you comment on the author response? In particular, do you agree that disentanglement metrics are not applicable to vector-valued variables? Does the new ablation study on p5 address (some of) your concerns regarding the strength of baselines and scalability?\n\nI would also appreciate if you could expand upon your overall opinion of this submission; you review in its current form is rather terse, which makes it difficult for the meta-reviewers to evaluate whether this submission should be strongly considered for acceptance. \n\nThanks!", " *Reviewer dNaL*: The authors have followed up to determine how the performance of adversarial methods depends on the number of adversarial steps. Could let them know what you think of these additional experiments? I would also appreciate hearing whether you agree that the Neural TC baseline is sufficiently similar to DANN/LAFTR. Thanks! ", " *Reviewer D477*: The authors have added a comparison to CLUB, as well as an additional ablation study to support the claim of scalability. Could you let the authors know to what extent your concerns have been addressed? ", " We would like to thank all reviewers and the AC for their time and the insightful comments. The comments are highly valuable and help greatly to improve the quality of the work. Below is a summary of our response:\n\n- We appreciate all reviewers for acknowledging the technical novelty and theoretical soundness of the method. The proposed method presents a third way (i.e. random independence test) for achieving infomin representation learning where the choices have long been either adversarial- or variational- methods. A new, detailed analysis on how the method differs to recent baseline suggested by the reviewers (reviewer d477) are also included in the rebuttal.\n\n- We also thank all reviewers for pointing out the limitations of our work in empirical evaluation, most of which as we believe have now been addressed. The efforts include comparison with further baselines (requested by reviewer d477, dNaL and hqSG), new ablation studies regarding the scalability of the proposed method and the baselines (requested by reviewer dNaL and reviewer hqSG), clarifications on the evaluation metrics used, better reproducibility (code demo + better presentation) etc. Please see the one-to-one response below.\n\nAny further comments/criticisms/suggestions are welcome. \n\n-------------------------------------------------------------------------------\n\nWe sincerely thank the reviewers for their post-rebuttal feedback, as well as the AC who has been very responsible. \n", " We are thankful for your positive comments as well as the suggestions. They are really insightful and help a lot in improving the work. Following your suggestions we have made several improvements. 
Please see below.\n\n- As you kindly suggested, we have now included a new ablation study on how the number of adversarial steps will affect the performance of adversarial methods. The results for the fairness tasks are presented on p5 in the updated appendix (the same experiment for disentangled representation learning is still running and is really time-consuming). Two conclusions can be made: (a) given sufficient time budget, adversarial methods can indeed catch up with or even outperform our method, so their poor performance here may be just due to insufficient training time rather than other factors such as their neural/minmax nature; (b) however, to reach the same level of fairness, the required time in adversarial methods are much longer than our method, typically several times or even an order longer. Note that we are conservative in generalising these conclusions above to other tasks/datasets. More comprehensive reports will be given after collecting the results of disentanglement tasks.\n\n- Together with the analysis in the appendix, the ablation study above may also serve as a verification of one typical failure mode (non-sufficient training time) of adversarial training methods. We will continue to verify other failure modes (e.g. optimisation difficulty).\n\n- We have also investigated in depth DANN/LAFTR, two important baselines you mentioned. Although they have different motivation and loss functions, we found that these cross-entropy-based methods have many similarities to Neural TC, a baseline we already considered. More specifically, the neural network in both methods estimates the conditional density $p(T|Z)$ either explicitly (DANN/LAFTR) or implicitly (Neural TC). See the analysis “discussion on other potential baseline” in p5 of the updated appendix. In this sense we feel it is unnecessary to compare with DANN/LAFTR but just mention them. Another reason we choose not to compare with DANN/LAFTR is that they can only be applied to the case where $T$ is discrete, whereas the tasks considered here cover both continuous and discrete cases. Neural TC, on the contrary, does not have such limitations and can be applied in all cases. We have added the above discussion in the revised manuscript.\n\n- Other suggestions such as improving notation and the appearance of figures, and including commonly-used evaluation metrics will also be addressed very soon. ", " \nThank you very much for your valuable feedback. Below we try to address your concerns and questions. \n\nTo your comment regarding the weakness of the work:\n\n- We agree that adopting metrics in existing literature like MIG or Factor-VAE score will be very helpful but they might not be so suitable here. This is because these metrics are mainly designed to assess how each dimension $Z_d$ (a scalar) in the representation $Z$ is disentangled from each other, whereas our scenario here is to assess the disentanglement between two vectors ($Z$ and $T$). In fact, if one adapts MIG to our scenario, he/she will see the that MIG is indeed equivalent to the metric we used (we can elaborate more on this). On the other hand, we have also tried to adapt Factor-VAE score to our case. However, this metric needs to find the most invariant dimension in controlled generation, but how this invariance is defined for vectors remains unclear. Nonetheless we will continue to try other ways for adapting these metrics to our case.\n\nTo your questions:\n\n- Thanks for pointing us to LieGroup VAE, which broadens our view. 
This kind of group theory-based method for achieving disentanglement is definitely worth mentioning, and we have now included a discussion about it (and other related works) in the revised paper (see p5 in the main text). However, as our focus here is to compare different information-theoretic approaches (as represented by the current baselines), we regret that we will not directly compare with LieGroup VAE (or other group theory-based methods) this time, but rather leave it to future work, where the comparison between information theory-based and group theory-based approaches for disentangelment will be explored.\n\n- Suggestions on latent traversal: the label swapping experiment presented in Figure 2 and Figure 3 is indeed a kind of latent traversal. More specifically, if we travel horizontally among the images in the same row, then it is the same as \"fix $T$ and change $Z$\" (where the sampling distribution $p(Z)$ is implicitly defined by the training set). On the other hand, if we compare the images before and after label swapping, then it is the same as \"fix $Z$ and change $T$\" (where the sampling distribution $p(T)$ is a uniform distribution). We have revised the captions in Figure 2 and Figure 3 accordingly to better inform the readers. \n\n- Baselines too weak: the seemingly poor performance of the considered baselines is indeed the result of our controlled experiment where the time budget of the baseline is set to be the same as our method. In fact, if we increase the time budget of these baselines (i.e. increase the number of gradient steps in inner loop optimisation) unlimitedly, the performance of these baselines can catch up with our method or even outperform it. See the new ablation study on p5 in the revised appendix. On the other hand, we do have included a new, strong baseline i.e. Variational Upper Bound (denoted as VB in the main text) in the revised paper. Please see the paper for more details.\n\n- Experiments on how the new method scales compared to previous baselines: in this work, we mainly define scalability as the time complexity of the algorithm required to reach a certain level of performance. In adversarial methods, this is $O(L_1 L_2)$ whereas in our adversary-free method this is $O(L_1)$. Here $L_1, L_2$ are the number of gradient steps for outer and inner loop optimisation respectively. To verify that the proposed method is really more scalable, a new ablation study on how the performance of adversarial approaches change w.r.t $L_2$ has now been included; see \"ablation study on the number of adversarial steps\" on p5 of the updated appendix. From the new experiments, we see that in order to achieve the same level of performance as our method, adversarial methods typically require a large $L_2$ and hence much longer execution time than our method. We believe this demonstrates the scalability of our method.\n\n", " Many thanks for your detailed comments and the criticism. They identify some important weaknesses of the work which we hope to address here and in the revision. We would also like to clarify some points which may have been misunderstood due to our presentation.\n\nClarifications:\n\na) *SI is not an estimator to MI nor its bounds*, but a proxy of MI used in optimisation. While SI shares the same motivation as CLUB (to achieve infomin learning), and that they both have the property \"SI=0/CLUB=0 -> MI=0\", the principles behind them are quite different. More specifically, CLUB works by first estimating (an upper bound of) MI then minimising it. 
SI, on the contrary, never estimates MI (or its bound), but instead minimises statistical dependence in the sliced spaces. The difference here is “MI estimate in the original space” v.s. “independence test in the sliced space”. Importantly, working in the sliced space allows us to find an analytical expression for SI, which is not possible for CLUB (and other neural MI estimators). A demo for demonstrating how SI works as compared to CLUB has been included in the uploaded code.\n\n\nb) *SI is neural network-free*. While it is natural to model the functions $h, g: R \\to R$ in SI by neural networks, they are modelled by K-degree polynomials (remark that $h, g$ are 1D functions so polynomials are generally powerful enough). For a particular slicing direction, the parameters (i.e. the coefficient) of the polynomials for that direction can be solved analytically by eigendecomposition, as explained by the texts from eq.5 to eq.6 on p3. No neural network is used here. That being said, it is also possible to model $h, g$ as neural networks, which are more powerful but do not have analytic solution any more (and in such case one have to learn them by SGD). \n\nResponding to the commented weakness:\n\n- We thank you very much for pointing us to CLUB, an important baseline that we should not have missed. We have now compared to CLUB. Please see the revised manuscript. Code for reproducing the results has also been updated (which is adapted from the author’s official repo). We find that CLUB does work very well in many cases, being a strong baseline to compare with (possibly due to its upper bounding nature). However, it is still less effective than our method under the time budget given. One possible explanation is that CLUB is still in essence an adversarial approach (where the conditional density $p$ needs to be estimated by some gradient steps first), and the tightness of the upper bound in CLUB depends on how good $p$ is. If the number of gradient steps used to learn $p$ is insufficient, or $p$ is not powerful enough, the resultant bound may not be tight and hence less satisfactory performance. Nonetheless, given sufficient training time and powerful enough $p$, it is completely possible that CLUB will outperform our method.\n\n- We also agree that it is beneficial to include large-scale experiments such as those considered in InfoBert, where the model size is large and the dimensionality of representation is high. However, besides these factors, scalability may also be evaluated by the time complexity of an algorithm. From this viewpoint, an adversarial training-free method is clearly more scalable than its adversarial counterpart as it scales the time complexity from $O(L^2)$ to $O(L)$ (here $L$ is the number of gradient steps). In fact, in a new ablation study, we find that to achieve the same level of performance, our adversary-free method typically requires much less time than adversarial methods; see p5 \"ablation study on the number of adversarial steps\" in the new appendix. So we believe our claim on scalability is still reasonable. \n\n- Thanks a lot for pointing out these issues. We have now correctly cited the two datasets as well as provided an algorithm block in the main text (see p4 in the main text). To ease comparison, we have also provided an algorithm block for conventional adversarial approaches. 
For the learning of the functions $h, g$, they are not neural networks and are solved analytically using objective (6), as already clarified in b) above.\n\n\n- As clarified in a), SI is just a proxy of MI used in optimisation rather than an estimate to it (they even do not work in the same space). It is neither the lower nor upper bound of MI. In this sense, it may make little sense to compare the values of SI and MI directly as done in the CLUB paper. Investigating the test power of SI as an independence test as done in the current experiment may be more sensible.\n\nResponding to your question:\n\nAs clarified in b), there is no neural network used in our method: the functions $h, g$ in SI are approximated by polynomials whose parameters can be solved analytically. This analytical property, which removes the need of adversarial training, makes our method fundamentally different to other neural network-based approaches, and is the source of scalability of our method.", " In this paper, the authors leverage sliced mutual information and propose a tractable infomin learning algorithm. Compared to the existing methods, the proposed algorithm requires neither adversarial training nor a neural-based mutual information estimator. Instead, the authors replace the mutual information term with a sliced version in the infomin objective, and derive an analytic approximation to sliced mutual information. Moreover, solving hi and gj in the sliced mutual information together yields an upper bound of the analytic solution, and thus more efficient in practice. The authors compare the proposed method with 4 different baselines, including the non-parametric method and adversarial training method. They evaluate them on three tasks: in the independence test, they show that sliced mutual information is less effective than the adversarial training methods, but they require longer training time; in the fairness experiment, under a similar level of utility, the proposed method has significantly better fairness than the baselines; in the disentangled representation learning experiment, they show a similar observation to the fairness experiment that the proposed method achieves much better disentanglement without sacrificing the utility a lot. **Strengths**:\n- The paper is overall well written. The theoretical analysis is sound and well motivated.\n- The idea of using proxy metrics instead of directly working on the intractable MI is interesting\n- The experimental setup is sound with different baselines\n\n**Weaknesses**:\n- I think the paper misses one important and related paper [1], and thus lacks further discussion of one important implementation of infomin optimization that leverages neural-based methods to give a trackable estimation of the upper bound of mutual information. This should serve as a strong baseline and compare with the proposed method, non-parametric methods, and adversarial training methods as well. I am expecting to see more quantitative analysis on the comparisons. \n- I am also looking forward to seeing more larger-scale experiments, as the paper is claimed to be “scalable”. For example, the CLUB MI estimator [1] is known to be applied to large language models like BERT to improve the adversarial robustness [2]. \n- The paper misses some important implementation details: (1) a brief introduction and citation to the datasets Dsprite and CMU-pie are missing. (2) how the networks h,g are trained needs more details, e,g, the training objective. 
(3) I am also looking forward to seeing a detailed pseudo algorithm in terms of the details mini-batch learning to improve the reproducibility.\n- I am also expecting to see how is the sliced MI related to the real MI. It might be better to give some toy examples where MI can be calculated and see how close the methods are.\n\n[1] Cheng, Pengyu, Weituo Hao, Shuyang Dai, Jiachang Liu, Zhe Gan and Lawrence Carin. “CLUB: A Contrastive Log-ratio Upper Bound of Mutual Information.” ICML (2020).\n\n[2] Wang, Boxin, Shuohang Wang, Yu Cheng, Zhe Gan, R. Jia, Bo Li and Jingjing Liu. “InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective.” ICLR (2021)\n Please answer the questions in the weaknesses section. \n\nBoth your method and MI upper bound [1] utilize neural networks for MI estimation, could the authors elaborate more on the advantages of your method?\n\nI am willing to increase my scores if my questions and concerns are resolved.\n See the weakness section.", " The authors propose an adversary-free method for informin-based representation-learning based on the recently-proposed sliced mutual information estimator, noting the optimisation difficulties and costs associated with the adversarial approaches that have long been the go-to methods for tasks such as disentanglement and fair-representation learning. This is achieved by substituting the empirical approximation of sliced mutual information (SI) with a closed-form solution that can be well-approximated with K-degree polynomials, leading to a problem that can be efficiently solved using canonical correlation analysis. The authors establish error bounds on said analytic approximation and show that the entailed functions can be solved in parallel for each slice without violating the objective.\nExperiments are performed on a number of datasets spanning the interrelated tasks of independence testing, disentanglement, and fair-representation learning, with Renyi correlation between the learned representation and Y (aiming to maximise) and T (aiming to minimise) as the evaluation metric. The proposed method, Slice, is shown to outperform other infomin-learning approaches by a significant\nmargin while remaining computationally efficient.\n - The paper builds on recent research on mutual-information estimation to develop a method that is both adversarial free\nand computationally-efficient, and how it might be practically applied to problems of disentanglement and fair-representation\nlearning -- to my knowledge, the analytic approximation, derived from canonical correlation analysis, is novel within the\ngiven context, and seems to be both theoretically and practically sound.\n- The motivation for the seeking out non-adversarial alternatives to infomin learning is clearly established and the description of the method is generally well-structured and easy-to-follow, though I feel there is some room for improvement in the choice of notation (such as in the decision to use D and d for the dimensionality of Z and T, respectively).\n- The analysis of the results is satisfactory; the conclusion is serviceable but feels rushed and I'm not convinced there's sufficient evidence given in the paper to 'verify' the failure modes of adversarial methods.\n- The method is benchmarked against a good range of relevant baselines and using an appropriate suite of datasets\n(though it would be nice to see variation in the domain of the fairness datasets) and results are aggregated over multiple\nreplicates in all cases. 
The procedures associated with each set of experiments are outlined in sufficient detail.\n- While the existing baselines are solid, for the fair-representation learning experiments it would be nice to see a simpler adversarial cross-entropy-based baseline a la DANN/LAFTR, given its prominence in the literature, despite its inability to capture higher-order dependencies like its Renyi counterpart.\n- Given the statements made about the convergence of the adversarial methods, I feel it would also be of interest to include ablations in which the number of adversarial steps is increased enough to determine whether this is a relevant factor, or whether the problems with these methods truly are of a more fundamental nature.\n- The method, Slice, seems to perform impressively in terms of both expectation and variance. While Slice does incur decent computational overhead compared to the simpler baselines, such as 'Pearson', it is nonetheless shown to be significantly more efficient than its adversarial competitors, which suffer additional problems due to their parametric/minimax nature. The results included in the main paper, together with those included in the Appendices, support these conclusions convincingly enough, with the experimental setups used to produce them largely consistent with those of past work.\n- While the authors do discuss the relationship between Renyi correlation and DP, and give good rationale for adopting solely the former, it would nonetheless be good to additionally include the latter for the evaluation of the method in the context of representation learning, given the conventions of the pre-existing literature on the problem.\n- The images in Figures 2 and 3 comparing how Slice and an adversarial method (which one?) cope with label-swapping are rather small and there's no indication as to what the label corresponds to within the figures themselves or their captions. Those things aside, I think it would be helpful if the differences between the methods were highlighted, as it's not immediately obvious where exactly one should be looking.\n - I am satisfied with the extent to which the limitations and ethics of the work have been addressed.", " The authors propose a method for infomin learning based on information theory tools from previous work, which estimates the Mutual Information (MI) in the “sliced space”, which intuitively is a facet of MI. Strengths\n- The method seems to be explained well in terms of theoretical justifications\n- The experiments seem to be provided with sufficient details\n\nWeaknesses\n- In the disentanglement experiment other metrics could be used to make sure that models are learning disentangled representations, e.g. MIG, FactorVAE score, etc.\n - On the disentanglement experiment more recent work could be considered as a baseline, for example LieGroupVAE (https://arxiv.org/pdf/2106.03375.pdf)?\n- In the disentanglement experiment's qualitative Figures 2-3, should latent traversals also be explored, to understand if the representation is disentangled?\n- In the fairness experiment ρ∗(Z, T) seems to get non-comparable results for the baselines. Should other better-performing methods be considered as baselines?\n One of the claims is scalability; however, a dedicated experiment demonstrating how the model scales compared to previous baselines is missing.\n" ]
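The author responses in this thread describe the sliced proxy only in words (random 1-D slices, degree-$K$ polynomial functions $h, g$ fitted in closed form by eigendecomposition, no gradient steps). Below is a minimal numpy sketch of that mechanism as we read it — the function names, the averaging over slices, and the regularisation constant are illustrative assumptions, not the paper's exact objective.

```python
import numpy as np

def poly_features(x, degree):
    # Standardise a 1-D projection and expand it with powers 1..degree,
    # a simple stand-in for the 1-D functions h and g.
    x = (x - x.mean()) / (x.std() + 1e-8)
    return np.stack([x ** k for k in range(1, degree + 1)], axis=1)

def max_canonical_corr(A, B, eps=1e-6):
    # Largest canonical correlation between feature matrices A and B,
    # obtained in closed form by whitening + SVD (no inner-loop training).
    A, B = A - A.mean(axis=0), B - B.mean(axis=0)
    n = A.shape[0]
    Caa = A.T @ A / n + eps * np.eye(A.shape[1])
    Cbb = B.T @ B / n + eps * np.eye(B.shape[1])
    Cab = A.T @ B / n
    wa, Va = np.linalg.eigh(Caa)
    wb, Vb = np.linalg.eigh(Cbb)
    M = (Va @ np.diag(wa ** -0.5) @ Va.T) @ Cab @ (Vb @ np.diag(wb ** -0.5) @ Vb.T)
    return np.linalg.svd(M, compute_uv=False).max()

def sliced_dependence(Z, T, n_slices=50, degree=3, seed=0):
    # Average the per-slice dependence score over random 1-D projections.
    rng = np.random.default_rng(seed)
    scores = []
    for _ in range(n_slices):
        u = rng.standard_normal(Z.shape[1]); u /= np.linalg.norm(u)
        v = rng.standard_normal(T.shape[1]); v /= np.linalg.norm(v)
        scores.append(max_canonical_corr(poly_features(Z @ u, degree),
                                         poly_features(T @ v, degree)))
    return float(np.mean(scores))
```

The point of the sketch is the source of the claimed scalability: each slice's score is computed analytically, so there is no inner adversarial loop to run for $L_2$ gradient steps.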
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "dPFdeme7-AF", "GFWg_R7mW0Z", "2ibZBv8ZAGc", "l4HbSze2QC-", "KBp8iWkiAAJ", "nips_2022_Ojakr9ofova", "3dWgIzjhqtE", "MzPMjm83zUp", "0yCXVwNjE9Y", "nips_2022_Ojakr9ofova", "nips_2022_Ojakr9ofova", "nips_2022_Ojakr9ofova" ]
nips_2022_OzbkiUo24g
Linear tree shap
Decision trees are well-known due to their ease of interpretability. To improve accuracy, we need to grow deep trees or ensembles of trees. These are hard to interpret, offsetting their original benefits. Shapley values have recently become a popular way to explain the predictions of tree-based machine learning models. It provides a linear weighting to features independent of the tree structure. The rise in popularity is mainly due to TreeShap, which solves a general exponential complexity problem in polynomial time. Following extensive adoption in the industry, more efficient algorithms are required. This paper presents a more efficient and straightforward algorithm: Linear TreeShap. Like TreeShap, Linear TreeShap is exact and requires the same amount of memory.
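For reference, the quantity all of these algorithms compute is the standard Shapley value. In common notation — not necessarily the paper's own — with $M$ the feature set and $f_S$ the model's expected prediction when only the features in $S$ are known, it reads

$$\phi_i(x) \;=\; \sum_{S \subseteq M \setminus \{i\}} \frac{|S|!\,(|M|-|S|-1)!}{|M|!}\,\big(f_{S \cup \{i\}}(x) - f_S(x)\big),$$

whose naive evaluation sums over $2^{|M|-1}$ subsets; TreeSHAP-style algorithms exploit the tree structure to avoid this exponential cost, and the complexity figures quoted in the reviews below ($O(LSD^2)$ versus $O(LSD)$) refer to such tree-specific evaluations.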
Accept
Shapley values are a common tool used for evaluating feature importance. In this work the authors present a way to accelerate the computation of these values when the model used is a tree or an ensemble of trees. The algorithm presented has linear computational complexity with respect to the maximal depth of the tree $D$, while previous algorithms had a computational complexity proportional to $D^2$ or even worse. The results are theoretically well grounded, and a small empirical study shows the merits do translate from theory to practice. There was a consensus among reviewers that this work presents a strong scientific contribution that is relevant to NeurIPS. Some comments were made about the presentation of this work, but all agreed that the merits outweigh these limitations and therefore we recommend accepting this work to NeurIPS. Nevertheless, we encourage the authors to take a close look at the comments made by the reviewers and try to improve the presentation for the camera-ready of this work. We think that improving the presentation will improve the potential impact of this work.
train
[ "HY6bStVUcF", "kRR8JpTRaQ", "CoFjWgrjo7S3", "FiQ3H_HFDBY", "2QhrYA20dRP", "yG61pcTtXhu", "AyIf5rs6eK", "oO3zrH5x17", "hbVQ7FQcVaH", "QHW_bhgkEQ9" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for their response. A couple additional thoughts:\n\n**About equation 11.** I get that the two steps are replacing $M$ with $F(R)$ and partitioning subsets based on their size. The part that would be nice to reproduce is how you derive the new weights for each $S \\subseteq F(R) \\setminus i$ in the updated summation: they change from $\\frac{1}{m} * \\binom{m - 1}{|S|}^{-1}$ in eq. 10 to $\\frac{1}{d} * \\binom{d - 1}{|S|}$ in eq. 11. It seems that this should be done by counting the ways to take $S \\subseteq F(R) \\setminus i$ and create subsets $W = S \\cup T$ for $T \\subseteq M \\setminus (F(R) \\cup i)$ with different sizes $|W| = h$, and then summing the weights across $h = |S|, |S| + 1, ..., m - (d - |S|) - 1$. This is non-trivial and readers should not be forced to derive it from scratch, please include it in the appendix.\n\n**About the name \"Linear TreeSHAP.\"** Like reviewer 1DKp, I'm not fond of the name as it does not reflect the new ideas in the algorithm or provide an accurate/complete description of the computational complexity. I don't have a specific recommendation or request, but I would encourage the authors to rethink the name.\n\n**About additional datasets.** This seems like a straightforward addition to the supplement that you can reference in the experiments section.\n\n**About the consistency between TreeSHAP variants.** This also seems like a straightforward addition to the supplement, and this would be more helpful than including it in the supplementary code. It probably doesn't need to be in the main text for this version or any future journal version of the work, but you could add it there and reference it in the experiments section.\n\n**Other presentation improvements.** Please see the \"weaknesses\" section of my review for a couple simple writing changes that would make the paper easier to follow.\n\nMy overall view of the paper remains positive, so I'm keeping my score as is.", " Thank you for your detailed review comments. We will adopt all the suggestions on missing experiment details and fix all the typos in the final version.\n\nFor your interest, the code to reproduce the experiment and the actual running time is shared in the supplementary material. \n\nAnd here are the answers to the specific questions \n\n- How does Fast TreeShap v2 compare against Linear TreeShap when the number of samples to be explained increases?\n\n\nThere shouldn't be any different from the reported. As all the methods are parallelized on the instance level. And in our reporting, the precomputing time of Fast TreeShap v2 is omitted from the comparison. ", " 1, Effect of unused feature\n\nYou are absolutely right. Once the decision tree model is fitted, the information on unused features is lost. And the Shapley value is only defined on the fitted model instead of the fitted model and training dataset. To address this limitation, we need to redefine the Shapley value. Even though it's out of the current paper's scope, but surely worth our future study. \n\n2, Weights balancing\n\nThis is also very similar to the first question; we need to redefine the Shapley value of the decision tree. Also, an exciting future research direction. Thank you for your brilliant question/direction :) ", " Thanks for your detailed review. We will incorporate the minor comments in the final version. \n\nAnd here are our answers to your questions\n\n- Derive equation 11 from equation 10:\n\nWe combined two steps into one to save space. 
The first step is to replace M with F(R): since m is the number of features in the dataset and d = |F(R)| is the number of features in the specific decision rule, we can ignore features not specified in the rule.\nThe second step is to partition all the subsets of F(R) based on size, ranging from 0 to d-1. \n\n- Motivation for the name \"linear TreeSHAP\":\n\nYes, the motivation is the reduced dependence on D in the complexity analysis.\n\n- More intuition for how the approach works:\n\nThe Shapley value of a decision tree is defined combinatorially. We utilize the connection between polynomials and combinatorics, together with a caching-friendly tree structure. \n\n- Some discussion about how our approach differs from TreeSHAP and FastTreeSHAP in its derivation and the techniques involved:\n\nIn the framework of Linear TreeSHAP, recall that Linear TreeSHAP performs a top-down and a bottom-up tree traversal; TreeSHAP essentially only performs a top-down tree traversal and then enumerates all the leaf-to-root paths independently. As for FastTreeSHAP, it does almost the same as TreeSHAP, except that it enumerates all the possible paths at the beginning. \n\n- The results are quite positive. Would it be possible to show how the different algorithms improve when parallelizing across multiple cores?\n\nTypically, the parallelization is on the instance level. With multiple cores/machines, we should see a linear speedup proportional to the number of cores for all the methods compared. Even for FastTreeSHAP v2, the pre-computing time is omitted from the comparison. \n\n- Would it be possible to add a couple more datasets, for example those shown in the FastTreeSHAP work?\n\nYes, of course we can, and there shouldn't be any difference. Due to limited space, we didn't add more, but we would probably extend this paper in some journal format and include more datasets. Also, there are aspects of the numerical stability of the different methods that we didn't have space to elaborate on. \n\n- Because the math is difficult to verify, it would be very helpful to show experimentally that the three algorithms (TreeSHAP, FastTreeSHAP, Linear TreeSHAP) yield identical results.\n\nYes, we did verify that all of them match up to 5-digit precision, and the code and results are provided in the supplementary materials. \n\n\n\n\n", " Adding to TreeShap or simply an independent GitHub repo is still not decided, but we are open to suggestions. ", " The paper introduces \"Linear TreeSHAP\", a novel method to compute exact Shapley values for decision trees. The method leverages observations from the well-known and widely used TreeSHAP method, and utilizes properties of polynomials to reduce the complexity of TreeSHAP from $O(LSD^2)$ to $O(LSD)$, where $D$ is the max depth of the tree. Linear TreeSHAP is empirically compared to TreeSHAP and its more efficient variant FastTreeSHAP, and shows significant improvement in run time for deep decision trees. Strengths\n\nShapley-value-based feature importance methods are widely used both in academia and in industry. Since computing the exact Shapley value is computationally hard, introducing methods that can efficiently calculate the exact values is of high importance. This paper introduces such a solution for decision trees, which are commonly used models, especially in scenarios where explainability is required. 
The algorithm is based on a grounded theory that guarantees linear run time for any tree, and it is shown both theoretically and empirically to outperform existing methods.\n\nI also think that the authors did a good job of explaining the background and simplifying the intuition behind the algorithm.\n\nWeaknesses:\n\n* I think that the paper is worth publishing as is, but since its main potential impact is to practically allow computation of Shapley values on trees, providing more information about its limitations and assumptions might help practitioners to use it more wisely (I provide some examples in the questions section).\n* In the experiments, it seems like Linear TreeSHAP was shown to significantly outperform FastTreeSHAP only in cases where FastTreeSHAP V2 falls back to FastTreeSHAP V1 due to memory limitations. In my opinion, it is important to show at which point Linear TreeSHAP outperforms FastTreeSHAP V2 when there are no memory limitations. I would like to hear the authors' opinions on two possible limitations of the method:\n1. $\textbf{the effect of unused features}$ - consider the tree described in figure 1, and assume that there is another feature provided to the model - \"season\". Also, assume that we fit a tree with a low depth and that the fitted tree does not contain the season feature at all. When evaluating $E(f(x)| x_{(season, cloudy)})$, the method will assume 0.5 prob. of temperature to be above 19, and 0.5 prob. to be below 19. However, it is very likely that $P(temperature < 19| season) \neq P(temperature < 19)$, so even though the model does not explicitly use season, the conditional probability-based feature elimination process that estimates $\phi(\{season, cloudy\})$ should be affected by the knowledge that the season feature provides.\n\n2. $\textbf{weights balancing}$ - When training decision tree classifiers in practice, it is very common to balance the weights between the different classes. When doing so, the weights no longer represent the conditional distribution of the samples in the train data, and therefore using these weights to evaluate $f(x_s)$ might be inaccurate. I wonder if there is another interpretation of the results yielded in this scenario and if there is a way to use the training data distribution to overcome this issue. \n\nDespite the fact that both limitations also apply to the original TreeSHAP, I believe that highlighting them in the paper will allow further research and prevent misuse of the method. It would be helpful if the authors mentioned some limitations of their work, especially about how to interpret the output of their method. For example, it is important to mention that this method only explains the tree, and should be used carefully when trying to infer causality or conclusions about the structure of the data.", " This paper proposes an efficient implementation of an algorithm to compute Shapley values from decision trees. This implementation is based on polynomial arithmetic. With respect to existing algorithms, it reduces the dependency of computational complexity on tree depth from D^2 to D, while it remains exact.\n Reducing the computing times of exact TreeShap is a very nice result given the desirable theoretical properties of this metric and its popularity. So, I think the contribution of the paper is important. 
The derivation of the algorithm, based on polynomial arithmetic, is very clever and looks sound to me (although I didn't check every step of the proofs and I'm also not an expert in polynomial arithmetic). In the end, the final algorithm remains rather simple to implement.\n\nThe presentation in the paper could be improved, however. The introduction is very minimal and seems to have been rushed. Section 1.1 presents the computational complexity of all compared methods without any discussion. I think that Linear TreeShap should be better contrasted with respect to previous results. The paper should incorporate a more detailed presentation of related works.\n\nIn contrast, I find the description of the proposed algorithm rather clear and accurate, despite a few typos (see below). The pseudo-code is also clear. I think however that some proofs could have been moved to the supplement to gain space for more discussion in the introduction and experimental section.\n\nThe experiments are very minimal and a bit rushed again. I understand that the computational complexity of the proposed algorithm is strictly better than all other proposals, without any sacrifice in terms of exactitude and space complexity. But still, I would have been interested in an assessment of the impact of more parameters on performance, in addition to tree depth. For example, I see that Fast TreeShap v2 carries out some pre-computations to improve computing times when several explanations are computed. How does this method compare against Linear TreeShap when the number of samples to be explained increases? Reporting actual computing times would have been interesting as well.\n\nSome details are also missing: the size of the test set used on both datasets, and how the trees have been constructed (are they single trees, forests, boosted ensembles?). \n\nMinor comments: \n- I have no better name to suggest but I'm not fond of the name of the method, \"linear TreeShap\". Note that strictly the dependence is not linear with respect to tree complexity or depth, because it depends on the product LD.\n- It should be mentioned in the paper that the proof of proposition 2.1 is in the appendix.\n- Some typos:\n - Line 50: \"tail and head e\" => of e.\n - Line 88: \"equalivant\"\n - Line 94: Use $R^v$ instead of $R$?\n - Eq (2): \"v\" is used both to denote a leaf and its value. Maybe a different notation should be used.\n - Line 159: \"last edge of in $P_{i,v}$\".\n - Line 160: \"any $p_e$ does not\" => \"any $p_e$ that does not\".\n - Line 204: \"to to\".\n - There are many problems in the references (authors list in [2], name of proceedings is cut in [4], no journal in [7], [9] is a reference to a review of Quinlan's book, not to Quinlan's book).\n\n I would suggest that the authors rewrite their introduction, improve the experimental part, and add a conclusion.\n I don't see any negative societal impacts of the work (since it aims at improving model interpretability). There is no discussion of the limitations of the approach. Given the nature of the contribution, a strict improvement of some existing work, it's not really a problem though.\n", " - The authors proposed a novel algorithm that improves the computational complexity of TreeShap with the same amount of memory. \n- TreeShap has been a popular algorithm for understanding ensembles of decision trees. The proposed algorithm will have some applications for large-scale data. \n Strengths:\n- The correctness of the algorithm is rigorously verified. 
\n- The improvement in computation is significant. \nWeaknesses:\n- The algorithm is a specific refinement to an algorithm in a relatively narrow area. But this is not a reason for rejection. - Will the authors release the code and add it to the official TreeShap github? The area is relatively narrow. ", " This work considers the problem of calculating Shapley values for tree-based models. Without a clever algorithm, calculating Shapley values has exponential running time, but TreeSHAP introduced a procedure for making the running time polynomial. The approach presented here, Linear TreeSHAP, aims to design a procedure that is even more efficient (without compromising on memory). \n\nThe approach is quite technical, but if the authors' results are correct, then Linear TreeSHAP achieves exact Shapley value calculation with better time complexity in theory (by a factor of $D$, the maximum tree depth) and a noticeable speedup in practice (which grows larger with deeper trees). The tools used here are interesting, and to my knowledge novel in this subfield. ### Strengths\n\n- The authors present a new procedure for calculating Shapley values for tree-based models. They provide results showing that their algorithm is exact (it involves no statistical estimation) and efficient (the time complexity analysis yields a linear rather than quadratic dependence on the tree depth). The speedup is reflected in practice\n- The authors make use of interesting technical tools (summary polynomials) to design their approach\n\n### Weaknesses\n\nThe presentation was difficult to follow. This is perhaps inevitable for a technical approach like this, but there were a couple places where it could be improved:\n- $m$ and $N$ are both used to denote the number of features\n- The process of linearizing a tree could be described specifically around lines 91-94, as it's quite simple - i.e., there's a single decision node with a split based on all edges in $P_v$, with the prediction being either the original node prediction or zero\n- \"Marginal prediction\" is a confusing name for $q_{i, v}(x)$. That name sounds like \"marginal contribution\" from the game theory context, but to get the marginal contribution we actually need $R_S(x)(q_{i, v}(x) - 1)$, so the multiplier itself is not the marginal contribution\n- In equation 6, this isn't a completely obvious result so it may be helpful to include something like \"according to this definition of $R_S$, we have the following equivalence with the previously defined $f_S$\"\n- I attempted to verify the math up until a certain point, and I found equation 11 somewhat difficult to derive from equation 10. I don't think this result is described in the main text or appendix, could it be added?\n\nSome other notes on presenting the approach:\n- What's the motivation for the name \"linear TreeSHAP,\" is it the reduced dependence on $D$ in the complexity analysis? \n- In the introduction, would it be possible to give more intuition for how the approach works instead of \"we solve the exact Shapley value computing problem based on polynomial arithmetic\" ? I'm not sure how I would refine this, but it's a pretty vague description of what's presented in later sections\n- Could the authors add some discussion about how their approach differs, in its derivation and the techniques involved (rather than just the run-time), from TreeSHAP and FastTreeSHAP? 
For readers who aren't familiar with the details of those algorithms, even a high-level discussion would be helpful\n\nAbout the experiments: \n- The results are quite positive. Would it be possible to show how the different algorithms improve when parallelizing across multiple cores? \n- Would it be possible to add a couple more datasets, for example those shown in the FastTreeSHAP work?\n- Because the math is difficult to verify, it would be very helpful to show experimentally that the three algorithms (TreeSHAP, FastTreeSHAP, Linear TreeSHAP) yield identical results\n\nNits: \n- On line 5, \"provides a linear weighting\" is an unusual description for SHAP. In what sense is it linear? The attributions are perhaps *additive*, in that they sum to the prediction (minus the base rate prediction) and are derived from a weighted least squares problem\n- On lines 22-23: these criticisms of prior work don't make sense. How are either GPUTreeSHAP or FastTreeSHAP lacking mathematical foundation? GPUTreeSHAP is based on the original TreeSHAP algorithm, and FastTreeSHAP is mathematically justified as well. How are they \"empirical\" approaches? That sounds like it means heuristics that aren't necessarily correct or well understood, which isn't true here. And how are they harder to understand than Linear TreeSHAP? Several questions about the presentation and experiments are included above. There are no negative societal impacts for this work.\n\nThe authors give a detailed description of their algorithm's run-time and memory complexity, describing the most expensive computations. The only additional material that might highlight limitations would be an expansion of the experiments section.", " The authors present a new exact method to compute Shapley values for decision trees that can be computed in O(SLD) where S is the number of samples, L is the number of leaves and D is the maximum depth of the tree. This compares to previous exact methods operating in O(SLD^2). The authors provide both the mathematical proof for their method as well as empirical comparisons to previous algorithm to compute Shapley values. The paper is well-written and motivated and provides a clear improvement over previously available methods to compute Shapley values for decision trees. Decision trees are an important non-linear prediction tool and Shapley values integral in analyzing their fit and selecting variables. Hence, I believe this paper makes a significant and original contribution to the field. I have no further questions or suggestions for this work. There are no obvious negative societal effects of this work." ]
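A note on the equality check requested above: because exact Shapley values have a closed combinatorial definition, any fast tree implementation can be validated against a brute-force oracle on tiny trees. The sketch below is our own illustrative stand-in, not the authors' released verification code; the helper names (`tree_expected_value`, `shapley_oracle`) are hypothetical, and it assumes the path-dependent value function v(S) = E[f(x) | x_S] is estimated, as in TreeSHAP-style methods, by descending the tree and averaging children by training-sample weights whenever the split feature is not in S.

```python
# Brute-force Shapley oracle (exponential in the number of features): only for
# sanity-checking fast implementations (TreeSHAP, FastTreeSHAP, Linear TreeSHAP)
# on small trees, e.g. with numpy.allclose to ~5-digit precision.
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.tree import DecisionTreeRegressor


def tree_expected_value(model, x, S, node=0):
    """E[f(x) | x_S]: follow splits on features in S, average out the rest."""
    t = model.tree_
    if t.children_left[node] == -1:          # leaf node
        return t.value[node][0][0]
    f, left, right = t.feature[node], t.children_left[node], t.children_right[node]
    if f in S:                               # conditioned feature: follow the path
        nxt = left if x[f] <= t.threshold[node] else right
        return tree_expected_value(model, x, S, nxt)
    wl, wr = t.weighted_n_node_samples[left], t.weighted_n_node_samples[right]
    return (wl * tree_expected_value(model, x, S, left)
            + wr * tree_expected_value(model, x, S, right)) / (wl + wr)


def shapley_oracle(model, x, m):
    """Exact Shapley values phi_1..phi_m by direct subset enumeration."""
    phi = np.zeros(m)
    for i in range(m):
        rest = [j for j in range(m) if j != i]
        for k in range(m):
            for S in combinations(rest, k):
                w = factorial(k) * factorial(m - k - 1) / factorial(m)
                phi[i] += w * (tree_expected_value(model, x, set(S) | {i})
                               - tree_expected_value(model, x, set(S)))
    return phi


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(200, 4)), rng.normal(size=200)
    model = DecisionTreeRegressor(max_depth=3).fit(X, y)
    phi = shapley_oracle(model, X[0], m=4)
    # Efficiency property: attributions sum to f(x) - E[f(x)].
    base = tree_expected_value(model, X[0], set())
    assert np.isclose(phi.sum(), model.predict(X[:1])[0] - base)
```

The efficiency assertion holds for any value function by the algebra of the Shapley formula, so it is a useful smoke test even before comparing implementations pairwise.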
[ -1, -1, -1, -1, -1, 7, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 4, 3, 3, 4, 3 ]
[ "FiQ3H_HFDBY", "AyIf5rs6eK", "yG61pcTtXhu", "hbVQ7FQcVaH", "oO3zrH5x17", "nips_2022_OzbkiUo24g", "nips_2022_OzbkiUo24g", "nips_2022_OzbkiUo24g", "nips_2022_OzbkiUo24g", "nips_2022_OzbkiUo24g" ]
nips_2022_L7P3IvsoUXY
CATER: Intellectual Property Protection on Text Generation APIs via Conditional Watermarks
Previous works have validated that text generation APIs can be stolen through imitation attacks, causing IP violations. In order to protect the IP of text generation APIs, recent work has introduced a watermarking algorithm and utilized the null-hypothesis test as a post-hoc ownership verification on the imitation models. However, we find that it is possible to detect those watermarks via sufficient statistics of the frequencies of candidate watermarking words. To address this drawback, in this paper, we propose a novel Conditional wATERmarking framework (CATER) for protecting the IP of text generation APIs. An optimization method is proposed to decide the watermarking rules that can minimize the distortion of overall word distributions while maximizing the change of conditional word selections. Theoretically, we prove that it is infeasible for even the savviest attacker (they know how CATER works) to reveal the used watermarks from a large pool of potential word pairs based on statistical inspection. Empirically, we observe that high-order conditions lead to an exponential growth of suspicious (unused) watermarks, making our crafted watermarks more stealthy. In addition, CATER can effectively identify IP infringement under architectural mismatch and cross-domain imitation attacks, with negligible impairments on the generation quality of victim APIs. We envision our work as a milestone for stealthily protecting the IP of text generation APIs.
Accept
The authors propose a watermarking technique (CATER) to claim ownership of text generation APIs in the presence of imitation attacks. Their main idea is based on the observation that, in the state of the art, an adversary's odds of learning the watermark increase by analyzing the word frequency in API responses as well as publicly available data. To remedy this, CATER conditionally watermarks the response to prevent the adversary from deciphering the watermarking keys. Reviewers found the topic of the paper timely, its writing clear, and the overall contribution sound and of interest to the community.
train
[ "2oFl72mOWjr", "szkVDD0Xa7E", "v7V0fWYCio8", "F4CqGaYKtE", "F_3t2JOyHUN", "WFsXxdRMQ8X", "HRantg3xX_J", "NIOM37eSuRr", "4s38ngKXvD9", "1aiX3vg3uqK", "Nz4g6yupWxg", "oCljCNcEf_a", "Z8_LU853CFs", "vP-tUydhfu", "9qTbOLGogi8", "2gy2pY_7QCr" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to appreciate the reviewer’s encouraging comments and positive feedback, which has helped us polish our submission.", " We would like to appreciate the reviewer’s invaluable feedback, which has helped us improve our submission.", " Review EoSA here. Thanks a lot for clarifying my questions and confusions (particularly on the related works) about this paper. I also quickly went through other reviews and your rebuttal. Given my relatively limited expertise in this field (comparing to other reviewers), I would like to raise my score to 6.", " I would thank the authors for their detailed response. I choose to maintain my score.", " Given your lack of response to the authors, despite their best efforts to engage you, I am not sure how to interpret your stance.\nCould you please clarify your position on the paper? Thank you, AC", " We would like to thank the reviewer for taking the time to review our paper and the valuable feedback, and in particular for admitting our work with a valuable research direction, good motivation, and sound theoretical proof.\n\nWe hope our response has adequately addressed your concerns regarding more experiments on other text generation tasks, adding more details in figures, and discussing collusion attacks.\n\nWe are more than happy to respond to any further questions if you still have concerns. We truly appreciate your valuable feedback and comments that help us further improve our work.", " Thanks again for your valuable detailed comments.\n\nWe have clarified the research motivation and technique novelty of our work in the previous response. In addition, following your suggestion, we have compared our approach with the previous work in terms of stealthiness. Finally, we discussed the feasibility of cracking our approach using a strong watermark removal technique for white-box models.\n\nWe are more than happy to respond to any further questions if you still have concerns. We truly appreciate this opportunity to improve our work and shall be most grateful for any feedback.", " We would like to thank you for your insightful suggestions and encouraging feedback. We hope the additional experiments and our clarifications can consolidate our work.\n\n---\n**Q1:** Is the framework also working for other text generation tasks? \n\n**A1:** Thanks for the suggestion. To examine the generality of our approach by running two additional generation tasks: 1) text simplification and 2) paraphrase generation. We use wiki-large data [1] for text simplification, and QQP data [2] is used for paraphrase generation. Similar to machine translation, we use Transformer base as the backbone. Following [1, 2], we use SARI and BLEU to evaluate the generation quality of text simplification and paraphrase generation, respectively.\n\n| [Text Simplification] | p-value $\\downarrow$ | SARI $\\uparrow$ | BERTScore $\\uparrow$ | [Paraphrase] | p-value $\\downarrow$ | BLUE $\\uparrow$ | BERTScore $\\uparrow$ |\n| :--- | :----: | :----: | :---: | :---: | :----: | :----: | :---: |\n| w/o watermarking | > $10^{-1}$ | 37.1 | 72.5 | w/o watermarking | > $10^{-1}$ | 32.1 |72.4 | \n| CATER | **< $10^{-6}$** | 37.0 | 71.6 | CATER | **< $10^{-6}$** | 32.1 | 72.4|\n\nAs shown in the table above, CATER is effective on those tasks as well. 
We are optimistic that our approach could be generalized to many other NLG tasks, as our approach is conducted on general languages.\n\nWe hope these results have addressed your concerns.\n\n---\n**Q2:** Update fig 2 to match the explanation\n\n**A2:** Thanks for the suggestion. We have updated Figure 2 and the corresponding description accordingly.\n\n---\n**Q3:** Elaboration on Figure 4\n\n**A3:** The x-axis indicates the orders of the POS condition. As described in **section 3.2**, the first order means the POS of the left word of the target word, the second order means the POSs of the left and right words of the target word, and the third order means the POSs of the second left, left and right words of the target word. The orange line indicates an imitation model using the clean response from the victim model, whereas the blue line represents the watermarked imitation model.\n- The left figure shows that with the increase in the condition orders, the BLEU scores (or the generation quality) do not suffer a performance drop.\n\n- The right figure suggests that with the increase in the condition orders, the p-values of watermarked models become larger, which means that the claim about IP violation is less confident. However, the watermarked models are still detectable compared to the clean model. Please note that for the sake of fair comparison, the p-values of the clean model are calculated w.r.t the corresponding order. We have updated the description in our revised version between line 288 and line 293 as well. \n\n---\n**Q4:** Can CATER address collusion attacks? For example, there are two victim models using CATER for watermarking. \n\n**A4:** Thanks for the suggestion. For the case that the attacker imitates multiple victim models protected by CATER, we believe that the collusion is very rare in the real-world setting due to:\n1. CATER uses a very tiny number of words for watermarks. In our experiments, only $3.8*10^{-5}$ of the words are watermarked. Thus, it is unlikely that multiple victims share the same watermarks.\n2. As shown in **section 4.2**, the number of conditions could grow to infinite.\n\nTo conclude, victim models can claim their ownership properly without confusing each other.\n\nWe hope this has addressed your concern about collusion attacks.\n\n----\n[1] Sentence Simplification with Deep Reinforcement Learning. Zhang et al. EMNLP 2017\n\n[2] Hierarchical Sketch Induction for Paraphrase Generation. Hosking et al. ACL 2022\n", " We would like to thank you for your valuable and encouraging feedback. We consolidate our work by i) adding an emergency human evaluation and ii) more explanation to our paper.\n\n---\n**Q1:** would it be possible to conduct a human evaluation before concluding that the watermarks do not affect generation quality?\n\n**A1:** Thanks for the valuable comments. Due to the short window for the rebuttal, we sample 100 instances to inspect their semantics before and after watermarking and score them with a 5-point Likert scale (1: strongly disagree, 5: strongly agree). The average score of these instances is 3.9. Thus, we believe the watermarks have minor effects on generation quality. 
We also demonstrate several examples to show the minor modification by CATER.\n\nExample 1: (area->region)\n\n- *I ask the Commission : what can be done to speed up implementation in this particular area ?*\n\n- *I ask the Commission : what can be done to speed up implementation in this particular region ?*\n\nExample 2: (information->data)\n\n- *There are various things that can undermine consumer confidence , for example the lack of information .*\n\n- *There are various things that can undermine consumer confidence , for example the lack of data .*\n\nFinally, we have asked external annotators to evaluate the data quality, and we will include their evaluation in our paper soon.\n\n---\n**Q2:** elaborate on the calculation of p-value on CATER\n\n**A2:** Given a group of semantically equivalent words $\\mathcal{W}^{(i)}$ and the corresponding condition $c$, we denote $w_{c}^{(i)}$ as a basic unit, which depicts $w^{(i)}$ under the condition of $c$. If the conditional post-watermark distribution $\\hat{P}(w^{(i)}|c)$ is $1$ according to our algorithm, we consider $w_{c}^{(i)}$ as a watermark. Now, given a set of groups $\\mathcal{G}=\\\\{\\mathcal{W}^{(i)}\\\\}_{i=1}^{|\\mathcal{G}|}$,\n\nwe can find all watermarks and denote them as $\\mathcal{M}$. We use $\\\\#(\\mathcal{M}, \\mathcal D_{tr})$ to represent the count of words in $\\mathcal{M}$ appeared in the training data $\\mathcal D_{tr}$ of the victim model. Similarly, we denote the count of all candidate words in $\\cup_i \\mathcal{W}^{(i)}$ as $\\\\#(\\cup_i \\mathcal{W}^{(i)}, \\mathcal D_{tr})$. Finally, the approximated $p$ in Equation~1 for CATER can be computed as: $$p=\\frac{\\\\#(\\mathcal{M}, \\mathcal D_{tr})}{\\\\#(\\cup_i \\mathcal{W}^{(i)}, \\mathcal D_{tr})}$$\nThen, we use Equ 1 (line 103) to calculate the p-value with $p$.\n\nWe have modified the corresponding descriptions in Section 3.1 and Appendix D.\n\nWe hope the description will clarify the calculation for you.\n\n---\n**Q3:** probably the authors could try a substitution-based removal technique.\n\n**A3:** Thanks for the suggestion. To the best of our knowledge, there is no existing study for removing substitution-based data poisoning. Thus, we resort to insertion-based removal. We agree that this would be an interesting study, and we will leave it to our future work, when we find appropriate off-the-shelf substitution-based removal tools.\n", " We would like to thank you for your valuable and encouraging feedback. We consolidate our work with the following experiment and explanation.\n\n---\n**Q1:** What if the adversary only uses the part of the API response\n\n**A1:** We add a new experiment with the attacker trained on mixed API responses for the translation task, with X% of data watermarked, and (1-X)% of data not watermarked.\n\n| Watermark Percentage (X%) | p-value | BLUE | \n| :--- | :----: | :----: | \n|20| 31.1 | > $10^{-1}$ |\n|40| 31.1 | > $10^{-1}$ | \n|60| 31.0 | < $10^{-4}$ | \n|80| 31.0 | < $10^{-7}$ | \n|100| 30.8 | < $10^{-7}$ | \n\nAs shown in the table above, CATER is effective when more than 60% of data is watermarked. This means that at least 40% of data should not be from the (cheap) SOTA API. 
If looking for human annotation, it would cost $1.3M for 1M samples in our translation experiments according to the cost estimation in [1].\n\nWe hope our experiments have consolidated our work.\n\n---\n**Q2:** Elaborating on Figure 5 and its interpretation.\n\n**A2:** As described in **Watermarking Algorithm Leakage**, we assume that attackers know the dictionary and conditions we used but not the exact watermarks. Then they can use this knowledge to find a list of suspicious groups (or entries). Each group meets the rules we used: 1) all words in this group are semantically equivalent, 2) except for one word, the occurrences of other words are zeros in the watermarked corpus. The number of these groups of each condition (or order) is the y-axis of the orange line. For the upper bound, as described in **section 3.3**, the total number of possible entries is $|\\mathcal{C}|=|\\mathcal{F}|^K$ times the number of words.\n\nWe hope we have clarified the statements for Figure 5 and section 4.2.\n\n---\n**Q3:** The authors only study CATER for the English-centric datasets. Probably, the authors could extend CATER to other languages in the future.\n\n**A3:** Thank you for the suggestion. As a pioneer work, we investigate the effectiveness of our approach on English datasets. We are optimistic about adapting our approach to other languages because all the linguistic features used in this work, i.e., POS tags [2] and dependency tree [3], could easily be acquired for many other languages, such as German, French, etc.\n\n---\n[1] Beyond Model Extraction: Imitation Attack for Black-Box NLP APIs. Xu el al. 2021\n\n[2] FLAIR: An Easy-to-Use Framework for State-of-the-Art NLP (Akbik et al., NAACL 2019)\n\n[3] UDapter: Language Adaptation for Truly Universal Dependency Parsing (Üstün et al., EMNLP 2020)\n\n", " **Q4:** The details of cracking watermarks used by [B].\n\n**A4:** After collecting the watermarked corpus from the victim model, we randomly select 5M sentences from common crawl data, which do not overlap with the training data of the victim model. We denote this dataset as the benign corpus. Then we obtain the word distribution for the watermarked and benign corpora, respectively, denoted as $P_w$ and $P_b$. Next, we take the union of top 100 words of both watermarked and benign corpora to obtain the suspicious words $S$. Now, we can calculate the ratio change of word frequency of each word in $S$ and plot them in Figure 1. It is clear that watermark words have the most significant ratio change. We have added the description to Appendix E in our modified draft. \n\nWe hope the above description explains our word frequency analysis and solves your concerns.\n\n----\n**Q5:** According to the article provided below, it seems to be possible to modify the watermarks without much loss of utility. “Cracking White-box DNN Watermarks via Invariant Neuron Transforms.”\n\n**A5**: We would like to thank the reviewer for providing this pointer. Cracking all possible watermarks is not a wise choice for attackers, given the conditions are invisible to them, because they would probably have to modify all possible watermark words in the substitution set (in total 200 words with high frequency). There are several differences making it implausible to be adapted to our work:\n1. They are under the white-box setting, while NLG APIs are black-box to attackers;\n2. Their approach works on continuous space, while our watermarks conduct on discrete signals, i.e. word sequences. 
In other words, our watermarks become like task-relevant patterns, which force the imitation models to learn them. If the models are unable to reproduce the watermarks, then they cannot conduct the generation task properly. \n\nAlthough this paper was available on arXiv in May, parallel to our submission, we are happy to incorporate the corresponding discussion into our work.\n\n---\n[A]: Killing One Bird with Two Stones: Model Extraction and Attribute Inference Attacks against BERT-based APIs. https://arxiv.org/abs/2105.10909\n\n[B]: Protecting Intellectual Property of Language Generation APIs with Lexical Watermark. AAAI 2022", " Thanks for your valuable and constructive feedback. We hope the following clarifications will address your concerns.\n\n---\n**Q1:** Research motivation is questionable, and technical novelty.\n\n**A1:** We respectfully argue that our work is well-motivated and novel to the research on watermarking for NLG.\n\nFirstly, with regard to *research motivation*: \n1. We observed and unveiled the drawback of the existing state-of-the-art NLG watermarking approach, which is vital to security research. Specifically, one can identify the watermarked words by comparing the word distributions of the watermarked corpus and an external benign corpus. Consequently, real-world attackers will endeavor to remove the watermarks to be exempt from the watermark verification. Our work should not be undervalued merely because *we prevent problems before they happen*. \n2. Our research not only exposes risk but also proposes a defense solution. We hope the motivation for defending technology is also appreciated.\n\nSecondly, with regard to *technical novelty*:\n\n1. *[Optimization Technology]:* Different from the heuristic approaches by [B], we propose to cast the watermarking process into an optimization problem. Solving this optimization problem guarantees that our approach can watermark imitation models and is robust to various adaptive attacks, both empirically and theoretically. Such a conversion and the corresponding optimization method are novel, as acknowledged by the other three reviewers. \n2. *[Linguistic Feature]:* The usage of linguistic features in our work is totally different from that in [B]. [B] utilized **synonym** and **British/American spelling** for deciding *the word substitution sets*, while we use compositional **POS** and **DEP** features for deciding *the condition of substitution*. We consider both the technology and the purpose to be different from [B].\n3. *[Performance]:* Our approach achieves on-par performance with [B]. Moreover, the new watermarking approach is more invisible and resilient to different adaptive attacks, even when the attackers hold strong prior knowledge about the details of our defense, as shown in Section 4.2. In addition, we also provide theoretical proof in Section 3.3 to consolidate the argument for the robustness of our approach.\n\nFinally, with regard to *research scope*: Our work is not simply proposing a watermarking approach that is more invisible than [B]. Our method is provably able to expand the extremely imbalanced condition space to infinity. Empirically, it achieves inspiring results using finite orders of linguistic feature combinations.\n\n---\n**Q2:** Justifying the strength of our defense under [A].\n\n**A2:** Thanks for the pointer. To the best of our knowledge, [A] aims to infer private attributes of the inputs of classification tasks, and the private attributes are irrelevant to the classification labels. 
However, this setting is not feasible to directly apply to text generation tasks, as all input information tends to be task-relevant. Specifically, for machine translation, the translation model must translate all information from the source language to the target language.\n\n---\n**Q3:** Direct comparison to [B] in terms of stealthiness.\n\n**A3:** From our point of view, the stealthiness is decided by whether the watermark can be detected by attackers. Given such a definition, we have analyzed the stealthiness of [B]. Figure 1 shows that attackers could identify watermarks by analyzing the word distribution. In contrast, our approach is motivated and designed to minimize the word distribution shift. We have added a new figure to demonstrate the ratio change of word frequency of the top 100 words between the benign and watermarked (CATER version) corpora in the revised version (*see Figure 7 in our modified draft*). All conditionally watermarked words are robust to the word frequency attack.\n\nMoreover, we would like to highlight another property of our conditional watermark method. Because it selects a fraction of words for substitution, the total number of watermarked words should be smaller than the number in [B]. Therefore, i) we cannot compare the p-value directly with methods that use more test samples, but our method achieves a decent p-value which is enough for verifying API ownership (see p-value in Table 1); ii) fewer watermarked words mean less harm to the semantics of the original outputs (see BLEU/ROUGE/BERTScore in Table 1).", " \nThe paper proposes a new watermarking framework for text generation APIs. Compared with the existing framework, the new framework increases stealth by minimizing the distortion of overall word distributions and incorporating high-order linguistic features. The experiments show that the new framework can identify imitation models with less capability than the existing method while keeping the utility. However, it is proven to tolerate adaptive attacks, so it can be applied to protect text generation APIs better. pros:\n\n- an important research problem.\n- the proposed approach is carefully justified from both theoretical and empirical perspectives.\n- high-quality paper writing.\n\nCons:\n\n- Research motivation is questionable. Is there any concrete attack toward [B]?\n- lack of evaluation under active attackers. Can you justify the strength of your defense under [A]?\n- Novelty compared to [B] is incremental. You achieved nearly the same performance as [B] in Table 1. Moreover, some (broken) texts in the current manuscript are from [B]. The authors should take a careful pass to paraphrase the paper.\n\n[A]: Killing One Bird with Two Stones: Model Extraction and Attribute Inference Attacks against BERT-based APIs. https://arxiv.org/abs/2105.10909\n\n[B]: Protecting Intellectual Property of Language Generation APIs with Lexical Watermark. AAAI 2022\n\nThe new framework is proposed to improve the stealth of the existing method. They differ in the optimization method and the linguistic features used for conditions. The optimization method is creative. The design of the linguistic features is adopted from another existing method.\n\nIn terms of quality, the paper provides sufficient theoretical analysis to prove the feasibility of the new framework. In addition, there are empirical experiments conducted to verify the ability of the new framework. However, there is no direct comparison between the stealthiness of the new framework and the existing framework. 
Please justify if that was impossible or not needed.\n\nTechnical novelty and empirical results compared to [B] seem incremental, particularly from Table 1. The research motivation is also questionable. The improvement is based on the defense against a specific method of attack. However, there is no concrete instance of such an attack. The paper also lacks a deep description of this attack method, including the process of distorting the result of identification and the performance of the attack. Without concrete attack information, the application of the new method seems to be less urgent, in its current form.\n Comment on the availability of practical attacks toward [10].\n\nIn section 4.2, you state that \"malicious users would have difficulty in removing watermarks from the responses; unless they lean toward modifying all potential watermarks.\". According to the article provided below, it seems to be possible to modify the watermarks without much loss of utility. Is that a threat to your framework? Please clarify.\n\n[A] Cracking White-box DNN Watermarks via Invariant Neuron Transforms. https://arxiv.org/abs/2205.00199\n NA", " This paper proposes a watermarking technique to claim ownership of text generation APIs when subject to imitation attacks. They show that prior work manages to watermark and detect imitation behaviors, but a weakness exists. After analyzing the word frequency in API responses and publicly available data, the adversary can identify the watermarks with a higher chance. Based on this observation, they devise a novel method, CATER, which can conditionally watermark the response so that the adversary cannot decipher the watermarking keys. Meanwhile, CATER can identify the watermarked imitators as effectively as the previous work.\n\nCATER leverages two linguistic features to fulfill the conditional watermarking: 1) neighboring part-of-speech tags and 2) dependency relations. They work well on two popular text generation tasks. They show that CATER can scale to high-order settings, which are empirically challenging for attackers to identify. According to their experiments, the proposed watermarks have negligible adverse impacts on the utility.\n Strengths:\n1.\tMost works in protecting IP from imitation attacks focus on classification tasks. Wallace et al. (2020) have shown that imitation attacks are effective on commercial translation APIs. However, as an urgent vulnerability, little work has been done to protect the IP of text generation APIs, except He et al. (2021). This work identifies and addresses a major weakness in He et al., and proposes a more invisible watermarking algorithm, making their method more appealing to the community.\n2.\tInstead of using a heuristic search, the authors elegantly cast the watermark search issue into an optimization problem and provide rigorous proof.\n3.\tThe authors conduct comprehensive experiments to validate the efficacy of CATER in various settings, including an architectural mismatch between the victim and the imitation model and cross-domain imitation.\n4.\tThis work theoretically proves that CATER is resilient to statistical reverse-engineering, which is also verified by their experiments. In addition, they show that CATER can defend against ONION, an effective approach for backdoor removal.\n\nWeaknesses:\n1.\tThe authors assume that all training data are from the API response, but what if the adversary only uses part of the API response?\n2.\tFigure 5 is hard to comprehend. 
I would like to see more details about the two baselines presented in Figure 5.\n Please refer to the weakness part. The authors only study CATER for the English-centric datasets. However, as we know, the widespread text generation APIs are for translation, which supports multiple languages. Probably, the authors could extend CATER to other languages in the future.", " This paper presents a simple but effective approach (called CATER) to protect the IP of text generation APIs under imitation attacks. This idea can stamp watermarks on imitation models by altering the distribution of semantically equivalent words. Unlike the previous work, they propose to conditionally watermark the victim’s outputs so that it is infeasible for the adversary to crack watermarks over the watermarked response, both theoretically and empirically. \n\nThe authors employ two linguistical rules as the conditions. The first one utilizes the part-of-speech tags of surrounding words as the condition. The second condition is established on the incoming arc of the dependency tree. According to their empirical studies, CATER can effectively watermark imitation models and identity the watermarks under various settings. In addition, the authors have shown that CATER is resilient to two strong watermark removal approaches.\n Strengths:\n\n1.\tThe paper found that analyzing the word distribution could reveal watermarks used by the previous watermarking approach (He et al.). Hence, the authors devise a conditional watermarking algorithm as a remedy.\n\n2.\tTo perform the conditional watermarking and minimize the distribution shift observed in He et al., the authors formulate these two objectives as a linear programming problem. In addition, they prove that this optimization problem is solvable.\n\n3.\tThe authors also rigorously prove that watermarks injected by CATER are infeasible to be inferred by statistical reverse-engineering, especially when scaling to high-order features.\n\n4.\tThe paper shows that CATER rivals previous watermarking approaches and is effective in various settings, such as cross-domain querying and architectural mismatch.\n\nOverall, I think this is a quite good paper, which may bring groundbreaking impact on protecting IP of NLP models.\n\nWeakness:\n\nThe paper is of good quality and easy to follow. I have no major concerns but a few comments as below.\n\n1.\tSince APIs aim to serve end-users, in addition to automatic metrics, it would be good to conduct a human evaluation before concluding that the watermarks do not affect generation quality.\n\n2.\tWould you please elaborate on the calculation of the p-value on CATER? It is unclear to me.\n 1.\tSince APIs aim to serve end-users, in addition to automatic metrics, would it be possible to conduct a human evaluation before concluding that the watermarks do not affect generation quality?\n\n2.\tWould you please elaborate on the calculation of the p-value on CATER? It is unclear to me.\n As far as I know, ONION focused on insertion-based removal. However, CATER utilizes substitution-based watermarking. So, probably the authors could try a substitution-based removal technique.", " First of all, thank you for sharing great work with the community.\n\nWith the great success of language generative models, many models are employed in a business. So, the research community focused on the watermarking method to protect these models. The authors pointed out the vulnerability of existing watermarking methods against imitation attacks. 
They showed that existing works could be breakable when sufficient statistics are available. Therefore, this paper proposed a new framework named Conditional wATERmarking (CATER). The CATER framework is based on linguistic features (e.g., part-of-speech and dependency tree). Strengths\n1. The authors gave mathematical proof of their framework's identifiability.\n2. CATER is robust against white-box attacks.\n\nWeaknesses\n1. Experiments are limited. I hope the authors also need to provide another generation task (e.g., Inference, Chat).\n---\n<Minor suggestions to improve the quality>\n\n1. Fig2. needs to be updated. The flow of the figure is not well matched with explanations.\n2. Please bold the best score in Table 1,2.\n3. Fig 4. needs more explanations.\n 1. Is the framework also working for other text generation tasks? (Connected with cons. in the above section.)\n2. Is there a possibility of a collusion attack?\nFor example, there are two victim models using CATER for watermarking.\nAnd the adversary trains his/her model using both two victim models.\nIn this case, who is the victim between both two victims?\n As I mentioned above, the paper needs more empirical experiments on different text generation tasks.\nAnd also, the figures need to be updated for the quality of the paper.\nBased on the discussion period, the score could be updated.\n\n---\nThank you for the response.\nThe authors resolve the questions I have.\nAnd the additional experimental results are promising.\nTherefore, I decided to increase my current score." ]
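The word-frequency inspection that motivates CATER (described in the rebuttal's A4 above: compare relative word frequencies of the watermarked API output against a disjoint benign corpus, then rank the union of top-frequency words by their frequency-ratio change) can be sketched in a few lines. This is our own illustrative reconstruction of that analysis, not the authors' code; whitespace tokenization and the smoothing constant are simplifying assumptions.

```python
# Sketch of the frequency-ratio attack from A4: under an unconditional lexical
# watermark, substituted words show the largest shift between the watermarked
# corpus (P_w) and a benign reference corpus (P_b); CATER's conditional scheme
# is designed so that no single word exhibits such a shift.
from collections import Counter


def frequency_ratio_ranking(watermarked_corpus, benign_corpus, top_k=100, eps=1e-9):
    cw, cb = Counter(), Counter()
    for sent in watermarked_corpus:
        cw.update(sent.lower().split())
    for sent in benign_corpus:
        cb.update(sent.lower().split())
    nw, nb = sum(cw.values()), sum(cb.values())
    # Suspicious candidate set S: union of the most frequent words in both corpora.
    suspicious = ({w for w, _ in cw.most_common(top_k)}
                  | {w for w, _ in cb.most_common(top_k)})
    ratio = {w: (cw[w] / nw + eps) / (cb[w] / nb + eps) for w in suspicious}
    # Rank by deviation from 1 in either direction (over- or under-produced words).
    return sorted(ratio.items(), key=lambda kv: max(kv[1], 1 / kv[1]), reverse=True)
```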
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5, 3 ]
[ "F4CqGaYKtE", "v7V0fWYCio8", "Z8_LU853CFs", "1aiX3vg3uqK", "oCljCNcEf_a", "NIOM37eSuRr", "Nz4g6yupWxg", "2gy2pY_7QCr", "9qTbOLGogi8", "vP-tUydhfu", "oCljCNcEf_a", "Z8_LU853CFs", "nips_2022_L7P3IvsoUXY", "nips_2022_L7P3IvsoUXY", "nips_2022_L7P3IvsoUXY", "nips_2022_L7P3IvsoUXY" ]
nips_2022_r__gfIasEdN
GAPX: Generalized Autoregressive Paraphrase-Identification X
Paraphrase Identification is a fundamental task in Natural Language Processing. While much progress has been made in the field, the performance of many state-of- the-art models often suffer from distribution shift during inference time. We verify that a major source of this performance drop comes from biases introduced by negative examples. To overcome these biases, we propose in this paper to train two separate models, one that only utilizes the positive pairs and the other the negative pairs. This enables us the option of deciding how much to utilize the negative model, for which we introduce a perplexity based out-of-distribution metric that we show can effectively and automatically determine how much weight it should be given during inference. We support our findings with strong empirical results.
Accept
This paper tackles a discriminative problem with a generative model, where the generation probabilities can be reweighted to adjust the influence of negative samples. Reviewers generally found the paper interesting. However, one concern is that the paper only considers the paraphrase-identification problem, which sounds narrow. It is expected that the approach may be generalized to different tasks.
train
[ "7iMFE7dX_0", "2a4GzyL9RnW", "965R0S5YURH", "xIRccp5hEBE", "2EUGPtA2G8v", "3_Mtp1MB9SY", "eMulUvWjlDR", "PxIQ7_LwsQA", "RfvPNef3xoo", "Yr4fWThj8z", "cp_PStZDSoz", "I4fa0I0t62F", "mP82RPxlAYw" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the authors' response.\n\nThe clarification about OODP and GAPX is helpful for the readers to navigate the results. I now realize that you've talked about this during Section 4.5, but again within that paragraph you are jumping back and forth between several different points and it's a bit hard to follow. I'd suggest revision/rewrite of that paragraph to make the comparison between different models clearer. Those are your main results, after all :)\n\nBut otherwise, after reading the response and other reviews, I think this should be accepted. I'm upgrading my score to 7.", " Thank you for your clarifications and sorry that we fail to accurately understand your concern previously. We now understand that, based on your clarification, sentence-level scoring can better capture cases where, say, only $k$th token makes the entire sentence pair negative, so that it might be more expressive than token-level scoring. We are happy to elaborate more on our choice of autoregressive method, both conceptually and empirically.\n\nConceptually, since our methodology is aimed at separating the dependence on positive and negative samples for better generalization, we would like to reduce any implicit bias from contrasting positive and negative samples since the negative samples might be biased. We would like to caution that \"only the kth token makes the sentence pair negative\" might have to be learned from contrasting paraphrase pairs and non-paraphrase pairs. For the same example you gave, if we consider that the distribution of negative samples could be similar but non-paraphrase sentence pairs, the first k-1 tokens might turn out to be an in-class indicators instead since there is no negative factor in those tokens which makes the sentences similar. That's because without contrasting, it would be hard to pinpoint the part that makes a sample belong to the same class, so we use a reconstruction-like method (autoregressive) to keep all the information from the samples. We're not aware of any other objective that we can use with only one class of samples to tell if a tested sample is from that class. \n\nEmpirically, discriminative training with only one class of samples can easily lead to a so-called \"degenerate\" solution where all the samples collapse to a constant in the sentence-level embedding space (so that in-class distance is minimized). On the other side, there is no such easily conceivable \"degenerate\" solution for the autoregressive method since all the tokens are given as non-trivial targets. If we force the model to operate in the sentence level instead of the token level, we're not sure how to provide a non-trivial target for the model to optimize for.", " Thank you for your clarifications. Our work focuses on identifying the possible sources of biases that make models generalize poorly to out-of-distribution scenarios and coming up with solutions to overcome those possible sources of bias. We understand that out-of-distribution performances can be improved with other techniques like data augmentation to increase the amount of training data. However, adding more data can only improve the robustness of the model in a \"gradual\" way and obtaining high-quality data can be expensive. Furthermore, blindly adding data may retain, or even strengthen, the bias that causes the model to generalize poorly. 
As we can see from the preliminary experimental results below (which you asked for as well), merging datasets on the same scale does not always lead to better results, and in the cases where it does, the improvement is not as significant as that provided by our proposed methodology.\n\nOut of distribution (F1/Acc)\n| | QQP+PIT+PAWS -> WMT | QQP+PIT->WMT | PIT->WMT | PAWS->WMT | QQP->WMT | \n| --- | ----------- |----------- |----------- |----------- |----------- |\n| BERT | 70.3/70.3 | 65.1/65.1 | 50.0/57.7 | 68.4/57.0 | 67.4/67.7 | \n| GAPX | / | / | 74.4/74.5 | 76.4/76.4 | 75.5/75.5 |\n\n| | QQP+PIT->PAWS | QQP->PAWS | PIT->PAWS | \n| --- | ----------- |----------- |----------- |\n| BERT | 45.1/47.5 | 47.1/50.5 | 31.2/45.5 | \n| GAPX | / | 52.3/54.3 | 55.1/55.5 | \n\nWe do want to conclude by saying that our work here is also not necessarily mutually exclusive with the large-scale pre-training regime. One of the most broadly used large-scale pre-trained models, known as CLIP (https://arxiv.org/abs/2103.00020), uses contrastive learning with positive and negative sample pairs. While extending our work here to CLIP falls outside the scope of this work, we hope that our research insight here, that negative pairs produce bias, can be considered in CLIP as well.\n\nWe hope our new results and explanation address your concerns. Thanks again.", " The authors answered that they chose an auto-regressive model for the negative model because it enables training with only one class. However, I'm still not convinced of this choice, and sorry that my question in the initial review was not precise enough.\n\nLet me explain my question more precisely. A negative example is judged negative referring to the entire sentence, but not all the tokens inside are necessarily negative. Suppose a negative example with $N$ tokens where only the $k$-th token is troublesome, which makes the entire sentence negative. Among the training signals (for auto-regressive models) drawn from this example, the first $(k-1)$ signals cannot be considered negative because there is no negative factor and no difference from positive examples, while the $k$-th signal must be negative, and the following signals are also affected by this troublesome token. In other words, holistic treatment of each negative example, such as nearest neighbor or other sentence-level scoring, has a rationale, but analytic treatment, e.g., token-wise scoring, such as implemented by the proposed auto-regressive model, lacks the rationale. I would like to know this before empirically testing it through an experiment.\n", " I wonder what the performance is if we directly merge all paraphrase data together. Data augmentation is a widely-used choice to handle distribution shift. Distribution shift is a common problem for low-resource datasets. With an increasing amount of training data, the distribution shift problem will gradually ease. \n\nCompared with a baseline starting from a model trained on raw texts, a baseline starting from a model trained on the combination of existing paraphrase data would be more convincing. \n\n", " Thank you for your review.\n\nWe would like to give two justifications for why we use auto-regressive models for the negative model and the distribution model. However, we want to mention that we don't fully understand what the reviewer is saying here. 
So, to the best of our understanding of the reviewer's concern, the reason why we chose an auto-regressive model for the negative model is that it enables training with only one class, while a nearest-neighbor-based model might not be easily trained with only one class. For the distribution model, we did ablations on other out-of-distribution metrics in addition to the perplexity score given by the auto-regressive model, as shown in subfigure (b) of Figures 3 and 4, where nearest neighbor is similar to a simplified Mahalanobis distance, and we empirically found that auto-regressive perplexity performs the best.\n\nYes, the two $P(s_2|s_1, Y)$ terms in Eqn. (4) are respectively equal to the corresponding terms in Eqn. (3). We've corrected this in the revised version.\n\nWhen computing the perplexity, we concatenate the two sentences with a special separation token. By concatenating $s_1$ and $s_2$, the autoregressive model will first evaluate $P(s_1)$ and then evaluate $P(s_2|s_1)$, so they multiply to $P(s_1, s_2)$. However, if we iterate over $s_1$ and $s_2$ separately, it might only capture $P(s_1)$ and $P(s_2)$ without the dependence between them. We argue that the dependence between $s_1$ and $s_2$ is also an important sign of the distribution.\n\nEmpirically, we found that the distribution of perplexity exhibits a long-tail phenomenon, i.e., it is right-skewed. We hope that the skewed property of the Weibull distribution can help to capture this empirical observation.\n\nFor M in equation (8), we clarify in our paper that we set M > 1000. Note that $\\tau(\\lambda (s_1, s_2))$ takes a value of 0 or 1, so that when it takes 1 (meaning in-distribution), the GAP term will be negligible; otherwise, when it takes 0, the GAP term will be the only term that remains, while the first term vanishes no matter how large M is.\n\nSorry, this is indeed a typo; it should be $P() - \\frac{1}{2}$. Thank you for pointing that out.\n\nFor the limitations, we do want to point out that paraphrase identification is an important topic in NLP, and there have been many top-quality papers published on it, see [1], [2], [3], [4], [5]. We believe our work is important towards making progress in this field. While we understand that our proposed method cannot be immediately applied to other diverse NLP or CV tasks, our insights still apply, as the need for using hard negatives does exist in many more applications. These include Multiple Choices in Question Answering, 'Neutral' classes in NLI, hard negative mining in metric learning, etc. We do hope that our insight that manually designed hard negatives bring additional distribution bias can help to make more robust models in these other tasks.\n\n[1] Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A Continuously Growing Dataset of Sentential Paraphrases. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1224–1234, Copenhagen, Denmark. Association for Computational Linguistics.\n\n[2] Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3687–3692, Hong Kong, China. Association for Computational Linguistics.\n\n[3] Wenpeng Yin and Hinrich Schütze. 2015. Convolutional Neural Network for Paraphrase Identification. 
In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 901–911, Denver, Colorado. Association for Computational Linguistics.\n\n[4] Zhiguo Wang, Haitao Mi, and Abraham Ittycheriah. 2016. Sentence Similarity Learning by Lexical Decomposition and Composition. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1340–1349, Osaka, Japan. The COLING 2016 Organizing Committee.\n\n[5] Gaurav Singh Tomar, Thyago Duque, Oscar Täckström, Jakob Uszkoreit, and Dipanjan Das. 2017. Neural Paraphrase Identification of Questions with Noisy Pretraining. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pages 142–147, Copenhagen, Denmark. Association for Computational Linguistics.\n", " \nWe really appreciate this reviewer's thorough understanding of our paper and its research and scientific value.", " Thank you for your review.\n\nThank you for pointing out the Moore-Lewis filtering method. Indeed our Bayesian formulation shares some similarity with that paper, but their method is used for selecting an in-domain corpus to train a language model while our formulation in Eq. 3 presents the relationship between the positive and negative model that allows for weighing positive and negative samples at inference time. We have made the connection to this work in our revised version.\n\nRegarding the advantage of GAP/GAPX over OODP, we would like to clarify that although OODP generalizes well to out-of-distribution scenarios, its in-distribution performance is far from satisfactory. As shown in Table 2, the average in-distribution performance of OODP is 60.4/70.4 (this might significantly hurt in-distribution applications) while GAPX can improve that to 78.1/78.5. Therefore, the advantage of GAPX is that it can automatically adapt to in-distribution and out-of-distribution scenarios by adjusting the weight it put on the negative samples.\n\nAs to why we trained the distribution model with validation data, we really appreciate the reviewer's diligence here and realized we had made an editing mistake here, where Ln 155 and 156 - \"Specifically, we hold back a set of validation data, comprising both positive and negative pairs, from D^s\" - were meant for fitting the Weibull while the distribution model should be correctly described as being trained on the training data.\n\nWe will also update our revised version with the reviewer's suggested edits. Ln 242's \"cross entropy\" refers to Eq. 3. We also appreciate the reviewer's comments on cross-lingual scenarios -- we will think about this but felt that this currently falls outside the task of paraphrase identification that we are tackling in this work.\n\nAgain, we really appreciate the reviewer's diligence in reading our paper.", " Thank you for your review.\n\n### Multi-task learning as potential strong baselines:\nWould you mind clarifying your idea and if possible link us to papers on this? We ran some experiments on multi-task and did not see any gains (multi-tasking on paraphrase identification and natural language inference). However, we are not very sure how multi-tasking learning can apply here so it will be good to send us papers to clarify your comment. 
Here are the results that we have:\n\nOut of distribution (F1/Acc)\n| | QQP -> PIT | PIT->QQP | QQP->WMT | PIT->WMT | PIT -> PAWS | PAWS->QQP | PAWS->PIT | PAWS->WMT | QQP->PAWS |\n| --- | ----------- |----------- |----------- |----------- |----------- |----------- |----------- |----------- |----------- |\n| BERT | 68.0/68.3 | 69.0/69.4 | 67.4/67.7 | 50.0/57.7 | 31.2/45.5 | 63.8/62.8 | 52.6/56.4 | 68.4/57.0 | 47.1/50.5 |\n| BERT (multitask with NLI) | 58.4/62.0 | 69.8/70.1 | 66.1/66.1 | 55.9/60.4 | 31.7/45.5 | 63.8/63.9 | 47.2/53.7 | 70.4/70.5 | 48.9/49.3 |\n\n\n### Traditional distribution shift methods:\nWe have looked into distribution shift and debiasing methods in NLP in Related Works Section 2.1. Among them, we benchmarked one of the most successful and representative methods, known as Expert Product (https://arxiv.org/abs/1909.03683), as a potential baseline. Other methods that we have found either failed to achieve any substantial improvement or share a similar idea with Expert Product. However, we did not observe any noticeable benefits of the Expert Product method for the task of paraphrase identification. It would be great if you could point us to any other potential strong baselines that we have missed, and we will include them in our paper.\n\n### Applications to diverse NLP tasks:\nWe do want to point out that paraphrase identification is an important topic in NLP, and there have been many top-quality papers published on it; see [1], [2], [3], [4], [5]. We believe our work is important towards making progress in this field.\n\nThat said, our work is also relevant to other NLP tasks that involve the use of negative samples. For example, in multiple-choice question answering, researchers need to design confusing choices to complement the right answer. In inference tasks, researchers often need to design confusing samples with the 'neutral' relation. Our finding is also relevant to the visual domain. For example, in metric learning, hard negative samples are utilized to encourage the model to learn a good metric that can distinguish confusingly similar images. Although future work is needed to establish how our work may be applied to other tasks, we hope our finding that poorly designed negatives can introduce distribution bias will be useful.\n\nWe'd like to address any further concerns that you may have.\n\n[1] Wuwei Lan, Siyu Qiu, Hua He, and Wei Xu. 2017. A Continuously Growing Dataset of Sentential Paraphrases. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1224–1234, Copenhagen, Denmark. Association for Computational Linguistics.\n\n[2] Yinfei Yang, Yuan Zhang, Chris Tar, and Jason Baldridge. 2019. PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3687–3692, Hong Kong, China. Association for Computational Linguistics.\n\n[3] Wenpeng Yin and Hinrich Schütze. 2015. Convolutional Neural Network for Paraphrase Identification. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 901–911, Denver, Colorado. Association for Computational Linguistics.\n\n[4] Zhiguo Wang, Haitao Mi, and Abraham Ittycheriah. 2016. Sentence Similarity Learning by Lexical Decomposition and Composition. 
In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1340–1349, Osaka, Japan. The COLING 2016 Organizing Committee.\n\n[5] Gaurav Singh Tomar, Thyago Duque, Oscar Täckström, Jakob Uszkoreit, and Dipanjan Das. 2017. Neural Paraphrase Identification of Questions with Noisy Pretraining. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pages 142–147, Copenhagen, Denmark. Association for Computational Linguistics.", " This paper explores the distribution shift problem in paraphrase identification. The authors first verify that the distribution shift problem is mainly caused by the bias of negative examples. To address this problem, they train two separate models, a positive model and a negative model, and combine them during inference, where the weights are dynamically decided by the distribution similarity between the inference pair and the training pairs. Experiments show that the proposed approach achieves good transfer learning ability. Strengths:\n\nThe motivation is interesting. The proposed method is motivated by their finding that negative examples do not generalize well to out-of-distribution data. To address this problem, they propose to separate positive examples and negative examples and train two models. The combination weights are adjusted during inference. \n\n\nThe proposed model outperforms baselines by a large margin. Table 1 shows significant performance improvements over baselines. \n\nWeaknesses:\n\nStrong baselines are missing. Multi-task learning is also an important solution to address the distribution shift problem. It would be better to add a baseline that combines all tasks together. On the other hand, distribution shift is a traditional and important problem. Comparisons with the related literature are required to show the effectiveness of the proposed method. \n\nThe proposed method is specifically designed for paraphrase identification with in-pair data. It is unclear whether the proposed method can be applied to diverse NLP tasks. n/a More comparison on datasets with in-pair data to show the generalization results over diverse tasks. ", " This paper addresses the problem of paraphrase identification. The improvement is motivated by an observation where distributions of negative examples exhibit serious corpus-specific bias and do not generalize well across different corpora. To solve this problem, the paper starts with a Bayesian formulation of the paraphrase identification problem (formulas (1)-(3)), and proposes three different ways to remedy the corpus-level bias, including an out-of-distribution predictor (OODP), an automatic ensemble of the (generative) positive and negative model (GAP), and an extra ensemble with a discriminative model (GAPX).\n\nThe experiments cover several different corpora and distribution shift scenarios (measured by RCA). Overall, results show a significant macro F1/ACC improvement of the OODP/GAP/GAPX models over the IDP model, as well as over BERT/RoBERTa-based baselines, which shows the benefit of the paper's proposed anti-biasing solutions. Strengths:\n\n- The solution is very well-motivated by the authors' observation.\n- The paper is overall well-written -- most parts of the proposed solutions are presented in a clear and intuitive way.\n- The comparison is thorough, and shows clear improvement from the model across the board in out-of-distribution scenarios.\n\nWeaknesses:\n\n- The Bayesian formulation, while nicely laid out, is not a novel one. 
Specifically, Eqn. (3) has significant overlap with Moore-Lewis filtering (https://aclanthology.org/P10-2041.pdf). This connection is not pointed out in the paper.\n- While theoretically interesting, GAP/GAPX models do not show a lot of empirical improvement compared to the very naïve (even ill-formed, because it completely dropped the $P(w_2^{(i)}\\mid s_1, Y=y, w_2^{(1:i-1)})$ term in the Bayesian formulation shown in (1)) OODP model, and it is not clear to me why. Also, both GAP/GAPX models seem to have more components than OODP, so it's not clear whether GAP/GAPX is really worth it.\n\nSome minor comments/suggestions:\n- The paragraph of L34-51 is very long and verbose (especially since you are talking about a lot of modeling details without a formula). I would try to break it down to reflect the separation of different components (IDP -> OODP -> GAP/GAPX), as well as focusing more on providing a more concise summary of high-level intuitions.\n- L101: as follow -> as follows\n- L139-146: It would be clearer to have a separate subsection for IDP and OODP\n- L203: You are referring to Eqn. (4) instead of (3.2).\n- L242: \"cross entropy\" -> what is this referring to? Eqn. (5)? - Please comment on weakness point 2. I could have misunderstood something.\n- It is not entirely clear to me why the distribution model (used to evaluate formula (5)) should be trained on the validation data -- isn't this trying to detect distribution shift from the training data? Apart from the connection with Moore-Lewis filtering that the authors did not bring up, I also think the method could be further generalized and validated under cross-lingual scenarios -- for example, if $(s_1, s_2)$ are multilingual pairs and BART is substituted with mBART, the authors could use the parallel data filtering task (https://www.statmt.org/wmt20/parallel-corpus-filtering.html) to further validate their findings.", " This paper focuses on the distribution shift problem in the paraphrase identification task, and proposes several methods to better deal with the bias brought by negative examples, including training two auto-regressive models exclusively and respectively on positive and negative examples, combining them with an ordinary discriminative model, and determining their weights automatically on the basis of average token-level perplexity. Through a cross-dataset experiment, the authors confirm that the proposed method is less affected by the distribution shift of negative examples, while achieving a competitive classification accuracy in the in-domain settings. One advantage of the proposed method is that it does not require us to have knowledge of the degree of distribution gap between training and test data a priori. There are two noteworthy strengths.\n\n- It empirically demonstrates that the performance of existing methods can dramatically worsen when the model is applied to a dataset that exhibits a different distribution from the training data.\n\n- The proposed method automatically determines the weights of component classifiers on the basis of perplexity, adaptively to each test example. This should be much more emphasized, while all the component models are not genuinely novel.\n\nSome descriptions are not precise enough.\n\n- As I acknowledge above, the perplexity-based automatic weighting is the key of the proposed method. However, the formulation of the weight lacks some information (see the questions below).\n\n- The presentation of the results has room for improvement. 
Figures 3 and 4 are useless, because all the referenced information for the discussion in Section 4.4 is also seen in Tables 1 and 2. Using line charts is also inappropriate because the results for different datasets are inherently incomparable.\n I have several questions regarding the proposed method.\n\n- Considering the diversity of negative examples, the distribution learned from a limited amount of negative examples would not be smooth enough. Is there any justification to apply an auto-regressive model, a kind of generative model, rather than instance-based computation of the likelihood, such as nearest neighbors?\n\n- Two $P(s_{2}|s_{1},Y)$ in GAP (Eq.(4)) are not defined. Are they respectively equal to the $\\sum\\log P()$ in IDP (Eq.(3))?\n\n- When computing the perplexity scores, did the authors concatenate a given pair of $s_{1}$ and $s_{2}$ without any special tokens? If not, how can the model properly evaluate the adjacency of the last token of $s_{1}$ and the first token of $s_{2}$? What is the advantage of concatenating them over separately scoring $s_{1}$ and $s_{2}$?\n\n- The reasons for choosing the Weibull distribution and its parameters are unclear.\n\n- How large was $M$ in GAPX (Eq.(8))? It is explained as \"a sufficiently large constant\" but this means that the second term (GAP) will be ignored.\n\n- There is $(\\frac{1}{2} - P())$ but isn't it $(P() - \\frac{1}{2})$? Yes. The proposed method exploits neural generative models but their usage is limited to labeling given pairs of sentences. The trained models will not introduce any societal influence nor misleading conclusions.\n", " **What is the task?**\nParaphrase identification\n\n**What has been done before?**\n* Many state-of-the-art models often suffer from distribution shift during inference time. Author(s) show a major source of this performance drop comes from biases introduced by negative examples. To overcome these biases, they propose in this paper to train two separate models, one that only utilizes the positive pairs and the other the negative pairs.\n\n* Authors have compared their work with different lines of work like distribution shift and debiasing models in NLP, out-of-distribution detection, text generation metrics, etc., and are able to show the novelty of their work.\n\n\n**What are the main contributions of the paper?**\n* Reported a new research insight, supported by empirical results, that the negative pairs of a dataset could potentially introduce biases that will prevent a paraphrase identification model from generalizing to out-of-distribution pairs. \n\n* Proposed a novel autoregressive modeling approach to train both a positive and a negative model, and ensemble them automatically during inference. \n\n* Introduced a new perplexity-based approach to determine whether a given pair is out-of-distribution to achieve auto ensembling. \n\n* SOTA results in out-of-distribution performance while keeping comparable performance for in-distribution prediction.\n\n\n\n**What are the main results? 
Are they significant?**\nAuthor(s) support their findings with strong empirical results using the following experiments:\n\n* (1) verify that the task of paraphrase identification suffers from biases in the datasets that are the main obstacle to generalization in this field of study, \n* (2) test the accuracy of the proposed perplexity-based out-of-distribution detection method, and \n* (3) test that balancing the utilization of the negative model can help outperform the state-of-the-art in the face of distribution shift.\n \nStrengths\n\n* Reported new research insights well supported by empirical results for all the findings like \"bias in negative pairs\", \"importance of the interplay between positive and negative pairs\", \"effectiveness of perplexity-based ensembling\", \"generalization\", etc.\n* Good presentation - the paper is easy to understand\n* SOTA results in out-of-distribution performance while keeping comparable performance for in-distribution prediction. NA Authors have adequately addressed the limitations and potential negative societal impact of their work."
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "PxIQ7_LwsQA", "xIRccp5hEBE", "2EUGPtA2G8v", "3_Mtp1MB9SY", "RfvPNef3xoo", "I4fa0I0t62F", "mP82RPxlAYw", "cp_PStZDSoz", "Yr4fWThj8z", "nips_2022_r__gfIasEdN", "nips_2022_r__gfIasEdN", "nips_2022_r__gfIasEdN", "nips_2022_r__gfIasEdN" ]
nips_2022_-me36V0os8P
Explaining Preferences with Shapley Values
While preference modelling is becoming one of the pillars of machine learning, the problem of preference explanation remains challenging and underexplored. In this paper, we propose \textsc{Pref-SHAP}, a Shapley value-based model explanation framework for pairwise comparison data. We derive the appropriate value functions for preference models and further extend the framework to model and explain \emph{context specific} information, such as the surface type in a tennis game. To demonstrate the utility of \textsc{Pref-SHAP}, we apply our method to a variety of synthetic and real-world datasets and show that richer and more insightful explanations can be obtained over the baseline.
Accept
Overall, the opinion about this paper is quite positive, especially because of its novelty: It establishes the first connection between preference learning and explainability/Shapley. In terms of presentation and technical soundness, the paper seems to be convincing, too. A few critical points (e.g., regarding the evaluation) have been raised in the reviews, but they could essentially be resolved in the discussion. Another critical issue that came up in the final discussion is the following one: The authors learn a binary preference predicate g(X,Y) predicting the degree of preference of X over Y, though without any constraints. In particular, such a model may induce violations of transitivity in the sense that X>Y and Y>Z and Z>X. Such inconsistencies are debatable from a (normative) preference modeling point of view, although it's true that they can be observed in practice. In any case, they appear to be important from an EXPLAINABILITY point of view, as they might be confusing to the user. This point isn't addressed in the paper.
train
[ "hDEw369FlfG", "yk8XHcrp5nx", "SkVXgd4tMY", "dreOIp1tAEy", "g0VpxAlJgZN", "OaHeVajcbOEC", "4oMIseHy4J", "AQXcfpcezLDt", "aPztgJE0FxC", "OQra7A20cQV", "FoTx82LVwh-", "nz6jH_PYjU", "aj96iePqNlt", "459MSw9TPpS" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your question!\n\nThe specific comparison done in appendix B is precisely meant to illustrate the importance of redefining the value function in order to make it suited for preferential data, i.e. to remove the features in conjunction with each other as the reviewer suggests -- and to assign Shapley values to the pair of features rather than to each player's feature individually (nb. individual Shapley values may also be interesting quantities in some contexts, cf. our response A6 to reviewer wsLb -- but they would answer a different type of explainability questions). However, an approach with a redefined value function is not quite the standard SHAP approach on concatenated data and our paper is the first, to the best of our knowledge, to give a formal treatment of explaining preferences and to address such questions. Essentially, the point that Shapley values need to be appropriately defined for preferential data (and that one cannot blindly run existing tools) is one of the main takeaways of our paper, in addition to our contribution in contrasting explaining preferences to explaining utility models.\nWe hope this clarifies all reviewer's concerns.\nThanks!", " Thank you for your response! \nYour responses A1 to A2 are clear and help me understand your contribution better. \n\nHowever, you did not address one of my criticism. I feel like your baseline you compare against is quite unfair for classical SHAP.\nMaybe I am missing an obvious point, but can you elaborate, why you are not comparing against a \"standard\" SHAP estimation approach (e.g. KernelSHAP) on a concatenation of features and remove the same features only in conjunction with each other. Wouldn't that make more sense than removing the features independently from each other in this concatenation?\n\nThis is also, why I feel that comparing against ground-truth values (thank you for A3 for this) is vital for your work. ", " Q: are we using a model-agnostic explainability technique on a non-black box model?\n\nA: What we are providing in this paper is:\n\n1) A novel general framework for explaining skew-symmetric preference models that need not assume transitivity.\n2) Under this general framework, we chose to estimate the value functions specifically using RKHS methods. This need not be the case and there are many other methods to do something similar, check out Covert et al. 2022, Frye et al. 2020. \n\n[Covert et al. 2022] Explaining by Removing:A Unified Framework for Model Explanation\n\n[Frye et al. 2020] Shapley explainability on the data manifold\n\n---\n\nQ: Preference over removal of transitivity assumption\n\nA: This approach is built upon a line of existing researches that study flexible modelling over preferences. In practice, total rankability of preferences are often too strong an assumption. There might be many reasons why some \"noisy\" preference do not conform to a single overall ranking. For example, it is well studied that cognitive biases often lead to inconsistent human preferences in behavioural economics (Tversky et al. 1992). We encourage the reviewer to see the work of Causer et al. 2005, Pahikkala et al. 2010, Waegeman et al. 2012, Chen et al. 2016, and Chau et al. 2022 on their motivation to consider intransitive relations.\n\n[Tversky et al 1992] Advances in prospect theory: Cumulative representation of uncertainty.\n\n[Causeur et al. 2005] A 2-dimensinoal extension of the Bradley-Terry model for paired comparisons.\n\n[Pahikkala et al. 
2010] Learning intransitive reciprocal relations with kernel methods.\n\n[Waegeman et al. 2012] A kernel-based framework for learning graded relations from data.\n\n[Chen et al. 2016] Modeling intransitivity in matchup and comparison data.\n\n[Chau et al. 2022] Learning inconsistent preferences with Gaussian Processes.\n", " Having re-read my comments, and the authors' reply, I do not feel more convinced than I was on first reading the manuscript.\n\nAbove all, the motivation remains unclear to me: \n- are we using a model-agnostic explainability technique on a non-black box model?\n- as I understand it, the authors prefer to remove the transitivity assumption in the usual definition of 'rational' preferences over using stochastic utility. I don't know this field well enough to understand why that is a better approach.", " *Q6: I am confused about the claim that computing two Shapley values for the same feature in $x^{(l)}$ and $x^{(r)}$ leads to inconsistency. Although $x^{(l)}$ and $x^{(r)}$ consist of the same features, their feature values are different. Therefore, each feature value can be assigned a Shapley value. A simple example is that the Shapley value of $x^{(l)} = -x^{(r)}$.*\n\nA6: If one believes that the specific order between left players and right players carries additional information – if, for example, left always stands for “home team”, and right always stands for “away team” – then it might make sense to concatenate the two sets of features and obtain an explanation as you suggested, with separate Shapley values, where one can ask a different type of question, e.g. “how relevant is this feature for the home team?”. However, in the general case when there is no specific meaning to this order (the case we consider in this paper), computing separate Shapley values will lead to difficulties for two reasons: (1) there is no principled way to aggregate the two different Shapley values for $x^{(l)}$ and $x^{(r)}$, as shown in the example in appendix B; (2) comparing Shapley values for a specific feature (e.g. height) for the left player to that of the right player, we see that they do not actually satisfy the appropriate symmetry constraints – i.e. the Shapley value for the “left height” at $(x^{(1)}, x^{(2)})$ need not be the same as the Shapley value for the “right height” at $(x^{(2)}, x^{(1)})$, i.e. explanations change purely due to the arbitrary order of players. This is the inconsistency we alluded to in the manuscript, and it will be elaborated further in the final version. Our approach does not have such drawbacks: we are interested in knowing which feature, observed for both players, contributed most to the comparison between them, and hence we define a value function that allows us to quantify the utility when such a feature is masked for both players, instead of treating two observations of the same feature separately. \n\n---\n\n*C7: The comparison of local explanations in Figure 3 seems unfair. I think that one_hot$(c_i)$ should remain unchanged when computing $\\nu(S)$ on GPM and UPM. In this way, the clusters of pairs are constant and explanations are restricted to the specified condition.*\n\nA7: One-hot encoding is unchanged when running GPM and UPM – i.e. they are run on the same data. We emphasise that figure 3 is just a plot of Shapley values restricted to matches between clusters A and B, while the actual estimation procedure is done over all observations, just like figure 2. \n\n", " Thank you for your time and effort in reviewing the paper. 
We reply to your questions and comments here:\n\n*Q1: Can you outline in what way your method is applicable to other types of preference models $g(\\cdot)$ or tasks?*\n\nA1: While our proposed nonparametric estimator requires $g$ to live in an RKHS, our proposed value function is general and works for any skew-symmetric preference model. If one wants to model $g$ with a different model, such as a deep network, then one could appeal to methods discussed in Frye et al. 2020 to estimate our proposed value function, and proceed with the WLS approach for obtaining SVs as in KernelSHAP. We see our main contribution as devising a preference-learning-specific value function, and providing an effective way of estimating it that does not require solving the more difficult task of conditional density estimation. \n\n- Frye et al. 2020: Shapley explainability on the data manifold\n\n---\n\n*Q2: Why did you focus solely on kernel-based Generalised Preference Models? The sentence \"While the ... \" (lines 196 to 199) is not very informative for this big reduction of the problem.*\n\nA2: As replied above, we see our contribution in studying explainability in the context of preference learning. This has not been studied before, and we devised an appropriate value function. There are numerous ways one could estimate the preference model, as well as the value function. We chose to focus on kernel-based generalised preference models because this allows us to utilise the recently proposed RKHS-SHAP to obtain a closed-form estimation of the value function, without estimating conditional densities, which are more difficult problems to solve than estimating conditional expectations. There are also convincing theoretical results for GPM, given that the kernel used is universal with respect to the space of preference models.\n\n---\n\n*Q3: Is it possible to calculate and add the ground-truth Shapley values for your synthetic dataset and a \"perfect model\"?*\n \nA3: Thank you for this great suggestion – this will indeed be a helpful addition to the synthetic experiment; we will carefully consider calculating ground-truth Shapley values to better illustrate the significance of the results. \n\nWe thank the reviewers for their suggestions and minor remarks; we will incorporate them into the paper.\n", " *C1: Although the proposed Pref-SHAP uses kernel functions to compute the value function $v$, its essence is actually the same as SHAP. The only difference is the implementation/computation of the value function, i.e., the model output when only given a subset of input variables. Some benefits of Pref-SHAP over SHAP on UPM actually come from advantages of GPM, rather than the explanation method.*\n\nA1: While SHAP provides an efficient way to compute Shapley values using a weighted least squares approach, it assumes one has access to the outcome of the value functions. As such, another route in Shapley-value-based explainability research is to devise new, or improve existing, estimations of the value function. Our contribution here lies in the latter, where we study a new explainability problem for preference learning and propose an appropriate value function for it, along with an approach to estimating it.\n\nCould you please also clarify what benefits of Pref-SHAP over SHAP on UPM come from GPM?\n\n---\n*Q2: What is the advantage of computing $v$ by using kernel functions over directly computing the expected probability of the model? 
If using kernel functions is more efficient, then it would be better to compare the computational cost of the two implementations.*\n\nA2: Computing the value function using kernel methods allows one to obtain closed-form solutions in terms of matrix-vector multiplications, where one could then apply a variety of large-scale kernel methods and compute the quantity within $\\mathcal{O}(n\\sqrt{n})$ time [1]. If we do not use kernels, then the estimation of the conditional density $p(X_{S^C} \\mid X_S=\\mathbf{X}_S)$ for all coalitions $S$ is required to compute the expectation, and that estimation problem might be more difficult than the original learning problem itself, and more costly [2]. \n\nThe computational advantage of using kernel functions for estimating value functions over neural methods has been discussed thoroughly in Chau et al. (2022). The key contribution here is to study explainability for preference models, since it is an unexplored but practical field. Our proposed value functions would still work even if we do not appeal to kernel methods. \n\n- [1] Rudi et al. 2017: FALKON: An optimal large scale kernel method\n\n- [2] Yeh et al. 2022: Threading the Needle of On and Off-Manifold Value Functions for Shapley Explanations\n\n---\n\n*Q3: In the third row of Table 2 for the candidate $x^{(l)}$, why not only modify $x^{(l)}$ in the computation of Shapley values without changing $x^{(r)}$?*\n\nA3: If we only modify $x^{(l)}$ while keeping ONE particular $x^{(r)}$ fixed, then we are essentially asking “which item features contributed most to $x^{(l)}$’s match with this particular $x^{(r)}$”, instead of the question we are trying to address: “which item features contribute most to $x^{(l)}$’s matches”. They are different quantities.\n\n---\n\n*Q4: For experiments on realistic datasets, how to judge which explanations are correct? For example, in Figure 6, both Pref-SHAP and SHAP for UPM show a chaotic pattern in beehive plots. Besides, differences in results between Pref-SHAP and SHAP for UPM may not stem from the explanation methods but come from the difference between models, because UPM cannot perform well for unrankable tasks and may lead to strange explanations. It would be better if the authors compared the proposed method with other explanation methods, e.g., Integrated Gradients, on the same model.*\n\nA4: Judging whether an explanation is “correct” is a very hard problem in itself, since explainability is an unsupervised problem. We believe the comparison of explanation methods should happen at an axiomatic level; in that case, we can be more certain what properties our explanations will have. For example, Integrated Gradients (IG) was shown to theoretically approach the Aumann-Shapley value in [Chen et al. 2019], a fundamentally different concept from the Shapley value. This difference leads to IG failing to satisfy some desirable feature attribution axioms that Shapley values do satisfy. For example, when features i and j contribute equally to the function f across all coalitions S, i.e. $\\nu_f(\\{i\\}\\cup S) = \\nu_f(\\{j\\}\\cup S)$ with $\\nu_f$ the value function defined with respect to $f$, IG does not necessarily return the same attribution score for features i and j, but SVs would. Moreover, when feature i does not contribute to the function f at all, the attribution score from IG will not always be 0, while the Shapley-value-based one would be. 
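To make the symmetry and dummy axioms concrete, here is a small self-contained sketch (our own illustration with a toy value function, not taken from the paper) computing exact Shapley values for a three-player game in which players 0 and 1 always contribute equally and player 2 never contributes:\n\n```python\nimport math\nfrom itertools import combinations\n\ndef exact_shapley(value, n):\n    # Exact Shapley values for an n-player game; `value` maps a\n    # frozenset of player indices to a real number.\n    phi = [0.0] * n\n    for i in range(n):\n        others = [j for j in range(n) if j != i]\n        for r in range(n):\n            for S in combinations(others, r):\n                S = frozenset(S)\n                w = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)\n                     / math.factorial(n))\n                phi[i] += w * (value(S | {i}) - value(S))\n    return phi\n\n# Players 0 and 1 contribute identically; player 2 is a dummy.\nvalue = lambda S: float(0 in S) + float(1 in S)\nprint(exact_shapley(value, 3))  # -> [1.0, 1.0, 0.0]\n```\n\nBy construction, the symmetric players receive identical attributions and the dummy player receives exactly zero, which is the guarantee referred to above. 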
See examples from this article [1] for further reference.\n\n- [1] https://towardsdatascience.com/limitations-of-integrated-gradients-for-feature-attribution-ca2a50e7d269\n- [Chen et al. 2019] Explaining Models by Propagating Shapley values\n\n", " **Clarification on run time complexity**\n\nThank you for your suggestion. For completeness, we will include a discussion on complexity here, and include it in the camera-ready version later.\n\nWhen computing the GPM, since it is fundamentally a kernel ridge regression model, there are various large-scale kernel approximation algorithms available. In this work, we chose to use FALKON [1], a Nyström-approximation-based preconditioner for conjugate gradient descent, to find the regression solution in $\\mathcal{O}(n\\sqrt{n})$ time, where $n$ is the number of samples. \n\nWhen computing the value function for GPM, the challenge comes from computing the conditional mean embedding estimators, which, if computed naively, would be considerably more expensive. However, one could again appeal to FALKON, which brings the estimation back to $\\mathcal{O}(n\\sqrt{n})$.\n\nIn practice, we found that when explaining larger datasets, i.e. $n\\sim 10^5$, it only took around 5 minutes on V100 cards to calculate all the Shapley values for all features for the entire dataset.\n\n[1]: FALKON: An Optimal Large Scale Kernel Method\n", " Thank you for your time and effort in reviewing the paper. We answer your questions and comments here:\n\n---\n\n*Q1: I would like to know what the generalization from UPM to GPM allows, in terms of utility theory. I'm not familiar with the SOTA in this area, but wonder, for example, how GPM relates to e.g. Deaton and Muellbauer's `almost ideal demand system'.*\n\nA1: To the best of our knowledge, utility models often map items of interest to a scalar, and pairwise preferences are then derived by comparing them. This imposes a strict ordering between items and, as Chau et al. 2022 showed, cannot be used to model complex, more realistic preferences that often exhibit, e.g., cyclic structure. GPM bypasses this scalar structural assumption and directly models the pairwise preference function nonparametrically, and thus can learn more general preference structures than UPM. \n\nWe are not familiar with the ‘almost ideal demand system’ model, but it seems to be a specific demand model with a parametric form relating price and utility level with a cost function. We do not believe this is relevant to our work on explaining pairwise preferences. \n\n- Chau et al. 2022: Learning Inconsistent Preferences with Gaussian Processes\n\n---\n\n*C2: line 193's ``inconsistent explanations'' makes me wonder what is being assumed about cross-partial elasticities (in the language of economic theory): is it assumed that features do not interact (i.e. neither substitutes nor complements)? If so, that is a very strong assumption.*\n\nA2: We do not make such an assumption in our work. This assumption would lead to trivial solutions to the Shapley value computation, since then $\\nu(\\{i\\} \\cup S) = \\nu(\\{i\\})$ for all coalitions $S$ if features do not interact, where $\\nu$ is the usual value function.\n\n---\n\n*C3: Personally, I found myself wanting to know e.g. what this can say about inter-state wars (e.g. using https://correlatesofwar.org/data-sets/COW-war/dyadic-inter-state-war-dataset-1). This both seems orders of magnitude more important, and might inform a topical issue, Russia's invasion of Ukraine.*\n\nA3: Thank you for the suggestion. 
We believe the focus of our contribution is methodological. Therefore, the aim was to demonstrate different aspects of our method using a variety of suitable datasets with appropriate formats. A potential application like the one the reviewer suggested would require careful consideration, and its importance would warrant the entire focus of a dedicated applied work. \n\n---\n\n*Q4: is the \"unrankable\" problem the same one that led to the use of stochastic utility in economic theory (q.v. work by e.g. Pattanaik, Barbera)?*\n\nA4: We do not see immediate connections to stochastic utility in economic theory, because the GPM model used does not model utility, but only the outcome of the comparison.\n\nThank you for the suggestions; we have incorporated the feedback into our manuscript.\n", " The paper seeks to `explain' the outcomes of binary contests (whether between two tennis players or elements of a consumer's choice set). By imposing restrictions on the class of preference orderings considered, the paper is able to derive a ``closed form expression of the value function'' underlying the Shapley value, reducing the computational cost of determining the Shapley value. **Originality**\n\nThe paper seems original to me, although in a somewhat complex way:\n- a standard argument for Shapley value's use is that it is model agnostic. From this point of view, knowing the underlying model defeats some of its motivation.\n- utility functions (cardinal, ordinal, stochastic), preference orderings and the relationships between them have been studied for decades within the economics literature, as have (to a lesser extent) `contest success functions'. The paper does not give indications of understanding that, perhaps making it seem more original than it is.\n\n**Quality**\n\nNo concerns.\n\n**Clarity**\n\nWell presented.\n- line 40: it could make sense to explain what the authors have in mind by ``to explain''.\n- is the \"unrankable\" problem the same one that led to the use of stochastic utility in economic theory (q.v. work by e.g. Pattanaik, Barbera)?\n- line 91: the standard approach in economic theory is to see a `good' as a complete description, thus including what are called `context variables' here. It would be useful if the authors commented on why they do _not_ follow that approach.\n\n**Significance**\n\nTheoretically, I feel that the paper's detachment from the extensive microeconomic theory literature limits it. (This said, I have not seen Shapley values applied to consumer models.)\n\nThus, I needed to be convinced by the Experiments. While I understand that they are `toy examples', it may still be that they trivialize the material (e.g. who cares which Pokémon wins a match, or Djokovic's performance on clay?). Personally, I found myself wanting to know e.g. what this can say about inter-state wars (e.g. using https://correlatesofwar.org/data-sets/COW-war/dyadic-inter-state-war-dataset-1). This both seems orders of magnitude more important, and might inform a topical issue, Russia's invasion of Ukraine. 1. I would like to know what the generalization from UPM to GPM allows, in terms of utility theory. 
I'm not familiar with the SOTA in this area, but wonder, for example, how GPM relates to e.g. Deaton and Muellbauer's `almost ideal demand system'.\n\n1. line 193's ``inconsistent explanations'' makes me wonder what is being assumed about cross-partial elasticities (in the language of economic theory): is it assumed that features do not interact (i.e. neither substitutes nor complements)? If so, that is a very strong assumption. No concerns.", " In the setting of explainable artificial intelligence, this paper investigates the challenging problem of explaining predictions made by preference models without strong rankability assumptions. To this point, one cannot exploit standard explainability tools for utility functions. So, this study instead considers the recent (possibly contextual) Generalized Preference Model proposed by Chau et al. (2022) that exploits kernels for capturing the likelihood function on pairs of items. By coupling this model with the paradigm of Shapley values, the authors derive a preferential value function for dueling items. Based on the existence of a Riesz representation of the corresponding functional, this value function admits an elegant closed form that can be computed efficiently. Experiments performed on a wide range of preference tasks corroborate the benefits of this Pref-SHAP approach, especially in comparison with a naive application of SHAP in pairwise preference explanation. \n Overall the paper is very well-written and well-motivated. As many technical tools are introduced for deriving, in an efficient way, relevant explanations for general preference models, the paper is very dense. But all concepts are introduced parsimoniously, with a pedagogical effort made for explaining difficult parts. As far as I could check, the technical results look sound. Finally, the experiments performed on various tasks clearly highlight the practical utility of Pref-SHAP. In a nutshell, this is a good paper that shall pave the way for future research on explaining predictive preferences.\n\nI found no real weaknesses in this paper. Just one point: it would be informative for the reader to give some bounds on the runtime complexity for estimating the value functions derived in Propositions 3.2 & 3.3.\n No real questions, but see the comment above about the runtime complexity.\n The Generalized Preference Model examined in this paper for inferring explanations makes no assumption about item rankability and makes only a few assumptions about the non-parametric function $g$ used in the model and its kernelization. The authors go even further by inferring explanations in the contextual setting. So, I did not find any real limitations in this framework.\n", " This paper proposes a method to explain attributions of inputs in a preference model. Specifically, the authors apply Shapley values to the preference model. To this end, the authors design the utility function for Shapley values on generalized preference models based on kernel functions. The proposed method can estimate attributions of both inputs and context variables. [Strengths]\n+ The paper is well-motivated and well-structured.\n+ The authors do not directly apply Shapley values to preference models, but consider properties of preference models and propose an efficient way to compute Shapley values.\n\n\n[Weaknesses]\n- Although the proposed Pref-SHAP uses kernel functions to compute the value function $v$, its essence is actually the same as SHAP. 
The only difference is the implementation/computation of the value function, i.e., the model output when only given a subset of input variables. Some benefits of Pref-SHAP over SHAP on UPM actually come from advantages of GPM, rather than the explanation method.\n- What is the advantage of computing $v$ by using kernel functions over directly computing the expected probability of the model? If using kernel functions is more efficient, then it would be better to compare the computational cost of the two implementations.\n- In the third row of Table 2 for the candidate $x^{(l)}$, why not only modify $x^{(l)}$ in the computation of Shapley values without changing $x^{(r)}$?\n- The comparison of local explanations in Figure 3 seems unfair. I think that one_hot$(c_i)$ should remain unchanged when computing $v(S)$ on GPM and UPM. In this way, the clusters of pairs are constant and explanations are restricted to the specified condition.\n- For experiments on realistic datasets, how to judge which explanations are correct? For example, in Figure 6, both Pref-SHAP and SHAP for UPM show a chaotic pattern in beehive plots. Besides, differences in results between Pref-SHAP and SHAP for UPM may not stem from the explanation methods, but come from the difference between models, because UPM cannot perform well for unrankable tasks and may lead to strange explanations. It would be better if the authors compared the proposed method with other explanation methods, e.g., Integrated Gradients, on the same model.\n - I am confused about the claim that computing two Shapley values for the same feature in $x^{(l)}$ and $x^{(r)}$ leads to inconsistency. Although $x^{(l)}$ and $x^{(r)}$ consist of the same features, their feature values are different. Therefore, each feature value can be assigned a Shapley value. A simple example is that the Shapley value of $x_i^{(l)}=-x_i^{(r)}$.\n- Some references for tables are incorrect. “Table 5” in Line 289 should be “Table 2”, and “Table 4” in Line 308 should be “Table 3”.\n The authors do not address the limitations of their work. ", " The paper adapts the SHAP value estimation to the object-ranking task of the preference learning setting. Thereby, the authors present a novel way of computing SHAP values which they call PREF-SHAP. The authors present and evaluate the efficacy of PREF-SHAP with one synthetic dataset and three (plus one in the appendix) real-world ranking datasets. Moreover, the authors present their PREF-SHAP methodology formally by presenting a value function for the object-ranking task of preference learning. The authors formally prove their methodology under the assumption that the ranking model under investigation is a kernel-based Generalised Preference Model. The proofs can be found in the appendix. Moreover, the authors publish their implementation online for open access. # Originality\nThe proposed Pref-SHAP is a natural extension of the already existing SHAP explanation framework. Pref-SHAP mainly builds on already existing work on Shapley value approximation. The presented work relies heavily on a newly introduced RKHS-SHAP approximation method, which in itself is very new and, as of now, is still under review. Moreover, the work focuses heavily on a newly proposed model type of kernel-based Generalised Preference Models, and the theoretical results solely assume such a model. Hence, the originality of the proposed work is rather incremental. 
(Application of SHAP to a new problem domain and reduction of problem space to analyze the solution formally) \n\nThat said, the presented work is to the best of my knowledge the first work linking a subset of preference learning and Shapley value explanations.\n\n# Quality\nIn general, the quality of the work is high. The method is presented formally. The empirical datasets are well selected and illustrate how SHAP on ranking functions can be used. As there exists no SOTA, it is sufficient to compare against a sensible baseline. The theoretical analysis is straightforward and interesting when the model function is kernel-based.\n\nI understand the main contribution of the proposed work to be the value function in Definition 3.1 and that you show the results of evaluating the value function on a specific model instantiation (kernel-based Generalised Preference Models). The value function follows quite naturally from applying SHAP to the ranking domain and is presented very well. However, the authors do not give a valid motivation for why they chose to only focus on these specific kernel-based model functions. The only mention of this is the sentence: \"While the ... \" (lines 196 to 199), which does not include proper reasoning. Propositions 3.1 to 3.3 follow quite naturally from instantiating the model function $g(.)$ with a kernel-based GPM. Still, the theoretical results are interesting, especially by framing the SHAP approximation problem as a kernel and utilizing RKHS (one of the main contributions of RKHS-SHAP, as I understand the paper).\n\nI see a problem with the evaluation of PREF-SHAP and the chosen baseline. Of course, it is interesting to see what would happen if you were to concatenate the features of two instances and apply KernelSHAP rather naively. However, I argue that it is pretty obvious that KernelSHAP would not really work there, because it assumes feature independence, while the features are the same and thus dependent. A better comparison would be to group the same features of two instances together and remove both features simultaneously. Of course, sampling from the conditional distribution is more problematic (solved by RKHS-SHAP, I figure), but in the removing-with-marginal case (after Covert et al. 2021, [28] in your work) this would be straightforward and no problem. I would be very interested in why you chose to use these kernel-based GPMs, and would be more interested in a thorough investigation of common Shapley value approximation techniques for different models and model types with your new value function (Definition 3.1). \n\n# Clarity\nIn general, I enjoy the presentation of the work.\n- I like the motivation and introduction giving a good overview of the field of preference learning for scholars not familiar with it. The same is true for the Shapley section. However, I suggest being more specific about what part of preference learning you are addressing (mainly object-ranking).\n- I like the discussion of choosing marginal vs. conditional distribution for \"feature removal\".\n- In particular, I like Table 1: This gives a good overview of the aforementioned preference learning / SHAP description.\n- Figure 1 is a great and small illustration of how the synthetic dataset was created and for what purpose it was implemented.\n- In general, the selection of empirical test datasets suits the work greatly. The Pokémon dataset is clear to understand. 
The Tennis matches are a good example of a \"real\" real-world dataset, and the focus on Djokovic's losses is a humorous and interesting illustration of your PREF-SHAP method.\n- The writing of the paper is good and it flows well. (Some hiccups along the way, and the Propositions are a bit under-discussed)\n\n### Minor Clarity Remarks:\n- Coming from a Feature Importance background, I had problems with the formal notation of denoting the item index in the superscript in braces (l and r: $\\boldsymbol{x}^{(l)}, \\boldsymbol{x}^{(r)}$), as this representation is sometimes used to denote the feature column of the input space. Make this clear again in your paragraph introducing your notation.\n- All Propositions (3.1 to 3.3) are hard to understand at first (though easier when following the proofs in the appendix). I suggest adding a better natural-text description to the formulation to make it clearer.\n- The contribution statement should be more specific and classify the main contribution more precisely (focus on kernel-based Generalised Preference Models)\n\n- Minor Remarks: \n\t- Appendix A: I don't think that you wanted to say that your algorithm is \"embarrassingly\" parallel (probably you meant \"embracing\").\n\t- Proposition 3.3 (Line 223-224) is overflowing horizontally. (the same is true for some expressions in the appendix ... there it is not a problem)\n\t- Sentence \"An explicit...\" (line 14-17) is very hard to read. The i.e. part threw me off. Maybe split it in two.\n\n# Significance\nThe significance of the work is high. Preference learning is a big part of machine learning research. Moreover, explainability and trustworthiness are also extremely important. Explainability is especially important for ranking problems, as search engines or recommender systems influence people on a daily basis. Being able to explain such systems is an important research question.\n\nTo the best of my knowledge, there exists no similar work combining Shapley values and preference learning. As such, an application of the SHAP method to preference learning is a significant contribution to the community. # Questions\n- You frame your Pref-SHAP method as the solution for all preference learning/ranking machine learning models. However, you primarily apply RKHS-SHAP (or any SHAP estimation approach) to the object-ranking task on your mentioned kernel-based Generalised Preference Models. \n\t- Can you outline in what way your method is applicable to other types of preference models ($g(.)$) or tasks? \n\t- Why did you focus solely on kernel-based Generalised Preference Models? The sentence \"While the ... \" (lines 196 to 199) is not very informative for this big reduction of the problem. \n- Is it possible to calculate and add the ground-truth Shapley values for your synthetic dataset and a \"perfect model\"?\n\n# Suggestions\n- Specify your method's main limitations and future work potential.\n- To save some space, I would move some of the Pref-SHAP plots of section 4 into the appendix, as they are quite repetitive and take up a lot of space that could be better used for a thorough discussion of the limitations of your work or on a different kind of evaluation. I would keep the Pokémon and/or Tennis examples. These are great.\n- There exists a paper from econometrics that first made the connection of calculating Shapley values via the least-squares formulation. (Also mentioned by Lundberg and Lee in their SHAP paper). 
In line 143 of your manuscript, the proper citation could be added: Charnes, A., Golany, B., Keane, M., & Rousseau, J. (1987). Extremal Principle Solutions of Games in Characteristic Function Form: Core, Chebychev and Shapley Value Generalizations. In J. P. Ancot, A. J. H. Hallett, J. K. Sengupta, & G. K. Kadekodi (Eds.), Advanced Studies in Theoretical and Applied Econometrics. Econometrics of Planning and Efficiency (Vol. 11, pp. 123–133). Springer Netherlands. https://doi.org/10.1007/978-94-009-3677-5_7\n- I suggest adding a different focus to your evaluations: How do users interpret your explanations? How does RKHS-SHAP compare against regular KernelSHAP in the same setting (value function)? # Limitations\n- The authors do not specify the limitations of their work. \n- There is no discussion about the ethical implications of this work apart from the motivation. As I do not see big ethical concerns with such XAI works (rather the opposite), I don't think that this is a problem.\n\n- The main limitation of the work is the rather narrow application domain of object-ranking using a specific model type (kernel-based Generalised Preference Models). This is not clearly specified in the beginning.\n\n- The evaluation of the method is done in a way that is clearly favoring the newly proposed method, and no comparison is done to existing approximations (mainly KernelSHAP) in the same value-function setting (see the point in the \"Quality\" section). \n\n- The proposed Pref-SHAP method is evaluated on a functional level without conducting any human experiments. For this work, a human experiment is not essential, as the work is rather foundational. However, as it is one of the first XAI + Preference Learning works, it would be very interesting to see how people would interpret the output of Pref-SHAP, as it is unintuitive to do so even for a person who is quite familiar with SHAP." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "yk8XHcrp5nx", "OaHeVajcbOEC", "dreOIp1tAEy", "aPztgJE0FxC", "4oMIseHy4J", "459MSw9TPpS", "aj96iePqNlt", "nz6jH_PYjU", "FoTx82LVwh-", "nips_2022_-me36V0os8P", "nips_2022_-me36V0os8P", "nips_2022_-me36V0os8P", "nips_2022_-me36V0os8P", "nips_2022_-me36V0os8P" ]
nips_2022_BgMz5LHc07R
C-Mixup: Improving Generalization in Regression
Improving the generalization of deep networks is an important open challenge, particularly in domains without plentiful data. The mixup algorithm improves generalization by linearly interpolating a pair of examples and their corresponding labels. These interpolated examples augment the original training set. Mixup has shown promising results in various classification tasks, but systematic analysis of mixup in regression remains underexplored. Using mixup directly on regression labels can result in arbitrarily incorrect labels. In this paper, we propose a simple yet powerful algorithm, C-Mixup, to improve generalization on regression tasks. In contrast with vanilla mixup, which picks training examples for mixing with uniform probability, C-Mixup adjusts the sampling probability based on the similarity of the labels. Our theoretical analysis confirms that C-Mixup with label similarity obtains a smaller mean square error in supervised regression and meta-regression than vanilla mixup and using feature similarity. Another benefit of C-Mixup is that it can improve out-of-distribution robustness, where the test distribution is different from the training distribution. By selectively interpolating examples with similar labels, it mitigates the effects of domain-associated information and yields domain-invariant representations. We evaluate C-Mixup on eleven datasets, ranging from tabular to video data. Compared to the best prior approach, C-Mixup achieves 6.56%, 4.76%, 5.82% improvements in in-distribution generalization, task generalization, and out-of-distribution robustness, respectively. Code is released at https://github.com/huaxiuyao/C-Mixup.
Accept
This is an interesting and technically solid paper. The reviews are very consistent as well.
train
[ "ouRjGvMuAW", "SNqVKhQsEzC", "50SxbIVYjWO", "9YZGZLb9R9S", "W0ftybwZaWHR", "yB5PQCEWNZ9", "QKossfDUXj", "BxrqyNi1Sg6", "-HOglvVeGWV", "flhqX516Dcv", "5fMQvwqh8oB", "nuoFcIFFhfa", "ydhRzfzH47U", "FdhMe_kM15Q", "b7QcuAwTILhJ", "mz-7NozIMjF", "FU2cMGbgxjg", "BZfmljPbEvc", "aCGl7ONPfG", "cJ-oh_4QhQd", "fszTYQufrlX", "TX27y5JcMb2" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi Reviewer 13Sq,\n\nThanks for pointing out this issue. We are sorry about the confusion. We indeed compared with AutoMix [2] (ECCV'2022) in our additional experiments. We made a mistake when adding the citation and have fixed this issue in the updated version. Many thanks!", " Thanks for your quick response. You may have confused two Automix articles. Automix [1] in ECCV'2020 generates mixup samples by generative methods, while Automix [2] in ECCV'2022 optimizes the mixup generation and mixup classification together in a closed loop to further improve PuzzleMix. In fact, what I mentioned in my review is AutoMix [2] in ECCV'2022 (open-source on GitHub), and I think it might be easy for you to compare with. It is also ok that you provide the comparison results based on AutoMix [1] in ECCV'2020 (a not open-source work).\n\n[1] Zhu, et al. Automix: Mixup networks for sample interpolation via cooperative barycenter learning. ECCV, 2020.\n\n[2] Liu, et al. AutoMix: Unveiling the Power of Mixup for Stronger Classifiers. ECCV, 2022.", " Hi Reviewer 13Sq,\n\nThank you for pointing out these recent works. We have cited Saliency Grafting [1], TransMix [2], AutoMix [3], TokenMix [4] in the updated version. Additionally, we added some discussion between mixReg and other classification-based mixup variants in both Line 382-385 (related work) and Appendix 4.1. As you mentioned, mixReg focuses on how to select mixing pairs in regression, while other approaches change the policy to mixing.", " Thanks for your respones and I'm glad to see the manuscript being improved in the revision. It will be better if the authors could cite several recently published state-of-the-art mixup works, e.g., Saliency Grafting [1], TransMix [2], AutoMix [3], TokenMix [4], etc. Since the proposed MixReg focuses on selecting proper mixing samples for regression tasks, it would be better to discuss the relationship with the mainstream mixup methods in classification tasks. Currently proposed methods improve the sample mixing policy (e.g., PuzzleMix, Co-Mixup, AutoMix [3]) or the label mixing policy (e.g., Saliency Grafting [1], TransMix [2], TokenMix [4]) with saliency or attention information.\n\n[1] Park, et al. Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing. AAAI, 2022.\n\n[2] Chen, et al. TransMix: Attend to Mix for Vision Transformers. CVPR, 2022.\n\n[3] Liu, et al. AutoMix: Unveiling the Power of Mixup for Stronger Classifiers. ECCV, 2022.\n\n[4] Liu, et al. TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers. ECCV, 2022.", " Dear Reviewer 13Sq,\n\nThank you for your response, we are happy to see that our response address most of your concerns. Thanks again for your valuable comments to help us improve our paper and for raising your rating.", " Dear Reviewer VT7i, \n\nWe are happy to see our response addresses your questions. Thank you for raising your rating.\n\nIn terms of your follow-up question, our understanding is that the noisy pairs are caused by noisy labels in the training set. Please kindly correct us if our understanding is wrong or if you have any follow-up questions.\n\nIf it is correct, since Reviewer t9o8 also asked it in the initial review, we did conduct experiments to investigate the robustness of mixReg to label noise during the rebuttal period, which was also added in Appendix F.4 of the revised paper. Concretely, we injected Gaussian noises into the labels. 
The noise is set to 30% of the standard deviation of the corresponding original labels; adding this noise significantly degrades the performance compared to clean data. In Table R2, we report the results and the corresponding noise distributions on Exchange-Rate, ShapeNet1D, and DTI, respectively.\n\n**Table R2**: Robustness analysis to label noise. ($\\downarrow$ denotes the smaller the better; $\\uparrow$ denotes the larger the better)\n\n| Model | Exchange-Rate | ShapeNet1D | DTI |\n|---------------|---------------|-------|------------|\n| | RMSE $\\downarrow$ | MSE $\\downarrow$ | Avg. R $\\uparrow$ |\n| Noise Type | $\\mathcal{N}(0, 1.18\\times10^{-3})$ | $\\mathcal{N}(0, 0.874)$ | $\\mathcal{N}(0, 7.59\\times10^{-3})$ |\n| ERM/MAML | 0.0381 $\\pm$ 0.0014 | 5.553 $\\pm$ 0.098 | 0.334 $\\pm$ 0.018 |\n| mixup/MetaMix | 0.0375 $\\pm$ 0.0017 | 5.329 $\\pm$ 0.101 | 0.307 $\\pm$ 0.021 |\n| **mixReg (ours)** | **0.0360 $\\pm$ 0.0013** | **5.185 $\\pm$ 0.096** | **0.356 $\\pm$ 0.013** |\n\nAccording to Table R2, we observe that mixReg still improves the performance over ERM and vanilla mixup even with the addition of label noise, showing its effectiveness and robustness to label noise.\n", " Thanks for the detailed rebuttal comments, which include comparison experiments based on more mixup variants and address most of my concerns. The revision of the paper has provided more experiments and additional details on hyper-parameter tuning. I'm willing to raise my rating to 6.", " I really appreciate the very detailed explanation for each of my queries given by the authors. The authors have answered most of my questions, and their revision makes the paper much better now. I'm willing to *raise my rating to 6*.\n\nI still have one follow-up question and would appreciate it if the authors could share their thoughts on it:\n- For Q1, it's now clear to me what the difference between the sampling probability and $\\lambda$ is. But from L114 of the paper, the authors say that they \"introduces a symmetric Gaussian kernel in order to calculate the sampling probability\" to form semantically closer pairs. I might have misunderstood this, but do the authors encounter the problem of having noisy pairs? \n\n", " Hi Reviewer 13Sq,\n\nWe would like to follow up to see if our response addresses your concerns or if you have any further questions. We would really appreciate the opportunity to discuss this further if our response has not already addressed your concerns. Thank you again!\n\n", " Hi Reviewer VT7i, \n\nWe would like to follow up to see if the response addresses your concerns or if you have any further questions. We would really appreciate the opportunity to discuss this further if our response has not already addressed your concerns. Thank you again!", " Dear Reviewer t9o8,\n\nThank you for your quick response; we are happy to see that our response addresses most of your concerns. Thanks again for your valuable comments, which helped us improve our paper.", " Thanks for the rebuttal, which has addressed most of my questions. I'm happy to see the manuscript improved by my comments.", " We sincerely appreciate all reviewers for their insightful and constructive feedback. According to these comments, we have improved the paper (new PDF uploaded) and highlighted the main changes in blue text. Below, we summarize all changes:\n\n1. Revised the caption of Figure 1 (Reviewer t9o8)\n\n2. 
Detailed the choice of the bandwidth $\\sigma$ and provided some empirical guidance in Appendix F.3.1 (Reviewers t9o8 and 13Sq)\n\n3. Fixed typos in Figure 3 (Reviewer t9o8)\n\n4. Added new experimental results and the discussion of mixReg with label noise in Section 4.4 (main paper) and Appendix F.4 (Reviewer t9o8)\n\n5. Provided new experimental results and the discussion of the compatibility of mixReg with prior methods in analysis I of Section 4.4 (main paper) and Appendix F.1 (Reviewers 13Sq and VT7i)\n\n6. Added more discussions about the comparison between mixReg and mixup and its variants in Appendix A.4 and Section 5 Related Work (Reviewer VT7i).\n\n7. Revised some claims to make them more precise (Reviewers t9o8, 13Sq, VT7i)\n\n8. Added more discussions about limitations and future work in Appendix G (Reviewer VT7i)\n\n9. Fixed other typos in both the main paper and Appendix.\n\n10. Since more results and discussions are included in the main paper, we put all empirical analytical experiments in Section 4.4. Accordingly, some Appendix indexes have been changed. We further simplify some descriptions in Section 2 to make the main paper 9 pages. \n", " > **Q7**: The authors have missed citing several state-of-the-art mixup works such as PuzzleMix [2], Co-Mixup [3], SaliencyMix [4], AlignMixup [5], StyleMix [6], StyleCutMix [6], AutoMix [7] etc. It would be nice to discuss these papers in the related work section.\n\n**A7**: Thank you for pointing out these relevant references. We have added and discussed these papers in the related work section. We would also like to point out that mixReg is a complementary approach to these mixup variants, where mixReg changes the probabilities of sampling mixing pairs instead of changing the way to mixing. We further conduct compatibility analysis of mixReg on three representative large-scale regression datasets: Exchange-Rate (time-series prediction), PovertyMap (image regression), and Echo (video regression). We report the results in Table R2, where we basically inject mixReg to three representative mixup variants (PuzzleMix, CutMix, AutoMix). The results validate the compatibility of mixReg with these prior methods. We’ve added these new experimental results and discussion of compatibility in Section 4.4 (analysis I) and Appendix F.1 of the revised paper.\n\n**Table R2**: Compatibility analysis of mixReg. $\\downarrow$: the smaller the better; $\\uparrow$: the larger the better.\n\n| Model | | Exchange-Rate | Echo | PovertyMap |\n|-----------|---------|---------------|-------|------------|\n| | | RMSE $\\downarrow$ | RMSE $\\downarrow$ | Worst R $\\uparrow$ |\n| CutMix | | 0.0264 $\\pm$ 0.0049 | 5.405 $\\pm$ 0.069 | 0.49 $\\pm$ 0.05 |\n| | **+mixReg** | **0.0240 $\\pm$ 0.0021** | **5.161 $\\pm$ 0.062** | **0.52 $\\pm$ 0.06** |\n| PuzzleMix | | 0.0254 $\\pm$ 0.0027 | 5.368 $\\pm$ 0.095 | 0.47 $\\pm$ 0.03 |\n| | **+mixReg** | **0.0233 $\\pm$ 0.0012** | **5.206 $\\pm$ 0.063** | **0.50 $\\pm$ 0.04** |\n| AutoMix | | 0.0242 $\\pm$ 0.0033 | 5.525 $\\pm$ 0.055 | 0.50 $\\pm$ 0.06 |\n| | **+mixReg** | **0.0228 $\\pm$ 0.0014** | **5.239 $\\pm$ 0.037** | **0.53 $\\pm$ 0.06** |\n\n---\n\n\n\n**Reference**\n\n[Ouyang et al., Nature 2020] Ouyang, David, Bryan He, Amirata Ghorbani, Neal Yuan, Joseph Ebinger, Curtis P. Langlotz, Paul A. Heidenreich et al. \"Video-based AI for beat-to-beat assessment of cardiac function.\" Nature 580, no. 7802 (2020): 252-256.\n\n[Yeh et al. 
Nature Communication 2020] Yeh, Christopher, Anthony Perez, Anne Driscoll, George Azzari, Zhongyi Tang, David Lobell, Stefano Ermon, and Marshall Burke. \"Using publicly available satellite imagery and deep learning to understand economic well-being in Africa.\" Nature communications 11, no. 1 (2020): 1-11.\n\n[Koh et al. ICML 2021] Koh, Pang Wei, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu et al. \"Wilds: A benchmark of in-the-wild distribution shifts.\" In International Conference on Machine Learning, pp. 5637-5664. PMLR, 2021.\n\n[Huang et al. NeurIPS 2021] Huang, Kexin, Tianfan Fu, Wenhao Gao, Yue Zhao, Yusuf Roohani, Jure Leskovec, Connor W. Coley, Cao Xiao, Jimeng Sun, and Marinka Zitnik. \"Therapeutics Data Commons: Machine Learning Datasets and Tasks for Drug Discovery and Development.\" In NeurIPS Datasets and Benchmarks. 2021.\n", " > **Q5**: Continuing from the above point, interpolating between semantically similar points in the data manifold could lead to a problem of manifold intrusion - Given two examples with different class labels, the interpolated example may actually lie in a region associated with a third class in the feature space. Im curious to know if the authors face this problem in MixReg, if not, what is being done to avoid manifold intrusion.\n\n\n**A5**: MixReg specifically aims to interpolate examples with closer labels, which mitigates the effects of manifold intrusion to some extent. Here, we use one example to illustrate it:\n\nIn this example, we aim to predict the angle of rotation of an object. Given different colors of background that take a large portion of pixels in an image, we consider two pairs of instances: pair 1 [(angle=50, color=yellow), (angle=50.5, color=blue)], pair 2 [(angle=50, color=yellow), (angle=100, color=yellow)]. If we consider the input feature similarity (similarity in x), examples in pair 2 will have a much smaller distance, and therefore will be mixed with a higher probability than pair 1. In this case, the manifold intrusion happens: we obtain a mixed image with a position that has no semantic meaning. In contrast, if we consider the label similarity (similarity in y), then examples in pair 1 will have a smaller distance and be mixed with a higher probability, leading to a new image with an angle between 50 and 50.5 with a different color. In this example, mixReg alleviates the manifold intrusion problem.\n\nDespite mixReg's ability to mitigate manifold intrusion, we believe there is still a lot of room for further investigation of manifold intrusion in regression and plan to leave it for future work, which is added in our discussion of limitations, i.e., Appendix G of the revised paper.\n\n---\n\n> **Q6**: Could the authors evaluate their work on large scale tasks such as semantic segmentation which is also a pixel-wise regression task on Pascal-VOC or MS-COCO? How does one apply mixup to such kind of tasks in general?\n\n**A6**: The paper already evaluated mixReg on large-scale datasets, such as Echocardiogram Video [Ouyang et al., Nature 2020], PovertyMap [Yeh et al., Nature Communication 2020], and DTI [Huang et al., NeurIPS 2021]. Specifically, Echocardiogram Video is one of the largest public labeled medical video datasets for cardiac function assessments, including 10,030 apical-4-chamber echocardiography videos. PovertyMap is a regression dataset in the Wilds benchmark [Koh et al. ICML 2021], aiming to estimate global-scale poverty with satellite images. 
Drug-target Interaction is the only domain shift task in Therapeutics Data Commons — one of the largest public drug discovery benchmarks. We’ve revised the last paragraph of the introduction to highlight these datasets.\n\nIn terms of semantic segmentation, mixup has not been widely used there to our knowledge. We leave this task for future work (Appendix G of the revised paper).", " Thank you for your constructive comments and suggestions. We have revised our paper according to your comments. We respond to your questions below and would appreciate it if you could let us know if our response addresses your concerns.\n\n> **Q1**: I did not quite understand the difference between the sampling probability and the interpolation factor $\\lambda$ typically used in mixup papers.\n\n**A1**: The entire mixup process includes three stages:\n- Stage I: Sample two instances ($x_i$, $y_i$), ($x_j$, $y_j$) from the training set\n- Stage II: Sample the interpolation factor $\\lambda$ from the Beta distribution $\\mathrm{Beta}(\\alpha, \\alpha)$\n- Stage III: Mix the sampled instances with interpolation factor $\\lambda$ according to the following mixing formulation:\n\n$x_{mix}=\\lambda x_i + (1-\\lambda) x_j, \\quad y_{mix}=\\lambda y_i + (1-\\lambda) y_j, \\quad \\lambda \\sim \\mathrm{Beta}(\\alpha, \\alpha)$.\n\nIn the original mixup, the interpolation factor $\\lambda$ sampled in stage II controls how to mix these two instances. mixReg instead manipulates stage I, so that pairs with closer labels are more likely to be sampled (a minimal sketch of this modified stage I is included below). In the revised version, we have added the above discussion in Appendix A.4 and mentioned it in Section 3.1 of the main paper.\n\n---\n\n\n> **Q2**: In L133, the authors say that it's meaningful to use $d(i,j)=\\|y_i-y_j\\|^2$. However, what is the dimension of $y$? Is it a one-hot vector? This is quite unclear again.\n\n**A2**: $y$ is a vector with continuous values. We’ve clarified it in Line 129 of Section 3.1 in the revised paper. As mentioned in Line 130-132, the dimension of $y$ is typically small compared to the input feature dimension, and sometimes equal to 1 (e.g., PovertyMap, DTI). \n\n---\n\n> **Q3**: Instead of using the $L_2$ distance in the label space, how about using cosine similarity in lower-dimensional embeddings?\n\n**A3**: We conduct new experiments using cosine similarity in low-dimensional embeddings (i.e., hidden representations) and report the results below in Table R1. We also report the performance of mixReg and of using the $L_2$ distance in lower-dimensional embeddings (copied from analysis II in Section 4.4 of the revised paper) for comparison.\n\n**Table R1**: Comparison between label similarity and cosine representation similarity. $\\downarrow$ means the smaller the better and $\\uparrow$ means the larger the better.\n\n| Model | Exchange-Rate | ShapeNet1D | DTI |\n|-----------------------------|---------------|------------|-------|\n| | RMSE $\\downarrow$ | MSE $\\downarrow$ | Avg. R $\\uparrow$ |\n| Euclidean distance on low-dim representation | 0.0213 $\\pm$ 0.0006 | 4.202 $\\pm$ 0.078 | 0.483 $\\pm$ 0.001 |\n| Cosine distance on low-dim representation | 0.0209 $\\pm$ 0.0012 | 4.411 $\\pm$ 0.081 | 0.477 $\\pm$ 0.004 |\n| **mixReg (ours)** | **0.0203 $\\pm$ 0.0011** | **4.024 $\\pm$ 0.081** | **0.498 $\\pm$ 0.008** |\n\nAccording to the results, using label similarity (i.e., mixReg) still performs better than using either the $L_2$ distance or cosine similarity in lower-dimensional hidden representations, corroborating the efficacy of mixReg. 
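To make the three stages and mixReg's modified stage I concrete, here is a minimal NumPy sketch (our own illustration rather than the paper's code; scalar labels and a Gaussian kernel with bandwidth `sigma`, as in Eq. 6, are assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixreg_pair(X, y, alpha=2.0, sigma=1.0):
    """One mixReg draw: stage I reweighted by label closeness; stages II-III as in mixup."""
    n = len(y)
    i = rng.integers(n)                    # first instance, drawn uniformly
    d2 = (y - y[i]) ** 2                   # squared label distances d(i, j)
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian-kernel sampling weights
    w[i] = 0.0                             # do not pair a point with itself
    j = rng.choice(n, p=w / w.sum())       # stage I: closer labels => higher probability
    lam = rng.beta(alpha, alpha)           # stage II: interpolation factor
    x_mix = lam * X[i] + (1 - lam) * X[j]  # stage III: convex combination
    y_mix = lam * y[i] + (1 - lam) * y[j]
    return x_mix, y_mix
```

In vanilla mixup, stage I would instead draw `j` uniformly; everything else is unchanged.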
We have added the comparison between mixReg and using cosine similarity on hidden representations in Appendix F.2.\n\n--- \n\n> **Q4**: In the original mixup paper [62], the authors show that interpolating examples from semantically similar manifolds leads to sub-optimal performance and the best performance is achieved by randomly mixing samples from the data manifold. However, in MixReg, the authors claim that mixup samples from semantically similar data manifold results in the best performance. Could the authors please explain its significance?\n\n**A4**: The original mixup paper focuses on classification datasets. As discussed in Line 31-35 and Figure 1, in regression, randomly mixing examples may be easier to generate semantically wrong labels compared to classification. Intuitively, linearly mixing one-hot labels in classification is easy to generate semantically meaningful artificial labels, where the mixed label represents the probabilities of mixed examples to some extent. While in regression, the mixed labels may be semantically meaningless (e.g., pairs 2 and 3 in Figure 1) and more significantly affect the performance. By mixing examples with closer labels, mixReg mitigates the influence of semantically wrong labels and improves the in-distribution and task generalization in regression. Additionally, mixReg further shows its superiority in improving out-of-distribution robustness in regression, which is not discussed in the original mixup paper. We’ve added the above discussion in Appendix A.4 of the revised paper. In our submission, we justified these advantages of mixReg over vanilla mixup from both theoretical and empirical perspectives.\n", " Thank you for reviewing our paper and for your valuable feedback. Below, we address your concerns point by point and we’ve revised our paper according to your suggestions. We would appreciate it if you could let us know whether your concerns are addressed by our response.\n\n> **Q1**: Details of selecting mixup samples according to Eq. (6) are vague. There might be some thresholds to determine the intra-cluster samples based on Eq. (6). \n\n**A1**: Given an example ($x_i$, $y_i$), we did not set a threshold to select another example ($x_j$, $y_j$) in addition to Eq. (6). As shown in Line 6 of Algorithm 1, all examples conceptually have probabilities to be sampled as ($x_j$, $y_j$), but label closer examples have higher probabilities. The sampling distribution is controlled by the bandwidth $\\sigma$. We clarified it in Line 136-138 of the revised paper.\n\n---\n\n> **Q2**: the way to determine the bandwidth hyper-parameter is not clear. \n\n**A2**: As mentioned in the first paragraph of Section 4 of the revised paper, we determine the bandwidth $\\sigma$ by performing grid search and applying cross-validation. As shown in Section 4.4 (analysis III) and Appendix F.3.1 of the revised paper, mixReg yields a good model for a wide range of bandwidths. Here, we recommend practitioners to try [0.01, 0.1, 1, 10, 100] if the computational resources are limited, which typically brings relatively satisfactory performance. We have revised Appendix C.2 and Appendix F.3.1 to highlight this guidance. \n\n---\n\n> **Q3**: Although the author performs extensive experiments on many datasets, more mixup methods with various mixing policies should be compared.\n\n**A3**: Our proposed approach is a complementary method to the original mixup and its variants (e.g., CutMix, AutoMix, PuzzleMix). 
In mixReg, we basically change the sampling probability of mixing pairs, where examples with closer labels are more likely to be mixed. Other mixup variants (e.g., CutMix, AutoMix, PuzzleMix) instead focus on how to interpolate two examples. To further evaluate the compatibility of mixReg, we run new experiments on Exchange-Rate (time-series prediction), PovertyMap (image regression), and Echo (video regression) by incorporating mixReg with three representative mixup variants. We report the results in Table R1.\n\n**Table R1**: Compatibility analysis of mixReg. $\\downarrow$: the smaller the better; $\\uparrow$: the larger the better. The performances on ERM, mixup, mixup+mixReg are also reported for comparison.\n\n| Model | | Exchange-Rate | Echo | PovertyMap |\n|-----------|---------|---------------|-------|------------|\n| | | RMSE $\\downarrow$ | RMSE $\\downarrow$ | Worst R $\\uparrow$ |\n| ERM | | 0.0236 $\\pm$ 0.0031 | 5.402 $\\pm$ 0.024 | 0.50 $\\pm$ 0.07 |\n| mixup | | 0.0242 $\\pm$ 0.0043 | 5.393 $\\pm$ 0.040 | 0.46 $\\pm$ 0.03 |\n| | **+mixReg** | **0.0203 $\\pm$ 0.0011** | **5.177 $\\pm$ 0.036** | **0.51 $\\pm$ 0.07** |\n| CutMix | | 0.0264 $\\pm$ 0.0049 | 5.405 $\\pm$ 0.069 | 0.49 $\\pm$ 0.05 |\n| | **+mixReg** | **0.0240 $\\pm$ 0.0021** | **5.161 $\\pm$ 0.062** | **0.52 $\\pm$ 0.06** |\n| PuzzleMix | | 0.0254 $\\pm$ 0.0027 | 5.368 $\\pm$ 0.095 | 0.47 $\\pm$ 0.03 |\n| | **+mixReg** | **0.0233 $\\pm$ 0.0012** | **5.206 $\\pm$ 0.063** | **0.50 $\\pm$ 0.04** |\n| AutoMix | | 0.0242 $\\pm$ 0.0033 | 5.525 $\\pm$ 0.055 | 0.50 $\\pm$ 0.06 |\n| | **+mixReg** | **0.0228 $\\pm$ 0.0014** | **5.239 $\\pm$ 0.037** | **0.53 $\\pm$ 0.06** |\n\n\nThe results indicate that (1) compared to mixup, some powerful inter-class mixup policies (e.g., PuzzleMix) improve the performance on part of regression tasks, e.g., Echo. These approaches may also yield worse performances than ERM in other datasets, e.g., Exchange-Rate; (2) integrating mixReg on these mixup-based variants performs better than their vanilla versions, showing the compatibility and complementarity of mixReg to the existing mixup-based approaches in regression. We have revised our paper to include the compatibility analysis of mixReg (see analysis I in Section 4.4 and Appendix F.1).\n", " > **Q7**: Can mixReg be applied to more complex and real-world natural tasks (eg, counting or pose estimation)? Mixup has shown its effectiveness on various classification tasks such as ImageNet. This paper only presents the experiments on some simple datasets, such as Shape1D and etc.\n\n**A7**: The paper submission already evaluates mixReg on many large-scale real-world datasets, including Echocardiogram Video (Echo), PovertyMap, and Drug-target Interaction (DTI). Concretely, Echocardiogram Video includes 10,030 apical-4-chamber echocardiography videos and is one of the largest public labeled medical video datasets for cardiac function assessments [Ouyang et al., Nature 2020]. PovertyMap is a regression dataset in the WILDS benchmark [Koh et al. ICML 2021], which captures the real-world distribution shifts [Yeh et al., Nature Communication 2020]. Drug-target Interaction is the only domain shift dataset in Therapeutics Data Commons [Huang et al., NeurIPS 2021], modeling the real-world drug-target interactions. 
We’ve revised our introduction and highlighted these datasets in Line 72-75 of the introduction (revised version).\n\n---\n\n> **Q8**: Will the code be publicly available?\n\n**A8**: Yes, we will release the code upon publication.\n\n---\n\n\n**Reference**\n\n[Yin et al. ICLR 2020] Yin, Mingzhang, George Tucker, Mingyuan Zhou, Sergey Levine, and Chelsea Finn. \"Meta-learning without memorization.\" ICLR 2020.\n\n[Ouyang et al., Nature 2020] Ouyang, David, Bryan He, Amirata Ghorbani, Neal Yuan, Joseph Ebinger, Curtis P. Langlotz, Paul A. Heidenreich et al. \"Video-based AI for beat-to-beat assessment of cardiac function.\" Nature 580, no. 7802 (2020): 252-256.\n\n[Yeh et al. Nature Communication 2020] Yeh, Christopher, Anthony Perez, Anne Driscoll, George Azzari, Zhongyi Tang, David Lobell, Stefano Ermon, and Marshall Burke. \"Using publicly available satellite imagery and deep learning to understand economic well-being in Africa.\" Nature communications 11, no. 1 (2020): 1-11.\n\n[Koh et al. ICML 2021] Koh, Pang Wei, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu et al. \"Wilds: A benchmark of in-the-wild distribution shifts.\" In International Conference on Machine Learning, pp. 5637-5664. PMLR, 2021.\n\n[Huang et al. NeurIPS 2021] Huang, Kexin, Tianfan Fu, Wenhao Gao, Yue Zhao, Yusuf Roohani, Jure Leskovec, Connor W. Coley, Cao Xiao, Jimeng Sun, and Marinka Zitnik. \"Therapeutics Data Commons: Machine Learning Datasets and Tasks for Drug Discovery and Development.\" In NeurIPS Datasets and Benchmarks. 2021.\n", " Thank you for your valuable feedback to help us improve our paper. We have revised our paper based on your feedback. We detail our response below and please kindly let us know if our response addresses your concerns.\n\n> **Q1**: lines 50-56 are hard to be understood. Does Figure 1(a) only present the ShapeNet1D Pose Prediction task? At the same time, why do pair 1 and pair 3 have close input similarities? \n\n**A1**: Figure 1(a) only presents the ShapeNet1D Pose prediction task. We calculate the input feature similarities of pair 1, pair 2, and pair 3 as $1.51\\times 10^5$, $1.82\\times 10^5$, and $1.50\\times 10^5$, respectively, where the results indicate that pairs 1 and 3 have close input similarities. We have revised the caption of Figure 1 to clarify it.\n\n---\n\n> **Q2**: Besides, why is the regression task more sensitive to noise than the classification task? Is there any literature to support this claim, and what's noise in the whole paper (Can an image have noise?)?\n\n**A2**: We agree that this claim is based on our intuition. Intuitively, labels in classification are discrete and there are margins between classes, where subtle feature noises may not result in the change of labels. While labels in regression tasks are continuous, feature noises are more likely to lead to label changes. We have revised our paper and removed this sentence to make the introduction more precise, which does not affect our motivation and conclusion.\n\n---\n\n> **Q3**: The choice of bandwidth. I first appreciate the ablation study of bandwidth sigma in section 4.5, but it has not convinced me of the claims that mixReg reduces the efforts to tune the bandwidth. For example, the label of ShapeNet1D has a range of [0, 360], and the other datasets may have a much different value range. How to deal with this problem to choose the correct sigma? \n\n**A3**: Thanks for pointing it out. 
We have revised our claim to make it more precise: mixReg reduces the efforts to tune the bandwidth $\\sigma$ for every specific dataset. Roughly tuning the bandwidth in the range [0.01, 0.1, 1, 10, 100] is sufficient to get a relatively satisfying performance. In our experiments, we perform a grid search and apply cross-validation to find the best value of bandwidth. We have added an empirical discussion about how to pick bandwidth in Appendix F.3.1 of the revised paper.\n\n---\n\n\n> **Q4**: Why is the output normalized to [0, 10] in line 270?\n\n**A4** In Pascal3D, as mentioned in Appendix D.1 of the initial submission, we follow [Yin et al. ICLR 2020] to preprocess the data, where the labels are normalized to [0, 10]. We have clarified it in Line 264-265 in the revised paper.\n\n---\n\n\n> **Q5**: The xlabels of Figure 3 are incorrect.\n\n**A5**: Thanks for pointing this out. We have fixed the typo and updated Figure 3 in the revised paper.\n\n---\n\n> **Q6**: An essential property of mixup is the robustness of label noise. As mixReg uses the distances between labels to select the pairs, will the label noise harm the generalization of mixReg?\n\n**A6**: We conduct experiments to investigate the robustness of mixReg to label noise, which is also added in Appendix F.4 of the revised paper. Specifically, we inject Gaussian noises into the labels. For each dataset, the noise is set as 30% of the standard deviation among the corresponding original labels, where adding noise significantly degrades the performance compared with clean data. In Table R1, we report the results and the corresponding noise distributions on Exchange-Rate, ShapeNet1D, and DTI, respectively. \n\n**Table R1**: Robustness analysis to label noise. Here, $\\downarrow$ denotes the smaller the better; $\\uparrow$ denotes the larger the better.\n\n| Model | Exchange-Rate | ShapeNet1D | DTI |\n|---------------|---------------|-------|------------|\n| | RMSE $\\downarrow$ | MSE $\\downarrow$ | Avg. R $\\uparrow$ |\n| Noise Type | $\\mathcal{N}(0, 1.18\\times10^{-3})$ | $\\mathcal{N}(0, 0.874)$ | $\\mathcal{N}(0, 7.59\\times10^{-3})$ |\n| ERM/MAML | 0.0381 $\\pm$ 0.0014 | 5.553 $\\pm$ 0.098 | 0.334 $\\pm$ 0.018 |\n| mixup/MetaMix | 0.0375 $\\pm$ 0.0017 | 5.329 $\\pm$ 0.101 | 0.307 $\\pm$ 0.021 |\n| **mixReg (ours)** | **0.0360 $\\pm$ 0.0013** | **5.185 $\\pm$ 0.096** | **0.356 $\\pm$ 0.013** |\n\n\nAccording to Table R1, with the addition of label noise, we observe that mixReg still improves the performance over ERM and vanilla mixup, showing its effectiveness and robustness to label noise.\n\n\n\n\n", " This paper proposes a simple method for regression tasks. It improves the mixup data augmentation by selectively interpolating examples with similar labels, or re-weighting the probability of mixup pairs. Extensive experiments on various data modalities have been conducted to validate its effectiveness. ### Strengths\n* clear motivation and good presetation;\n* Extensitive expermients;\n\n### Weaknesses\nI have not found the critical drawbacks of this paper. However, I have not carefully checked the correctness of sections 3.2, 3.3, and 3.4. In my opinion, section 3.1 has thoroughly presented the proposed method, and the rest provides little information but harmed the readability.\n 1. lines 50-56 are hard to be understood. Does Figure 1(a) only present the ShapeNet1D Pose Prediction task? At the same time, why do pair 1 and pair 3 have close input similarities? 
Besides, why is the regression task more sensitive to noise than the classification task? Is there any literature to support this claim, and what's noise in the whole paper (Can an image have noise?)?\n\n2. The choice of bandwidth. I first appreciate the ablation study of bandwidth sigma in section 4.5, but it has not convinced me of the claims that mixReg reduces the efforts to tune the bandwidth. For example, the label of ShapeNet1D has a range of [0, 360], and the other datasets may have a much different value range. How to deal with this problem to choose the correct sigma? Why is the output normalized to [0, 10] in line 270? The xlabels of Figure 3 are incorrect.\n\n3. An essential property of mixup is the robustness of label noise. As mixReg uses the distances between labels to select the pairs, will the label noise harm the generalization of mixReg?\n\n4. Can mixReg be applied to more complex and real-world natural tasks (eg, counting or pose estimation)? Mixup has shown its effectiveness on various classification tasks such as ImageNet. This paper only presents the experiments on some simple datasets, such as Shape1D and etc.\n\n5. Will the code be publicly available? See above.", " This paper proposed a mixup algorithm (mixReg) to improve generalization on regression tasks. Different from mixup methods for classification tasks, the proposed method adjusts the sampling probability based on the similarity of labels. Theoretical analysis shows that the proposed mixReg can achieve a lower mean square error and improve in-distribution generalization, task generalization, and out-of-distribution robustness. Experiments on eleven datasets prove the improvements of the proposed mixReg. ### Strengths\n\n(1) This paper studies an interesting problem that applies mixup augmentations to general regression tasks. Since data augmentations in regression tasks are rarely studied, the proposed mixReg is the first work that introduces the general regression augmentation strategy.\n\n(2) The proposed method in three sceneries is well supported by theoretical analysis and proofs.\n\n(3) Extensive experiments verify the effectiveness of the proposed mixReg from three aspects.\n\n### Weakness\n\n(1) Details of selecting mixup samples according to Eq. (6) are vague. There might be some thresholds to determine the intra-cluster samples based on Eq. (6). Meanwhile, the way to determine the bandwidth hyper-parameter is not clear. It is better to provide an empirical range of bandwidths for easy implementation in various scenarios.\n\n(2) Although the author performs extensive experiments on many datasets, more mixup methods with various mixing policies should be compared. For example, CutMix [1] can be performed on 2D images and 1D time series (replacing the segment). See questions for more details. Why mixReg work? The proposed mixReg only takes the intra-cluster (semantically similar) samples to perform vanilla mixup. Although the theoretical analysis proves the superior of mixReg upon vanilla mixup (and its variants) and mixup with input-similarity, the compared methods usually yield worse performances than ERM. The cause of bad performances of vanilla mixup might be the unreliable inter-cluster samples, and mixReg improves the mixup performance by removing these samples (similar to AdaMixup [2]). However, mixReg might degenerate to ERM with the large sample size N and the small bandwidth. What if we have a better inter-class mixup policy like PuzzleMix [3] and AutoMix [4]? 
We might transform regression tasks into classification tasks by splitting the regression targets into C bins and training the classifier to generate more reliable inter-class mixup samples [3, 4].\n\n[1] CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features. In ICCV, 2019.\n\n[2] MixUp as Locally Linear Out-Of-Manifold Regularization. In AAAI, 2019.\n\n[3] Puzzle Mix: Exploiting Saliency and Local Statistics for Optimal Mixup. In ICML, 2020.\n\n[4] AutoMix: Unveiling the Power of Mixup for Stronger Classifiers. In ECCV, 2022.\n\n\n================ Post-rebuttal ================\n\nSince the author has addressed my concerns and updated the revision according to my comments during the rebuttal period, I change my score from 5 to 6. The author discusses the limitations of this paper in the appendix, which might be solved by further experiments. I do not find any negative social impact of this paper.", " In this work, the authors propose to use mixup to improve generalization in regression tasks. They argue that when mixup is directly using in regression, it may generate arbitrary labels which are incorrect as the linear assumption may not hold. The authors aim to overcome this drawback by adjusting the sampling probability based on the similarity of the labels. They further show that this label similarity obtains smaller mean square error and also improves out-of-distribution robustness. Interpolating examples with similar labels mitigates the effect of domain-specific information and pushes invariant representations. The authors evaluate their work on several benchmarks and show its effectiveness on in-distribution generalization, task generalization and out-of-distribution robustness. #### **Strengths**\nThe following are some strengths of the work\n- The authors argument that when mixup is directly applied to regression task generates incorrect labels due to the failure of the linear assumption rule is quite interesting. Its also quite interesting that mixup in regression tasks has been understudied and the authors aim to bridge this gap.\n- The experimental analysis is quite exhaustive and the authors show the performance of MixReg on many different benchmarks and in-distribution generalization, task generalization and out-of-distribution robustness.\n- The paper is also quite clear to read and is well presented. #### **Weakness**/ **Questions**\n\nI appreciate the authors efforts in developing an interesting idea. However, I have the following concerns/ queries which I would like the authors to answer:\n- I quite did not understand the difference between sampling probability and the interpolation factor $\\lambda$ typically used in mixup papers.\n- In L133, the authors say that its meaningful to use $d(i,j) = ||y_i - y_j||^2$. However, what is the dimension of $y$? is it a one-hot vector. This is quite unclear again. Also instead of using the $L_2$ distance, how about using cosine similarity in lower-dimensional embeddings instead of $L_2$ distance in the label space?\n- In the original mixup paper [62], the authors show that interpolating examples from semantically similar manifolds leads to sub-optimal performance and the best performance is achieved by randomly mixing samples from the data manifold. However, in MixReg, the authors claim that mixup samples from semantically similar data manifold results in the best performance. 
Could the authors please explain its significance?\n- Continuing from the above point, interpolating between semantically similar points in the data manifold could lead to a problem of manifold intrusion [1] - Given two examples with different class labels, the interpolated example may actually lie in a region associated with a third class in the feature space. Im curious to know if the authors face this problem in MixReg, if not, what is being done to avoid manifold intrusion. \n[1] Guo et al., Mixup as locally linear out-of-manifold regularization. In AAAI, 2019. \n\n- Could the authors evaluate their work on large scale tasks such as semantic segmentation which is also a pixel-wise regression task on Pascal-VOC or MS-COCO? How does one apply mixup to such kind of tasks in general?\n- The authors have missed citing several state-of-the-art mixup works such as PuzzleMix [2], Co-Mixup [3], SaliencyMix [4], AlignMixup [5], StyleMix [6], StyleCutMix [6], AutoMix [7] etc. It would be nice to discuss these papers in the related work section. \n\n[2] Kim et al., Puzzle mix: Exploiting saliency and local statistics for optimal mixup. ICML 2020. \n[3] Kim et al., Co-mixup: Saliency guided joint mixup with supermodular diversity. ICLR 2021. \n[4] Uddin et al. Saliencymix: A saliency guided data augmentation strategy for better regularization. ICML 2021. \n[5] Venkataramanan et al., AlignMixup: Improving Representations By Interpolating Aligned Features. CVPR 2022. \n[6] Hong et al., Stylemix: Separating content and style for enhanced data augmentation. CVPR, 2021. \n[7] Zhu et al., Automix: Mixup networks for sample interpolation via cooperative barycenter learning. ECCV, 2020\n Yes" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "SNqVKhQsEzC", "50SxbIVYjWO", "9YZGZLb9R9S", "W0ftybwZaWHR", "QKossfDUXj", "BxrqyNi1Sg6", "FU2cMGbgxjg", "FdhMe_kM15Q", "FU2cMGbgxjg", "FdhMe_kM15Q", "nuoFcIFFhfa", "BZfmljPbEvc", "nips_2022_BgMz5LHc07R", "b7QcuAwTILhJ", "mz-7NozIMjF", "TX27y5JcMb2", "fszTYQufrlX", "aCGl7ONPfG", "cJ-oh_4QhQd", "nips_2022_BgMz5LHc07R", "nips_2022_BgMz5LHc07R", "nips_2022_BgMz5LHc07R" ]
nips_2022_dUYLikScE-
Infinite-Fidelity Coregionalization for Physical Simulation
Multi-fidelity modeling and learning are important in physical-simulation-related applications. They can leverage both low-fidelity and high-fidelity examples for training, so as to reduce the cost of data generation while still achieving good performance. While existing approaches only model finite, discrete fidelities, in practice, the feasible fidelity choice is often infinite, which can correspond to a continuous mesh spacing or finite element length. In this paper, we propose Infinite Fidelity Coregionalization (IFC). Given the data, our method can extract and exploit rich information within infinite, continuous fidelities to bolster the prediction accuracy. Our model can interpolate and/or extrapolate the predictions to novel fidelities that are not covered by the training data. Specifically, we introduce a low-dimensional latent output as a continuous function of the fidelity and input, and multiply it with a basis matrix to predict high-dimensional solution outputs. We model the latent output as a neural Ordinary Differential Equation (ODE) to capture the complex relationships within, and integrate information throughout, the continuous fidelities. We then use Gaussian processes or another ODE to estimate the fidelity-varying bases. For efficient inference, we reorganize the bases as a tensor and use a tensor-Gaussian variational posterior approximation to develop a scalable inference algorithm for massive outputs. We show the advantage of our method in several benchmark tasks in computational physics.
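In code, the modeling idea reads roughly as follows (a toy schematic of our own, not the paper's implementation: a small random-weight network stands in for the learned ODE dynamics, and a fixed random matrix stands in for the estimated basis):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_latent, d_out = 3, 8, 100            # input, latent, and solution-field sizes
W1 = rng.normal(0.0, 0.3, (32, d_latent + d_in + 1))
W2 = rng.normal(0.0, 0.3, (d_latent, 32))
A = rng.normal(0.0, 0.3, (d_out, d_latent))  # basis matrix (held fixed in this toy)

def phi(m, h, x):
    """ODE dynamics of the latent output: dh/dm = phi(m, h, x)."""
    z = np.concatenate([h, x, [m]])
    return W2 @ np.tanh(W1 @ z)

def predict(x, m, steps=100):
    """Integrate the latent ODE from fidelity 0 to m, then project to the solution field."""
    h = np.zeros(d_latent)                   # latent state at the lowest fidelity
    dm = m / steps
    for k in range(steps):                   # forward Euler for simplicity
        h = h + dm * phi(k * dm, h, x)
    return A @ h                             # y(m, x) = A h(m, x)

y = predict(x=np.array([0.5, -0.2, 1.0]), m=0.7)  # prediction at a continuous fidelity
```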
Accept
The paper tackles the multi-fidelity simulation problem by modeling the grid variation with NODE, coupled with a GP. Experiments on multiple physical simulators show better performance compared to baselines. Please also report computational efficiency and sample complexity in the final version.
train
[ "Sv3hSba0yMM", "_pAdCb7qtfhE", "h7tigo9ElCrf", "vguisfp_yB", "gpmjtArrWw", "857-h_swPh", "3oLfFVWj9ja", "xhi45D5WSH", "C4vXmUGM1Ex", "Ow8frKfLkyK" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for answering my questions. I've raised my score based on the response.\nGood luck", " C6: Does the proposed model support varying input and output dimensions at different fidelity levels?\n\nR6: Great question. Since the input to our model is the identify information of the problem, such as PDE parameters and IC/BC parameters, which we do not assume change along with the fidelity of the solver, the input dimension is fixed in our model. Our model supports varying output dimensions. Although the prediction of our model (at any fidelity) is of a fixed dimension, we use interpolation or down-sampling to align with the actual dimensions in the data (see line 295-298). \n\n\nC7: Does the proposed model support non-subset multi-fidelity data? \n\n\nR7: Yes, our model allows an arbitrary set of inputs and outputs at each fidelity. Our model does not require that the inputs of higher fidelity examples must be a subset of the inputs at lower fidelities. \n", " Thanks for your valuable and insightful comments. Here are our responses. C: comments; R: response.\n\n\nC1: I disagree with the statement \"in practice, the fidelity choice is often continuous and infinite\". For all five experiments in the paper, the data has finite and discrete fidelities. In practice, people pre-generate the simulation data. \n\n\nR1: Thanks for the question. We mean to point out that the range or the possible choice of the fidelity in simulation is often continuous and infinite. That’s because the fidelity usually corresponds to the mesh spacing or finite-element length, which are continuous in nature. \nThis does **not** contradict to the fact that the actual simulation data only includes a finite number of fidelities, because the dataset itself can only be finite, and cannot cover infinitely possible fidelities. As an analogy, suppose an ML model includes the temperature as one feature. Obviously, the range of the temperature is continuous, and it can take infinitely many values. However, no matter how much data we collect, we can only observe finite distinct temperature values, because the dataset is always finite, which cannot cover all possible temperature values. Note that the difference between our method and existing works is in the modeling perspective, rather than in the data used. The training data can be the same. Our method models the fidelities and their relationship in the whole range (continuous space), including those appearing in the data and those not. The existing works only focus on the a few fidelities present in the data and their relationships. For such set of finite fidelities, we can index them by integers, that’s why they are viewed as “discrete”. We will highlight these to improve the clarify of our paper. \n\n\nC2: The definition of fidelity $m$ seems confusing and not consistent. In the background, looks like a discrete value 1,2,3,.... but starting from the model section, $m$ becomes a continuous value. There's no explanation of how to map a fidelity level to value. What's the definition of fidelity (continuous case)?\n\n\nR2: Thanks for the comments and question. As discussed in R1, the existing works introduced in the background focus only on the fidelities present in the data. They used integers to index this set of finite fidelities, that’s why $m=1,2,3,\\ldots$ In our work, since we notice that the fidelity is often determined by mesh-spacing, finite element length, and/or other continuous control variables in the simulation, and hence the fidelity is continuous in nature. 
In addition to the finite fidelities observed in the data, there are infinitely many other choices. Accordingly, in the model section, we use a continuous $m$ to index the fidelity. Actually, we did explain in our experiments how to map a fidelity level to the value $m$ --- we used a simple linear mapping from the mesh length (fidelity level) to the fidelity value $m$; see line 282-293 for details. Thanks for the questions. We will clarify these differences in our paper. \n\n\nC3: The Figure 4 result is interesting, especially the extrapolation part. It would be great if you could add similar figures for the other three experiments to see whether this trend happens in general. \n\n\nR3: Great suggestion. We will surely supplement the figures for the remaining experiments. \n\n\nC4: I don't see the case that the dataset has continuous and infinite fidelity. But your method, considering continuous and infinite fidelity, improves the performance. It would be great if you could explain more about the reasons. \n\n\nR4: As discussed in R1 and R2, since the dataset is finite, it can only include a finite number of fidelities, even though the range or choice of the fidelity can be continuous and infinite. The critical difference of our method from the existing works is in *the modeling perspective*, rather than the data. The dataset is the same. **The existing works only model those finite fidelities present in the data, and ignore infinitely many other fidelities. Our work models the fidelities in their entire range, which is continuous and infinite, including both those present in the data and those not.** We believe that capturing the relationships and integrating the information throughout a much *richer* set of fidelities can allow us to further improve the performance of the surrogate model. \n\n\nC5: Baselines are missing [1, 2, 3]. Including them will make the experimental results stronger. It would be great if the authors could include other metrics for accuracy (negative log-likelihood) and uncertainty quantification (Continuous Ranked Probability Score [4] or mean interval score [4,5]).\n\n\nR5: Thanks for providing the references of these excellent works! We will cite and discuss them, and supplement the comparison in our paper. We will also add evaluation results based on the other metrics you suggested. \n", " C8: It is not mentioned on which kind of grids or finite element topologies the proposed model would work.\n\n\nR8: Please see R6. \n\n\nC9: It would be helpful to explain the implications of the Gaussian assumptions made during modeling. \n\n\nR9: Great suggestion. The Gaussian likelihood (see Eqs. 8 and 10) is essentially equivalent to the square loss, the most commonly used loss function in machine learning and data science. The Gaussian prior over the basis elements (see Eq. 8) arises from the Gaussian process (GP) prior over the basis as a function of the fidelity; see Eq. 7 (any finite-dimensional projection of a GP is Gaussian). A GP is a powerful nonparametric prior over functions; it assumes the function is a realization of a Gaussian process governed by some covariance (kernel) function. It does not pre-assume any parametric form of the function, and only models the correlations between the function values. Accordingly, it can automatically capture the complexity of the function (e.g., multilinear or nonlinear) according to the data. GPs have been widely used in surrogate learning (e.g., the competing baselines (Xing et al., 2021) and (Wang et al., 2021)) and in numerous other machine learning applications. Thanks for your suggestion; a toy illustration of this GP-prior view is sketched below. 
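As a small self-contained illustration (our own toy example, with an assumed RBF kernel over a fidelity grid), sampling basis-entry trajectories from such a GP prior looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(m1, m2, ls=0.2, var=1.0):
    """RBF covariance k(m, m') over continuous fidelities."""
    return var * np.exp(-0.5 * (m1[:, None] - m2[None, :]) ** 2 / ls ** 2)

m = np.linspace(0.0, 1.0, 50)               # a grid of continuous fidelity values
K = rbf(m, m) + 1e-8 * np.eye(len(m))       # jitter for numerical stability
L = np.linalg.cholesky(K)
samples = L @ rng.normal(size=(len(m), 3))  # three sampled basis-entry curves over fidelity
```

Any finite set of fidelities then yields exactly a multivariate Gaussian, which is where the Gaussian prior over the basis elements comes from.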
We will supplement these explanations in our paper. \n\n\nC10: The authors claim that \"the surrogate model [is trained] only using low-fidelity examples, but we can still expect to obtain high-fidelity predictions, i.e., more accurate than the training data\" [L389-391]. It seems to me that there is not sufficient evidence to support this statement and I would alter or explain it for a camera-ready version. I do not fully understand how the proposed model could create, for example, higher-order frequencies that have been seen in the training phase.\n\n\nR10: Thanks for your insightful comments. We never claimed that our model can guarantee to extrapolate the prediction to higher fidelities (than training data). Even in our experiments, only IFC-ODE^2 shows good extrapolation yet IFC-GPODE does not, and we did acknowledge that (see Fig. 4 and line 385-387). We mean to point out that, if it is the case that the extrapolation gives even higher-fidelity prediction in some problem, it will be very useful because we can avoid too expensive, very high-fidelity simulations for that problem. We will alter the tone and text to express our thought more accurately. Also, thanks for the example regarding higher-order frequencies. We do agree it is challenging, but if from the training data, our ODE model can capture how the frequency varies with the increase of the fidelity (for a toy example: the frequency grows quadratically with the fidelity increase), we might still be able to infer new frequencies that have not been seen in the training data. Of course, if the frequency change has nothing to do with the fidelity change, our model will be unable to infer new frequencies because it violates our model assumption. We will supplement all these discussions about the scope and limitations of our methods. \n", " C4: What are the stochastic variables, parameters, ICs, or BCs that vary in between samples in train and test dataset and what are their distributions? It remained unclear to me whether the proposed work is an emulator of a single PDE solution at different resolutions or an emulator of the PDE solver that works at different resolutions and parameters, ICs, BCs. L249 indicates that the proposed work emulated a PDE solver, but I would need confirmation by the authors.\n\nR4: Great question. Our work is “an emulator of the PDE solver that works at different resolutions and parameters, ICs, BCs” rather than “a single PDE solution”. That is, the input to our models consists of PDE parameters and/or parameterized IC/BCs and designated fidelity; our models predict the corresponding solution field at that fidelity. These parameters and variables are sampled uniformly to generate the training and test datasets (but we ensure there is no overlap). This is the same as our competing baselines, e.g., (Wang et al., 2021). As mentioned in our paper (line 251), the data generation, including the PDE parameters, ICs/BCs, and the solvers, follow (Wang et al., 2021). The details are provided in the appendix of the paper (Wang et al., 2021). We will highlight these in our paper. Here we give a brief summary. \n\nFor Burger’s equation, the PDE parameter is the viscosity and varies from [0.001, 0.1] in the training and test samples, with the fixed IC and BC: $u(x,0) = sin(x\\pi/2)$ with the homogeneous Dirichlet boundary condition.\n\n\nFor Poisson’s equation, we use a rectangular domain and the Dirichlet boundary condition. 
The values of the four boundaries and the center of the domain are used as the input to our model, hence varying among the samples.\n\n\nFor the heat equation, we use a 2D spatial-temporal domain with the Neumann boundary condition. The input parameters to our model include the flux rate of the left boundary, the flux rate of the right boundary, and the thermal conductivity. They vary among the training and test samples. \n\n\nFor computational fluid dynamics, the problem is described by a 2D spatial-temporal domain (see line 340-342). The input parameters to our model include the tangential velocities of the four boundaries and the Reynolds number, which vary among the samples. \n\n\nFor topology structure optimization, the input parameters to our model consist of the location and angle of the load on the structure, and vary among the training and test examples (see line 326-329). The details of solving this problem are given in (Keshavarzzadeh et al., 2018) (cited in our paper in Sec. 6.2). \n\nC5: Can the authors add a complete in-/output diagram to the camera-ready version? It is unclear to me if the state with lowest- or 1-lower fidelity is used as input. \n\n\nR5: Great suggestion! We will surely add such a diagram to make our paper clearer. Actually, the state with ``lowest- or 1-lower fidelity’’ is indeed used as the input to compute the state at higher fidelities. This can be seen from the infinitesimal view of our ODE model: see Eq. 4, the unlabeled equation under Eq. 4, and the surrounding text. We can see that the state at a higher fidelity is determined by the state at the lower fidelity (with a $\\Delta$ difference). From the holistic view, if we write down the general solution of the ODE model, it is given by\n$$h(m,\\mathbf{x}) = h(0, \\mathbf{x}) + \\int_0^m \\phi(\\tau, h, \\mathbf{x}) \\,\\text{d}\\tau$$\nwhere $h(0, \\mathbf{x})$ is the initial state, which corresponds to the lowest fidelity. We can see that the state at fidelity $m$ is computed from both the initial state and all the states at lower fidelities ($<m$). Therefore, the information from all the lower, continuous (infinite) fidelities is integrated to make predictions at the target fidelity. We will enrich our presentation to highlight this point (a short numerical sketch of this integral view is given right after this response). \n\nC6: What are the assumptions on the grid that are being made? \n\nR6: We actually do not make any specific assumption on the grid (position, shape, topology, size, etc.). This is up to the particular problem and the solver choice. We only assume that a change of the grid (e.g., dense/coarse) can change the fidelity, and that there can be infinite, continuous choices, e.g., based on the length of the intervals or finite elements. We believe this is a reasonable and widely applicable assumption. \n\nC7: The authors do not address limitations or potential negative social impacts. \n\n\nR7: Thanks for the reminder. In principle, our method can be used in any physical-simulation-related application. One potential negative social impact could occur if our method were used to design lethal weapons. One limitation of our method is that the training involves ODE solvers and is sequential in nature. Hence, it is not obvious how to utilize parallel computing resources to accelerate the training. We will supplement all these discussions in our paper.\n",
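To complement R5 above, here is a short numerical sketch of the integral form (our own toy example; `phi` is an arbitrary stand-in for the learned dynamics, and SciPy's `solve_ivp` plays the role of the ODE solver):

```python
import numpy as np
from scipy.integrate import solve_ivp

def phi(m, h, x):
    """Toy stand-in for the (neural-network) dynamics dh/dm = phi(m, h, x)."""
    return np.tanh(h + x) - 0.1 * m * h

x = np.array([0.3, -0.5])
h0 = np.zeros(2)                               # latent state at the lowest fidelity m = 0
sol = solve_ivp(phi, t_span=(0.0, 1.0), y0=h0, args=(x,),
                method="RK45", dense_output=True)

# one integration yields the state at *any* continuous fidelity in [0, 1],
# each state being built from all the states at lower fidelities
for m in (0.25, 0.5, 0.9):
    print(m, sol.sol(m))
```

 " Thanks for your valuable and insightful comments. Here are our responses. C: comments; R: response. 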
\n\nC1: It is still a bit unclear what exactly the contributions are if Zhe et al., 19 and Li et al., 21 have already provided scalable GPs for regression of latent outputs and bases. ... I am assuming that the paper is the first paper to do learning-based coregionalization with continuous fidelities. Section 4 needs to clarify that matrix Gaussian distribution is taken from a different paper and only applied to coregionalization... The clarity of the paper could be improved by adding a 'list of contributions' at the end of the intro.\n\nR1: Thanks for the great comments and suggestions. We do agree that the tensorization and Kronecker product properties have already been used in the prior works (as cited and acknowledged in our paper). Our contribution is obviously *not* the invention of these tricks. Instead, we believe the contribution is the novel combination of these techniques with ODE solvers (and/or adjoint state methods), to address the learning challenges of our newly proposed ODE-GP mix, which is the first model \"to do learning-based coregionalization with continuous fidelities\". We will follow your suggestions to make clarifications, and highlight the difference with the prior works, so as to make our contributions more clear. \n\nC2: The related works section is very detailed with respect to the most similar works. The related works section could be improved by mentioning a practical use-case of coregionalization, and deterministic and stochastic superresolution methods with GANs, Flows, or diffusion-based methods.\n\nR2: Great suggestion! We will add the references and discussions accordingly. \n\n\nC3: I am not sure if introducing active learning to the mix would make for a very interesting paper as it might become really complicated to use this method in practice.\n\nR3: Thanks for the concern. We believe active learning is a promising direction since it can help us to further reduce the simulation cost (in data collection) while improving the efficiency of surrogate learning. We do agree the potential complexity or challenge in developing an effective active learning approach for our models: how to design an acquisition function, and how to optimize it to find the new input and fidelity at which to query. But once the active learning approach is ready, its usage can be pretty convenient --- it just repeats three steps: identifying new query points (input and fidelity), acquiring the examples by calling off-the-shelf simulators, and retrain the surrogate model. All the three steps can be done automatically, with little human intervention. Therefore, despite the potential risk, we are still willing to study active learning in the future work. ", " Thanks for your valuable and insightful comments. Here are our responses. C: comments; R: response. \n\nC1: … besides the comparison on accuracy, it would be helpful to see the training time comparison for at least 1 of the problems … if the authors can present a comparison when using different amount of training data…\n\nR1: Great suggestions! We do agree. We will supplement the training time of all the methods, and conduct a comparison with varying the training data amount. \n\nC2: The interpolation and extrapolation study was only provided on Poisson's and Heat equations, which are problems with relatively smooth solutions and this is possibly why the model can extrapolate to even high fidelities. 
Can the authors perform interpolation and extrapolation studies on Burger's equation and CFD examples?\n\nR2: Thanks for the insightful comments and great suggestion. We do agree that the solutions of Poisson's and heat equations can be quite smooth. However, the extrapolation performance of our model mainly depends on whether our model can accurately capture the *solution variation* (not the solution itself) along the fidelity increase. For example, the solution might be relatively less smooth at both fidelity $m-\\Delta$ and fidelity $m$; however, if their change (or difference) w.r.t. the fidelity change $\\Delta$ is smooth (see Eqs. 4 and 5), it might still be relatively easy for our model to learn and utilize such change to obtain a good extrapolation. Thanks for your suggestion. We will conduct such studies on Burger’s equation and CFD problems to further investigate the performance of our method. \n\nC3: What is the Reynolds number in the CFD example? How does the algorithm perform on high Reynolds number?...\n\nR3: The Reynolds number is one input (i.e., a PDE parameter) to our model, and is sampled from [10, 500], which is consistent with the experiment conducted in the baseline work (Wang et al., 2021). We will test on larger ranges to examine the performance of our method at much higher Reynolds numbers. \n\nC4: Does the number of required layers, as well as other hyperparameters, vary for different examples? \n\nR4: For our method, we used the same hyperparameters for all the tasks in the experiments (2 hidden layers, 40 neurons per layer, tanh activation, RK45 solver, etc.). For DMF (baseline), the hyperparameters do not change except for the layer width, which we selected for different tasks following the original paper (Li et al., 2022); see line 300-301. We will highlight this point in our paper. \n", " This is an interesting paper that seeks to propose novel machine learning methods to extract information within continuous and infinite fidelities to bolster the prediction accuracy. The key novelty here is the development of an infinite-fidelity coregionalization method that introduces a low-dimensional latent output and multiplies it with a basis matrix for solution output prediction. The experimental data is comprehensive. Overall, the work is well done and provides some new methodology to this field. Strengths:\nA novel infinite-fidelity coregionalization method is proposed, which improves upon existing models with finite and discrete fidelities. The paper is well-written. Empirical results on different types of PDE problems are provided.\n\nWeaknesses:\nOne of the main advantages of multi-fidelity models is the improved learning and sampling efficiency. Therefore, besides the comparison on accuracy, it would be helpful to see a training time comparison for at least one of the problems. It would also help the readers to assess the method if the authors could present a comparison when using different amounts of training data.\n\n1. The interpolation and extrapolation study was only provided on Poisson's and Heat equations, which are problems with relatively smooth solutions, and this is possibly why the model can extrapolate to even high fidelities. 
Does the number of required layers, as well as other hyperparameters, vary for different examples? The discussions on limitations are adequate.", " The authors propose two novel methods for coregionalization, i.e., projecting multiple grid resolutions onto a common grid, similar to superresolution. While previous works assume a discrete number of resolutions, the proposed work contributes the first coregionalization with a *continuous* change in resolution. The methods combine neural-ODEs and GPs to project low-resolution inputs, parameters, and BCs onto a higher-resolution grid. The main idea is by training a neural ODE to interpolate the meshes, the NODE can learn from other meshes in the latent sstate and outperform discrete methods that learn one model per mesh. The authors provide background, methodology, and support the claims with extensive empirical results. The empirical results confirm that the proposed methods, IFC-ODE and IFC-GPODE, both outperform discrete methods for coregionalization.\n 1. Strength:\n1.1 The broader research topic of ML-based surrogate modeling of PDEs is significant to computational fluid dynamics, climate modeling, chemistry, biology, etc. The narrow topic of coregionalization, i.e., projecting data from various resolutions to a common grid, or learning from data of various resolutions is relevant to practical settings. The more narrow topic of infinite resolutions would allow for higher flexibility in using the method in practice. \n\n1.2 The authors provide extensive empirical results on five problem settings and compare to five relevant methods. While the comparative methods could have been selected broader, e.g., including GAN-, Flow-, or Diffusion-based superresolution, the selection seems sufficient. The empirical results support the claims of the paper.\n\n1.3 The author choose advanced methodology to handle GPs in high-dimensional settings. \n\n2. Weaknesses:\n2.1 It is still a bit unclear what exactly the contributions are if Zhe et al., 19 and Li et al., 21 have already provided scalable GPs for regression of latent outputs and bases. The authors acknowledge this in L218. I am assuming that the paper is the first paper to do learning-based coregionalization with continuous fidelities. Section 4 needs to clarify that matrix Gaussian distribution is taken from a different paper and only applied to coregionalization, here, as far as I understood. The clarity of the paper could be improved by adding a 'list of contributions' at the end of the intro. \n\n2.2 The related works section is very detailed with respect to the most similar works. The related works section could be improved by mentioning a practical use-case of coregionalization, and deterministic and stochastic superresolution methods with GANs, Flows, or diffusion-based methods. \n\n2.3 The method is quite complicated as it mixes matrix GPs and neural ODEs. I am not sure if introducing active learning to the mix would make for a very interesting paper as it might become really complicated to use this method in practice. \n 3.1 What are the stochastic variables, parameters, ICs, or BCs that vary in between samples in train and test dataset and what are their distributions? It remained unclear to me whether the proposed work is an emulator of a single PDE solution at different resolutions or an emulator of the PDE solver that works at different resolutions and parameters, ICs, BCs. 
L249 indicates that the proposed work emulated a PDE solver, but I would need confirmation by the authors.\n\n3.2 Can the authors add a complete in-/output diagram to the camera-ready version? It is unclear to me whether the state with the lowest or next-lower fidelity is used as input. The quality of the results would positively surprise me if, e.g., in the case of Navier-Stokes, an 8x8 solution is the input and the output is an accurate 64x64 solution. \n\n3.3 What are the assumptions on the grid that are being made?\n 4.1 The authors do not address limitations or potential negative social impacts. \n\n4.2 It is not mentioned on which kind of grids or finite element topologies the proposed model would work. I am assuming that it would only work for equispaced grids. \n\n4.3 It would be helpful to explain the implications of the Gaussian assumptions made during modeling.\n\n4.4 The authors claim that \"the surrogate model [is trained] only using low-fidelity examples, but we can still expect to obtain high-fidelity predictions, i.e., more accurate than the training data\" [L389-391]. It seems to me that there is not sufficient evidence to support this statement, and I would alter or explain it for a camera-ready version. I do not fully understand how the proposed model could create, for example, higher-order frequencies that have not been seen in the training phase. \n
\"Residual Gaussian process: A tractable nonparametric Bayesian emulator for multi-fidelity simulations.\" Applied Mathematical Modelling 97 (2021): 36-56.\n\n5. It will be great if the author could include other metrics for accuracy (negative log-likelihood) and uncertainty quantification (Continuous Ranked Probability Score [4] or mean interval score [4,5]).\n\n[4] Gneiting, Tilmann, and Adrian E. Raftery. \"Strictly proper scoring rules, prediction, and estimation.\" Journal of the American statistical Association 102.477 (2007): 359-378. \n[5] Wu, Dongxia, et al. \"Quantifying uncertainty in deep spatiotemporal forecasting.\" arXiv preprint arXiv:2105.11982 (2021).\n\n6. Does the proposed model support varying input and output dimensions at different fidelity levels?\n7. Does the proposed model support non-subset multi-fidelity data?\n\n\n No potential negative societal impact I can see." ]
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "_pAdCb7qtfhE", "h7tigo9ElCrf", "Ow8frKfLkyK", "gpmjtArrWw", "857-h_swPh", "C4vXmUGM1Ex", "xhi45D5WSH", "nips_2022_dUYLikScE-", "nips_2022_dUYLikScE-", "nips_2022_dUYLikScE-" ]
nips_2022_fzvDZ0mraPP
Giga-scale Kernel Matrix-Vector Multiplication on GPU
Kernel matrix-vector multiplication (KMVM) is a foundational operation in machine learning and scientific computing. However, as KMVM tends to scale quadratically in both memory and time, applications are often limited by these computational constraints. In this paper, we propose a novel approximation procedure coined \textit{Faster-Fast and Free Memory Method} ($\text{F}^3$M) to address these scaling issues of KMVM for tall~($10^8\sim 10^9$) and skinny~($D\leq7$) data. Extensive experiments demonstrate that $\text{F}^3$M has empirical \emph{linear time and memory} complexity with a relative error of order $10^{-3}$ and can compute a full KMVM for a billion points \emph{in under a minute} on a high-end GPU, leading to a significant speed-up in comparison to existing CPU methods. We demonstrate the utility of our procedure by applying it as a drop-in for the state-of-the-art GPU-based linear solver FALKON, \emph{improving speed 1.5-5.5 times} at the cost of $<1\%$ drop in accuracy. We further demonstrate competitive results on \emph{Gaussian Process regression} coupled with significant speedups on a variety of real-world datasets.
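To make concrete what operation the abstract is approximating: exact KMVM materialises the full kernel matrix, which is what costs quadratic time and memory. The minimal NumPy sketch below shows only this exact baseline, not the paper's F³M approximation; the Gaussian kernel and lengthscale are illustrative choices.

```python
import numpy as np

def kmvm_exact(X, Y, v, lengthscale=1.0):
    """Exact Gaussian-kernel matrix-vector product b = K(X, Y) @ v.

    Materialising K costs O(N * M) time and memory -- the quadratic scaling
    that fast methods avoid for tall (N ~ 1e8-1e9), skinny (D <= 7) data.
    """
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq_dists / (2.0 * lengthscale ** 2))
    return K @ v

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))   # N kept small here so the dense K fits in memory
v = rng.normal(size=2000)
print(kmvm_exact(X, X, v).shape)  # (2000,)
```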
Accept
The authors propose a new approximation procedure for kernel matrix-vector multiplication, targeted at tall and skinny kernel matrices. The proposed method achieves significant speedups over the state-of-the-art GPU-based linear solver FALKON while incurring only a small drop in accuracy due to the approximation. The paper addresses a specific use case (low-dimensional data) but is very clear about the scope. The reviewers agree that the problem still has high significance, that it is well motivated, and that the reported performance gains are convincing. The experiments also provide interesting insights into the inner workings of the method and the trade-offs between accuracy and efficiency. For a potential camera-ready version, the authors should carefully incorporate the reviewers' comments on the presentation to improve the accessibility of their work for a general NeurIPS audience. Additional details about the low-level GPU optimizations would also be good to add in Section 4.1, and some comments on how to extend the method to other kernel functions would strengthen the paper.
train
[ "K-nQyc8hMc", "q9Z3krHetpP", "epvIapVG1MTB", "zLJGuTysyP", "5aJa4Gbs9FpY", "2fmBTU988lV", "ztBDNqKmeE3", "5gIBTWBF9Ks", "cdPF9frpE5P", "KEWdZIQ51f" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'm grateful to the authors for their response to my questions. I will keep my score as accept.", " Thank you for your time and effort in reviewing the paper! We respond to your comments and questions below:\n\n*Q1*: I would like to to see some ablation studies and experiments on F^{2.5}M. \n*A1*: We have provided the requested experiments with commentary in Appendix J (last page), Table 7 of the rebuttal revision. \n\nWe will update the figures with vector graphics, thank you for the suggestion!", " Thank you for your time and effort in reviewing the paper! We respond to your comments and questions below:\n\n*Q1*: Is it possible to use tensor cores in the proposed algorithm? \n*A1*: We have not attempted to utilize tensor cores in the proposed algorithm yet, and we believe some parts of the code benefit from tensor cores. However as tensor cores mostly require floating point 16 accuracy, one must be careful about precision when attempting this. We view this as a very promising venue for further development in scaling further.\n\n*Q2*: How to improve the FLOP efficiency? \n*A2*: That's a good question! Through our profiling, we found that a potential bottleneck currently is when the Lagrange interpolation occurs between each interaction, as it is quite difficult to parallelize very effectively. These circumstances can directly be attributed to each box having a different amount of points. Another aspect is that quite a lot of memory is reallocated during the run, which hampers throughput. We plan to detail this carefully in a more comprehensive fashion in a future revision. ", " Thank you for your time and effort in reviewing the paper! We respond to your comments and questions below:\n\n*Q1*: The methods contain several levels of approximation and heuristics, with many hyperparameters to be tuned. It is not clear how sensitive these methods are to these parameters. How to best select them for a given dataset? \n*A1*: We give some guidance on the impact of hyperparameters effective variance limit $\\eta$ and the number of interpolation nodes $r$ in Figure 13 in appendix H. By increasing the number of interpolation nodes (increasing accuracy) the computation is slower while increasing effective variance limit (more interactions interpolated) the performance improves. There is a controllable trade-off between accuracy and performance when tuning $\\eta$ and $r$.\n\n\n*C2*: The paper is densely packed with approximations and notations, which make it quite difficult to follow at some points. In Section 4.1, the authors repeatedly mentioned we used non-trivial method... without any guidance. Please consider updating it. Some parameters are used for a different meaning, e.g., the vector v in Line 143 and Line 144. \n*A2*: Thank you for the suggestion! We will extend the paragraphs about low-level implementations by including more pseudo-code and more details about the algorithmic challenges encountered during this project. As these changes require more careful writing, we have decided to not include them in the rebuttal revision in hopes of providing a more comprehensive overview. We have clarified the notation for vector $v$ in lines 143 and 144 in the rebuttal revision.\n\n*Q3*: The last two items in Table 1 are quite strange as runtime depends on the hardware. What does that mean by running under a minute or an hour? \n*A3*: It means we have run FFM(GPU) and F$^3$M on V100 chips on 1 billion datapoints on 3D data. We have added a footnote in the rebuttal revision to clarify this. 
Thanks for pointing this out!\n\n*C4*: It seems that the F3 method is specific to the Gaussian kernel in Section 4.3? It is not clear how the methods depend on hyperparameters when the data set is changed. For example, the suggested parameters in Line 250 are based solely on Figure 7, which may not generalize to new data sets. \n*A4*: The Gaussian kernel is used as a proof of concept; the method and its derivations can be extended to other kernels either directly or with minor modifications, depending on the kernel. One can quite easily rerun the procedure in Figure 7 for a given dataset to validate the parameter selection, although the datasets and selection we propose should generally work well. It is, however, an important point that we hope future work can resolve.\n\n*Q5*: Theorem 2 is hard to understand; it needs more discussion. It is not clear how large the different terms (with negative signs) are, relative to the first term in the complexity bound. \n*A5*: We agree. We have reformulated and clarified the meaning of the theorem in the rebuttal revision. New content is marked in blue.\n\n", " Thank you for your time and effort in reviewing the paper! We respond to your comments and questions below:\n\n*Q1*: Maybe mention that $D$ is the dimension of the data in the first paragraph where it is introduced. \n*A1*: Thank you for this suggestion! We have done so in the uploaded revised version. The new content is marked in blue.\n\n*Q2*: Are there any assumptions on the type of kernel F$^3$M is useful for? It seems like most experiments were performed on RBF kernels. \n*A2*: Not really, but the method generally extends directly to translation-invariant kernels.\n\n*Q3*: Line 143 should $u_1$ and $u_2$ be $v_1$ and $v_2$? \n*A3*: Good spot! Thanks, we have corrected it.\n\n*Q4*: Can the authors include more detail in Section 4.1 on the low-level GPU optimizations performed in the main text? \n*A4*: Yes! We will extend the paragraphs about the low-level implementation by including more pseudo-code and more details about the algorithmic challenges encountered during this project. As these changes require more careful writing, we have decided not to include them in a rebuttal revision, in hopes of carefully rewriting this section in a more comprehensive way.", " We would like to thank the reviewers for their comments; we believe they have significantly improved the work. We are happy that the reviewers find the problem interesting and the method to be of significance. We respond to each reviewer's comments individually.", " The paper proposes a novel algorithm to approximate kernel matrix-vector multiplication (KMVM) for large kernels and data of dimension less than or equal to 7. Experiments and theoretical analysis suggest the algorithm has linear time and memory complexity. Since KMVM is a key component of other algorithms like conjugate gradients, the authors show their algorithm can be combined with existing large-scale kernel methods and demonstrate significant speed-ups. Strengths\n- The limitations of the method are clearly discussed.\n- The paper makes a strong contribution with a nice algorithmic advance and a technically challenging implementation on GPU.\n- All the main claims of the paper are thoroughly supported by experiments, including an ablation study. Overall a high-quality paper.\n\nWeaknesses\n- The algorithm is limited to low-dimensional data, which might hamper the paper's significance in the NeurIPS community. 
However, this limitation is clearly acknowledged and discussed.\n- I found some sections of the paper hard to follow, particularly Section 4.2, but this may be because I'm not familiar with some of the prior work the paper builds on.\n - Maybe mention that $D$ is the dimension of the data in the first paragraph where it is introduced.\n- Are there any assumptions on the type of kernel F$^3$M is useful for? It seems like most experiments were performed on RBF kernels.\n- Line 143 should $u_1$ and $u_2$ be $v_1$ and $v_2$?\n- Can the authors include more detail in Section 4.1 on the low-level GPU optimizations performed in the main text? No concerns about negative societal impact.", " The paper considers speeding up the matrix-vector multiplication operation in kernel methods for a specific structure (tall and skinny) of the kernel matrix. The approximation method builds on the FFM algorithm, which separates data points into far-field and near-field interactions and uses Lagrange interpolation. The proposed schemes are further implemented on GPUs to exploit their parallel processing power; various speed-ups can be observed compared to the FFM method. The methods contain several levels of approximation and heuristics, with many hyperparameters to be tuned. It is not clear how sensitive these methods are to these parameters. How to best select them for a given dataset?\n\nThe paper is densely packed with approximations and notations, which makes it quite difficult to follow at some points. \nIn Section 4.1, the authors repeatedly mentioned **we used non-trivial method...** without any guidance. Please consider updating it. Some parameters are used for a different meaning, e.g., the vector **v** in Line 143 and Line 144. The last two items in Table 1 are quite strange as runtime depends on the hardware. What does it mean to run under a minute or an hour?\n\nIt seems that the F3 method is specific to the Gaussian kernel in Section 4.3? It is not clear how the methods depend on hyperparameters when the data set is changed. For example, the suggested parameters in Line 250 are based solely on Figure 7, which may not generalize to new data sets.\n\nTheorem 2 is hard to understand; it needs more discussion. It is not clear how large the different terms (with negative signs) are, relative to the first term in the complexity bound. see above
The authors propose F^{2.5}M and which is built on classical FFM and specially designed for GPU parallelization. Then they introduce the main algorithm F^{3}M to further accelerate the computation speed. The experiments show that compared with CPU and GPU baselines the new proposed algorithm uses less memory and has a faster speed while controlling the error in an acceptable range. This paper tries to solve the fundamental problem of high memory usage and low speed in large-scale KMVM operations. The writing is clear and the motivation is well-driven. The algorithm is straightforward but seems effective according to the experiment results. The figures in the attachment materials show clearly how the algorithm utilizes the GPU parallelization attribute.\nMinor:\nPictures need to be replaced with vector graphics to improve clarity. I would like to to see some ablation studies and experiments on F^{2.5}M. N/A" ]
[ -1, -1, -1, -1, -1, -1, 7, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 2, 2 ]
[ "5aJa4Gbs9FpY", "KEWdZIQ51f", "cdPF9frpE5P", "5gIBTWBF9Ks", "ztBDNqKmeE3", "nips_2022_fzvDZ0mraPP", "nips_2022_fzvDZ0mraPP", "nips_2022_fzvDZ0mraPP", "nips_2022_fzvDZ0mraPP", "nips_2022_fzvDZ0mraPP" ]
nips_2022_nJt27NQffr
Self-Supervised Learning via Maximum Entropy Coding
A mainstream type of current self-supervised learning methods pursues a general-purpose representation that can be well transferred to downstream tasks, typically by optimizing on a given pretext task such as instance discrimination. In this work, we argue that existing pretext tasks inevitably introduce biases into the learned representation, which in turn leads to biased transfer performance on various downstream tasks. To cope with this issue, we propose Maximum Entropy Coding (MEC), a more principled objective that explicitly optimizes on the structure of the representation, so that the learned representation is less biased and thus generalizes better to unseen downstream tasks. Inspired by the principle of maximum entropy in information theory, we hypothesize that a generalizable representation should be the one that admits the maximum entropy among all plausible representations. To make the objective end-to-end trainable, we propose to leverage the minimal coding length in lossy data coding as a computationally tractable surrogate for the entropy, and further derive a scalable reformulation of the objective that allows fast computation. Extensive experiments demonstrate that MEC learns a more generalizable representation than previous methods based on specific pretext tasks. It achieves state-of-the-art performance consistently on various downstream tasks, including not only ImageNet linear probe, but also semi-supervised classification, object detection, instance segmentation, and object tracking. Interestingly, we show that existing batch-wise and feature-wise self-supervised objectives could be seen equivalent to low-order approximations of MEC. Code and pre-trained models are available at https://github.com/xinliu20/MEC.
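The abstract's central computational idea — replacing the entropy with a minimal lossy coding length and then Taylor-expanding it — can be written compactly. The display below follows the standard lossy-coding-length formulation from the literature this line of work builds on; the exact normalisation constants used in the paper may differ, so treat the first line as a representative form rather than the paper's verbatim objective.

```latex
% Coding length of representations Z = [z_1, ..., z_m] \in \mathbb{R}^{d \times m}
% with distortion \epsilon (representative form from the lossy-coding literature):
\mathcal{L}(Z) = \frac{m + d}{2}\,\log\det\!\left(I_d + \frac{d}{m\epsilon^2}\, Z Z^{\top}\right)

% Scalable reformulation: \log\det(I + \lambda C) = \operatorname{Tr}\log(I + \lambda C),
% expanded as a Taylor series (convergent for \|\lambda C\| < 1):
\log\det(I + \lambda C) = \sum_{n \ge 1} \frac{(-1)^{n+1}}{n}\,\lambda^{n}\operatorname{Tr}\!\left(C^{n}\right)
```

Truncating the series at low orders is what the reviews and rebuttals below relate to existing batch-wise and feature-wise self-supervised objectives.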
Accept
The paper in general received three positive reviews and ratings. The three reviewers all recognize the theoretical soundness of the paper, and the paper is also clearly presented, with informative and strong experimental results. There are a few places that left one reviewer less comfortable about the exact effectiveness of the proposed theory. Overall, however, the experimental results are comprehensive and largely support the claims made by the paper. The authors are encouraged to further clarify these points based on the comments.
train
[ "1D5QseuDfao", "jhNmg6p4W44", "_CRwj_41D_3", "LOZdyTPyjUyY", "7Rshwe65owV", "JseAaEPPCRG", "kBn4idsJKm", "HNkE8jJcGhP", "TodRS3dY18", "W95okT9-Gr", "b6QZR_vTPX", "WYBS_UVgqaJ" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank the reviewer for providing the additional feedback. We address your concerns below.\n\n>**\"Actually, in response point 1, Barlow Twins are supposed to reach 73.5\\%. So here, the extra two orders approximately indeed only provide a 0.1\\% improvement.\"**\n\n**First, it should be clarified that comparing the mentioned number (73.5\\%) with ours (73.6\\%) is *unfair*, because of differences in training settings.** The settings differ in two aspects: the training scheme and the projector network.\n- The comparison made in response point 1 (also in Tab.1 of our paper) is based on 800-epoch pre-trained models in order for a fair comparison of all methods. The result that the reviewer mentioned is based on a 1000-epoch pre-trained model, which is the experiment setting of Barlow Twins. \n- Barlow Twins uses a larger projector network, the dimension of which is 8192. In contrast, we only use a 2048-d projector network in order for a fair comparison of other methods (e.g., SimCLR, MoCo v2, SimSiam). Even so, our method outperforms Barlow twins by 0.6\\% evaluated by linear probing.The ablation experiments in Barlow Twins show their performance drops by over 3\\% when reducing the dimension from 8192 to 2048. However, this result is obtained with a 300-epoch training scheme. When trained for 800 epochs, we did not find a relevant result, but it is natural to expect a similar performance drop.\n\nA detailed comparison with consideration of experimental settings is shown below. As summary, when experimental settings are controlled to be identical for a fair comparison, **the performance gap between MEC and Barlow Twins is far more than 0.1\\%.**\n\n| method | source | pre-training epochs | projector width | linear evaluation |\n|--------------|----------------|:---------------------:|:-----------------:|:-------------------:|\n| Barlow Twins | github release | 1000 | 8192 | 73.5 |\n| Barlow Twins | original paper | 1000 | 8192 | 73.2 |\n| Barlow Twins | reproduction | 800 | 8192 | 73.0 |\n| Barlow Twins | estimated | 800 | 2048 | <73.0 |\n| MEC | ours | 800 | 2048 | 73.6 |\n\n\n**Second, we would like to emphasize again that the main purpose of our method is to improve the *generalization of SSL representation on various downstream tasks***, therefore we do not recommend being obsessed solely with the ImageNet linear probe results. Comparing MEC with Barlow Twins, the former outperforms the latter on all tasks across the board, including semi-supervised learning, object detection, instance segmentation and object tracking. It is important to point out that results in Figure 1, Table 1-4 together make our point, not Table 1 alone. \n\n>**\"But it is unacceptable that the paper tries to exaggerate the results. [...] Not in the figure, not in the main text, not in the experimental detail.\"**\n\nWe respectfully disagree with this point. First, in the method section (L158-160), we use a separate paragraph to clearly state that the additional techniques (momentum encoder and asymmetric networks) can improve the performance of the minimalist variant of MEC (shown in Fig.2).\n\nSecond, in the experiments section (Tab.5, L252-259), we use six tables to show the default settings of our experiments, and the results demonstrate the effects of different designs of Siamese networks and also the advantage of adopting momentum encoder and asymmetric networks. 
Moreover, we list all the implementation details of the network architecture and momentum encoder in Appendix C (L586-595, L599-600).\n\nThird, we show in part 1 of the response that our MEC can outperform other methods on various downstream tasks with or without the additional techniques, which emphasizes the importance of the proposed loss rather than the architecture design or momentum encoder. We will revise the paper accordingly to better clarify the gains of the different parts of our method.\n\n-----\n\nWe hope to have addressed your concerns. Please let us know if you have any further suggestions or concerns.\n\n\n\n", " We sincerely thank the reviewer for the additional feedback and valuable comments.\n\nThe maximum entropy principle states that the probability distribution that best represents the current state of knowledge about a system is the one with the largest entropy given the testable information; in this way, no additional bias or assumptions are introduced. Our proposed method is based on this principle, which theoretically guarantees that the learned representations are less biased. This is also supported by empirical evidence on various downstream tasks. \n\nWe are particularly inspired by the reviewer's comments that more empirical evidence can be leveraged to explicitly show what information the proposed method can learn and is beneficial for the downstream tasks. For example, following the method in [A], we can measure the mutual information between the learned representations and target information (e.g., color, position, patch/pixel), which will provide a more systematic analysis and make our method more interpretable. As noted by the reviewer, the bias can also be introduced by the designed data augmentations. To address this problem, we can further incorporate other methods that focus on this aspect (e.g., AugSelf [A]) into MEC to minimize the bias introduced by both data augmentations and pretext tasks.\n\nThanks again for the very helpful suggestions, which will be incorporated to make the paper stronger.\n\n**References**\n\n[A] Lee, Hankook, et al. \"Improving transferability of representations via augmentation-aware self-supervision.\" NeurIPS 2021.", " I thank the authors for the detailed response.\nThis is a very well-written paper with solid theoretical and empirical support.\n\nHowever, I am now more convinced that the empirical advantage is misleading. The empirical advantage is exactly just 0.3\\%.\nActually, in response point 1, Barlow Twins are supposed to reach 73.5\\%. So here, the extra two orders approximately indeed only provide a 0.1\\% improvement.\n\nI think this is totally ok. A good theory unifies several ideas and provides incremental improvements. But it is unacceptable that the paper tries to exaggerate the results.\n\nThe main figure and most of the results tables will mislead readers into thinking that the advantage of the proposed method over Barlow Twins / SimCLR is due to the proposed loss. In fact, leveraging other tricks like momentum encoders is the main reason, but this is only mentioned in L159 once. Not in the figure, not in the main text, not in the experimental detail.\n\nI will lower my score due to this reason.", " Thank you to the authors for their response to the reviews.\n\n> we explicitly encourage the generalization ability on downstream tasks and minimize the bias in the formulation of the pretext task, by introducing the Maximum Entropy Principle.\n\nI still feel that the explanation about bias and learned information is unclear. 
First, the provided results can show MEC's superiority, but cannot explain how MEC achieves it. One can consider the Maximum Entropy Principle as another pretext task. Moreover, MEC is also an image-level SSL objective relying on strong data augmentations. Therefore, MEC also has some bias introduced by the principle, the image-level objective, and the data augmentations. Can you guarantee MEC always has less bias than existing methods? Since this work introduces a new objective, I think it would be better to analyze the proposed method systematically, e.g., by addressing the following questions:\n- Which information (e.g., color, position, background, etc.) can be learned more easily by MEC than by existing methods? This may be answered by checking mutual information between target information and the learned representation. (For example, see [this paper](https://openreview.net/forum?id=U34rQjnImpM), Figure 2.)\n- Why can MEC learn more patch/pixel-level information compared to other image-level SSL objectives?\n\nNevertheless, I think the experimental results (including transfer learning for fine-grained classification tasks) are very strong, so I'm still positive about this paper. Hence, I keep my positive rating.\n", " > **3. \"The 'principle of maximum entropy' is no different from requiring independent features, which is already proposed in MCR$^{2}$, BarlowTwins, VICReg, and whitening SSL\"**\n\nSince they can be categorized as feature-wise SSL methods, these methods and ours indeed require each feature dimension to be independent. However, what really matters is the different ways of learning independent features proposed in these methods. In contrast to the other mentioned methods, our method aims to learn general-purpose representations and is strongly supported by theoretical motivation. In addition, our proposed method unifies the batch-wise and feature-wise objectives as low-order approximations of our method, and this new perspective also distinguishes our method from existing ones.\n\n> **4. \"Do you consider n=2 (Taylor expansion order) as a novel model?\"**\n\nYes, we do consider the second-order approximation as a novel model. Although the second-order expansion of the feature-wise side of Eqn.4 is equivalent to Barlow Twins, our novelty comes from two aspects.\n\nFirst, the motivation and derivation are novel. Barlow Twins aims to minimize the redundancy between the feature components to avoid mode collapse, while our derivation originates from the desire to optimize generalization on downstream tasks by leveraging the principle of maximum entropy. \n\nSecond, the perspective we provide to the community is novel. We find an interesting equivalence between low-order approximations of MEC and existing batch-wise or feature-wise objectives, which provides a new perspective for a unified understanding of prevalent SSL methods. And as remarked by Reviewer CgUD, \"The direct tying of a family of objectives to a very grounded mathematical concept is highly significant in my eyes as it brings the iterative line of research to a convergence point from which more novel lines of research can be taken.\"\n\n> **5. \"The authors only compare their work to fundamental frameworks (BYOL, BarlowTwins, SwAV), but there are many similar mixture models, e.g., arXiv:2204.07141, arXiv:2104.14548, arXiv:2109.12909, arXiv:2012.13493, arXiv:2201.05119\"**\n\nThanks for pointing out these relevant and interesting papers. 
We will include all these papers in the related work section, and compare our work to these methods (see the preliminary table below) if their code and pre-trained models are available, so we can reproduce the missing results and fit them in our main tables (Tab.1-4).\n\n| Method | Linear100 | Linear200 | Linear300 | Det | Ins |\n|--------------|-----------|-----------|-----------|------|------|\n| NNCLR [C] | 69.4 | 70.7 | - | - | - |\n| HEXA [D] | - | 68.9 | - | - | - |\n| C-SimCLR [E] | - | - | 70.1 | - | - |\n| C-BYOL [E] | - | - | 73.6 | - | - |\n| MEC | 70.6 | 71.9 | - | 39.8 | 34.7 |\n\n**References**\n\n[A] Zbontar, Jure, et al. \"Barlow twins: Self-supervised learning via redundancy reduction.\" ICML 2021.\n\n[B] Grill, Jean-Bastien, et al. \"Bootstrap your own latent-a new approach to self-supervised learning.\" NeurIPS 2020.\n\n[C] Dwibedi, Debidatta, et al. \"With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations.\" arXiv:2104.14548 (2021).\n\n[D] Li, Chunyuan, et al. \"Self-supervised pre-training with hard examples improves visual representations.\" arXiv:2012.13493 (2020).\n\n[E] Lee, Kuang-Huei, et al. \"Compressive Visual Representations.\" arXiv:2109.12909 (2021).\n\n\n", " We thank the reviewer for the valuable comments, and for recognizing the strong performances and theoretical motivation of our work. We address the raised concerns below.\n\n> **1. \"The strong result mostly comes from a mixture of designing components, e.g., exponential moving average and asymmetric architecture. It is questionable how much advantage comes from the maximum entropy coding loss.\"**\n\nAlthough the mixture of designing components contributes to the results (Tab.5), the maximum entropy coding loss is the most important reason for the strong performances of our method. When removing all those components, our method still outperforms SimCLR, Barlow Twins by 3.2, 0.6 points, respectively. And this behavior is different from other SSL methods (e.g., SimSiam, BYOL), where mode collapse is a big concern without those components. We summarize the results in the table below.\n\n| Method | Ema | Asym. | Linear |\n|--------------|-----|-------|----------|\n| SimCLR | no | no | 70.4 |\n| BYOL | no | no | collapse |\n| SimSiam | no | no | collapse |\n| Barlow Twins | no | no | 73.0 |\n| MEC | no | no | **73.6** |\n\nThe technique of exponential moving average has been common practice in recent SSL methods to improve performance. However, using asymmetric architecture could decrease the performance, sometimes by a large margin (61.3 v.s. 71.4, as reported in Barlow Twins [A]). So a mixture of designing components does not necessarily lead to performance improvement. \n\nTo further validate how much advantage comes from the maximum entropy coding loss, below we compare our method with BYOL on a wide variety of downstream tasks.\n\n| Method | order | Ema | Asym. | Linear | Semi | Det | Ins | SOT | VOS | MOT | MOTS | PoseTrack |\n|----------|-------|-----|-------|----------|----------|----------|----------|----------|----------|----------|----------|-----------|\n| BYOL [B] | n=1 | yes | yes | 74.3 | 53.2 | 37.9 | 33.2 | 58.9 | 58.8 | 62.9 | 70.8 | 73.8 |\n| MEC | n=4 | yes | yes | **74.5** | **55.9** | **39.8** | **34.7** | **62.3** | **62.0** | **63.3** | **72.0** | **74.1** |\n\nAlthough both methods use exponential moving average and asymmetric architecture, MEC outperforms BYOL on all 9 tasks considered. 
Similar trends can also be observed on 11 fine-grained classification benchmarks (Tab.8). These experiment results further emphasize the importance of the maximum entropy coding loss instead of the mixture of designing components.\n\n> **2. \"The higher-order approximation, as mentioned above, only has a 0.3\\% linear probe improvement. All other experiments fail to verify the advantage.\"**\n\nAs the main purpose of our method is to improve the generalization of SSL representation on various downstream tasks, we argue that the advantage should not be solely evaluated by ImageNet linear probe. Instead, we compare transferred performance on a variaty of downstream tasks. \n\nAs can be seen from the table in response 1, with fourth-order approximation, MEC outperforms BYOL (which can be seen as MEC with first-order approximation) on all 9 tasks considered across the board, suggesting the advantage of using higher-order approximation. We also note that the extra two orders (2->4) benefit less than lifting from the first order to the fourth order approximation. It is reasonable since the relative approximation error of second-order expansion is already quite decent, lower than 0.5\\% (Fig.6), almost the same as fourth-order expansion (Fig.3). We will include these discussions in the paper to better clarify the advantage of using higher-order approximation.\n\n\n", " Thank you very much for your constructive comments and support. We greatly appreciate that you found our work being very motivated and opening up exciting possibilities for future work. Below we address the raised concerns.\n\n> **1. \"Can you clarify where the increase in metrics from cited sources comes from?\"**\n\nThe comparison of different methods in Tab.1 is based on the reproduction results from the SimSiam paper [A]. In order for a fair comparison, they made small and straightforward modifications to the related methods. And the reproduction has better results for SimCLR, MoCo v2, and SwAV, and has comparable results for BYOL. Please kindly refer to Appendix C of the SimSiam paper [A] for more details. We will add a note in the paper to better clarify this.\n\n> **2. \"Are you able to provide any results with pretraining on moderate-to-large scale datasets besides ImageNet? I'd also be curious if this objective can provide improvements for self-supervised learning on smaller-scale datasets.\"**\n\nThank you for your insightful questions. We perform self-supervised pre-training on Places365 dataset using the proposed method, and then linear evaluation is conducted on Places365 and ImageNet dataset. We list the initial experiment results in the table below.\n\n| Method | Places365 | ImageNet |\n|--------|-----------|----------|\n| SimCLR | 53.0 | 56.5 |\n| BYOL | 53.2 | 58.5 |\n| MEC | **53.8** | **59.9** |\n\nThese results show that MEC can still learn good representations when pre-trained on different kinds of datasets, and also achieve better performance than previous methods. As for pre-training on smaller-scale datasets, we show in the main paper that MEC improves both linear and kNN accuracy of other methods by a large margin (e.g., 2.3\\% linear probing accuracy for SimCLR) on CIFAR-10 dataset when working as a regulation term (Fig.5), and when working as a standalone pre-training objective, MEC gains +2.9\\% accuracy over SimCLR evaluated by linear probing. We will include these results and discussions in the paper to better demonstrate the effectiveness of MEC.\n\n> **3. 
\"'variaty' instead of 'variety' in L60\" \"Appendix H (starting from L724) is extremely interesting and I was wishing for a similar section within the main text.\"**\n\nThank you for your very helpful suggestions. We will revise the paper accordingly, and add a reference to Appendix H with a preview of what it contains in the main text.\n\n**References**\n\n[A] Chen, Xinlei, and Kaiming He. \"Exploring simple siamese representation learning.\" CVPR 2021.", " > **2. \"Could you test more transfer learning experiments with fine-grained classification benchmarks?\"**\n\nThank you for the very helpful suggestion. We have already conducted the suggested transfer learning experiments on fine-grained classification benchmarks in supplementary material. Specifically, we perform linear probing and fine-tuning experiments on 11 fine-grained classification datasets, following the setup in SimCLR [A] and BYOL [B] papers. Please refer to Appendix D (L660-667) for more details about the experiments. And we also list the results in the table below.\n\n| Method | Food101 | CIFAR10 | CIFAR100 | SUN397 | Cars | Aircraft | VOC2007 | DTD | Pets | Caltech-101 | Flowers |\n|-----------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|-------------|----------|\n| Linear probing: | | | | | | | | | | | |\n| SimCLR [A] | 68.4 | 90.6 | 71.6 | 58.8 | 50.3 | 50.3 | 80.5 | 74.5 | 83.6 | 90.3 | 91.2 |\n| BYOL [B] | 75.3 | 91.3 | **78.4** | 62.2 | **67.8** | 60.6 | 82.5 | 75.5 | 90.4 | 94.2 | **96.1** |\n| MEC | **75.6** | **92.1** | **78.4** | **62.7** | 67.2 | **61.5** | **82.7** | **75.8** | **90.9** | **94.6** | 96.0 |\n| Fine-tuned: | | | | | | | | | | | |\n| Random init [A] | 86.9 | 95.9 | 80.2 | 53.6 | 91.4 | 85.9 | 67.3 | 64.8 | 81.5 | 72.6 | 92.0 |\n| SimCLR [A] | 88.2 | 97.7 | 85.9 | 63.5 | 91.3 | 88.1 | 84.1 | 73.2 | 89.2 | 92.1 | 97.0 |\n| BYOL [B] | 88.5 | **97.8** | 86.1 | 63.7 | **91.6** | 88.1 | 85.4 | **76.2** | 91.7 | 93.8 | 97.0 |\n| MEC | **88.9** | **97.8** | **86.8** | **63.8** | **91.6** | **88.5** | **85.9** | 76.0 | **91.9** | **94.9** | **97.2** |\n\nThe results show that the learned representations of MEC are more generalizable and less biased across the various data domains, compared to other models pre-trained with specific pretext tasks. We will add the results and discussions to the main paper to further demonstrate the transferability of the learned representations.\n\n> **3. \"Too small top margins in p2-3.\" \"Also, other spaces (e.g., caption spaces) seem too narrow.\"**\n\nThank you for pointing these out. We have modified the top margins and other spaces accordingly in the updated version of the paper.\n\n**References**\n\n[A] Chen, Ting, et al. \"A simple framework for contrastive learning of visual representations.\" ICML 2020.\n\n[B] Grill, Jean-Bastien, et al. \"Bootstrap your own latent-a new approach to self-supervised learning.\" NeurIPS 2020.\n\n[C] Xie, Enze, et al. \"Detco: Unsupervised contrastive learning for object detection.\" CVPR 2021.\n\n[D] Zbontar, Jure, et al. \"Barlow twins: Self-supervised learning via redundancy reduction.\" ICML 2021.", " We thank the reviewer for the positive comments and constructive feedback. Below we address the raised concerns. \n\n> **1. \"What information the proposed method can learn while existing SSL methods cannot is unclear.\"\"What are the biases exactly? 
Could you provide some empirical evidence supporting that MEC can learn less-biased representations?\"**\n\nThank you for the insightful questions. The bias of representations is a tendency to prefer some particular aspects of data over others. In self-supervised learning, the representations are learned through solving pretext tasks, and hence the bias is closely related to the nature of different pretext tasks. \n\n- Early approaches, such as image colorization and orientation prediction, bias the model to learn low-level image statistics, instead of high-level visual concepts, thus leading to poor empirical performance on high-level downstream tasks.\n\n- A most prevalent line of self-supervised learning methods, i.e. contrastive learning, bases themselves on the instance discrimination pretext task. However, such learned representations are found biased to image-level tasks such as image classification, while by contrast degenerate in patch- or pixel-level tasks like object detection and semantic segmentation [C]. \n\n- Some pretext tasks are designed tailored to given downstream applications. For instance, DetCo [C] is designed specifically for the object detection task, thus the learned representation naturally degenerates on other tasks like ImageNet classification.\n\nConsidering that the ultimate goal of self-supervised learning is a general-purpose representation, we emphasize that our contribution which distinguishes our method from existing ones is that we *explicitly* encourage the *generalization ability on downstream tasks* and *minimize the bias* in the formulation of the pretext task, by introducing the Maximum Entropy Principle. The empirical results show that MEC generalizes well consistently across (i) image-based and video-based tasks (Tab.1,2,4), (ii) patch-level and pixel-level tasks (Tab.3). We summarize the results in the table below.\n\n| Method | Linear | Semi | Det | Ins | SOT | VOS | MOT | MOTS | PoseTrack |\n|-----------------|----------|----------|----------|----------|----------|----------|----------|----------|-----------|\n| BYOL [B] | 74.3 | 53.2 | 37.9 | 33.2 | 58.9 | 58.8 | 62.9 | 70.8 | 73.8 |\n| BarlowTwins [D] | 73.0 | 55.0 | 39.2 | 34.3 | 60.5 | 61.7 | 62.4 | 69.8 | **74.3** |\n| MEC | **74.5** | **55.9** | **39.8** | **34.7** | **62.3** | **62.0** | **63.3** | **72.0** | 74.1 |\n\nFurthermore, for a fixed given downstream task, MEC also improves generalization across different data distributions. We perform transfer learning experiments and elaborate the results below.\n\n\n\n", " This paper aims to learn generalizable representations without labels. To this end, this paper proposes Maximum Entropy Coding (MEC), inspired by the principle of maximum entropy in information theory. MEC uses the minimal lossy coding and the Taylor series approximation to make the maximum entropy estimation feasible. Extensive experimental results show that MEC consistently outperforms existing SSL methods under various downstream tasks. Furthermore, this paper demonstrates that MEC is robust to various hyperparameters (e.g., smaller batch sizes) and architectures (e.g., ViTs).\n The proposed method has several strengths: (1) simplicity and scalability, (2) high performance across various downstream tasks, and (3) robustness to various hyperparameters and architectures. I think the extensive experiments well demonstrate these strengths. 
Furthermore, MEC can be interpreted as batch-wise and feature-wise SSL methods, so I feel this part is also interesting.\n\nOne major concern with this paper is that what information the proposed method can learn while existing SSL methods cannot is unclear. This paper mentioned that existing SSL methods introduce some \"biases\" while MEC does not. What are the biases exactly? Could you provide some empirical evidence supporting that MEC can learn less-biased representations? I think this paper does not explain why and how MEC can outperform other SSL methods.\n\nSome minor concerns are provided in the **Questions** section.\n\nTo sum up, although some explanation seems insufficient, I feel the empirical results are strong. Hence, I vote for Weak Accept.\n - Too small top margins in p2-3. **This seems to violate formatting rules.** This should be modified in the rebuttal period. Also, other spaces (e.g., caption spaces) seem too narrow.\n- Could you test more transfer learning experiments with fine-grained classification benchmarks? These benchmarks are also important to evaluate the transferability of the learned representations. For the transfer setup, I recommend seeing SimCLR and BYOL papers.\n This paper well addressed the limitations and the potential negative societal impact.\n", " This paper proposed a joint-embedding objective called Maximum Entropy Coding, which is close to MCR^2 [74]. This method directly optimizes the information content by minimizing the coding length function. This proposed method unifies batch-wise objectives (SimSiam) and feature-wise objectives (BarlowTwins). Practically, this is implemented as a Taylor series expansion. \nThe proposed method shows strong performances (when combined with all existing techniques, including exponential moving average and asymmetric network) on a wide range of experiments, including ImageNet linear probe, semi-supervised classification, transfer learning on video tasks, and object detection. \nHowever, this paper actually uses all the tricks, including exponential moving average from BYOL, but only mentions it in the ablation study/supplementary material. It is obvious that the strong result of this method mostly comes from this mixture of existing design components. Strengths:\n1. The proposed method is strongly supported by theoretical motivation: the maximum entropy principle. \n2. The proposed method shows strong performances on a wide range of experiments (when combined with all existing techniques, including exponential moving average, asymmetric network), including ImageNet linear probe, semi-supervised classification, transfer learning on video tasks, and object detection.\n3. The authors provided experimental details, including pseudo-code and all hyperparameters. \n\nWeaknesses:\n1. The strong result mostly comes from a mixture of designing components, e.g., exponential moving average and asymmetric architecture. It is questionable how much advantage comes from the maximum entropy coding loss. In fact, the only contribution of the proposed method over existing methods is that it has a higher-order correction. Table 5e clearly shows that the extra two orders (2->4) only increase accuracy by 0.3%.\n2. It's unclear what the main contribution is to this paper. \n(a) The \"principle of maximum entropy\" is no different from requiring independent features, which is already proposed in MCR^2, BarlowTwins, VICReg, and whitening SSL. 
\n(b) The higher-order approximation, as mentioned above, only has a 0.3% linear probe improvement. All other experiments fail to verify the advantage. \n(c) The practical advantage of this mixture model. The authors only compare their work to fundamental frameworks (BYOL, BarlowTwins, SwAV), but there are many similar mixture models, e.g., arXiv:2204.07141, arXiv:2104.14548, arXiv:2109.12909, arXiv:2012.13493, arXiv:2201.05119\n\n====== after rebuttal comments =====\n\nI am more convinced by the authors that the empirical advantage is misleading. The empirical advantage is exactly just 0.3%. Actually, in response point 1, Barlow Twins are supposed to reach 73.5%. So here, the extra two orders approximately indeed only provide a 0.1% improvement.\n\nI think this is totally ok. A good theory unifies several ideas and provides incremental improvements. But it is unacceptable that the paper tries to hide this point.\n\nThe main figure and most of the results tables will mislead readers that the advantage of the proposed method over Barlow Twins / SimCLR is due to the proposed loss. In fact, leveraging other tricks like momentum encoders is the main reason but is only mentioned in L159 once. Not in the figure, not in the main text, not in the experimental detail.\n Do you consider n=2 (Taylor expansion order) as a novel model? If not, then how is the performance of this baseline across all experiments. If yes, how is it different from BarlowTwins? N/A", " This paper proposes a self-supervised learning method dubbed Maximum Entropy Encoding (MEC) which leverages the principle of maximum entropy to learn unbiased representations of an image dataset (experiments done on ImageNet). The authors combine a maximum entropy with the augmentation-invariance objective of contrastive learning, which is justified as a view consistency prior which gives testable information. \nThe exact resulting loss has a log determinant, which they approximate using a Taylor series. They show that various orders of this approximation are in fact equivalent to existing self-supervised methods such as SimSiam and Barlow Twins.\nExperiments are conducted with pretraining on ImageNet and suitably diverse downstream tasks: linear evaluation on ImageNet, semisupervised classification on ImageNEt, transfer learning on object detection (VOC + COCO) and segmentation (COCO), as well as video tracking (Table 5). Strengths:\nOriginality:\nThe community has been iteratively constructing contrastive or similar self-supervised learning methods for several years now. This paper subsumes several previous works by providing a novel unified view that allows for further adaption and exploration. While the resulting loss code is not markedly different from existing work, the theoretical approach it is derived from and the resulting flexibility/opportunity it yields constitute originality in my eyes.\n\nQuality:\nThe experiments in this paper are very clean, with straightforward well-accepted experimental settings. Appendix C describes the methodology well without introducing any complicating bells and whistles. The results of the new method are consistently equal or superior to existing objectives all while being very motivated.\nI cross-referenced numbers with original works and the authors match (or in some cases exceed) the original citations in table referenced numbers (example: the arxiv version of SimCLR lists 69.3 as its top1 imagenet acc while this work gives up to 70.4). 
I assume these discrepancies are from improved/additional facets in the training, but would appreciate clarification on this point.\nI feel extremely comfortable that I could replicate the experiments of this paper with minimal effort and would see comparable results.\n\nClarity:\nThis paper is extremely well-written. I was able to understand all of the concepts on the first read-through. The order of presentation was logical and descriptions were enlightening without being overly wordy. As previously mentioned, I feel comfortable that I could replicate the results quickly, especially given the useful pseudocode in Appendix A.\n\nSignificance:\nThe direct tying of a family of objectives to a very grounded mathematical concept is highly significant in my eyes as it brings the iterative line of research to a convergence point from which more novel lines of research can be taken. There could have been several more iterations of self-supervised learning papers converging to this objective by empirical motivation but this work shortcuts that and opens up exciting possibilities for future work (especially given the prominence of contrastive learning objectives in current vision-language works).\n\nMisc:\n- Figure 4 was an extremely useful illustration\n- Appendix was extremely informative and helpful\n- The ablations in Table 5 (particularly (e)) largely match my intuition which is nice\n\nWeaknesses:\nOriginality:\nThis paper consolidates existing lines of exploration and opens the door for further \"contrastive\" methods and as such the ultimate objective is not technically very different from existing work. Given the importance and significance of the consolidation though, this is not truly a weakness in my opinion.\n\nQuality:\n*I do wish that there were at least some pretraining experiments on datasets besides ImageNet (either uncurated web imagery or something like Places365). In my opinion this is the biggest weakness of this paper.*\nAs stated above, overall I was very impressed with the quality of this work. The only typo that jumped out to me while reading was \"variaty\" instead of \"variety\" in L60.\n\nClarity:\nI am very familiar with the contrastive learning so am probably a biased estimator here, but I thought the explanations and experiments were extremely straightforward and non-confusing. I have no complaints about the clarity. My only related suggestion is that a pointer to Appendix H is placed in the main text (see Misc below)\n\nSignificance:\nAs stated in the above originality section, it's difficult for a consolidation paper to have earth-shattering impact as the value largely lies in the tying together of work that necessarily exists. I don't think that this by any means is a net negative; I would rank this paper as one of the more significant ones that I've come across in the last 2-4 weeks of arxiv, but it does cap it below what a Conference Best Paper might look like.\n\n\nMisc:\n- Appendix H (starting from L724) is extremely interesting and I was wishing for a similar section within the main text. While incorporating that many lines is impractical at this stage of the paper; I would suggest that a reference to Appendix H with a preview of what it contains is added to the main text From Strengths-Quality: Can you clarify where the increase in metrics from cited sources comes from? 
Was I looking in the wrong place in the cited works or are there improvements stemming from more modern implementations?\n\nFrom Weaknesses-Quality:\nAre you able to provide any results with pretraining on moderate-to-large scale datasets besides ImageNet?\nI'd also be curious if this objective can provide improvements for self-supervised learning on smaller-scale datasets (this latter point is an extra ask, if pressed for space in the rebuttal feel free to ignore) Appendices E and F provide a succinct but accurate and precise description of limitations and the possible relation to societal impact" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4 ]
[ "_CRwj_41D_3", "LOZdyTPyjUyY", "7Rshwe65owV", "W95okT9-Gr", "JseAaEPPCRG", "b6QZR_vTPX", "WYBS_UVgqaJ", "TodRS3dY18", "W95okT9-Gr", "nips_2022_nJt27NQffr", "nips_2022_nJt27NQffr", "nips_2022_nJt27NQffr" ]
nips_2022_KglFYlTiASW
Neural Transmitted Radiance Fields
Neural radiance fields (NeRF) have brought tremendous progress to novel view synthesis. Though NeRF enables the rendering of subtle details in a scene by learning from a dense set of images, it also reconstructs the undesired reflections when we capture images through glass. As a commonly observed interference, the reflection would undermine the visibility of the desired transmitted scene behind glass by occluding the transmitted light rays. In this paper, we aim at addressing the problem of rendering novel transmitted views given a set of reflection-corrupted images. By introducing the transmission encoder and recurring edge constraints as guidance, our neural transmitted radiance fields can resist such reflection interference during rendering and reconstruct high-fidelity results even under sparse views. The proposed method achieves superior performance from the experiments on a newly collected dataset compared with state-of-the-art methods.
Accept
This paper proposes a novel neural radiance field rendering method that deals with specular reflection on the object’s surface. The authors present a novel method to overcome the limitation of existing NeRF-based methods on scenes behind transparent surfaces with specular reflection. The review results are two A(7) and two BA(5). After carefully reviewing the rebuttals and discussions, I recommend that the paper be accepted to NeurIPS.
train
[ "cK17dISDzl7", "1-uMB9cROD", "yxqwaJ88gtE", "MJS4EiHsPeo", "GeVmhlOwaz", "EPlijxvg5fo", "oz2LoycuaC_", "hnW7O_Pk0y", "vzSCNbbiLkB", "3hM9QqO5dLS", "Grf62Qo59r0", "KPpd4YA8U_", "wmvYGXr6tLb", "AicpL3uDBb", "6k3_tBmsFWU", "aZk-k_OIXhG", "UxXchnQsb83", "QLkPGH-LYkF", "6IjNF0Eyqf3", "6hKE4Voh6mo" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer K1pa, thanks for you kind reply very much. We are glad to have this opportunity to address your concerns. We will continue improving our paper to make it better.\n", " Dear authors,\n\nThank you for uploading an updated version of the manuscript and an HTML file. The HTML file was really helpful in understanding how each concern from reviewers is addressed. \n\nMost of my concerns are clearly addressed, including better terminology and explanations about reflection (i.e., reflection entanglement and motion inconsistency), details for reproduction (i.e., ERRNet-illustration), and additional experiments (i.e., RR + MVS, REC, non-planar reflectors, large reflectors). In particular, the additional experiments improved the clarity of the proposed method a lot by showing which component is critical for the performance (e.g., RR + MVS), and some challenging situations that the proposed method can still handle (e.g., non-planar reflectors and large reflectors). I became more positive about this paper.", " **Dear Reviewer 2Mxb , since the external link service provider may have some problems, if you cannot open the **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** in our response, you can find the related documents from this alternative dropbox link:**\n\nhttps://www.dropbox.com/s/4zx30m8hahcz8cs/Neural%20Transmitted%20Radiance%20Fields%20supplementary.zip?dl=0\n\n**Please download the whole folders and open ‘README.html’. Then, you can find the additional examples we provide for the rebuttal.**", " **Dear Reviewer Pr2K , since the external link service provider may have some problems, if you cannot open the **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** in our previous response, you can find the related documents from this alternative dropbox link:**\n\nhttps://www.dropbox.com/s/4zx30m8hahcz8cs/Neural%20Transmitted%20Radiance%20Fields%20supplementary.zip?dl=0\n\n**Please download the whole folders and open ‘README.html’. Then, you can find the additional examples we provide for the rebuttal.**", " **Dear Reviewer K1pa, since the external link service provider may have some problems, if you cannot open the **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** in our response, you can find the related documents from this alternative dropbox link:**\n\nhttps://www.dropbox.com/s/4zx30m8hahcz8cs/Neural%20Transmitted%20Radiance%20Fields%20supplementary.zip?dl=0\n\n**Please download the whole folders and open ‘README.html’. Then, you can find the additional examples we provide for the rebuttal.**", " **Dear Reviewer 9GAV, since the external link service provider may have some problems, if you cannot open the **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** in our response, you can find the related documents from this alternative dropbox link:**\n\nhttps://www.dropbox.com/s/4zx30m8hahcz8cs/Neural%20Transmitted%20Radiance%20Fields%20supplementary.zip?dl=0\n\n**Please download the whole folders and open ‘README.html’. Then, you can find the additional examples we provide for the rebuttal.**", " Dear Reviewers,\n\nWe appreciate every comment given by our reviewers. If you have any additional questions or concerns, please let us know by the end of this Author-Reviewer Discussion Stage (Aug 9th).\n\nWe again thank all reviewers for your time reviewing and helping to improve our paper. We have updated our paper and the supplementary after considering your suggestions. You can find them from the newly submitted files. 
All revisions to our original paper and supplementary materials are denoted using the **red** font. We will continue to proofread our paper to avoid any typos.\n\nBesides, more animated results are also provided as independent files in the supplementary.\n\n**You can find our original paper in the revision history under the timestamp 17 May 2022**.\n\nFor reviewer 9GAV:\n\n1. We have better explained $f_\alpha(\mathbf{x}, \mathbf{d})$ in the revised manuscript. More discussion has been added there to explain its usage (**line 181-line 184**).\n2. Due to the page limitation, we put the discussions for REC into the supplementary materials (**Under Section A**).\n\nFor reviewer K1pa:\n\n1. We have replaced “entanglement” with “reflection interference.” The word “disambiguate” has been replaced with “separate” or “separation.” \n2. We have rephrased the paragraph with \"the absorption, reflection, and refractive effect” in the revised manuscript to improve clarity (**line 120-line 124**). \n3. “Motion inconsistency” has been replaced with “Recurring edge constraints.” Please search **\"Recurring edge\"** or **\"Recurring edge constraints\"** in our paper.\n4. We have explained $\Psi$ in the revised manuscript (**line 217-line 218**).\n5. The pretraining of ERRNet is provided in the revised manuscript (**line 237-line 239**).\n6. The way we make use of $\mathbf{W}_g$ and $\mathbf{W}_l$ is also provided in **Section B** of the supplementary.\n7. RR+MVSNeRF has been considered as a baseline in the experiments (**Table 1, Figure 3, and Figure 4**). We admit that RR+PixelNeRF will be added to our final version.\n8. We have discussed the non-planar glass in the revised manuscript (**line 247-line 248**) and clearly show its challenge in the Limitations (**line 314**).\n9. We also discuss the challenge of large reflectors in the Limitations (**line 313 - line 314**).\n10. Due to the page limitation, we include the ablation study of $\mathbf{W}_g$ and $\mathbf{W}_l$ in the supplementary materials (**Section B**). At the same time, we have cited related papers as the reference for such a design (**line 161 - line 163**).\n11. We have discussed how we set the threshold in the revised manuscript (**line 207 - line 208**).\n\nFor reviewer Pr2K:\n\n1. We have shown the quantitative values of RR+MVSNeRF in **Table 1** and the visual comparison in **Figure 3 and Figure 4** of the revised manuscript.\n2. We have shown two examples with non-reflective scenes in **Figure 6** of the revised manuscript.\n3. Due to the page limitation, more discussion about REC has been put into the supplementary materials (**Under Section A**).\n4. We have shown how $\mathbf{W}_g$ and $\mathbf{W}_l$ are used in our work (**line 163 - line 164**).\n\nFor reviewer 2Mxb:\n\n1. After checking our paper, besides the teaser, we had already included two examples with specular reflection in **Figure 3** of the original manuscript, which shows our method can handle the specular reflection. We further include two more examples with specular reflection in **Figure S4** and **Figure S8** of the revised supplementary material.\n2. We have clearly shown that we obtained the pose via COLMAP (**line 245-line 246**) and also discussed when COLMAP might fail in the Limitations. We also propose some possible solutions for future work (**line 315 - line 316**).\n3. 
Due to the page limitation, we clarify the differences between the settings pointed out by the reviewer (the 1st weakness) and ours in **Section D** of the revised supplementary material.", " Dear All, we are revising our paper based on the valuable suggestions from each reviewer. We will update our paper as soon as possible. ", " > Questions: As mentioned earlier, specular reflection moves in the opposite direction of the transmitted image when the camera moves. It causes failures in structure-from-motion, which is supposed to provide the correct camera pose information as input. This paper doesn't clearly mention how the camera pose is estimated from the input image. I might have missed it. If so, please let me know where the information is. I'm worried about the impact of the camera pose when producing other methods' results. I would like to hear more in the rebuttal.\n\nWe apologize for the lack of clarity. We follow other mainstream NeRF-style methods [NeRF 2020] and estimate the pose using COLMAP. From our experience, COLMAP can accurately extract the camera poses for our tested cases. Even for strong reflection, if the saturation only occupies limited areas, COLMAP can still accurately extract the transmission poses used for the computation. However, if the strong reflection occupies large areas, COLMAP cannot accurately differentiate the transmission and the reflection. In this situation, it may falsely extract the reflection features to obtain the poses, which affects the subsequent computation. \n\nSince the pose estimation is necessary before any NeRF-style computation, this may influence subsequent rendering. We will include the discussion in our final version.\n\n[NeRF 2020] Mildenhall B, Srinivasan P P, Tancik M, et al. Nerf: Representing scenes as neural radiance fields for view synthesis[C]//European conference on computer vision. Springer, Cham, 2020: 405-421.", " > Even though the motivation of the proposed method sounds interesting, I'm not fully sure if this paper is completely developed and evaluated to solve the technical challenges. Specular reflection works very differently from transmission. For instance, when camera motion occurs, the specular reflection and transmitted image move in opposite directions about the depth position of glass surfaces. The proposed model doesn't seem to account for the physical phenomenon. Instead, it just tries to separate the transmission and reflection along the given view vector, which is not physically plausible. This observation should be valid from a specific view angle. If the method accumulates multiple observations in a voxel grid, the accurate separation cannot be achieved by increasing the number of observations. I would like to hear more in the rebuttal. \n\n**“Specular reflection works very differently from transmission.”**\n\nWe agree that specular reflection works very differently from transmission, and the transmission and specular reflection may move in opposite directions. However, our model does not rely on such a phenomenon or any given specific viewing angles. In our experiments, we do not deliberately choose specific viewing angles as the main view or reference views. \n\n**“This observation should be valid from a specific view angle.”**\n\nWe plot two figures on this **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** under **7. Specular Reflection** to show our understanding of this question. If we understood correctly, the situation described by Reviewer 2Mxb can be depicted as Figure A. 
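For reference, both figures rest on standard mirror-reflection geometry (this note and the formula are our illustration here, not part of the original pipeline description): a viewing ray with direction $\mathbf{d}$ hitting glass with unit normal $\mathbf{n}$ is mirrored along

$$\mathbf{r} = \mathbf{d} - 2\,(\mathbf{d}\cdot\mathbf{n})\,\mathbf{n},$$

so a tiny, saturating light source is reflected into only a narrow set of viewing directions, whereas an extended reflected scene remains visible over a broad range of viewpoints.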
In Figure A, the reflection is mainly caused by light sources (like lightbulbs, flash, or extreme cases like a laser pointer) that occupy limited areas but with very strong intensity, causing saturation. When the viewing angle changes, it becomes difficult or even impossible to observe such specular reflection from another angle (especially when the light source is a small point) due to the law of reflection on a mirror-like surface. However, in our context of reflection removal, the reflection occupies much broader areas, as illustrated in Figure B. Such reflection could be observed from different angles (the law of reflection still describes the mirror-like reflection phenomenon, but the camera can receive rays from other directions). Several multi-image reflection removal methods also adopt such assumptions, where the reflection has different shapes and appearances when viewed from different angles, such as [Liu et al. 2020].\n\n[Liu et al. 2020] Liu Y L, Lai W S, Yang M H, et al. Learning to see through obstructions[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020: 14215-14224.\n\n____________\n\n> The evaluation of this paper is one of the weakest points. Except for the main results shown in the teaser, most results do not include strong specular reflection. According to the proposed formulation of the recurring edge constraint, the proposed method may work properly when there are strong contrast edges in the transmitted image. The main result of the picture frame is the case. In other cases, the results do not include any strong specular reflection. I think the results look very cherry-picked, with a very small number of examples. I would like to see more results to validate the performance of the proposed method.\n\nOur method is not solely designed for strong specular reflection. Thus, our dataset contains both strong and moderate reflections. To further verify this, we show an example by applying our method to the publicly available NeRF-style dataset, RFFR [Guo et al. 2022], and the results can be found from the **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** under **8. Results for Examples with Specular Reflection**. The strong specular reflection here only occupies limited areas and can also be observed from different positions. We hope it can partly simulate the scenario suggested by the reviewer. The results show that our method can still successfully suppress the strong specular reflection. \n\n[Guo et al. 2022] Guo Y C, Kang D, Bao L, et al. Nerfren: Neural radiance fields with reflections[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 18409-18418.\n\n", " > I found the performance comparison with respect to the baseline method MVSNeRF a bit unfair because the selected baseline methods are not designed to deal with reflection, and hence they tend to predict the reflected scene as is. Therefore, the quantitative PSNR results are much worse than the proposed method, as expected. Especially, in Figure 3, MVSNeRF almost reconstructs the exact appearance of the target view.\n\nWe agree that MVSNeRF is not designed for issues related to reflection removal. We will consider adding another baseline method, "RR+MVSNeRF", in the final version, for a fair comparison. 
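To make this baseline concrete, a minimal sketch of the two-stage pipeline is given below (our illustration only; `ReflectionRemoval` is a toy stand-in for a pretrained single-image reflection-removal network such as ERRNet, not the exact module used in the experiments):

```python
import torch
import torch.nn as nn

class ReflectionRemoval(nn.Module):
    """Toy stand-in for a pretrained single-image reflection-removal net."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, img):               # img: (B, 3, H, W) in [0, 1]
        return self.net(img).clamp(0, 1)  # predicted transmission layer

@torch.no_grad()
def stage_one(views):
    """Stage 1: clean each view independently of the others."""
    rr = ReflectionRemoval().eval()
    return torch.stack([rr(v.unsqueeze(0)).squeeze(0) for v in views])

views = torch.rand(6, 3, 256, 256)  # reflection-corrupted input views
cleaned = stage_one(views)          # Stage 2: train MVSNeRF on `cleaned`
```

Because each view is cleaned in isolation, the per-view outputs are generally not photometrically consistent with one another, while the second-stage NeRF assumes a photometrically static scene.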
Through the comparison, it will also be more evident that this setting cannot be directly applied to reconstruct the scenes even if reflection removal has been applied, since the constraint of being photometrically static may not hold. We have included this comparison in this **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** under **5. RR+MVS**. Since the reflection removal cannot suppress all reflections, the results obtained by MVSNeRF still contain reflection residuals. \n\n____________\n\n> For a NeRF method, it is also important to know the performance of the proposed method applied to normal (non-reflective) scenes. Otherwise, the usage of the proposed method is just limited to reflective scenes. In the submitted paper and supplementary material, all the examples and benchmark data are performed on scenes with reflection. The authors are suggested to provide more comparison (quantitative) and real normal-scene examples in the rebuttal period.\n\nThanks for this suggestion. Our method can also achieve robust results on non-reflective scenes. In this situation, the transmission feature extractor can be regarded as a special feature extractor, and REC can be regarded as a module to obtain the edges or gradients of the main view. We conduct more experiments on the LLFF dataset to address this concern. Only 6 views are used for training, and the other experiment settings are the same as described in our paper. The results can be found on the **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** under **6. Non-Reflective**. Our method can work properly on non-reflective scenes with sparse views, which further validates the robustness of the proposed framework under the suggested settings.\n\n____________\n\n> What is the processing speed and the network complexity of the proposed method compared to baseline methods? In order to prove the effectiveness of the proposed method, it is crucial to verify that the performance gain is not coming from the extra number of parameters in the network as well as the pre-processed edge map and reflection-purged features.\n\nThanks for this suggestion. When comparing the proposed method with various baselines, we keep the number of trainable parameters roughly the same. However, we still find that processing speed varies among different methods. A preliminary profiling test shows that the proposed method uses 1.3x the time taken by NeRF-W every epoch and 1.4x the time taken by NeRF. We attribute this inefficiency to the time-consuming homographic warping of high-dimensional transmission features. The computation time might be reduced if we further optimize our implementation, and we will try this before releasing the code.\n\n____________\n\n> From the ablation study, the recurring edge constraints (REC) only bring in very little improvement, but they are considered one of the two contributions in the method section. It seems that the proposed method is not very effective.\n\nThanks for this suggestion. We are also interested in the role of REC in the whole framework and have already done some further experiments to validate the effectiveness of REC. One experiment suggested by Reviewer 9GAV can be found in this **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** under **1. REC**, where the version with REC filters out more reflection residuals than the version without REC. 
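As a rough sketch of the recurrence test behind REC (a simplification for illustration; it assumes the neighbor-view gradient maps have already been homography-warped to the main view, and the function name and defaults are ours):

```python
import torch

def recurring_edge_mask(warped_grads, tau=0.6, min_views=3):
    """warped_grads: (V, H, W) gradient magnitudes from V views, already
    warped to the main view and normalized to [0, 1]; tau mirrors the
    empirically chosen 0.6 gradient threshold discussed in this thread.

    Transmission edges recur across many warped views, while reflection
    edges are sparse across viewpoints and rarely pass the count test.
    """
    edges = warped_grads > tau      # per-view binary edge maps
    recurrence = edges.sum(dim=0)   # number of views each edge recurs in
    return recurrence >= min_views  # (H, W) transmission-edge mask

mask = recurring_edge_mask(torch.rand(6, 128, 128))
```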
\n____________\n\n> It is true that the proposed method outperforms other baselines on the reflective NeRF dataset by a large margin. However, the method itself is quite straightforward with limited novelty. It is critical to understand the effectiveness of the proposed method by providing the performance comparison on normal datasets, and hence prove the validity of the proposed method. Normal data -> no reflection data\n\nWe have conducted more experiments to verify the effectiveness of the proposed method using more general datasets. The results can be found from the **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** under **6. Non-Reflective**.\n____________\n\n> Questions: The transmission features W_g and W_l are of different sizes. How do they feed into the T-MLP network?\n\nIn our experiments, they are upsampled using nearest-neighbor interpolation to the size of the image.", " > Non-planar reflector. How would the proposed method work when the reflector is not planar? It seems that all the experiments are done with a planar reflector. Does any data include a non-planar reflector? When the reflector is not planar, the behavior of reflections in multi-view images will be very different as the reflected parts will not be aligned after warping. To be specific, when the reflector is planar, the reflected object is equivalent to a virtual object behind the reflector, and thereby the reflected parts from different viewpoints will be located at the same pixels after warping. However, we cannot expect this alignment in the non-planar reflector as it will be projected differently depending on the viewpoint. This will affect the performance of REC, which relies on the aligned edges after warping, and it would be interesting to see how the method works in the case of a non-planar reflector.\n\n**“Non-planar reflector”**\n\nThanks for this suggestion. Most reflection-removal-related problems assume a piece of planar glass. We follow this assumption in this paper and do not specifically consider the influence of non-planar reflectors in our experiments. We will clarify this assumption in the final version. Since we capture images in the real world, some examples in Figure 3 of our paper are captured through a piece of glass with slightly curved areas, and our method still shows its robustness. \n\n**“We cannot expect this alignment in the non-planar reflector as it will be projected differently depending on the viewpoint.”**\n\nWe agree that the reflection components may not be aligned for non-planar reflectors in some situations, while REC only needs to find the recurring transmission components. Thus, the unaligned reflection components are not a big issue here. We show an example in this **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** under **3. Non-planar** to better explain it. In this example, the reflection components are not aligned after the warping, while our method can still identify the corresponding transmission edges and reflection edges.\n\nWe try to analyze the influence of non-planar glass on our framework. If such non-planar glass distorts the light emitted by the transmission scenes behind the glass, it may influence the rendering process. In this situation, extracting the necessary transmission poses may become difficult due to the distorted transmission details. \n\nWe agree that non-planar glass is an interesting problem for NeRF with reflections, which is worth exploring further. 
We will carefully consider it in our future study. \n____________\n\n> Large reflector. What if the reflector is large enough that the reflections exist in every viewpoint? (e.g., a large window as in [9]). This case breaks the assumption used for REC that the reflection is sparse across the different viewpoints, and only “motion” inconsistency can disambiguate reflections and transmissions (though it is still ambiguous to determine which one is transmission).\n\nThis is a very good question. The reflection may only dominate limited regions in many situations due to its regional property [Wan et al. 2022]. Thus, the assumption used for REC is still a valid approximation. From our answer to Reviewer 9GAV's question and the results shown in this **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** under **1. REC**, even when the reflection occupies larger areas, our method can still work correctly if COLMAP can extract the transmission pose well. COLMAP fails to accurately estimate the transmission pose needed for the warp in silvered-mirror scenarios (**I** = 0.2**B**+0.8**R**), where the reflection almost occludes the light rays emitted by the transmission scene. In this situation, transmission REC cannot be extracted.\n\n[Wan et al. 2022] Wan R, Shi B, Li H, et al. Benchmarking single-image reflection removal algorithms[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.\n\n____________\n\n> Ablation study of feature pyramid W_g and W_l. What if the transmission feature W is used without a pyramid? \n\nWe have shown more ablation studies for this design, and the results can be found from the following **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** under **4. Ablation**. The complete model outputs more robust results than the models without $\mathbf{W}_g$ or $\mathbf{W}_l$, though both incomplete models can still suppress the reflection to some degree.\n\n____________\n> How is the threshold 0.6 in (12) determined?\n\nThis threshold is determined empirically. It filters out some small gradient values belonging to the reflection components. We search from 0 to 1 with a step of 0.1 and fix it as 0.6 in our experiments. ", " > Missing baseline. A baseline (that might be interesting) is missing, that is, RR + pixel-NeRF (without the transmission feature). One of the main contributions of this paper is using the transmission feature, which is the combination of 1) reflection removal and 2) pixel-NeRF (assisting the training of NeRF). If these two parts are divided into the reflection removal part and the pixel-NeRF part, it can be another baseline of RR + pixel-NeRF, which would be a fairer and more interesting baseline. \n\nWe agree that RR+Pixel-NeRF may reveal interesting phenomena and new insights. However, its data loader is designed for data generated by Blender and would need considerable effort to adapt to our case. We are afraid we cannot complete this within the tight rebuttal timeline. As an alternative, MVSNeRF is similar to PixelNeRF and uses PixelNeRF as a baseline, which can be found in Table 1 and Table 2 of [Chen et al. 
2021] Chen A, Xu Z, Zhao F, et al. Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 14124-14133.\n\n____________\n\n> REC has a limited performance (at least quantitatively). The second main contribution of this paper is using recurring edge constraints (REC), but the effect of REC seems to be marginal quantitatively, as shown in the ablation study (Table 2). The PSNR without REC is 22.48, which is almost the same as that of the complete model (22.75). It would be interesting to see how REC works in more challenging data. \n\nWe appreciate this suggestion. It is also mentioned by Reviewer 9GAV. We follow the suggestion made by Reviewer 9GAV and show the influence in a "slowly-relaxed" but more challenging setting. Specifically, we achieve this goal by using synthetic images with gradually changing parameters (to mimic moderately and highly reflective surfaces) for the reflection components and making the reflection components cover the whole image plane. The results can be found from this **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** under **1. REC**. In the first experiment with $\mathbf{I} = 0.6\mathbf{B}+0.4\mathbf{R}$, for the reflection (at the center of *Our Model without REC*) that is hard to suppress further with the transmission feature extractor, REC can exclude it during the rendering process. However, we agree that REC cannot function well under the silvered-mirror case with $\mathbf{I} = 0.2\mathbf{B}+0.8\mathbf{R}$. In this situation, since COLMAP can only extract dominant reflection features, REC cannot extract the transmission patterns (the flower, which becomes almost invisible to human vision in this case).\n____________\n\n> The presentation of the manuscript can be improved. There are some ambiguous definitions or explanations: [Line 123] What is transmission and reflection entanglement? If it means transmission and reflection have an inherent ambiguity, then the proposed method cannot disambiguate either. “Due to the absorption, reflection, and refractive effect ~” should be further clarified.\n\n**“What is transmission and reflection entanglement?”**\n\nBy "transmission and reflection entanglement", we mean that the accurate separation of the transmission $\mathbf{B}$ and the reflection $\mathbf{R}$ is an ill-posed problem, which is recognized in reflection-removal-related areas. We agree that our method cannot "disambiguate" them, while we hope to make their separation as reasonable as possible under the current framework. We also realize that the terms "entanglement" and "disambiguate" used here are not clear and accurate enough. In the final version, we will clearly say that their separation is an ill-posed problem, and our goal is to provide a reasonable separation under the current framework.\n\n**“Due to the absorption, reflection, and refractive effect ~” should be further clarified.**\n\nThe absorption, reflection, and refractive effects denote several factors that may influence the light emitted by the objects on both sides of the glass. When light travels through a piece of glass, the light's intensity is typically influenced by the absorption and reflectivity effects [Wan et al. 2022]. The refractive effect is related to the density of the glass and mainly affects the relationship between the transmission and the reflection. These factors jointly make reflection separation a difficult task. 
More details about the three factors can be found in this paper [Wan et al. 2022]. This part will be further clarified in the final version. \n\n[Wan et al. 2022] Wan R, Shi B, Li H, et al. Benchmarking single-image reflection removal algorithms[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.\n\n______\n\n> [Motion inconsistency] The terminology “motion inconsistency” (used frequently all around the paper including the abstract) used for recurring edge constraints is somewhat misleading. The key idea used for recurring edge constraints is that the reflected component may not exist in some viewpoints and thereby has a sparse presence. The reason for this phenomenon is that the size of the reflector is limited, which causes the reflected object to be outside the reflector and disappear in some viewpoints. It has nothing to do with motion, and thus the term “motion inconsistency” is not the appropriate term to describe the method. Maybe the reflected object is at a different depth from the transmitted object and moves differently in the image (e.g., larger disparity when it is further), but this is not the information that the proposed method directly uses. The description in the main paragraph (line 187-) is already clear, so just choosing better terminology would improve the clarity of the proposed method.\n\nThanks for this helpful suggestion. It provides a better perspective on an important component of our framework. We agree that this phenomenon is due to the limited size of the reflector, which causes the reflected object to fall outside some viewpoints. After carefully considering the reviewers' suggestions, we will directly use "Recurring Edge Constraint" in the final version. \n____________\n> [Line 210] What is \Psi? The notation seems not to be defined.\n\nEq. (14) below line 210 has two lines, and the definition of $\Psi$ is given in the second line. It defines a pixel-wise correlation between the transmission and reflection, which helps to separate them in the gradient domain. \n____________\n> Some important details about the transmission feature are missing. What network is used for feature W? From Line 155, I assume the network is based on ERRNet, but it is difficult to see which part of ERRNet is used as there are many components in ERRNet. Line 162 is not enough for understanding the exact structure. Also, Line 232 explains the pretraining of the transmission encoder briefly, and it is somewhat confusing whether the method is different from the original ERRNet. The network structure and the training details need to be added to the supplemental material.\n\nWe apologize for this lack of clarity. We will clarify this in the final version, and the details can be found in this **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** under **2. ERRNet-illustration**. For the pretraining of this transmission feature extractor, we follow the strategy proposed in ERRNet with its released training data. Thus, the model is approximately equivalent to its released model. \n", " 
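As a postscript to the transmission-feature answer above, here is a schematic sketch of how the two pyramid levels can be produced and fed to the T-MLP (the encoder is a toy stand-in for the pretrained ERRNet-based extractor, the channel sizes are illustrative, and the nearest-neighbor upsampling matches the earlier answer in this thread):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTransmissionEncoder(nn.Module):
    """Toy stand-in for the pretrained transmission feature extractor."""
    def __init__(self, c=32):
        super().__init__()
        self.local = nn.Conv2d(3, c, 3, stride=2, padding=1)  # fine W_l
        self.glob = nn.Conv2d(c, c, 3, stride=4, padding=1)   # coarse W_g

    def forward(self, img):
        w_l = self.local(img)  # (B, c, H/2, W/2)
        w_g = self.glob(w_l)   # (B, c, H/8, W/8)
        return w_g, w_l

img = torch.rand(1, 3, 256, 256)
w_g, w_l = ToyTransmissionEncoder()(img)
h, w = img.shape[-2:]
# Both levels are upsampled to image resolution with nearest-neighbor
# interpolation and concatenated per pixel to condition the T-MLP.
cond = torch.cat([
    F.interpolate(w_g, size=(h, w), mode="nearest"),
    F.interpolate(w_l, size=(h, w), mode="nearest"),
], dim=1)  # (1, 2c, 256, 256)
```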
The viewing direction (wrt lights in the scene) and the camera position are correlated and more discussion is warranted on whether an MLP that encodes the weighting coefficients is sufficient in general.\n\nWe apologize for the missing details about weighting coefficients $\\alpha$, which may have misled the reviewer into doubting the effectiveness of the MLP used here. In the inline equation $f_\\alpha(\\mathbf{x}, \\mathbf{d})$ at line 180, $\\mathbf{x}$ refers to the position of any given point, and $\\mathbf{d}$ is the viewing direction, consistent with the notation defined at line 106 and line 107. The weighting map of a given view is rendered similarly to Eq.(2), where $\\sigma_{t}^{(i)}$ being the $\\sigma$ in the equation, and the value of $\\alpha$ along the ray emitted from the camera is accumulated. This setting enables the network to increase its robustness with real-world cases in our experiments. These details will be clarified in the final version.\n___\n> Question 1: It would be an interesting study to see the performance of this method as the sparsity assumption is slowly relaxed, where the reflective components in the scene change from transmissive glass plates on one end of the spectrum to fully silvered mirrors on the other. Have the authors performed any experiments on such scenes? \n\nThis is a very inspiring suggestion. Before we answer this question, we define $\\mathbf{I}$, $\\mathbf{B}$, and $\\mathbf{R}$ as the mixture image captured through glass, the transmission components, and the reflection components. This definition follows the common settings in reflection-removal-related works.\n\nWe use $\\mathbf{I} = 0.2\\mathbf{B}+0.8\\mathbf{R}$ (relatively high weight on reflection) to mimic the ``silvered-mirror\" case. The results are given in this **[link](https://anonymous.4open.science/r/NeurIPS_1660/README.md)** under **1. REC**. The COLMAP used for the pose extraction cannot obtain the transmission features, which the reflection has almost totally occluded.\n\nWe adjust the weight to mimic the transmissive glass with more common reflectivity with $\\mathbf{I} = 0.6\\mathbf{B}+0.4\\mathbf{R}$. COLMAP can well extract the pose information for REC computation, and our proposed scheme can also work properly. \n______\n> Question 2: While occlusions are one of the main limiting factors, occlusions in the reflected part of the scene however are not likely to cause a problem in my opinion, since the edge constraint assumption and the sparsity assumption are still valid?\n\nThe edge constraint and sparsity assumptions are still valid even if some of the reflection components are occluded in certain views. Thus, we do not consider the occlusions in the reflected part as a limiting factor. ", " We appreciate the comments and efforts made by our reviewing panel members. We hope to use this opportunity to provide more details about our work. Thanks very much.\n", " The paper proposes a method to learn neural radiance fields that represent the underlying scene free of reflective components in the scene, i.e. explicitly represented the transmitted regions of the scene. Prior work in representing transmitted radiance field relies on reflection removal from the input image sequence, however this is a challenging problem and typically results in photometric inconsistencies. The proposed method uses a novel formulation leveraged on the observation that reflective components in the radiance field are sparser than the transmitted components. 
A patch-based rendering scheme is used to handle the local characteristics of reflective/transmissive components. Strengths:\nThe paper is well written and the exposition is clear. The paper provides a through introduction and a motivation for the solution, before properly explaining the proposed solution. As such I find the paper to be a useful contribution to the community and beneficial for the reader. \nThe use of the transmission encoder with pyramid-scale features is interesting and the choice of Wg and Wl is properly motivated. \nThe recurring edge constraints are the core strength of the paper and the description provided in section 4.2 is succinct. \nThe qualitative and quantitative results in the paper and supplemental material clearly demonstrates that the transmitted radiance field is captured free form noise due to reflection. \n\nWeaknesses:\nThe authors rightly point out that weighting coefficients are dependent on several factors. The viewing direction (wrt lights in the scene) and the camera position are correlated and more discussion is warranted on whether an MLP that encodes the weighting coefficients is sufficient in general. It would be an interesting study to see the performance of this method as the sparsity assumption is slowly relaxed, where the reflective components in the scene change from transmissive glass plates on one end of the spectrum to fully silvered mirrors on the other. Have the authors performed any experiments on such scenes? \n\nWhile occlusions are one of the main limiting factors, occlusions in the reflected part of the scene however are not likely to cause a problem in my opinion, since the edge constraint assumption and the sparsity assumption are still valid? Yes, the authors discuss the limitations of the work", " This paper targets to solve the novel-view synthesis problem with reflection removal, that is, novel-view synthesis of a transmitted object from images corrupted by reflections. A naive baseline, that applies reflection removal techniques to each input image before NeRF, does not solve the problem as the resultant image would not be multi-view consistent; This is because most reflection-removal techniques cannot take advantage of multiple viewpoints. This paper solves this problem by introducing 1) transmission feature integration and 2) recurring edge constraints. First, Transmission feature integration is based on the idea of pixel-NeRF that the feature from other viewpoints can assist the training, and the paper used “transmission feature” instead of the vanilla pixel feature in pixel-NeRF. Second, recurring edge constraints are based on the assumption that a reflected component is sparse in its presence in the aligned image. The paper also collected a new dataset for real multi-view images corrupted by reflections, and the proposed method shows promising results. ### Strengths\n- Promising results. The proposed method shows promising results on real multi-view images corrupted by reflections. The comparison with other methods such as NeRF, NeRF-W, and RR + NeRF, also shows that the proposed method performs superior both qualitatively and quantitatively, especially when the number of input images is limited.\n- New dataset of multi-view images with and without reflections. 
The paper shows the newly collected multi-view images, which can facilitate further research on multi-view reconstruction and reflection removal.\n\n### Weaknesses\nWhile the paper proposes an interesting method with promising results, there are some weaknesses that can be improved:\n- The presentation of the manuscript can be improved. There are some ambiguous definitions or explanations:\n - [Line 123] What is transmission and reflection entanglement? If it means transmission and reflection have an inherent ambiguity, then the proposed method cannot disambiguate either. “Due to the absorption, reflection, and refractive effect ~” should be further clarified.\n - [Motion inconsistency] The terminology “motion inconsistency” (used frequently all around the paper including the abstract) used for recurring edge constraints is somewhat misleading. The key idea used for recurring edge constraints is that the reflected component may not exist in some viewpoints and thereby have a sparse presence. The reason for this phenomenon is the size of the reflector is limited, which causes the reflected object to be outside the reflector and disappear in some viewpoints. It has nothing to do with motion and thus the term “motion inconsistency” is not the appropriate term to describe the method. Maybe the reflected object is at a different depth from the transmitted object and moves differently in the image (e.g., larger disparity when it is further), but it is not the information that the proposed method directly uses. The description in the main paragraph (line 187-) is already clear, so just choosing a better terminology would improve the clarity of the proposed method.\n - [Line 210] What is \\Psi? The notation seems to be not defined.\n- Some important details about the transmission feature are missing. What network is used for feature W? From Line 155, I assume the network is based on ERRNet but it is difficult to see which part of the ERRNet is used as there are many components in the ERRNet. Line 162 is not enough for understanding the exact structure. Also, Line 232 explains the pretraining of the transmission encoder briefly and it is somewhat confusing if the method is different from the original ERRNet. The network structure and the training detail needs to be added to the supplemental material.\n- Missing baseline. A baseline (that might be interesting) is missing, that is RR + pixel-NeRF (without transmission feature). One of the main contributions of this paper is using the transmission feature, which is the combination of 1) reflection removal and 2) pixel-NeRF (assist the training of NeRF). If these two parts are divided into the reflection removal part and the pixel-NeRF part, it can be another baseline of RR + pixel-NeRF, which will be a more fair and interesting baseline.\n- REC has a limited performance (at least quantitatively). The second main contribution of this paper is using recurring edge constraints (REC), but the effect of REC seems to be marginal quantitatively as shown in the ablation study (Table 2). The PSNR without REC is 22.48, which is almost the same as that of the complete model (22.75). It would be interesting to see how REC works in more challenging data. Additional experiments that may further demonstrate the robustness of the proposed method:\n\n- Non-planar reflector. How would the proposed method work when the reflector is not planar? It seems that all the experiments are done with a planar reflector. Does any data include a non-planar reflector? 
When the reflector is not planar, the behavior of reflections in multi-view images will be very different as the reflected parts will not be aligned after warping. To be specific, when the reflector is planar, the reflected object is equivalent to a virtual object behind the reflector, and thereby the reflected parts from different viewpoints will be located at the same pixels after warping. However, we cannot expect this alignment in the non-planar reflector as it will be projected differently depending on the viewpoint. This will affect the performance of REC, which relies on the aligned edges after warping, and it would be interesting to see how the method works in the case of a non-planar reflector.\n- Large reflector. What if the reflector is large enough that the reflections exist in every viewpoint? (e.g., a large window as in [9]). This case breaks the assumption used for REC that the reflection is sparse across the different viewpoints, and only “motion” inconsistency can disambiguate reflections and transmissions (though it is still ambiguous to determine which one is transmission).\n- Ablation study of feature pyramid W_g and W_l. What if the transmission feature W is used without a pyramid?\n- How is the threshold 0.6 in (12) determined? The questions in the above section (Questions) include some limitations that are not handled in the paper: the non-planar reflector and the large reflector. The proposed method may not work for those cases of reflectors.", " This paper proposes a novel neural radiance field rendering method that deals with specular reflection on the object’s surface. The proposed method aims at recovering only the transmission radiance behind the reflection. To that end, this paper proposes to prepare two dedicated networks, i.e., T-MLP and R-MLP, to learn the transmission features and reflection features. This is achieved by applying a single-image reflection removal method to the training data to separate the background and the reflection. The learned transmission and reflection color radiances are then combined in a convex combination. In addition, in order to guide the learning of background high-frequency details, this method also applies recurring edge constraints, which utilize the observation that background edges appear consistently in multiple different views. Strengths\n1. This paper is generally well-written with clear motivation in the introduction section. It clearly defines the current problem and challenge left by existing NeRF-based methods, which is the reconstruction of scenes behind transparent surfaces with specular reflection. \n2. The comprehensive experiments show that the proposed method consistently outperforms the state-of-the-art methods by a considerable margin, in both qualitative and quantitative evaluations. \n3. This paper proposes a new NeRF-purpose dataset, which focuses particularly on scenes behind specular reflection. The proposed dataset may have a strong impact on future research in this area. \n\nWeakness\n1. I found the performance comparison with respect to the baseline method MVSNeRF a bit unfair because the selected baseline methods are not designed to deal with reflection, and hence they tend to predict the reflected scene as is. Therefore, the quantitative PSNR results are much worse than the proposed method, as expected. Especially, in Figure 3, MVSNeRF almost reconstructs the exact appearance of the target view. \n2. 
For a NeRF method, it is also important to know the performance of the proposed method applied to normal (non-reflective) scenes. Otherwise, the usage of the proposed method is just limited to reflective scenes. In the submitted paper and supplementary material, all the examples and benchmark data are performed on scenes with reflection. The authors are suggested to provide more comparison (quantitative) and real normal-scene examples in the rebuttal period. \n3. What is the processing speed and the network complexity of the proposed method compared to baseline methods? In order to prove the effectiveness of the proposed method, it is crucial to verify that the performance gain is not coming from the extra number of parameters in the network as well as the pre-processed edge map and reflection-purged features. \n4. From the ablation study, the recurring edge constraints (REC) only bring in very little improvement, but they are considered one of the two contributions in the method section. It seems that the proposed method is not very effective. \n5. It is true that the proposed method outperforms other baselines on the reflective NeRF dataset by a large margin. However, the method itself is quite straightforward with limited novelty. It is critical to understand the effectiveness of the proposed method by providing the performance comparison on normal datasets, and hence prove the validity of the proposed method. \n\n 1. The transmission features W_g and W_l are of different sizes. How do they feed into the T-MLP network?\n The limitation of the proposed method lies in applying it to normal scenes or standard NeRF datasets. If it cannot perform well on non-reflective scenes, the generalizability of the method will be the biggest limitation. ", " This paper proposes a novel view synthesis network specially designed for see-through scenarios. This paper introduces a transmission encoder, which separately estimates the transmission amount against the specular highlight's reflection. In addition, this paper introduces a recurring edge constraint to account for the frequency of edges. [Strengths]\n+ The application and approach of the transmissive scenario sound interesting to me. The specular reflection on glass in the see-through scenario has rarely been discussed in the neural rendering field. I found this new research problem interesting. Existing solutions such as vanilla NeRF seem to fail when there is a specular reflection in input images, while the proposed method works properly. \n\n[Weaknesses]\n- Even though the motivation of the proposed method sounds interesting, I'm not fully sure if this paper is completely developed and evaluated to solve the technical challenges. Specular reflection works very differently from transmission. For instance, when camera motion occurs, the specular reflection and transmitted image move in opposite directions about the depth position of glass surfaces. The proposed model doesn't seem to account for the physical phenomenon. Instead, it just tries to separate the transmission and reflection along the given view vector, which is not physically plausible. This observation should be valid from a specific view angle. If the method accumulates multiple observations in a voxel grid, the accurate separation cannot be achieved by increasing the number of observations. I would like to hear more in the rebuttal.\n\n- The evaluation of this paper is one of the weakest points. 
Except for the main results shown in the teaser, most results do not include strong specular reflection. According to the proposed formulation of the recurring edge constraint, the proposed method may work properly when there are strong contrast edges in the transmitted image. The main result of the picture frame is the case. In other cases, the results do not include any strong specular reflection. I think the results look very cherry-picked, with a very small number of examples. I would like to see more results to validate the performance of the proposed method. - As mentioned earlier, specular reflection moves in the opposite direction of the transmitted image when the camera moves. It causes failures in structure-from-motion, which is supposed to provide the correct camera pose information as input. This paper doesn't clearly mention how the camera pose is estimated from the input image. I might have missed it. If so, please let me know where the information is. I'm worried about the impact of the camera pose when producing other methods' results. I would like to hear more in the rebuttal. Limitations are clearly mentioned in the main paper. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "QLkPGH-LYkF", "QLkPGH-LYkF", "6hKE4Voh6mo", "6IjNF0Eyqf3", "QLkPGH-LYkF", "UxXchnQsb83", "nips_2022_KglFYlTiASW", "nips_2022_KglFYlTiASW", "6hKE4Voh6mo", "6hKE4Voh6mo", "6IjNF0Eyqf3", "QLkPGH-LYkF", "QLkPGH-LYkF", "QLkPGH-LYkF", "UxXchnQsb83", "nips_2022_KglFYlTiASW", "nips_2022_KglFYlTiASW", "nips_2022_KglFYlTiASW", "nips_2022_KglFYlTiASW", "nips_2022_KglFYlTiASW" ]
nips_2022_wlrYnGZ37Wv
Sequencer: Deep LSTM for Image Classification
In recent computer vision research, the advent of the Vision Transformer (ViT) has rapidly revolutionized various architectural design efforts: ViT achieved state-of-the-art image classification performance using self-attention found in natural language processing, and MLP-Mixer achieved competitive performance using simple multi-layer perceptrons. In contrast, several studies have also suggested that carefully redesigned convolutional neural networks (CNNs) can achieve advanced performance comparable to ViT without resorting to these new ideas. Against this background, there is growing interest in what inductive bias is suitable for computer vision. Here we propose Sequencer, a novel and competitive architecture alternative to ViT that provides a new perspective on these issues. Unlike ViTs, Sequencer models long-range dependencies using LSTMs rather than self-attention layers. We also propose a two-dimensional version of Sequencer module, where an LSTM is decomposed into vertical and horizontal LSTMs to enhance performance. Despite its simplicity, several experiments demonstrate that Sequencer performs impressively well: Sequencer2D-L, with 54M parameters, realizes 84.6% top-1 accuracy on only ImageNet-1K. Not only that, we show that it has good transferability and the robust resolution adaptability on double resolution-band. Our source code is available at https://github.com/okojoalg/sequencer.
Accept
Four reviewers provided detailed feedback on this paper. The authors responded to the reviews and I appreciate the authors' comments and clarifications, specifically that each question/comment is addressed in detail. The authors also uploaded a revised version of the paper. After the two discussion periods, all four reviewers suggest to accept the paper (although the scores do not exceed a "weak accept"). After considering the reviewers' and authors' comments, I believe that the paper should be accepted to NeurIPS. Weaknesses include: * Some concerns about experimental results, e.g. highlighting accuracy vs. number of parameters but not also highlighting limitations when looking throughput (comparing only parameters (or FLOPS) can sometimes be misleading, see also [The efficiency misnomer, ICLR22](https://arxiv.org/abs/2110.12894)). But it's good that throughput numbers are presented in the paper and the paper acknowledges this limitation. Related: concerns about computational cost. * Some concerns regarding relevant related literature (addressed in comments and revision) and novelty of the approach. * Limitation to image classification only in the experiments (partially addressed in comments and revision). * More interpretation of the effect of using LSTMs could be helpful to the reader (partially addressed in comments). Strengths include: * Interesting, conceptually simple approach that revisits LSTMs for images, which could be specifically useful for high resolution images. * Reviewers agree that the paper is well-written. * Experimental results and ablations are strong with respect to the claims made. Minor points (not affecting this decision, but potentially useful to authors when preparing the final revision): * MLP-based methods "cannot cope with flexible input sizes during inference" - I think this is only partially true, even the original MLP-Mixer paper shows how this can be solved e.g. in fine-tuning by "modifying the shape of Mixer’s token-mixing MLP blocks" * minor typo I randomly encountered: Table 3, row 3, column "Flowers" 89.5 -> 98.5 * "It is demonstrated that modeling long-range dependencies by self-attention is not necessarily essential in computer vision" - To some degree similar "demonstrations" are visible in CNNs and MLP-Mixers, so this claim seems a bit strong, maybe?
train
[ "JzsvoxPspUu", "PtNi56Ce1S1", "vcytZlC49Z1", "5SOiseeBCR8", "Jmr6X-hmg1k", "uE4MKBEvyBr", "HzVRTA9px-B", "v02LgXSwmKI", "9E2JNrrxZkX", "3Y_jBaO7gsr", "BR7MmAgezxJ", "EsmSLbo41JW", "BCUgzjlVL7", "h_kpfnBLW_X", "c6ypb13dnO4" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response from the authors. My concerns are mostly addressed. Although I am still worried about the throughput issue in standard ImageNet resolution, I lean toward acceptance as the successful trial of replacing self-attention with LSTM in ViT deserves credit.", " Thanks for your positive comments. As you suggested, we will include the training/inference throughput discussion and the official code link in the final version.\n\nThanks to your excellent review, we were able to improve the manuscript.", " Thank you for the answers and updating the paper. \n\nPlease add the detailed discussion about training/inference throughput and memory in the final version as this is a major limitation of the proposed method. Also, I suggest the authors make the code and pre-trained models public. This will help the community to reconsider the RNN-based models for vision tasks. \n\nI don't have any further questions. ", " The authors have addressed my questions. I have no further question.", " ### Reply to Question 2 (About Throughput)\n\nThroughput itself is generally considered to decrease as resolution increases, but when throughput is compared to DeiT, Sequencer has an advantage as resolution increases. We have corrected the incomplete description in the revised version. The throughput advantage of Sequencer over DeiT, as the resolution increases is as mentioned in the reply to Question 1.\n\nYour concern about the time it takes to train is most understandable. The following are the peak memory results during training.\n\n| Model | Infer Throughput(image/s) | Infer Peak Mem. | Train Throughput(image/s) | Train Peak Mem. |\n| --- | --- | --- | --- | --- |\n| RegNetY-4GF | 823 | 225 | 228 | 1136 |\n| ConvNeXt-T | 1124 | 248 | 337 | 1418 |\n| DeiT-S | 1569 | 180 | 480 | 1195 |\n| Swin-T | 894 | 308 | 268 | 1613 |\n| ViP-S/7 | 702 | 195 | 214 | 1587 |\n| CycleMLP-B2 | 586 | 234 | 158 | 1357 |\n| PoolFormer-S24 | 988 | 183 | 313 | 1461 |\n| Sequencer2D-S (Ours) | 347 | 196 | 110 | 1799 |\n| RegNetY-8GF | 751 | 333 | 211 | 1776 |\n| T2T-ViT$_{t}$-19 | 654 | 1140 | 197 | 3520 |\n| CycleMLP-B3 | 367 | 287 | 100 | 2326 |\n| PoolFormer-S36 | 673 | 220 | 213 | 2187 |\n| GFNet-H-S | 755 | 282 | 227 | 1740 |\n| Sequencer2D-M (Ours) | 270 | 244 | 83 | 2311 |\n| RegNetY-12GF | 695 | 440 | 199 | 2181 |\n| ConvNeXt-S | 717 | 341 | 212 | 2265 |\n| Swin-S | 566 | 390 | 165 | 2635 |\n| Mixer-B/16 | 1011 | 407 | 338 | 1864 |\n| ViP-M/7 | 395 | 396 | 130 | 3095 |\n| CycleMLP-B4 | 259 | 338 | 70 | 3272 |\n| PoolFormer-M36 | 496 | 368 | 171 | 3191 |\n| GFNet-H-B | 482 | 367 | 144 | 2776 |\n| Sequencer2D-L (Ours) | 173 | 322 | 54 | 3516 |\n\nThe results above were measured under the same conditions as the throughput measurements in Table 1. While it is true that training throughput is not good, the results show that the training throughput is about three times the inference throughput for all these models. Compared to other models, both measured inference and training time are not good. 
Future research should be performed to determine whether throughput can be improved by reducing sequence length in combination with convolution and pooling.\n\n#### Reference\n\n[1] \"Multi-dimensional recurrent neural networks.\" ICANN 2007.\n\n[2] \"Pixel recurrent neural networks.\" ICML 2016.\n\n[3] \"Scene labeling with lstm recurrent neural networks.\" CVPR 2015.\n\n[4] \"Semantic Object Parsing with Local-Global Long Short-Term Memory.\" CVPR 2016.\n\n[5] \"Renet: A recurrent neural network based alternative to convolutional networks.\" arXiv:1505.00393 2015.\n\n[6] \"Metaformer is actually what you need for vision.\" CVPR 2022.\n", " \nThank you for your comments and for pointing out further related work! We have included those papers and a discussion addressing your concerns in the paper. The revised paper has been uploaded to OpenReview.\n\n### Reply to Question 1\n\nThank you for your question. Our unclear statement may have caused a misunderstanding. We have revised the relevant part, as these claims should be stated as memory- and throughput-economical \"compared to DeiT\". \n\n> The higher the input resolution, the more memory-efficient and throughput-economical are on Sequencers\n\nIn particular, the above is incorrect and is corrected below:\n\n> The higher the input resolution, the higher the memory efficiency and throughput of Sequencers when compared to DeiT. \n\nWhy are Sequencers more memory-economical than DeiT on high-resolution input? BiLSTM2D processes multiple columns and rows at once, using $WC/2$- and $HC/2$-dimensional memory cell states, respectively. BiLSTM2D hidden states are used as $CHW$-dimensional outputs. In contrast, a multi-head attention requires $CHW$-dimensional values and $head*(HW)^2$-dimensional attention maps, where $H$, $W$, and $C$ are height, width, and channel, respectively. Thus, increasing H and W is disadvantageous to DeiT's memory consumption. In addition, Figure 3c (revised version) supports this view. \n\nAt the $896^2$ resolution in Figure 3d (revised version), we see experimentally that the throughput of Sequencer is better than DeiT's. This result is influenced by the vertical and horizontal decomposition, not the usual LSTM structure. Assuming $W=H$ for simplicity, the complexity of self-attention is $\\mathcal{O}(W^4 C)$, whereas the computational complexity of BiLSTM is $\\mathcal{O}(WC^2)$. Namely, the computational complexity of attention is $\\mathcal{O}(W^3/C)$ times higher than that of BiLSTM. By contrast, there are $\\mathcal{O}(1)$ sequential operations for self-attention, whereas there are $\\mathcal{O}(W)$ sequential operations for BiLSTM2D. This implies that the increase in the complexity of self-attention with increasing W has a much larger impact than the increase in BiLSTM2D sequential operations. Therefore, assuming we use a sufficiently efficient RNN cell implementation, such as the official PyTorch LSTMs we are using, the complexity of self-attention grows much more rapidly than that of BiLSTM2D. This implies a lower throughput of self-attention compared to BiLSTM2D at high resolution.\n\n### Reply to Question 2 (Lack of related work)\n\nThank you for your suggestion to add the citations and to clarify the study's position relative to those works.\n\nAs you said, ReNet [5] is one of the excellent studies that are very relevant to our work. We have followed your suggestion and revised the paper to include the following explanation.\n\n> ReNet [5] uses a 4-way LSTM and non-overlapping patches as input. 
In this respect, it is similar to Sequencer. Meanwhile, there are three differences. First, Sequencer is the first MetaFormer [6] realized by adopting LSTM as the token mixing block. Sequencer also adopts a larger patch size than ReNet [5]. The benefit of adopting these designs is that we can modernize LSTM-based vision architectures and fairly compare LSTM-based models with ViT. As a result, our results provide further evidence for the intriguing MetaFormer hypothesis. Second, the way of connecting the vertical BiLSTM and the horizontal BiLSTM is different. Our work connects them in parallel, allowing us to gather vertical and horizontal information simultaneously, whereas ReNet [5] feeds the output of the horizontal BiLSTM as input to the vertical BiLSTM. Finally, we trained Sequencer on large datasets such as ImageNet, whereas ReNet [5] is limited to small datasets such as MNIST, CIFAR-10, and SVHN, and has not shown the effectiveness of LSTM for larger datasets. \n\nFor [1-4], which you have pointed out, the revised version compares these works with our study. The submitted version already cited [1]; the revised version elaborates on the differences.\n", " \nYou have raised important questions. We would like to thank you for this. Find below our answers to the reviewer's concerns:\n\n### The novelty of the proposed method.\n\nAs you say, in order to claim the novelty of our method, we need to state exactly how it differs from MetaFormer [1]. We agree that Sequencer is a MetaFormer template-based work; however, we have adopted LSTM, an unexplored non-local and non-attentional inductive bias, while MetaFormer (PoolFormer) [1] uses pooling, which is a local bias. In addition, LSTM is a different module from ViT [2], MLP-Mixer [3], and their many variants, which also follow the template of MetaFormer [1]. This effort not only provides further evidence of the MetaFormer concept but also encourages the community to rethink the possibilities of LSTM-like architectures. Sequencer outperforms PoolFormer [1] on all measures except throughput. From Table 1, compared to PoolFormer-M36, Sequencer2D-S requires about half the number of parameters and reaches about 70% of the throughput, with a 0.2 top-1 accuracy increase. It loses a bit in throughput but outperforms in top-1 accuracy and memory. Thus, in terms of accuracy and number of parameters, Sequencer is superior to PoolFormer. This result also provides new evidence for the important MetaFormer hypothesis.\n\n\n### High FLOPs and low throughput\n\nAs you point out, the drawback of this model is poor throughput, which is also mentioned in the paper as a limitation; improving throughput is certainly a subject for future research. The revised version states this more clearly. 
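Since much of this discussion hinges on what the BiLSTM2D token mixer computes, here is a minimal sketch of such a block (a simplified reconstruction for illustration; the layer names and the final fusion layer are our own simplifications, not the exact implementation):\n\n```python\nimport torch\nimport torch.nn as nn\n\n\nclass BiLSTM2D(nn.Module):\n    # vertical and horizontal bidirectional LSTMs applied in parallel, then fused\n    def __init__(self, dim, hidden):\n        super().__init__()\n        self.v = nn.LSTM(dim, hidden // 2, batch_first=True, bidirectional=True)\n        self.h = nn.LSTM(dim, hidden // 2, batch_first=True, bidirectional=True)\n        self.fuse = nn.Linear(2 * hidden, dim)  # merge both directions back to dim\n\n    def forward(self, x):  # x: (B, H, W, C) token grid\n        B, H, W, C = x.shape\n        v, _ = self.v(x.permute(0, 2, 1, 3).reshape(B * W, H, C))  # LSTM along columns\n        h, _ = self.h(x.reshape(B * H, W, C))  # LSTM along rows\n        v = v.reshape(B, W, H, -1).permute(0, 2, 1, 3)  # back to (B, H, W, hidden)\n        h = h.reshape(B, H, W, -1)\n        return self.fuse(torch.cat([v, h], dim=-1))  # (B, H, W, C)\n```\n\nSwapping `nn.LSTM` for `nn.GRU` or plain `nn.RNN` in this sketch is essentially the one-line cell change behind the ablation quoted next. 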
As far as we currently know, the accuracy does not change when LSTM is replaced by GRU; FLOPs and throughput improve slightly in that case (Section 4.3, revised version):\n\n|Model|\\#Params.|FLOPs|Infer Throughput(image/s)|Acc.|\n|---|---|---|---|---|\n|GRU-Sequencer2D|25M|7.5G|402|82.3|\n|Sequencer2D-S|28M|8.4G|347|82.3|\n\nFor further improvement, we need to consider shortening the sequence length of RNNs in combination with local operations such as pooling, or developing lightweight recurrent modules.\n\n### The generalization ability in other tasks\n\nWe thank the reviewer for their suggestion to add other visual tasks, including segmentation and detection experiments. The segmentation experiments have already been conducted in the rebuttal period. We have added the results presented below to Section 4.4 and Appendix C.4 in the revised version. We employed Sequencer as the backbone of SemanticFPN [4] to train and evaluate semantic segmentation. The dataset used is ADE20k [5], with a batch size of 32. AdamW [6] is used, with an initial learning rate of 2e-4, a polynomial decay schedule with a power of 0.9, and 40000 training iterations. These settings follow the MetaFormer settings [1]. The results are shown below:\n\n|Model|#Param.|mIoU|\n|---|---|---|\n|PVT-Small[7]|28.2|39.8|\n|PoolFormer-S24[1]|23.2|40.3|\n|Sequencer2D-S|31.6|46.1|\n|---|---|---|\n|PVT-Medium[7]|48.0|41.6|\n|PoolFormer-S36[1]|34.6|42.0|\n|Sequencer2D-M|42.3|47.3|\n|---|---|---|\n|PVT-Large[7]|65.1|42.1|\n|PoolFormer-M36[1]|59.8|42.4|\n|Sequencer2D-L|58.3|48.6|\n\nThis result indicates that Sequencer's generalization ability for segmentation is comparable to other leading models. Studies of other tasks such as object detection are future research topics; we expect to include the detection experiment's results in the revised version by Aug. 10.\n\n#### Reference\n\n[1] \"Metaformer is actually what you need for vision.\" CVPR 2022.\n\n[2] \"An image is worth 16x16 words: Transformers for image recognition at scale.\" ICLR 2021.\n\n[3] \"Mlp-mixer: An all-mlp architecture for vision.\" NeurIPS 2021.\n\n[4] \"Panoptic Feature Pyramid Networks.\" CVPR 2019.\n\n[5] \"Scene parsing through ade20k dataset.\" CVPR 2017.\n\n[6] \"Decoupled weight decay regularization.\" ICLR 2019.\n\n[7] \"Pyramid vision transformer: A versatile backbone for dense prediction without convolutions.\" ICCV 2021", " ### Reply to Question 3\n\nThank you for your interest in the case of other RNNs. It is one of the questions that we have been wondering about too. We have reported the performances of models in which LSTM is replaced by other RNN cells, including GRU, in Table 11b, P.19, Appendix in the submitted version. We have moved the result about replacing the RNN to the main paper.\n\nWe reiterate the result:\n\n| Model | \\#Params. | FLOPs | Infer Throughput(image/s) | Acc. |\n| --------------- | --------- | ----- | ------------------------- | ---- |\n| RNN-Sequencer2D | 19M | 5.8G | 445 | 80.6 |\n| GRU-Sequencer2D | 25M | 7.5G | 402 | 82.3 |\n| Sequencer2D-S | 28M | 8.4G | 347 | 82.3 |\n\nRNN-Sequencer2D replaces LSTM in Sequencer2D-S with tanh-RNN, and GRU-Sequencer2D replaces LSTM in Sequencer2D-S with GRU. The table suggests that all of these MetaFormer-like architectures, including the one with a plain RNN cell, are meaningful. Also, tanh-RNN performs slightly worse than the others, probably due to its lower ability to model long-range dependence. LSTM does not significantly outperform GRU in accuracy, but it does outperform tanh-RNN. 
That said, tanh-RNN is not entirely inaccurate: for example, its accuracy is still better than that of RegNetY-4GF [1].\n\n#### Reference\n\n[1] \"Designing network design spaces.\" CVPR 2020.", " \nWe appreciate your thought-provoking feedback and positive assessment. We have reflected them in the revised paper. The revised paper has been uploaded to OpenReview.\n\n### Cons. 1\n\n> Sequencer usually needs 2x FLOPs and is 2x~10x lower throughput compared with other methods.\n\nWe listed the throughput value for Sequencer2D-L incorrectly. The revised version corrects that. With the corrected value, Sequencer has 2x~7x lower throughput.\n\n> Although this is not surprising due to the recursion in LSTM, I am still concerned about the practicality of this model with such a high computational cost.\n\nThe poor throughput is less notable at resolutions higher than 224x224 compared to other methods (Figure 3d). For example, semantic segmentation often uses images with resolutions higher than 224x224, such as 512x512. In the revised version, we have added results showing that Sequencer is competitive with PoolFormer on the semantic segmentation task (see Section 4.4 and Appendix C.4).\n\n### Reply to Question 1\n\n> What is the running time of Sequencer and other baselines on ImageNet-1K?\n\nThe other baselines' values are taken from the cited papers, so we do not have ImageNet-1K training-time comparisons at hand. We do, however, measure training throughput for each baseline model. We provide the results:\n\n| Model | Infer Throughput(image/s) | Infer Peak Mem. | Train Throughput(image/s) | Train Peak Mem. |\n| --- | --- | --- | --- | --- |\n| RegNetY-4GF | 823 | 225 | 228 | 1136 |\n| ConvNeXt-T | 1124 | 248 | 337 | 1418 |\n| DeiT-S | 1569 | 180 | 480 | 1195 |\n| Swin-T | 894 | 308 | 268 | 1613 |\n| ViP-S/7 | 702 | 195 | 214 | 1587 |\n| CycleMLP-B2 | 586 | 234 | 158 | 1357 |\n| PoolFormer-S24 | 988 | 183 | 313 | 1461 |\n| Sequencer2D-S (Ours) | 347 | 196 | 110 | 1799 |\n| RegNetY-8GF | 751 | 333 | 211 | 1776 |\n| T2T-ViT$_{t}$-19 | 654 | 1140 | 197 | 3520 |\n| CycleMLP-B3 | 367 | 287 | 100 | 2326 |\n| PoolFormer-S36 | 673 | 220 | 213 | 2187 |\n| GFNet-H-S | 755 | 282 | 227 | 1740 |\n| Sequencer2D-M (Ours) | 270 | 244 | 83 | 2311 |\n| RegNetY-12GF | 695 | 440 | 199 | 2181 |\n| ConvNeXt-S | 717 | 341 | 212 | 2265 |\n| Swin-S | 566 | 390 | 165 | 2635 |\n| Mixer-B/16 | 1011 | 407 | 338 | 1864 |\n| ViP-M/7 | 395 | 396 | 130 | 3095 |\n| CycleMLP-B4 | 259 | 338 | 70 | 3272 |\n| PoolFormer-M36 | 496 | 368 | 171 | 3191 |\n| GFNet-H-B | 482 | 367 | 144 | 2776 |\n| Sequencer2D-L (Ours) | 173 | 322 | 54 | 3516 |\n\n> Are there any potential ways to reduce the computational cost, such as reducing the length of the sequence by downsampling? If so, can the authors report the model performance compared with the baselines under a similar scale of FLOPs and throughput?\n\nPlease see Reply to Question 3. The accuracy of LSTM does not change when it is replaced by GRU; FLOPs and throughput improve slightly in that case (Section 4.3). Combining RNNs with local operations can indeed shorten the sequence length of the input to the RNNs. It could improve throughput, but the demonstration is a future challenge.\n\n### Reply to Question 2\n\nObjects and the visual patterns that comprise them are often distributed continuously in the image. Based on this observation, Sequencer injects a corresponding inductive bias by using vertical and horizontal LSTMs, which tend to guarantee continuous long-term dependencies. 
Such inductive bias can be represented by RNNs, but not by self-attention. Token interactions extend beyond the straight line on which the LSTM acts: the interaction between any two tokens is formed by stacking two sequencing blocks. As for the impact of LSTM memory on the processing of spatial information, it is not straightforward to visualize the long-term dependence between tokens, unlike the case of attention. We are convinced that this is a fascinating direction for future work. Instead, our revised version visualizes each input-output tensor of BiLSTM2D for better understanding. From the hidden state visualization, it can be observed that the tokens processed in the vertical and horizontal directions interact to form two-dimensional spatial patterns. The closer tokens are in position, the stronger their interaction tends to be; the farther tokens are in position, the weaker their interaction tends to be (Figure 4).\n", " Thanks for your positive comments and questions on our paper. We would be happy to resolve your questions.\n\n### Reply to Question 1\n\nWe think our unclear presentation has caused a misunderstanding of our claim. The original Table 1 was misleading, thus we have annexed the accuracy before fine-tuning in the revised version of Table 1. We have also added a supplement to its caption. Sequencer-L, trained from scratch, is compared with ConvNeXt-\"S\", which has comparable parameters. In contrast, fine-tuned Sequencer-L is compared with ConvNeXt-\"B\", which has more parameters than Sequencer-L. We reluctantly report ConvNeXt-\"B\" since [1] does not fine-tune ConvNeXt-\"S\" but ConvNeXt-\"B\". Non-finetuned ConvNeXt-\"B\" achieves 83.8% accuracy as reported in [1], 0.4% more than Sequencer-L. This suggests that the reported result (fine-tuned ConvNeXt-\"B\" being 0.5% more accurate than fine-tuned Sequencer-L) is not unnatural.\n\n### Reply to Question 2\n\nWe assume that, due to imperfections in our explanation, there was a misunderstanding about which models this experiment compares. The only mention of which type of model is being compared has been in Figures 3a and 3b, so we have added it to the text to reduce misunderstandings in the revised version. Since we could not figure out which part is inconsistent, we will explain Figures 3a and 3b for the moment. If you could add any additional information on the content that you feel is inconsistent, we would be glad to correct that part of our paper. Figures 3a and 3b compare the top-1 accuracy of DeiT-S, GFNet-S, CycleMLP-B2, Sequencer-S, and ConvNeXt-T models trained at a resolution of $224^2$, with different input image resolutions of the ImageNet validation set during inference, with no fine-tuning. Figure 3a concisely plots the accuracy. For example, the accuracies of Sequencer-S and ConvNeXt-T are 82.3 and 82.1 in Table 1, respectively, so they are plotted at 82.3 and 82.1 on the line of resolution $224^2$. Figure 3b is relative to the accuracy at resolution $224^2$. Thus, all models are plotted at $0$ on the $224^2$ resolution line. This plot compares how much the inference accuracy drops as the resolution changes, without being distracted by the difference in accuracy at resolution $224^2$.\n\n#### Reference\n\n[1] \"A convnet for the 2020s.\" CVPR 2022", " We thank the reviewers for their insightful comments on our paper. The comments have helped us to improve the paper significantly. 
As for the values of Sequencer2D-L throughput and memory consumption during inference in Table 1, we inadvertently wrote worse values than the actual ones, and we have corrected those values. The revised paper will be uploaded to OpenReview.", " This paper introduces a new model architecture using LSTM for image classification. By adapting a 2-dimensional LSTM (Bi-LSTM for vertical and horizontal directions) into the Transformer-like architecture, the model outperforms ViT-based and SOTA CNN-based architectures with a smaller number of parameters. \n\n ## Strengths\n\n1. The paper is clearly written. \n\n2. The paper proposes a simple yet effective framework using LSTM. The model outperforms transformer and CNN-based models for image classification. This work provides a great alternative to Transformers and CNNs for image classification. \n\n3. The proposed model is especially efficient for higher resolutions. \n\n## Weaknesses\n\n1. Lack of related work\n\n- There are a number of studies using multi-directional LSTM/RNN for vision tasks that are very relevant to this work, e.g., [1-4]. The authors should cite and discuss the similarities and differences. \n\n- ReNet [69] is very relevant to this work. The authors pointed out that the major difference is to use a transformer-like block structure. However, the benefit of this structure and what it provides to the model compared to ReNet or other related works [1-4] are missing. \n\n2. Due to LSTM's sequential nature, LSTM-based models are not easily parallelizable, especially compared to transformer and CNN-based models. I see that throughput is much worse than other models. I assume training time would be especially slow. It is unclear to me how throughput improves with higher resolutions.\n\n[1] \"Multi-dimensional recurrent neural networks.\" ICANN 2007.\n\n[2] \"Pixel recurrent neural networks.\" ICML 2016.\n\n[3] \"Scene labeling with lstm recurrent neural networks.\" CVPR 2015.\n\n[4] \"Semantic Object Parsing with Local-Global Long Short-Term Memory.\" CVPR 2016\n\n\n\n 1. The authors mentioned that 'The higher the input resolution, the more memory-efficient and throughput-economical are on Sequencers' (Line 251-252 and Figure 4). I believe LSTM, in particular 2D LSTM, is not memory efficient and cannot be throughput-economical as it requires computing and saving the activations for all directions. Especially, it should be worse for higher resolutions. Could the authors explain how Sequencer becomes memory-efficient and throughput-economical? \n\n2. It would be great to discuss and address the weaknesses mentioned above during the rebuttal. The authors explained the limitations and potential negative societal impact of their work in the paper. ", " This paper proposes an architecture for image classification named Sequencer, which utilizes the BiLSTM module to replace the self-attention module in the vision transformer model. The BiLSTM module is further improved by processing the vertical and horizontal axes in parallel from top/bottom and left/right directions. Experiments on image classification tasks demonstrate that the proposed method can achieve similar performance to existing classification models with a similar number of parameters. Strengths:\n\n+ This paper is well-written. The idea is easy to understand. 
\n\n+ The proposed method is the first work to empirically show the effectiveness of LSTM modules in large-scale image classification tasks, which would have a broad impact in investigating the potential of LSTM-like architectures in the computer vision field.\n\n+ Ablations and visualization results are rich, which demonstrates the validity of the proposed method in terms of the importance of each component.\n\nWeaknesses:\n\n- The novelty of the proposed method is limited. The proposed Sequencer replaces the self-attention module in ViT with the existing BiLSTM module. Besides, [r1] shows that the self-attention module in ViT can be replaced with a simple spatial pooling operator, which suggests that such replacement is incremental. \n\n- Although the proposed model can achieve similar performance to existing SOTA architectures, it requires much higher FLOPs and has lower throughput, as shown in Table 1. \n\n- Evaluation is only conducted on image classification. It would be better to evaluate the proposed architecture on more vision tasks such as detection and segmentation to show its generalization ability.\n\n[r1] MetaFormer Is Actually What You Need for Vision. CVPR 2022.\n\n Please refer to the weaknesses part. The limitations are mainly about the limited novelty of the proposed method and the poor experimental results (much higher FLOPs, lack of experiments on other vision tasks).", " This paper proposes a new Sequencer architecture that replaces self-attention in ViT with BiLSTM(2D) for the image classification task. On the ImageNet-1K dataset, Sequencer achieves better performance than other current similar-scale models. The authors also show Sequencer is more robust to resolution variation and suffers from less severe accuracy degradation when the input resolution is increased. Pros:\n1. This paper makes an attempt to use LSTM, an unexplored inductive bias, to replace self-attention in ViT for image classification and shows its effectiveness. This line of research helps the community understand what is indeed essential for vision tasks.\n\n2. Strong results and extensive experiments. It compares with a series of related works based on various inductive biases and shows that it has superior performance and transferability under a similar scale of parameters. Besides, ablation studies are conducted.\n\nCons:\n1. The computational cost is too high. As shown in Table 1, under a similar scale of model parameters, Sequencer usually needs 2x FLOPs and has 2x~10x lower throughput compared with other methods. Although this is not surprising due to the recursion in LSTM, I am still concerned about the practicality of this model with such a high computational cost.\n\n2. Lack of reasoning on how using LSTM captures the spatial information and why it is so effective. In BiLSTM2D, LSTMs are used to capture dependencies from horizontal and vertical patches respectively. From my point of view, this design should not be as effective as the global dependencies in self-attention, since you may need to involve patches that are not necessarily in the same horizontal or vertical line to understand the objects in the images. Besides, I am also curious about what role the memory in LSTM plays in processing spatial information. The above analysis is critical for readers to understand the model but is missing in the paper.\n 1. What is the running time of Sequencer and other baselines on ImageNet-1K? Are there any potential ways to reduce the computational cost, such as reducing the length of the sequence by downsampling? 
If so, can the authors report the model performance compared with the baselines under a similar scale of FLOPs and throughput?\n\n2. See (2) of Cons. Can the authors provide more discussions and potential visualization on why the Sequencer block is effective?\n\n3. What if LSTM is replaced with other RNN cells, such as GRU? Will the model still work well? If not, which part is essential to the model performance? The authors have discussed the limitations in the conclusion. Actually, it would be better if the authors could test the model's effectiveness on tasks that require sequence modeling, such as video action recognition, in the main paper.", " This paper proposed Sequencer by using deep LSTMs instead of self-attention for image classification. Many related works were compared in experiments to validate the performance of Sequencer. Strength: This paper proposed Sequencer, which uses LSTM instead of self-attention for sequence modeling. This paper also proposed a two-dimensional version of the Sequencer module, where an LSTM is decomposed into vertical and horizontal LSTMs to enhance performance. Experiments showed the advantages of Sequencers compared to the self-attention mechanism in transferability and resolution adaptability. The work is clearly stated, and the manuscript is well written.\nWeakness: Some experimental results are not clearly explained.\n 1.\tFrom Table 1, compared with ConvNeXt, Sequencer achieved worse performance in FLOPs and throughput with a 0.2~0.3 top-1 accuracy increase. Meanwhile, the fine-tuned Sequencer is worse than the fine-tuned ConvNeXt.\n2.\tIn Figure 3, the performance between Sequencer and ConvNeXt seems to conflict with Table 1? Some experimental results are not clearly explained." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "v02LgXSwmKI", "vcytZlC49Z1", "Jmr6X-hmg1k", "3Y_jBaO7gsr", "EsmSLbo41JW", "EsmSLbo41JW", "BCUgzjlVL7", "h_kpfnBLW_X", "h_kpfnBLW_X", "c6ypb13dnO4", "nips_2022_wlrYnGZ37Wv", "nips_2022_wlrYnGZ37Wv", "nips_2022_wlrYnGZ37Wv", "nips_2022_wlrYnGZ37Wv", "nips_2022_wlrYnGZ37Wv" ]
nips_2022_osPA8Bs4MJB
Delving into Sequential Patches for Deepfake Detection
Recent advances in face forgery techniques produce nearly visually untraceable deepfake videos, which could be leveraged with malicious intentions. As a result, researchers have been devoted to deepfake detection. Previous studies have identified the importance of local low-level cues and temporal information in the pursuit to generalize well across deepfake methods; however, they still suffer from robustness problems against post-processing. In this work, we propose the Local- & Temporal-aware Transformer-based Deepfake Detection (LTTD) framework, which adopts a local-to-global learning protocol with a particular focus on the valuable temporal information within local sequences. Specifically, we propose a Local Sequence Transformer (LST), which models the temporal consistency on sequences of restricted spatial regions, where low-level information is hierarchically enhanced with shallow layers of learned 3D filters. Based on the local temporal embeddings, we then achieve the final classification in a global contrastive way. Extensive experiments on popular datasets validate that our approach effectively spots local forgery cues and achieves state-of-the-art performance.
Accept
All reviewers are positive about this paper. Generally speaking, the proposed method is novel and is also easy to follow thanks to the clear writing. Also, the experiments are comprehensive. In the rebuttal, the authors also provide some qualitative results that clearly respond to the concerns of reviewers. So, I suggest accepting this paper.
train
[ "NNvno0qQ25B", "j3TOg77SWjl", "NVe6S2N1xI", "BDEdZIairjt", "4OW5cyxmLwU", "MTiipQMwz4", "Mtt0wLxbTsn", "V2VKodwIWTM", "0QRUdzP29dP", "p3teEjcTMZ9", "zbbz1ew0FuN", "5Oo04rruNC8-", "HHN9S5nxl1ON", "G9wLvB4cNBW", "_BqXop1Au-y", "jo2urd0juA_", "JUqIE2pcn8Q", "KgHskORqAw8", "ul9Sk2FIvVa", "zAVQlCO2DrC" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi reviewer,\n\nThe discussion period is closing soon. Please take a look at our responses to your pre-rebuttal concerns. 1) Regarding novelty, we clarify the differences between this paper and related arts, where the key dilemma between **robustness** and **generalization** is resolved by the introduced low-level & temporal learning. 2) The asked analyses are also provided in the responses, revised paper, or the revised supp.\n\nWe hope that our responses so far have cleared up the confusion for you to reevaluate our paper. We are willing to have further discussions until we can still reply.\n\nbest ", " The authors give more analysis about frame selection strategy. Also, the authors give the comparison between the proposed method and SBI. In addition, the equations are well written in the rebuttal version, my main concerns have been addressed.", " After reading authors' feedback and other comments, I decide to keep my rating - borderline accept.\n\nSome of my concerns have been resolved that the authors provide more details, specially on the evaluation. Besides, the authors provide analysis on the model complex that there is no significant computation cost even though 3D convolutions are applied.\n\nOverall, this paper presents a new model for deepfake video detection and shows the state of the art results.", " Thanks for the response. I appreciate the authors for the effort and I believe that the results and discussion above can improve the quality of the submission. \n\nMy major concerns are mostly addressed (on ‘mask’ and ‘CPI’), but as the other reviewers point that, \n1. Novelty: Not so significant.\n2. Robustness (my Q10): The performance of the model, when facing wild videos, in particular mixed degradation ones, which are closer to the real-world deepfake detection scene, is not sure. \n\n\nTo sum up, I keep my original score.", " Dear reviewers,\n\nThanks again for your efforts and valuable comments. We have provided corresponding responses and results, which we hope have covered your concerns. Since the discussion period is closing, we would like to know if you have any lingering concerns? If there is more we could do to help you make your final decision, please let us know.\n\nIf you are satisfied with our responses, please consider raising your scores and let our ideas contribute to the development of deepfake detection.\n\nbest", " We thank all reviewers for the detailed and constructive comments. Here we first respond to some common questions.\n\nNote: all the referred equations, lines, and citations in the responses correspond to the revised paper. Although we retain related descriptions of the reviewers in the quoted *Questions*.\n\n## Notations and Equations\n\nAs advised by the reviewers, we revise some of the notations for better demonstrations. The paper is modified from four aspects:\n\n1. Simplify the subscripts, which are dispensable, e.g., $p$ in $\\mathrm{x}_p^{t,i}$ is removed as $\\mathrm{x}^{t,i}$. \n2. Change the symbol to avoid confusing subscripts, e.g., we change $\\mathrm{x}_{pt}^{t,i}$ to $y^{t,i}$ to denote the patch features after low-level & temporal enhancement.\n3. Unify the two sets of equations (Eq (1-9) in the revision).\n4. Fix bugs, e.g., Eq(12,13) in the revision.\n\nThese modifications are marked in **blue**, please find the details in the revised paper. \n\nMeanwhile, we notice what the reviewer said about the complex superscripts. 
In order to accurately describe both the temporal and spatial dimensions, we have to use two superscripts, e.g., $\mathrm{x}^{t,i}$ denotes the image patch at the $i$-th spatial region of the $t$-th frame. But this is not really difficult to understand, since we use $t$ for a timestamp and $i$ for a spatial location consistently throughout the paper. We also include a short description in the revised version of Sec 3.1 Problem statement.\n\n## Clip Length & Sampling Space Ablation\n\nAs suggested by the reviewers, we further conduct ablations on the hyper-parameters regarding the model input. In the pre-rebuttal version, the two hyper-parameters (clip length, frame sampling space) of the input size were set empirically. On the one hand, too short a clip does not ensure enough temporal information, e.g., 4 consecutive frames are almost identical to each other. On the other hand, too long a clip is not necessarily better: 1) A longer clip costs more memory, and a smaller batch size also slows down the training convergence, or worse. 2) We found some of the videos in the test datasets contained scene switching. Considering the FPS of 24, a video clip in our experiments lasts less than 1 second, thus greatly avoiding the scene-switching problem. Two hyper-parameters are further investigated: clip length and frame sampling space.\n\n| Clip Length | Sampling Space | FF++ | DFDC | DeepFo | Avg |\n| ----------- | -------------- | ----- | ----- | ------ | ----- |\n| 8 | 1 | 99.26 | 77.25 | 96.82 | 91.11 |\n| 16 | 1 | 99.52 | 80.39 | 98.50 | 92.80 |\n| 32 | 1 | 99.47 | 80.92 | 98.41 | 92.93 |\n| 64 | 1 | 99.38 | 79.23 | 98.49 | 92.37 |\n| 16 | 2 | 99.15 | 76.17 | 97.32 | 90.88 |\n| 16 | 4 | 98.51 | 73.02 | 91.33 | 87.62 |\n\nFrom the results, the first 4 rows show that clip length has little effect on the results. We believe this is closely related to the idea of low-level temporal learning, which does not require the video clip to last long enough (e.g., 3 seconds) but only needs adequate frames to extract the low-level temporal patterns. Sparse sampling is another practice to aggregate more content in a single clip. When we expand the frame interval, the performance degrades considerably. This phenomenon suggests that sparser sampling is detrimental to the learning of low-level temporal patterns even when more motion content is included. These findings demonstrate that LTTD is distinctly different from related temporal-based models, considering that the low-level temporal learning is specially designed for deepfake detection.\n\n## Complexity Analysis\n\n| Method | LipForensics | FTCN | LTTD w/o Low-enhancement | LTTD |\n| ------------------- | ------------ | ----- | ------------------------ | ----- |\n| Num. of Params. (M) | 36.00 | 26.47 | 20.57 | 22.66 |\n| GFLOPs | - | 8.25 | 13.25 | 14.51 |\n\nWe show the model complexities of two SOTA models and two of our models in the table (the num. of params. of LipForensics is cited from [57]). \n\n1. Comparing LTTD w/o Low-enhancement and LTTD, the increased parameters and FLOPs correspond primarily to the linear layers of the introduced Low-level Enhancement Transformer stages.\n2. 
In short, our model reports better performance with a smaller number of parameters, although the inference may be slower.", " We sincerely thank the reviewer for the constructive comments. We will respond to each detailed concern as follows:\n\n> **Q1: \"The average improvement over the competing method [62] `(89.6% vs 91.8%) `is appreciated but looks a bit marginal without comparing the timings of the approach. Also, it is unclear if the improvement `+2.2` video-level AUC is statistically significant ... This small improvement questions the overcomplexity of the method ...\"**\n\n**A1**: \n\n**1)** Compared with FTCN-TT, we have tried to re-implement it ourselves (since no training code is open sourced) but found it is hard to reproduce performances comparable to the original paper. Thus, we directly cited the numbers.\n\n**2)** In terms of the performance of our method. We retrain the model 5 times with different random seeds and get the following results:\n\n| Method | CelebDF | DFDC | FaceSh | DeepFo |\n| ------ | -------------- | -------------- | -------------- | -------------- |\n| FTCN | 86.9 | 74.0 | 98.8 | 98.8 |\n| LTTD | 89.25$\\pm$0.13 | 80.39$\\pm$0.46 | 99.51$\\pm$0.07 | 98.50$\\pm$0.21 |\n\n**3)** For model complexity, please refer to the \"General Responses\".\n\n> **Q2: \"Though the method is well-motivated, it feels a bit... changing the idea of [62]... This part where the method applies 3D convolution on patches is less clear ... more explanation on this 3D convolution is needed for the way it handles the spatial component over patches.\"**\n\n**A2**: \n\nWe would like to first explain the 3D convolutions in our paper and then discuss the related method FTCN [57].\n\n**1)** In FTCN, deep layers of 3D convs operate on the whole frames and thus inevitably focuses on semantic motions.\n\nWhile in our method, the introduced Low-level Enhancement with 3D convolution is motivated by the previous low-level feature learning methods which use hand-crafted filters [40,19,26,35]. Our intuition here is to leverage shallow learnable 3D filters for low-level information processing on local patches. Specifically, one 3d conv layer operates on each stage of the Low-level Enhancement Transformer like a spatio-temporal filter. Moreover, it would never model the cross-patch relations. A detailed torch-like description is included in the revised supplementary material.\n\n**2)** The related work, FTCN, is mainly based on *deep 3D CNNs* to learn pixel-level temporal discrepancy. The self-attention layers in their model are only responsible for multi-frame semantic information aggregation. \n\nWe adopt Transformers to model the sequential patches motivated by two facts: a) various practices in computer vision have demonstrated that self-attention can be used directly to model vision content by regarding visual patches as tokens; b) Transformers are not restricted by receptive fields. We thus leverage this property for both long- & short-span temporal learning. While in FTCN and other 3D conv-based works, the temporal receptive field is restricted by the kernel size of 3D convolutions.\n\nOverall, FTCN and our work are related but fundamentally different.\n\n> **Q3: \"... It is not clear overall how the face cropping of the system works... It is not clear also what is randomly determined. 
If a single crop is used and the face moves slightly, this is even worse than the random jittering that you get by frame-by-frame processing.\"**\n\n**A3**: The face cropping in our framework proceeds as follows:\n\n1. Given a full-frame video;\n\n2. We randomly determine a valid clip range, e.g., from frame 10 to frame 25;\n\n3. We sample the determined clip with 16 successive frames;\n\n4. We detect all the bounding boxes of the faces in the 16 frames, resulting in 16 boxes (the case of multiple faces is omitted here);\n\n5. `the same bounding box` is generated by picking the largest or smallest indices on each coordinate, i.e., a single union box covering the face in every frame:\n\n    ```python\n    # boxes.shape = [16, 4], one box per frame in [xmin, ymin, xmax, ymax] format\n    # take the per-coordinate min of the top-left corners and max of the bottom-right\n    # corners, yielding one box that encloses the face across all 16 frames\n    box = boxes.min(axis=0)[:2].tolist() + boxes.max(axis=0)[2:].tolist()\n    ```\n\nWe also include the details in the revised supplementary material.\n\nFaces in the videos generally move slowly. Considering the FPS of 24, a video clip in our experiments lasts less than 1 second. Such practice greatly avoids large movements in a single clip. Frame-by-frame processing would corrupt the temporal relations of the low-level features.", " > **Q4: \"More on this: In section 3.2 it is not clear if the Conv3D still slides over the patches or if the spatial kernel size of the convolution is big as the patch itself...\"**\n\n**A4**: The 3D conv operators in our work do not slide over the patches. They only focus on the specific spatial region. The kernel size is set to 3x3x3. The patches at different spatial regions do not interact with each other during Local Sequence Transformer (LST) forwarding, and only patches at the same spatial location are encoded by all the operators. We have included a detailed torch-like description of the 3D convs in the revised supplementary material. We have simplified the symbols of the equations for a better demonstration. Please see the revised paper for details.\n\n> **Q5: \"It is ok to be formal with the notation yet very often the article abuses notation and this makes the article not easily readable. There are multiple of these remarks e.g. usage of subscript that is not necessary xpi. Why p is used in L.131, L144 I believe the patch is indexed by i. Same remarks for Eq. (5-8).\"**\n\n**A5**: Thank you for the valuable suggestions; we find the simplified equations easier to understand. We have revised the related parts and marked them in blue in the revision. The subscript $p$ was used to denote the original image patch. We also find it dispensable and remove it in the revision.\n\n> **Q6: \"... the method misses an instructive study of what happens when the temporal dimension T varies from different values than 16; also, given a fixed T, the paper misses a study to understand whatever it is better to have a dense sampling of the frames (e.g. 1:1 wrt to the original video) ...\"**\n\n**A6**: Thank you for the valuable suggestion. We include related discussions in the revision. Please find the details in the \"General Responses\" and the revised paper.\n\n> **Q7: \"I found some typos and sometimes the text use adjective a bit off: e.g. `L17 ``enormous fake videos`? ... `L81` data agumentation. `L199` vise versa (vice versa). `L308` a unified manifolds.\"**\n\n**A7**: Thank you for pointing out these typos and the suggestion! We have revised them accordingly.\n\n> **Q8: \"No detail of training and testing time.\"**\n\n**A8**: Our models are trained on one A100 GPU for 12 hours. 
The inference speed for processed videos is about 98 FPS.\n\n> **Q9: \"not clear: L198: the simplest thought is that the real region should be similar to the real one to some extent\".... Which min and max values...? ...Eq. (12,13) but seems over complicated with the notation (mα,mo,mβ,). ... the motivation for the loss is that real features should be basically similar to each other but it is not clear how this is attained given the current loss formulation ...?\"**\n\n**A9**: \n\n**1)** Our intention is to demonstrate that the low-level temporal features of a real region should be similar to those of other real regions, since the sequence of a real region will certainly depict a \"natural\" variation, while the sequence of a fake region formed by re-assembling will be different. We include a related explanation in the revised paper.\n\n**2)** The min and max values of $\\mathrm{sim}_{gt}$ are -1 and 1, respectively.\n\n**3)** We have revised these equations and fixed some bugs. Here we briefly explain how \"real features should be basically similar to each other\" is attained given the current loss formulation:\n a) The modification mask is a gray image generated by simply subtracting the fake frame from the corresponding real one. The part that is faked will have a larger difference and thus be closer to 255. In contrast, the real part will be closer to 0. Then, we normalize the mask to the value range of (0, 1).\n b) Eq (12) first average-pools the mask sequence along the temporal dimension, then interpolates the spatial size to be consistent with the feature map. Finally, we use $\\mathrm{m} \\in \\mathbb{R}^{N}$ to denote the flattened map. Thus, each value of $\\mathrm{m}$ corresponds to one patch sequence of the input clip. If a value of $\\mathrm{m}$ is closer to 1, it means that there is more forged content at the location of this patch, and vice versa.\n c) Therefore, by Eq (13), two patches with similar values in the flattened map $\\mathrm{m}$ will get a greater ground truth similarity.
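To make steps b) and c) concrete, below is a minimal sketch of the mask-to-similarity pipeline (for illustration only; the function name is ours, and the subtraction-based similarity is one simple reading consistent with the stated [-1, 1] range rather than a verbatim copy of Eq (13)):\n\n```python\nimport torch\nimport torch.nn.functional as F\n\n\ndef ground_truth_similarity(masks, feat_hw):\n    # masks: (T, H, W) modification masks normalized to (0, 1); feat_hw: feature map size\n    m = masks.mean(dim=0, keepdim=True).unsqueeze(0)  # average-pool over time -> (1, 1, H, W)\n    m = F.interpolate(m, size=feat_hw, mode='bilinear', align_corners=False)\n    m = m.flatten()  # m in R^N, one value per patch sequence\n    # patches with similar mask values get similarity near 1, dissimilar ones near -1\n    return 1.0 - 2.0 * (m.unsqueeze(0) - m.unsqueeze(1)).abs()  # (N, N), in [-1, 1]\n```\n\nThe cosine similarities between the learned patch embeddings (Eq (11)) are then regressed toward this ground truth matrix, which is how the constraint is imposed.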
\n\n> **Q12: \"Justification of the rating: My opinionf the paper is that it is a good system paper for deepfake detection...\"**\n\n**A12**: We are grateful that our efforts and ideas are recognized by the reviewer. All the constructive comments from the reviewers lead to a better revised paper! We believe that our encouraging finding of low-level & temporal learning in this paper will be a non-trivial contribution to the deepfake detection community. The novel idea of framework designs and the new state of the art can also promote its development. We hope you can change your mind and let our paper contribute to the community!", " We sincerely thank the reviewer for the constructive comments. We will respond to each detailed concern as follows:\n\n> **Q1: \"In the discussion part for Section 3.2, the authors claim that the proposed ... can explicitly avoid semantic modeling of features like facial structure and always focus on low-level temporal learning. But no explanations are given to prove this statement.\"**\n\n**A1**: \n\n**1)** \" explicitly avoid semantic modeling of features like facial structure\". This statement is intuitively made. Since we split frames into independent squential patches, semantic information like facial structure is explicitly corrupted during modeling.\n\n**2)** \"always focus on low-level temporal learning\". This statement is related to the first one. We do not model the inter-patch relation (like original ViT) in the Local Sequence Transformer (LST), but encode only patch sequences separately. \n\n a). Low-level: Considering that the facial semantics are explicitly excluded, our LST intuitively models low-level information. Moreover, we adopt shallow learnable 3D filters for further low-level information extraction instead of using deep convolutional layers which inevitably focus on high-level semantics.\n\n b). Temporal: Moreover, temporal learning is achieved by self-attention operations working on the input of patch sequences.\n\n> **Q2: \"Comparing Eq.4 and Eq.8, the formats of these two equations are not unified. ... it is strange to use t (lower letter) to represent the features after the pooling operation while the upper letter T represents the time sequence...\"**\n\n**A2**: Thanks for pointing out the problems. We have revised these parts for better demonstration. Please find details in \"General Responses\" and the revised paper (marked in blue).\n\n**1)** The formats of Eq.4 and Eq.8 were indeed described differently. We take the advice and revise the two equations to be unified for easier understanding.\n\n**2)**. In Eq (2), we used the subscript $pt$ of $\\mathrm{x}_{pt}^{1,i}$ to distinguish from image patch $\\mathrm{x}_{p}^{1,i}$. In the revision, we abandon the subscript and use a different symbol.\n\n**3)** In addition, we use the upper letter $T$ in superscript to describe dimension size, e.g., $m_o\\in \\mathbb{R}^{T\\times H\\times W}$, thus we also keep the same meaning as a constant of $T$ in all equations. \n\n**4)** \" it is strange to use t (lower letter) to represent the features after the pooling operation\". We have changed this notation in the revision.\n\n> **Q3: \"In equation 12, in my understanding, the m should equal Flatten(mβ)\"**\n\n**A3**: Thank the reviewer for pointing it out. We have fixed it in the revision.\n\n> **Q4: \"The authors mention that the temporal dimension T is set to 16. However, no selection details are given ... 
The authors should present more analysis on how the frame number influences ...\"**\n\n**A4**: We empirically selected the clip length. On the one hand, too short a clip does not ensure enough temporal information, e.g., 4 consecutive frames are almost identical to each other, thus leading to sub-optimal temporal learning. On the other hand, too long a clip is not necessarily better: 1) A longer clip costs more memory, and a smaller batch size also slows down the training convergence, or worse. 2) We found some of the videos in the test datasets contained scene switching. Considering the FPS of 24, a video clip in our experiments lasts less than 1 second, thus greatly avoiding the scene-switching problem. We include the related ablation and discussion in the \"General Responses\" and the revised paper.\n\n> **Q5: \"The following method is not compared in generalizability evaluation: Kaede Shiohara, Toshihiko Yamasaki: Detecting Deepfakes with Self-Blended Images. CVPR 2022.\"**\n\n**A5**: The mentioned CVPR22 paper [a] is indeed an encouraging concurrent work. However, it was not available on the CVPR22 website when we submitted our paper and was therefore missed. We have made comparisons with it by carefully running their official code.\n\n| Method | CelebDF | DFDC | FaceSh | DeepFo | Average |\n| ------- | ------- | ---- | ------ | ------ | ------- |\n| LTTD | 89.3 | 80.4 | 99.5 | 98.5 | 91.9 |\n| SBI+EB4 | 89.9 | 74.9 | 97.4 | 77.7 | 85.0 |\n\nThe generalization shown by both methods is excellent, and our method outperforms SBI on DFDC, FaceSh, and DeepFo with a clear margin. In addition, the contributions of the two papers are very different: SBI achieves generalization by creating training data (from the perspective of model training), while our approach focuses on learning more generalizable and robust features (from the perspective of feature learning). How to integrate the merits of both will be a direction worthy of further exploration in the future.\n\n[a] Detecting Deepfakes with Self-Blended Images. CVPR 2022", " We sincerely thank the reviewer for the constructive comments. We will respond to each detailed concern as follows:\n\n> **Q1: \"The paper claims that the method crops the face regions using the same bounding box randomly determine the clip range in-the-fly, which may affect the practicality... If the clip contains scene switching, it may cause face misalignment and affect the detection results.\"**\n\n**A1**: \n\n**1)** Normally, scene or shot switching can be detected by off-the-shelf scene/shot detection tools. Just as all faces are detected by face detectors in this task, we can also use scene/shot detection tools in the data preprocessing. \n\n**2)** We empirically set the clip length to 16 in this paper. Considering the FPS of 24, a video clip in our experiments lasts less than 1 second. Such practice also ensures that most modeled clips are temporally stable.\n\n> **Q2: \"Some details of selecting video clips are missing. How many clips are extracted from one video? ... Why the temporal dimension is 16? It is necessary to have an ablation study ...\"**\n\n**A2**: The details are provided in the revision. Different from current SOTAs [56,57] using all the frames, for storage reasons, we extract only the first 128 frames of all videos in our experiments. Thus, the final prediction is an average over 8 clips. The clip length is empirically set to 16; please find the ablation in the \"General Responses\" and the revised paper.\n\n> **Q3: \"... 
It seems that the framework needs complex computing. What about the efficiency of inference?\"**\n\n**A3**: Please find the details about model complexity in the \"General Responses\". The proposed LTTD framework has a comparable model size and computing complexity to the SOTA methods. The main computation overhead of LTTD arises from the dense linear connections of self-attention; in contrast, the shallow 3D convolutions contribute little.\n\n> **Q4: \"It is not clear that how to compute the ground truth similarity matrix and how to generate the mask sequence (L201-205) ...\"**\n\n**A4**:\n\n**1)** \"How to generate the mask sequence\". We denote the original mask sequence as $m_o\\in \\mathbb{R}^{T\\times H\\times W}$, where $T$ is the temporal dimension, i.e., there are $T$ masks, which are generated by simply subtracting the fake frame from the corresponding real one. We will add a short description with the \"modification mask\" we mentioned at L207 in the revision.\n\n**2)** The $sim_{gt}\\in \\mathbb{R}^{N\\times N}$ (ground truth similarity matrix) is calculated from the mask sequence of the corresponding video clip (Eq (12,13), where we fixed the bugs). Then, the $sim_{gt}\\in \\mathbb{R}^{N\\times N}$ is calculated by subtraction (Eq (13)), which measures the similarity of corresponding patches at different spatial regions. The value range of $sim_{gt}\\in \\mathbb{R}^{N\\times N}$ is normalized to [-1,1], the same as the cosine similarity range (Eq (11)).\n", " We sincerely thank the reviewer for the constructive comments. We will respond to each detailed concern as follows:\n\n> **Q1: \"Novelty. Many studies prove that the low-level patterns ... What is the motivation of this search? Does ... address specific problems that previous methods have neglected? \"**\n\n**A1**: We will restate our novelty and highlight our differences with previous studies here.\n\n**1)** Previously, low-level patterns were studied [19,26,35,40] using hand-crafted low-level filters, which will be less effective on degraded data in the presence of commonly applied post-processing procedures like visual compression [20,34,57]. This suggests their **lack of *robustness*** (L33).\n\nIn this paper, low-level patterns are extracted from a spatio-temporal view with fully learnable 3D filters. Moreover, our operations on local patches naturally avoid high-level semantic modeling. These designs ***better adapt to the complex distributions of untapped deepfakes, making a more robust model*** (L43).\n\n**2)** As for temporal inconsistency, many previous works [5,30,20,50,44] seek to identify certain abnormal behaviors (e.g., abnormal eye blinking, phoneme-viseme mismatches, aberrant landmark fluctuation) (L26, L102). However, the remarkable visual forgery cues are expected to be gradually eliminated during the continuous arms race between forgers and detectors. Considering that substantial temporal differences arise locally during the ***independent local modifications*** of forged frames, we propose to achieve deepfake detection by learning the local & low-level temporal inconsistency within a *restricted spatial space (16x16 patch)*. The proposed framework shows SOTA performance and good interpretability.\n\n**3)** A close discussion is presented in the Introduction section. In short, our way of modeling low-level and temporal information can address the dilemma between *robustness* and *generalization*. Our framework demonstrates promising performances and interpretable results (Fig. 3). \n\n> **Q2: \"The novel Low-level Enhancement ... can be regarded as an 3D extension of textural enhancement [60]. The authors should validate whether the 3D convolutions have the major contribution ... Can it be implemented with 2D convolutions ...\"**\n\n**A2**: \n\n**1)** The introduced Low-level Enhancement was motivated by the previous low-level feature learning methods which use hand-crafted filters [40,19,26,35]. Our intuition here is to leverage shallow learnable filters for low-level information processing.\n\n**2)** On the other hand, different from [55], which uses stacks of deep convolutional layers, we adopt only 1 layer of convolutional operations in each Low-level Enhancement Transformer stage. Thus only a few additional parameters are involved. Moreover, the performance gain is significant, as shown in Table 3, especially on the challenging DFDC dataset.\n\n**3)** The reason we use 3D filters instead of 2D is mainly related to the requirement of sequential modeling. Voxel-level alignment should be considered with cubic kernels. Here we replace the 3D filters with the 2D ones and show the results in the table below. \n\n| Kernel type$\\downarrow$ Dataset $\\rightarrow$ | FF++ | CelebDF | DFDC | FaceSh | DeepFo |\n| --------------------------------------------- | ----- | ------- | ----- | ------ | ------ |\n| 2D | 99.32 | 84.90 | 79.54 | 98.49 | 97.20 |\n| 3D | 99.52 | 89.25 | 80.39 | 99.51 | 98.50 |\n\nIt can be seen that 2D filters would lead to an inferior performance.\n\n> **Q3: \"The LTTD focuses on the low-level temporal patterns of the restricted spatial region. What is the restricted spatial region? ...\"**\n\n**A3**: As we split a video into sequential patches, \"the restricted spatial region\" refers to the area of space enclosed by the patch boundary. In this paper, we set the patch size to 16x16, which considerably corrupts the semantic information and thus is suitable for low-level pattern learning.", "> **Q4: \"As shown in Table 3, the ViT [18] has inferior performance to the CNN baseline Xception. What is the reason for this phenomenon? How do the ViT-based methods FTCN [62] and LTTD improve the performance of ViTs on Deepfake detection?\"**\n\n**A4**: \n\n**1)** \"Why does Xception outperform ViT?\". More precisely, Xception only demonstrates better performance on FF++ (in-dataset, AUC% 99.38 vs 97.92) and FaceSh (cross-dataset, AUC% 78.6 vs 65.56), while ViT shows better performance on the more challenging DFDC (cross-dataset, AUC% 72.89 vs 67.36). On the one hand, Xception, with more abundant inductive bias, tends to better learn the specific forgery patterns or identity features in the train set, thus achieving better performance on FF++ and FaceSh (FaceSh shares the same identities with FF++, i.e., the same source videos are adopted to generate deepfakes). On the other hand, ViT, with less inductive bias, demonstrates better generalization, thus having advantages in the DFDC evaluation. A similar discussion was made in Sec 4.5, where we drew the conclusion with a more intuitive visualization (Fig 3).\n\n**2)** \"Why the improvements of FTCN and LTTD compared with ViT?\". First, although FTCN also employs a self-attention module, there are fundamental differences compared with LTTD regarding both motivation and model design, in which the deep point-wise 3D convolution operations play a major role in FTCN. Moreover, we think the key to the significant generalization improvements of LTTD compared with ViT is the idea of `low-level & temporal feature learning` and the specially devised `learning-within-patch` model framework. For the idea of `low-level & temporal feature learning`, we have made a related response in the earlier comments (Novelty comment); for the effects of the `learning-within-patch` framework, it can be learned from the ablation (Table 3), where the model w/o conv enhancement (LTTD w/o LST) already outperforms the two baselines. Moreover, Fig. 3 shows that our LTTD learns completely different features.\n\n> **Q5: \"Complexity. The comparisons on model complexity and GFLOPs between LipForensics, FTCN, and LTTD are desirable.\"**\n\n**A5**: Please refer to the \"General Responses\".\n\n> **Q6: \"Localization. Can the LTTD localize the forged regions? ...\"**\n\n**A6**: Thanks for the advice. Our method can indeed localize the forged regions. We include a short discussion with visualizations in the revised supplementary material.\n\n**Follow-up**: We hope that our responses so far have cleared up the confusion for the reviewer to reevaluate our paper. We are willing to have further discussions if there is anything we could clarify. ", " We sincerely thank the reviewer for the constructive comments. We will respond to each detailed concern as follows:\n\n> **Q1: \"In Sec 3.2, the authors use ‘shallow 3D convolution’ in the LST module for ‘align’... In my opinion, it is similar to face alignment, ... why not just do it in the pre-processing stage.\"**\n\n**A1**: We intentionally did not align faces because alignment errors are inevitable during per-frame processing. As a result, per-frame face alignment will certainly corrupt the natural low-level temporal consistency/inconsistency of both *real* and *fake* videos. \n\n> **Q2: \"In Sec 3.3, sim_gt is calculated by interpolation operation, how about max or average pooling operation?\"**\n\n**A2**: Interpolation is employed here only for narrowing the spatial dimensions of $\\mathrm{sim}_{gt}$. We have tried pooling and found virtually no difference.\n\n>**Q3: \"In Sec 4, the experiment part is lacking intra-evaluation, both training and testing on FF++.\"**\n\n**A3**: In recent works, in-dataset results are almost saturated (overfitting to specific kinds of artifacts may lead to better in-dataset performance, but worse generalization) and are not listed for comparison, as a common practice. Our method achieves 99.52 AUC% on FF++, which also demonstrates SOTA in-dataset performance. \n\n> **Q4: \"In Sec 4.1, how many frames or clips are sampled from videos in each deepfake dataset.\"**\n\n**A4**: For storage reasons, we extract only the first 128 frames of all videos in our experiments. Therefore, the final prediction is averaged from 8 clips. We have added these details in the revision.\n\n> **Q5: \"In Sec 4.3, the result analysis about robustness evaluation seems not enough.\"**\n\n**A5**: Regarding robustness, we focus our analysis on our comparisons with Face X-ray and LipForensics, as both these studies are closely related to our discussions in the Introduction (about the dilemma of simultaneously achieving generalization and robustness). 
\n\n> **Q2: \"The novel Low-level Enhancement ... can be regarded as a 3D extension of textural enhancement [60]. The authors should validate whether the 3D convolutions make the major contribution ... Can it be implemented with 2D convolutions ...\"**\n\n**A2**: \n\n**1)** The introduced Low-level Enhancement was motivated by previous low-level feature learning methods, which use hand-crafted filters [40,19,26,35]. Our intuition here is to leverage shallow learnable filters for low-level information processing.\n\n**2)** On the other hand, different from [55], which uses stacks of deep convolutional layers, we adopt only one convolutional layer in each Low-level Enhancement Transformer stage. Thus, only a few additional parameters are involved. Moreover, the performance gain is significant, as shown in Table 3, especially on the challenging DFDC dataset.\n\n**3)** The reason we use 3D filters instead of 2D ones is mainly the requirement of sequential modeling: voxel-level alignment should be considered with cubic kernels. Here we replace the 3D filters with 2D ones and show the results in the table below. \n\n| Kernel type$\downarrow$ Dataset $\rightarrow$ | FF++ | CelebDF | DFDC | FaceSh | DeepFo |\n| --------------------------------------------- | ----- | ------- | ----- | ------ | ------ |\n| 2D | 99.32 | 84.90 | 79.54 | 98.49 | 97.20 |\n| 3D | 99.52 | 89.25 | 80.39 | 99.51 | 98.50 |\n\nIt can be seen that 2D filters lead to inferior performance.\n\n> **Q3: \"The LTTD focuses on the low-level temporal patterns of the restricted spatial region. What is the restricted spatial region? ...\"**\n\n**A3**: As we split a video into sequential patches, the \"restricted spatial region\" refers to the area of space enclosed by the patch boundary. In this paper, we set the patch size to 16x16, which considerably corrupts the semantic information and thus is suitable for low-level pattern learning.", " > **Q4: \"As shown in Table 3, the ViT [18] has inferior performance to the CNN baseline Xception. What is the reason for this phenomenon? How do the ViT-based methods FTCN [62] and LTTD improve the performance of ViTs on Deepfake detection?\"**\n\n**A4**: \n\n**1)** \"Why does Xception outperform ViT?\" More precisely, Xception only demonstrates better performance on FF++ (in-dataset, AUC% 99.38 vs 97.92) and FaceSh (cross-dataset, AUC% 78.6 vs 65.56), while ViT shows better performance on the more challenging DFDC (cross-dataset, AUC% 72.89 vs 67.36). On the one hand, Xception, with more abundant inductive bias, tends to better learn the specific forgery patterns or identity features in the training set, thus achieving better performance on FF++ and FaceSh (FaceSh shares the same identities with FF++, i.e., the same source videos are adopted to generate deepfakes). On the other hand, ViT, with less inductive bias, demonstrates better generalization, thus having advantages in the DFDC evaluation. A similar discussion was made in Sec 4.5, where we drew the conclusion with a more intuitive visualization (Fig 3).\n\n**2)** \"Why do FTCN and LTTD improve over ViT?\" First, although FTCN also employs a self-attention module, there are fundamental differences compared with LTTD regarding both motivation and model design, in which the deep point-wise 3D convolution operations play a major role in FTCN. 
Moreover, we think the key to the significant generalization improvements of LTTD compared with ViT is the idea of `low-level & temporal feature learning` and the specially devised `learning-within-patch` model framework. For the idea of `low-level & temporal feature learning`, we have made a related response in the earlier comments (Novelty comment); the effects of the `learning-within-patch` framework can be seen in the ablation (Table 3), where the model w/o conv enhancement (LTTD w/o LST) already outperforms the two baselines. Moreover, Fig. 3 shows that our LTTD learns completely different features.\n\n> **Q5: \"Complexity. The comparisons on model complexity and GFLOPs between LipForensics, FTCN, and LTTD are desirable.\"**\n\n**A5**: Please refer to the \"General Responses\".\n\n> **Q6: \"Localization. Can the LTTD localize the forged regions? ...\"**\n\n**A6**: Thanks for the advice. Our method can indeed localize the forged regions. We include a short discussion with visualizations in the revised supplementary material.\n\n**Follow-up**: We hope that our responses so far have cleared up the confusion and allow the reviewer to reevaluate our paper. We are willing to have further discussions if there is anything we could clarify. ", " We sincerely thank the reviewer for the constructive comments. We will respond to each detailed concern as follows:\n\n> **Q1: \"In Sec 3.2, the authors use ‘shallow 3D convolution’ in the LST module for ‘align’... In my opinion, it is similar to face alignment, ... why not just do it in the pre-processing stage.\"**\n\n**A1**: We intentionally did not align faces because alignment errors are inevitable during per-frame processing. As a result, per-frame face alignment would certainly corrupt the natural low-level temporal consistency/inconsistency of both *real* and *fake* videos. \n\n> **Q2: \"In Sec 3.3, sim_gt is calculated by interpolation operation, how about max or average pooling operation?\"**\n\n**A2**: Interpolation is employed here only for narrowing the spatial dimensions of $\mathrm{sim}_{gt}$. We have tried pooling and found virtually no difference.\n\n> **Q3: \"In Sec 4, the experiment part is lacking intra-evaluation, both training and testing on FF++.\"**\n\n**A3**: In recent works, in-dataset results are almost saturated (overfitting to specific kinds of artifacts may lead to better in-dataset performance, but worse generalization) and, as a common practice, are not listed for comparison. Our method achieves 99.52 AUC% on FF++, which also demonstrates SOTA in-dataset performance. \n\n> **Q4: \"In Sec 4.1, how many frames or clips are sampled from videos in each deepfake dataset.\"**\n\n**A4**: For storage reasons, we extract only the first 128 frames of all videos in our experiments. Therefore, the final prediction is averaged from 8 clips. We have added these details in the revision.\n\n> **Q5: \"In Sec 4.3, the result analysis about robustness evaluation seems not enough.\"**\n\n**A5**: Regarding robustness, we focus our analysis on our comparisons with Face X-ray and LipForensics, as both these studies are closely related to our discussions in the Introduction (about the dilemma of simultaneously achieving generalization and robustness).\n\n**1)** Low-level feature learning could lead to better generalization, but worse robustness. Face X-ray is one of the first works to achieve generalizable deepfake detection, focusing on low-level (blending boundary) learning. 
From the results in Table 2, Face X-ray suffers from drastic performance degradation when perturbations of Gaussian noise, Gaussian blur, and video compression are applied. The reason is that the low-level features of blended boundaries are greatly corrupted by these perturbations, clearly demonstrating the weakness of low-level feature learning in terms of robustness.\n\n**2)** Semantic feature learning results in better robustness, since most perturbations do not change the semantic information. LipForensics, which focuses on high-level understanding, demonstrates better robustness compared to Face X-ray under perturbations like Gaussian noise, Gaussian blur, and video compression. However, this method cannot generalize to scenes where the mouth region is occluded or closed.\n\n**3)** As discussed in the *Introduction* section, to achieve both *generalization* and *robustness*, we combine the learning of low-level & temporal features. The learned local patterns of our model are less influenced by low-level perturbations compared to Face X-ray. Moreover, our method performs better than LipForensics under different scenes.", " > **Q6: \"the ablation study for robustness evaluation is also required.\"**\n\n**A6**: \n\n| Method | Clean | Color Saturation | Color Contrast | Block-Wise Noise | Gaussian Noise | Gaussian Blur | Pixelation | Video Compression | Avg/Drop |\n| ------------ | ----- | ---------------- | -------------- | ---------------- | -------------- | ------------- | ---------- | ----------------- | ---------- |\n| LTTD | 99.4 | 98.9 | 96.4 | 96.1 | 82.6 | 97.5 | 98.6 | 95.0 | 95.0/-4.3 |\n| Face X-ray | 99.8 | 97.6 | 88.5 | 99.1 | 49.8 | 63.8 | 88.6 | 55.2 | 77.5/-22.3 |\n| LTTD w/o LST | 98.8 | 93.9 | 92.6 | 86.0 | 68.8 | 93.2 | 95.9 | 91.7 | 88.9/-9.9 |\n| LTTD w/o CPI | 98.8 | 93.9 | 92.6 | 86.2 | 68.9 | 93.2 | 95.1 | 91.7 | 88.8/-10.0 |\n| LTTD w/o GCC | 99.1 | 97.6 | 90.6 | 94.9 | 76.0 | 89.0 | 97.4 | 91.4 | 90.9/-8.2 |\n\nFrom the results, we find that the special designs all contribute to optimal performance. For color contrast (CC), the Global Contrastive Classification (GCC) module makes a more significant contribution, as it better enhances the detection of local color anomalies by modeling features in different spatial regions through global comparisons. For block-wise noise (BW), the Local Sequence Transformer (LST) and Cross-Patch Inconsistency (CPI) modules contribute more: since BW noise affects only a very small local area, it has no effect on the low-level & temporal features in other regions, but it does interfere with the global contrast learning of GCC. The results on Gaussian noise (GNC) can be understood consistently with BW: since GNC comprehensively modifies the low-level features, the low-level & temporal learning of LST and CPI is greatly affected, while the global contrastive learning of GCC is less affected, leading to a more significant contribution of GCC. Compared with Face X-ray, which focuses on spatial low-level feature learning, the performance degradations of our models are significantly smaller due to the consideration of the temporal dimension. This phenomenon is also in line with the motivation we discussed in the Introduction: low-level features are susceptible to perturbations, and robustness is enhanced by incorporating temporal learning. 
We have added this part to the revised supplementary material.\n\n> **Q7: \"It seems that the supervision information has not only binary labels but also masks, so the prediction may be determined by other simpler structures like fine-grained classification layers?\"**\n\n**A7**: In Sec 3.3, we introduce the CPI loss using the *modification masks* (which are created by simply subtracting the fake frame from the corresponding real one) as supervision, but they do not directly affect the final prediction and are not used at test time. The CPI loss is only calculated for training regularization. The final prediction is given by the GCC module, as described in L214.\n\n> **Q8: \"Based on Table 3, the result line from the ‘LTTD w/o CPI’ shows that CPI contributes far less than the LST or GCC module in DFDC and DeepFo.\"**\n\n**A8**: The LST and GCC modules are indeed the key components of our LTTD, considering the main idea of low-level & temporal learning and local-to-global prediction. CPI provides only auxiliary contrastive supervision, which is helpful but relatively less significant.\n\n> **Q9: \"For the authors’ motivation, using low-level... It seems that the middle-level features, and the suitable mixture of low-& middle-level features may also contribute to Deepfake Detection.\"**\n\n**A9**: Our current version focuses only on the low-level part. However, we think the idea of a \"suitable mixture of low- & middle-level features\" is interesting and would be of great value. We will add this to our discussions on future work directions.\n\n> **Q10: \"For Sec 4.4, ... the authors could refer to the benchmarks in Improving [45] and ForgeryNet ...\"**\n\n**A10**: Thanks for the advice. With limited time, we are unable to provide an evaluation with [a] and [b]. We will include a discussion of these benchmarks in the future.\n\n[a] Improving the efficiency and robustness of deepfakes detection through precise geometric features. \n\n[b] ForgeryNet: A versatile benchmark for comprehensive forgery analysis\n\nPlease do not hesitate to let us know if there are any additional clarifications or experiments that we can offer!", " The authors propose a framework to improve the generalization and robustness of deepfake detection, which relies on local low-level and temporal information and a transformer-based model. In particular, the Local Sequence Transformer (LST) is used to identify low-level temporal inconsistency and the Cross-Patch Inconsistency loss (CPI) is used to model spatial inconsistency. The Global Contrastive Classification (GCC) is used for final classification, which inserts temporal tokens and adopts three additional Transformer blocks.\nQuantitative experiments show better results on four datasets in the generalization evaluation benchmark and on seven perturbations in the robustness evaluation benchmark, compared with some recent works.\n Strengths:\n1. The paper is well written, easy to follow, and provides adequate experimental results.\n2. The proposed framework successfully integrates temporal information and low-level features, which come separately from an improved vision transformer and a shallow CNN, and could be of some contribution to the community.\n3. There are some novel ideas in the LST module: the first is the input type, ‘local patches with the same spatial position’, and the second is the low-level temporal enhancement module, in line with research showing that low-level artifacts are more suitable than semantic artifacts for deepfake detection. 
What’s more, the CPI module is well designed, and the motivation of the GCC module shows a good understanding of deepfake datasets.\n4. The evaluation experiments on generalization and robustness are well done.\n\nWeakness:\n1. In Sec 3.2, the authors use ‘shallow 3D convolution’ in the LST module for ‘align’ as the first reason (ignoring the second reason temporarily). In my opinion, it is similar to face alignment, so if the face-alignment operation plays the same role, why not just do it in the pre-processing stage?\n2. In Sec 3.3, sim_gt is calculated by an interpolation operation; how about max or average pooling operations? The authors may provide an ablation study about how to calculate it.\n3. In Sec 4, the experiment part is lacking intra-evaluation, for example, both training and testing on FF++.\n4. In Sec 4.1, how many frames or clips are sampled from videos in each deepfake dataset?\n5. In Sec 4.3, the result analysis about robustness evaluation seems not enough.\n6. In Sec 4.4, the ablation study is only done for generalization evaluation; that for robustness evaluation is also required.\n Confusion:\n1. In Sec 3.3, it seems that the supervision information has not only binary labels but also masks, so the prediction may be determined by other simpler structures like fine-grained classification layers, instead of the proposed complex “temporal tokens + three Transformer blocks” structure.\n2. Based on Table 3, the result line from the ‘LTTD w/o CPI’ shows that CPI contributes far less than the LST or GCC module in DFDC and DeepFo. So is the CPI module significant?\n\nLimitation:\n1. For the authors’ motivation, using low-level features instead of semantic features benefits a lot in Deepfake Detection. It seems that middle-level features, and a suitable mixture of low- & middle-level features, may also contribute to Deepfake Detection. It is a pity that the paper does not include this exploration.\n2. For Sec 4.4, the robustness evaluation seems not strong enough; the authors could refer to the benchmarks in Improving [45] and ForgeryNet [CVPR 2021, Forgerynet: A versatile benchmark for comprehensive forgery analysis], whose robustness evaluation contains more types of perturbations and mixed perturbation test sets.\n The authors have done it properly. ", " This paper proposes a Local- and Temporal-aware Transformer-based Deepfake Detection (LTTD) framework to capture temporal cues from local sequences. The authors design a Local Sequence Transformer (LST) that models the temporal consistency of restricted spatial regions to learn local-to-global features. The proposed LTTD framework achieves outstanding generalizability and robustness. Strengths:\n1. Clear description of the method.\n2. Remarkable generalization towards unseen deepfakes and strong robustness against post-processing operations. \n\nWeakness:\n1. Novelty.\n2. Analyses.\n 1. Novelty. \nMany studies prove that low-level patterns and temporal inconsistency are effective clues for Deepfake detection. What is the motivation of this research? Does the LTTD framework address specific problems that previous methods have neglected? The authors are supposed to underline the motivation and advantage of the proposed method.\n\n2. Analyses. \n1) The novel Low-level Enhancement is designed by shallow 3D convolutions. It can be regarded as a 3D extension of textural enhancement [60]. The authors should validate whether the 3D convolutions make the major contribution to improving feature learning. 
Can it be implemented with 2D convolutions to have a comparable performance gain and less computational overhead?\n2) The LTTD focuses on the low-level temporal patterns of the restricted spatial region. What is the restricted spatial region? Please introduce the definition of the restricted region. Furthermore, is it necessary to restrict specific regions? The authors should analyze the difference between restricted and unrestricted regions.\n3) As shown in Table 3, the ViT [18] has inferior performance to the CNN baseline Xception. What is the reason for this phenomenon? How do the ViT-based methods FTCN [62] and LTTD improve the performance of ViTs on Deepfake detection? What is the key to this significant improvement?\n4) Complexity. The comparisons on model complexity and GFLOPs between LipForensics, FTCN, and LTTD are desirable.\n5) Localization. Can the LTTD localize the forged regions? It would be instructive to visualize the localization of face forgery.\n Adequately claimed. ", " To achieve generalizability across deepfake methods and robustness towards image post-processing, this paper proposes the Local- & Temporal-aware Transformer-based Deepfake Detection (LTTD) framework. The experiments show that the proposed approach achieves state-of-the-art generalizability and robustness. Strengths:\n+ The paper attempts to ensure both generalizability and robustness in deepfake detection, which are two important problems. The proposed method focuses on low-level temporal learning and prevents overfitting to global semantic cues.\n\n+ The experiments are comprehensive.\n\n+ The paper is clear to understand and the writing quality is relatively good.\n\n\nWeaknesses:\n- The paper claims that the method crops the face regions using the same bounding box after randomly determining the clip range in-the-fly (L246), which may affect the practicality. It may be suitable for the videos in datasets because many of these videos often have a single scene. However, some videos in datasets and most videos in the wild often have many scenes. If the clip contains scene switching, it may cause face misalignment and affect the detection results.\n\n- Some details of selecting video clips are missing. How many clips are extracted from one video? In FaceForensics++, one video may have less than 300 frames or more than 1000 frames. For short and long videos, is the number of clips different? The paper reports video-level AUC. Why is the temporal dimension 16? It is necessary to have an ablation study to see the effect of different choices of this parameter.\n\n- The method splits the input image into local patches and uses 3D convolutions. It seems that the framework needs complex computing. What about the efficiency of inference?\n\n- It is not clear how to compute the ground truth similarity matrix and how to generate the mask sequence (L201-205), especially for the fake clip. See above. This work aims to detect deepfake videos, which is important to prevent the spread of fake information.", " The proposed method combines 3D convolutions and a vision transformer to extract forgery artifacts in Deepfake videos; the experiments show that the proposed method achieves outstanding generalization performance on multiple Deepfake datasets. 
\n\n Strengths:\nThe overall performance of the proposed method is outstanding.\nWeaknesses:\nThe writing of the manuscript should be improved. 1. In the discussion part of Section 3.2, the authors claim that the proposed structure, which combines 3D conv features and self-attention features, can explicitly avoid semantic modeling of features like facial structure and always focus on low-level temporal learning, but no explanations are given to prove this statement.\n2. Comparing Eq.4 and Eq.8, the formats of these two equations are not unified. In addition, the meaning of the symbols used in the equations is not clearly given, making the equations hard to understand. Also, it is strange to use t (lowercase) to represent the features after the pooling operation while the uppercase letter T represents the time sequence. Similar problems occur in all the equation parts.\n3. In equation 12, in my understanding, the m should equal Flatten(mβ).\n4. The authors mention that the temporal dimension T is set to 16. However, no selection details are given. Suppose we have a 300-frame Deepfake video with the last 150 frames tampered. The frame selection strategy would be crucial in detecting this Deepfake video. In addition, no discussion of frame number selection is given. The authors should present more analysis on how frame number influences the proposed structure.\n5. The following method is not compared in the generalizability evaluation:\nKaede Shiohara, Toshihiko Yamasaki: Detecting Deepfakes with Self-Blended Images. CVPR 2022. The authors have adequately addressed the limitations and potential negative societal impact of their work.", " The paper proposes a method for DeepFake Detection. The method focuses on a particular way of modeling the temporal aspect of the detection: instead of first learning a spatio-temporal downsampled embedding with 3D convolutions and then aggregating the embedding with transformers [62], the proposed method immediately breaks the image into patches and adds the temporal dimension. Unlike prior work, temporal modeling is done directly over patches with the motivation of identifying low-level temporal inconsistencies, standing on the shoulders of insights from [35]. This is achieved by moving the transformer architecture immediately upstream in the pipeline, contrary to previous methods in which temporal aggregation is usually performed downstream. Though the method models patches in this way, it still uses some 3D convolutions applied to patches. The method is supervised with a standard cross-entropy loss and another loss that enforces that real patches should be similar to each other in the learned embedding space.\nExperimental results are provided on recent benchmarks for Deepfake detection along with ablation studies. The robustness of the method to common perturbations is also tested. ### Strengths\n- The method is overall well motivated and takes a direction different from the state-of-the-art [62], directly modeling patch sequences and bringing transformers upstream in the pipeline.\n- Overall the innovative claims in the intro are well balanced with the paper's experimental evidence, especially in the robustness part of perturbations.\n- I was able to grasp the overall idea of the method quickly, though often parts feel overcomplicated in the way they are written.\n- The experimental validation is promising and uses recent benchmarks for deepfake detection. Ablation studies are provided to support design choices. 
In some cases the ablation improvement is clear; in other cases it is less clear. In Tab. 3, an overall summary using the average with standard deviation across unseen manipulations is needed.\n\n\n### Weaknesses\n\n- The average improvement over the competing method [62] `(89.6% vs 91.8%)` is appreciated but looks a bit marginal without comparing the timings of the approach. Also, it is unclear if the improvement `+2.2` video-level AUC is statistically significant. What happens if you train multiple **LTTD** with different initializations and multiple **FTCN-TT** and report the average and standard deviation? Is the improvement statistically significant, given the standard deviation of the two approaches? (Assuming the authors have code to run FTCN-TT.) I ask this because it is clear from the distribution of performance that the gap is obtained thanks to the DFDC dataset; all the others are basically the same. What makes LTTD work well on DFDC? This small improvement calls into question the complexity of the method, and I have the idea that the paper will be hard to reproduce and re-implement.\n- Though the method is well-motivated, it feels a bit like the method is riding the success of transformers, applying them to the idea of modeling temporal discrepancy with patches for DeepFake Detection and changing the idea of [62]. Also, though the method seeks a paradigm change, it does still apply 3D convolution as in [62]. This part where the method applies 3D convolution on patches is less clear. Convolution works on images because of the assumption that patches can be highly correlated, but here the method breaks the image by splitting into patches, so more explanation on this 3D convolution is needed for the way it handles the spatial component over patches.\n- The idea to break the image into patches does not account for rigid and non-rigid deformations that the face may undergo, even in 3D; convolution suffers from this too but is more resistant to translation jittering by definition. The paper says something on this matter at `L149` but it is not clear.\n- `L242` gives details about the face cropping, which is related to alignment, motion compensation, and breaking the image into patches. It is not clear overall how the face cropping of the system works. The method does not do frame-by-frame face detection and cropping but uses a single crop. The text says using `the same bounding box`, but the same with respect to which frame? It is also not clear what is randomly determined. If a single crop is used and the face moves slightly, this is even worse than the random jittering that you get by frame-by-frame processing.\n- More on this: in Section 3.2 it is not clear if the Conv3D still slides over the patches or if the spatial kernel size of the convolution is as big as the patch itself. This part spends a lot of space with equations (1-4) with tons of super- and subscripts, but it does not make the reading easier.\n- It is ok to be formal with the notation, yet very often the article abuses notation and this makes the article not easily readable.\nThere are multiple such remarks, e.g., the usage of a subscript that is not necessary in $ \mathbf{x}_{p}^{i}$. Why is $p$ used in L.131? In L.144 I believe the patch is indexed by $i$. Same remarks for Eq. 
(5-8).\n- Given that the title includes _Delving into Sequential Patches_, the method misses an instructive study of what happens when the temporal dimension $T$ takes values other than 16; also, given a fixed $T$, the paper misses a study to understand whether it is better to have a dense sampling of the frames (e.g., 1:1 w.r.t. the original video) and a small window, or maybe a more sparse sampling of the frames (1:20) to aggregate more content of the video.\n- Fig. 1 is overall nice and I appreciated it, but it is pretty dense and not easy to digest immediately.\n- I found some typos and sometimes the text uses adjectives that are a bit off: e.g. `L17 ` `enormous fake videos`? This means that a video is huge, big in size. `L81` data aRgumentation. `L199` vise versa (vice versa). `L308` a unified manifolds. \n\n### Justification of the rating\nMy opinion of the paper is that it is a good system paper for deepfake detection with marginal improvement over the state-of-the-art, assuming these are statistically significant. The paper has some strength in showing more robustness than other methods. The overall claims are decently supported. Besides that, the paper needs to be improved by simplifying the notation and making it clearer for the community, since as of now it will be hard to reproduce. Overall it is a good paper, but I believe that if the paper is not accepted, the deepfake community is not going to lose innovative ideas. According to this, I suggest a borderline reject score before rebuttal. - Tab. 3 may improve if an average is presented.\n- No detail of training and testing time.\n- What are the parameters of the 3D convolution used? Kernel size over the spatial and temporal dimensions?\n\n- There are several parts that are not clear:\n > L198: the simplest thought is that the real region should be similar to the real one to some extent\n\nFor me, this is not enough to justify the loss and is not a very clear sentence. The sentence says that real regions should be similar to real ones, but this is not true in the image space; maybe this is the requirement that the loss is imposing in the learned feature space, though it is not clear. Furthermore, it is not clear why the loss matches the cosine similarity between all patches with the ground-truth similarity matrix. Which min and max values does the ground-truth similarity matrix take?\nThe process of generating the ground-truth similarity matrix is displayed in Eq. (12,13) but seems overcomplicated with the notation ($m_\alpha,m_o,m_\beta$). Why does interpolation have to be $\pi$? Per my understanding, the min and max values for sim_gt are $\{-1, 1\}$. Is this correct? Anyway, the motivation for the loss is that real features should be basically similar to each other, but it is not clear how this is attained given the current loss formulation, since there is no selection of the real embedding in the loss, unless this is achieved with the $sim_{gt}$, though how? \n\n L:208 $\mathbf{x}_{class}$ is mentioned without explaining what it is.\n\nL:242, in the same way as I mentioned in the weaknesses, the part on data processing is hard to understand. Especially this sentence:\n> Therefore, in our method, we crop the face regions using the same bounding box after randomly determining the clip range in-the-fly,\n\n\n\n The limitations presented are somewhat related to the generalization of the model. 
It says _now it is good (at least a bit superior to the state-of-the-art) but we will be unsure in the future._ \n\nI think an overall direction for the future that would be nice for Deepfake Detection is working toward ensuring that the predictions of the detector are really well calibrated. To the best of my knowledge, there is no detection system yet deployed that can be left to run freely on videos on social networks and flag them. I am not sure if there are implementations with humans in the loop that check highly confident videos flagged by the detector. Anyway, the point is that, given that an erroneous highly confident prediction of fake for an authentic video could be even more problematic than a deepfake itself, all these deepfake detectors have the limitation that we do not know if they are _calibrated well_. Perhaps adding a general discussion on this may improve the paper: i.e., as of now, the method has good generalization accuracy, but if we run it in the wild on videos for which we do not have labels, how much can we trust the prediction?
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4, 4 ]
[ "JUqIE2pcn8Q", "p3teEjcTMZ9", "zbbz1ew0FuN", "_BqXop1Au-y", "nips_2022_osPA8Bs4MJB", "nips_2022_osPA8Bs4MJB", "zAVQlCO2DrC", "zAVQlCO2DrC", "zAVQlCO2DrC", "ul9Sk2FIvVa", "KgHskORqAw8", "JUqIE2pcn8Q", "JUqIE2pcn8Q", "jo2urd0juA_", "jo2urd0juA_", "nips_2022_osPA8Bs4MJB", "nips_2022_osPA8Bs4MJB", "nips_2022_osPA8Bs4MJB", "nips_2022_osPA8Bs4MJB", "nips_2022_osPA8Bs4MJB" ]
nips_2022_MbVS6BuJ3ql
Maximum Class Separation as Inductive Bias in One Matrix
Maximizing the separation between classes constitutes a well-known inductive bias in machine learning and a pillar of many traditional algorithms. By default, deep networks are not equipped with this inductive bias and therefore many alternative solutions have been proposed through differential optimization. Current approaches tend to optimize classification and separation jointly: aligning inputs with class vectors and separating class vectors angularly. This paper proposes a simple alternative: encoding maximum separation as an inductive bias in the network by adding one fixed matrix multiplication before computing the softmax activations. The main observation behind our approach is that separation does not require optimization but can be solved in closed-form prior to training and plugged into a network. We outline a recursive approach to obtain the matrix consisting of maximally separable vectors for any number of classes, which can be added with negligible engineering effort and computational overhead. Despite its simple nature, this one matrix multiplication provides real impact. We show that our proposal directly boosts classification, long-tailed recognition, out-of-distribution detection, and open-set recognition, from CIFAR to ImageNet. We find empirically that maximum separation works best as a fixed bias; making the matrix learnable adds nothing to the performance. The closed-form implementation and code to reproduce the experiments are available on github.
Accept
This paper aims at introducing a criterion for class separation. The paper demonstrates high performance by proposing an affine transformation of the canonical embedding of labels, which leads to maximal separation between the resulting class vectors. Given the simplicity and good numerical results, I recommend accepting this paper; however, the minor revisions suggested by reviewer BhPp need to be addressed in the camera-ready version.
train
[ "mNjl8vJ2iqA", "gJmnMgirm4", "O7-jqKUpfcH", "IH_MvrJaxay", "nXyDOznw3Qh", "BcwjerF_FpR", "82MnAD7OhJb", "V1lT5wXHERc", "VRePyw72Ud9u", "Og87ZAxhqc", "0D1_NLqYv2Z", "yPm3kip21hv", "cZ1jwnDsIJK" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. I have no more questions, and I hope these suggestions can be useful when preparing a revised version.", " We thank the reviewer for their response.\n\nRegarding feature dimensionality, you can indeed set the feature embedding to e.g. $512$ dimensions with a standard softmax cross-entropy formulation. To obtain class logits and compute the loss, a final layer with $512 \times (k+1)$ learnable parameters is then needed for $k+1$ classes (excluding the bias). Similarly, in our approach we can set the number of feature dimensions to $512$. This is then followed by a learnable layer of size $512 \times k$. On top, we add our fixed matrix of size $k \times (k+1)$ to get class logits. This means that in terms of learnable parameters, there is no noticeable difference between the standard setup and our maximum separation formulation. With many classes however, the fixed final matrix will be large, which can lead to extra computational effort. At the ImageNet scale (1,000 classes) training and inference times are similar, but we have yet to investigate extreme classification cases. We hope this helps clear up the similarities and differences; we will add this point to the paper.\n", " We thank the reviewer for their response and answer their additional questions below.\n\n\nOur solution is originally motivated by the line of works on maximum separation in deep networks. The current state-of-the-art strives for separation through approximate optimization, see e.g. [25,28,29,31,33,35] of our paper and [1,2] in the refs of the reviewer. We bring a new perspective to these works by showing that separation is not an optimization problem and can be solved in a closed-form recursive manner. \n\n\nAs noted by the reviewer, our closed-form maximum separation solution relates to observations in neural collapse, specifically NC2 in [3] from the reviewer’s references. Our approach is however different in both formulation and empirical outcomes. In neural collapse, the final matrix is $(k+1) \times d$ with $d \geq k+1$ for $k+1$ classes ([3,4,5] in the refs above), while our matrix is more compact at size $(k+1) \times k$. As a result, we have a different solution in the form of a recursive algorithm. This recursive closed-form solution comes with benefits; notably, it opens the door towards continual and class-incremental learning with maximum separation. The algorithm yields the maximum separation matrix near-instantly for a fixed number of classes and only needs to be computed once prior to training, so it does not affect the overall training and inference time. We furthermore plug our approach on top of any network architecture, rather than replace the final layer as done in e.g. [5]. Our setup focuses on improving results, and we provide strong performance gains in classification and long-tailed recognition without any extra learnable parameters. We also show that maximum separation is directly beneficial for out-of-distribution detection and open-set recognition, which highlights its generic and broad applicability. We will include the references and discussion on neural collapse in Section 2 of the paper.\n\n\nOur theory is encapsulated in Lemma 1 and Theorem 1, and we have additional empirical analyses to help understand our approach. We show in Figure 5 that maximum separation improves out-of-distribution detection by increasing the gap in energy scores between in- and out-of-distribution samples. 
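(As a reference, the standard energy score of Liu et al., NeurIPS 2020, is $E(x) = -T \log \sum_{c} e^{f_c(x)/T}$, computed over the class logits $f_c(x)$ with temperature $T$; lower energy indicates in-distribution samples, so a larger gap between the two energy distributions directly translates into better detection. We state the general formulation here for completeness; the exact temperature setting is an implementation detail.)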
We have added a visualization of the outputs of closed- and open-set samples, which shows that samples from outside the closed training set are less discriminative and have a lower confidence. We have furthermore added an analysis on the Angular Fisher Score, which highlights that maximum separation increases the discriminativeness of standard networks. These new analyses have been included in the supplementary materials.\n", " Thanks for the response. I appreciate the effort put into the rebuttal, and I am satisfied with the response. However, while I am generally satisfied with the response, I would like to clarify one misunderstanding from the authors. But I do want to note that although this is a limitation of this work, it does not prevent this paper from being an interesting and solid idea (which may inspire others to work on solving this limitation). \n\n\n\"Indeed, the output dimensionality is always one less than the number of classes, hence it scales linearly with the number of classes as well. This is the same for standard cross-entropy, where the number of output dimensions needs to be the same as the number of classes, hence our approach does not add any burden on top of standard softmax cross-entropy optimization.\"\n\n- I think you may misunderstand my point. My point is that the static design of classifiers requires the dimension of the classifier (which is the same as the dimension of the feature) to scale linearly with the number of classes. This implies that the feature will have really large dimensionality if the number of classes is large (say million-scale classification), while this is not the case for standard softmax cross-entropy. Usually for million-scale classification, you could still set the feature dimension as 512 or 1024 for standard learnable classifiers, which can be way smaller than the number of classes.", " Thanks for responding. Remaining concerns are detailed in the comments below.\n\nMaximum Class Separation continually draws rigorous attention in machine learning [1-6], as also stated by the authors. Considering the general classification task, we need to carefully discuss the novelty and contribution of inducing the fixed classifier and a new estimation algorithm. Also, as the formulation of Definition 1 and Lemma 1 is equivalent to the Simplex Equiangular Tight Frame (ETF) [3] defined in neural collapse [3-6], and the motivation to obtain the sought-for geometry (Maximum Separation) matches [5-6], the literature on neural collapse should be considered.\n\n\nContribution 1: Fixed classifier design\n\n- As far as I'm concerned, this contribution is heavily covered in [5-6], stated as\n - \"For example, our experiments demonstrate that one may set the feature dimension equal to the number of classes and fix the last-layer classifier to be a Simplex ETF for network training, which reduces memory cost by over 20% on ResNet18 without sacrificing the generalization performance.\" [5]\n - \"We propose a new paradigm for deep neural network with the linear classifier randomly initialized as a simplex ETF and fixed during training\". [6]\n- Lack of theoretical analysis. For comparison, [6] theoretically indicates that feature learning with the fixed classifier converges to the neural collapse state, even in the imbalanced case. 
(Theorem 1, 2)\n- Fixed classifiers have been discussed in many scenarios: e.g., self-supervised learning [7] and long-tailed learning [8].\n\n\nContribution 2: recursive algorithm\n- It is important to clarify why we need a recursive algorithm rather than the closed-form estimation (Definition 1 in [3][6]), which has satisfactory computational complexity. \n\n\n[1] 2018 NeurIPS, Learning towards Minimum Hyperspherical Energy\n\n[2] 2021 AISTATS, Learning with Hyperspherical Uniformity\n\n[3] 2020 PNAS, Prevalence of Neural Collapse during the terminal phase of deep learning training\n\n[4] 2020 arXiv, Neural collapse with unconstrained features\n\n[5] 2021 NeurIPS, A geometric analysis of neural collapse with unconstrained features\n\n[6] 2022 arXiv, Do We Really Need a Learnable Classifier at the End of Deep Neural Network?\n\n[7] 2022 ICLR, Understanding Dimensional Collapse in Contrastive Self-Supervised Learning\n\n[8] 2022 CVPR, Targeted Supervised Contrastive Learning for Long-Tailed Recognition", " We thank the reviewer for their positive comments regarding the novelty of the approach, the theoretical analysis, and the empirical effectiveness of our paper, and also for their guidance to improve the paper and code.\n\n### Figures and code\n\nWe thank the reviewer for the suggestions to improve Figure 1 and Figure 3. We have added $P_k$/$p_k$ to Figure 1 and added plot labels with imbalance ratios in Figure 3. The updated figures are shown in the revised pdf. We have also fixed the typo in the caption of Figure 4. We will address the minor code fixes and upload the code to a public GitHub repository.\n\n\n\n### Generalizing to unsupervised settings\n\nOur paper focuses on the supervised setting, but we also see potential for maximum separation in unsupervised settings. For example, Wang and Isola (ICML, 2020) have previously shown that self-supervised learning involves optimizing for alignment and uniformity. Maximum separation can potentially improve self-supervised learning by increasing uniformity between samples in batches for contrastive learning. Also segmentation, which generalizes classification to the pixel level, is a potentially fruitful direction for maximum separation. We have added this discussion to Section 5.\n\n### Error bars for imbalanced experiment\nBased on the reviewer’s suggestion, we have run the experiments in Table 1 of the main paper 5 times and added error bars. The results show that over multiple runs, the improvements are stable. 
Due to space limitations, we have added the experiment with error bars to the supplementary materials.\n\n| | | | CIFAR-100 | | | | | CIFAR-10 | | |\n|--- |--- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| | - | 0.2 | 0.1 | 0.02 | 0.01 | - | 0.2 | 0.1 | 0.02 | 0.01 |\n| ConvNet | 56.45 ± 0.32 | 45.88 ± 0.43 | 40.04 ± 0.38 | 27.17 ± 0.52 | 16.31 ± 0.22 | 86.30 ± 0.21 | 78.37 ± 1.04 | 73.6 ± 0.58 | 51.71 ± 0.38 | 42.72 ± 1.21 |\n| \+ This Paper | **57.05 ± 0.55** | **46.21 ± 0.45** | **40.44 ± 0.23** | **28.16 ± 0.31** | **18.15 ± 0.53** | **86.48 ± 0.20** | **79.44 ± 1.20** | **75.4 ± 1.03** | **56.98 ± 1.16** | **48.26 ± 0.65** |\n| ResNet-32 | 75.42 ± 0.37 | 65.20 ± 0.43 | 58.01 ± 1.01 | 42.70 ± 0.20 | 34.98 ± 0.54 | 94.41 ± 0.25 | 87.96 ± 0.24 | 82.95 ± 0.45 | 68.04 ± 0.83 | 56.5 ± 0.56 |\n| \+ This Paper | **76.41 ± 0.21** | **66.22 ± 0.56** | **60.23 ± 0.54** | **45.11 ± 0.13** | **37.65 ± 0.81** | **96.12 ± 0.19** | **91.26 ± 0.22** | **88.01 ± 0.73** | **77.12 ± 1.33** | **68.8 ± 1.42** |", " We thank the reviewer for their positive words on the topic, the experiments, and the writing. Below, we have addressed the listed concerns.\n\n### Relation to other works\nMaximum separation for classification has a rich history in machine learning. The concurrent work of Yang et al. [1] also points out the strong potential of fixed and separated class embeddings in deep networks. Both works provide complementary views on maximum separation: Yang et al. highlight the link with neural collapse, while we show the potential of maximum separation beyond classification and long-tailed recognition for out-of-distribution detection and open-set recognition, with a single line of code. We will also include Graf et al. [2] with the other referenced papers on incorporating separation through optimization in Section 5.\n\n### Results in Table 1 versus Table 2\n\nImprovements of our approach increase with higher imbalance in Table 1 because standard architectures do not account for imbalance. In Table 2, we incorporate our approach on top of methods specifically designed for long-tailed recognition. These problem-specific approaches already tackle imbalanced settings and further benefit from our maximum separation.\n\n### Additional analyses for OOD and OSR\n\nFor out-of-distribution detection, we show in Figure 5 that our approach enlarges the gap in energy score distributions for in- and out-of-distribution samples, which results in better out-of-distribution detection. Following the reviewer’s suggestion, we have performed a feature visualization to explain the performance gain with maximum separation on open-set recognition. We have added the analysis to the supplementary materials. \n\n\n[1] Do We Really Need a Learnable Classifier at the End of Deep Neural Network?\n\n[2] Dissecting Supervised Contrastive Learning.", " We thank the reviewer for highlighting the theoretical motivation and the geometric interpretation of our work and for pointing out the need for further discussion on the limits and broader potential of maximum separation.\n\n### Further discussion on limits of maximum separation\n\nOur approach is currently focused on multi-class settings only, where exactly one label needs to be assigned to each example. Maximum separation also does not naturally generalize to zero-shot settings, as such settings require that classes are represented by semantic vectors that point in similar directions based on semantic similarities. 
We position all classes equally far away from each other, hampering zero-shot generalization.\n\n### Broader potential for unsupervised learning\n\nWhile we focus on the supervised setting, we agree that maximum separation has potential for unsupervised learning as well. Improving the latent space of variational auto-encoders with maximum separation sounds like an intriguing direction. We furthermore see potential in the self-supervised learning paradigm, by enforcing a maximum separation (i.e., full uniformity) between unlabelled samples in a batch. \n\nWe have updated the conclusions with the above discussions.", " We thank the reviewer for their detailed review and thoughtful comments. We discuss the additional analyses and questions below.\n\n### Angular Fisher score analysis \n\nWe report the Angular Fisher Score from Liu et al. [1] in the table below for the CIFAR-10 and CIFAR-100 test sets. We trained a ResNet-32 with the same settings as Table 1 of the paper. For the Angular Fisher Score, lower is better. Across datasets and imbalance factors, the score is lower with maximum separation, providing additional verification of our approach. We have added the Angular Fisher Score analysis to the supplementary materials.\n\n\n| | | CIFAR-100 | | | CIFAR-10 | |\n| --- | --- | --- | --- | --- | --- | --- |\n| | - | 0.1 | 0.01 | - | 0.1 | 0.01 |\n| ResNet-32 | 0.2954 | 0.4958 | 0.7272 | 0.058 | 0.2305 | 0.4141 |\n| + This Paper | **0.1521** | **0.4483** | **0.6952** | **0.055** | **0.1397** | **0.3240** |\n\n\n\n### Comparison to optimization-based separation\nFollowing the reviewer’s suggestion, we have compared our approach to a baseline that obtains class vectors through gradient-based optimization and fixes them afterwards. We compare to the hyperspherical prototype approach of Mettes et al. [3]. We have looked into the class vectors themselves, as well as the downstream performance. For the class vectors, we find that a gradient-based solution has a pair-wise angular variance of over one degree for 100 classes, indicating that not all classes are equally well separated, while we do not have such variability. We have also performed additional long-tailed recognition experiments for our maximum separation approach versus the hyperspherical prototype approach of Mettes et al. [3] with a ResNet-32 backbone. Below are the results for CIFAR-10 and CIFAR-100 for three imbalance ratios:\n\n\n\n| | | CIFAR-100 | | | CIFAR-10 | |\n| --- | --- | --- | --- | --- | --- | --- |\n| | - | 0.1 | 0.01 | - | 0.1 | 0.01 |\n| Mettes et al. | 71.58 | 53.28 | 34.08 | 93.27 | 86.16 | 61.63 |\n| This Paper | **76.23** | **60.54** | **38.85** | **95.09** | **88.16** | **69.70** |\n\nWe conclude that a closed-form maximum separation is preferred for recognition. We have added the comparison to the supplementary materials.\n\n### Relation to orthogonality\n\nWe agree that the inner product approaches zero and angles get closer to being orthogonal as the number of classes $k+1$ becomes larger. Substituting our matrix with a $(k+1) \times (k+1)$ orthogonal matrix (say, an orthogonal basis) still uses only the positive subspace of the hypersphere and is hence not maximally separated. Perhaps a better choice of orthogonal matrix, as indicated by the reviewer from [2], might be useful to get performance similar to our maximum separation. 
Similarly, learning an orthogonal rotation/reflection as in the probabilistic classifiers from [2] would also be an interesting connection to maximum separation for future research.\n\n\n### Number of dimensions \n\nIndeed, the output dimensionality is always one less than the number of classes, hence it scales linearly with the number of classes as well. This is the same for standard cross-entropy, where the number of output dimensions needs to be the same as the number of classes, hence our approach does not add any burden on top of standard softmax cross-entropy optimization. We have furthermore clarified in line 83 that the maximum separation only holds for matrices of size $k \times (k+1)$ for $k+1$ classes. Lastly, we observe that the downstream recognition convergence is similar with and without maximum separation. \n\n[1] SphereFace: Deep Hypersphere Embedding for Face Recognition, CVPR 2017\n\n[2] Orthogonal Over-Parameterized Training, CVPR 2021\n\n[3] Hyperspherical Prototype Networks, NeurIPS 2019", " This paper proposes a closed-form design of the classifier layer, which encourages maximal separation between any two classifiers on a hypersphere. The design is simple and straightforward, and it introduces a recursive algorithm to place $k+1$ classifiers in $k$-dimensional space. The experimental results demonstrate effectiveness in standard visual recognition, long-tailed recognition, OOD detection and open-set recognition. Strengths:\n\n- This paper is well written and structured. I generally enjoy reading this paper and find this idea quite interesting. Placing static classifiers and encouraging their maximal separability is very natural and well motivated. The inductive bias from hyperspherical uniformity is encoded in the classifier layer, which is used to generate corresponding gradients to update the whole network. Due to such a design, I think the learned features should also be quite discriminative (maybe with some sort of angular margin). I would suggest the authors conduct some visualizations or Fisher discrimination analysis (say the angular Fisher score in Appendix E of [1]) to demonstrate such advantages in a more intuitive way.\n\n- The recursive algorithm is neat and efficient. I like its simplicity and the theoretical analysis that motivates it. The analysis is intuitive and should be correct as far as I'm concerned. I am wondering how it compares to gradient-based optimization. For example, you can directly minimize a matrix's hyperspherical energy to obtain its maximally separable column vectors, and then keep them fixed during the network training. Although it may not be the optimal design, I am curious how it compares empirically to the proposed method.\n\n- I believe the static design of the classifier layer is of sufficient interest to the community and is of sufficient novelty, because it may be a potential solution to both encode better inductive bias and address the computational difficulty of training with a million-level number of classes. Besides the proposed static design, I would point the authors to a probabilistic design (in Section 6.4 of [2]) that also pre-specifies a set of fixed classifiers (which are randomly initialized) and then learns an orthogonal matrix that applies to them. I find the same idea can also be used in this paper, meaning that you learn an orthogonal matrix (in the same way as [2]) just to rotate/reflect these maximally separable classifiers. 
This does not change their pairwise distance / similarity, which means their hyperspherical energy stays the same. I think it will further improve the proposed method by granting it more flexibility (while their maximal separability does not change).\n\n- The paper demonstrates the effectiveness of the hyperspherical uniformity inductive bias in a number of recognition experiments. Its generalization ability is well verified. To me, the experiments should also be easily reproducible.\n\n\nWeaknesses:\n\n- While I like the general idea of static classifier design, the proposed method has an obvious weakness: the dimensionality of the classifier scales linearly with the number of classifiers. This may be okay for benchmarks like CIFAR and ImageNet, but it still constrains its practical usage in large categorical training (say extreme classification). I can understand this particular algorithm design (i.e., $k+1$ classifiers in $k$-dimensional space) is due to the fact that static assignment of maximally separable classifiers is very challenging when the dimensionality is way smaller than the number of classifiers. I think this limitation, as a future direction to study, should be explicitly discussed in the paper.\n\n- I appreciate the simplicity of the static classifier design, but the lack of flexibility (learnability) may limit its performance. Learning an orthogonal transformation (or even simpler, learning a rotation matrix) can be beneficial, as I mention in the third point in Strengths.\n\n- As Lemma 1 shows, the inner product approaches zero as $k$ becomes larger. Does it mean that we can use an orthogonal matrix (which is $k\times k$ instead of $k+1 \times k$) to replace the maximally separated matrix? It seems to be an interesting conclusion that one can take advantage of. This is not exactly a weakness, but it could be an interesting connection to orthogonality. Maybe the authors can comment / elaborate on this.\n\n- The definition of the maximally separated matrix is a bit inappropriate in the sense that the equal inner product between any two classifiers does not always hold for matrices of any size. It only holds when the matrix is of the size $(k+1)\times k$. I suggest the authors clarify this in order to avoid confusion.\n\n\nSummary:\n- I am in favor of the core idea (as well as the overall direction in static classifier design) and find this paper a solid work in general. I would vote for clear acceptance given the authors properly address my concerns.\n\n\n[1] SphereFace: Deep Hypersphere Embedding for Face Recognition, CVPR 2017\n\n[2] Orthogonal Over-Parameterized Training, CVPR 2021\n\n- My major questions are given in the \"Strengths And Weaknesses\" section.\n\n- Some visualization of the learned features could really improve the paper. I am also curious about how the learned features will look given a fixed set of classifiers.\n\n- While the classifiers are no longer learned, I am wondering whether the convergence performance will also be improved (since fewer parameters need to be learned)? Some experiments on convergence speed (e.g., iteration vs. testing accuracy) would be nice. For technical limitations, I have discussed them in the \"Strengths And Weaknesses\" section. For potential negative societal impact, I am unaware of any.
The proposed approach maximally separates class vectors, which is cleverly proven to have a closed form (assuming hyperspherical uniformity), and is implementable as a simple multiplication with a fixed matrix. Results are good on conventional tasks (CIFAR, ImageNet), alongside long-tailed and open-set problems. Strengths:\nThe method improves classification in the imbalanced, long-tailed setting, alongside OOD and open-set settings. \nThe approach easily slots in with what appears to be any/most existing approaches, as seen by the comprehensive experiments with various different methods on various problems. \nGood theoretical analysis. It appears correct to the best of my understanding. \nUtilizing a fixed matrix that you do not learn and improving results in a deep learning framework is an interesting approach with some novelty/originality to it. \nThe result of the ablation with learnable prototypes is interesting, highlighting the importance of class separation as a solid inductive prior. \nHow to run the code is very clearly laid out in README.md (albeit with minor mistakes). \nI was able to reproduce similar numbers to a subset of the results (Table 1, AlexNet) using the code with minor fixes. Did not have time to try more.\n\nWeakness:\nDiagrams are confusing and could be better labeled. For example, in Figure 1, it would be most useful to label what is $P_k$ and what is $p_k$. Figure 3 could be much clearer (sub-title each subplot with the amount of imbalance).\nSome figures appear to contain the wrong description, i.e., Figure 4 \"green\" (there is no green). \nThe code was troublesome to get running; many simple errors to fix. \n Can this be applied to other tasks, like segmentation?\nCan an approach like this be modified to work well in an unsupervised setting? I'd be interested in knowing if this work can be applied to other, different tasks, like segmentation, or be applied in other settings, like unsupervised ones. \nThere are obviously improvements with this method, but when I quickly ran the code a couple of times, there seemed to be a large amount of variance in the test performance, especially in the imbalanced setting. Calculating some error bars/confidence intervals would be helpful. ", " The paper proposes to integrate a closed-form structural prior in deep learning architectures, which has a clear geometric interpretation. For any neural network designed for classification, hence having a final softmax layer, the contribution takes the form of a fixed (non-learnable) matrix multiplication before the computation of the softmax. For a classification problem with K classes, the construction of the matrix consists of deriving K vectors on a (K-1)-dimensional hypersphere that are maximally separated. The authors provide a simple recursive procedure to construct such a matrix that only needs to be called once before training. The proposed contribution is evaluated on different learning tasks, showcasing its relevance in various common benchmarks. The main strength of the paper is to provide a simple geometric interpretation of maximum class separation within the context of neural architecture design. Contrary to many engineering tricks used in deep learning to stabilize training or improve performance, the presented contribution does not contain any hyperparameter that needs to be fine-tuned. 
The originality and quality of the proposed contribution lies in the fact that the authors addressed the question of maximum separation from a theoretical point of view, and prove that their matrix design is relevant when followed by a softmax activation function. The recursive procedure used to construct the matrix closely resembles an algorithm that could be used to build an orthonormal basis for a vector space, except that in the presented case, the class vectors are required to have a fixed pairwise scalar product (equal to $-\\frac{1}{K}$) instead of being orthogonal to each other. The experiments are well chosen to prove the significance of the contribution across different challenging learning tasks, and the authors focus on proving that their contribution improves on many baselines, rather than seeking a single state-of-the-art metric in a very specific setting. Additionally, the authors show that making their matrix learnable, either with their initialization or with random initialization, can even degrade performance, thus showing the optimality of their contribution to a certain extent. Such a paper shows that it is possible to make consistent improvements in neural architectures when the chosen operations are motivated by strong theoretical arguments, somehow deviating from the traditional viewpoint that neural networks are differentiable, hence optimizable, black-box functions. One criticism that could be made is that settings where the proposed contribution falls short are only mentioned in one sentence in the conclusion. Since the authors experimented with various learning tasks to showcase the relevance of their contribution, it could be of interest to add information on the limitations of the proposed contribution.\nCan the authors provide examples showcasing such limits? The authors stated that their work is limited to supervised settings, and cannot be applied to settings that require relational information between classes. However, in the case of unsupervised learning with variational auto-encoders, could such a matrix be integrated at the top of the encoder for better latent space covering?\nAnswering such a question could additionally broaden the significance of the contribution, since performing clustering in the latent space of an auto-encoder can provide additional information that can be very valuable in a few-shot learning setting.\nGenerally speaking, even if the construction of the matrix is somehow tied to a classification problem with $K$ classes, the fact that the matrix imposes a geometric structure in the resulting output space should also be beneficial for unsupervised learning.", " In this paper, the authors explore how to achieve maximum class separation by directly adding a pre-calculated layer into deep neural networks. The weights of the layer are solved in closed form by a recursive algorithm. The effectiveness of the maximally separable vectors is verified on various datasets from long-tailed recognition, out-of-distribution detection, and open-set recognition tasks. Strengths\n- The topic of maximum class separation is interesting.\n- The experimental evaluation is adequate.\n- The paper is well written and easy to follow.\n\nWeaknesses\n- Limited novelty and contribution. To my knowledge, maximum class separation has been discussed by a few works [1-2]. Moreover, the proposed method is similar to [1], and the matrix estimation in [1] does not need recursive calculation.\n- Lack of insights from the experiments. 
The experiments mainly focus on performance evaluation. \n\n[1] Do We Really Need a Learnable Classifier at the End of Deep Neural Network?\n\n[2] Dissecting Supervised Contrastive Learning.\n - As shown in Table 1, the performance gap of adding the maximum separation layer over the baseline enlarges on more imbalanced data. However, the phenomenon does not hold in Table 2. More explanation is needed.\n- Why can the proposed method benefit OOD detection and open-set recognition? More explanation and evidence are needed.\n Yes.
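For illustration, the recursive construction these reviews refer to (placing $k+1$ maximally separated unit classifiers in $k$ dimensions, with every pairwise inner product equal to $-1/k$, cf. Lemma 1) admits a short sketch. The snippet below is an illustrative reconstruction from the properties stated in the reviews, not the authors' released code; the function name and base case are our own choices.

```python
import numpy as np

def max_separation_matrix(num_classes: int) -> np.ndarray:
    """Recursively place `num_classes` unit vectors in `num_classes - 1`
    dimensions so that every pairwise inner product is -1/(num_classes - 1)."""
    if num_classes == 2:
        return np.array([[1.0], [-1.0]])              # base case: 2 classes in 1-D
    k = num_classes - 1
    sub = max_separation_matrix(num_classes - 1)      # (k, k-1) sub-simplex
    top = np.concatenate(([1.0], np.zeros(k - 1)))    # first class vector
    rest = np.hstack((np.full((k, 1), -1.0 / k),      # shared first coordinate
                      np.sqrt(1.0 - 1.0 / k ** 2) * sub))
    return np.vstack((top, rest))                     # shape: (num_classes, k)

P = max_separation_matrix(10)                         # e.g., 10 classes in 9-D
gram = P @ P.T
assert np.allclose(np.diag(gram), 1.0)                         # unit norms
assert np.allclose(gram[~np.eye(10, dtype=bool)], -1.0 / 9.0)  # equal separation
```

The fixed matrix is then used as the non-learnable final layer: logits are the product of the backbone features with these class vectors, followed by standard softmax cross-entropy.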
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "gJmnMgirm4", "IH_MvrJaxay", "nXyDOznw3Qh", "VRePyw72Ud9u", "82MnAD7OhJb", "0D1_NLqYv2Z", "cZ1jwnDsIJK", "yPm3kip21hv", "Og87ZAxhqc", "nips_2022_MbVS6BuJ3ql", "nips_2022_MbVS6BuJ3ql", "nips_2022_MbVS6BuJ3ql", "nips_2022_MbVS6BuJ3ql" ]
nips_2022_kcQiIrvA_nz
Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection
Deep neural networks (DNNs) have demonstrated their superiority in practice. Arguably, the rapid development of DNNs largely benefits from high-quality (open-sourced) datasets, based on which researchers and developers can easily evaluate and improve their learning methods. Since data collection is usually time-consuming or even expensive, how to protect dataset copyrights is of great significance and worth further exploration. In this paper, we revisit dataset ownership verification. We find that existing verification methods introduce new security risks in DNNs trained on the protected dataset, due to the targeted nature of poison-only backdoor watermarks. To alleviate this problem, we explore the untargeted backdoor watermarking scheme, where the abnormal model behaviors are not deterministic. Specifically, we introduce two dispersibilities and prove their correlation, based on which we design the untargeted backdoor watermark under both poisoned-label and clean-label settings. We also discuss how to use the proposed untargeted backdoor watermark for dataset ownership verification. Experiments on benchmark datasets verify the effectiveness of our methods and their resistance to existing backdoor defenses.
Accept
This paper proposes a method to verify unauthorized use of open-sourced datasets. The idea is to inject verifiable backdoor watermarks. The authors first show that existing backdoor watermarks can be exploited by adversaries for attacks. They then propose novel untargeted backdoor watermarking techniques that are both effective and harmless in poisoned-label (UBW-P) and clean-label (UBW-C) settings. A malicious network trained on the watermarked dataset may predict randomly for watermarked test data and correctly for clean test data, so it is possible to verify unauthorized use from the difference between the two predictions for watermarked and clean test data. The reviewers agree that the proposed untargeted watermarks are useful and the problem being studied is interesting. The authors are suggested to address the remaining concerns of the reviewers, such as whether random classification is better than the previous guided misclassification for verifying malicious users.
train
[ "22JMt-5jpDV", "DYRzi32DHwT", "Mc13g0wBN0J", "YgPwpzn_Hna", "dogSr_xa9wX", "4R-oMjbbsAp", "fywNmFsKS3wq", "fTdLNGcv2R", "crzzszRL1p", "hHfuFYGTqmn5", "N9GwcFBj9Ao", "qPv0iys0-g", "J_Iv2zvxl8qt", "9vicxjS6K46", "i95KYOnJosS", "h6oQ97N-nOb", "UFbJ3K9pyXs", "k4paZC8xhMo", "D-iNlUn2-Y2", "9seYzqxktg", "FMh7tMDk-J", "y4iYxYEDVp", "LZhPO3wkIMw", "V1sGQYEAHvi", "1TMxds3AJy", "-Kmbh0Db43B", "kMEHzeBAwz1", "fR3qiUq7ISY", "RwEBL1uhaKo" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " There are no ethical issues in my opinion. There are no ethical issues in my opinion. There are no ethical issues in my opinion.", " Thank you for your recognition of our discussions and kind explanations. We do respect your decision and are willing to wait for your final score after the Reviewer-Metareviewer discussion ends. However, just for a warm notification, we think you may have some misunderstandings about the reviewing procedures. (PS: We have served as the reviewer of NeurIPS and ICLR on OpenReview many times and joined Reviewer-Metareviewer discussions multiple times.)\n\nIn general, reviewers will change their pre-rebuttal score before the author-reviewer discussion period ends, if they think the authors have addressed their main concerns. In particular, this score is not the final score you thought, since reviewers can still change their scores during the Reviewer-Metareviewer discussion. This updated score is a reflection of your attitude toward the paper after the author-reviewer discussion. Otherwise, other reviewers cannot fully know what you think and therefore may not have an effective Reviewer-Metareviewer discussion.\n\nHowever, as we mentioned before, we sincerely thank you for helping us to improve our work and totally respect your decision of updating the score later :) ", " Thank you for your kind and patient replies, and it was very enjoyable discussion. Most of my concerns are addressed, and I know that I had some misunderstand. But, it is hard to definitively mention about my final score now because of a remained review preocedure. As I know, we will have Reviewer-MetaReviewer discussion period until tomorrow, and I will fix score considering your replies, and discussion during the period (unless this is my misconstruction about review procedure...).\n\nI'm sorry I can't give you a definite answer.", " Please accept our appreciation for your positive feedback on our rebuttal and further insightful questions and comments. We are encouraged that you finally recognize our UBW methods.\n\nWe believe that your current negative score is mostly due to the misunderstands that our previous paper version may cause you. We sincerely thank you for your valuable time and comments, which greatly help us for improving our work. However, we think we may have addressed (most of) your main concerns. We would be very grateful if you can kindly update your score based on our clarifications and discussions. We are also happy to address your further questions and concerns before the rebuttal ends.", " \n---\n**Q6**: I'm not clearly understand about this comment \"Our UBWs are harmless since the abnormal model behaviors are not deterministic.\" Any malicious user cannot have watermarked testset anyway, so I think they cannot distinguise whether their model works deterministically or randomly.\n\n**R6**: Thank you for this question and we do understand your concern. Firstly, we are deeply sorry for the misunderstandings that this sentence may cause you. **The latent adversary in this sentence is not the malicious dataset users you thought**. Instead, we wanted to indicate that the **dataset owners may attack DNNs trained on their released dataset (with their pre-defined watermarks) if they are malicious**. In this case, the targeted backdoor watermarks may be very harmful since abnormal model behaviors are deterministic. 
We will add more details and explanations in the introduction and the proposed method in our revision.\n\n---\n**Q7**: Even though I thought as Q5-6, I think this approach is very interesting. As I thought, this random prediction by watermark is proper to password on treat model rather than dataset watermarking. If a treat model trained by the watermarked dataset works well on only image with trigger, and works poorly on clean image, it would be very useful.\n\n\n**R7**: Thank you for your recognition and interesting perspective! Using a backdoor watermark as the password to protect DNNs is a promising research topic. Intuitively, it can be treated as the dual of our discussed problem since we have reverse goals regarding the inference process. It seems that we can also formulate this problem as a bi-level optimization and solve it with similar techniques adopted in this paper. However, it is out of the scope of this paper. We will discuss it in our future work.\n\n\n---\n**Note**: We believe that your current negative score is mostly due to the misunderstands that our previous paper version may cause you. We sincerely thank you for your valuable time and comments, which greatly help us for improving our work. However, we think we may have addressed (most of) your main concerns. We would be very grateful if you can kindly update your score based on our clarifications and discussions. We are also happy to address your further questions and concerns before the rebuttal ends.\n\n---", " ---\n**Q4**: In Appendix G, the authors only compared UBW-C and UBW-P with only BadNets using easy trigger, and I accept that the proposed is better than BadNets thanks to untargeted property. As I mentioned, I think clean-label is necessary, so I focused on clean-labeled approaches. Sleeper Agent paper also provided results after defense methods including neural cleanser. According to Sleeper Agent, the Neural Cleanser is not so good for the detection of any of the backdoored classes. Therefore, I think the comparison of UBW-C and clean-labeled approaches in image-level similarity and results of defense are necessary.\n\n\n**R4**: Thank you for this constructive suggestion! We compared our methods with BadNets simply because it can be detected by almost all defenses. Accordingly, we can use its results for reference to better illustrate why our watermarks are resistant to the discussed defenses. However, we do understand your concerns and conduct additional experiments to verify whether Sleeper Agent and label-consistent backdoor attack are also resistant to potential defense methods. The results are summarized as follows:\n\n- The Resistance to Trigger Synthesis based Defenses: As you mentioned, **Sleeper Agent is also resistant to trigger synthesis based defenses, whereas label-consistent attack is not.** This is mostly because the trigger patterns used in the training process of Sleeper Agent are sample-specific, whereas that of label-consistent attack is sample-agnostic.\n- The Resistance to Saliency-based Defenses: **Both Sleeper Agent and label-consistent backdoor attacks can be detected by saliency-based defenses** since their trigger patterns used in the inference process are both sample-agnostic and both of them are targeted. 
Note that the trigger pattern adopted for Sleeper Agent in the inference process is sample-agnostic, although those used in the training process are sample-specific.\n- The Resistance to STRIP: **Both Sleeper Agent and label-consistent backdoor attacks can be detected by saliency-based defenses** since their trigger patterns used in the inference process are both sample-agnostic and both of them are targeted. Note that Sleeper Agent may resistant to STRIP to some extent, if random position mode is adopted.\n \n\n\nWe will add more details and discussions in Appendix G in our revision. \n\n---\n**Q5**: For my mention about Gaussian Noise, it was not for devaluation or blame. If you felt uncomfortable, I'm sorry for that mention. I meant that this random prediction works similarly to Gaussian Noiseinjection, and I have a question about evidence capacity comparing targeted prediction. If I were a malicious user, I would insist that this misclassification is due to noise, and Gaussian Noise can lead to random misclassification. Of course, I agree with your reply \"Adding small random Gaussian noises to benign images will not significantly change model predictions.\", but the malicious user may insist on that. And, a judge cannot conclude this is unauthorized use of the watermarked dataset because of the random noise case. So, I'm worried about a clear distinguish of misclassifications by this watermark, or random noise. For targeted prediction case, the predictions are guided toward dataset owner's pre-defined class, so it is better in evidence ability as I thought.\n\n**R5**: Thank you for your detailed explanations and insightful comments! We fully understand your concerns now. We agree that targeted watermarks are easier for distinction in the verification stage. We argue that untargeted watermarks are also distinctive (and therefore are also practical), as follows:\n- We use one trigger pattern (e.g., white-black square) instead of different (random) noises to generate watermarked testing samples for verification. **It is unlikely that using a specific trigger pattern can shift the predictions of many different testing images if the suspicious model is not watermarked**.\n- We notice that the dataset owners have the benign version of their released watermarked dataset. Accordingly, **dataset owners can train DNNs on the benign dataset and show that the trigger pattern cannot change their predictions to refute the insistence of malicious users**.\n\nThank you again for this insightful question. We will add more discussions in the appendix of our revision. \n\n---\n", " We greatly appreciate your positive feedback on our rebuttal and further insightful questions and comments. We are encouraged that you finally recognize our UBW methods. Please kindly find our explanations about your remaining concerns as follows:\n\n---\n**Q1**: For BA drops, I agree on that the UBW-P is not much harmful, but I'm still have concerns about UBW-C. It drops about 6% of accuracy on CIFAR10 experiments.\n\n**R1**: Thank you for your comment and we do understand your concerns. We need to notice that **there is a trade-off between the BA and the ASR to some extent**, due to the poisoning rate $\\gamma$ that users may adopt. As we illustrated in Figure 4 in our main manuscript, the BA decreases while the ASR increases with the increase of $\\gamma$. We adopt $\\gamma=10\\%$ simply following the classical setting used in backdoor-related works. 
The BA drop is no more than 3% while the ASR is higher than 50% for both UBW-P and UBW-C if we set $\\gamma=2\\%$ on the CIFAR-10 dataset. We will further explore how to better balance the BA and ASR in our future work.\n\n---\n**Q2**: According to Appendix, change of poisoning ratio, sample-specific trigger, and alternative optimization are stated. However, I'm confused about the sample-specific trigger. As I know, data poisoning approaches doesn't use many triggers, I expect that only a trigger per class is needed.\n\n**R2**: Thank you for this insightful question! In general, **using sample-specific trigger patterns are more effective and stealthy**, compared to using sample-agnostic one. Specifically,\n- **Effectiveness**: From the perspective of optimization, using sample-specific triggers introduces more variables for optimization and therefore makes the attack more effective. This benefit is especially critical for clean-label backdoor attacks since they are significantly more difficult compared to poisoned-label backdoor attacks. Specifically, the 'robust features' related to the target label will hinder the learning of backdoor triggers. This is the main reason why Sleeper Agent is more effective than label-consistent backdoor attack.\n- **Stealthiness**: As illustrated in [1, 2], most of backdoor defenses (e.g., Neural Cleanse) were designed based on a latent assumption that the trigger pattern is sample-agnostic. Accordingly, attacks with sample-specific triggers can easily bypass them since they break their fundamental assumption.\n\n\nBesides, we need to notice that the perturbations of modified images in advanced data poisoning (e.g., [3]) are also sample-specific. We will add more details and discussions in the appendix of our revision.\n\n\nReferences\n1. Invisible Backdoor Attack with Sample-Specific Triggers. ICCV, 2021.\n2. Input-Aware Dynamic Backdoor Attack. NeurIPS, 2020.\n3. Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching. ICLR, 2021.\n\n\n---\n**Q3**: I'm sorry for repeat of this question even authors provided results on TinyImageNet, but results on TinyImageNet is not included in Sleeper Agent and Label-consistent. Then, about 92% BA is reported in Sleeper Agent on CIFAR10 and ResNet18, but this paper reports 86.99% BA. For CIFAR10, this 5% is not small. Could you explain the modification of reimplementation in detail?\n\n**R3**: Thank you for your question and we do understand your concern. Firstly, we are deeply sorry for the misunderstandings that our response may cause you. We did not intend to modify the implementation of Sleeper Agent. The point we were trying to make during our previous rebuttal was that we may have some differences in the implementation, even though we reproduced it based on its official codes and original paper. Besides, as we mentioned in our last rebuttal, we use a different trigger pattern, which also leads to different results. \n\n---\n", " Thanks for your reply, and some of my concerns are addressed.\n\nBasically, I think it is more important to compare with clean-labeled attacks. Actually, the poison-label attacks can be detected by human eye without defence methods. In my opinion, clean-label is necessary to be used in practice, \n\n1) For BA drops, I agree on that the UBW-P is not much harmful, but I'm still have concerns about UBW-C. It drops about 6% of accuracy on CIFAR10 experiments. Then, authors mentioned that they reimplemented Sleeper Agent for fair comparison. 
According to Appendix, change of poisoning ratio, sample-specific trigger and alternative optimization are stated. However, I'm confused about the sample-specific trigger. As I know, data poisoning approaches doesn't use many triggers, I expect that only a trigger per class is needed. I'm sorry for repeat of this question even authors provided results on TinyImageNet, but results on TinyImageNet is not included in Sleeper Agent and Label-consistent. Then, about 92% BA is reported in Sleeper Agent on CIFAR10 and ResNet18, but this paper reports 86.99% BA. For CIFAR10, this 5% is not small. Could you explain the modification of reimplementation in detail?\n\n2) In Appendix F, authors only compared UBW-C and UBW-P with only BadNets using easy trigger, and I accept that the proposed is better than BadNets thanks to untargeted property. As I mentioned, I think the clean-label is necessary, so I focused on clean-labeled approaches. Sleeper Agent paper also provided results after defence methods including neural cleanser. According to Sleeper Agent, the Neural Cleanser is not so good for detection of any of the backdoored classes. Therefore, I think comparison of UBW-C and clean-labeled approaches in image-level similarity, and results of defence. \n\n3) For my mention about Gaussian Noise, it was not for devaluation or blame. If you felt uncomfortable, I'm sorry for that mention. I meant that this random prediction works similarly to Gaussian Noiseinjection, and I have a question about evidence capacity comparing targeted prediction. If I were a malicious user, I would insist that this misclassification is due to noise, and Gaussian Noise can lead random misclassification. Of course, I agree on your reply \"Adding small random Gaussian noises to benign images will not significantly change model predictions.\", but malicious user may insist as that. And, a judge cannot conclude this is unauthorized use of the watermarked dataset because of the random noise case. So, I'm worry about clear distinguish of misclassifications by this watermark, or random noise. For targeted prediction case, the predictions are guided toward dataset owner's pre-defined class, so it is better in evidence ability as I thought.\n\n\n4) I'm not clearly understand about this comment \"Our UBWs are harmless since the abnormal model behaviors are not deterministic.\" Any malicious user cannot have watermarked testset anyway, so I think they cannot distinguise whether their model works deterministically or randomly. \n\n\n5) Even though I thought as 3) and 4) , I think this approach is very interesting. As I thought, this random prediction by watermark is proper to password on treat model rather than dataset watermarking. If a treat model trained by the watermarked dataset works well on only image with trigger, and works poorly on clean image, it would be very useful.", " Thank you again for your valuable time, comments, and suggestions, which greatly help to improve the quality of our paper. Your recognition of our paper also encourages us a lot. Looking forward to your decision on the final score :)", " Thanks for your answer. It addressed my concern.\nI will keep my Accept decision, but the score I will decide later.", " \n---\n**Q4**: Table 1 and 2 show that the proposed achieves higher ASRs, but don't show the harmlessness. At table 1 and 2, UBW reports similar BA to targeted approaches'. 
Also, Table 2 in reply shows almost 20% drop in large-scale dataset, and 20% drop is not neglectable.\n\n**R4**: Thank you for your questions and we believe there are some misunderstandings here.\n- Table 1-2 provided in our previous response are used to show that our methods are still effective under different model structures and on large-scale datasets. We did not intend to verify that our methods are harmless by showing these tables.\n- We are deeply sorry for the misunderstandings that our response may cause you. Specifically, **our Table 2 in the previous rebuttal may mislead you to think that our UBW-C causes approximately 20% BA drop on the large-scale dataset, compared with our UBW-P**. However, as we mentioned in the previous R5, **the UBW-P was trained on the whole ImageNet while UBW-C was trained on Tiny-ImageNet** (due to the limitation of time and computational resources). Accordingly, comparing their results are meaningless. To verify that our methods (UBW-P and UBW-C) will not significantly reduce the BA of the watermarked models, we provide additional results as follows:\n\nTable 1. The performance of our UBW-P on the whole ImageNet dataset.\n| Attack$\\downarrow$, Metric$\\rightarrow$ | BA | ASR-A | ASR-C | $D_p$ |\n|:---------------------------------------:|:------:|:------:|:------:|:-------:|\n| No Attack | 72.29 | NA | NA | NA |\n| UBW-P | 71.36 | 50.00 | 42.56 | 1.8346 |\n\n\nTable 2. The performance of our UBW-C on the Tiny-ImageNet dataset.\n| Attack$\\downarrow$, Metric$\\rightarrow$ | BA | ASR-A | ASR-C | $D_p$ |\n|:---------------------------------------:|:------:|:------:|:------:|:-------:|\n| No Attack | 54.04 | NA | NA | NA |\n| UBW-C | 51.56 | 88.00 | 86.54 | 2.9871 |\n\n\n\nSorry again for the misunderstandings that our rebuttal may cause you and we will add more details and discussions in the appendix of our revision. \n\n---\n**Q5**: For Sleeper Agent, table 1 reports much different performances comparing their original paper. As described Table 2 and 3 of Sleeper Agent original paper, it achieved much better BA/ASR for CIFAR10 and ResNet18 (the same condition to this paper). The only difference is ratio of budget for poisoning.\n\n\n\n**R5**: Thank you for this question and we do understand your concerns about whether the comparisons are fair. As we illustrated in Appendix, the optimization process of our UBW-C has some similarities to that of the Sleeper Agent. Accordingly, **we modified the codes of Sleeper Agent to implement our UBW-C for ensuring a fair comparison**. However, we found that the codes of Sleeper Agent are too structive for modification and therefore we have to re-implement its codes. The re-implementation process may introduce some differences and therefore cause different results. Besides, **we used a different trigger pattern**, compared to the one used in the original paper of Sleeper Agent. It also leads to different results.\n\n---", " We greatly appreciate your feedback on our rebuttal and the further insightful questions and comments. Please kindly find our explanations about the remaining concerns as follows:\n\n---\n**Q1**: For the untargeted attack's advantage, I agree that it is more stealthy; however, this benefit only applies if a malicious user can access to poisoned testset. If I were a dataset distributor, and I applied poisoning on my dataset, I would hide my poisoned test data. 
Therefore, malicious user cannot filter out by analyzing predictions of poisoned data, they can filter out only by analyzing input data.\n\n**R1**: Thank you for this comment and we do understand your concerns. However, we are deeply sorry for the misunderstandings that our response may cause you. In our previous R1, we argued that our methods can naturally bypass some backdoor detections (e.g., Neural Cleanse and Spectral Signatures) due to their untargeted nature and therefore are more stealthy. We notice that both **Neural Cleanse and Spectral Signatures are detection methods used for filtering poisoned training samples instead of poisoned testing samples**. Accordingly, this benefit of stealiness does not require the dataset owner to release their poisoned testset. \n\nSorry again for the misunderstandings that our rebuttal may cause you and we will add more details and discussions in the appendix of our revision. \n\n\n---\n**Q2**: Untargeted approach makes hard to verify watermarked DNN. I think this untargeted poisoning is not much different from Gaussian random noise injection which is small but can change predictions randomly. Therefore, I believe it is preferable for the dataset owner to adopt a controllable method. \n\n**R2**: Thank you for these comments and we are deeply sorry for the misunderstandings that our paper or response may cause you.\n- Firstly, we respectfully disagree that our work is not much different from Gaussian random noise injection which is small but can change predictions randomly. **Adding small random Gaussian noises to benign images will not significantly change model predictions**. Instead, using UBW triggers can activate the hidden backdoors of watermarked models and therefore change predictions randomly. In addition, **the trigger patterns used for backdoor activation are pre-defined and therefore more controllable** (compared to random noises). \n- Secondly, as we mentioned in R1 of our previous rebuttal, although they are easier for watermarking and verification, **targeted attacks will introduce new threatening security risks** since the adversaries can determine the predictions of malicious samples.\n\nAccordingly, the untargeted backdoor watermarking is practical.\n\n\n---\n**Q3**: Untargeted approach makes hard to verify watermarked DNN. Due to the difficulty for verification, a new metric, Dispersibility, is proposed, but I think that authors should suggest practical advantages of random classification for poisoned images comparing targeted classification.\n\n**R3**: Thank you for this constructive suggestion! \n- Our UBWs are harmless since the abnormal model behaviors are not deterministic. As such, they are more likely to be used in practice.\n- Our UBWs are more stealthy since they can naturally bypass many backdoor detection methods. Accordingly, the malicious dataset users can hardly notice our watermark helping us to keep the watermark in trained models.\n\n\n---", " We greatly appreciate your positive feedback of our rebuttal and the further insightful question. Please kindly find our explanations about the remaining concern below.\n\n---\n**Q1**: In Tables 1 & 2, BA dropped a lot for both targeted and untargeted watermarks. I think the authors picked a training configuration that prioritized ASR over BA. However, if BA is too low, the model will not be picked to use in practice. 
I wonder if the authors can try other configurations that guarantee the BA drop no more than 2% and check if the watermarked models are still verifiable?\n\n**R1**: Thank you for this insightful question and we do understand your concern. Firstly, we are deeply sorry for the misunderstandings that our response to Reviewer RZea (Table 1-2) may cause you. Specifically, **our Table 2 in the rebuttal may mislead you to think that our UBW-C causes approximately 20% BA drop on the large-scale dataset, compared with our UBW-P**. However, as we mentioned in R5, **the UBW-P was trained on the whole ImageNet while UBW-C was trained on Tiny-ImageNet** (due to the limitation of time and computational resources). Accordingly, comparing their results are meaningless. To verify that our methods (UBW-P and UBW-C) will not significantly reduce the BA of the watermarked models, we provide additional results as follows:\n\nTable 1. The performance of our UBW-P on the whole ImageNet dataset.\n| Attack$\\downarrow$, Metric$\\rightarrow$ | BA | ASR-A | ASR-C | $D_p$ |\n|:---------------------------------------:|:------:|:------:|:------:|:-------:|\n| No Attack | 72.29 | NA | NA | NA |\n| UBW-P | 71.36 | 50.00 | 42.56 | 1.8346 |\n\n\nTable 2. The performance of our UBW-C on the Tiny-ImageNet dataset.\n| Attack$\\downarrow$, Metric$\\rightarrow$ | BA | ASR-A | ASR-C | $D_p$ |\n|:---------------------------------------:|:------:|:------:|:------:|:-------:|\n| No Attack | 54.04 | NA | NA | NA |\n| UBW-C | 51.56 | 88.00 | 86.54 | 2.9871 |\n\n\nBesides, as you mentioned, there is a trade-off between the BA and the ASR to some extent, due to the poisoning rate $\\gamma$ that users may adopt. As we illustrated in Figure 4 in our main manuscript, the BA decrease while the ASR increase with the increase of $\\gamma$. The BA drop is no more than 3% while the ASR is higher than 50% for both UBW-P and UBW-C if we set $\\gamma=2\\%$ on the CIFAR-10 dataset. \n\nSorry again for the misunderstandings that our rebuttal may cause you and we will add more details and discussions in the appendix of our revision. \n", " Thanks to the authors for your response. The answers addressed my questions pre-rebuttal.\n\nHowever, I am concerned about one issue raised by another reviewer. In Tables 1 & 2, BA dropped a lot for both targeted and untargeted watermarks. I think the authors picked a training configuration that prioritized ASR over BA. However, if BA is too low, the model will not be picked to use in practice. I wonder if the authors can try other configurations that guarantee the BA drop no more than 2% and check if the watermarked models are still verifiable?\n\n", " Thank you for your reply. It is much helpful to understand.\n\nBut, I have some remained questions. \n\n1) For the untargeted attack's advantage, I agree that it is more stealthy; however, this benefit only applies if a malicious user can access to poisoned testset. If I were a dataset distributor, and I applied poisoning on my dataset, I would hide my poisoned test data. Therefore, malicious user cannot filter out by analyzing predictions of poisoned data, they can filter out only by analyzing input data.\n\n2) Also, untargeted approach makes hard to verify watermarked DNN. For targeted approaches, they can verify by counting poisoned images that is classified as the target class. However, this approach have to count misclassified without any guided target class. 
I think this untargeted poisoning is not much different from Gaussian random noise injection that is small but can change predictions randomly. Therefore, I believe it is preferable for the dataset owner to adopt a controllable method.\nDue to this difficuly to verify, a new metric, Dispersibility, is proposed, but I think that authors should suggest a practical advantages of random classification for poisoned images comparing targeted classification. \n\n\n3) I'm still unsure about harmlessness. However, Table 1 and 2 show that the proposed achieves higher ASRs, but don't show the harmlessness. \nAt table 1 and 2, UBW reports similar BA to targeted approaches'. Also, Table 2 in reply shows almost 20% drop in large-scale dataset, and 20% drop is not neglectable.\n\n\n\n4) For Sleeper Agent, table 1 reports much different performances comparing their original paper. As described Table 2 and 3 of Sleeper Agent original paper, it achieved much better BA/ASR for CIFAR10 and ResNet18(the same condition to this paper). The only difference is ratio of budget for poisoning. \n\n", " Please accept our appreciation for your valuable comments, and in particular for recognizing the strengths of our paper in terms of good motivation, novelty, well-designed methods and metrics, effectiveness, and resistance to standard backdoor defenses. \n\nPlease kindly let us know if our response and the new experients have properly addressed your concerns. We are happy to address them before the rebuttal ends.", " Thank you so much for the positive feedback! It encourages us a lot.", " I would like to thank the authors for their detailed feedback. The authors have addressed all my concerns. I read the author's rebuttal and comments from other reviewers. I believe that the proposed methods are novel and elegant. I think it will also have high impact on dataset protection and image watermarking, which are important research areas. Accordingly, I increase my score to 8.", " ---\n**Q5**: Only label-consistent watermarking and Sleeper Agent were compared. I think it is necessary to compare other recent works such as Radioactive Data which is similar to the proposed method, or various backdoor attack/data poisoning approaches. \n\n**R5**: Thank you for these comments and we do understand your concerns.\n- Firstly, we have to admit that we failed to find that Radioactive Data was also designed for dataset ownership verification. We sincerely thank you for pointing it out. After reading backdoor-embedding-based dataset watermarking (BEDW), radioactive data (RD), and papers that cited them, we can confirm that only BEWD and RD claimed that they can be used for dataset ownership verification. We will add RD to our related work in the revision.\n- Compared with RD, our UBW requires fewer user capacities and therefore is more practical. Specifically, RD is model-dependent since it requires users to have a fixed and known feature extractor for generating radioactive data. Besides, RD requires to have the prediction vectors or even the model source files for ownership verification, whereas we only need the probability in the predicted label. 
Accordingly, our method can even be generalized to the scenario that users can only obtain the predicted labels (by examining whether poisoned images have different predictions compared with their benign version) whereas RD cannot.\n- We mainly compared our UBW-C with label-consistent watermarking and Sleeper Agent, since they were the most representative and probably the only backdoor attacks under the clean-label setting. We have also compared with other backdoor attacks, including BadNets, Blended, and WaNet, under the poisoned-label setting.\n- However, we do understand your concerns that data poisoning may also be adapted for dataset watermarking since it can also introduce distinctive prediction behaviors. We have compared our methods with it in Section I of the appendix (Line 236-252, page 9-10). For example, the (advanced) data poisoning is also targeted and therefore is more harmful compared to our UBW. \n\n\nWe will add more details and discussions in Section I (Connections and Differences with Related Works) of the appendix in the revision.\n\n---\n**Q6**: In Eq. 6 and 7, f(.) means NN architecture. Is it necessary to know architecture of malicious model?\n\n**R6**: Thank you for this insightful question! Following the classical settings of bi-level-optimization-type backdoor attacks (*e.g.*, LIRA and Sleeper Agent), we report the results of attacking DNN with the same model architecture as the one used for generating poisoned samples. However, we do understand your concern about the transferability across different model architectures of our UBW-C. As shown in Table 3, our UBW-C has high transferability and therefore our method is practical in protecting released datasets.\n\nTable 3. The performance of our UBW-C with different model architectures trained on the poisoned dataset generated with ResNet-18.\n| | ResNet-18 | ResNet-34 | VGG-16-BN | VGG-19-BN |\n|:-----:|:---------:|:---------:|:---------:|:---------:|\n| BA | 86.99 | 87.34 | 86.83 | 88.55 |\n| ASR-A | 87.56 | 78.89 | 75.80 | 74.30 |\n\n---", " ---\n**Q3**: Table 1 and 2 show that the proposed highly drops the benign accuracy from 92.53% to 86.99% for CIFAR10, and 67.3% to 59.6% for ImageNet subset. It seems that the proposed watermark is harmful.\n\n**R3**: Thank you for your comments and we do understand your concerns. We admit that our UBW-C has some decreases in benign accuracy, compared with the one trained on the benign dataset. This side effect is mostly due to the optimization process of bi-level optimization, which is relatively difficult in practice. We observed similar phenomena in methods which are also based on bi-level optimization (*e.g.*, Sleeper Agent and LIRA). We use the standard bi-level optimization with minimal modifications to make our methods and contributions more explicit. We believe that the decrease will not significantly reduce its practicality. Our next step is to explore how to better balance BA, ASR, and dispersibility. We hope that this acceptable BA gap will not eliminate our contributions and impacts as early work (with insights and theoretical supports) in this important field.\n\n---\n**Q4**: Only ResNet-18 and small-scale datasets(CIFAR10 and ImageNet subset) were used in evaluation. \n\n**R4**: Thank you for your comments and we do understand your concerns. \n- We adopt ResNet-18 simply following the classical settings in backdoor-related methods. However, we do understand your concern about whether our methods are still effective with different model structures. 
To verify it, we evaluate our methods with VGG. As shown in Table 1, **our methods can still reach promising performance with different model structures**, although the performance may have some fluctuations. \n- We adopt the ImageNet subset (with 50 classes) instead of the whole ImageNet due to the limitation of computational resources. However, we do understand your concern about whether our methods are still effective on large-scale datasets. To alleviate your concern, we train our UBW-P on the whole ImageNet dataset for only 30 epochs with the pre-trained model due to the limitation of time and computational resources. Since UBW-C takes more epochs due to the generation process of poisoned samples, we train it on the Tiny-ImageNet dataset (with 200 classes) due to the limitation of time and computational resources. As shown in Table 2, **our methods are still effective on large-scale datasets to some extent**. We notice that users can obtain better performance on the ImageNet dataset by training the model more epochs, especially training from the scratch.\n\nWe will add more details and discussions in the appendix of our revision.\n\n\nTable 1. The performance of our UBW with different model structures on CIFAR-10.\n| Model$\\downarrow$ | Attack$\\downarrow$, Metric$\\rightarrow$ | BA | ASR-A | ASR-C | $D_p$ |\n|:---------:|:---------------------------------------:|:------:|:------:|:------:|:-----:|\n| ResNet-18 | UBW-P | 90.59 | 92.30 | 92.51 | 2.2548 |\n| ResNet-18 | UBW-C | 86.99 | 89.80 | 87.56 | 1.2641 |\n| VGG-16 | UBW-P | 91.25 | 88.20 | 86.46 | 2.0244 |\n| VGG-16 | UBW-C | 87.20 | 78.21 | 74.34 | 0.9875 |\n\n\n\nTable 2. The performance of our UBW on large-scale datasets.\n| Attack$\\downarrow$, Metric$\\rightarrow$ | BA | ASR-A | ASR-C | $D_p$ |\n|:---------------------------------------:|:------:|:------:|:------:|:-------:|\n| UBW-P | 71.36 | 50.00 | 42.56 | 1.8346 |\n| UBW-C | 51.56 | 88.00 | 86.54 | 2.9871 |\n\n\n---\n", " We sincerely thank you for your valuable time and comments. We are encouraged by your positive comments on the specialty in our promising untargeted attack. We are deeply sorry for the misunderstandings that our paper may cause you. Please kindly find our clarifications below to your concerns.\n\n---\n**Q1**: I cannot be sure about strength of the untargeted watermarking. What is better than targeted one? In Fig 2, only difference between target label and random label in prediction.\n\n**R1**: Thank you for this insightful question! Why we need the untargeted backdoor watermark is one of the core motivations of this paper and we are deeply sorry that we failed to make you fully understand it. The detailed strengths of our UBW are as follows:\n- In short, **our untargeted backdoor watermark is harmless compared to the existing targeted backdoor watermarks**. Existing targeted backdoor watermarks introduce new security threats in the model since the backdoor adversaries can determine model predictions of malicious samples. In contrast, the predictions generated by DNNs watermarked by our UBW are dispersible and therefore the adversaries cannot explicitly control model predictions. \n- **This harmlessness is necessary for both dataset owners and dataset users**. For the dataset users, they can use the watermarked dataset without fear of being attacked by the dataset owners who know how to activate the hidden backdoor; For the dataset owners, more users are willing to use their dataset. They are also excluded from suspicion when the models are attacked. 
\n- **Our methods can naturally bypass some backdoor detections** (*e.g.*, Neural Cleanse and Spectral Signatures) due to their untargeted nature. Accordingly, our methods are more stealthy, compared to targeted attacks.\n\nWe are deeply sorry for the ambiguity that our paper may cause you again. We will add more details in both our introduction and the proposed method in the revision.\n\n---\n**Q2**: Dispersibility occupies large portion of this paper, but I don't undestand necessity of it. It isn't used for training generator in Eq. 6 and 7, and used only as evaluation metric. However, I think it is entropy-based ASR instead of accuracy-based ASR, but they ar not much different.\n\n**R2**: Thank you for your question and we do understand your concerns. In our paper, we define three dispersibilities, including averaged prediction dispersibility ($D_p$), averaged sample-wise dispersibility ($D_s$), and averaged class-wise dispersibility ($D_c$). Why we need them is one of the core motivations of this paper and we are deeply sorry that we failed to make you fully understand it. Here we will further explain their necessities.\n- As we illustrated in the aforementioned R1, **the averaged prediction dispersibility ($D_p$) is necessary for harmless dataset watermarking**. This is why we include dispersibility as one of our watermark's goals in Section 3.2 (Line 146-156, Page 4) and treat it as one of the evaluation metrics.\n- However, as we explained in Section 3.4 (Line 177-179, Page5), **$D_p$ is non-differentiable and therefore cannot be optimized directly in UBW-C**. Accordingly, we introduce $D_s$ and $D_c$ as two differentiable surrogate dispersibilities to alleviate this problem.\n- Once we have $D_s$ and $D_c$, the remaining problem is how to design our UBW-C. According to our Theorem 1, **we can optimize the averaged sample-wise dispersibility $D_s$ and the class-wise dispersibility $D_c$ simultaneously by only maximizing $D_s$**. This is why we only include $D_s$ in Eq. (6). \n- In Eq. (6), **the entropy is defined on the prediction vector of poisoned images** where their target and ground-truth labels are not involved. As such, it is not a simply (and trivial) entropy-based ASR as you thought. \n\nGiven the aforementioned reasons, we can conclude that the design and theorem of dispersibilities are closely related to our methods and our methods are not the simple entropy-based ASR extensions. We are deeply sorry again for the ambiguity that our paper may cause you. We will add more details in the proposed method (Section 3.2-3.4) in the revision. \n\n---\n", " \n---\n**Q3**: The authors should verify the proposed watermarking methods under dataset-based backdoor defenses such as Spectral Signatures and Activation Clustering.\n\n**R3**: Thank you for this constructive suggestion! Both spectral signatures and activation clustering tend to filter poisoned samples from the training dataset, based on sample behaviors in hidden feature space. These methods rely on a latent assumption that poisoned samples will form a separate cluster in the hidden feature space. This assumption usually holds in existing targeted poison-only backdoor attacks. However, as we illustrated in Section H (Figure 5) in the appendix, **poisoned samples generated by our untargeted UBW-P and UBW-C tend to scatter in the whole space instead of forming a single cluster. Accordingly, our methods are naturally resistant to both spectral signatures and activation clustering**. 
To verify it, we conduct some experiments on the CIFAR-10 dataset. As shown in the following Table 3, these defenses fail to filter our poisoned samples to some extent. We will add more details and discussions in the appendix of our revision.\n\n\nTable 3. The successful filtering rate (the number of filtered poisoned samples / the number of all filtered samples, %) on CIFAR-10.\n\n| Attack$\\downarrow$, Defense$\\rightarrow$ | SS | AC |\n|:----------------------------------------:|:--:|:--:|\n| UBW-P | 10.96% (548/5000) | 52.61% (4981/9467) |\n| UBW-C | 9.40% (470/5000) | 20.51% (1003/4889) |\n\n\n\n---\n**Q4**: It is also more persuasive if they are verified under more backdoor defenses such as Neural Attention Distillation (NAD) and Mode Connectivity Repairing (MCR).\n\n**R4**: Thank you for this constructive suggestion! We evaluate our methods on the CIFAR-10 dataset. As shown in Table 4-5, **both UBW-P and UBW-C are resistant to NAD and MCR to some extent**. Their failures are probably because both NAD and MCR contain a fine-tuning stage, which is ineffective for our UBW. We will add more details and discussions in the appendix of our revision.\n\nTable 4. The resistance to NAD on CIFAR-10.\n| Attack$\\downarrow$, Metric$\\rightarrow$ | BA | ASR-A | ASR-C |\n|:---------------------------------------:|:--:|:-----:|:-----:|\n| UBW-P | 67.98 | 99.40 | 89.87 |\n| UBW-C | 77.13 | 36.00 | 29.81 |\n\n\n\nTable 5. The resistance to MCR on CIFAR-10.\n| Attack$\\downarrow$, Metric$\\rightarrow$ | BA | ASR-A | ASR-C |\n|:---------------------------------------:|:--:|:-----:|:-----:|\n| UBW-P | 88.17 | 96.20 | 96.06 |\n| UBW-C | 86.15 | 79.10 | 71.69 |\n\n\n---\n**Q5**: The denotations for the datasets $\\mathcal{D}$ and dispersibility metrics $D$ are easy to be confused.\n\n**R5**: Thank you for pointing it out! We will change the dispersibility metrics from $D$ to $d$ in our revision.\n\n---\n", " We sincerely thank you for your valuable time and comments. We are encouraged by your positive comments on our **good motivation**, **novelty**, **well-designed methods and metrics**, **effectiveness**, and **resistance to standard backdoor defenses**. We will alleviate your remaining concerns as follows:\n\n\n---\n**Q1**: The proposed watermarks are perceptible by human subjects and can be manually removed. In UBW-P, instead of using the BadNets triggers, the authors can try the imperceptible ones from WaNet or LIRA.\n\n**R1**: Thank you for this insightful comment and constructive suggestion! We designed our UBW-P based on BadNets-type triggers simply because it is the most straightforward method. We intended to show how simple it is to design UBW under the poison-label setting. However, we do understand your concern and agree that imperceptible UBW-P would be better for its stealthiness. Following your suggestions, we evaluate the UBW-P with WaNet-type triggers. As shown in Table 1, **our UBW-P can still reach promissing performance with imperceptible trigger patterns**. We will add more details and discussions in the appendix of our revision.\n\n\nTable 1. 
The effectiveness of our UBW-P with different types of triggers on CIFAR-10.\n| Method$\\downarrow$, Metric$\\rightarrow$ | BA (\\%) | ASR-A (%) | ASR-C (\\%) | $D_p$ |\n|:---------------------------------------:|:-------:|:---------:|:----------:|:------:|\n| UBW-P (BadNets) | 90.59 | 92.30 | 92.51 | 2.2548 |\n| UBW-P (WaNet) | 89.90 | 73.00 | 70.45 | 2.0368 |\n\n\n\n\n---\n**Q2**: The proposed watermarks are perceptible by human subjects and can be manually removed. In UBW-C, there is no constraint to enforce the poisoned image to be similar to the clean one. As a result, we can see apparent artifacts on the poisoned examples. Employing that constraint in the optimization goal (Eq. 6) would be an interesting direction to explore.\n\n**R2**: Thank you for this insightful comment and constructive suggestion! Firstly, we are deeply sorry for the misunderstanding that our paper caused you. In fact, **we did ensure that the poisoned image $x'$ should be similar to its benign version $x$** by requiring $||x'-x||_\\infty < \\epsilon$ where $\\epsilon$ is the perturbation budget. We included these details in Appendix Section B (Page 2, Line 25-26) due to the space limitation of our main manuscript. As shown in the following Table 2, as we expected, using a larger $\\epsilon$ can increase ASR (with the sacrifice of some stealthiness degrees). We will add more details in Section 3.4 of our main manuscript and the appendix in the revision. \n\n\nTable 2. The effectiveness of our UBW-C with different perturbation budgets on CIFAR-10.\n| Method$\\downarrow$, Metric$\\rightarrow$ | BA (\\%) | ASR-A (%) | ASR-C (\\%) | $D_p$ |\n|:---------------------------------------:|:-------:|:---------:|:----------:|:------:|\n| UBW-C (16/255) | 86.99 | 89.80 | 87.56 | 1.2641 |\n| UBW-C (32/255) | 86.17 | 94.55 | 92.03 | 1.0112 |\n\n\n\nBesides, we also notice the artifact that you pointed out. We think **it is most probably due to the differences between the $\\ell^\\infty$ norm and the human visual system** since we observed similar problems in existing attacks with $\\ell^\\infty$-bounded additive perturbations. We will explore how to further enhance the stealthiness of UBW-C in our future work. \n\n\n---", " \n---\n**Q3**: Can you add more descriptions regarding dataset protection and encryption?\n\n**R3**: Thank you for this constructive suggestion! \n- **Dataset protection** has always been an important and wide research area. In this paper, we focus on the protection of released datasets (*e.g.*, open-sourced datasets and commercial datasets). In particular, those datasets are released and can only be used for specific purposes. For example, open-sourced datasets are available to everyone while most of them can only be adopted for academic or educational rather than commercial purposes. Our goal is to detect and prevent unauthorized users of released datasets. This task is challenging since the adversaries can get access to the victim dataset while unauthorized users will only release their trained models without disclosing their training details.\n- **Encryption** is the most classical protection method, which encrypts the whole or parts of the protected data. Only authorized users who have obtained the secret key can decrypt the encrypted data. However, the encryption can not be exploited to protect released datasets for it will hinder dataset functionalities (*i.e.*, users can not use encrypted dataset for training). 
\n\nWe will add more details in our related work in the revision.\n\n---\n**Q4**: Can you give some practical scenarios to illustrate the importance of UBW?\n\n**R4**: Thank you for this constructive suggestion! As we illustrated in the aforementioned R3, our goal is to detect and prevent unauthorized use of released datasets. Specifically, we consider the hardest black-box verification setting, where defenders can only obtain model predictions and have no information about the model parameters. This setting is more practical, compared with the white-box one, allowing defenders to perform ownership verification even when they only have access to the model API. Accordingly, **given a suspicious third-party model, we can verify whether it was trained on our protected dataset (i.e., without authorization), whether we can obtain its source files or just the model API**. We will add more details in our related work in the revision.\n\n---\n", " We sincerely thank you for your valuable time and comments. We are encouraged by your positive comments on our **research significance**, **extensive experiments**, **method effectiveness**, and **paper writing**. We will alleviate your remaining concerns as follows:\n\n\n---\n**Q1**: Lack some in-depth discussions on the selection of triggers.\n\n**R1**: Thank you for your insightful comments, and we do understand your concerns. We adopted the white-black trigger patterns in the main manuscript simply because they are the most classical ones used in existing backdoor-related papers. We have also evaluated the effectiveness of other trigger patterns with different appearances and sizes in Appendix D (Tables 1-2). The results show that \n\n- Similar to existing backdoor attacks, both UBW-P and UBW-C can reach promising performance with arbitrary user-specified trigger patterns.\n- The attack success rate increases with the trigger size, while the increase has only minor adverse effects on benign accuracy.\n\nHere we quote Tables 1-2 for reference. Please find more details and discussions in Appendix D.\n\nTable 1. The effectiveness of our UBW with different trigger patterns on the CIFAR-10 dataset.\n\n| Method$\\downarrow$ | Pattern$\\downarrow$, Metric$\\rightarrow$ | BA (%) | ASR-A (%) | ASR-C (%) | $D_p$ |\n|:------------------:|:----------------------------------------:|:-------:|:---------:|:----------:|:------:|\n| UBW-P | Pattern (a) | 90.59 | 92.30 | 92.51 | 2.2548 |\n| UBW-P | Pattern (b) | 90.31 | 84.53 | 82.39 | 2.2331 |\n| UBW-P | Pattern (c) | 90.21 | 87.78 | 86.94 | 2.2611 |\n| UBW-C | Pattern (a) | 86.99 | 89.80 | 87.56 | 1.2641 |\n| UBW-C | Pattern (b) | 86.25 | 90.90 | 88.91 | 1.1131 |\n| UBW-C | Pattern (c) | 87.78 | 81.23 | 78.55 | 1.0089 |\n\n\nTable 2. 
The effectiveness of our UBW with different trigger sizes on the CIFAR-10 dataset.\n| Method$\\downarrow$ | Trigger Size$\\downarrow$, Metric$\\rightarrow$ | BA (%) | ASR-A (%) | ASR-C (%) | $D_p$ |\n|:------------------:|:---------------------------------------------:|:-------:|:---------:|:----------:|:------:|\n| UBW-P | 2 | 90.55 | 82.60 | 82.21 | 2.2370 |\n| UBW-P | 4 | 90.37 | 83.50 | 83.30 | 2.2321 |\n| UBW-P | 6 | 90.43 | 86.30 | 86.70 | 2.2546 |\n| UBW-P | 8 | 90.46 | 86.40 | 86.26 | 2.2688 |\n| UBW-P | 10 | 90.72 | 86.10 | 85.82 | 2.2761 |\n| UBW-P | 12 | 90.22 | 88.30 | 87.94 | 2.2545 |\n| UBW-C | 2 | 87.34 | 4.38 | 15.00 | 0.7065 |\n| UBW-C | 4 | 87.71 | 70.80 | 64.86 | 1.2924 |\n| UBW-C | 6 | 87.69 | 75.60 | 70.85 | 1.7892 |\n| UBW-C | 8 | 88.89 | 75.40 | 69.86 | 1.2904 |\n| UBW-C | 10 | 88.30 | 77.60 | 73.92 | 1.7534 |\n| UBW-C | 12 | 89.29 | 98.00 | 97.72 | 1.1049 |\n\n---\n\n**Q2**: It lacks a discussion on the potential negative impact brought by the protected dataset. For example, will the model trained on the protected dataset be vulnerable to backdoor attacks?\n\n**R2**: Thank you for your insightful comment! As we illustrated in Section 6 (Societal Impacts), we notice that our untargeted backdoor watermark (UBW) is resistant to existing backdoor defenses and could be maliciously used by backdoor adversaries. However, compared with existing targeted backdoor attacks, our UBW is untargeted and therefore poses a minor threat. Moreover, although an effective defense is yet to be developed, people can still mitigate or even avoid the threats by only using trusted training resources. Our next step is to explore principled and advanced defenses against UBW-like watermarks.\n\n---", " No ethical concerns. NA None.", " This paper proposes a dataset protection approach, Untargeted Backdoor Watermark, via crafting untargeted backdoor attacks for the protected dataset. The authors propose their method in two different settings (i.e., mislabeled and clean-label settings). Through extensive experiments, Untargeted Backdoor Watermark is shown to be effective and robust against various defense approaches in both settings. \n================= Strengths ================= \n1. The studied problem is interesting. \n2. The evaluation is comprehensive. \n3. The overall presentation is good.\n\n\n================= Weaknesses ================\n\n1. It lacks some in-depth discussions on the selection of triggers and a practical scenario for data leakage.\n2. It lacks a discussion on the potential negative impact brought by the protected dataset. For example, will the model trained on the protected dataset be vulnerable to backdoor attacks?\n\n 1. Can you add more descriptions regarding dataset protection and encryption?\n2. Can you give some practical scenarios to illustrate the importance of UBW?", " The paper aimed to protect open-source datasets from illegal DNN training by injecting verifiable backdoor watermarks. It first revealed that the existing backdoor watermarking techniques used targeted labels and could be exploited by adversaries for attacks. It then proposed novel untargeted backdoor watermarking techniques that are both effective and harmless in poisoned-label (UBW-P) and clean-label (UBW-C) settings. UBW-P stamped simple backdoor triggers on a subset of data and changed their labels randomly. UBW-C did not modify image labels; it instead optimized the trigger generation function in a bi-level optimization. 
The proposed methods were verified on CIFAR-10 and ImageNet, showing high effectiveness and dispersibility while being stealthy against standard backdoor defenses, including fine-tuning, fine-pruning, Neural Cleanse, STRIP, and Grad-CAM inspection. ## Strengths:\n- The authors provided a good discussion on the issue of the common targeted backdoor watermarks and why we need untargeted backdoor watermarks.\n- The proposed untargeted backdoor watermarking techniques are novel and useful. I believe the topic of untargeted backdoors is easy to think of. However, it has remained unattractive and unexplored in the context of backdoor attacks due to its unpredictability. The authors, however, found data watermarking a suitable application for these techniques, in which unpredictability becomes a strength when considering safety.\n- The paper designed two versions for both poisoned-label and clean-label settings.\n- The authors proposed new dispersibility metrics, which are technically sound. They are particularly helpful in designing the clean-label untargeted backdoor watermarks (UBW-C).\n- The proposed methods were verified on CIFAR-10 and ImageNet, showing high effectiveness and dispersibility while being stealthy against standard backdoor defenses, including fine-tuning, fine-pruning, Neural Cleanse, STRIP, and Grad-CAM inspection.\n\n\n## Weaknesses:\n- The proposed watermarks are perceptible by human subjects and can be manually removed. The authors should try imperceptible techniques:\n - In UBW-P, instead of using the BadNets triggers, the authors can try the imperceptible ones from WaNet or LIRA [1].\n - In UBW-C, there is no constraint to enforce the poisoned image to be similar to the clean one. As a result, we can see apparent artifacts on the poisoned examples. Employing that constraint in the optimization goal (Eq. 6) would be an interesting direction to explore.\n- The authors should verify the proposed watermarking methods under dataset-based backdoor defenses such as Spectral Signatures [2] and Activation Clustering [3].\n- It is also more persuasive if they are verified under more backdoor defenses such as NAD [4] and Mode Connectivity [5].\n- The denotations for the datasets (\\mathcal{D}) and dispersibility metrics (D) are easily confused.\n\n[1] Lira: Learnable, imperceptible and robust backdoor attacks. In CVPR 2021.\n\n[2] Spectral signatures in backdoor attacks. In NeurIPS 2018.\n\n[3] Detecting backdoor attacks on deep neural networks by activation clustering. In SafeAI at AAAI 2019.\n\n[4] Neural attention distillation: Erasing backdoor triggers from deep neural networks. In ICLR 2021.\n\n[5] Bridging mode connectivity in loss landscapes and adversarial robustness. In ICLR 2020.\n - Can we design imperceptible triggers for UBW-P and UBW-C?\n- Are UBW-P and UBW-C stealthy under dataset-based backdoor defenses such as Spectral Signatures and Activation Clustering?\n- Are UBW-P and UBW-C stealthy under recent backdoor defenses such as NAD and Mode Connectivity? The authors discuss the societal impacts of the proposed technique. I would recommend adding the limitations mentioned above.", " This paper proposes two methods, UBW-C and UBW-P, to verify unauthorized use of open-sourced datasets. For this, the authors train a watermark generator G(. ; theta) using eq. 6 to minimize L(f(G(x;theta);w*),y) and entropy. Also, the authors insist that the proposed watermarking is robust in dispersibility for predictions of poisoned data. 
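Read schematically, the bi-level structure behind such trigger-generator training is as follows. This display shows only the generic form; the concrete losses, including the dispersibility/entropy term, are those of the paper's Eq. (6), and the notation here is illustrative:

$$\\min_{\\theta}\\; \\sum_{(x,y)} \\mathcal{L}_{\\mathrm{outer}}\\big(f(G(x;\\theta); w^{*}(\\theta)), y\\big) \\quad \\text{s.t.} \\quad w^{*}(\\theta) = \\arg\\min_{w} \\sum_{(x,y)} \\mathcal{L}\\big(f(G(x;\\theta); w), y\\big),$$

i.e., the inner level trains a surrogate model $w^{*}$ on the watermarked data, and the outer level updates the generator parameters $\\theta$ against that surrogate.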
A malicious network trained on the watermarked dataset may predict randomly on watermarked test data and correctly on clean test data, so it is possible to verify misuse using the difference between the two predictions, for watermarked and clean test data. This work's specialty is an untargeted attack that makes the malicious network predict random labels for watermarked test data. Then, the experiment section provides comparisons with some other data poisoning methods on CIFAR10 and an ImageNet subset. However, I think this paper is weak in comparisons to similar works under various conditions (architectures and large-scale datasets), and the proposed method seems quite harmful to the original dataset. Also, I think the proposed verification method requires a detailed specification of the malicious network, so it seems weak in practice.\n\n=============== After Author-Reviewer Discussion ==========================\nI raised some concerns in the first review, and many of them were discussed during this period. Specifically, issues about applications on large-scale datasets, transferability, and stealthiness are well addressed. \n\n\nStrengths: This work addresses an interesting problem: making a trained DNN predict random labels via dataset watermarking. I think the random prediction is an insightful and novel idea. Also, large-scale datasets, transferability, stealthiness, and robustness to defenses are well addressed.\n\nWeaknesses: The drop in BA for UBW-C is a weakness, and I'm still not sure whether random classification is better than the previous guided misclassification for verifying malicious users. I expect the property is better suited to attack purposes.\n\n\nHowever, I think the insightful property is more important, and I expect the weakness can be addressed later. Also, I'll re-update my score at the end of the Reviewer-MetaReviewer Discussion. \n\n\n\n I have some concerns as follows:\n\n1) I cannot be sure about the strength of the untargeted watermarking. What makes it better than the targeted one? In Fig 2, the only difference is between the target label and a random label in the prediction. \n\n2) Dispersibility occupies a large portion of this paper, but I don't understand the necessity of it. It isn't used for training the generator in Eq. 6 and 7, and is used only as an evaluation metric. However, I think it is an entropy-based ASR instead of an accuracy-based ASR, but they are not much different. \n\n3) Tables 1 and 2 show that the proposed method considerably drops the benign accuracy from 92.53% to 86.99% for CIFAR10, and from 67.3% to 59.6% for the ImageNet subset. It seems that the proposed watermark is harmful.\n\n4) Only ResNet-18 and small-scale datasets (CIFAR10 and an ImageNet subset) were used in evaluation. Also, only label-consistent watermarking and Sleeper Agent were compared. I think it is necessary to compare with other recent works such as Radioactive Data [1], which is quite similar to the proposed method, or various backdoor attack/data poisoning approaches.\n\n[1] Sablayrolles et al. Radioactive data: tracing through training. ICML 2020.\n\n\n\n5) In Eq. 6 and 7, f(.) denotes the NN architecture. Is it necessary to know the architecture of the malicious model?\n\n\n\n 1) Drop in BA for UBW-C\n\n2) Unclear strength of random classification in dataset watermarking."
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "nips_2022_kcQiIrvA_nz", "Mc13g0wBN0J", "YgPwpzn_Hna", "fTdLNGcv2R", "fTdLNGcv2R", "fTdLNGcv2R", "fTdLNGcv2R", "qPv0iys0-g", "hHfuFYGTqmn5", "J_Iv2zvxl8qt", "i95KYOnJosS", "i95KYOnJosS", "9vicxjS6K46", "fR3qiUq7ISY", "D-iNlUn2-Y2", "fR3qiUq7ISY", "k4paZC8xhMo", "V1sGQYEAHvi", "9seYzqxktg", "FMh7tMDk-J", "RwEBL1uhaKo", "LZhPO3wkIMw", "fR3qiUq7ISY", "1TMxds3AJy", "kMEHzeBAwz1", "nips_2022_kcQiIrvA_nz", "nips_2022_kcQiIrvA_nz", "nips_2022_kcQiIrvA_nz", "nips_2022_kcQiIrvA_nz" ]
nips_2022_F7NQzsl334D
ClimbQ: Class Imbalanced Quantization Enabling Robustness on Efficient Inferences
Quantization compresses models to low bits for efficient inference, which has received increasing attention. However, existing approaches focus on balanced datasets, while imbalanced data is pervasive in the real world. Therefore, in this study, we investigate the realistic problem of quantization on class-imbalanced data. We observe from the analytical results that quantizing imbalanced data tends to incur a large error due to the differences between separate class distributions, which leads to a significant accuracy loss. To address this issue, we propose a novel quantization framework, Class Imbalanced Quantization (ClimbQ), which focuses on diminishing the inter-class heterogeneity for quantization error reduction. ClimbQ first scales the variance of each class distribution and then projects data through the new distributions to the same space for quantization. To guarantee the homogeneity of class variances after the ClimbQ process, we examine the quantized features and derive that homogeneity is satisfied when the data size of each class is restricted (bounded). Accordingly, we design a Homogeneous Variance Loss (HomoVar Loss), which reweights the data losses of each class based on the bounded data sizes to satisfy the homogeneity of class variances. Extensive experiments on class-imbalanced and benchmark balanced datasets reveal that ClimbQ outperforms state-of-the-art quantization techniques, especially on highly imbalanced data.
Accept
After rebuttal, the reviewers unanimously agree that the submission should be accepted for publication at NeurIPS.
train
[ "QzOhALEf2wd", "PUvmtZO27gv", "J1L2F6N7W0f", "b-eMVSr-dkY", "y4rSUKj6Xgf", "T2zIWXr6gEO", "v6QhNSmN9nl", "wl0CDEcig3", "AxsK32u0SV1", "zgttOU_QLLN", "gWIFA1tKMh", "IKCeUCjrrdE", "uZKcq8FWI89", "18oY5L2-8gI", "PP3ESY3FlcW", "j-PDzOjdQZ2", "W7jtW_X8zpp" ]
[ "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed answer and the new quantization error results. I have updated the rating accordingly.", " Thanks for your detailed elaboration. I recommend the authors to combine the above content into the paper, since it can strengthen the contributions of your work. I do appreciate the efforts and therefore I raised the score to 5.", " The responses to the two questions raised are as follows.\n\nQ1. I am wondering the opinions from the authors on the potential synergy between ClimbQ and mixed-precision quantization. From my perspective, ClimbQ can be potentially combined with mixed-precision quantization since the imbalanced class distribution can also make the range of the value representation divergent. Do you think it's feasible to potentially combine these two lines of works?\n\nA1. Yes, we also consider that the mixed-precision quantization may be applicable to the imbalanced class distributions with different ranges. The classes with larger ranges can be assigned with more bits (i.e., using more quantized values), and the classes with smaller ranges can be assigned with fewer bits (i.e., using fewer quantized values) to effectively reduce the quantization errors $|x - Q(x)|$ and avoid a significant performance degradation according to [1]. In addition to the range, we also think that it may also be feasible to utilize other metrics such as the Hessian matrix and eignenvalues [2, 3] to measure the contained information in separate class distributions for the decision of the assignment of bits.\n\n--Reference\n- [1] Rastegari, M., Ordonez, V., Redmon, J., & Farhadi, A. (2016, October). Xnor-net: Imagenet classification using binary convolutional neural networks. In European conference on computer vision (pp. 525-542). Springer, Cham.\n- [2] Shen, S., Dong, Z., Ye, J., Ma, L., Yao, Z., Gholami, A., ... & Keutzer, K. (2020, April). Q-bert: Hessian based ultra low precision quantization of bert. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 05, pp. 8815-8821).\n- [3] Wang, K., Liu, Z., Lin, Y., Lin, J., & Han, S. (2019). Haq: Hardware-aware automated quantization with mixed precision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8612-8620).\n\nQ2. For inference performance, I am referring to the inference latency/throughput. Can ClimbQ help with the inference performance by exploiting class imbalance?\n\nA2. Thanks for the question. We have conducted experiments to compare the inference time (sec./batch) and throughput (#images/sec.) of ClimbQ with the quantization baselines in the paper. The results are presented in the linked document: https://www.dropbox.com/s/nqxps1p3sjbeo9c/response_to_reviewer2_additional_Q2.pdf?dl=0.\n\nIt can be observed that ClimbQ has fewer time costs in inference, i.e., with smaller latency compared with other approaches. In addition, the throughput of ClimbQ is higher, i.e., more images can be processed in a fixed time span. The better efficiency of ClimbQ in inference than that of the compared approaches is mainly due to a simple function adopted (see Eq. (1)) for the scaling and projection of class distributions and the uniform quantization (see Eq. (2)) without other additional operations used in the compared approaches, such as clipping functions and transformations. \n", " The responses to the two questions raised are as follows.\n\nQ1. Fig 1 in the supplementary material: $w_k$ is exponentially decreasing. 
Would a simpler expression such as $e^{-n_k}$ suffice?\n\nA1. Thanks for the question. We have plotted $w_k$ and $e^{-n_k}$ in the linked document: https://www.dropbox.com/s/p67lnevw3t7qqmr/response_to_reviewer4_additional_Q1.pdf?dl=0. It can be observed from the figures that when $n_k$ is large (i.e., majority classes), $w_k \\simeq e^{-n_k}$. However, $e^{-n_k}$ is much smaller than $w_k$ when $n_k$ is small (i.e., minority classes). Therefore, $w_k$ cannot be fully expressed by $e^{-n_k}$. \n\nMoreover, we conducted experiments on $w_k$ under different $\\beta$ settings. As shown in the tables in the linked document, it can be seen that the performance is better when $\\beta$ is larger, i.e., the weights on minority classes are smaller. Nevertheless, when $\\beta = 0.999$, the performance has saturated. This indicates that it may have reached a good balance between the weights on the majority and the minority classes. Hence, $e^{-n_k}$, with much smaller weights on the minority classes than $\\beta = 0.999$, may be subject to accuracy degradation, as with $\\beta = 0.9999$.\n\nQ2. What are the considerations for $w_k$ when $n_k$ is within the lower and upper bounds?\n\nA2. When $n_k$ is within the lower and upper bounds, the $k$-th class belongs to the moderate-sized classes (Case II.) as presented in Table 2 in the Appendix. Then $w_k$ is designed between the weights on the majority classes (Case III.) and the weights on the minority classes (Case I.), since such $n_k$ is consistent with the hypothesis of the homogeneity of class variances (illustrated in Lines 65-66 in the Appendix), which is described in Theorem 3.2 and stated in Lines 194-195. In contrast, the minority or majority classes whose sizes are out of bounds are regularized to much larger or smaller weights (see Table 2 in the Appendix).\n", " The responses to the two questions raised are as follows.\n\nQ1. Following up on the scaling of the test data. The question is how to perform: \"We took mu_k and sigma_k in Eq. (1) as the mean and standard deviation of the k-th class of testing data\", if during testing the class labels are not available?\n\nA1. Thanks for the question. The implementation details are as follows.\n\nBasically, the testing data can be divided into the validation set (for the decision of hyperparameters) and the testing set (for inference and performance evaluation). 
\n\nFor the testing data in the validation set (where the class labels are available), we took $\\mu_k$ and $\\sigma_k$ in Eq. (1) as the mean and standard deviation of the $k$-th class. In addition, the scale factor $c_k$ defined in Line 119 can also be obtained since the sizes of the classes are known.\n\nOn the other hand, since the class information was unknown for the other testing data at inference, we adopted the distribution function of the standard normal, $N(0, 1)$, instead of $N(\\mu_k, c_k^2 \\sigma_k^2)$, in our experiments. Note that the main reason we utilized the standard normal is that the features had been normalized through the batch normalization layer before quantization. These explanations will be added to our revised version for better readability.\n\nMoreover, the learning and inference processes have been empirically validated in our experiments (see Sec. 4), indicating that the quantized model can classify the imbalanced data well during inference.\n\n\nQ2. About the massive quantization error not being demonstrated or justified. The reviewer means the actual quantization error. Based on the paper, the authors seem to imply that the accuracy differences are caused by quantization errors, but the extent of the quantization error is not demonstrated, so the claim sounds more like a hypothesis than a fact.\n\nA2. We have examined the actual quantization errors, and the results are presented through the anonymous link: https://www.dropbox.com/s/n4295bndnko193t/response_to_reviewer3_additional_Q2.pdf?dl=0.\n\nThe quantization error is measured as $\\frac{|w_q - w|}{\\max(w_q) - \\min(w_q)}$. The numerator is the discrepancy between the quantized weight $w_q$ and the original floating-point weight $w$. We further normalized the error by dividing by the range of the quantized space.\n\nAs shown in the linked document, we compared the actual quantization errors with and without the proposed scaling approach (see Sec. 3.1). It can be observed from the results that when the class distributions are not scaled (denoted as “w/o scaling”), i.e., the differences in the class distributions are not reduced, we have a larger quantization error and therefore a lower accuracy. In contrast, after the class distributions are scaled to similar variations (denoted as “w/ scaling”) and projected to the same space before quantization (see Sec. 3.1), the total quantization error is reduced, hence leading to a higher accuracy. The relationship between the quantization error and the accuracy is illustrated in Lines 114-115. The charts presented in the linked document will be added to the Appendix in the revised version.\n\n\n", " Thanks for the detailed answers. The reviewer still has two questions:\n\n1) Following up on the scaling of the test data. The question is how to perform: \"We took mu_k and sigma_k in Eq. (1) as the mean and standard deviation of the k-th class of testing data\", if during testing the class labels are not available?\n\n2) About the massive quantization error not being demonstrated or justified. The reviewer means the actual quantization error. Based on the paper, the authors seem to imply that the accuracy differences are caused by quantization errors, but the extent of the quantization error is not demonstrated, so the claim sounds more like a hypothesis than a fact. \n\n", " We number c) and d) in the response to W1 as below.\n\nW1. 
a) $N(.,.)$ is used before being defined, b) $x$ is introduced as data to then be extended to also the features from a deep learning model, c) there is abuse of notation (e.g., $F(X)$ a distribution is equated to a random variable $U$), d) $x'$ is not formally defined\n\nA1. \n\na) We thank the reviewer for pointing this out. We will add the definition of the notation.\n\nb) Thank you for your comment. We would like to modify the notation as follows. The network features for $N$ data are denoted as $X =$ {$x_1, x_2, …, x_N$}.\n\nc) In probability, $F(X)$ generally represents a random variable that is transformed from the random variable $X$ through the function $F$, while $F(x)$ actually denotes the distribution value, which can be derived from the probability $P(X \\le x)$. \n\nd) Thank you for pointing this out. To clarify the applied value of $x'$ in Eq. (2), we will add $x_k' = D_k(x_k)$ below Eq. (2).\n", " We thank the reviewer for taking the time to review our work. The replies are listed as follows.\n\n\nQ1. Lines 186-188 vs lines 194-195: the null hypothesis is rejected when the class size is NOT too small/large (lines 186-188), but it is rejected when the class size is too small/large (lines 194-195).\n\nA1. \n\nActually, what we illustrated in Lines 186-188 is: “It indicates that when class data sizes are not too tiny or excessively large, class variances are homogeneous (H0 is satisfied); otherwise, significant differences appear between class variances (H0 is rejected, and Ha is satisfied).” According to this description, the null hypothesis is rejected when the class size is too small/large. \n\nTo avoid the misunderstanding, the referenced content will be rephrased in the revised version as:\n“It indicates that when class data sizes are not too tiny or excessively large, class variances are homogeneous (H0 is satisfied). On the other hand, when class data sizes are too small or large, significant differences appear between class variances (H0 is rejected).” \n\nQ2. Eq 4: more motivation on how $w_k$ is designed would be beneficial, particularly the intent for the numerator and denominator.\n\nA2. \n\nThe motivation for how $w_k$ is designed has been described in Lines 221-224: (1) the weights of minority classes are heavier than those of majority classes, (2) the weights increase as the class data size falls far below the lower bound, and (3) the weights reduce as the class data size exceeds the upper bound. \n\nFrom the analyses in Appendix A.3, we can observe the approximations of the numerator and denominator. The numerator is designed to be smaller than the denominator to restrict $w_k$ to the range [0, 1]. The growth rate of the denominator is faster when the class size $n_k$ is larger, which leads to motivation (1). In addition, when the actual class size $n_k$ is much smaller than the expected size $n_k^e$, both the numerator and denominator approach one, which leads to a larger $w_k$ (i.e., $w_k = 1$). This is consistent with motivation (2). Moreover, when $n_k$ is much larger than $n_k^e$, $w_k$ is smaller than one. This result reflects motivation (3). These explanations will be included in the revised version.\n\n\n\nQ3. Line 170: nominator -> numerator ?\n\nA3. We thank the reviewer for the careful review. We will modify it in the revision. \n", " We thank the reviewer for the careful reading of the manuscript and the constructive comments. The responses are as follows.\n\nW1. 
a) $N(.,.)$ is used before being defined, b) $x$ is introduced as data to then be extended to also the features from a deep learning model, c) there is abuse of notation (e.g., $F(X)$ a distribution is equated to a random variable $U$), d) $x'$ is not formally defined\n\nA1. \n\na) We thank the reviewer for pointing this out. We will add the definition of the notation.\n\nb) Thank you for your comment. We would like to modify the notation as follows. The network features for $N$ data are denoted as $X =$ {$x_1, x_2, …, x_N$}.\nIn probability, $F(X)$ generally represents a random variable that is transformed from the random variable $X$ through the function $F$, while $F(x)$ actually denotes the distribution value, which can be derived from the probability $P(X \\le x)$. \nThank you for pointing this out. To clarify the applied value of $x'$ in Eq. (2), we will add $x_k' = D_k(x_k)$ below Eq. (2).\n\nW2. a) Why is the data (including features) being scaled during training? b) How is the scaling applied during testing, and if it is not necessary, why is that the case?\n\nA2. \n\na) The motivation for scaling has been illustrated in Lines 113-116. According to the exploration results in Fig. 1, we found that the quantization error is related to the class size and feature variation. Therefore, we aimed at scaling the class variations to the same scale for quantization. \n\nb) We also performed the scaling in testing. We took $\\mu_k$ and $\\sigma_k$ in Eq. (1) as the mean and standard deviation of the $k$-th class of testing data.\n\nQ1. Can the authors justify or use a different visualization technique?\n\nA1. Thanks for the suggestion. We calculated the actual class variances from the CNN features and visualized the results as a bar chart. Please see the anonymous link: https://www.dropbox.com/s/b3ahyu07wjow0fb/response_to_reviewer3_Q1.pdf?dl=0. \n\nQ2. In Section 3.1.1, the authors claim that differences in class distributions cause massive quantization error; however, it is not justified or demonstrated.\n\nA2. In Lines 253-258 and Lines 270-273 of the main paper, it can be seen that the proposed ClimbQ achieves higher accuracies, i.e., smaller quantization errors, than the compared research in Tables 1 and 2, since the difference in variation between class distributions is reduced. \n\nQ3. The $N$ of $N(\\mu_k, \\sigma_k)$ is not shown in Figure 2.\n\nA3. Thanks to the reviewer for pointing this out. This will be fixed in the revision.\n\nQ4. a) The normal distribution is demonstrably not true in general practical settings (without data transformation). b) The beginning of Section 3.1.2 clearly states that data x is assumed to be Gaussian, though later it is stated that x are features. Please clarify by stating the notation earlier.\n\nA4. \n\na) We adopt the normal distribution in this paper mainly according to the exploratory analyses on the benchmark datasets, as shown in Fig. 2 in the Appendix. If there is a dataset with another distribution, as the reviewer mentioned, we can replace the distribution function (DF) of the normal distribution in Eq. (1) with the DF of the new distribution, since the $X$ in Theorem 1 can be a random variable of any continuous distribution.\n\nb) Thank you for your comment. We shall modify the notation as follows and state it earlier in the revision. The network features for $N$ data are denoted as $X$ = {$x_1$, $x_2$, …, $x_N$}.\n\nQ5. What happens in practice when $\\beta \\rightarrow 1$ ($0.999 < \\beta < 1$)?\n\nA5. Thanks for the question. 
We have conducted the experiments by setting $\\beta$ = 0.9999. The results are presented through the following anonymous link: https://www.dropbox.com/s/vw7ukd9bf1wzr3e/response_to_reviewer3_Q5.pdf?dl=0. From the linked document, although we can obtain a better performance when we set a larger $\\beta$ value, the accuracy gets saturated when $\\beta$ approaches one, e.g., $\\beta \\ge 0.999$, since the weight on the majority classes approaches zero (see Fig. 1 in the Appendix). \n\nQ6. Did the authors consider ClimbQ with a simple inverse probability weighting ($1/p_k$, where $p_k$ is the proportion of samples in class $k$)?\n\nA6. We have considered $1/p_k$ ($1/p_k = n/n_k$), where $n = n_1 + n_2 + \\dots + n_K$ is the total data size over all $K$ classes. However, such a design did not work well since $1/p_k$ can be extremely large or small if the data are highly imbalanced, e.g., suppose $n = n_1 + n_2 = 9999 + 1$, then $w_1 = 10000/9999 \\approx 1$, and $w_2 = 10000/1 = 10000$. Accordingly, the training process is unstable.\n\nTherefore, in this paper, we designed $w_k$ with $\\beta = 0.999$ as the base of an exponent (see the design of $w_k$ in Line 212), which keeps $w_k$ within a small range. The $w_k$ is analyzed in Appendix A.3, and the result in Fig. 1 in the Appendix shows that $w_k$ is limited to the range [0, 1]. The effectiveness of the design is validated in the experiments (see Sec. 4.2).", " We thank the reviewer for taking the time for a careful review. The responses are listed as follows.\n\n\nW1. Limited novelty (the solution does not add new insights on class-imbalanced quantization)\n\nA1. In this paper, we adopt the “distribution function” (defined in Eq. (1)) to generate scaled class feature distributions for quantization to reduce the quantization errors of minority classes (illustrated in Sec. 3.1). Prior quantization research (see Sec. 2) has not considered the impact of the heterogeneity of class distributions on the quantization results and performance. In addition, to the best of our knowledge, the distribution function has not been utilized to scale the distributions of network features. Hence, in our opinion, our approach possesses novelty and new insights from these perspectives.\n\n\nW2. The proposed solution seems restricted within quantization-aware training\n\nA2. We mainly focus on Quantization-Aware Training (QAT) research to design the approaches, since QAT with the fine-tuning process usually outperforms the other branch, Post-training Quantization (PTQ) [1]. In addition, QAT learns the quantized models more efficiently without the information of full-precision models that Zero-shot Quantization (ZSQ) research requires [1]. \n\n\nQ1. What are the benefits (and/or downsides) of ClimbQ over other mixed-quantization schemes?\n\nA1. ClimbQ is designed with the consideration of the heterogeneity of the class distributions to address the imbalance issue. In contrast, existing mixed-precision quantization schemes, which do not explore the imbalance problem, tend to suffer from large quantization errors in the minority classes (as presented in Fig. 1). Nevertheless, mixed-precision quantization schemes can flexibly assign the required bits to each network layer. Therefore, assuming data are balanced, mixed-precision quantization schemes are generally able to obtain a better prediction performance than fixed-precision schemes [1], such as ClimbQ. \n\n\nQ2. Can you comment on potential applicability of your approach on post-training quantization?\n\nA2. 
The proposed approach is conceptually applicable to imbalanced post-training quantization. We can first remove the uniform quantization component and pretrain a full-precision model using only the generation of scaled class distributions (see Sec. 3.1) and the HomoVar rebalancing loss (see Sec. 3.2). During the quantization process, we retain the scaled class distribution component (since the learned weights are based on the new distributions) and then implement the uniform quantization (see Eq. (2)) or other existing PTQ quantization functions. \n\n\nQ3. Can you comment on potential impacts of your approach on the inference performance?\n\nA3. Previous QAT and PTQ research used trained quantization parameters in inference. However, there is generally a difference between the training and testing distributions in real data, which may lead to a bias during inference and accuracy degradation. In contrast, we utilize the testing statistics (e.g., mean and variance) in inference, thereby addressing the issue of inference bias. \n\n\nL1. I assume the work requires the knowledge of the distribution on the full dataset (which may limit the practicality of this approach since it may require multi-party sharing).\n\nA1. Thanks for the point raised. However, there may be a misunderstanding which needs to be clarified. In this paper, we adopted batch-wise processing in inference. Sec. 4 shows the effectiveness of ClimbQ in the case where only a few testing data (a batch) are accessed. As such, we can process multi-party data independently during inference.\n\nL2. I am concerned about the potential implications in terms of the inference performance.\n\nA2. This is the same as Question 3. Please see the response to Q3. Thanks.\n\nL3. I would suggest to make the scope of your approach clear upfront.\n\nA3. Thanks for the suggestion. We will clarify the scope upfront in the revision.\n\n\nReference\n- [1] Gholami, A., Kim, S., Dong, Z., Yao, Z., Mahoney, M. W., & Keutzer, K. (2021). A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630.\n\n", " We thank the reviewer for taking the time to review our paper. In response to the comments, our replies are as follows.\n\nW1. The generation of imbalance data using γ is unclear.\n\nA1. In this paper, we follow the previous research on imbalanced (long-tailed) learning [1, 2, 3] to generate the imbalanced data with the ratio γ. As described in Lines 230-231, the imbalance ratio γ indicates the number of the largest training class divided by that of the smallest. Below is the detailed procedure, taking CIFAR-10-LT with γ = 50 as an example. \n\n- Step 1. We sampled from the balanced CIFAR-10 dataset, in which there are 5000 training instances for each class and a total of 10 classes indexed (labeled) from 0 to 9. \n- Step 2. We chose class 0 as the maximal class with size 5000. \n- Step 3. We chose class 9 as the minimal class and sampled 5000/γ = 5000/50 = 100 data. \n- Step 4. The other classes (indexed 1 to 8) were then sampled following an exponential profile, as shown in Fig. 1(a) (a code sketch of this sampling is given below). \n\n\nQ1. Have the authors considered plugging in eq(4) to other competing approaches?\n\nA1. Thanks to the reviewer for the suggestion to implement the HomoVar loss on other competing approaches. We have conducted the experiments and the results are provided through the anonymous link: https://www.dropbox.com/s/9afaq6fymqv0hbg/response_to_reviewer1_Q1.pdf?dl=0. 
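For concreteness, the exponential sampling in Steps 1-4 above can be sketched as below. This is only an illustration of the standard long-tailed subsampling profile; the function and variable names are ours, and the exact rounding used in the paper may differ:

```python
import numpy as np

def longtail_class_sizes(n_max=5000, num_classes=10, gamma=50):
    # Class 0 keeps n_max samples, the last class keeps n_max / gamma,
    # and the intermediate classes decay geometrically in between,
    # mirroring Steps 1-4 of the CIFAR-10-LT construction above.
    exponents = np.arange(num_classes) / (num_classes - 1)
    return np.floor(n_max * gamma ** (-exponents)).astype(int)

print(longtail_class_sizes())  # [5000, 3237, 2096, ..., 100]
```

Each class is then formed by randomly drawing the computed number of instances from the corresponding balanced class.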
From the linked results, it can be seen that the HomoVar loss, incorporated with our proposed ClimbQ quantization process, can still achieve the best performances, since the compared works did not fully explore the difference between the class distributions.\n\nReference\n- [1] Ren, Jiawei, Cunjun Yu, Xiao Ma, Haiyu Zhao, and Shuai Yi. \"Balanced meta-softmax for long-tailed visual recognition.\" Advances in neural information processing systems 33 (2020): 4175-4186.\n- [2] Li, M., Cheung, Y. M., & Lu, Y. (2022). Long-tailed Visual Recognition via Gaussian Clouded Logit Adjustment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 6929-6938).\n- [3] Wei, C., Sohn, K., Mellina, C., Yuille, A., & Yang, F. (2021). CReST: A class-rebalancing self-training framework for imbalanced semi-supervised learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10857-10866).\n", " The authors propose better quantization techniques for class-imbalanced datasets. \n\nSpecifically, they propose \n1. ClimbQ - scale the class distributions such that rare classes have higher variance, followed by bucketization of the CDF\n2. ClimbQ + HomoVar - a weighted loss function inspired by Levene’s hypothesis testing.\n\nThe authors compare the proposal against related techniques on 3 datasets, mostly by simulating different levels of imbalance.\n Strengths.\n---------\n\n- The proposal is well-motivated, simple and shows strong experimental results.\n- The paper is also well-written.\n\nWeakness\n--------\n- The generation of simulated data through γ is a bit unconvincing since it was not clear how exactly the imbalance was simulated.\n Having said that, I do think the results in Appendix #3 sound convincing. - Have the authors considered plugging in eq(4) to other competing approaches? N/A", " This paper investigates the issue of quantization on class-imbalanced data. The key observation in this work is that quantizing imbalanced data tends to incur a large error due to the differences between separate class distributions, which leads to a significant accuracy loss. In order to tackle this issue, this work proposes ClimbQ, a new framework that focuses on diminishing the inter-class heterogeneity for quantization error reduction. ClimbQ first scales the variance of each class distribution and then projects data through the new distributions to the same space for quantization. To guarantee the homogeneity of class variances after the ClimbQ process, we examine the quantized features and derive that homogeneity is satisfied when the data size of each class is restricted (bounded). Accordingly, ClimbQ embeds a new Homogeneous Variance Loss (HomoVar Loss), which reweights the data losses of each class based on the bounded data sizes to satisfy the homogeneity of class variances. 
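In pseudocode, the scale-and-project step summarized here reads roughly as follows. This is a sketch under the Gaussian assumption; `c_k`, the bit-width, and all names are illustrative rather than the authors' implementation (the exact operations are Eqs. (1)-(2) of the paper):

```python
import torch
from torch.distributions import Normal

def climbq_quantize(x, mu_k, sigma_k, c_k, bits=4):
    # Project features through the CDF of the scaled class distribution,
    # landing every class in the same [0, 1] space, then quantize uniformly.
    u = Normal(mu_k, c_k * sigma_k).cdf(x)
    levels = 2 ** bits - 1
    return torch.round(u * levels) / levels
```

The per-class reweighting of the HomoVar loss is then applied on top of the standard task loss during training.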
Extensive experiments on class-imbalanced and benchmark balanced datasets reveal that ClimbQ outperforms the state-of-the-art quantization techniques, especially on highly imbalanced data. Strengths:\n- Important problems and the insights are valuable\n- An end-to-end solution to address the class-imbalanced issues within quantization\n- Evaluation results are sufficient to justify the effectiveness of the proposed approach\n- Source codes available\n\nWeaknesses:\n- Limited novelty (the solution does not add new insights on class-imbalanced quantization)\n- The proposed solution seems restricted within quantization-aware training\n\n- What are the benefits (and/or downsides) of ClimbQ over other mixed-quantization schemes?\n- Can you comment on potential applicability of your approach on post-training quantization?\n- Can you comment on potential impacts of your approach on the inference performance?\n\n- I assume the work requires the knowledge of the distribution on the full dataset (which may limit the practicality of this approach since it may require multi-party sharing).\n- I am concerned about the potential implications in terms of the inference performance.\n- I would suggest to make the scope of your approach clear upfront.", " The authors investigate quantization of (deep neural network) model parameters in class-imbalance settings. This is achieved by scaling the variance of each class separately, which is then mapped to a common distribution for quantization. Further, a loss function with class reweighting is used to satisfy the homogeneity of class variances. Experiments on four datasets (Synddigit-LT, CIFAR-10-LT, CIFAR-100-LT, ImageNet-ILSVRC) demonstrate state-of-the-art results for quantized models, especially on highly imbalanced data. Strengths:\nThough the variance scaling and quantization are relatively straightforward, the weighting approach based on hypothesis testing offers a different and interesting angle to class weighting schemes. The experiments are extensive, consider multiple datasets and imbalance ratios and, importantly, also consider existing class weighting approaches, which, though not originally intended for quantization, can be used within the ClimbQ framework.\n\nWeaknesses:\nThough the paper is relatively well organized, it is difficult to follow, for instance, N(.,.) is used before being defined, x is introduced as data to then be extended to also the features from a deep learning model, there is abuse of notation (e.g., F(X) a distribution is equated to a random variable U), x' is not formally defined, etc.\n\nSomething that is never explained or discussed is why, if the data (including features) is being scaled during training, how is the scaling being applied during testing, and if not necessary, why is it the case?\n\nPost-rebuttal: increased rating after detailed clarification from the authors. Though the assumption of variance differences between classes of different sizes is reasonable and can be demonstrated in practice in multiple ways, using t-SNE is perhaps not the best way considering that due to the minimum distance specification in t-SNE, a larger dataset will occupy a larger space regardless of variance. Moreover, being a local linear embedding technique, distributional variances are not necessarily well represented. 
Can the authors justify or use a different visualization technique?\n\nIn Section 3.1.1, the authors claim that differences in class distributions cause massive quantization error; however, it is not justified or demonstrated.\n\nThere seems to be a problem in Figure 2, by which the N of N(\\mu_k, \\sigma_k) is not shown.\n\nThe authors argue that data follows a normal distribution, which is demonstrably not true in general practical settings (without data transformation). Though it is possible that the authors are referring to model parameters and features, the beginning of Section 3.1.2 clearly states that data x is assumed to be Gaussian, though later it is stated that x are features. Please clarify by stating the notation earlier.\n\nWhat happens in practice when beta -> 1 (0.999 < beta < 1)?\n\nDid the authors consider ClimbQ with a simple inverse probability weighting (1/p_k, where p_k is the proportion of samples in class k)? The authors address the impact and limitations of the proposed approach in Section 6.", " The authors propose ClimbQ quantization for efficient inference in the context of class-imbalanced problems. The majority (minority) classes have larger (smaller) variance, and they reduce (increase) the variance. To project onto a uniform distribution, they use a cumulative distribution function, which is then quantized. Based on Levene's hypothesis testing on homogeneity in variances, they show a lower and upper bound of class data size to reject/accept the null hypothesis (homogeneous variances). Using the bounds, they divide the classes into minority, moderate, and majority classes. They then design the HomoVar loss, which weights minority classes more and majority classes less.\n\nEmpirical evaluation on 3 imbalanced datasets indicates the proposed method generally outperforms 6 existing methods.\n The problem of quantization for class-imbalanced problems is interesting.\n\nThe proposed projection onto a uniform space and the HomoVar loss are interesting. They derived the upper and lower bounds of class size with respect to Levene's hypothesis testing on homogeneity in variances.\n\nEmpirical evaluation indicates the proposed approach generally outperforms existing methods.\n\nThe paper is generally well written.\n Lines 186-188 vs lines 194-195: the null hypothesis is rejected when the class size is NOT too small/large (lines 186-188), but it is rejected when the class size is too small/large (lines 194-195).\n\nEq 4: more motivation on how w_k is designed would be beneficial, particularly the intent for the numerator and denominator.\n\nLine 170: nominator -> numerator ?\n Limitations and negative societal impact were discussed.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "v6QhNSmN9nl", "J1L2F6N7W0f", "y4rSUKj6Xgf", "T2zIWXr6gEO", "IKCeUCjrrdE", "zgttOU_QLLN", "wl0CDEcig3", "gWIFA1tKMh", "gWIFA1tKMh", "W7jtW_X8zpp", "j-PDzOjdQZ2", "PP3ESY3FlcW", "18oY5L2-8gI", "nips_2022_F7NQzsl334D", "nips_2022_F7NQzsl334D", "nips_2022_F7NQzsl334D", "nips_2022_F7NQzsl334D" ]
nips_2022_4F0Pd2Wjl0
Error Correction Code Transformer
Error correction code is a major part of the physical communication layer, ensuring the reliable transfer of data over noisy channels. Recently, neural decoders were shown to outperform classical decoding techniques. However, the existing neural approaches present strong overfitting, due to the exponential training complexity, or a restrictive inductive bias, due to reliance on Belief Propagation. Meanwhile, Transformers have become the method of choice in many applications, thanks to their ability to represent complex interactions between elements. In this work, we propose to extend for the first time the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths. We encode each channel's output dimension to a high dimension for better representation of the bits' information to be processed separately. The element-wise processing allows the analysis of channel output reliability, while the algebraic code and the interaction between the bits are inserted into the model via an adapted masked self-attention module. The proposed approach demonstrates the power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins, at a fraction of their time complexity.
Accept
This paper is part of a popular line of research aiming to apply neural network concepts to the decoding of error-correcting codes. The main novelty consists in the introduction of an architecture based on transformers. The authors provide convincing and thorough numerical results comparing the BER and the complexity of the proposed approach with various baselines. Such results apply to codes in the short to medium block-length range (from 32 to 128 bits). The reviewers have expressed a number of concerns in their initial reports. After the rebuttal stage, most of these concerns have been resolved. The reviewers Nt2o and MQDw have particularly appreciated the additional numerical results provided by the authors (BP baselines, non-Gaussian channels, other modulations and SCL decoder for polar codes). This is also explicitly pointed out in the updated reviews. In summary, there is clear consensus towards accepting the paper. After my own reading of the manuscript, I agree with this assessment and I am happy to recommend acceptance. As a final note, I would like to encourage the authors to include in the camera ready the additional experiments and discussions mentioned in the rebuttal.
train
[ "Km4WD5jxiXn", "BpfyE6OBR7Y", "V0E26lz5cBX", "E4njdxncOR7", "MsE6WVZOiIv", "q1vpEDDT5kI", "qsyu5hU9Xa", "zMMaItNd-X1", "h7TXKmjTNll", "i2n0exlmbhT", "YecrtciLAeq", "yL2lOIGjFh" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for the valuable ideas, which have no doubt helped improve our manuscript.\nWe would be happy to know if you are satisfied with our answers, or if there is anything else we can address.", " Thank you for the reply and the revised manuscript. I have read them and adjusted the score accordingly.", " We appreciate the reviewers' detailed comments and valuable suggestions. We have made an effort to factually address the stated issues, as indicated in the summary of changes that has been posted.\n\nIf the response to each reviewer has not already addressed all concerns, we would appreciate the opportunity to further discuss our work.", " Thank you for the supportive and comprehensive review.\n\n## Model-based vs. model-free\nWe agree the terminology can be misleading since our Transformer network uses a code-based mask (we also provide the unmasked Transformer as an ablation study).\n\nThe terminology has been borrowed from [26] and refers to decoders that implement any variants of generic types of neural networks. This contrasts with the model-based methods, in which an existing non-neural decoder (e.g. BP) is augmented with learnable parameters. We clarified this in the revised version.\n\n## Comparison to SOTA non-neural decoders\nIn Appendix D we add, besides BP, the performance of the SC-L for polar codes. As can be seen, relatively shallow ECCTs can compete and sometimes even surpass the SCL for some of the codes and SNRs. Increasing the capacity of the network, which currently has only six layers, may further improve the performance to set new SOTA results. \n\nA rigorous comparison should take into account exact complexity analysis as well as the potential acceleration of the ECCT as suggested in Section 6.3. For example, a low-rank approximation, e.g. Linformer [32], would transform the quadratic complexity in $d$ to linear, which could make ECCT extremely competitive on the algorithmic complexity level as well.\n\n## Non-Gaussian channel\nIn Appendix E, we present the performance of our model for a Rayleigh channel, where we can observe the ECCT remains effective even for such channels. \n\n## 16QAModulation\nIn Appendix F we add the performance under 16QAModulation. ECCT’s advantage is maintained under different modulations.\n", " We thank the reviewer for the mostly supportive review and detailed feedback. \n\nWe note that despite writing “*I would like to advocate the acceptance of this paper in that regard*”, the overall grade was slightly lower than the acceptance threshold. We kindly ask to know if our answers satisfy all of the concerns raised, and if not, we would be happy to future comply.\n\n## Section 6.1. experiments\nWe added clarifications in this section, and, following the review, the revised manuscript provides in Appendix B another illustration for a larger BCH code.\n\n## Comparison with vanilla augmented BP\nWe now also provide in Appendix G the complexity of vanilla augmented-BP algorithms as well as numerical simulations of the complexity for different codes. We analyze and compare the complexities and performance, and also provide the performance of two models applied on two different codes, that have similar complexity to the ‘at convergence’ neural BP decoders. These models improve upon Neural BP by 12\\% and 8\\% on average over the normalized SNR range.\n\nWe note that the vanilla augmented-BP has reached its full capacity and is not able to further improve its performance, contrary to the proposed ECCT. 
\nAlso, computational complexity may not be the only appropriate metric for efficiency. Our current implementation (even for 6 layers) is much faster than a 100-layer/50-iteration (neural or not) BP even on a general-purpose GPU, and can provide much shorter latencies and higher throughput. Furthermore, the complexity of Transformers can be greatly reduced via the vast array of recently developed methods, as mentioned in Section 6.3. For example, a low-rank approximation, e.g. Linformer [32], would transform the quadratic complexity in $d$ to linear, which could make ECCT extremely competitive on the algorithmic complexity level as well.\n\n## Performance on a non-Gaussian channel\nFollowing the review, we provide in Appendix E a comparison between BP and our method for a Rayleigh channel. ECCT's advantage is maintained in such channels.\n", " Thank you for the very supportive and comprehensive review.\nWe thank the reviewer for pointing us to important corrections and typos. All have been addressed in the revised manuscript.\n\n## The effect of masking on complexity\nThe proposed masking approach can lead to a great reduction of the run-time and power on dedicated devices via an adapted memory fetching and by processing paired elements only in the tensor/matrix multiplication unit (e.g. https://arxiv.org/ftp/arxiv/papers/1704/1704.04760.pdf). \nBesides the fact that the mask is very sparse, since the mask is symmetric and the dot-product is a bilinear symmetric operation (over real numbers), only half of the computations are required.\nThe current implementation employs a general-purpose GPU, simulating the masking effect. The main goal of the experiments is to demonstrate the effect of masking on accuracy.\n \n## An intuition for encoding the syndrome bits\nA non-zero syndrome means that at least one particular parity-check bit would give a negative value via the binary-to-sign mapping $f(s_{i})=1-2s_{i}$. This sign is modulated by its magnitude, $|y_i|f(s_{i})$, encoding the reliability of this same parity-check bit. Thus, a non-zero parity-check bit is easily detectable, and its contribution is diminished via the softmax self-attention mechanism (exponent of a negative number).\n\n## Comparison with List-SC decoders for Polar codes\nAppendix D of the revised manuscript contains comparisons with the linearithmic SOTA SCL decoder for all the Polar codes used in our experiments. As can be observed, the proposed *shallow* ECCTs can compete with and sometimes even surpass the SCL for some of the codes and SNRs. Increasing the capacity of the network, which currently has only six layers, should further improve the performance, as with LDPC codes.\n", " We thank the reviewer for the supportive and comprehensive review.\n \n## The need for multi-head attention\nThe Transformers are indeed permutation-equivariant models. However, the multi-head attention aims at enriching the analysis of the embedding and not of the elements. We now provide in Appendix C experimental results regarding the impact of the number of heads. As can be observed, using more than one head is beneficial for performance.\n \n## Additional illustrations of self-attention maps\nWe now provide in Appendix A the illustration of several self-attention maps for different codes with their corresponding inputs. Interestingly, we can observe the ECCT seems to focus its processing on the syndrome in the early stage.\n \n## Typo in line 147\nWe thank the reviewer for the correction. 
This typo was due to the wrong placement of the \\label.\n\n## Larger LDPC codes\nThe method can be applied at arbitrary code length under potentially high memory and computational training constraints. For example, running our code on a Polar(512,384) code requires 7x more time per epoch, which is computationally intensive but still feasible. In other domains, Transformers are often run on 512-1024 tokens, which supports the viability of our method for larger codes.\nAlso, we established in our experiments that our framework is much more scalable than the classical networks used by [2], which struggle with learning larger codes such as $n=127$ (Figure 4.c). Following the review, we provide in Appendix H the performance of the ECCT on two larger codes. We can observe that ECCT can learn to efficiently decode larger codes as well.\n", " We have uploaded a revised version of our manuscript, which contains the recommended clarifications and additional results specifically requested by the reviewers.\n\nFollowing a request by FPkL, we provide in appendix A illustrations of self-attention maps for several codes.\n\nFollowing a request by FPkL, we have added in appendix C experiments assessing the impact on the accuracy of the number of heads in the self-attention layers.\n\nFollowing a request by FPkL, we have added in appendix H the performance of our model for an LDPC code with $n=529$ and a Polar code with $n=512$.\n\nFollowing a request by AayN and MQDw, we have added in appendix D the performance of the SCL decoder for Polar codes.\n\nFollowing a request by Nt2o, we have added in appendix B an ablation validating the impact of the reliability embedding. It employs a larger BCH code.\n\nFollowing a request by Nt2o, we provide in Appendix G the complexity of BP, numerical simulations of the complexity, and accuracy comparisons between our shallower model and a neural BP model that has a similar complexity.\n\nFollowing a request by Nt2o and MQDw, we provide in Appendix E experiments of our method with a non-Gaussian (Rayleigh) channel.\n\nFollowing a request by MQDw, we have added in Appendix F experiments of our method with 16QAModulation.", " This paper proposes a Transformer based that employs relaxed inductive bias compared tgo Tanner graph-based methods while utilizing domain knowledge compared to standard model-free decoders based on fully-connected graph.\n This paper's main strength is the well-defined scope of the problem. \n\nThe paper's main weakness is that the proposal’s benefits were not studied thoroughly. For examples, I have a few questions below\n - Figure 3: Is the multi-head self-attention block still necessary? Given that we’re looking for the localized bits that meet each row of the parity check (PC) matrix, will single-head self-attention suffice? 
In other words, if you permute the rows of the PC matrix, you will get the same result, indicating that the problem has some sort of equivariance property that the authors may utilize in the Transformer design\n\n- Figure 2: This is a good illustration; it would be fascinating to see how the attention maps visualized after the training and how they compare to the restrictive Tanner graph; it would also be interesting to see if the mask-based decoders genuinely help attention to more targeted interactions or not \n\n- Ln 147: Meant Algorithm 1?\n - Ln 154-157: Is the complexity reduction in 4.2 from code-aware self-attention adequate enough to apply the method to longer LDPC codes, such as greater than 512, say 1K or 4K lengths? If it’s not, the authors should have stated that clearly as a limitation or future work of the study", " - The paper presents a novel Transformer based generic decoding procedure for typical linear error correction code families (LDPC, Polar etc..). \n- The method does achieve SOTA performance on mid block length codes (~100-200)\n- The key novel contribution of the work was the introduction of positional reliability embeddings and the attention mask, both of which (particularly the mask) uses some clever information theoretic domain understanding. These ideas improve the convergence and hence the performance of the model\n- One key selling point of the paper is that the method is general enough to be applied to any linear code without any modification. Strengths: \n- The paper presents a novel way of using transformers and self attention to achieve strong results in terms of the BER\n- The architecture isn't just using a ML-model and blindly using it for the decoding problem. The training uses some clever ideas from the code construction to improve the convergence.\n\nWeaknesses:\n- The explanation was overall good, but in some places a bit lacking. More detailed comments at the end\n- The authors briefly mention that the runtime/power numbers might be sub-par as compared with non-ML methods.. still it might be great to see the comparison to know where the field is at. \n\nSpecific suggestions:\n- A couple of typos.. line 117 propriety -> property\n- $y_b$ -> although clear, it is not defined in the manuscript\n- The postprocessing an pre-processing is not clear.. please describe more clearly. For example: it is not clear to me (without reading the cited reference, what you mean by \"The post-processing step plugs back the vector elements of y....\"\n- The use of mask is quite nice, and the authors claim that it reduces the complexity by 84%.. does this reduction in complexity lead to reduction in FLOPS/run-time.. or just a symbolic reduction which leads to lower complexity and better convergence? Please clarify - The use of mask is quite nice, and the authors claim that it reduces the complexity by 84%.. does this reduction in complexity lead to reduction in FLOPS/run-time.. or just a symbolic reduction which leads to lower complexity and better convergence? Please clarify\n\n- The intuition behind positional reliability encoding is intuitively clear to me, at least for the elements corresponding to |y_i|.. can the authos explain the intuition behind the encoding for the syndrome bits? \n\n- Comparison with List-SC decoding for polar codes might be interesting. No societal limitations", " This paper considers a very important and interesting topic: the decoding of error-correcting codes. 
\nOver the past few years, there has been a lot of work on applying learning to the decoding problem. Various neural architectures have been considered as well, starting from feedforward networks to recurrent and convolutional neural networks. There also have been a long line of work which introduced learnable parameters to the existing decoder architecture, some of which are considered as a baseline in this paper. \n\nThere has not been a transformer-based channel decoder yet; training a transformer that can successfully decode error-correcting codes is empirically quite challenging. This paper addresses the challenge by utilizing the knowledge of codes in a way; using the appropriate masking and modulating the embedding by syndrome values. This approach is novel and interesting - so I would like to advocate the acceptance of this paper in that regard. \n \n(Strength) \nAs mentioned in detail above, the paper successfully demonstrates a transformer-based channel decoder for the first time. \n\n(Weakness) \nDespite the idea being novel and the results being promising, I have two major concerns. \n\nThe first is whether the comparison between the BP families and the transformer families is fair. Given that the transformer-based decoder is not a purely data-driven decoder, we would expect to see gains compared to the existing decoder. For example, L-layered augmented BP decoders typically have complexity similar to the L-layered BP decoders. Hence, the complexity of the augmented BP decoder is similar to the complexity of the traditional BP decoder but is superior in terms of reliability. [Hyper-BP might be computationally expensive due to the 'hyper network' which selects the weights. However, there are other neural augmented BP algorithms that do not include the weight selection module but are pretty good.]\n\nSecond, the presentation and ablation studies can be further improved. For example, section 6.1 and the description of Figure 5 lack details. Also, I wonder if any similar ablation study can be done for the polar or LDPC or BCH codes. Section 6.2 is interesting and insightful. In Section 3, providing the number of operations would be more informative (and potentially mention vanilla augmented-BP algorithms as well). \n\nFor these reasons, my current score is 4, but I am really at the exact borderline. I would like to listen to the authors; I will go through the rebuttal and fix my score afterwards. \n\n--- \nAfter the rebuttal: The new experiment results (e.g., Appendix G) are exactly are very informative and satisfactory. The reliability gap between the transformer-based decoders and neural BP decoders does not seem huge. (It is a bit hard to interpret the gain from the negative logarithm of the BER, and the SNR gain might be more indicative.) Nevertheless, I still really like the idea of utilizing the transformer architecture for decoding, and the updated manuscript is comprehensive. I'd updated my score accordingly. I'd be curious to see the comparison between the reliability of neural-augmented BP and the transformer (vanilla, no weight adaptation network) -based decoder when their complexities are similar. \n\nI'd be also curious to see the performance of these decoders for non-Gaussian channels. Is there a difference in terms of robustness? The authors mention that the complexity could be potentially improved by several techniques. ", " This paper studies neural decoding of (linearly coded) transmissions in the physical communication layer. 
It proposes a novel architecture that has three main features: it is based on transformers, contains a scaled element-wise embedding of the input, and has an adapted-mask self-attention mechanism. The proposed architecture is compared empirically to existing neural decoders and is shown to outperform them with higher decoding success and less computational complexity. Strengths:\n----------------\n\nStrength 1: The presented method is the first transformer-based decoder and it also gives an efficient architecture.\n\nStrength 2: The proposed method has better decoding performance than other neural network based approaches. \n\nStrength 3: The proposed method allows to input expert knowledge to the system and has lower complexity than other neural network based methods.\n\n\nWeaknesses:\n---------------------\n\nMinor Weakness 1: Only AWGN channel is considered in this work. This may be in line with other neural decoder works, but it would be important to consider also wireless channels with fast fading and e.g., with interference, which change the noise statistics. There is a possibility that the learned methods might provide even further gains in these scenarios by learning the noise correlation statistics.\n\nMinor Weakness 2: Especially with wireless fast fading channels, other modulations than BPSK should be considered, at least 16QAM.\n\nAfter the author feedback (rebuttal):\n------------------------------------------\nI have reviewed the author feedback and they have done good job in clarifying most of the questions the reviewers have posed. I have changed the score accordingly. \n\n\n Question 1: in related work, a dichotomy between model-based on model-free was presented and the authors of the present work place theirs’ in the model-free category. Since the present work uses expert knowledge in how the adapted-mask is generated, one could argue that it belongs somewhere in the middle of these categories. Could the authors elaborate on this?\n\nQuestion 2: Would there be a way to compare neural decoders to other non-learned SOTA decoders? Would it be possible to do more quantitive comparison as well? See above." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "h7TXKmjTNll", "E4njdxncOR7", "nips_2022_4F0Pd2Wjl0", "yL2lOIGjFh", "YecrtciLAeq", "i2n0exlmbhT", "h7TXKmjTNll", "nips_2022_4F0Pd2Wjl0", "nips_2022_4F0Pd2Wjl0", "nips_2022_4F0Pd2Wjl0", "nips_2022_4F0Pd2Wjl0", "nips_2022_4F0Pd2Wjl0" ]
nips_2022_vgIz0emVTAd
DISCO: Adversarial Defense with Local Implicit Functions
The problem of adversarial defenses for image classification, where the goal is to robustify a classifier against adversarial examples, is considered. Inspired by the hypothesis that these examples lie beyond the natural image manifold, a novel aDversarIal defenSe with local impliCit functiOns (DISCO) is proposed to remove adversarial perturbations by localized manifold projections. DISCO consumes an adversarial image and a query pixel location and outputs a clean RGB value at the location. It is implemented with an encoder and a local implicit module, where the former produces per-pixel deep features and the latter uses the features in the neighborhood of the query pixel for predicting the clean RGB value. Extensive experiments demonstrate that both DISCO and its cascade version outperform prior defenses, regardless of whether the defense is known to the attacker. DISCO is also shown to be data and parameter efficient and to mount defenses that transfer across datasets, classifiers and attacks.
Accept
In this paper, DISCO, a test-time defense against adversarial attacks, is proposed based on prior concepts of adversarial denoising, manifold modeling, and implicit functions. The authors show promising efficiency and experimental results for DISCO. However, a large concern raised by some reviewers is the limited novelty, though the authors claimed that the perspective of modeling local statistics and the introduction of the local implicit function for adversarial defense are important contributions. Another concern raised by some reviewers is that robustness is evaluated on norm-bounded attacks only; the authors note that many baselines in RobustBench [25,26,32,33,81,87,99,110,116,119,116] are likewise evaluated only on norm-bounded attacks. Since most reviewers are satisfied with the authors' responses, this work is suggested for acceptance, but the AC hopes the authors continue to clarify the limitations and take recent publications into consideration when further revising the paper.
train
[ "f2ab6Yu6Ya4", "xTJx05aD7x", "Qx-RMNJ-rx9", "OCVuUVQN7v", "1GmX18t_r8O", "KHSgHBBtPgN", "Ud85DTzmTy4", "ak2l36xKZhb", "_PAtjxpA1IT", "6dv9nkAMWze", "WgsjHr4wL7B", "m642s8fS00g", "yzw041YJo-" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers,\nWe appreciate your efforts in reviewing our paper. We have addressed your questions in detail. As the deadline is approaching, would you please check our response and acknowledge our rebuttal?\nThank you so much.\nBest regards,\nAuthors", " Thank you for the thorough response. It has adequately addressed almost all of my concerns, provided that the method and related work are amended as discussed, so that LIIF and existing test-time defenses receive adequate credit to help readers navigate this topic.\n\nThe one remaining concern is the strength of the transfer attack evaluation. For transfer attacks, let me first highlight advice from the initial review:\n\n> Note: AutoAttack by default returns the original input, and not the perturbed input, so make sure to nevertheless use the perturbed inputs for this transfer attack, as near-failures may succeed when transferred.\n\nTo unpack this further, just running the default AutoAttack on the classifier without DISCO is a weaker attack than it could be and should be. There are two issues with AutoAttack for transfer: (1) if the attack does not succeed it returns the original unperturbed input and (2) the first attack to succeed is returned. (1) is an issue because a perturbed input may still cause a misclassification when transferred, as it is applied to a different model, but the original input is obviously an easier input. (2) is an issue because the first perturbation to achieve misclassification may not be the strongest. For example, an iterate of PGD may just barely push the logit for the wrong class higher than the logit for the right class, and at this point AutoAttack will terminate. A stronger transfer attack would keep iterating PGD up to some threshold number of steps to try and make the loss even higher. While (2) is a good efficiency trick when used without transfer, it can result in overestimates of robustness when used with transfer.\n\nThe bottom line is that _running default AutoAttack is not a sufficient transfer attack_. The four attacks it includes are good choices, but these cannot be run with the default library configuration, or else the transferred attacks will not be as strong as they could be. I encourage the authors or future readers to double-check the transfer results to achieve potentially more accurate estimates of robustness.\n\nAt this point I am encouraged to maintain my rating (7/Accept). Although DISCO is limited in its technical novelty, it is empirically novel, and it is highly informative to the community to evaluate robustness across so many models, datasets, and attacks while emphasizing transferrability across classifiers and attack types. If DISCO's robustness holds up to further evaluation, the kind of test-time defense proposed here could be a revealing counterpoint to the mainstream of adversarial training.", " ### Weakness\n\n**W1:** Sorry for the confusion and thanks for the suggestion. The notation $\\hat{f}_{i^*,j^*}$ denotes the concatenated feature whose location is within the kernel size s centered at $p^*$ (L165). The concatenated feature corresponds to the blue bar of dimension $C^* s^* s$ in Figure 5. The notation $E$ and $L$ are the encoder (L156-L157) and the local implicit module (L161), respectively. More specifically, the local implicit module $L$ is a mapping of $L:\\mathbb{R}^{C^* s^* s+2+2} \\rightarrow \\mathbb{R}^{3}$ (See figure 5), which is implemented by 4 layers MLP (each hidden layer has dimension 256) (See L162). 
This module takes the concatenated feature (dimension $C \\cdot s \\cdot s$), the relative position $r=p-p^*$ (dimension 2), and the pixel shape (dimension 2) as input to predict an RGB value $v\\in\\mathbb{R}^{3}$. Since DISCO supports multi-resolution output, the pixel shape is the height and width of the output pixel in normalized coordinates (See L169). Figure 5 will be revised to better match the text.\n\n**W2:** Please refer to General Comment (Novelty) for more discussion.\n\n**W3:** Thanks for the suggestions. The typos will be fixed in the final version. The last row of Table H should be Cifar100, instead of Cifar10.\n\n### Questions\n\n**Q1:** As discussed in Section 2, implicit modules have shown significant success in various fields, including 2D images [13, 24] and 3D shapes [94, 68, 85, 71, 118, 52, 46, 14, 67, 77, 29, 115]. When compared to GAN-based reconstruction methods in 3D, implicit modules have been shown [a, 71, 118, 14, 67, 77] to better reconstruct complex object details. However, even global implicit functions [71, 118, 14, 67, 77] do not excel at generalizing to novel object classes. This limitation has been addressed by the introduction of local implicit functions, such as [a, b, c]. To sum up, the benefits of using a local implicit module for adversarial defense are multifold, including (1) better representational power for capturing local patch statistics, (2) the ability to transfer the defense across datasets, and (3) parameter and computation efficiency (See General Comment (Efficiency)).\n\n[a] Local Deep Implicit Functions for 3D Shape\n[b] Local implicit grid representations for 3d scenes\n[c] Deep local shapes: Learning local sdf priors for detailed 3d reconstruction\n\n**Q2:** The details of the PGD attack used in training and testing are identical and are applied to all the evaluated datasets (unless specified). As discussed in L49-L54 in Appendix G, we adopt the public code for the attack implementation. By default, for the PGD attack, the maximum perturbation is $\\epsilon=8/255$ (See L224), the step size is 2/255, and the number of steps is 100. We will include more implementation details in the appendix.\n\n**Q3:** (There is no Table 9 in the paper; we assume the reviewer is referring to Figure 9.) As shown in Figure 9, we found $K_{def}\\leq 3$ (at most 3 consecutive DISCOs) is enough to defend against attacks that observe 1 to 5 DISCO stages (i.e. $K_{adv}=$ 1 to 5). The table below reports the robust accuracy with both $K_{adv}$ and $K_{def}$ from 1 to 5, which is an extension of Figure 9. Similar to the conclusion in L324-L325, the robust accuracy tends to be lower when the attacker has full knowledge of the number of DISCO cascade stages (i.e. $K_{adv}=K_{def}$).\n\n| | $K_{adv}$=1 | $K_{adv}$=2 | $K_{adv}$=3 | $K_{adv}$=4 | $K_{adv}$=5 |\n| --- | --- | --- | --- | --- | --- |\n| $K_{def}$=1 | 47.2 | 55.3 | 58.9 | 62.4 | 64.2 |\n| $K_{def}$=2 | 59.6 | 52.0 | 57.5 | 57.7 | 60.4 |\n| $K_{def}$=3 | 65.4 | 59.8 | 57.2 | 58.5 | 59.2 |\n| $K_{def}$=4 | 68.6 | 60.9 | 60.0 | 57.3 | 58.5 |\n| $K_{def}$=5 | 69.4 | 64.0 | 60.3 | 58.9 | 57.7 |", " ### Weakness\nDISCO is specifically designed for adversarial defense, instead of image smoothing.
Please refer to General Comment (Novelty).\n\n**W1 (slow inference)**: Please refer to General Comment (Efficiency) and section E in the appendix for more discussion.\n\n**W1 (other attacks)**: Since we mainly evaluated DISCO against norm-bounded attacks, we did not claim DISCO's robustness or its vulnerability against other forms of attacks (e.g., 1-pixel attacks, patch attacks, or functional adversarial attacks) (See L339-L341). This will be investigated in future work. Please refer to General Comment (Limitation) and section 5 for more discussion.\n\n### Questions\n\n**Q1**: Sorry for the confusion; we will carefully differentiate the terms \"image\" manifold and \"patch\" manifold in the final version. In fact, we do make the statement that DISCO does not project the entire image into the manifold, but only the local patch (See L61-L62; L72-L73; caption of Figure 2). Furthermore, we use the term barely outliers to refer to the perturbed images (L27-L29). While there are multiple defense approaches that project the barely outliers to the natural image manifold, these usually rely on global image modeling [66, 99, 88, 123, 95, 5, 105, 86, 53] (L33-L39), which is also adopted by many image synthesis and GAN methods. Unlike prior approaches, DISCO performs the barely-outlier projection by modeling the local patch statistics and repeating the local manifold projection process over all the pixel neighborhoods. As mentioned in L61-L64 and the caption of Figure 2, DISCO performs local manifold projection at each pixel neighborhood, conditional on feature vectors of the adversarial input image. The local modeling requires much smaller parameter and training dataset sizes than global modeling approaches and enables much more precise control of the manifold projection operation. As suggested, we will revise the text in the final version.\n\n**Q2**: Given an input image, DISCO only computes the feature map f=E(x) once, and for each query pixel location, the neighboring feature at the location is extracted and used to predict the RGB value. Note that the RGB prediction at different query pixels can be performed simultaneously in a batch. Please refer to General Comment (Efficiency) and section E in the appendix for more discussion.\n\n**Q3**: Due to the excessive references and related works, we do encourage the reader to refer to related survey papers for a more complete review (See L92-93). Please also refer to General Comment (Baselines) for more discussion.\n\n**Q4**: Thanks for the suggestion. We will fix these language glitches accordingly in the final version.\n\n### **Limitation**\n\nPlease refer to W1 (other attacks).", " ### Weakness\n\n**W1**: DISCO is specifically designed for adversarial defense, instead of denoising. While there are many methods in the denoising literature, there is little evidence that these denoising methods can be directly applied to adversarial defense. Note that both [1,2] are customized for denoising purposes and do not conduct any experiments related to adversarial attacks. Please refer to General Comment (Novelty) for more discussion.\n\n**W2**: Please refer to General Comment (Efficiency) and section E in the appendix for more discussion.\n\n**W3**: Thanks for the suggestion. We will revise the writing as suggested in the final version.\n\n**Limitation**: Please refer to the discussion of W1 to W3.", " ### Weaknesses\n\n**W1**: DISCO is the first to introduce implicit functions for adversarial defense.
While there are other ways to implement implicit functions, we adopt [13] for the DISCO implementation. The purposes of [13] and DISCO are entirely different: the former is designed for super-resolution and the latter for defense. We modified the code of LIIF for DISCO, which consumes adversarial images (Appendix L61). We also proposed the DISCO cascade for better defense (a minimal sketch is given below). We will highlight the credit of LIIF in the main paper\n\n**W2**: The results of [81] listed in Table 1&2 are obtained with WRN70-16, while the results of no defense (first row of Table 1&2) are obtained with WRN28-10. Take Table 1 for example. When compared to [33] (Table 1 row 4), which also uses WRN28-10, DISCO beats [33] on SA (89.26 vs 87.5) and RA (85.56 vs 63.44). [81] also reports the result with WRN28-10 in Table 2 of [81], where the results of SA/RA are 89.90/62.06. Under WRN28-10, [81] harms SA by 4.88, while DISCO harms SA by 5.52. However, DISCO outperforms [81] by 23.5 (85.56 vs 62.06) on RA\n\n**W3**: While [A-D] and DISCO share the idea of adversarial purification, DISCO is a defense that models the local patch statistics. This property results in data and parameter efficiency, which is not shown in [A-D]. Below, we compare DISCO with [A-D]; DISCO beats all 4 baselines. The discussion of [A-D] will be included in the paper\n\nAccording to [A]’s setup, DISCO is evaluated on Cifar10 using WRN28-10 under the PGD40 attack ($\\epsilon=8/255$). While [A] reported SA/RA of 86.14/80.24 using its default setting, DISCO achieves 89.26/80.80. Note that DISCO is also not optimized for this experiment. DISCO also has far fewer parameters than [A] (1.6M vs 29.7M)\n\nUnder AutoAttack, [B] achieves 79.21/40.68 RA (Table 3 of [B]) on the Cifar10/Cifar100 datasets, while DISCO achieves 85.56/67.93 (Table 1 & Appendix Table C). Under the APgd [18] attack, [B] achieves 80.65/47.63 RA (Table 3 of [B]) on the Cifar10/Cifar100 datasets, while DISCO achieves 85.79/77.33 (Appendix Table E & Table 3). DISCO beats [B] on 2 different attacks and datasets\n\nUnder AutoAttack, [C] achieves 67.79/33.16 RA (Table 1&2 of [C]) on the Cifar10/Cifar100 datasets, while DISCO achieves 85.56/67.93 (Table 1 & Appendix Table C)\n\n[D] also compares with SOTA defenses in RobustBench. When the Cifar10 dataset and WRN28-10 classifier are considered, [D] achieves 70.64/78.58 RA (Table 1 & 2 of [D]) under $\\epsilon_\\infty=8/255$ and $\\epsilon_2=0.5$ respectively, while DISCO achieves 85.56/88.47 (Table 1 & Table 2). When ImageNet is considered, [D] achieves 40.93/44.39 RA (Table 3 of [D]) with ResNet50/WRN50, while DISCO achieves 68.2/69.5 (Appendix Table D)\n\n**W4 (Decision-based Attack)**: Since DISCO mainly follows the setting in RobustBench, we do not consider decision-based attacks in this work. We will add this to the limitations. See General Comment (Limitation)\n\n**W4 (Transfer attack)**: DISCO is robust to transfer attacks. Table 4 shows that if the attacked inputs are computed using AutoAttack on the classifier w/o DISCO, the attacked inputs fail when presented to DISCO+classifier\n\n**W5**: Each DISCO in the cascade shares the same weights (Fig. 3(c) & Sec. 3.4). Assume K DISCO stages. The input is passed to the 1st DISCO and the output of the 1st DISCO is passed to the 2nd DISCO. The process is repeated K times and the output of the Kth DISCO is passed to the classifier\n\n### Questions\n\n**Q1**: See W4 (Transfer attack)\n\n**Q2**: See W5\n\n**Q3**: See W1\n\n**Q4**: Fig. 9 shows the results when different numbers of DISCO stages are presented for attack and defense.
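To make the cascade from W5 concrete, a minimal sketch (our illustration; `disco` and `classifier` are hypothetical callables):

```python
def cascade_predict(x, disco, classifier, k: int):
    # Apply the weight-shared DISCO purifier k times, then classify.
    for _ in range(k):
        x = disco(x)  # every stage reuses the same DISCO weights
    return classifier(x)
```

In this notation, $K_{adv}$ is the number of stages the attacker unrolls when crafting the perturbation, while $K_{def}$ is the number actually applied at test time.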
When the numbers of DISCO stages are identical during attack and defense ($K_{def}=K_{adv}$), RA is lower (L324-325), because the attacker has full knowledge of the defense. Note that the BPDA [4] attack is used in Fig. 9, which has been shown to be a strong attack for circumventing defenses with obfuscated gradients\n\n**Q5**: Table 5 shows the SA and RA on ImageNet (1000 classes) when DISCO only observes partial classes (100 and 500 classes) during training. Unlike baselines that train on the entire dataset, DISCO has decent results, even on unseen classes during testing. Note that each class has an equal number of samples (50 per class; See data size in Table 5)\n\n### Others\n\n**References** [A-G] will be added\n\n**Fig 2** shows that DISCO can capture local representations. In practice, DISCO purifies an image without concatenating other images in width. While the suggestion is interesting, it might cause aliasing effects at image borders, even though the effect might not hurt the result. We will study this in the future\n\n**Fig 5**: The arrows mean that the pixel location p loops over the entire image and DISCO will predict an RGB value for each pixel. The arrow direction is meaningless, because the RGB predictions at different pixels can be processed simultaneously and are not dependent on each other. The implicit module is implemented by an MLP (L162). Figure 5 will be revised\n\n**Limitation** will be revised", " ### Weakness\n\n**W1**: Please refer to General Comments (Novelty).\n\n**W2**: Unlike some of the transformation-based baselines [3, 5, 66, 88, 95, 99, 123] that project the adversarial image into the natural image manifold by modeling global statistics, our work introduces a novel perspective, based on local reconstruction, by leveraging the ability of local implicit functions to perform sophisticated modeling of image statistics. Our results, namely the significantly better transfer across datasets and the much greater parameter efficiency, suggest that the combination of a less ambitious task (modeling of local rather than global statistics) and a more powerful local modeling (implicit function instead of GAN or AutoEncoder) is a better trade-off than those of previous approaches. However, although local implicit functions have been quite successful in many domains, including 2D image super-resolution [13] and 3D reconstruction [94, 68, 85, 71, 118, 52, 46, 14, 67, 77, 29, 115], it is difficult to prove theoretically why the implicit function is a better model for capturing local statistics than GANs or AutoEncoders. We believe that this is a research problem on its own, which could benefit multiple domains. Our results certainly show advantages for local modeling with implicit functions, and will likely inspire theoretical work on this question. While we intend to investigate these issues, we leave the theoretical analysis for future work. We believe the paper already demonstrates the power of implicit functions as a solution to the adversarial defense problem. Note that this is the first paper to introduce implicit functions to the adversarial defense problem.\n\n### Questions\n\n**Q1**: Please refer to General Comments (Baselines) for more discussion. While all of [a-c] are related to adversarial defense, their settings are different from those considered in RobustBench [16], which proposed a fair benchmark for evaluation across defenses.
More specifically, [a] only considers a single dataset (MNIST) in their work, and its GitHub repository clearly reveals its failure on the Cifar10 dataset ([https://github.com/Uooga/Local-Flatness-Regularization](https://github.com/Uooga/Local-Flatness-Regularization)). [b] and [c] report CIFAR10 robust accuracies of 47.24 (Table II of [b]) and 52.54 (Table 3 of [c]) under the PGD20 attack with step size 0.003. To compare with [b,c], we apply DISCO on a standard ResNet18 and achieve a robust accuracy of 67.50 on CIFAR10, which outperforms [b,c] by more than 14 points. References [a-c] will be added.", " We thank the reviewers for their thoughtful comments. Major issues are addressed here; minor suggestions will be fixed. General comments are covered here; individual questions are addressed below. SA/RA denotes standard/robust accuracy.\n\n1. **Novelty**: At a high level, DISCO resembles baselines that perform adversarial removal or denoising (L107-116). While the idea of adversarial removal has been used and implemented by prior works [99, 88, 123, 95, 5], these methods model the image globally with GANs or conditional models of pixel statistics (L32-41; L110-116). Prior works [25,119] inspired by the denoising literature have poor performance, and there is little evidence that directly applying denoising methods to adversarial defense can succeed. The restriction of the manifold modeling to small patches is a critical difference between DISCO and prior defenses based on image manifold modeling (L61-62). The use of implicit functions to implement the manifold projection also enables DISCO to model the conditional local image statistics more accurately, which is the key difficulty of prior works. Due to these two properties, DISCO outperforms prior works by a large margin (Table 1, Table 2 and Fig. 6), can produce outputs of various sizes (L116), and can transfer the defense across datasets (L292-302). Overall, while DISCO shares the spirit of prior methods, we believe the perspective of modeling local statistics and the introduction of the local implicit function for adversarial defense are important contributions. \n2. **Baselines**: While we appreciate the additional baselines suggested by the reviewers, we would like to emphasize that DISCO is already compared against, and outperforms, 120+ models on RobustBench [16]. As mentioned by [16], there are more than 3000 papers in the adversarial literature and many of them are evaluated under different criteria. So, fair comparisons are not always easy, which led to the introduction of [16]. Hence, we mostly compare DISCO with baselines on [16] for fair comparisons. Because there are so many results, we intentionally place the tables of numerical results in the appendix and only include the corresponding plots (Fig. 6) in the main paper. This is to help readers focus on the important ideas. As suggested, we will further shorten the reference list and keep only the essential ones.\n3. **Efficiency**: As discussed in Sec. 3.4 (L189-202), DISCO is parameter efficient (1.6M) compared to SOTA classifiers (for example, ResNet101 has 44.5M; See Fig. 4), because DISCO only operates on local patches instead of the entire image. Given this efficiency, the defense complexity can be analyzed with respect to the (a) training and (b) testing phases. Consider ImageNet, for example. \n \n (a) For training, DISCO only requires 0.5% of the training data (L195), while the adversarial training methods require the entire training set. In addition, unlike adversarial training methods that compute adversarial examples on-the-fly, we precompute the adversarial images in the data preparation stage (See Fig. 3(a)), which expedites the training process.
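As a sketch of this offline data-preparation step (our illustration in PyTorch; the PGD hyper-parameters follow Q2 above, while the function names are hypothetical):

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8/255, alpha=2/255, steps=100):
    # Standard L_inf PGD, run once per image against a fixed classifier
    # during data preparation (not during DISCO training).
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)             # keep a valid image
    return x_adv.detach()

# The stored (adversarial, clean) pairs are then used to train DISCO with an
# L1 objective on matching patches, with no classifier forward pass needed.
```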
Finally, since DISCO is classifier agnostic (See L287-291), no forward pass through the classifier is required during training. Altogether, these properties make DISCO significantly efficient to train.\n \n (b) For testing, the adversarial examples have to be passed through DISCO + classifier, resulting in a memory cost of $O(N_c + N_d)$ (See L203-L207). However, it is usually the case that $N_c$ > $N_d$ (See Fig. 4), making the complexity of the classifier larger than that of DISCO. In addition, Table 4 and Table H of the appendix show that DISCO + robust classifier further improves RA. For clarity, we compare the FPS of the SOTA method [81] and DISCO+[81] (See Table H), on our machine (L218). The former achieves 33.7 FPS and the latter 29.3 FPS. While adding DISCO lowers the FPS by 4.4, it also increases the RA by 4.13 points (66.58 vs 70.71; See Table H). On the other hand, when compared to STL [99] (See Table J and section E in the appendix), DISCO is 5.9x faster. To sum up, while adding DISCO incurs a slight increase in computing cost, this cost is minor and enables a large increase in RA. We believe that the latter outweighs the former and DISCO is superior when all factors are considered.\n \n4. **Limitations**: DISCO is mainly evaluated under norm-bounded attacks and we leave the investigation of other types of attacks for future work (L339-343). While it would be ideal to develop a universal defense, robust to all types of attacks (patch attacks, decision-based attacks, functional adversarial attacks, etc.), most defenses are only evaluated on certain types of attacks. For example, many baselines in RobustBench [25,26,32,33,81,87,99,110,116,119,116] are evaluated only on norm-bounded attacks, without even discussing other types of attacks. We hope that our disclosure of DISCO’s limitation will not be a reason to penalize it.", " The authors propose DISCO to remove adversarial perturbations by localized manifold projections. They aim to output the clean RGB value for an adversarial image and a pixel location. Their method is built upon the assumption that the manifold projection required for adversarial defense is conditioned on the synthesis of a natural image given the perturbed one, which can be defined as a function of local image patches instead of the whole image. Strengths:\n1-- The paper is well-written,\n2-- The studied problem is an interesting problem,\n3-- Extensive experimental analysis.\n\nWeakness:\n1-- Lack of sufficient novelty,\n2-- Lack of theoretical analysis.\n\n1-- Can the performance be compared to these works?\na) Xu et al. \"Adversarial defense via local flatness regularization\"\nb) Li et al. \"Semi-supervised robust training with generalized perturbed neighborhood\"\nc) Bai et al.
\"Clustering effect of adversarial robust models\" The authors have included an important limitation for their work and it an interesting problem to study.", " DISCO is a test-time defense against adversarial attack that removes perturbations from inputs by projecting them onto a local implicit representation.\nThe local implicit representation of an input pixel is encoded by a convolutional network then decoded by an MLP given the coordinates of the pixel and its nearest convolutional features.\nThis approach to image encoding and decoding follows LIIF, which proposed implicit representations for tasks like super-resolution, but not adversarial defense.\nThe representation is trained on paired data of clean and adversarial inputs, where the adversarial inputs are generated by a standard attack such as PGD, by sampling matching patches from the clean and adversarial inputs and minimizing the L1 distance between the decoded adversarial patches and the clean patches.\nBy operating on local patches and their corresponding convolutional features, DISCO is able to generalize without excessive amounts of training data or iterations, and do so with a much reduced total number of parameters compared with other defenses.\nSince DISCO takes input images and makes output images, it can be composed into a cascade to leverage more computation for more robustness.\nExperiments cover standard benchmarks like RobustBench, the usual datasets of CIFAR-10 and ImageNet, and a variety of defense baselines with adversarial training and older test-time transformation-based defenses.\nDISCO achieves state-of-the-art robustness against the standard attacks like AutoAttack, and more distinctly shows transfer across attacks, datasets, and architectures, which distinguishes it from the dominant adversarial training approaches to defense.\n\n Strengths\n\n- Robustness: DISCO achieves state-of-the-art robustness across datasets (CIFAR-10, ImageNet), norms (L_inf, L_2), and architectures (ResNets, Wide ResNets, different training methods).\n Furthermore, test-time defense with DISCO complements train-time defense by adversarial training, and composing the two is more robust still (Table 4).;\n- Efficiency: DISCO is efficient to train and test in terms of computation, data, and parameters. As a spatially-local defense, its computation by convolution is efficient, and its generalization is improved by its smaller input dimensions (of small patches, rather than whole images).\n As a local convolutional/implicit model, DISCO only needs <0.1x the parameters of current classifiers and generative models deployed for defense, and it may be computed incrementally across pixels, so its memory usage is minimal.\n Generalization is shown by training on <1% of the training data (as patches), as opposed to the 100% normally used for adversarial training (as images), while still achieving improved robustness.\n- Transferrability: DISCO is trained against PGD but evaluated on the standard RobustBench suite as well as a collection of other (generally weaker, but still diverse) attacks like BIM and FGSM. This sort of transferrability is not easily achieved for adversarial training, and such methods generally re-train the model against different attacks, which requires great computational expense.\n- Attack/Defense Resource Asymmetry: DISCO requires more computation of the attacker than the defender, which may hinder the practicality of attack. 
This is demonstrated for gradient-based attacks (Figure 10) in computation time and is also true of memory if doing full BPDA.\n However, this is not necessarily true of black-box attacks, which do not scale in the same way. That said, this point is softened by DISCO's robustness to the black-box Square attack.\n\nWeaknesses\n\n- The method is essentially LIIF [13], and its technical contributions w.r.t. LIIF need to be highlighted, with credit given to LIIF for the foundation.\n While LIIF is cited in passing in the related work, this is not a sufficient or appropriate amount of acknowledgement, when Sections 3.1, 3.2, and 3.3 generally follow from it.\n One difference is that DISCO is trained for adversarial denoising, with paired clean/attacked images, but the architecture, inference, and general scheme of local implicit image functions are all due to LIIF.\n DISCO resembles LIIF all the way down to its specific implementation, with an EDSR-like architecture and input patch size of 48x48, for example.\n- DISCO harms standard accuracy by 4-5 points absolute, while competing adversarial training defenses can lose less (~2 percentage points, for Rebuffi et al. [81], for example).\n- The related work and experiments exclude a whole wave of more recent test-time defenses, some of which are also agnostic to the classifier, as DISCO is.\n Please see references [A-D] below (in chronological order of first appearance). This gap in scholarship is significant, and would be a reason for rejection if it were to go unaddressed.\n- Note that many test-time defenses have claimed large boosts in robustness, but further evaluation showed the gains to be exaggerated [E].\n While the experiments in this work broadly cover different datasets, architectures, and attacks, there are nevertheless gaps.\n 1. There is no decision-based attack. Square is black-box, but still depends on confidence, and RayS or Boundary have been found to succeed in cases where Square fails.\n 2. There is no transfer attack from the trained classifier (without the defense) to the composition of DISCO and the classifier. Such transfer attacks, which are valid in the white-box setting, can succeed against input purification defenses like DISCO.\n- (Minor) Cascade DISCO is not clearly described, though a reader may guess that it is the iterated composition of DISCO with itself, where the decoded output of one step is re-encoded as the input to the next step.\n\n[A] Adversarial Purification with Score-based Generative Models. Yoon et al. ICML'21.\n\n[B] Combating Adversaries with Anti-Adversaries. Alfarra et al. AAAI'22 (arXiv'21).\n\n[C] Adversarial Attacks are Reversible with Natural Supervision. Mao et al. ICCV'21.\n\n[D] Diffusion Models for Adversarial Purification. Nie et al. ICML'22.\n\n[E] Evaluating the Adversarial Robustness of Adaptive Test-time Defenses. Croce et al. ICML'22 (arXived three months before the deadline, in February).\n\n Questions\n\n- Is DISCO robust to a transfer attack on the underlying classification model? That is, if AutoAttack is applied to the nominally or adversarially trained classifier without DISCO, and then these attacked inputs are presented to DISCO + the classifier, do the attacks succeed?\n This is a valid attack under a white-box threat model, given the possibility of training surrogate models.
(Note: AutoAttack by default returns the original input, and not the perturbed input, so make sure to nevertheless use the perturbed inputs for this transfer attack, as near-failures may succeed when transferred.)\n- How is the DISCO cascade computed? Is it simply the iterated application of the DISCO encoder and decoder on their outputs?\n- Please discuss the contribution relative to LIIF [13]. It seems that DISCO should be given the empirical credit for the application of local implicit functions to defense, but much of the technical contribution was established by LIIF. (This can be fine, but clear credit attribution is a part of good scholarship.)\n- Please explain how additional attack iterations are _worse_ in Figure 9. Is this not a symptom of an issue with the chosen attack? More steps should not hurt, unless the unrolling is in effect resulting in vanishing gradients, causing a kind of obfuscation that is not actually strongly measuring the defense itself.\n- (Minor) What should we conclude from the sensitivity of DISCO to the number of classes for training (Table 5)? Does this suggest that DISCO requires class-comprehensive and class-balanced data if it is to be effective? Could there be a way to make it more class-agnostic?\n\nOther Feedback\n\n- As DISCO is a test-time defense, these related strong and recent test-time defenses could be of interest: DiffPure [F] and LINAC [G].\n To be clear, this is just an FYI, and the existence of these concurrent papers published after the deadline at ICML'22 has no bearing on this review.\n- The claim about relative complexity on lines 205-206 holds not just for adversarial training, but for any defense (including test-time defenses) that takes model gradients (such as self-supervised input purification, SOAP).\n- Figure 2 makes an important point about local representation, but does so confusingly. Consider re-captioning the figure to highlight the point that DISCO is spatially local, and so can defend different content in the same image, and learn to defend many images by training on patches from a single image.\n Rather than pasting one image into the corner of another, why not just concatenate the images side-by-side in width? By the way, does DISCO actually fix these inputs? How is each classified?\n- Figure 5 has unclear elements. What do the arrows mean in orange and red? Are they simply an ordering of the input pixels? Consider placing a box around the loss to indicate the training phase, since inference only queries the implicit module without a loss. Consider depicting the implicit module as an MLP (visualized as a stack of layers, for example) to more fully summarize the architecture of the defense.\n\n[F] Diffusion Models for Adversarial Purification. Nie et al. ICML'22\n\n[G] Hindering Adversarial Attacks with Implicit Neural Representations. Rusu et al. ICML'22\n\n Empirical limitations and societal impacts are discussed.
The main empirical limits are the need to try different attack families, with sparse spatial attacks suggested in this work and decision-based attacks suggested in this review, and the need to experiment on more classifiers, datasets, etc.\nThis amount of discussion is adequate, but it would be better still to acknowledge that the evaluated attacks (like AutoAttack) were primarily designed for train-time defenses, not test-time defenses, and so more work is likely needed to design adaptive attacks on implicit representations and other test-time methods.", " This paper proposes an implicit function-based method for adversarial defense. It consumes an attacked image as input and predicts a clean image. The proposed `implicit module` uses the context information around a query pixel to reason about the center pixel.\n\nThe experimental results on the CIFAR10, CIFAR100 and ImageNet datasets are in favor of the proposed method. +. the method is clean, simple and seems effective.\n+. the experimental results are in favor of the method.\n\n- the novelty of the method is limited. There are a lot of similar methods in the field of denoising which also use a similar strategy of denoising with surrounding context. The author should clearly identify the difference between the proposed method and those denoising methods, like [1, 2].\n- the proposed method is tedious compared with adversarial training methods because the proposed method requires two forward passes, one for obtaining the clean image and the other for classification, while adversarial training methods require only a single forward pass.\n- The writing quality is somewhat lacking; the overall method is simple and clear, but some of the writing doesn't follow a good standard. For example, in L126, you can either remove the section description, or give a more detailed one like \"in this section, we xxxx, we first xxxx, then we xxxx, finally we xxx\", rather than just a single sentence.\n\n\n[1] Batson, Joshua, and Loic Royer. \"Noise2self: Blind denoising by self-supervision.\" International Conference on Machine Learning. PMLR, 2019.\n[2] Laine, Samuli, et al. \"High-quality self-supervised deep image denoising.\" Advances in Neural Information Processing Systems 32 (2019). NA\n- Novelty\n- not clear the difference with denoising methods\n- writing quality", " The authors propose the use of local implicit functions to undo adversarial perturbations. This is inspired by related works on image manifolds, specifically image synthesis, as well as recent progress on implicit representations for 2D/3D data. Namely, instead of processing the entire image, the authors propose to process individual pixels conditioned on local information from a small patch, which is projected by an MLP to a corrected value in the spirit of implicit neural representations. Training is based on minimizing the L1 distance of the MLP output against the clean pixel RGB. An extensive set of experiments shows that the proposed approach outperforms prior defenses in a number of settings, with superior transferability.
Strengths:\n- Disentangles the training of the defense modules from the base classifier.\n- A significant saving in the number of parameters in the defense network.\n- Considers the relative cost of training the defense vs. attack modules, which can be magnified by repeating the projection up to K times.\n- Improved robustness and transferability over known defenses.\n\nWeaknesses:\n- This is essentially an image smoothing approach, which comes with the following drawbacks:\n- - Each pixel-patch is passed through the MLP, which is likely to slow down the inference process.\n- - It assumes norm-bounded attacks, leaving it susceptible to other forms of attacks as the authors mention (e.g., 1-pixel, patch) in addition to, e.g., functional attacks [1].\n\n[1] Laidlaw, C., & Feizi, S. (2019). Functional adversarial attacks. Advances in neural information processing systems, 32.\n\n1. Mainly, I disagree with the high-level description of the approach as a projection on the \"image\" manifold while it is more of a \"patch\" manifold [2]. Similarly, the term barely outliers, as used to refer to perturbed inputs, is not the same thing as manipulating individual patches. This contrast is also made more apparent due to the repeated references and comparison to image synthesis and GANs. Taken together, it is not clear how global information is incorporated in the encoded features used by the implicit module, so I feel that more work is needed to explain the improved performance of the proposed model.\n\n2. I would like to see some timing statistics. L174 (\"Note that this is not computationally intensive because the encoder feature map f = E(x) is computed once and used to the predict the RGB values of all query pixel locations.\") does not tell the whole story.\n\n3. The references are somewhat excessive. While this is an appreciated effort, it is difficult for readers to trace the important ideas. I would encourage the authors to revisit the long sequences of references, leaving only 2-3 in the main text, and perhaps move the rest to a \"see also ...\" add-on remark.\n\n4. There are quite a few simple language glitches. Please double-check. For example:\nL128: lie a\nL140: trained projecting\nL144: is the generated by\nL154: for each pixel locations\n\n[2] Peyré, G. (2009). Manifold models for signals and images. Computer vision and image understanding, 113(2), 249-260. The main limitation is the assumption of norm-bounded perturbations.", " This paper provides a way, named aDversarIal defenSe with local impliCit functiOns (DISCO), to protect classifiers from being attacked by adversarial examples. DISCO is composed of two parts: an encoder and a local implicit module. For inference, DISCO takes an image (either clean or adversarially perturbed) and a query pixel location and outputs an RGB value that is as clean as possible. After the process, the new output image is expected to wipe out all the adversarial perturbation on it, making the classifier predict with high accuracy. In summary, I think that DISCO is one type of denoising model that aims to be adversarially robust. Strengths:\n\n1. The experimental results are rich. Moreover, it seems that the robust accuracy of DISCO outperforms other existing methods, even adversarially trained or transformation-based methods.\n2.
Transferability of DISCO: In the experiment section (Section 4), the authors assert that even though the datasets/classifiers/attack methods used in the training and testing phases are different, DISCO outperforms the listed robust methods, as shown in Table 7. This particularly shows that, unlike adversarially trained models, the robustness of the proposed model is independent of which attack/data is considered.\n\n\nWeaknesses:\n\n1. In lines 153-155; and lines 165-170: This should be a core part of the proposed method. However, the notations and interpretations are not clear. For example, what does \"the pixel shape as input\" mean? How does the local implicit module $L$ take the three parts into consideration to predict a clean RGB value? Moreover, I can't see any notation in Figure 5 that is used in the text, like $E$, $L$, $\\hat{f}_{i^\\ast j^\\ast}$, etc. So it seems like the figure does not help readers to understand the main method in this paper, but rather makes them more confused.\n2. The novelty is not enough: Since the heuristic idea of adversarial removal has been developed for several years, it seems that the proposed method mainly follows this idea, with a slight modification.\n3. A few typos are listed here:\n\n a. In the main paper, line 144: is the generated by… → is generated by…\n\n b. In Supplementary, Table H, the last row: Cifar10 → Cifar100 (maybe?).\n\n 1. What is the benefit of the local implicit module? Is it better than a global module only for efficient computation time?\n2. What are the iterations of your PGD attack during training and testing?\n3. In Table 9, why does K only have 3 values for defense while attacks have 5 values?\n Yes. The authors of this study addressed the limitations and potential negative social impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4, 3 ]
[ "ak2l36xKZhb", "KHSgHBBtPgN", "yzw041YJo-", "m642s8fS00g", "WgsjHr4wL7B", "6dv9nkAMWze", "_PAtjxpA1IT", "nips_2022_vgIz0emVTAd", "nips_2022_vgIz0emVTAd", "nips_2022_vgIz0emVTAd", "nips_2022_vgIz0emVTAd", "nips_2022_vgIz0emVTAd", "nips_2022_vgIz0emVTAd" ]
nips_2022_6rhl2k1SUGs
Watermarking for Out-of-distribution Detection
Out-of-distribution (OOD) detection aims to identify OOD data based on representations extracted from well-trained deep models. However, existing methods largely ignore the reprogramming property of deep models and thus may not fully unleash their intrinsic strength: without modifying parameters of a well-trained deep model, we can reprogram this model for a new purpose via data-level manipulation (e.g., adding a specific feature perturbation). This property motivates us to reprogram a classification model to excel at OOD detection (a new task), and thus we propose a general methodology named watermarking in this paper. Specifically, we learn a unified pattern that is superimposed onto features of original data, and the model's detection capability is largely boosted after watermarking. Extensive experiments verify the effectiveness of watermarking, demonstrating the significance of the reprogramming property of deep models in OOD detection.
Accept
The reviewers agree that the proposed method is interesting and yields good performance. A number of concerns were raised during the initial round of reviews concerning the rigorousness and completeness of experiments, but these were addressed during extensive back-and-forth between authors and reviewers.
train
[ "MFKTygbr2kF", "YOvmwWDqPe6", "8oJGA4TT3u8", "S_P8xvp0qd8", "1mvxr7LLXCe", "L3r7CEiNgw", "IYAd9rZzf_y", "GwShy9bg7YC", "jDM4sgM4xo", "iMQSsvedng_", "xm3bOit9_5s", "FUxiyzCb8Iz", "FO-kWtmDjbV", "qFl9vurc9Z_", "Zv-v1dOkmFn", "vThqHlEQTR_", "0Vqvr1OWuwcc", "au3MH8XUIUy", "BdegYlMim1_", "Mtd4mcWvqrZ", "viaC98ZBSa6", "QRP8ENGCcmI", "0BXWdZ-jPkk", "bA_UWXls75g", "zGX08tJxYNB", "wQz2Ajr_dP", "3vJgE0XW8Kk", "u0CVen1DNY1", "raoN27hC1w7", "BPD9S8oB4bZ", "07-0TcR_qU", "OujBX2wyvwJ" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer WTn8,\n\nGlad to hear that your concerns are addressed well. Thanks for supporting our paper to be accepted.\n\nBest regards,\n\nAuthors of #1621", " Sincerely thanks for the constructive suggestions/comments of all the reviewers. We have correspondingly revised the current submission and marked the revision in blue color in the latest submission.\n\n**For Reviewer dM3K**:\n\nThe following points have been added to our revision:\n\n- We describe the hyper-parameter tuning strategy in Section 6 (lines 222-235) and Appendix C.7 (lines 684-686), emphasizing the candidate value sets for the considered hyper-parameters and the random search tuning strategy with validation sets separated from the test situations. \n\n- We add the experiments in Appendix C.2 with Table 11 (MaxLogit) to demonstrate the power of watermarking can benefit from better choices of scoring strategies. \n\nThe following point will be added to our revision:\n\n- We will adopt the new hyper-parameter tuning strategy with random search and tiny-ImageNet (an OOD dataset). It will substitute our current tuning strategy to further reflect the generality and effectiveness of watermarking. \n\n**For Reviewer WTn8**:\n\nThe following points have been added to our revision:\n\n- We further discuss the optimization procedure in Eq. (5) (lines 162-166) to clarify our purpose.\n\n- We list the performance with free energy scoring on CIFAR benchmarks with different choices of T in Table 37, demonstrating the influence of its value on the performance of free-energy scoring-based watermarking. \n\n- We move the ImageNet experiments and the asscoiated discussions to the main content (Tables 3-4 and lines 257-264), better demonstrating the power of our watermarking strategy. \n\n- We use ID and OOD consistently throughout this paper (instead of IN and OUT in Eq. 4, 9-10), making our description clearer. \n\n**For Reviewer qcRd**:\n\nThe following points have been added to our revision:\n\n- We refine the motivation in reprogramming property for OOD detection (lines 31-34) and the heuristics in why our proposed learning framework works well (lines 40-53 and 126-131). \n\n- We describe the hyper-parameter tuning strategy in Section 6 (lines 222-235) and Appendix C.7 (lines 683-686), emphasizing the candidate value sets for all the hyper-parameters and the tuning strategy of random search. \n\n- We add the near-OOD detection experiments (Table 5) and the associated discussion (lines 265-287) in Section 6, revealing that our watermarking can excel at this challenging setting. \n\n- We add the experiments (Table 24) with different backbone models in Appendix C.6, demonstrating that our proposal is general when facing various model architectures. \n\n- We correct the typos (Eq. 3, Eq. 10, Table 7, lines 314-317) that appear in our previous version. \n\n- We move the discussions about test accuracy and masking to Appendix C.1 and C.8. \n\n- We rename the \"Ablation Study\" section by \"Effect of Hyper-parameters.\" \n\n- We add experiments (Tables 16-19) about \"perm\" and \"rotate\" in Appendix C.3, demonstrating the possibilities in using the shifting augmentations for our watermarking strategy. \n\n\nThe following point will be added to our revision:\n\n- We will adopt the new hyper-parameter tuning strategy with random search and tiny-ImageNet (an OOD dataset). It will substitute our current tuning strategy to reflect further the generality and effectiveness of our watermarking strategy. 
\n\n\nBest regards, \n\nAuthors", " Dear Reviewer dM3K:\n\nThanks for your great efforts in reviewing and good questions here. We really hope that our answer can help to clarify. Since the discussion due is approaching, please let us know if anything we could further clarify.\n\nBest regards,\n\nAuthors of #1621", " Thank you for your reviews. Please see our response below, or at https://openreview.net/forum?id=6rhl2k1SUGs&noteId=1mvxr7LLXCe (above your new comments).\n\n>Could the authors try the two new augmentations (permute and rotate) for the other experiments as well? It doesn’t seem fair or practical that the with watermark option has 3 different variants. What variant should one use in practice? Is the rotate augmentation the best overall? Also, are the results setting CIFAR10 as in or CIFAR 100? Either way, I would like to see the other variant as well.\n\n**Response 1:** In the above experiments, we want to reveal the power of our watermarking strategy in near OOD detection and demonstrate the possibility for further improvements.\n\nFollowing your kind suggestion, we conduct experiments on CIFAR benchmarks with the softmax scoring and the free energy scoring, and the average results (**regarding iSUN, Places365, Texture, SVHN, and LSUN**) with \"perm\" and \"rotate\" can be seen in the following table. As we can see, the results with \"perm\" (w/ watermark permute) and \"rotate\" (w/ watermark rotate) is comparable with (even better than) the original learning setup with only the Gaussian noise (w/ watermark common). \n\n| CIFAR-10 softmax scoring | FPR95 | AUROC | AUPR |\n|:------------------------:|:-----:|:-----:|:-----:|\n| w/o watermark | 55.69 | 89.98 | 97.70 |\n| w/ watermark common | 41.29 | **93.34** | **98.53** |\n| w/ watermark permute | 41.03 | 92.87 | 98.46 |\n| w/ watermark rotate | **41.02** | 92.72 | 98.38 |\n\n| CIFAR-100 softmax scoring | FPR95 | AUROC | AUPR |\n|:-------------------------:|:---------:|:---------:|:---------:|\n| w/o watermark | 83.14 | 74.04 | 93.50 |\n| w/ watermark common | 81.77 | 74.30 | 93.71 |\n| w/ watermark permute | **79.18** | **76.81** | **94.36** |\n| w/ watermark rotate | 79.23 | 76.49 | 94.27 |\n\n| CIFAR-10 free energy scoring | FPR95 | AUROC | AUPR |\n|:----------------------------:|:---------:|:---------:|:---------:|\n| w/o watermark | 37.78 | 90.53 | 97.46 |\n| w/ watermark common | 23.63 | 95.61 | 98.86 |\n| w/ watermark permute | 23.70 | 95.66 | 98.84 |\n| w/ watermark rotate | **22.46** | **95.80** | **98.90** |\n\n| CIFAR-100 free energy scoring | FPR95 | AUROC | AUPR |\n|:-----------------------------:|:-----:|:-----:|:-----:|\n| w/o watermark | 81.41 | 76.88 | 94.13 |\n| w/ watermark common | **76.95** | 79.72 | 95.17 |\n| w/ watermark permute | 77.09 | **80.75** | **95.44** |\n| w/ watermark rotate | 79.71 | 79.86 | 95.12 |\n\nIt is difficult to make a conclusion which one is better according to our results, which is an interesting problem to study in the future.\n\n>All in all, due to the heuristic nature of watermarking, At the current state, due to the tuning protocol, it’s hard for me to believe the results. \n\n**Response 2:** We have demonstrated why the watermarking strategy works in the above new response (see https://openreview.net/forum?id=6rhl2k1SUGs&noteId=GwShy9bg7YC). We hope it can help you understand why the training objective of the watermarking strategy is useful.\n\n>I want the experiments to be very carefully done. 
It seems to me that watermarking is showing an improvement compared to the baseline because it gets 3 extra hyperparameters to tune, and because those parameters were tuned on the same distribution as the test set.\n\n**Response 3:** When we prepare an OOD detection work, we have to make a fair comparison with previous works. Namely, we have to use their validation sets for a fair comparison. However, we totally agree with you about hyperparameter tuning. We have conducted new experiments using tiny-ImageNet (NOT the same distribution as the test set) as a validation set. The results also show the effectiveness of our methods. We hope this can relieve your concerns. In fact, using tiny-ImageNet as a validation set can also be regarded as an experimental contribution to the whole field, which is indeed more reasonable for the OOD detection problem. **Lastly, please receive our sincere thanks. Your reviews are actually helpful for the whole OOD detection community.**", " Thank you for your reviews. Here are responses to your new comment below (https://openreview.net/forum?id=6rhl2k1SUGs&noteId=iMQSsvedng_).\n\n>Could the authors try the two new augmentations (permute and rotate) for the other experiments as well? It doesn’t seem fair or practical that the with watermark option has 3 different variants. What variant should one use in practice? Is the rotate augmentation the best overall? Also, are the results setting CIFAR10 as in or CIFAR 100? Either way, I would like to see the other variant as well.\n\n**Response 1:** In the above experiments, we want to reveal the power of our watermarking strategy in near-OOD detection and demonstrate the possibility for further improvements.\n\nFollowing your kind suggestion, we conduct experiments on CIFAR benchmarks with the softmax scoring and the free energy scoring, and the average results (**regarding iSUN, Places365, Texture, SVHN, and LSUN**) with \"perm\" and \"rotate\" can be seen in the following table. As we can see, the results with \"perm\" (w/ watermark permute) and \"rotate\" (w/ watermark rotate) are comparable with (even better than) the original learning setup with only the Gaussian noise (w/ watermark common). 
\n\n| CIFAR-10 softmax scoring | FPR95 | AUROC | AUPR |\n|:------------------------:|:-----:|:-----:|:-----:|\n| w/o watermark | 55.69 | 89.98 | 97.70 |\n| w/ watermark common | 41.29 | **93.34** | **98.53** |\n| w/ watermark permute | 41.03 | 92.87 | 98.46 |\n| w/ watermark rotate | **41.02** | 92.72 | 98.38 |\n\n| CIFAR-100 softmax scoring | FPR95 | AUROC | AUPR |\n|:-------------------------:|:---------:|:---------:|:---------:|\n| w/o watermark | 83.14 | 74.04 | 93.50 |\n| w/ watermark common | 81.77 | 74.30 | 93.71 |\n| w/ watermark permute | **79.18** | **76.81** | **94.36** |\n| w/ watermark rotate | 79.23 | 76.49 | 94.27 |\n\n| CIFAR-10 free energy scoring | FPR95 | AUROC | AUPR |\n|:----------------------------:|:---------:|:---------:|:---------:|\n| w/o watermark | 37.78 | 90.53 | 97.46 |\n| w/ watermark common | 23.63 | 95.61 | 98.86 |\n| w/ watermark permute | 23.70 | 95.66 | 98.84 |\n| w/ watermark rotate | **22.46** | **95.80** | **98.90** |\n\n| CIFAR-100 free energy scoring | FPR95 | AUROC | AUPR |\n|:-----------------------------:|:-----:|:-----:|:-----:|\n| w/o watermark | 81.41 | 76.88 | 94.13 |\n| w/ watermark common | **76.95** | 79.72 | 95.17 |\n| w/ watermark permute | 77.09 | **80.75** | **95.44** |\n| w/ watermark rotate | 79.71 | 79.86 | 95.12 |\n\nIt is difficult to conclude which one is better according to our results, which is an interesting problem to study in the future.\n\n>All in all, due to the heuristic nature of watermarking, At the current state, due to the tuning protocol, it’s hard for me to believe the results. \n\n**Response 2:** We have demonstrated why the watermarking strategy works in the above new response (see https://openreview.net/forum?id=6rhl2k1SUGs&noteId=GwShy9bg7YC). We hope it can help you understand why the training objective of the watermarking strategy is useful.\n\n>I want the experiments to be very carefully done. It seems to me that watermarking is showing an improvement compared to the baseline because it gets 3 extra hyperparameters to tune, and because those parameters were tuned on the same distribution as the test set.\n\n**Response 3:** When we prepare an OOD detection work, we have to make a fair comparison with previous works. Namely, we have to use their validation sets for a fair comparison. However, we totally agree with you about hyperparameter tuning. We have conducted new experiments using tiny-ImageNet (NOT the same distribution as the test set) as a validation set. The results also show the effectiveness of our methods. We hope this can relieve your concerns. In fact, using tiny-ImageNet as a validation set can also be regarded as an experimental contribution to the whole field, which is indeed more reasonable for the OOD detection problem. **Lastly, please receive our sincere thanks. Your reviews are actually helpful for the whole OOD detection community.**", " Thank you for your comments. Note that a reply to your \"Response to A1.3\" can be found at https://openreview.net/forum?id=6rhl2k1SUGs&noteId=GwShy9bg7YC (another reply to your new comments). We will focus on your \"Response to A2\" here.\n\n>**Subquestion 1**: \"Am I understanding correctly that the hyperparameters were tuned on the validation sets corresponding to the test OOD datasets? If so, I think it’s wrong to tune hyperparameters with data that are from the same distribution as the test OOD data. 
Tuning the hyperparameters this way gives the model access to the very OOD classes that the model is trying to distinguish, and this is an extra degree of freedom that the baseline does not get to enjoy. The correct way to tune the hyperparameters would be to use a validation dataset that is semantically separate from the test OOD datasets\". \n\n**Response 1**: We totally agree with you. For a fair comparison, we follow [3, 4]: we do not use any data point in the considered test datasets, and the adopted validation datasets follow the setups in many previous works [3,4]. It means that our hyper-parameter tuning strategy is proper in comparison with previous works, and the training procedure does not involve any data about the test OOD cases.\n\nHowever, your suggestion of using a semantically different validation dataset is correct (we totally agree with this point), and we believe it will become the standard tuning strategy in this field in the future. Therefore, we will completely follow your suggestion with tiny-ImageNet for hyperparameter tuning in our revision. Here, we want to demonstrate that our watermarking strategy is robust to various hyper-parameter settings, due to our observation that there exists a similar trend in the preference of hyperparameters, in **using the average detection performance on the adopted validation datasets (current strategy)** and **using the detection performance on tiny-ImageNet**. \n\nThe following results regarding FPR95 (softmax scoring-based watermarking on CIFAR-10) are a verification of our claim, where the \"**candidate**\" rows represent randomly selected sets of hyperparameter setups from Appendix C.6. We find that for your suggested tuning strategy with tiny-ImageNet, the optimal one (the last row \"**optimal**\" in the following table) is the same as that used in our paper. We sincerely appreciate your constructive suggestions in hyper-parameter tuning, and we will follow your suggestion with tiny-ImageNet in our revision, replacing all the results from Tables 17-28. \n\n| | $\sigma_1$ | $\rho$ | $\beta$ | Average Validation | tiny-ImageNet |\n|:---------:|:----------:|:------:|:-------:|:------------------:|:-------------:|\n| candidate | 2.0 | 5.0 | 5.0 | 94.94 | 95.05 |\n| candidate | 1.6 | 1.0 | 4.0 | 62.11 | 74.50 |\n| candidate | 1.2 | 0.5 | 3.0 | 63.52 | 72.70 |\n| candidate | 0.8 | 0.1 | 2.0 | 46.75 | 62.50 |\n| candidate | 0.4 | 0.05 | 1.0 | 48.66 | 62.90 |\n| optimal | 0.4 | 1.0 | 3.5 | **42.43** | **58.30** |\n\n>**Subquestion 2**: \"Ideally, the baseline (no watermark) hyperparameters (learning rate at the very least) should also be tuned on the validation dataset such that both methods are calibrated on the same dataset. This is reasonable, since models are usually tuned on the in-distribution validation set– we can additionally tune for OOD detection performance on a separate OOD validation dataset.\"\n\n**Response 2**: In fact, many previous works directly use well-trained ID classifiers to discern ID and OOD data, and there are no training procedures that involve learning rate tuning (i.e., no hyperparameters actually). For other hyperparameters, such as $T$ in free energy scoring, we have adopted the suggested settings in their original paper that lead to the optimal solutions (**we also tune it according to your comment; please see the results in Response 5**). Therefore, the comparison is fair. 
If we miss something, we are happy to discuss and add experiments if needed.", " >**Subquestion 3**: \"First of all, how exactly were the hyperparameters tuned? It couldn’t have been grid search as there are too many possibilities. In tables 17-28, what were the values of the other parameters that were not varied? \"\n\n**Response 3**: As you guessed, we adopt random search with many trials. Step 1: we randomly select a hyperparameter (e.g., $\beta$) and fix the values of all other hyperparameters to their optimal values. Step 2: we select the best $\beta$ from the set. Step 3: repeat Steps 1-2. We repeat Steps 1 and 2 50 times in our experiments. We will further emphasize our hyper-parameter tuning strategy in our revision (e.g., adopt more advanced discrete optimization methods). \n\n>**Subquestion 4**: \"In my opinion, the best way to tune in such a setting would be to use random search [2] with many trials (20 for example).\"\n\n**Response 4**: We totally agree with you. We will follow your kind suggestion of using random search in [2] (or a more advanced method; we welcome your suggestions) and tiny-ImageNet to replace our current tuning strategy. When we get the results, we will report them here. However, due to time limitations, we cannot demonstrate them in the uploaded revision.\n\n>**Subquestion 5**: \"The baseline (without watermark) hyperparameters are tuned on tiny ImageNet (or any other validation set that doesn’t overlap semantically with the test OOD datasets).\"\n\n**Response 5**: In our main content, the hyperparameter $T$ is the only tunable value for the baseline methods. $T$ is taken from the set $\\{1, 5, 10, 50, 100, 500, 1000\\}$. The following tables show the results with tiny-ImageNet being the OOD dataset (CIFAR-10/100 being the ID dataset) with different values of $T$. As shown in the following tables, $T=1$ (as suggested in previous papers) is the best in general.\n\n| CIFAR-10 | 1 | 5 | 10 | 50 | 100 | 500 | 1000 |\n|:--------:|:---------:|:---------:|:-----:|-------|-------|-------|-------|\n| FPR95 | 33.45 | **31.75** | 34.70 | 37.80 | 38.00 | 39.25 | 35.60 |\n| AUROC | **92.69** | 92.47 | 91.45 | 90.12 | 90.10 | 89.37 | 90.38 |\n| AUPR | **98.25** | 98.17 | 97.80 | 97.48 | 97.47 | 97.17 | 97.40 |\n\n| CIFAR-100 | 1 | 5 | 10 | 50 | 100 | 500 | 1000 |\n|:---------:|:---------:|:-----:|:-----:|-------|-------|-------|-------|\n| FPR95 | **59.55** | 59.75 | 61.90 | 66.95 | 68.35 | 94.40 | 69.65 |\n| AUROC | **85.10** | 83.91 | 82.48 | 79.84 | 80.53 | 79.52 | 79.01 |\n| AUPR | **96.25** | 95.92 | 95.54 | 94.54 | 94.95 | 94.40 | 93.65 |\n\n>**Subquestion 6**: The proposed idea is tuned with random search with more than 6 trials on the same validation dataset as above.\n\n**Response 6**: The following table lists the results on CIFAR-10 with softmax scoring and tiny-ImageNet as an OOD validation dataset with 6 individual trials. We will conduct more experiments with random search regarding tiny-ImageNet and provide the results in our revision. \n\n| $\sigma_1$ | $\rho$ | $\beta$ | FPR95 | AUROC | AUPR |\n|:----------:|:------:|:-------:|:-----:|:-----:|:-----:|\n| 0.8 | 2.0 | 1.0 | 58.45 | 87.62 | 96.51 |\n| 0.6 | 0.7 | 2.5 | 65.25 | 84.04 | 95.89 |\n| 1.6 | 0.2 | 2.5 | 68.70 | 83.56 | 95.97 |\n| 1.2 | 0.07 | 0.5 | 66.35 | 83.74 | 95.82 |\n| 0.4 | 1.0 | 4.0 | 64.30 | 95.29 | 96.25 |\n| 4.0 | 5.0 | 3.0 | 87.90 | 68.93 | 91.81 |\n\n>**Subquestion 7**: \"It seems like table 4 for seems to be the same as Table 19. 
Depending on which table is true, lines 264-271 should be corrected.\"\n\n**Response 7**: Many thanks for your kind correction. We will update all the tables that lead to the confusion. \n\n>**Subquestion 8**: \"Lastly, the “Ablation study” section should be renamed, as studying the effect of the hyperparameters is not technically an ablation.\"\n\n**Response 8**: We sincerely thank you for your suggestions; we will rename this section about hyperparameter selection.", " **Principle-level response to your concern:**\n\nFirst of all, we want to thank you. This is very good critical thinking regarding our paper. Sorry for the previous misunderstandings. Now we understand your major concerns. We think the following sentence might address them. **In the watermarking strategy, we want to reprogram previous ID classifiers to help increase the classifiers' confidence in ID data and decrease the corresponding confidence when the classifiers do not see any ID pattern.** Since the reprogrammed models are more confident in ID classes, we can expect to relieve the issue of overconfidence in ID classifiers. Note that the overconfidence issue of ID classifiers is the main reason why ID classifiers will recognize OOD data as ID data. If you think this helps address your concerns, we will update them in our revision. If not, we welcome more thoughts!\n\nThen, we want to mention that many OOD detection methods cannot see the OOD data in advance, yet they can still improve the OOD detection performance. The main reason is that they focus on enhancing the confidence of ID classifiers on ID data (one simple but representative way is to use temperature functions). A theory explaining why we can perform OOD detection using only the ID data is still missing (we cannot see OOD data, yet we can distinguish between ID and OOD data). One possible reason is that the ID classifier might be regarded as a good one-class classifier (ID classes as the only class). \n\n**More details:**\n\nGiven the main aim of the watermarking strategy (given above): we want to reprogram previous ID classifiers to help increase the classifiers' confidence in ID data. Overall, making the model have low confidence for random noise (effect of $\ell^\text{out}(\cdot)$) can play the role of regularization, such that the training procedure will not find a trivial solution that always returns a highly confident prediction for any watermarked data point (effect of $\ell^\text{in}(\cdot)$). \n\nThen, since the watermark is trained such that the model will **only** produce a highly confident prediction for the watermarked ID cases, it is proper to assume that the model can return low confidence for those unseen watermarked OOD data. From another lens, the key issue in OOD detection is that the model can be overconfident in unseen data. Therefore, we make the scores of ID data higher, such that we can better distinguish between ID and OOD cases. \n\nOur above explanations are verified by our extensive experiments. However, we sincerely appreciate your concerns, and we believe that future exploration can lead to many improved learning methods for watermarking, which requires our further studies. ", " Thanks for the authors' rebuttal.\nI have read it carefully, and it largely resolves my concerns.\nSo I would keep my rating as accept (7).\n\n\n", " Could the authors try the two new augmentations (permute and rotate) for the other experiments as well? 
It doesn’t seem fair or practical that the with watermark option has 3 different variants. What variant should one use in practice? Is the rotate augmentation the best overall?\nAlso, are the results setting CIFAR10 as in or CIFAR 100? Either way, I would like to see the other variant as well.\n\nAll in all, due to the heuristic nature of watermarking, I want the experiments to be very carefully done. At the current state, due to the tuning protocol, it’s hard for me to believe the results. It seems to me that watermarking is showing an improvement compared to the baseline because it gets 3 extra hyperparameters to tune, and because those parameters were tuned on the same distribution as the test set.", " **Response to A1.3**\n\nThe sentence that I’m referring to was used in lines 132-133, which to my understanding is explaining why watermarking works:\n“From the lens of our model, the scores should remain low if a watermarked OOD input is given since the learning procedure does not see any OOD data during training, and only the watermark can be observed”.\n\nAlso, I’m not sure I am convinced about this statement:\n“After training the watermark, the ID classifier will make low confident predictions for those watermarked OOD data since the patterns of OOD data largely deviate from the ID cases”. It is true that OOD data has a disjoint label set from ID data. However, the very problem of OOD detection is that our networks are easily fooled by the very OOD data that largely deviates from the ID classes. It’s not convincing to me that a normal classifier gets fooled by OOD data, but watermarking does not.\n\nLastly, as I mentioned in my response to A1.1 + A1.2, model reprogramming has access to target domain data, but watermarking does not. So it doesn’t make sense to me that watermarking can adapt for specified tasks and datasets, and therefore, watermarking doesn’t add anything more principled compared to the previous scoring methods.\n\n**Response to A2**\n\nAm I understanding correctly that the hyperparameters were tuned on the validation sets corresponding to the test OOD datasets? If so, I think it’s wrong to tune hyperparameters with data that are from the same distribution as the test OOD data. Tuning the hyperparameters this way gives the model access to the very OOD classes that the model is trying to distinguish, and this is an extra degree of freedom that the baseline does not get to enjoy. The correct way to tune the hyperparameters would be to use a validation dataset that is semantically separate from the test OOD datasets, for example in [1] (see appendix A). Ideally, the baseline (no watermark) hyperparameters (learning rate at the very least) should also be tuned on the validation dataset such that both methods are calibrated on the same dataset. This is reasonable, since models are usually tuned on the in-distribution validation set– we can additionally tune for OOD detection performance on a separate OOD validation dataset.\n\nAfter reading author responses to all the reviews, I have further questions. First of all, how exactly were the hyperparameters tuned? It couldn’t have been grid search as there are too many possibilities. In tables 17-28, what were the values of the other parameters that were not varied? In my opinion, the best way to tune in such a setting would be to use random search [2] with many trials (20 for example). It seems like the authors have 6 random trials shown in their response to reviewer dM3K. 
I would be happy to see an extension of that table such that:\n- The baseline (without watermark) hyperparameters are tuned on tiny ImageNet (or any other validation set that doesn’t overlap semantically with the test OOD datasets).\n- The proposed idea is tuned with random search with more than 6 trials on the same validation dataset as above.\n\nAlso, it seems like Table 4, which is for $\rho$, is the same as Table 19, which is for $\beta$. Depending on which table is true, lines 264-271 should be corrected.\n\nLastly, the “Ablation study” section should be renamed, as studying the effect of the hyperparameters is not technically an ablation. \n\n[1] Hendrycks, Dan, Mantas Mazeika, and Thomas Dietterich. \"Deep anomaly detection with outlier exposure.\" arXiv preprint arXiv:1812.04606 (2018).\n\n[2] Bergstra, James, and Yoshua Bengio. \"Random search for hyper-parameter optimization.\" Journal of Machine Learning Research 13.2 (2012).\n", " I’m still not sure how model reprogramming (the idea that a model can be repurposed for a new task by modifying the pattern of the inputs) will necessarily lead to improved OOD detection performance. The only thing that watermarking guarantees is what it is trained for– a higher score for ID + watermark, and a low score if only watermark + some perturbation is observed. There is nothing in this setup that makes it such that unknown input + watermark will result in a lower score.\nFurthermore, model reprogramming has access to the target dataset, which explains why it works on the target domain. However, in OOD detection we never have access to the test OOD dataset, and by definition we are dealing with unknown unknowns. Therefore, it is hard for me to believe that the reprogramming property has anything to do with explaining how watermarking is supposed to help with OOD detection. Please let me know if I’m missing something.\n", " Dear Reviewer qcRd:\n\nThanks for your great efforts in reviewing and your good questions here. We really hope that our answer can help to clarify. Since the discussion deadline is approaching, please let us know if there is anything we could further clarify.\n\nBest regards,\n\nAuthors of #1621", " Dear Reviewer dM3K:\n\nThanks for your great efforts in reviewing and your good questions here. We really hope that our answer can help to clarify. Since the discussion deadline is approaching, please let us know if there is anything we could further clarify.\n\n\n\nBest regards,\n\nAuthors of #1621\n\n", " Dear Reviewer qcRd,\n\nWe have completed the experiments regarding large-scale models using ViT (**experiments regarding large-scale models**); please see the details at https://openreview.net/forum?id=6rhl2k1SUGs&noteId=Mtd4mcWvqrZ . We have also completed the near-OOD experiments (**experiments regarding more difficult OOD detection tasks**); please see the details at https://openreview.net/forum?id=6rhl2k1SUGs&noteId=vThqHlEQTR_ .\n\nNow, we have addressed all of your initial concerns regarding our paper and provided the required experimental results. We are happy to discuss them with you in the openreview system if you still have some concerns/questions. We also welcome new suggestions/comments from you!\n\nBest regards,\n\nAuthors of #1621", " For the near-OOD detection in Q3, we further conduct experiments with our proposed softmax scoring-based watermarking and the free-energy scoring-based watermarking, demonstrating the power of our two realizations in Section 5 for near-OOD detection; the two scoring strategies are sketched below for reference. 
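As a reference for the two realizations named above, the following is a minimal sketch of the two detection scores. This is an illustration under common conventions (a higher score indicates a more ID-like input), not the authors' exact code; `model`, the test input `x`, and the learned watermark `w` are placeholders:

```python
import torch
import torch.nn.functional as F

def softmax_score(logits: torch.Tensor) -> torch.Tensor:
    # Maximum softmax probability: larger values indicate ID-like inputs.
    return F.softmax(logits, dim=1).max(dim=1).values

def free_energy_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    # Negative free energy, T * logsumexp(logits / T): larger values indicate ID-like inputs.
    return T * torch.logsumexp(logits / T, dim=1)

# With a learned watermark w, a test input x is scored through the frozen classifier:
# score = free_energy_score(model(x + w))
```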
\n\nIn addition to the common watermarking learning setup (common) in Section 5 (ID data + random Gaussian noise), we further consider the use of the shifting augmentations in CSI [5], which can be used to construct near-OOD data from the ID data. Here, we consider two representative shifting augmentations: permuting evenly partitioned data (permute) and rotating the original data by 90 degrees (rotate); a sketch of both is given below. The shifting-augmented ID data are then taken as OOD data and fed into $\ell^\text{out}(\cdot)$ along with random Gaussian noise. 
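A minimal sketch of the two shifting augmentations described above, assuming batched image tensors; this is one plausible implementation, and the exact partitioning used in CSI may differ:

```python
import torch

def permute_patches(x: torch.Tensor) -> torch.Tensor:
    # Evenly partition each image into 2x2 patches and shuffle them ("permute").
    # x: (B, C, H, W) with H and W divisible by 2.
    _, _, h, w = x.shape
    h2, w2 = h // 2, w // 2
    patches = [x[:, :, :h2, :w2], x[:, :, :h2, w2:],
               x[:, :, h2:, :w2], x[:, :, h2:, w2:]]
    idx = torch.randperm(4).tolist()
    top = torch.cat([patches[idx[0]], patches[idx[1]]], dim=3)
    bottom = torch.cat([patches[idx[2]], patches[idx[3]]], dim=3)
    return torch.cat([top, bottom], dim=2)

def rotate_90(x: torch.Tensor) -> torch.Tensor:
    # Rotate each image by 90 degrees in the spatial plane ("rotate").
    return torch.rot90(x, k=1, dims=(2, 3))
```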
\n\n\n| | FPR95 | AUPR | AUROC |\n|:----------------------------:|:-----:|-------|:-----:|\n| softmax w/o watermarking | 34.95 | 91.31 | 85.13 |\n| softmax w/ watermarking | **31.63** | **92.52** | **86.77** |\n| free energy w/o watermarking | 21.24 | 94.95 | 90.58 |\n| free energy w/ watermarking | **20.64** | **95.10** | **90.87** |\n\n\nCompared with the results given by WRN-40-2 in Table 1-2, the usage of the large-scale model can truly lead to better performance regarding both the cases with and without watermarking. However, the improvement after watermarking on the large-scale model is not as remarkable as that of the WRN-40-2 model (small-scale model). It may be due to the fact that the large-scale models themselves can already excel at OOD detection, so there does not remain a large space for their further improvements. \n\nWe will also merge the above results into our paper. If you have more questions regarding our paper, feel free to tell us. We are very happy to discuss them with you here.", " Many thanks for your valuable comments! We will answer your two questions below. \n\nActually, there is no particular requirement for the validation sets, such as requiring that validation sets should follow the same distribution as test situations. In our paper, we assume that the OOD validation sets are separated from the OOD test sets (i.e., iSUN, Places365, Texture, SVHN, and LSUN) just for fair comparison [1, 2]. Further, the test datasets differ from both the training situation (watermark training relies only on ID data) and the test situation (the validation and the test sets use different data). \n\nHere, we echo our claim that \"there is no particular requirement for the validation sets\" by the following experiments on the CIFAR-10 dataset regarding the softmax scoring (ID classifiers are trained with the CIFAR-10 dataset). Specifically, we consider using (1) **the validation sets used in [1, 2] (i.e., the individual validation set for iSUN, Places365, Texture, SVHN, and LSUN, respectively)**; and (2) **a new validation set: the tiny-ImageNet**. We report the OOD detection performance on the above validation datasets in terms of FPR95 in the following table. The \"candidate\" rows represent randomly selected sets of hyperparameter setups. We find that no matter which validation datasets are used in this experiment, the optimal one (the last row (optimal) in the following table) is the same as that used in our paper. \n\n|| $\\sigma_1$ | $\\rho$ | $\\beta$ | Validation Set for iSUN | Validation Set for Places365 | Validation Set for Texture | Validation Set for SVHN | Validation Set for LSUN | tiny-ImageNet |\n|:-------:|:------------:|:--------:|:---------:|:-------:|:-----------:|:---------:|:-------:|:-------:|:-------:|\n|candidate| 2.0 | 5.0 | 5.0 | 92.50 | 92.75 | 97.80 | 99.10 | 92.55 | 95.05 |\n|candidate| 1.6 | 1.0 | 4.0 | 50.15 | 77.85 | 58.40 | 76.65 | 47.50 | 74.50 |\n|candidate| 1.2 | 0.5 | 3.0 | 43.90 | 75.75 | 58.10 | 89.15 | 50.70 | 72.70 |\n|candidate| 0.8 | 0.1 | 2.0 | 44.75 | 66.20 | 40.80 | 37.00 | 45.00 | 62.50 |\n|candidate| 0.4 | 0.05 | 1.0 | 50.50 | 67.00 | 45.05 | 35.15 | 45.60 | 62.90 | \n|optimal| 0.4 | 1.0 | 3.5 | **40.60** | **61.15** | **40.15** | **29.85** | **40.40** | **58.30** |\n\n\nAs we can see, regarding each of the considered validation sets, we will have the same preference in hyperparameters (i.e., $\\sigma_1=0.4, \\rho=1.0, \\beta=3.5$), aligning with the choice in our paper. 
It means that the hyperparameters are pretty robust to our particular choice of validation setup. Note that the FPR95 reported in the above table is the FPR95 values on six validation sets: iSUN, Places365, Texture, SVHN, LSUN, and tiny-ImageNet, with different hyperparameters.\n\nWe will also merge the above analysis into our paper. If you have more questions regarding our paper, feel free to tell us. We are very happy to discuss them with you here.", " Thank you for your response. It addressed most of my previous concerns. For the validation dataset, is there a specific requirement? More specifically, how is it selected and what is its relationship to the training and test datasets?", " Many thanks for your constructive comments and kind suggestions! Please find our responses below.\n\n> Q1.1. The motivation is unclear. Could the authors elaborate on the motivation behind watermarking?\n\n**A1.1.** We are motivated by model reprogramming [1,2], stating that the model can be repurposed for new tasks without modifying its parameters. It facilitates our watermarking, aiming to boost the performance of previous scoring strategies that may produce unsatisfactory results in many cases (cf., Fig. 2). Here is a detailed description for your reference, which is a brief summary of our Introduction. \n\nIn previous works, post-hoc OOD detection largely relies on scoring functions built upon well-trained ID classifiers. However, one can hardly adjust these advanced methods for specific tasks since we prefer models with intact parameters. Therefore, how to boost the performance of existing scoring-based OOD detection is a very attractive problem. In this paper, we suggest that one can learn a watermark added to test-time inputs, of which pattern can be learned for specific scoring strategies and datasets. Our method is well supported by previous studies in model reprogramming [1,2], which states that **a model (with its fixed parameters) can be repurposed for a new task by modifying the pattern of inputs**. Thus, we want to reprogram the fixed ID classifier to fit a new task: identifying OOD data.\n\nAlso, our work is very different from previous works in studying reprogramming properties of deep models, and their padding strategies [1] will destroy models’ original capability in OOD detection (also agreed by Reviewer WTn8). We overcome these drawbacks, and thus the watermarking performs well in OOD detection.\n\nOverall, it is the first time that model reprogramming is studied in the literature on OOD detection, and our proposal provides a new road that can motivate more works in this area. This fact is also recognized by the other two reviewers. We will continue elaborating on the motivation behind watermarking in our revision.\n\n> Q1.2. The method is a bit ad-hoc.\n\n**A1.2.** Our methods are not designed for specific datasets. They are general methods and can be used in more scenarios, which can be verified from the following two perspectives. \n\nFrom the principle perspective, the effectiveness of our methods takes root in the reprogramming property of DNNs. This property of DNNs has been verified in many academic papers [1,2], ranging from image classification to time-series analysis. It supports that we can **reprogram a deep model to complete other tasks**. Thus, our watermarking strategy, motivated by the reprogramming property, **is capable of reprogramming the ID classifier to complete OOD detection tasks**. 
\n\nFrom the experimental perspective, we have tested the performance of our methods across a set of datasets (cf., Section 6 and Appendix C.5) and scoring functions (cf., Appendix C.1-C.2). Therein, our watermarking can achieve better performance across all the considered settings (e.g., various datasets, scoring functions, and learning strategies), **further verifying the generality of our watermarking in the area of OOD detection**. ", " > Q1.3. The authors claim that watermarking works because “the learning procedure does not see any OOD data during training.” If the only difference between test ID and OOD is that OOD data were not seen before, then why is watermarking necessary?\n\n**A1.3.** There is a misunderstanding in this comment. The watermarking strategy works due to the reprogramming property of DNNs (please see A2). The sentence “the learning procedure does not see any OOD data during training” appears when we design the learning procedure regarding watermarks (more like technical details instead of the principle that watermarks can work on OOD detection). This sentence itself does not imply the general reason why watermark works.\n\nAs for the design of this learning procedure, our aim is to train the watermark to recognize the watermark with/without ID pattern (with low/high confident predictions). After training the watermark, the ID classifier will make low confident predictions for those watermarked OOD data since the patterns of OOD data largely deviate from the ID cases. Note that, the definition of OOD data is the data whose label set is disjoint with the label set of ID data, meaning that OOD data are not seen before.\n\nWe further clarify the necessity of watermarking strategy. Given a fixed ID classifier, previous studies design many scoring functions to help identify OOD data. However, previous scoring strategies cannot fully distinguish ID and OOD cases and can hardly make adaptations for specified tasks and datasets. Thus, the necessity of watermarking strategy lies in the fact that it can largely boost the OOD detection capability of ID classifiers for considered tasks with a learned watermark (a static pattern) added to test-time inputs. **The effectiveness of the watermarking strategy is supported by model reprogramming [1,2] (in principle), verified by our extensive experiments (in practice), and supported by the other two reviewers.**\n\n>Q2. What was the validation OOD dataset used to tune the hyperparameters? The method involves, and is sensitive to hyperparameters. Furthermore, unsurprisingly, the optimal hyperparameter setting seems to depend on the test OOD data. This limits the usage of this method. Additionally, the authors seem to have tuned the hyperparameters on a separate validation OOD dataset disjoint from the ones used to evaluate the tables, however, there is no mention of this validation dataset in the main paper of the appendix. Since the quality of the hyperparameters should depend on the validation dataset, it’s important to know how the performance changes depending on different choices of the validation datasets. \n\n**A2.** There exists a set of validation datasets that are separated from the test datasets, following the setups such as [3,4] for a fair comparison. The detailed description of hyperparameter tuning can be found in Appendix C.6, and it is also summarized below for your reference.\n\nWe adopt validation OOD datasets that are separated from the original iSUN, Places365, Texture, SVHN, and LSUN. 
Further, we choose the proper $\\sigma_1$ from the candidate parameter set {0.0,0.2,0.4,0.6,0.8,1.0,1.2,1.4,1.6,1.8,2.0}, and the proper $\\rho$ from {0.0,0.02,0.05,0.07,0.1,0.2,0.5,0.7,1.0,2.0,5.0}. For softmax scoring, $\\beta$ is chosen from {0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,4.5,5.0}; and for free energy scoring, $\\beta$ is chosen from {0.0,0.02,0.04,0.06,0.08,0.1,0.2,0.4,0.6,0.8,1.0}. We will move the hyperparameter selection part to the main content in our revision.\n\nMoreover, we have shown how the hyperparameters affects the performance of our methods in Tables 17 - 28 (the 12 tables in page 20 in the original submission). \n\nWe also demonstrate the detailed performance of our methods on different validation datasets. The results (regarding FPR95) with softmax scoring on CIFAR-10 dataset can be found below, where we select a set of candidate hyperparameter setups. As we can see, our selected hyperparameters (last line) is preferred across all the considered datasets. \n\n\n| $\\sigma_1$ | $\\rho$ | $\\beta$ | iSUN | Places365 | Texture | SVHN | LSUN |\n|:------------:|:--------:|:---------:|:-------:|:-----------:|:---------:|:-------:|:-------:|\n| 2.0 | 5.0 | 5.0 | 92.50 | 92.75 | 97.80 | 99.10 | 92.55 |\n| 1.6 | 1.0 | 4.0 | 50.15 | 77.85 | 58.40 | 76.65 | 47.50 |\n| 1.2 | 0.5 | 3.0 | 43.90 | 75.75 | 58.10 | 89.15 | 50.70 |\n| 0.8 | 0.1 | 2.0 | 44.75 | 66.20 | 40.80 | 37.00 | 45.00 |\n| 0.4 | 0.05 | 1.0 | 50.50 | 67.00 | 45.05 | 35.15 | 45.60 |\n| 0.4 | 1.0 | 3.5 | **40.60** | **61.15** | **40.15** | **29.85** | **40.40** |\n", " >Q3. The evaluation of the method is limited. The evaluated ID and OOD pairs are very easy to distinguish, especially since CIFAR and ImageNet are very object-centric, while the OOD datasets are mostly scenery-based (except for SVHN, which are all numbers). I would like to see at the minimum, harder problems like CIFAR-10 vs CIFAR-100, which many papers already include, or ImageNet 1k vs ImageNet 21k (excluding ImageNet 1k). It would be great to see other more difficult datasets, like the ones used in open set recognition (e.g. the semantic shift benchmark from [1]) with bigger models like the Vision Transformer.\n\n**A3.** We have conducted extensive experiments on many benchmark datasets in the field of OOD detection (including the ImageNet OOD detection benchmark), and the results demonstrate the generality and effectiveness of our methods. However, your suggestion can further make our evaluation solid. Thus, we conduct the experiments regarding the “CIFAR10 vs. CIFAR100” setting (near-OOD detection), taking CIFAR-10 as the ID case and CIFAR-100 as OOD case. \n\nHere, we adopt the learning objective in Contrasting Shifted Instances (CSI) [5], which is well-known to be effective in near-OOD detection, with the L2 norm of feature representation (for the second last layer) as our scoring function. We follow the default hyperparameter setting in [5], further setting $\\alpha=0.001$ and $\\rho=0.005$ for our watermark training (since the learning objective in CSI considers both the ID and OOD cases, we do not need to specify $\\beta$). The algorithm is run for 50 epochs without learning rate decay. The results are summarized as follows, which demonstrate that our watermarking is general in combination with CSI and is competent for challenging near-OOD detection tasks. 
\n\n\n| | FPR95 | AUROC | AUPR |\n|:-------------:|:-----:|:-----:|:-----:|\n| w/o watermark | 95.72 | 50.01 | 85.29 |\n| w/ watermark | **88.95** | **65.46** | **91.44** |\n\n\nAs for the experiments regarding the large-scale models, we are currently testing the effectiveness of the watermarking strategy with Vision Transformer with the ImageNet OOD detection benchmark. Since the model is a little bit large, we are still waiting for the results. We will report the results here when done!\n\n> Q4. It seems like the sign is off for Equation 3 and Equation 11 (the higher the energy, the more OOD it should be). Lines 283-284 “watermark is learned with the softmax scoring and tested regarding the free energy scoring” -> “learned with free energy and tested with softmax scoring”?\n\n**A4.** Thank you for your kind correction. We will revise the related description to eliminate the confusion.\n\n> Q5. Figure 5 seems to be missing something. The text refers to LSUN-C and LSUN-R, but there is only LSUN in Figure 5.\n\n**A5.** We mainly give the results with LSUN-R for the OOD test since we find that previous methods already perform well regarding LSUN-C. Here, for integrity, we list the OOD performance of LSUN-C with softmax scoring and free energy scoring. \n\nSoftmax Scoring: \n\n| | | FPR95 | AUROC | AUPR |\n|:---------:|:-------------:|:-----:|:-----:|:-----:|\n| CIFAR-10 | w/ watermark | **18.00** | **97.94** | **99.60** |\n| | w/o watermark | 21.40 | 97.23 | 99.44 |\n| CIFAR-100 | w/ watermark | **29.35** | **96.87** | **97.00** |\n| | w/o watermark | 60.85 | 85.87 | 96.71 |\n\nFree Energy Scoring: \n\n| | | FPR95 | AUROC | AUPR |\n|:---------:|:-------------:|:-----:|:-----:|:-----:|\n| CIFAR-10 | w/ watermark | **3.85** | **99.20** | **99.84** |\n| | w/o watermark | 5.00 | 98.50 | 98.71 |\n| CIFAR-100 | w/ watermark | **14.40** | **97.45** | **99.46** |\n| | w/o watermark | 24.90 | 95.64 | 99.07 |\n\n> Q6. The experiments testing watermarking’s impact on test accuracy (lines 272-279) don’t seem very relevant since watermarking is only used to detect in vs. out.\n\n**A6.** In the original submission, we want to demonstrate that our model can excel at OOD detection while keeping the test accuracy largely intact after watermarking. Accordingly, we can do classification and detection simultaneously without feeding both the original inputs (for classification) and the watermarked inputs (for detection) into the model. It can save twice the amount of computation when deployed in real-world applications. Reporting test accuracy is also common in many other papers, such as [6,7]. \n\n\n> Q7. The experiments with masking don’t seem very surprising. \n\n**A7.** In the original submission, we are curious about this experiment because the learned watermark looks like it only focuses on the edge. Your understanding is correct here. We will move more details regarding our experiments to the main content and put this part (lines 292-308) in the Appendix. Meanwhile, we can also demonstrate more experiments (e.g., CIFAR10 vs. CIFAR100) in the main content.", " **Revision Plan**\n\nIn our revision, we will clarify our setup of hyperparameter selection in the main text, add more experiments about watermarking to verify our superiority, and move the experiments about masking into our Appendix. We will also clarify our motivation in the Introduction and fix the confusion in our draft. Thanks again for your constructive comments.\n\n\n**General Response**\n\nWe have addressed your concerns about our paper. 
If you have more suggestions, please tell us. We will merge them into our revision as well! Since your evaluation is important for our paper, we sincerely hope that you can re-evaluate our paper if your concerns have been addressed.\n\n**References**\n\n[1] Yun-Yun Tsai, Pin-Yu Chen, and Tsung-Yi Ho. Transfer learning without knowing: Reprogramming black-box machine learning models with scarce data and limited resources. ICML (2020).\n\n[2] Gamaleldin F. Elsayed, Ian J. Goodfellow, and Jascha Sohl-Dickstein. Adversarial Reprogramming of Neural Networks. ICLR (2019). \n\n[3] Shiyu Liang, Li Yixuan, and Srikant Rayadurgam. Enhancing the reliability of out-of-distribution image detection in neural networks. NeurIPS (2017).\n\n[4] Xuefeng Du, Zhaoning Wang, Mu Cai, and Yixuan Li. VOS: Learning what you don’t know by virtual outlier synthesis. ICLR (2022).\n\n[5] Jihoon Tack, Sangwoo Mo, Jongheon Jeong, and Jinwoo Shin. CSI: Novelty detection via contrastive learning on distributionally shifted instances. NeurIPS (2020).\n\n[6] Weitang Liu, Xiaoyun Wang, John D. Owens, and Yixuan Li . Energy-based out-of-distribution detection. NeurIPS (2020).\n\n[7] Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. Out-of-distribution detection with deep nearest neighbors. ICML (2022).", " We sincerely thank you for your constructive comments and kind suggestions! Please find our responses below.\n\n> Q1. Why do previous methods not use the Gaussian Noise? Are your methods the only methods that can use this information in post-hoc OOD detection?\n\nA1. Your understanding is correct. Our method is the only one in post-hoc OOD detection that can use this information. Previously, researchers mainly focus on devising various scoring functions, which cannot make effective adaptation for tasks or other information.\nThe capability in using other information reflects our flexibility in effective detection, even with such weak knowledge as Gaussian noise (without any cost to obtain). In Appendix C.1-C.2 (cf., Supplementary Material), we further demonstrate that stronger information (e.g., surrogate OOD data) can also benefit our watermarking strategy, making our proposal very general and attractive. We humbly appreciate your concern, and we believe how to incorporate proper information into our watermarking strategy will be an interesting question that requires further exploration.\n \n> Q2. It is better to explain the use of Eq. (6). This optimization procedure seems very important to the method but lacks enough explanation.\n\nA2. Sorry for our unclear description. To find the proper watermark with minimal risk in Eq. (5), we use the first-order gradient update to iteratively update watermark’s elements. However, directly using the gradient direction in feature updates can lead to unstable optimization [1]. Using the signum of first-order gradients instead can largely mitigate this issue [2], motivating our optimization formula in Eq. (6). We will rephrase the related description in our revision to make it clearer.\n \n> Q3. In the free-energy version, how to set the hyperparameter T?\n\nA3. In our experiments, we directly report the results with $T=1$ following [3]. We also follow your kind suggestion to report the cases with other values of $T$ from the candidate set {1,5,10,50,100,500,1000}. 
Surprisingly, we find that $T=1$ leads to the best results among the candidates, and we summarize the results on the CIFAR-10 dataset as an example in the following table.\n\n| T | 1 | 5 | 10 | 50 | 100 | 500 | 1000 |\n|:-----:|:---------:|:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|\n| FPR95 | **25.94** | 27.84 | 27.76 | 28.93 | 28.48 | 28.02 | 31.21 |\n| AUROC | **95.08** | 94.36 | 94.06 | 93.59 | 93.59 | 93.73 | 93.46 |\n| AUPR | **98.79** | 98.59 | 98.47 | 98.25 | 98.25 | 98.22 | 98.10 |\n \n> Q4. How to analyze the power of watermarking in math?\n\nA4. The power of adding watermarks can be verified by the properties of a fully-connected ReLU network and its expressive power, summarized by the following theorem.\n\n**Theorem 1**. _Given a fixed fully-connected ReLU network $f_0$ with width $d_m$ , for any Lebesgue-integrable function $g:\\mathbb{R}^d\\rightarrow\\mathbb{R}$ and any $\\epsilon>0$, if the width $d_m\\le d+4$, then there exists a data-dependent watermark $w$ such that\n$\\int |g(x)-f_0(x+\\boldsymbol{w})|dx < \\epsilon.$_\n \nThis theorem states that we can approximate any Lebesgue-integrable function $g$ using the fixed network and a data-dependent watermark, which provides a theoretical foundation to optimize our objective well using the watermarking strategy.\n \n> Q5. The results regarding ImageNet benchmark should be moved into the main content instead of the appendix, which can help better demonstrate the watermarking strategy's performance.\n\nA5. Following your kind suggestion, we will move the ImageNet experiments to the main content in our revision, better demonstrating the power of our watermarking strategy.\n\n> Q6. In the whole paper, the data use ID and OOD to represent the ID or OOD data. While the loss functions use IN and OUT to present parts regarding ID and OOD. Keeping both consistencies is a better option.\n\nA6. Following your suggestion, we will use ID and OOD consistently throughout this paper, making our description clearer. \n\n**Revision Plan**\n\nIn our revision, we will further discuss the optimization procedure in Eq. (6), improve the readability and clarity of our paper, move the ImageNet experiments into the main content, and use ID/OOD consistency instead of IN/OUT. Thanks again for your constructive comments. \n\n**General Response**\n\nWe have addressed your concerns about our paper. If you have more suggestions, please tell us. We will merge them into our revision as well! Please discuss with us in the openreview system. We will try our best to address your further concerns and merge your comments into our revision.\n", " **References**\n\n\n[1] Qizhou Wang, Feng Liu, Bo Han, Tongliang Liu, Chen Gong, Gang Niu, Mingyuan Zhou, and Masashi Sugiyama. Probabilistic margins for instance reweighting in adversarial training. NeurIPS (2021).\n\n[2] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards Deep Learning Models Resistant to Adversarial Attacks. ICLR (2018).\n\n[3] Weitang Liu, Xiaoyun Wang, John D. Owens, and Yixuan Li . Energy-based Out-of-distribution Detection. NeurIPS (2020).\n", " We sincerely thank you for your constructive comments! Please find our responses below.\n\n> Q1. Will selecting the wrong hyperparameters hurt OOD performance? How to choose the hyperparameters?\n\nA1. Your understanding is correct. A wrong setup can truly hurt OOD performance. 
The experimental results can be found in Figure 5, Tables 3-4 in the main content, and Tables 17 to 28 in the Appendix (cf., Supplementary Material). Furthermore, the details about our hyperparameter selection can be found in Appendix C.6, where we follow [1,2]. They are also summarized below for your convenience.\n\nWe adopt validation OOD datasets that are disjoint from the test OOD datasets (iSUN, Places365, Texture, SVHN, and LSUN). We choose the proper $\sigma_1$ from the candidate parameter set {0.0,0.2,0.4,0.6,0.8,1.0,1.2,1.4,1.6,1.8,2.0} and the proper $\rho$ from {0.0,0.02,0.05,0.07,0.1,0.2,0.5,0.7,1.0,2.0,5.0}. For softmax scoring, $\beta$ is chosen from {0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,4.5,5.0}; and for free energy scoring, $\beta$ is chosen from {0.0,0.02,0.04,0.06,0.08,0.1,0.2,0.4,0.6,0.8,1.0}. \n\n> Q2. The previous comment is particularly important when the performance improvements of the proposed method saturate when the dataset has more classes or becomes more complex (e.g., CIFAR-100 or ImageNet). This makes the proposed method not favorable to be used in practice.\n\nA2. Thanks for your comments. First, we want to clarify that we have shown how to select the hyperparameters (see A1). Then, we discuss the effectiveness of our methods in complex situations. The saturation of improvement is a common phenomenon that frequently happens in OOD detection [3,4], since many scoring functions fail in complex setups (e.g., a large number of classes). Thus, we next justify that our watermarking strategy works in a large-class setup.\n\n**We have conducted experiments on the ImageNet OOD detection benchmark** in Appendix C.5 (line 625, page 18 in the original submission) to verify that the watermarking strategy still works when we have many classes. The results are reported in Table 16. The results show that our watermarking strategy substantially improves the performance of OOD detection. Namely, the proposed watermarking strategy is also effective when the dataset has many classes. For example, after adding the learned watermark, the FPR95 is improved from ~52% to ~44% (see Table 16). We will move Appendix C.5 to the main content in our revision to justify the effectiveness in the many-class situation (i.e., in more practical scenarios).\n\nLastly, we justify the effectiveness of our watermarking strategy when given different scoring functions.\n\nSince different scoring functions have their own performance limitations in detecting OOD data, we want to test if the watermarking strategy can further boost the performance of different types of scoring functions. **In Appendix C.1, we show other scoring functions with the watermarking strategy (Tables 9-11).** It can be seen that our strategy can **further boost** the performance of other scoring functions. Besides, we also adopt MaxLogit OOD scoring [4] (**another type** of scoring function that is better than softmax in the large-class setup) with our learned watermark. Their comparison on the CIFAR-100 dataset is summarized below.\n\n| MaxLogit | w/ watermark | w/o watermark |\n|:-------------:|:---------------:|:---------------:|\n| FPR95 | **68.15** | 72.37 |\n| AUROC | **83.73** | 79.58 |\n| AUPR | **96.19** | 94.90 |\n\nAs we can see, with the learned watermark, the performance of MaxLogit can be further improved as well.\nThe above analysis verifies that the watermarking strategy is effective and general in a large-class setup. 
It can also further improve the performance of many different scoring functions [4].\n\n**Revision Plan**\n\nIn our revision, we will clarify hyperparameter selection in our main content and move the ImageNet-based experiments to the main content. Thanks again for your constructive comments. \n\n**General Response**\n\nWe have addressed your concerns about our paper. If you have more suggestions, please tell us. We will merge them into our revision as well! Please discuss with us in the OpenReview system. We will try our best to address your further concerns and merge your comments into our revision.\n\n**References**\n\n[1] Shiyu Liang, Yixuan Li, and Rayadurgam Srikant. Enhancing the reliability of out-of-distribution image detection in neural networks. ICLR (2018).\n\n[2] Xuefeng Du, Zhaoning Wang, Mu Cai, and Yixuan Li. VOS: Learning what you don’t know by virtual outlier synthesis. ICLR (2022).\n\n[3] Stanislav Fort, Jie Ren, and Balaji Lakshminarayanan. Exploring the limits of out-of-distribution detection. NeurIPS (2021).\n\n[4] Dan Hendrycks, Steven Basart, Mantas Mazeika, Andy Zou, Joseph Kwon, Mohammadreza Mostajabi, and Jacob Steinhardt. Improving and assessing anomaly detectors for large-scale settings. (2022).\n\n", " The paper proposes a method to improve the OOD detection capability of the existing OOD methods. The paper relies on the "reprogramming" property of DNNs and learns watermarks that, when applied, can better separate the ID and OOD data as in existing methods. The paper proposes an algorithm to learn these watermarks more effectively. Experimental results demonstrate that the proposed method can improve the OOD performance of the existing OOD detection methods. The paper also includes several qualitative analyses to study various aspects of the proposed method, including ablation studies, and transferability. The paper proposes an interesting approach to improve the performance of the existing OOD methods. In general, I find the paper's contribution novel:\n1) A nice motivation for the study of reprogramming in the context of OOD. I can see that the proposed method helps improve the separation of ID and OOD data.\n2) The proposed algorithm seems to be effective in learning the watermarks.\n3) The experiments at least demonstrate the core results of the task, and the benefits of the proposed approach.\n\nI, however, also have some concerns/comments:\n\n1) I can see that in practice, we don't have access to the OOD dataset. And there's a large variation in the OOD performance as reported in the ablation study. How exactly do we choose the hyperparameters? Can selecting the wrong hyperparameters even hurt OOD performance? I'm also not sure about the details of the ablation experiments (I don't seem to be able to find them in the appendix). What OOD dataset is used?\n2) The previous comment is particularly important when the performance improvements of the proposed method saturate when the dataset has more classes or becomes more complex (e.g., CIFAR-100 or ImageNet). This makes the proposed method not favorable to be used in practice. \n In general, I find the study of reprogramming in the context of OOD interesting. However, I'm concerned about the practicality of the proposed method as mentioned in the weakness comments. The paper does not explicitly mention the limitation but I think it is discussed in some parts. 
I would appreciate it if the paper could provide a dedicated section to discuss the limitation of the method.", " OOD detection aims to detect OOD data given only ID data or an ID classifier. Since OOD data may cause risks when deploying deep learning models into the real world, it is important to detect OOD data (as this paper considers). Based on my experience, this paper considers a setting where only ID classifiers are available and the ID data cannot be accessed. Thus, the performance on ID classes will be maintained naturally. Given only the models, previous methods proposed score functions to determine if a data point is OOD, which is a natural way to utilize the models’ information. However, this paper argues that there might be another way to use the models, i.e., reprogramming the models via changing the data slightly. Compared to previous methods, this paper considers using the models from a higher level (based on the model reprogramming property of deep models), which is very novel and interesting. \n\nFrom a technical standpoint, the proposed watermarking strategy is also novel and different from previous reprogramming methods. Extensive experiments verify that the proposed OOD detection methods are useful and perform better in general. Pros:\n\n1. The reprogramming property of deep models is used for OOD detection for the first time, which makes a good contribution to the field of OOD detection. Previous methods only use models’ outputs and do not fully use the reprogramming property of deep models. This paper considers this property and successfully uses it to address the OOD detection problem, which is novel. \n\n2. In previous post-hoc OOD detection methods, there are no tunable parameters after selecting a score function, which does not fully use the models’ information. However, this paper breaks through this situation and shows that we can further utilize the models’ information.\n\n3. Previous reprogramming methods cannot be directly used for OOD detection, and this paper makes a new contribution to this field by proposing the watermarking strategy, which is novel.\n\n4. Experiments verify the performance of the watermarking strategy, which is solid evidence that the reprogramming property of deep models can also help address the OOD detection problem. This finding will motivate more work in this field. It is appreciated that the performance is also tested on ImageNet benchmark datasets.\n\nCons:\n\n1. When training the watermarks, additional noise is introduced. The proposed methods seem to use more information than previous methods. Why do previous methods not use this information? Are your methods the only methods that can use this information in post-hoc OOD detection? More explanations are required here. \n\n2. It is better to explain the use of Eq. (6). This optimization procedure seems very important to the method but lacks enough explanation. \n\n3. In the free-energy version, how to set the hyperparameter T? How does the value of T influence the performance of the free-energy watermarking method?\n\n4. The results regarding the ImageNet benchmark should be moved into the main content instead of the appendix, which can help demonstrate the performance of the watermarking strategy better.\n\n5. In the whole paper, the data use ID and OOD to represent the ID or OOD data. While the loss functions use IN and OUT to denote parts regarding ID and OOD. Keeping both consistent is a better option.\n\n6. 
Adding some specific noise into data can have a great impact on the model’s output, which has been verified in many areas. Watermarking is one of them. How to analyse the power of the watermark in math? Necessary discussions are required. \n Please see weakness. The authors have addressed the limitation part. There are no ethical concerns regarding the proposed strategy.", " This paper presents an out-of-distribution (OOD) detection method called “watermarking”, inspired by adversarial reprogramming. Given a scoring function that measures how in-distribution an image is, the method is to learn a fixed additive perturbation, later applied to test images, such that in-distribution images have high scores, and the perturbation has a low score. The authors claim that this will translate to being able to distinguish in vs out (i.e. OOD images with the additive perturbation will yield lower scores) since OOD data was not observed during training, only the watermark. With the addition of some tricks (sign gradient descent and sharpness-aware minimization), watermarking is shown to improve OOD detection performance compared to not using a watermark, on various scoring functions, and on various in-distribution datasets. Strengths:\n- To the best of my knowledge, the idea of learning an additive perturbation to improve OOD detection is novel.\n- The method is relatively simple, and doesn’t involve training a classifier or generative model.\n- The method can be used with various scoring functions, and it improves the OOD detection performance most of the time.\n\nWeaknesses:\n- The method is a bit ad hoc and the motivation is unclear. The authors claim that watermarking works because “the learning procedure does not see any OOD data during training”. If the only difference between test ID and OOD is that OOD data were not seen before, then why is watermarking necessary?\n- The method involves, and is sensitive to, hyperparameters. Furthermore, unsurprisingly, the optimal hyperparameter setting seems to depend on the test OOD data. This limits the usage of this method. Additionally, the authors seem to have tuned the hyperparameters on a separate validation OOD dataset disjoint from the ones used to evaluate the tables, however, there is no mention of this validation dataset in the main paper or the appendix. Since the quality of the hyperparameters should depend on the validation dataset, it’s important to know how the performance changes depending on different choices of the validation datasets.\n- The evaluation of the method is limited. The evaluated ID and OOD pairs are very easy to distinguish, especially since CIFAR and ImageNet are very object-centric, while the OOD datasets are mostly scenery-based (except for SVHN, which are all numbers). I would like to see, at the minimum, harder problems like CIFAR-10 vs CIFAR-100, which many papers already include, or ImageNet 1k vs ImageNet 21k (excluding ImageNet 1k). It would be great to see other more difficult datasets, like the ones used in open set recognition (e.g. the semantic shift benchmark from [1]) with bigger models like the Vision Transformer.\n\n[1] Vaze, Sagar, et al. "Open-set recognition: A good closed-set classifier is all you need." ICLR (2022). \n - Could the authors elaborate on the motivation behind watermarking? \n- What was the validation OOD dataset used to tune the hyperparameters?\n- It seems like the sign is off for Equation 3 and Equation 11 (the higher the energy, the more OOD it should be). 
It seems like the sign is correct for the code though.\n- Figure 5 seems to be missing something. The text refers to LSUN-C and LSUN-R, but there is only LSUN in Figure 5.\n- The experiments testing watermarking’s impact on test accuracy (lines 272-279) don’t seem very relevant, since watermarking is only used to detect in vs out.\n- Lines 283-284 “watermark is learned with the softmax scoring and tested regarding the free energy scoring” -> “learned with free energy and tested with softmax scoring”?\n- The experiments testing the effect of masking by thresholding (lines 292-308) don’t seem very surprising, because the watermarks were trained to be a certain way, so there is no guarantee that changing parts of them abruptly would do anything meaningful.\n The authors have adequately addressed limitations of their work." ]
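A minimal, hedged sketch tying the pieces of this thread together: a sign-of-gradient update on an additive input watermark (the Eq. (6) discussion above), scored with the free-energy function at the $T=1$ default reported above. Everything below is an illustrative reconstruction, not the paper's exact objective; the loss shape, margin, Gaussian noise scale, and hyperparameters are assumptions, and `model` stands for any pretrained ID classifier.

```python
import torch
import torch.nn.functional as F

def free_energy(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    # E(x) = -T * logsumexp(f(x) / T); lower energy is treated as more ID-like.
    return -T * torch.logsumexp(logits / T, dim=-1)

def learn_watermark(model, id_loader, img_shape=(3, 32, 32), steps=1000,
                    lr=1e-3, T=1.0, sigma=0.4, margin=1.0, device="cpu"):
    """Learn an additive watermark w so that ID images carry low energy while the
    watermark applied to pure Gaussian noise (an OOD proxy) carries high energy."""
    model.eval()
    w = torch.zeros(1, *img_shape, device=device)
    batches = iter(id_loader)
    for _ in range(steps):
        try:
            x, _ = next(batches)
        except StopIteration:
            batches = iter(id_loader)
            x, _ = next(batches)
        x = x.to(device)
        w.requires_grad_(True)
        e_in = free_energy(model(x + w), T)        # pull ID inputs toward low energy
        noise = sigma * torch.randn_like(x)
        e_out = free_energy(model(noise + w), T)   # push watermark-on-noise away
        loss = e_in.mean() + F.relu(margin - e_out).mean()
        (grad,) = torch.autograd.grad(loss, w)
        w = (w - lr * grad.sign()).detach()        # signum update, per the Eq. (6) discussion
    return w

def ood_score(model, x, w, T: float = 1.0) -> torch.Tensor:
    # Higher score = more ID-like; test inputs are scored with the watermark applied.
    with torch.no_grad():
        return -free_energy(model(x + w), T)
```

Following the convention of the energy-score literature, `ood_score` returns negative energy so that higher means more ID-like, which also matches the sign concern raised in the last review above.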
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "jDM4sgM4xo", "nips_2022_6rhl2k1SUGs", "BPD9S8oB4bZ", "iMQSsvedng_", "vThqHlEQTR_", "xm3bOit9_5s", "xm3bOit9_5s", "FUxiyzCb8Iz", "07-0TcR_qU", "vThqHlEQTR_", "bA_UWXls75g", "0BXWdZ-jPkk", "au3MH8XUIUy", "QRP8ENGCcmI", "nips_2022_6rhl2k1SUGs", "zGX08tJxYNB", "BPD9S8oB4bZ", "OujBX2wyvwJ", "07-0TcR_qU", "zGX08tJxYNB", "QRP8ENGCcmI", "raoN27hC1w7", "OujBX2wyvwJ", "OujBX2wyvwJ", "OujBX2wyvwJ", "OujBX2wyvwJ", "07-0TcR_qU", "07-0TcR_qU", "BPD9S8oB4bZ", "nips_2022_6rhl2k1SUGs", "nips_2022_6rhl2k1SUGs", "nips_2022_6rhl2k1SUGs" ]
nips_2022_bIlUqzwObX
Reinforcement Learning with a Terminator
We present the problem of reinforcement learning with exogenous termination. We define the Termination Markov Decision Process (TerMDP), an extension of the MDP framework, in which episodes may be interrupted by an external non-Markovian observer. This formulation accounts for numerous real-world situations, such as a human interrupting an autonomous driving agent for reasons of discomfort. We learn the parameters of the TerMDP and leverage the structure of the estimation problem to provide state-wise confidence bounds. We use these to construct a provably-efficient algorithm, which accounts for termination, and bound its regret. Motivated by our theoretical analysis, we design and implement a scalable approach, which combines optimism (w.r.t. termination) and a dynamic discount factor, incorporating the termination probability. We deploy our method on high-dimensional driving and MinAtar benchmarks. Additionally, we test our approach on human data in a driving setting. Our results demonstrate fast convergence and significant improvement over various baseline approaches.
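The abstract does not spell out the termination model; reconstructed from the bias/cost discussion in the author responses below (a sigmoid in the accumulated costs, shifted by a bias $b$), a plausible reading is the logistic form below. The exact parameterization in the paper may differ.

```latex
\Pr\big(\text{terminate at step } t \mid s_{1:t}, a_{1:t}\big)
  = \sigma\Big(\sum_{t'=1}^{t} c(s_{t'}, a_{t'}) - b\Big),
\qquad \sigma(z) = \frac{1}{1 + e^{-z}} .
```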
Accept
All reviewers are in agreement that this paper should be accepted. It combines clear writing, a well-motivated setting (external termination due to unobserved accumulation of costs), and sound theoretical analysis with a novel algorithmic contribution (TermPG) that performs well on an interesting domain that aligns well with the stated setting. Furthermore, the additional leveraging of the cost estimation for dynamic discounting may itself be of fairly broad interest to research in RL. Clear Accept, really solid paper.
train
[ "ghIE9_xi6Fi", "O9wi3iwn1LV", "O1jyfgSLxF5", "XOlz6thldNB", "aFLtkU-mNMX", "BkDB8f7xiE", "bYboBHxnT6v", "iAsE8EDZaoz", "oHyn8BqxDbe", "_MfTwYZxSUh", "d1D-syiasit", "3FGwXNnlrv5" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their detailed response. I encourage you to make the changes & clarifications discussed. I recommend acceptance of the paper.", " Thanks to the authors for their response and clarifications. I've read through all the other reviews and responses, and am satisfied to recommend acceptance. I've adjusted my score upwards to reflect this", " Thank you for the clarifications. After reading the other reviews and the authors' responses, I keep my score and recommend the paper for acceptance. I encourage the authors to udpate the paper with the additional discussions and clarifications suggested by the reviewers. ", " ### References:\n\n[1] Ross, Sheldon M. \"Average cost semi-Markov decision processes.\" Journal of Applied Probability 7, no. 3 (1970): 649-656. \\\n[2] Auer, Peter, Thomas Jaksch, and Ronald Ortner. \"Near-optimal regret bounds for reinforcement learning.\" Advances in neural information processing systems 21 (2008). \\\n[3] Abeille, Marc, Louis Faury, and Clément Calauzènes. \"Instance-wise minimax-optimal algorithms for logistic bandits.\" In International Conference on Artificial Intelligence and Statistics, pp. 3691-3699. PMLR, 2021. \\\n[4] Abbasi-Yadkori, Yasin, Dávid Pál, and Csaba Szepesvári. \"Improved algorithms for linear stochastic bandits.\" Advances in neural information processing systems 24 (2011). \\\n[5] Chatterji, Niladri, Aldo Pacchiano, Peter Bartlett, and Michael Jordan. \"On the theory of reinforcement learning with once-per-episode feedback.\" Advances in Neural Information Processing Systems 34 (2021): 3401-3412. \\", " Thank you for your thorough and positive review! We are encouraged that you found the problem well-motivated and the paper well written, and that you highlighted the theoretical guarantees and practical algorithm. We appreciate your comments and address them below.\n\n### 1. Re RUDDER:\n\nWe note that our problem can’t be directly cast to a delayed reward setting since the costs affect the transition function (and not the reward), but more importantly – our problem is non-Markov w.r.t. the costs.\n\n### 2. Re relation to SMDP formulation:\n\nYou raise an interesting comment w.r.t. the SMDP formulation. There is indeed a similar tradeoff between costs and rewards in that formulation as we have in our setting. On the one hand, lowering the termination probability directly improves the overall reward merely by extended survival. On the other hand, assuming for the sake of simplicity that the cost does not influence termination (i.e. when $b \\to \\infty$), there is indeed a negative impact of cost avoidance and reward maximization, similar to [1]. (see all refrences below)\n\n### 3. Re bounded rewards:\n\nBounded rewards is a very common assumption in the RL literature [2]. Particularly, this assumption only scales the regret by a factor $R_{max}$, yet it helps maintain brevity. It is therefore convenient to set the rewards to the interval $[0,1]$. We note that, similarly, we can work with any non-negative sub-Gaussian rewards.\\\nFinally, in our setting, this choice is orthogonal to the choice of costs. Specifically, the costs only affect the termination probability, and therefore a fixed cost function would have the same exact effect for any scaling of the reward function. This is due to the fact that the costs affect the obtained reward only implicitly through the transition function.\n\n### 4. 
Re intuition on the theory, $\kappa$, $L$, and practical implications:\n\nTo answer your question, we start with some intuition regarding $\kappa$. Suppose the costs are all very small and the bias is large, such that we are on the far left side of the sigmoid function. In this case we will get terminated only after a large number of steps, and the credit assignment problem (for the costs) will be very hard. As a result, estimating the costs becomes a hard problem, and this directly affects our regret. On the other hand, assume that our bias is small but costs are large, so that we are on the far right side of the sigmoid function. In this case, we will most likely be terminated after one step, but estimating the costs of different actions will be hard. This small gap will be evident in the regret after enough episodes. \nTheoretically, the hardness stems from the logistic bandit problem (see for example [3], and particularly Proposition 4). There, $\kappa$ is inherent to the difficulty in estimating the parameters of the logistic model. \n\nRegarding your concern about practical implications: We believe that in practical applications the costs would be quite sparse, making the potential problem of a wide range of outcomes less apparent. Moreover, recall that in practice we use limited memory “windows” that mitigate the credit assignment problem. For example, in a recommender system, a user may exit an application due to bad recent recommendations, but is less likely to be affected by very old recommendations. \nAccounting for longer horizons can potentially be done with a hierarchical approach. Lastly, we emphasize that $\kappa$ is a real factor which exists in lower bounds of the estimation problem, as shown in [3]. \n\nRegarding $L$ – this is a parameter that is commonly used in linear bandit papers [4,5], and provides a bound on the parameters. In our case, it bounds the parameters $c$, such that in the worst case $L \leq c_{\text{max}}\sqrt{SAH}$. In practice, $c$ is usually sparse, making this factor much smaller. \n\nWe will clarify these points further by adding relevant discussion.\n\n### 5. Re large windows:\n\nAs the reviewer mentioned, using a larger window produced somewhat improved results. This overparameterization helped the agent improve learning, as convergence of the costs was fast. We believe that in practical non-tabular settings, using a larger window can benefit the expressivity of the learned cost function. This allows the agent to travel through “wrong” cost functions, which can help it improve.\n\n\n### Re Safety: \n\nWe agree that this is an important point. The agent may find a way to “stop” the terminator through unsafe behavior. To avoid such bad outcomes, we should definitely take such scenarios into account and add safety constraints wherever possible. Particularly, termination does not eliminate the need for safe RL, but only complements it. Thank you for pointing this out; we will emphasize this in our work.", " ### Re no optimism. \nThis is a great question. We first note that optimistic estimation of the costs is required by our theoretical analysis. A very common technique in RL for optimistic estimation is to use an ensemble [3,4], and therefore we chose this practice for our implementation. Indeed, as our experiments suggest, using this mechanism improved performance in all of the environments. 
We believe that a possible way to make this mechanism more efficient would be to use more workers, and we encourage it whenever it is computationally possible. We also note that, though we achieved optimism through an ensemble of cost networks, other methods of uncertainty estimation could be used (e.g., MC Dropout). Finally, our experiments suggest that even without the ensemble, our method still exceeds the baselines. Thus, the size of the ensemble might serve as a hyper-parameter which balances the performance and computational efficiency of the algorithm. \n\n### References:\n\n[1] Abbasi-Yadkori, Yasin, Dávid Pál, and Csaba Szepesvári. \"Improved algorithms for linear stochastic bandits.\" Advances in neural information processing systems 24 (2011). \\\n[2] Chatterji, Niladri, Aldo Pacchiano, Peter Bartlett, and Michael Jordan. \"On the theory of reinforcement learning with once-per-episode feedback.\" Advances in Neural Information Processing Systems 34 (2021): 3401-3412. \\\n[3] Yu, Tianhe, Garrett Thomas, Lantao Yu, Stefano Ermon, James Y. Zou, Sergey Levine, Chelsea Finn, and Tengyu Ma. \"Mopo: Model-based offline policy optimization.\" Advances in Neural Information Processing Systems 33 (2020): 14129-14142. \\\n[4] Peer, Oren, Chen Tessler, Nadav Merlis, and Ron Meir. \"Ensemble bootstrapping for Q-Learning.\" In International Conference on Machine Learning, pp. 8454-8463. PMLR, 2021.", " Thank you for your helpful review. We are encouraged that you found our work significant and novel, the paper easy to read, and that the experiments clarify the impact of the paper. Please find our response to your comments below.\n\n### Re chosen termination model:\nWhile the function class of the termination model we describe is simple, we argue that it can capture expressive features as it depends on full trajectory states and actions. This is also apparent in the inverse RL literature, where a sum of rewards is assumed for the underlying human data. We believe this model can capture complex termination scenarios, such as the human termination example in our work. \n\nYour comment is also related to the question of using more complex function classes for termination. This is an interesting direction for future work. Particularly, for a termination model $P(\text{termination}) = f(\mathrm{c})$, where $\mathrm{c} = (c(s_1, a_1), c(s_2, a_2), \ldots, c(s_H, a_H))$, and $f \in \mathcal{F}$ is some function class, one would need to establish a result that obtains local confidence bounds for the costs, similar to Theorem 1. Particularly, the fact that we can establish local confidence bounds motivates our choice of model, as this lets us obtain an efficient algorithm for learning the costs and solving the TerMDP. \n\nOur practical solution can easily be adapted to the general class setting, by changing the sum over cost networks to some other function of the cost networks (see Figure 2), possibly a neural network. This is an interesting direction for future work - we will add further discussion on this to the paper. \n\n### Re std and more seeds: \nWe've run experiments with 5 new seeds and changed the std to corresponding 95% confidence intervals. This change did not have a meaningful effect on our results. We will upload an update of the paper with new plots.\n\n### Re needed amount of termination signals:\nWe agree that adding the termination-to-episode ratio is an insightful statistic, as you suggested. 
We gathered the data into the following table (which we will add to the paper):\n\n| **Experiment** | **Backseat Driver: Coin Avoid.** | **Backseat Driver: Human** | **MinAtar: Space Inv.** | **MinAtar: Seaquest** | **MinAtar: Breakout** | **MinAtar: Asterix** |\n|:---|:---:|:---:|:---:|:---:|:---:|:---:|\n| **Number of Episodes** | $6e6$ | $6e6$ | $10e6$ | $10e6$ | $10e6$ | $10e6$ |\n| **Number of Terminations** | $0.15e6 \pm 0.04e6$ | $0.14e6 \pm 0.03e6$ | $0.26e6 \pm 0.14e6$ | $0.3e6 \pm 0.08e6$ | $0.28e6 \pm 0.05e6$ | $0.34e6 \pm 0.05e6$ |\n\n\nWe also agree that in the worst case, the number of required terminations can be large, and would theoretically scale as $O(T/H)$ where $T$ is the number of iterations and $H$ is the horizon. Still, we note that:\n\n- We are generally limited by theoretical lower bounds in RL, which require a large number of iterations to converge (regret scales as $O(\sqrt{HAST})$). It is thus reasonable to assume that learning with implicit termination signals would also require a large number of iterations to converge, unless the cost function is very sparse or simple. Nevertheless, as we show in Figure 6 (page 24, Appendix E), the cost function for Backseat Driver converges faster than the reward itself.\n\n- We emphasize that we do not necessarily control the termination signals. Particularly, we assume they are given to us as part of the environment. Our framework is designed to cope with this particular problem. Therefore, we do not necessarily need to think of terminations as a means for designing an algorithm, but as a constraint that already exists in the world, one which must be taken into account. \n\n### Re definition of L: \n\nWe define $L$ in line 84. We will remind the reader of this definition when it is used again. The parameter $L$ is an upper bound on the values of $c$. We note that this factor can scale in the worst case as $c_{\text{max}}\sqrt{SAH}$, if all costs are equal to $c_{\text{max}}$, though in practice it can be much smaller if the cost function is sparse (which is usually the case). This factor is common in the linear bandit [1,2] and logistic bandit literature, which we utilize for our confidence result.", " Thank you for your positive review and your helpful comments. Please find answers to your questions below:\n\n### 1. Re State-action dependent cost vs state-dependent cost:\nUsing state-only costs would not change the analysis. We can write the termination probability using a state-only cost function, which would be a special case of the state-action cost function. This would relate to the setting in which every action has the same cost. To make this clear, we'll rephrase the wording of state-dependent cost function to state-action-dependent cost function.\n\n### 2. Re Different function classes for termination:\nThank you for this important question. The case of more complex function classes is indeed interesting. For a termination model $P(\text{termination}) = f(\mathrm{c})$, where $\mathrm{c} = (c(s_1, a_1), c(s_2, a_2), \ldots, c(s_H, a_H))$, and $f \in \mathcal{F}$ is some function class, one would need to establish a result which obtains confidence bounds over the costs, similar to Theorem 1. Particularly, local confidence bounds are essential for learning the costs and solving the TerMDP. The simplicity of our model enables derivation of such a high-confidence result, while at the same time we found it to be expressive enough in practice. 
Moreover, our practical solution can easily be adapted to the general non-linear class setting, by changing the sum of cost networks to some other function of the cost networks (see Figure 2), possibly a neural network. This is an interesting direction for future work, and we hope it will spark further research. We will add more discussion on this matter to the paper.\n\n### 3. Re similarities to IRL:\nWe agree that the IRL problem resembles the cost estimation problem. However, in the former, one must recover the unknown reward function assuming optimality, while here one must recover an unknown cost function from sparse termination signals. An important difference between these is the role of the cost function, in contrast to the reward. While the reward is optimized directly, the costs in the TerMDP affect the transition probability to a terminal state. This makes this inverse problem quite different. Also, the reward and cost functions may play different roles in the TerMDP setting, as the terminator’s preferences may not align with the designed reward function. \n\nAnother way to view this problem is as an inverse problem for a state-dependent discount factor. One could attempt to construct an IRL problem in which both the discount (= 1 - probability of termination) and the reward function must be recovered.\n\n### 4. Re relation to reward design and preference-based RL:\nThe TerMDP model considers a new method to adapt to external feedback. There are two main scenarios to take into consideration with termination:\n\na. Termination by an exogenous observer - this is the setting we mostly discuss in the paper, which assumes we do not control the termination signals and they are given to us as part of the environment. Our framework is designed to cope with this particular problem. Therefore, comparison to reward design / preference-based methods is not very informative in this setting, as we consider a different feedback mechanism. \n\nb. Termination as a design choice - we can view termination as an event we trigger ourselves for improving RL algorithms. This would better relate to the preference-based / reward design setting. Reward design is quite a hard problem, and it is unclear how to systematically accomplish this task. Preference-based methods are great for incorporating external knowledge without the need to design a reward function. Termination can be considered as an even easier alternative to incorporate such knowledge - as termination signals are very sparse and require very little information from the external observer. This was evident in our experiments (Section 5, lines 241-250), where we gave the human observers a very general objective for terminating the agent, which they were able to easily interpret. \nAs an example, consider the case where termination happens when the agent “gets lost” or approaches something known to be suboptimal. In that case, termination may signal the RL system that it had ventured into undesired territory. In some applications, designing such a termination signal may be a lot easier than designing a reward function that may percolate to the desired area of the state space. \n\nFinally, we note that the cost function need not be related to the reward function, in the sense that the terminator may have preferences that are not aligned with our primary objective, in contrast to preference-based / reward-shaping methods. 
\n\nWe’ll add further discussion on the relation of these problems to our work.\n\n-----------------------------------------------------------------------------------------------\nWe also note, as the reviewer mentioned, that we will open-source all our code and environments.\n\n\n", " We thank the reviewers for spending the time and effort to carefully evaluate our work. We are encouraged that the reviewers found our work \"novel\" [gYNG, tpfd], \"extremely well written\" [nYZQ], and \"well motivated\" [nYZQ] with \"many useful applications\" [tpfd] and \"real-world applications\" [gYNG]. Beyond these encouraging descriptions, the reviewers also made valuable comments that we answer in the following.", " This paper studies the RL setting with external / forced termination and defines a new type of MDP, the Termination MDP or TerMDP. The authors propose a theoretically-grounded method which is well suited to this setting and derive regret bounds. They also introduce a scalable approach, TermPG, which combines optimism and a dynamic discount factor incorporating the termination probability. The method is evaluated on two different domains, including an autonomous driving scenario with human termination data. TermPG is significantly better than vanilla PG and other variants. \n ## Clarity\n\nThe paper is generally clear and well-written. \n\n\n## Soundness\n\nThe introduced setting and method are well-motivated with practical examples, analyzed through a theoretical lens, and thoroughly evaluated on a number of challenging tasks. The authors compare with a number of ablations, demonstrating the importance of each design choice. The proposed approach seems significantly better than naive algorithms in this setting, across many environments. \n\n\n## Novelty and Significance\n\nThe paper introduces a new type of MDP, the TerMDP, together with a method suited for this setting, TermPG, which is novel as far as I know. I particularly liked the optimistic discounting factor part of the algorithm. The paper also introduces a new benchmark and evaluation protocol in a realistic domain for evaluating methods for TerMDP. I also appreciated the fact that the authors ran experiments with human data, rather than merely synthetic data or human proxies (which are typically quite disconnected from reality). It is promising that the results are strong in those cases. While this setting is quite niche / specific and I wouldn't expect a large part of the community to build on this work, I believe it is still important for certain real-world applications and for building systems that won't act in isolation but rather interact with humans, so I hope it will inspire more research in this area. The open-sourcing of the code for environments and methods could help advance research in this area, so I encourage the authors to make these publicly available.\n\n 1. In the introduction, you mention that the cost function is state-dependent, but in the first equation for the termination probability, it looks like it is a function of state-action pairs. Could you write it as a state-only function? Would this change the formulation, analysis, algorithm, or results?\n\n2. Why did you choose this particular form for the termination probability? Could this be modelled in a different way or what are the advantages / disadvantages of this choice?\n\n3. 
The TerMDP formulation seems to share some similarities with the IRL (inverse RL) paradigm where the agent receives some reward but it is not the full reward (there is uncertainty, some unknown part of the reward which needs to be inferred from the environment -- in this case the termination actions). Could you comment on this?\n\n4. In the related work you mention several relevant areas such as constrained MDPs, reward design, global feedback, or preference-based RL. However, you don't compare your methods with any of those (or some variants adapted to this setting). Could you add such baselines or at least explain why those aren't being considered?\n The paper addresses limitations and potential negative social impacts in the last section. I found this section to have a reasonable amount of detail and nuance. However, it would still be good if the authors can further expand (perhaps in the appendix) on how this approach compares with others, particularly different types of reward shaping, inverse RL methods. ", " The paper introduces an extension of the MDP paradigm, to the case where episodes can be interrupted by an external non-Markovian signal. This has several practical applications for cases with human supervision. The paper includes the formalization of this setting, theoretical properties on provably-efficient learning, and a practical algorithm that is evaluated on driving and MinAtar benchmarks. (+) The proposed setting is interesting, and I can indeed see many useful applications of this. As such, this is a significant novel contribution.\n(+) The paper is clearly written, and the reader can understand the methodology and contributions.\n(+) Several experimental domains are considered, and ablations are performed to clarify the impact of different aspects of the model.\n\n(-) The paper makes a very specific assumption about the conditions under which termination happens (based on sum of costs), which seems more out of mathematical convenience than out of real-world motivation. This limitation is clearly stated upfront, and acknowledged in the discussion.\n(-) More experiments would be helpful to draw stronger conclusions. Only 5 seeds are used (Fig.5 caption), yet there have been many reports that this is insufficient for high-confidence conclusions in RL (e.g. Henderson et al., AAAI 2018). Why not run 10 seeds? This should be feasible.\n(-) The number of terminations needed to learn seems very high. Can you report how many terminations are observed for each result? The figures show # steps, so it’s not obvious to know how many terminations were observed. But #steps is in the 10^6-10^7 range, which suggests #terminations would be much higher than is practical for humans to give. The fact that you had to train a termination model for BDr Expt 2 highlights this limitation; it would have been preferable to show results with direct human termination, to verify whether the proposed work meets the problem as stated.\n -\tWhat is “L” in Sec.3, e.g. Thm 1 (l.123)? This seems an important (cubic) factor in the bound, yet I could not see it defined.\n-\tHow much is the ensemble needed? This seemed somewhat of a distraction from the proposed method. Its effect is investigated in the Ablation, and it seems to help. But would it be possible to achieve a similar effect without the extra cost of an Ensemble?\n-\tWhat metric do you report specifically? “We report mean and std” (l.214). Is this the std dev or the std error? I suspect std dev, based on caption of Table 1. 
I would recommend std error to draw conclusions on significance, and verify whether you ran enough seeds.\n -\tThe paper provides a reasonable discussion of technical limitations.\n-\tThe paper provides some discussion of societal impact in the final discussion, at a level that is adequate for the potential risks.\n", " The paper introduces a variant of an episodic MDP where exogenous termination occurs - the episode is terminated by an observer based on some accumulated cost function that is not observed by the agent. Under this new formulation (and some additional assumptions) the paper proposes a solution to the problem, which amounts to estimating these unknown costs from the sparse termination signal, and then incorporating the cost estimation into an RL algorithm optimistically. Theoretical guarantees are provided with respect to the cost estimation, bounds on the regret and the existence of optimal policies in this setting. Results on several domains (requiring deep RL methods) demonstrate that the proposed algorithm outperforms policy gradient baselines when both algorithmic and human termination are applied exogenously. \n\+ The paper introduces a useful variant of the standard MDP formalism. I found this well-motivated with several real-world use cases, and I was surprised to see that no one has considered such an extension previously.\n\n\+ The paper provides theoretical guarantees for the algorithm and then extends it to implement a practical policy gradient method that can be used on larger MDPs with all the latest deep RL algorithms. \n\n\+ On the whole, the paper was extremely well-written (but see below) and well laid out. \n\n\+ At various points, the paper points out limitations or shortcomings, which I appreciated.\n\n\- The only section I found difficult to read was Section 3. In particular, the various symbols (such as $\kappa$) encapsulate an awful lot, and it makes it hard to get a sense of how tight the bounds are and how they behave under various conditions (I have some questions in the next section). \n\n\- Much of the theoretical analysis relies on the particular form of termination assumed. There are also certain other assumptions made where it is not clear if they are made for simplicity or because of some fundamental reason. \n\n\- A lesser issue, but the experiments, while requiring a neural network for function approximation, are on \"smaller\" domains - it is thus unclear how the method scales (or whether it can, given the computational overhead) to larger ones. While reading the paper, parts of it put me in mind of other formalisms or approaches. While it is not necessary to include these in the paper, I would be curious to hear the authors' thoughts on how their paper links to those and whether there is any connection. I also have a couple of clarifying questions below.\n\n1. The approach to learning the cost function relies on the termination condition being of a specific form. This paper introduces this MDP extension and so it is reasonable for the paper to make any claim about how termination occurs, but I am curious about other forms of termination. For example, if there was just a hard threshold at which point termination occurs, could we use something like RUDDER [1] to do credit assignment on the termination signal (1/0) to estimate the costs in practice? \n\n2. 
The MDP formulation here puts me in mind of the average reward SMDP formulation of [2], where the aim is to learn a policy that maximises the infinite-horizon ratio between expected reward and expected cost. Of course, this assumes that the cost function is known, but does cater for the infinite horizon MDP case, which seems more in line with the autonomous driving example. I would be interested if the authors think there is any link here or under what conditions it would be appropriate to use the TerMDP vs AR SMDP formulation.\n\n3. On line 59, the rewards received by the agent are in the range $[0, 1]$, but I could not see why this would be the case. Certainly, I understand that the rewards must be non-negative since termination is \"bad\", but why should the rewards be bounded? \n\n4. On the topic of the theoretical bounds, I struggle to get some intuition about the various terms. The constant $L$ bounds the norm of the costs, but there is no bound on the costs themselves. So let's say that, for example, the costs were in the millions (and so was the bias). What would that mean for the bounds? Would things change if everything was scaled accordingly (e.g., costs/bias of 1 vs. 1e6)? A similar question about $\kappa$ - if the derivative is near 0, then $\kappa$ would be massive. Presumably, this equates to the case where termination happens extremely infrequently (or all the time)? One concern here for me would be what this means in practice - how useful are these bounds for practical purposes if termination happens every so often (ranging from all the time to never)? And how likely is termination to occur in the domains tested?\n\n5. In the results, I notice that a window size twice as large as necessary improves performance. Is this a statistically significant result? If so, why should this be? \n\nMinor typo: Line 158: rollouts -> rolls out\n\n[1] Arjona-Medina, Jose A., et al. \"Rudder: Return decomposition for delayed rewards.\" Advances in Neural Information Processing Systems 32 (2019).\n[2] Ross, Sheldon M. \"Average cost semi-Markov decision processes.\" Journal of Applied Probability 7.3 (1970): 649-656. One big positive of the paper was that it was honest about where problematic areas arise (both in the theoretical analysis and in potential practical deployment). One additional aspect that bears discussing, though, is [3] and related work. In particular, if termination is done from outside the system, then there is no problem. However, you could imagine a case of a real robot being operated by a real human. Since termination is bad, the robot could learn either positive behaviours that mean the human does not wish to terminate it, or it could learn to prevent the human from terminating it (e.g. by preventing the human from pressing a kill-switch), which of course would be a terrible outcome. \n\n[3] Orseau, Laurent, and Stuart Armstrong. \"Safely interruptible agents.\" Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence. 2016. " ]
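To make the two TermPG ingredients discussed in this thread concrete, here is a brief sketch: ensemble-based optimism over the unknown costs (the "Re no optimism" answer above) and a termination-aware discount. The logistic form follows the reconstruction given after the abstract; the mean-minus-std bonus, $\beta$, and folding survival into the per-step discount are illustrative assumptions, not the paper's exact recipe.

```python
import torch

def optimistic_costs(ensemble_preds: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """ensemble_preds: [E, T] per-network cost predictions along a trajectory.
    Optimism w.r.t. termination = assume costs as low as the ensemble's disagreement
    plausibly allows, so the agent is not prematurely pessimistic about surviving."""
    return ensemble_preds.mean(dim=0) - beta * ensemble_preds.std(dim=0)

def dynamic_discounts(costs: torch.Tensor, bias: float, gamma: float = 0.99) -> torch.Tensor:
    """costs: [T] estimated per-step costs. p_term is the logistic termination
    probability; scaling the base discount by the survival probability (1 - p_term)
    is one plausible reading of 'a dynamic discount factor, incorporating the
    termination probability', not a quote of the paper's exact rule."""
    p_term = torch.sigmoid(costs.cumsum(dim=0) - bias)
    return gamma * (1.0 - p_term)
```

The sigmoid's flat regions also connect to the $\kappa$ discussion above: when the cumulative cost sits far from the bias, the gradient of the termination probability is tiny, and the costs are hard to identify from termination events alone.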
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "d1D-syiasit", "XOlz6thldNB", "iAsE8EDZaoz", "aFLtkU-mNMX", "3FGwXNnlrv5", "bYboBHxnT6v", "d1D-syiasit", "_MfTwYZxSUh", "nips_2022_bIlUqzwObX", "nips_2022_bIlUqzwObX", "nips_2022_bIlUqzwObX", "nips_2022_bIlUqzwObX" ]
nips_2022_p_g2nHlMus
Rethinking Generalization in Few-Shot Classification
Single image-level annotations only correctly describe an often small subset of an image’s content, particularly when complex real-world scenes are depicted. While this might be acceptable in many classification scenarios, it poses a significant challenge for applications where the set of classes differs significantly between training and test time. In this paper, we take a closer look at the implications in the context of few-shot learning. Splitting the input samples into patches and encoding these via the help of Vision Transformers allows us to establish semantic correspondences between local regions across images and independent of their respective class. The most informative patch embeddings for the task at hand are then determined as a function of the support set via online optimization at inference time, additionally providing visual interpretability of ‘what matters most’ in the image. We build on recent advances in unsupervised training of networks via masked image modelling to overcome the lack of fine-grained labels and learn the more general statistical structure of the data while avoiding negative image-level annotation influence, aka supervision collapse. Experimental results show the competitiveness of our approach, achieving new state-of-the-art results on four popular few-shot classification benchmarks for 5-shot and 1-shot scenarios.
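The abstract leans on masked image modelling to give every patch token a local training signal, and the author responses below credit exactly this for reliable cross-image patch correspondences. A minimal sketch of such a pretext loss follows; the `encoder`/`decoder` signatures, mask ratio, and raw-pixel regression target are assumptions standing in for whichever MIM variant the paper actually uses.

```python
import torch
import torch.nn.functional as F

def mim_loss(encoder, decoder, images, mask_ratio=0.6, patch=16):
    """Masked-image-modeling pretext in the spirit discussed in the thread: mask a
    fraction of patches, reconstruct them, and penalize only masked positions so
    every patch token is forced to carry local semantic content.
    NOTE: encoder(images, mask) -> latent and decoder(latent) -> [B, N, C*p*p]
    are placeholder interfaces, not the paper's actual modules."""
    B, C, H, W = images.shape
    # Flatten ground-truth pixels per patch: [B, N, C*p*p]
    tgt = F.unfold(images, kernel_size=patch, stride=patch).transpose(1, 2)
    N = tgt.size(1)
    mask = torch.rand(B, N, device=images.device) < mask_ratio  # True = masked
    pred = decoder(encoder(images, mask))                       # [B, N, C*p*p]
    per_patch = ((pred - tgt) ** 2).mean(dim=-1)                # [B, N]
    return per_patch[mask].mean()                               # loss on masked patches only
```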
Accept
This paper tackles few-shot learning with a transformer architecture and, inspired by the intuition that fine-grained information is ignored in existing methods, uses an inner-loop token re-weighting method to improve results. Overall the reviewers appreciated the use of modern architectures (Vision Transformers), the reasonableness of the re-weighting intuition, and experimental results. Concerns were raised about comparison to existing methods with similar intuitions (e.g. [A] mentioned by eF5W), fairness of the comparison with respect to model capacity (and, in general, ablations demonstrating that it is the method, not transformers by themselves, leading to improved results), lack of principled explanations for the design choices, and computational complexity. The authors provided strong rebuttals, including new experiments using linear classifiers and prototypical approaches, use of smaller models, and a demonstration of potential pruning methods to address computational complexity. The reviewers were overall receptive to the rebuttal, and all recommended acceptance of this paper after some back-and-forth. The paper provides both a nice benchmark applying Vision Transformers to few-shot learning as well as a method that is demonstrably better through ablation studies. Therefore, this paper provides several nice contributions to the community, and I recommend acceptance.
val
[ "FoJbLjv1q4p", "xed_i8L6fo", "yfGwk4qBB-H", "Ihh3_GbsB2S", "WiNslw0VrUP", "jSGLAMzN_A0", "8iDmcnOT_Vj", "-cn33EhWxD", "fZkjysigkyP", "6R2dlUKR48i", "awuPyh8j9s-", "dDhOVIcpwNF", "A9HFNo0-M3c", "v6SSYysrYfX", "KCdhzm_i_c" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your continued feedback!\n\n> _[...] for the supervised pre-training for the same FSL task in Fig. 4, what exactly is done?_\n\nFor adequate comparison to related work in FSL, we follow the widely adopted pretraining scheme used in FEAT [52] and other works (e.g. DeepEMD [53]) for our supervised pretraining. In detail, we train the network with cross-entropy loss on the training set of the respective dataset to solve a standard classification task (e.g. for miniImageNet: 64 classes) – i.e., using the exact same data we use for self-supervised pretraining. Like [52] we use the representations of the penultimate layer (before the classifier) to evaluate the performance and quality of the embeddings. To judge suitability of the encoder for few-shot tasks, an N-way 1-shot task is commonly solved (e.g. N=16 for miniImageNet due to the 16 classes in the validation set) – and we tried three different variants here:\n\n 1.) & 2.) One sample per class is encoded to produce a class-embedding (’prototype’), and classification performance is evaluated using 15 queries per class. (This is the method used in recent related works). To retrieve one embedding per sample, we use the average over all patch tokens produced by the Transformer architecture. For fairness regarding metrics, we evaluate both:\n1) _embedding distance_ (MSE) and \n2) _embedding similarity_ (cosine) to perform classification.\n\n3.) We additionally use our own patch-based classifier to evaluate the FSL setting using all patch embeddings (as we later do during fine-tuning & evaluation).\n\nWe perform validation over 200 such FSL-tasks after every epoch during training and pick the best-performing model regarding highest average validation accuracy. We encountered clear signs of overfitting during this type of training, with the training accuracy\nconsistently improving to convergence, but validation accuracy plateauing (or decreasing) rather early on (~350-500ep), independent of the variant we used to evaluate on the validation set.\n\nWe thank you for pointing this out and will include a comprehensive list detailing all used hyperparameters into the supplementary material of the paper.", " I think Q1 is the only issue that remains unclear to me. I agree that supervised learning induces a tendency of the representation space to overfit to the structure of the classes observed during training (i.e., ‘supervision collapse’ problem). The authors presented a self-supervised pre-training strategy, creating a more general/less distorted representation space that is significantly better suited to generalize to yet unseen classes and avoid collapse. However, for the supervised pre-training for the same FSL task in Fig. 4, what exactly is done? (so that the comparison to the proposed scheme is fair)", " Thanks to the author for the comments to address my concerns. I will still recommend a weak acceptance of this work.", " Thank you for providing detailed answers. I will stick to my original rating and recommend acceptance. ", " Thank you for the detailed response, the new results are quite interesting and my concerns (shared with Reviewer erRy) are for the most part addressed. Apologies for missing the implication in Fig.7b. Regarding token reweighting for CTX/FRN, I had been envisioning a setup with batch folding, for a similarly support-set-based training scheme, but batch folding is in no way universal and yes, such an adaptation would clearly be out of scope. 
Regarding batch folding, I’m not sure I understand the distinction between training with a support-query split, and using only the supports, versus training with a smaller batch and all images as supports, but this is ultimately a semantic argument and not particularly relevant to my scoring of the paper at this point. Final rating raised to Accept. ", " ... continued from previous\n>_Although this paper applies Masked Image Modeling (MIM) as the pretext task for pre-training ViT, use of other self-supervised pre-training approaches like contrastive learning (e.g., DINO [C], MoCo v3 [A]) would be possible. It would also be good if the authors provided some insights or comparisons about the choice of the self-supervised pre-training approach. If MIM is desirable for this task, more explanation and support would be needed._\n\nOur experimental results clearly indicated that enforcing an explicit loss onto the patch tokens via the pretext task of Masked Image Modelling (MIM) indeed significantly helps to build strong embeddings that encode the semantic content of the associated patch. We ran our initial experiments using DINO, which does not enforce such local constraints, and analyzed the representation quality of the patch tokens w.r.t. token similarity across different samples of one class by visualising the top-k most-similar tokens in a side-by-side comparison between two images. We observed significantly worse and less reliable matches when using DINO compared to our choice of MIM as the pretext task. We are happy to include visualisations together with a discussion of these findings in the supplementary material of the revised paper.\n\n>_Additional learning for test data (few-shot instances from novel classes) is needed for the proposed work (but not necessarily for a number of SOTAs). I’d like to see how the authors would elaborate on this issue._\n\nZero-update methods have often proven successful since updating the actual model at inference time on such a small number of samples (5-25) easily leads to overfitting, however at the cost of not being able to adapt to the task at hand. Recent methods such as CAN [18] and FEAT [52] have attempted to solve the problem by learning an extra module to predict a task-specific refinement of their representations – however, the learner itself remains fixed at inference time and thus relies on sufficiently good a-priori training. We, in contrast, learn a small set of parameters on the fly at inference time (scalar weights for our correspondences) once per task before classifying the query samples. This allows our method to be adaptive to the structure of the class set encountered for the task at hand and to leverage the inter- and intra-class dependencies without requiring additional learnt modules. Table A1 in the supplementary material provides insights into the efficiency vs. effectiveness trade-off. As the results suggest, our method can gain significant performance improvement (more than 1% within 5 update steps) without noticeably sacrificing efficiency (≈ 3ms per task). Further improvement can be observed by increasing the optimization steps.", " We thank you for the feedback, and will address your concerns in the following point by point:\n\n>_In Fig. 4, the authors show that self-supervised pre-training performs significantly better than the supervised-pre-training counterpart. 
However, in prior SSL literature (e.g., [16, A]), SSL pre-training only slightly outperforms supervised pre-training (sometimes or even worse than it). A proper explanation (or insight) that the proposed SSL pre-training surpasses a large margin over the supervised counterpart reported in Fig. 4 is needed._\n\nFew-shot learning is distinctively different from conventional classification (like [16,A]) in one important aspect: novel previously unseen classes are encountered at test time. As such, supervised learning induces a tendency of the representation space to overfit to the structure of the classes observed during training. In other words, the representation space is created and condensed to easily separate observed training classes, but at the expense of distorting other dimensions that might be crucial to correctly distinguish yet unseen classes. This is known in the few-shot literature as ‘supervision collapse’ [4 ]. Since no class labels are provided during the self-supervised pre-training, we expected the method to create a more general/less distorted representation space that is significantly better suited to generalize to yet unseen classes and avoid collapse. These intuitions are supported by the results we have obtained (Fig 4.). We further\nobserve that self-supervised training is helpful to prevent early overfitting when learning from small few-shot datasets (e.g. 38.4K miniImageNet vs. 1.2M ImageNet1K).\n\n>_The t-SNE visualization in Fig. 5, only verifies that the patch-level embeddings derived from the same “instance” are clustered together, and those from different instances are separated from each other. However, this figure only explains/visualizes separation between different instances, but not the discrimination between different classes (which is much more important for FSL). It is desirable to see whether the embeddings extracted from the same “class” are gathered, while the embedding of different classes would separate far from each other._\n\nWe have included two new PCA visualisations of the embeddings for the entire support set of a 5-way 5-shot setting into the uploaded first revised version of our paper, with different classes indicated by color. Note that we obtain one embedding/token per image patch, and we thus expect a much larger spread of the embeddings within one class as well as higher partial overlap between classes due to similarities in background and non-class-relevant objects present in the scene, compared to other (e.g. prototype-based) methods. As can be clearly observed in the rightmost sub-figure, our token importance reweighting-based method is able to determine the essential parts\nof the image that are characteristic for each class (highlighted by the respective class color) while excluding irrelevant tokens like background or non-primary objects (highlighted in brighter color). In this way, our method is able to reliably separate instances of different classes even in cases where the images contain multiple objects and depict complex real-world scenes – in other words determine what matters most in an image for the task at hand (as shown in Figure 6).\n\n>_Since the title of this paper emphasizes the aspect of “generalization” in few-shot learning, one would expect learning strategies with results/comparisons with recent cross-domain few-shot learning works (e.g., [B]). 
In other words, cross-domain FSL aims to transfer the learned knowledge to the novel classes in unseen target domains (showing generalization ability)._\n\nWe would like to clarify that in our work, we are not using the term ‘generalization’ as an indication of recent cross-domain scenarios but rather to describe the ‘generalization’ aspect that is inherent in few-shot classification tasks: the generalization of our representations trained on data from the training distribution towards samples of novel classes from the test data distribution. As we discuss in the introductory section of our paper, using single labels for complex or multi-object scenes can easily lead to supervision collapse – meaning that the representation space overfits to the structure of classes encountered during training (as discussed above). As our results demonstrate, our approach using self-supervised pre-training followed by meta fine-tuning with our token reweighting approach is able to obtain new state-of-the-art results on several few-shot learning benchmarks by generalizing towards the test class distribution.\n", "Thank you for your review. We will address the concerns in the following point by point:\n\n>_As shown in Fig. 4, the use of self-supervised pre-training for Vision Transformer provides a significant improvement, compared to supervised training. It would be interesting to see what performance existing classifiers (e.g. learning a linear classifier, prototype) obtain when using the same backbone network. This would help evaluate the benefits of the proposed embedding similarity-based classifier._\n\nWe have run additional experiments using our pre-trained ViT-small backbone followed by classifier-specific meta fine-tuning to provide further insights into the performance of existing classifiers. We were able to obtain a test accuracy of 82.80% for the prototypical network after optimising the pre-trained backbone with meta-finetuning, which is competitive with the results we obtain for our method without reweighting (’0 step’ in Fig. 7 (b)) but is still clearly outperformed by our reweighting-based approach (see table below). To provide a fair comparison, we optimize the linear classifier at inference time to adapt to the support set and obtained a maximum test accuracy of 82.37%. Both results indicate the quality of the embeddings our backbone is able to produce but also demonstrate the importance of our task-specific reweighting-based approach.\n\n| Model | Test Acc |\n| :--- | :----: | \n|Protonet w/ Euclidean distance| 82.80±0.59| \n|ProtoNet w/ Cosine distance | 79.90±0.65|\n|Linear classifier | 82.37±0.57|\n|FewTURE (ours) 0 rew. steps|82.68±0.55|\n|FewTURE (ours) 15 rew. steps|**84.05±0.53**|
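\n\nTo make the two baseline variants above concrete, here is a minimal NumPy sketch of prototype-based classification on top of frozen embeddings (a toy illustration with random stand-in features; all shapes and names are our own choices for this response, not the exact evaluation code):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, d = 5, 5, 64                          # 5-way 5-shot task, embedding dim (illustrative)
support = rng.normal(size=(N, K, d))        # stand-ins for backbone embeddings
queries = rng.normal(size=(15 * N, d))      # 15 queries per class

prototypes = support.mean(axis=1)           # one mean embedding ('prototype') per class

# Euclidean-distance variant (ProtoNet-style): the closest prototype wins
d2 = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
pred_euclidean = d2.argmin(axis=1)

# Cosine-similarity variant: the most similar prototype wins
qn = queries / np.linalg.norm(queries, axis=1, keepdims=True)
pn = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
pred_cosine = (qn @ pn.T).argmax(axis=1)
```

In both variants the backbone stays frozen at inference time; only the linear-classifier baseline involves any test-time optimization.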
\n\n>_In the sota comparison, most previous methods use ResNet-12 backbone, while the authors employ ViT-small and Swin-Tiny. It will be helpful if the authors include the model sizes (number of parameters) for each of these backbones for comparison._\n\nWe have included the model sizes as well as some additional state-of-the-art baselines (using WRN-28-10) into our comparison to the state-of-the-art. We will further include a more detailed discussion regarding the influence of model size into the supplementary material.\n\n>_The approach computes patch-wise correspondence between all the support and query images. I wonder if this could become computationally expensive when dealing with a large number of classes, or when each class has more samples (e.g. 100 way - 30 shot classification). A discussion on this would be beneficial._\n\nWe thank you for pointing this out and acknowledge that this is indeed an important point that needs consideration. Our method can be easily adapted by pruning the number of considered tokens and thus scaled to the large setting, _i.e.,_ many-way many-shot. We have run experiments using the attention maps inherent in our approach to prune the number of patch tokens and only use the top-k for increased computational efficiency. Using our ViT-small backbone, we trained and evaluated pruning the number of tokens to 75%, 50%, 25% and 10% of the original token number and obtained the following results:\n\n| # tokens | Test Acc |\n| :--- | :----: | \n|100%| 84.05±0.53| \n| 75%| 83.15±0.57|\n| 50%|82.81±0.59|\n| 25%|81.79±0.57|\n| 10%|81.05±0.62|\n\n", "_... continued from previous_\n\n>_Is the inner-loop token-reweighting scheme compatible with more sophisticated patch-to-patch techniques, such as CTX or FRN? Or is this (understandably) left for future work?_\n\nThanks for pointing out these two works in this context. CTX [9] proposes to reweight support features and produce query-aligned prototypes by using a Transformer-styled cross attention mechanism between a class of support features and the query feature, while FRN [48] proposes to reconstruct the query features via a weighted sum of a pool of support class features. The weighting mechanisms of both methods are based on the query features and are performed to refine/create embeddings. In contrast, our task-specific reweighting mechanism depends on the support set and its labels and works directly on the embedding similarity matrix, which makes it distinctly different from FRN and CTX; a toy sketch of this inner-loop idea is given after the table below. Developing a similarity-based reweighting mechanism that additionally leverages query features could however be an interesting future work.\n\n>_Given a limited computation budget this may not be feasible, but it would be interesting to see if the results from Fig4 and Sec3.2 hold for a more standard meta-fine-tuning technique (i.e. a basic Prototypical setup). Authors appear to have produced such a model in at least one setting for Fig5 (“average”). What does the performance look like? Can authors elaborate or speculate on this?_\n\nWe have run additional meta fine-tuning experiments and trained both a linear classifier and a prototypical approach using our pre-trained ViT-small backbone for a 5-way 5-shot setting on miniImagenet, obtaining the results shown in the table below. While both achieve competitive performance, both are clearly outperformed by our proposed method. We optimized the linear classifier at inference time to allow sufficient adaptation to the task at hand and provide a fair comparison. It is worth noting that while the linear classifier as well as our method can efficiently adapt to the task at inference time, the prototypical approach does not offer this capability.\n\n| Model | Test Acc |\n|----------|:-------------:|\n| Protonet w/ Euclidean distance | 82.80±0.59 |\n| ProtoNet w/ Cosine distance | 79.90±0.65 |\n| Linear classifier | 82.37±0.57 |\n| FewTURE (ours) 0 rew. steps | 82.68±0.55 |\n| FewTURE (ours) 15 rew. steps | **84.05±0.53** |
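\n\nAs referenced above, a toy PyTorch sketch of the inner-loop reweighting idea follows. The shapes, the additive log-weight parameterization, and the optimizer settings are illustrative choices made for this response, not our exact implementation:

```python
import torch
import torch.nn.functional as F

N, K, T, d = 5, 5, 9, 32                                  # ways, shots, tokens/image, dim (toy)
tokens = F.normalize(torch.randn(N, K, T, d), dim=-1)     # support patch embeddings
flat = tokens.reshape(N * K * T, d)
tok_class = torch.arange(N).repeat_interleave(K * T)      # class of each support token

log_w = torch.zeros(N * K * T, requires_grad=True)        # scalar importance weight per token
opt = torch.optim.SGD([log_w], lr=0.1)

for _ in range(15):                                       # a handful of inner-loop steps
    loss = torch.zeros(())
    for c in range(N):
        for k in range(K):
            q = tokens[c, k]                              # held-out support image as pseudo-query
            sim = q @ flat.T + log_w                      # reweighted token similarity logits
            mask = torch.zeros(N * K * T, dtype=torch.bool)
            mask[(c * K + k) * T:(c * K + k + 1) * T] = True
            sim = sim.masked_fill(mask, float('-inf'))    # leave the image's own tokens out
            logits = torch.stack([torch.logsumexp(sim[:, tok_class == cc].reshape(-1), 0)
                                  for cc in range(N)])    # aggregate token evidence per class
            loss = loss + F.cross_entropy(logits[None], torch.tensor([c]))
    opt.zero_grad(); loss.backward(); opt.step()
```

The learned weights are then applied in exactly the same way when classifying the actual query images.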
\n\n> _A slightly relevant omitted citation: the masked inner token reweighting scheme might possibly owe some conceptual debt to Batch Folding from [B][Few Shot Learning with Localization in Realistic Settings, CVPR2019], which also models a support-to-support classification task with an identical image-masked leave-one-out scheme (though admittedly implemented quite differently)_\n\nWe thank you for pointing out this work’s relation to our method and will include it into our next revision. While we see the general idea of the image-masked ’leave-one-out’ scheme as a possible similarity, the implementation is (as indicated) significantly different between both methods. We use image-wise (5-shot) or token-wise (1-shot) masking of patch-similarities while still strictly adhering to the few-shot learning split of support and query set, and only use the actual support images and labels in our adaptation strategy, whereas [B] employs a leave-one-out cross-validation scheme across all images in the entire batch (support and query images) to increase the number of data samples, reduce gradient noise and learn better representations.\n\n> _[...] small typos_\n\nThank you for pointing these out, we have corrected them in our revised version.", "We thank you for your detailed feedback, which we will address in the following point by point:\n\n>The strong results come with a major caveat: [...] The difference in model size should be discussed and addressed. \n\n>How should readers interpret the difference in model backbone sizes, and how does this impact the presented results?\n\nWe thank you for drawing our attention to this aspect of our work, and would like to address this in three ways:\n1. We included the most recently published works (2021 & 2022) to ensure fair comparison, the majority of which use the ResNet12 backbone. While many other popular works indeed use WRN-28-10 (_e.g._ S2M2 [31], LEO [38], CC [15]), most have been outperformed by the more recent ResNet12-based methods. We have additionally added the two recent WRN methods OM [35] and PSST [8]. We are happy to include a more extensive comparison to the state-of-the-art (including previous years) into the supplementary material.\n\n2. Related works have shown that model size seems to not be a good indicator for few-shot performance, most likely since training datasets are comparably small (_e.g._ 38.4K images in miniImageNet vs. standard ImageNet with 1.28M) and big networks are thus much more prone to overfit. Chen _et al._ [Chen] demonstrate in Figure 3 of their paper that the performance gains due to larger backbones plateau across all methods for backbones bigger than ResNet10 and only offer diminishing gains (if any at all). The investigations of Mangla _et al._ [31] yielded similar results, where the performance on the miniImageNet and tieredImageNet datasets even decreased by around 0.5-1% when scaling up from ResNet18 to ResNet34 (Table 2). We thus conclude that an increased number of parameters on its own does not lead to better few-shot performance, and the tendency of many recent works to choose the established ResNet12 (12.4M) over bigger backbones is highly likely a result of this. We will add additional discussion regarding the model size and its potential influence to the supplementary material of our paper (due to space limitations of the main paper's body). \n\n3. We have run additional experiments using the significantly smaller ViT-tiny architecture with only 5M parameters [45]. Initial results show that our method achieves a competitive accuracy of 81.10% on the miniImageNet test dataset with less than one seventh of the number of parameters of WRN-28-10: \n\n|Model | Backbone |Params| Test Acc.|\n|:---|:----:|---:|---:| \n|OM [35]| WRN28-10| ≈ 36.5M|85.29±0.4|\n|FewTURE (ours)|ViT-Tiny| ≈ 5M|81.10±0.61|\n|FewTURE (ours)|ViT-Small| ≈ 22M|84.51±0.53|\n|FewTURE (ours)|Swin-Tiny| ≈ 29M|**86.38±0.49**|\n\n[Chen] Chen, Wei-Yu, et al. \"A closer look at few-shot classification.\", ICLR 2019
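\n\nFor reference, parameter counts like those listed above can be verified with a short generic helper of our own (it works for any `torch.nn.Module` backbone):

```python
import torch

def count_params(model: torch.nn.Module) -> int:
    # total number of parameters of the backbone
    return sum(p.numel() for p in model.parameters())

# e.g. count_params(vit_small) returns roughly 22e6 for the ViT-small configuration we use
```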
\n\n> [...] missing ablations – what is the performance contribution of the vision transformer backbone vs token reweighting vs logsumexp aggregation? And less importantly, how significant was the choice of similarity metric?\n\n1. Token reweighting: The contribution of our token reweighting scheme to the model’s performance can be seen in Figure 7 (b) of our paper, where we show the achieved test accuracies for our method trained and evaluated with different numbers of reweighting steps. As can be seen, ’5 steps’ show a significant improvement in performance (82.68% to 83.83%) over the ’0 step’ variant (no reweighting) – demonstrating the importance of this component.\n\n2. Aggregation: We use the logsumexp operation for our aggregation as it poses a rigorous and numerically stable way of combining individual class probabilities (one for each token) to a valid overall probability distribution over classes for each image, independent of how the individual token probability scores are obtained (see similarity metrics below); a minimal sketch of this aggregation is given at the end of this list. We have run additional experiments (training and testing) using our method (ViT-small) and 15 token reweighting steps with the only change being the suggested aggregation via sum (mean), and found it to underperform our proposed logsumexp method of aggregation. Direct addition without normalization (mean) proved highly unstable due to large logit values.\n|Method|Test Acc.| \n|:---|:----:| \n|mean logits|80.13±0.60| \n|logsumexp| **84.05±0.53**| \n\n3. Similarity metrics. We have investigated the use of negative mean Euclidean distance and unscaled dot-product as alternatives, and found our proposed cosine similarity to outperform both:\n|Metric|Test Acc.| \n|:---|:----:|\n|Cosine|**84.05±0.53**|\n|neg MSE|81.85±0.58|\n|Dot-product|37.60±0.64|\n\nWe thank you for pointing out these ablations and will include the additional results with discussion into the supplementary material of our revised paper.
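\n\nAs referenced in point 2, a minimal NumPy sketch of the two aggregation variants follows (our own illustration; `token_logits[t, c]` denotes the class-c score of patch token t):

```python
import numpy as np

def logsumexp(x, axis):
    # numerically stable log-sum-exp along the given axis
    m = x.max(axis=axis, keepdims=True)
    return (m + np.log(np.exp(x - m).sum(axis=axis, keepdims=True))).squeeze(axis)

token_logits = np.random.default_rng(0).normal(size=(36, 5))  # 36 tokens, 5 classes (toy)

image_logits_lse = logsumexp(token_logits, axis=0)   # our soft, numerically stable aggregation
image_logits_mean = token_logits.mean(axis=0)        # the 'mean logits' baseline from the table
```

The subtraction of the per-column maximum is what keeps the aggregation stable for large logit values.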
", "We thank you for your review. We will address your feedback in the following point by point:\n\n>_It is better to discuss other feature re-weighting-based methods in few-shot classification. For example, for fine-grained few-shot classification, Lee et al. [A] show a consistent improvement in all previous methods by adding an attention-based feature re-weighting module._\n\nWe thank you for pointing out this interesting work. While we discuss other related methods that can be interpreted as spatial (_e.g._ CTX [9], CAN [18]) or channel-wise feature reweighting (_e.g._ FEAT [52]), this very recent work has not yet been included. In contrast to previous works, both TDM [A] and our method share the idea of using the entire support set to determine helpful inter- and intra-class information to solve the task, however they differ significantly in how the challenge is approached. [A] uses two attention modules to predict class- and task-specific weight vectors and transforms the feature maps extracted via a CNN backbone by reweighting the channels. In contrast, our reweighting approach is not modifying the embeddings but directly uses the similarity between patch tokens encoded by our Vision Transformer and determines a single scalar importance weight for each, _i.e._ learns which spatial regions matter. We will include and discuss this work in the next revision of our paper.\n\n>_It is unclear to me what \"meta fine-tuning\" means._\n\nWe would like to apologize if our introduction of the term _'meta fine-tuning'_ in Section 3.1 has not been explicit enough and might thus have led to increased difficulty in understanding our work. In our paper, we use the expression _'meta fine-tuning'_ to indicate the meta training phase that follows after the initial self-supervised pretraining (_i.e._ fine-tuning of the weights). As the name indicates, this training procedure is conducted in a bi-level meta-learning manner where two cascaded loops are used (_cf._ [47], [13]). While the inner loop performs the task adaptation via our token importance weights using the provided support sets (Section 2.4), the outer loop computes the loss by evaluating the classification performance on the unseen query samples (Section 2.3), and uses this to update the parameters of the network.\n", "This work deals with the image-level annotation problem in few-shot classification. During training time, given support and query images, the authors first split images into patches, use a transformer to extract feature tokens, and learn which tokens are most related to the label. During test time, it learns by itself within the support images which tokens are more important. The experiment shows a consistent improvement. Strength:\nThe transformer-only architecture makes it very clean and thus makes it potentially a new baseline for the problem. Reweighting features to focus on task-specific information is also a reasonable idea. The experiment shows a consistent improvement.\n\nWeakness:\nIt is better to discuss other feature re-weighting-based methods in few-shot classification. For example, for fine-grained few-shot classification, Lee et al. [A] show a consistent improvement in all previous methods by adding an attention-based feature re-weighting module.\n\n[A] Lee et al., \"Task Discrepancy Maximization for Fine-Grained Few-Shot Classification.\", CVPR 2022. It is unclear to me what \"meta fine-tuning\" means. Does it mean the step of fine-tuning the model with support data at inference time?\n Yes, the author addressed the potential problem when training data is highly limited.", "Motivated by supervision collapse caused by standard few-shot training on weak image-level labels, authors introduce a token-based approach based on unsupervised vision transformers that reweights tokens in an inner loop based on their discriminative power. The model achieves strong results and demonstrates the viability of vision transformer models on few-shot tasks without extra pre-training. 
STRENGTHS:\n\n•\tDemonstrated use of vision transformers for few-shot learning is on its own a neat contribution\n\n•\tDemonstrated use of purely unsupervised pre-training for few-shot learning is also a neat contribution, if not quite as novel\n\n•\tResults are impressive and span multiple benchmarks and architectures\n\n•\tStraightforward and sensible approach to token aggregation and reweighting\n\nWEAKNESSES: \n\n•\tThe strong results come with a major caveat: the ViT-small and Swin-tiny architectures have 22M and 29M parameters respectively, while the compared baselines are almost entirely based on ResNet12, which by my recollection has only 12M parameters. While this is still less than the widely-used WRN-28-10 backbone (36.5M params), I worry that the comparisons presented in the paper are apples-to-oranges. The difference in model size should be discussed and addressed. \n\n•\tUse of vision transformers for few shot classification deserves an empirical study all on its own. Understandably this is not provided here, but because of this it is unclear to what degree improvement is coming from the token reweighting scheme (in theory compatible with existing convolutional architectures) vs the vision transformer backbone (in theory compatible with existing few-shot classifiers). For example, how does the token reweighting scheme compare to simply training a linear classifier head on the support features from a vision transformer? Admittedly, a full comparison along both these axes would be clearly out of scope here. \n\n•\tSimilarly, there is no ablation study provided for the impact of token reweighting vs the logsumexp aggregation scheme. How much better is logsumexp aggregation than direct addition, for example, which would correspond to basic prototype comparison with reweighted averages on each prototype? More broadly, it appears that the token reweighting scheme is broadly compatible with many existing token-to-token classifiers such as CTX and FRN, and it is not clear how the logsumexp aggregator compares. \n\n•\tMore generally, the approach, while straightforward and sensible, does contain a few design choices that are not fully explained or empirically justified (for example, in addition to above, the choice of token similarity metric). \n\n•\tA slightly relevant omitted citation: the masked inner token reweighting scheme might possibly owe some conceptual debt to Batch Folding from [Few Shot Learning with Localization in Realistic Settings, CVPR2019], which also models a support-to-support classification task with an identical image-masked leave-one-out scheme (though admittedly implemented quite differently). My main questions involve the two broad weaknesses outlined above. \n\n1. How should readers interpret the difference in model backbone sizes, and how does this impact the presented results?\n\n2. Can authors provide or elaborate on the missing ablations – what is the performance contribution of the vision transformer backbone vs token reweighting vs logsumexp aggregation? And less importantly, how significant was the choice of similarity metric?\n\nLess impactful questions/comments:\n\n3. Is the inner-loop token-reweighting scheme compatible with more sophisticated patch-to-patch techniques, such as CTX or FRN? Or is this (understandably) left for future work?\n\n4. Given a limited computation budget this may not be feasible, but it would be interesting to see if the results from Fig4 and Sec3.2 hold for a more standard meta-fine-tuning technique (i.e. 
a basic Prototypical setup). Authors appear to have produced such a model in at least one setting for Fig5 (“average”). What does the performance look like? Can authors elaborate or speculate on this? \n\nSome small typos:\n\n•\tPg2 line 58: extend → extent\n\n•\tPg2 line 66: class is in → class in\n\n•\tPg3 footnote: in generally → is generally\n\n•\tPg4 line 121: The in this way introduced → In this way, introduced / The introduced\n\n•\tPg4 line 158: device → devise\n\n•\tPg7 fig7 caption: SDG → SGD\n\nThe analysis of 1-shot effectiveness and discussion of smaller training datasets is insightful. The entanglement of vision transformer benefits with token reweighting benefits in presented results is not discussed. Societal impacts are not discussed, though do not extend beyond those of few-shot learning in general. ", "The paper addresses the problem of few-shot classification. The main idea is to establish semantic correspondences between the patches from the support and the query images. These correspondences are then used to reason which class a query image belongs to. In order to down-weight the impact of background patches when performing classification, the authors also introduce an online optimization strategy to determine which patches in the support images are most informative when performing few-shot classification. The method uses the Vision Transformer to encode the patches in the support and query images. In order to learn strong generic features, the Vision Transformer is trained in an unsupervised manner using the masked image modelling task. The self-supervised pre-training is shown to provide better results than the supervised counterpart. The proposed method obtains state-of-the-art results on four few-shot classification benchmarks. ## Strengths\n**S1**: The paper is well written and easy to read.\n\n**S2**: The proposed few-shot classifier using patch-wise correspondences is novel and interesting. The online optimization allows determining which regions are most crucial to perform classification and can be helpful especially in case of clutter in the support set images. The use of patch-wise correspondences allows determining the class of the query image by jointly reasoning over the support set as well as the query.\n\n**S3**: The self-supervised pre-training of the Vision Transformer makes sense, especially in the context of few-shot learning, to learn generic feature representations. \n\n**S4**: The proposed approach is shown to obtain state-of-the-art results on 4 standard benchmarks (mini ImageNet, Tiered ImageNet, CIFAR-FS, FC100).\n\n**S5**: The authors provide helpful analysis and ablation studies, showing the impact of the major contributions.\n\n\n## Weaknesses\nI do not have any major issues with the paper. Some minor issues which could be addressed:\n\n**W1**: As shown in Fig. 4, the use of self-supervised pre-training for Vision Transformer provides a significant improvement, compared to supervised training. It would be interesting to see what performance existing classifiers (e.g. learning a linear classifier, prototype) obtain when using the same backbone network. This would help evaluate the benefits of the proposed embedding similarity-based classifier.\n\n**W2**: In the sota comparison, most previous methods use ResNet-12 backbone, while the authors employ ViT-small and Swin-Tiny. 
It will be helpful if the authors include the model sizes (number of parameters) for each of these backbones for comparison.\n\n**W3**: The approach computes patch-wise correspondence between all the support and query images. I wonder if this could become computationally expensive when dealing with a large number of classes, or when each class has more samples (e.g. 100 way - 30 shot classification). A discussion on this would be beneficial. Please check the comments listed under Weaknesses. The authors discuss the limitations of their work.", "This paper adopts a self-supervised trained Vision Transformer (ViT) architecture as the feature extractor, deriving patch-level representations for few-shot classification problems. To exploit the relation across patches, the authors propose a token importance reweighting mechanism (which needs to be performed during both the training and testing stages). The experimental results show satisfactory performance in several commonly-used FSL benchmarks to verify the effectiveness of this method. The overall paper is easy to follow. The idea of utilizing the self-supervised trained ViT architecture to derive patch-level representations is interesting and seems effective for few-shot learning tasks. Since self-supervised model pre-training is agnostic to the image-level class labels, the trained model is more generalizable for downstream tasks. Also, applying ViT to extract patch-level features allows the model to produce more fine-grained information. However, I have the following concerns about this work:\n\n1. In Fig. 4, the authors show that self-supervised pre-training performs significantly better than the supervised-pre-training counterpart. However, in prior SSL literature (e.g., [16, A]), SSL pre-training only slightly outperforms supervised pre-training (sometimes or even worse than it). A proper explanation (or insight) that the proposed SSL pre-training surpasses a large margin over the supervised counterpart reported in Fig. 4 is needed.\n\n2. The t-SNE visualization in Fig. 5, only verifies that the patch-level embeddings derived from the same “instance” are clustered together, and those from different instances are separated from each other. However, this figure only explains/visualizes separation between different instances, but not the discrimination between different classes (which is much more important for FSL). It is desirable to see whether the embeddings extracted from the same “class” are gathered, while the embedding of different classes would separate far from each other.\n\n3. Since the title of this paper emphasizes the aspect of “generalization” in few-shot learning, one would expect learning strategies with results/comparisons with recent cross-domain few-shot learning works (e.g., [B]). In other words, cross-domain FSL aims to transfer the learned knowledge to the novel classes in unseen target domains (showing generalization ability).\n\n4. Although this paper applies Masked Image Modeling (MIM) as the pretext task for pre-training ViT, use of other self-supervised pre-training approaches like contrastive learning (e.g., DINO [C], MoCo v3 [A]) would be possible. It would also be good if the authors provide some insights or comparisons about the choice of the self-supervised pre-training approach. If MIM is desirable for this task, more explanation and support would be needed.\n\n5. 
Additional learning for test data (few-shot instances from novel classes) is needed for the proposed work (but not necessarily for a number of SOTAs). I'd like to see how the authors would elaborate on this issue.\n\n[16] He et al. “Masked autoencoders are scalable vision learners.” CVPR 2022\n[A] Chen et al. “An Empirical Study of Training Self-Supervised Vision Transformers.” ICCV 2021\n[B] Chen et al. “A Closer Look at Few-shot Classification.” ICLR 2019\n[C] Caron et al. “Emerging Properties in Self-Supervised Vision Transformers.” ICCV 2021\n\nPlease see my questions raised in the above weakness part. The authors did provide discussions on the limitations of the proposed work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 5 ]
[ "xed_i8L6fo", "8iDmcnOT_Vj", "awuPyh8j9s-", "-cn33EhWxD", "fZkjysigkyP", "8iDmcnOT_Vj", "KCdhzm_i_c", "v6SSYysrYfX", "6R2dlUKR48i", "A9HFNo0-M3c", "dDhOVIcpwNF", "nips_2022_p_g2nHlMus", "nips_2022_p_g2nHlMus", "nips_2022_p_g2nHlMus", "nips_2022_p_g2nHlMus" ]
nips_2022_--aQNMdJc9x
Misspecified Phase Retrieval with Generative Priors
In this paper, we study phase retrieval under model misspecification and generative priors. In particular, we aim to estimate an $n$-dimensional signal $\mathbf{x}$ from $m$ i.i.d.~realizations of the single index model $y = f(\mathbf{a}^T\mathbf{x})$, where $f$ is an unknown and possibly random nonlinear link function and $\mathbf{a} \in \mathbb{R}^n$ is a standard Gaussian vector. We make the assumption $\mathrm{Cov}[y,(\mathbf{a}^T\mathbf{x})^2] \ne 0$, which corresponds to the misspecified phase retrieval problem. In addition, the underlying signal $\mathbf{x}$ is assumed to lie in the range of an $L$-Lipschitz continuous generative model with bounded $k$-dimensional inputs. We propose a two-step approach, for which the first step plays the role of spectral initialization and the second step refines the estimated vector produced by the first step iteratively. We show that both steps enjoy a statistical rate of order $\sqrt{(k\log L)\cdot (\log m)/m}$ under suitable conditions. Experiments on image datasets are performed to demonstrate that our approach performs on par with or even significantly outperforms several competing methods.
Accept
In this paper, the authors study the standard phase retrieval problem, in the case where the signal is assumed to come from a generative model prior. In particular, they propose an algorithm that starts with a spectral method followed by an iterative approach. The authors provide two theorems giving guarantees on the performance of each step of the algorithm and illustrate how their procedure performs with respect to some previous algorithms. All reviewers judged the work of the authors positively, finding the paper clear and well organized, and noting that it honestly discusses both the advantages and the limitations of its methods and theorems. The reviewers also found the authors' answers to their questions during the rebuttal phase satisfactory.
train
[ "9EhSYHamwA6", "2YnOsXRc8mH", "K_GYClagkMr", "xVGHx7K9REz", "4PfuxtXc2Sa", "E4FeSgD-53y", "NAdEREDnZxZ", "7-74ZXBWMrc", "jgyU9ZueFcS", "qYOA7I0tli8", "wUN_QqPkOrt", "miOX_i6vhL", "7aHXDd-oStf" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are pleased that the reviewer found our answers globally satisfactory, and we thank the reviewer again for the comments. Our responses to the two points are as follows:\n\n(**Comparison with the Bayes-optimal performance**) This is a helpful suggestion. We will compare the performances of our algorithm and the AMP algorithm proposed in [3] exactly in the setting of [3].\n\n(**Different preprocessing functions**) This is also a helpful suggestion. We agree with the reviewer that by using different preprocessing functions $\\mathcal{T}(y)$, the performance of our spectral initialization method can be greatly enhanced. In an opening paragraph on preprocessing functions, we will mention that when using different $\\mathcal{T}(y)$, the condition becomes $\\mathrm{Cov}[\\mathcal{T}[f(g)],g^2] \\ne 0$ (and the matrix $\\mathbf{V}$ becomes $\\frac{1}{m}\\sum_{i=1}^m \\mathcal{T}(y_i) (\\mathbf{a}_i \\mathbf{a}_i^T -\\mathbf{I}_n)$). ", " I thank the authors for detailed answers. In general I have found them globally satisfactory, and I look forward to the changes to be implemented in the paper. On two of the points discussed:\n\n- I agree with the author's response that the work [3] tackles the Bayes-optimal performance in a more restricted class of models than the one studied in this paper, in a more specific regime. Still I think it would have been a good experiment to compare the performances of the algorithm presented here exactly in the setting of [3] (which, as you mentioned, is included in the setting considered here), to gauge -- even in a restricted setting -- how much one gains by having access to $f$. I agree nevertheless that this might require more work than what is expected in a review phase. \n\n- Coming back to the condition $\\mathrm{Cov}[f(g),g^2] \\neq 0$, since the only point in which it is necessary is to show that the spectral method matrix points in expectation in the direction of $\\mathbf{x}$, one could also mention that using different preprocessing functions $\\mathcal{T}(y)$, the condition becomes $\\mathrm{Cov}[\\mathcal{T}[f(g)],g^2] \\neq 0$. Thus it seems to me that this is no longer a restriction on the model considered, but only a mild condition on the preprocessing function chosen in the algorithmic procedure (which, on top of that, can greatly enhance the performance of the method, see my question 2). Perhaps this should also be mentioned in an opening paragraph on different preprocessing functions. ", " Thanks for your recognition of this paper and the insightful comments and suggestions. Our responses to the main concerns are as follows (the responses to your concerns about the initialization condition of Theorem 1, the novelties with respect to previous analyses, and the comparison to the Bayes-optimal error in [3] are provided in the general responses to all reviewers). All citations refer to the reference list in the submitted main document.\n\n(**About the condition $\\mathrm{Cov}[f(g),g^2] \\ne 0$**) We thank the reviewer for this insightful comment. 
We believe that the point mentioned by the reviewer corresponds to the fact that $\mathbb{E}[\mathbf{V}]= \nu \mathbf{x} \mathbf{x}^T$ (see the proof of Lemma 5), and we believe that this is the only point in which the condition $\nu := \mathrm{Cov}[f(g),g^2] \ne 0$ is necessary.\n\n(**The paper does not discuss recent literature on spectral methods for phase retrieval & The Bayes-optimal performance has been characterized using message-passing algorithms**) We thank the reviewer for pointing out these interesting papers to us and for the nice summary of the ideas of two of these papers. We agree that our paper would benefit from considering the impact of the use of different functions $T(y) \ne y$ in the spectral method and from discussing in more detail the practical influence of the structure induced by the generative prior on the reconstruction error. However, we believe that addressing the reviewer's comments is orthogonal to our main goal, which is to provide recovery guarantees for MPR under generative priors. While the reviewer’s comments have value in refining and extending the general scope of work in the broader area (which would be a major research accomplishment in itself), that appears to be better left to a dedicated piece of work, and we will cite all the papers mentioned by the reviewer in the Conclusion and Future Work section in our revised version.\n\n(**Could the authors clarify why they can assume that $\mathbf{w}^{(t)}$ has only positive coordinates?**) The assumption that $\mathbf{w}^{(t)}$ has only non-negative coordinates is mild since $\mathbf{w}^{(t)} \in \mathrm{Range}(G)$ and we can easily set the activation function of the last layer of the pre-trained neural network generative model $G$ to be a certain non-negative function such as ReLU or sigmoid during pre-training (e.g., for the pre-trained VAE model used for the MNIST dataset, the last layer has sigmoid activation). \n\n(**Can the authors comment on the possibility of extension of their work to the (more relevant for many applications) complex case?**) Thanks for the suggestion. We believe that based on the technical results in [54, 12] (which study complex Gaussian measurements), it is straightforward to extend our work to the complex case, and we will mention this in the revised version.\n\n(**Typos**) We thank the reviewer for the careful reading of our paper and we will correct all the typos in the revised version. In particular, we will modify the sentence “We may assume that the dataset contains all non-negative vectors” to “We may assume that the dataset contains only vectors whose elements are all non-negative”. In addition, $\beta_1$ is a sufficiently small positive constant that is used to absorb a certain $o(1)$ term (see Eqs. (155) and (156)). To avoid confusing the reader, we will mention that it is a sufficiently small positive constant in the statement of Theorem 2. ", "Thanks for your helpful comments and suggestions. Our responses to the main concerns are as follows (the responses about the technical novelties and the comparison with the Bayes-optimal rate derived in [3] are provided in the general responses to all reviewers).\n\n(**Intuition for Steps 1 & 2**) We thank the reviewer for the helpful suggestions. The major intuition for Step 1 is that even if we do not assume the knowledge of the link function $f$, we still have that the expectation of $\mathbf{V}$ is $\nu \mathbf{x}\mathbf{x}^T$ (see the proof of Lemma 5), for which $\mathbf{x}$ is the leading eigenvector. Since the classic power method is popular for obtaining the leading eigenvector and the underlying signal $\mathbf{x}$ is assumed to lie in the range of a generative model, we follow [47] to use a variant of the classic power method that projects the calculated vector onto the range of the generative model in each iteration. 
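\n\nFor concreteness, a schematic NumPy version of this first step is sketched below. The projection `proj_G` onto the range of the generative model is treated as a given black box here, and the names and exact normalization are illustrative choices rather than our implementation:

```python
import numpy as np

def spectral_init(A, y, proj_G, T1=30, seed=0):
    # rows of A are the a_i^T; V = (1/m) sum_i y_i (a_i a_i^T - I) = (1/m) A^T diag(y) A - mean(y) * I
    m, n = A.shape
    V = (A * y[:, None]).T @ A / m - np.mean(y) * np.eye(n)
    w = np.random.default_rng(seed).normal(size=n)
    w /= np.linalg.norm(w)
    for _ in range(T1):
        w = proj_G(V @ w)          # power iteration step, projected onto Range(G)
        w /= np.linalg.norm(w)     # keep the iterate on the unit sphere
    return w
```

The fact that $\mathbb{E}[\mathbf{V}] = \nu \mathbf{x}\mathbf{x}^T$ is what makes the iterate align with $\mathbf{x}$ despite $f$ being unknown.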
\n\nThe major intuition for step 2 is that when using $(y - \mathbb{E}[y])(\mathbf{a}^T \mathbf{x})$ to replace $y$, the MPR model can be transformed into a conventional single index model that satisfies Eq. (4), and it can be further converted into a scaled linear measurement model with unconventional noise. 
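\n\nA schematic version of this second step, under the same caveats as above (this is an illustration of the idea, not our exact update rule):

```python
import numpy as np

def refine(A, y, x0, proj_G, T2=30, mu=1.0):
    m, n = A.shape
    w = x0.copy()
    yc = y - y.mean()                            # centered responses y_i - E[y]
    for _ in range(T2):
        z = A @ w                                # current linear measurements a_i^T w
        nu_hat = np.mean(yc * z ** 2)            # plug-in estimate of nu = Cov[f(g), g^2]
        y_tilde = yc * z                         # surrogate measurements ~ nu * (a_i^T x) + noise
        grad = -A.T @ (y_tilde - nu_hat * z) / m # gradient of a least-squares surrogate
        w = proj_G(w - mu * grad)                # projected descent step onto Range(G)
        w /= np.linalg.norm(w)
    return w
```

The key point is that the scale estimate $\hat{\nu}^{(t)}$ is recomputed from the current iterate in every round, which, as noted in the general responses, matters considerably in practice.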
\n\nWe will add these intuitions into our revised version, and we will be more explicit about what information is assumed to be known to the statistician in the derivation of the algorithm.\n\n(**Q1**) A $d$-layer feedforward neural network generative model is typically $L$-Lipschitz continuous with $L = n^{\Theta(d)}$ (see [5]) and we may set $r = n^{\Theta(d)}$. Studying how $L$ changes from random initialisation throughout the training would be quite interesting, but it is beyond the scope of this work. \n\n(**Q2**) According to our experimental results, performing Step 2 always improves over Step 1, and we did not observe any example in which Step 2 after Step 1 hurts reconstruction.\n\n(**Q3**) According to the algorithm-independent lower bound established for MPR with sparse priors (which is $\Omega(\sqrt{(s\log n)/m})$, where $s$ is the number of non-zero entries of the signal; see [56, Thm. 8]) and the algorithm-independent lower bound established for generative model based principal component analysis (which is $\Omega(\sqrt{(k\log L)/m})$; see Theorem 3 in the arXiv version of [47]), the rate $\sqrt{(k\log L)\cdot (\log m)/m}$ is naturally conjectured to be near-optimal (the $\log m$ term only plays a minor role). \n\n(**Q4**) If we have access to the exact link function $f$ (and thus the knowledge of $\nu$, which is only dependent on $f$), we believe that using the correct $\nu$ (instead of using $\hat{\nu}^{(t)}$) will lead to (at least slightly) better reconstruction performance (see the general responses about technical novelties compared to [70], where we mentioned that the scale factor plays an important role). We believe that the $\log m$ term can be removed (and thus lead to a tighter bound) for the case that we have access to the exact link function $f$. \n\n(**Q5**) Thanks for the comment. We will add the experimental results for $y = |\mathbf{a}^T \mathbf{x} +\eta|$ into Sec. 5 in our revised version.", "Thanks for your recognition of this paper and the useful comments and suggestions. Our responses to the main concerns are as follows (the response to your concern about Gaussian measurements is provided in the general responses to all reviewers).\n\n(**The paper is not clearly written**) We thank the reviewer for pointing out the problems in our writing. We will correct these problems in our revised paper.\n\n(**Is there a guarantee that the gradient based method as used in the experiments in the paper will converge to the global minimum/minima?**) We conjecture that to guarantee the convergence to the global minimum/minima for gradient based methods, we need to follow the analytic framework of [30] and assume a ReLU neural network with i.i.d. zero-mean Gaussian weights and no offsets. Although beyond our scope, this is a very interesting future direction. \n\n(**Could you provide intuition on what $\mathbf{V}$ is?**) The expectation of $\mathbf{V}$ is $\nu \mathbf{x}\mathbf{x}^T$ (see the proof of Lemma 5), for which each column is a scalar multiple of $\mathbf{x}$. This motivates the use of $\mathbf{V}$ (which is regarded as an approximation of $\nu \mathbf{x}\mathbf{x}^T$) to get the initialization vector. We will add such an intuition into the revised version. \n\n(**Does the decay in reconstruction error match the theorem of $C/\sqrt{m}$ (for a fixed $k$)?**) Thanks for the question. We will add the corresponding figures (with the $x$-axis being $1/\sqrt{m}$ and the $y$-axis being the reconstruction error) into the revised paper. \n\n(**How would recovery of the MNIST images change if $f(x) = |x|$ and this information was used? (for example using the algorithms in [71] or [30])**) Due to the additional assumptions adopted in [30] to deduce powerful theoretical guarantees on a favorable optimization landscape, it is not suitable to compare directly with the algorithm proposed there. (In particular, the authors of [30] make the assumption of a ReLU neural network generative model with i.i.d. zero-mean Gaussian weights and no offsets, which is not satisfied by our pre-trained neural network models. For example, for the pre-trained VAE model used for the MNIST dataset, the weights are not i.i.d. Gaussian, the activation function of the last layer is sigmoid (not ReLU), and there are offsets.) We will compare with the algorithm proposed in [71] for the case $f(x) = |x|$ (or the noisy version). ", "Thanks for your recognition of this paper and the useful comments and suggestions. Our responses to the main concerns are as follows. \n\n(**It's unclear whether (18) is strict or mild**) When $\zeta = 1/\nu$ (in the experiments, we need to use $\hat{\nu}^{(t)}$ to approximate $\nu$), (18) reduces to $\\|\mathbf{x}^{(0)} -\mathbf{x}\\|_2 < \frac{1}{5}$ (see Remark 4). This coincides with the condition $\mathrm{dist}(\mathbf{x}^{(0)},\mathbf{x}) < \delta \\|\mathbf{x}\\|_2$ (note that in our settings, both $\mathbf{x}$ and $\mathbf{x}^{(0)}$ are unit vectors and the distance measure is $\\|\mathbf{x}^{(0)} -\mathbf{x}\\|_2$), which is commonly used in relevant works (see, e.g., [12, Eq. (3.1)], [54, Thm. 4.1], and [89, Lem. 3.1]). Such a condition will be satisfied if the spectral initialization step returns an $\mathbf{x}^{(0)}$ that is close to $\mathbf{x}$. \n\n(**How quickly does step 2 converge? Is it that Theorem 2 holds approximately in experiments using approximate projections?**) In our experiments, we found that $T_2 = 30$ steps are usually sufficient for step 2 to converge. In our revised version, we will add the figures with the number of iterations of step 2 (namely $T_2$) being the $x$-axis and the reconstruction error being the $y$-axis. From our experimental results, we observe that step 2 works well and we believe that Theorem 2 holds approximately in experiments using approximate projections.\n\n(**Is step 1 required to satisfy the initialization requirements for step 2, in practice?**) This is a very interesting question. Step 1 is at least required to satisfy the initialization requirements for step 2 in theory (see Remark 3), and it has been standard to use a spectral initialization step in follow-up works of [54] to provide recovery guarantees for phase retrieval. 
We will perform the suggested experiments to check whether step 1 is also required to satisfy the initialization requirements for step 2 in practice and add the corresponding numerical results into our revised version. ", " Thanks for your useful comments. Our responses to the main concerns are given as follows (the responses to your concerns about Gaussian measurements and strong theoretical assumptions are provided in the above general responses to all reviewers).\n\n(**Not enough motivating discussion on why one needs to study MPR**) The motivations mainly follow those discussed in reference papers [56,89]. In particular, the two major motivations are as follows: \n\n(i) The MPR model encompasses the noisy phase retrieval model as a special case in addition to various other additive and non-additive models with even link functions (see [56, Page 2]).\n\n(ii) Theoretical analysis for PR typically relies on the correct model specification that the data points are indeed generated by the correct model, and the MPR model enables theoretical analysis under statistical model misspecification (see [89, Page 2]). \n\nWe will add these motivations into our revised paper, instead of simply leaving them to reference papers.\n\n(**Comparison of the two initialization conditions**) We have briefly discussed the comparison of $\\mathbf{x}^T\\mathbf{w}^{(t_{0})} \\ge c_0$ and $\\\\|\\\\mathbf{x}-\\mathbf{w}^{(t_0)}\\\\|_2 < \\delta \\\\|\\mathbf{x}\\\\|_2$ in Remark 3. In the following, we provide a more detailed discussion: When both $\\mathbf{x}$ and $\\mathbf{w}^{(t_0)}$ are unit vectors (this is the setting of our Theorem 1), the typical initialization requirement $\\\\|\\\\mathbf{x}-\\mathbf{w}^{(t_0)}\\\\|_2 < \\delta \\\\|\\mathbf{x}\\\\|_2$ can be reduced to $2(1- \\mathbf{x}^T\\mathbf{w}^{(t_0)}) < \\delta^2$, or equivalently, $\\mathbf{x}^T\\mathbf{w}^{(t_0)} > 1- \\frac{\\delta^2}{2}$. Note that $\\delta$ is typically a small positive constant (e.g., $\\delta = \\frac{1}{6}$ in [9] and $\\delta = \\frac{1}{8}$ in [12]), and thus the typical initialization condition requires $\\mathbf{x}^T\\mathbf{w}^{(t_0)}$ to be larger than some positive constant that is close to $1$. This is stronger than the assumption in our Theorem 1, which requires $\\mathbf{x}^T\\mathbf{w}^{(t_0)} \\ge c_0$ with $c_0$ being a sufficiently small positive constant. We will add such a discussion into our revised version. \n\n(**Sample complexity comparison is unfair if $\\mathbf{x}^T\\mathbf{w}^{(t_{0})} \\ge c_0$ is used**) We agree that the sample complexity comparison is unfair if $\\mathbf{x}^T\\mathbf{w}^{(t_{0})} \\ge c_0$ is used, and we have mentioned this in Remark 1 that \"we note that such an advantage of our spectral initialization step comes at a price\".\n\n(**Additional discussion on the link function $f$**) $y$ will be sub-exponential when $f(x) = x^c + \\text{lower order terms}$ with $c \\le 2$ (since the product of two sub-Gaussian random variables is sub-exponential), and therefore the $y$ corresponding to all the measurement models presented in our paper is sub-exponential. We remark that the assumption of sub-exponential $y$ is not essential and it can be easily relaxed (in fact, this assumption is mainly used in Eqs. (30), (32), and (41) in the supplementary material). For example, when $f(x) = x^c$ with $c$ being a positive and even integer that is larger than $2$, there will be only a minor change in the order of the $\\log m$ term in the sample complexity and statistical rate. 
But for brevity, we follow [56,89] to make the assumption of sub-exponential $y$ to avoid non-essential complications. We will add these additional discussions on the link function $f$ into the revised version.", " We are very grateful to the reviewers for their helpful feedback and suggestions, and are pleased to have received a generally positive response. Our responses to the main concerns shared by multiple reviewers are given as follows. Other responses are given to each reviewer separately. All citations refer to the reference list in the submitted main document.\n\n(**Gaussian measurements**) The assumption about Gaussian measurements is standard for the theoretical analysis of phase retrieval (PR) and it is adopted in classic prior works such as [54,12], and in the papers [56,89] that study MPR under sparse priors, as well as in the papers [30,36,37,45] that study PR under generative priors. We agree that non-Gaussian measurement models such as sub-sampled Fourier measurements are more practical, and the extension to these measurement models is a very interesting future direction (we will mention this in the Conclusion and Future Work section). However, it is beyond the scope of the current work. \n\n(**The initialization condition of Theorem 1**) We follow [47] to assume the initialization condition $\\mathbf{x}^T\\mathbf{w}^{(t_{0})} \\ge c_0$, which basically assumes weak recovery of the signal (and as mentioned by Reviewer c4oX, this does not seem to be an issue in practice and we do not force such a weak correlation to exist in the numerical simulations). As far as we can tell, it appears to be the mildest initialization condition that we can assume for practical spectral initialization for PR with generative priors. The reason is as follows: Practical spectral initialization methods for sparse PR/MPR typically first estimate the support of the signal and then perform the power method on the submatrix corresponding to the estimated support (see, e.g., [54,9,38,83,89]), or relax the problem to a (convex) semidefinite program (see [56]). Unfortunately, for a generative model, both ideas no longer work since we cannot estimate a set that plays a similar role as the support, and without further assumptions, the problem cannot be relaxed to a convex optimization problem. \n\n(**Comparison with the Bayes-optimal rate/error derived in [3]**) Our results seem to be not directly comparable to those in [3] due to significant differences in the settings. More specifically, in [3]: (i) An AMP algorithm is proposed, and a neural network with i.i.d. Gaussian weights and no offsets is assumed (whereas we only impose the Lipschitz continuity assumption on the generative model. The activation function of our pre-trained neural networks is not restricted to be ReLU and there are offsets). (ii) Asymptotic analysis (not fully rigorous) is given under the high-dimensional regime with $m/n$ being fixed (whereas we provide a rigorous analysis with no restrictions on $m/n$). (iii) The noiseless PR model is focused on (whereas we study the MPR model). \n\n(**Technical novelties**) Our analysis builds on works such as [56,47,70], but we believe that these techniques are combined and extended in a novel manner, with distinct proofs. For instance: \n\n(a) In [56], the estimator is constructed via refining the solution of a semidefinite program by $\\ell_1$-regularized regression. 
In contrast, we use the projected power method for spectral initialization, and then an iterative procedure is performed to refine the initial guess. We believe that for generative models, an iterative procedure is much more practical since the corresponding optimization problem is non-convex and cannot be solved exactly, and we believe that the direct study of generative priors adds significant value to existing approaches based on convex relaxations.\n\n(b) We make use of the method proposed in [47] in our Step 1, but we believe that Theorem 1 is an original and valuable contribution relative to [47] (as mentioned by Reviewer VWWv). In particular, in the proof of Theorem 1, we need to carefully deal with the effect of statistical model misspecification. This requires proving Lemmas 4, 5, and 6, along with a more powerful concentration inequality for sub-Weibull random variables of order $\frac{1}{2}$ (in comparison, in [47], sub-exponential concentration is sufficient). \n\n(c) A projected descent algorithm has been proposed in [70] for linear measurements with generative priors. We remark that one major difference between our Step 2 and the algorithm in [70] is that we need to take the scale factor into account (for the algorithm in [70], there is no scale factor), and we observed from numerical experiments that the scale factor plays an important role. For example, if it is not varying with $t$ (e.g., fixing it as $\hat{\nu}^{(0)}$, instead of $\hat{\nu}^{(t)}$), the reconstruction performance will be significantly worse (we will present corresponding numerical results in the revised version). In addition, the authors of [70] only provide a simple analysis for noiseless linear measurements, whereas we provide a much more complicated analysis for the general MPR model.", "The authors study a single index model (SIM) for observations of the form $y = f(ax)$, with $f$ being an unknown nonlinear function, $a$ being Gaussian, and $Cov[y,(ax)^2] \neq 0$. This is also referred to as misspecified phase retrieval (MPR). The inverse problem is solved under the assumption that $x$ has a generative prior. The overall algorithm is a two-step approach which utilizes a spectral-type initialization, followed by a projected iterative descent rule using this initialization. The algorithm is validated via numerical simulations. Strengths:\n\n-Use of generative priors in the context of misspecified phase retrieval (MPR) under a new set of assumptions.\n\n-Theoretical assumptions and remarks are well presented.\n\nWeaknesses:\n\n-Not enough motivating discussion on why one needs to study misspecified phase retrieval (largely left to reference papers). \n\n-Gaussian measurements are not practical. \n\n-Strong theoretical assumptions. Requires $x^T w^{t_0} > c_0$. \n\n-How is $x^T w^{t_0} > c_0$ a weaker requirement than the closeness-of-initialization requirement $\\|x-w^{t_0}\\| < \delta \\|x\\|$ (for which spectral initialization is typically used)? \n\n-Sample complexity comparison is unfair if $x^T w^{t_0} > c_0$ is used. Given proper initialization, even prior methods incur O(k) samples for generative priors or O(s) for sparse priors. \n\nCan the condition $x^T w^{t_0} > c_0$ be experimentally validated? What does $c_0$ typically look like under finite sampling?\n\nWhat are some classes of functions $f$ that would satisfy $y = f(ax)$ being subexponential? 
The paper would benefit from additional discussion on the link function $f$.\n\nWould the theoretical arguments fall through for non-Gaussian measurements? The authors have made useful references to papers with potential overlap, and have provided discussions on the limitations of their assumptions, which is good.", "The goal is to recover signals under the misspecified phase retrieval model, where measurements are generated as $y_i = f(a_i^T x)$, $i = 1 \ldots m$, for i.i.d. Gaussian $a_i \in \mathbb{R}^n$, for $x$ generated by an $L$-Lipschitz generative model, and for an unknown link function $f$. If $f$ satisfies the criterion $\text{Cov}_{g = a^Tx}[f(g), g^2] \not = 0$, then the proposed algorithm can be used to recover $x$ at order $O((k \log L) \cdot (\log m))$ measurements, conjectured to be near-optimal. This assumption is natural for the MPR problem as it captures any nonzero correlation between the misspecified measurements and the phaseless measurements in the correctly-specified phase retrieval model. \n\nThe proposed algorithm has two steps. The first step estimates the principal component of\n\n $V = \frac{1}{m} \sum_{i=1}^m y_i (a_i a_i^T - I_n)$, for which $\mathbb{E}_{a\sim \mathcal{N}}[V] = \nu x x^T$ (a toy numerical sketch of this step is included after the review block below). The second step views misspecified phase retrieval as a conventional SIM with measurements $\tilde{y}_i = (y - \mathbb{E}[y])(a^T x)$, applying a projected gradient based method to iteratively improve the estimate from the first step. Both of these steps can be applied independently to solve MPR separately, with the first having weaker initialization assumptions. The authors find empirically that the second step improves estimates provided by the first step. Additionally, they prove that the first and second steps enjoy near-optimal statistical rates, and that the second step converges exponentially fast to an estimate below the guaranteed statistical error threshold. Strengths: \n\n- Theorems 1 and 2 are clearly stated and relevant to the empirical problem. I did not find any obvious errors while reviewing their proofs.\n- Theorem 1 seems to be a small, but original and valuable contribution relative to [45] and [47]. The authors establish in Lemma 6 a bound on the error $E = V - \nu x x^T$, thereby controlling the misspecification error for the MPR problem. Following from the results of [47], Lemma 6 implies the estimation rate for step 1, giving theoretical justification to the practical method introduced in [45].\n- To the best of my knowledge, the exponential convergence attained by Theorem 2 is original and valuable. Some results in the S-REC framework [5, 48] show that global optimizers of a certain non-convex objective achieve low estimation error, _without_ proving that this non-convex objective can be efficiently optimized. Therefore it is interesting that the gradient-based approach of step 2 has exponential convergence above the statistical error level.\n- Empirical evaluations clearly demonstrate the significant benefit of the second step of the algorithm.\n- Empirical evaluations are favorable to MPRG (proposed), which performs better than or on par with alternative methods.\n\nWeaknesses: \n\n- It’s difficult to know the practical significance of the initialization requirements for Theorems 1 and 2, particularly the latter. Regarding Theorem 1, Remark 2 is a convincing argument that the Theorem 1 requirement is mild. However, it’s unclear whether (18) is strict or mild, and whether there is any reason to believe it holds approximately in practice. 
- In experiments using approximate projections onto the range of pretrained GANs, how quickly does step 2 converge? Is it that Theorem 2 holds approximately in experiments using approximate projections?\n- What is the recovery performance when step 2 is run with initialization $\mathbf{w}^{(0)}$, the column with the largest diagonal entry in $\frac{1}{m} \sum_{i=1}^m y_i a_i a_i^T$? In other words, what is the performance of the combination step 1+2 relative to the performance of step 2 only? Is step 1 required to satisfy the initialization requirements for step 2, in practice? This question is related to my concern about the strictness/weakness of Theorem 2 assumptions. - The method requires exact projection onto the range of a generator function. However, this seems to be a standard assumption in the literature, for which approximate methods work well.\n- It is unclear whether the recovery condition in Theorem 2 is strict or mild. However, step 2 is clearly beneficial empirically, whether or not Theorem 2 can be satisfied.", "The paper considers the problem of recovering a signal $x \in \mathbb{R}^n$ from $m$ measurements corresponding to a single index model of the form $y_i = f(a_i^\top x)$. Here, $a_i\in \mathbb{R}^n$ are sensing vectors and $f:\mathbb{R} \rightarrow \mathbb{R}$ is an unknown (possibly random) non-linear function. The single index model encompasses the classical phase retrieval problem $y_i = |a_i^\top x| +\eta$ (among others), where $\eta$ is noise. Under the assumptions on $f$ that $(a^\top x)^2$ and $y$ have non-null covariance, that $x$ is in the range of an $L$-Lipschitz generative model $G:\mathbb{R}^k \rightarrow \mathbb{R}^n$, and suitable randomness on $a_i$ and $y_i$, the paper studies a two-stage algorithm for recovering $x$ given $(y_i )_{i=1}^m$ and $G$: the first step provides an initial estimate of $x$ using spectral methods and the second step iteratively refines this estimate by solving a related linear recovery problem using a descent-type method and projecting onto the range of the generator at each iteration. Strengths:\n- The paper studies an interesting problem.\n- The paper builds on existing work on the projected power method to obtain an estimate and introduces a new scheme to refine the estimate.\n- The paper provides a statistically optimal rate of recovery on the order of $\sqrt{k/m}$ with near-optimal sample complexity of $O(k\log(L)\log(m))$.\n\nWeakness:\n\n- The paper is not clearly written. For example:\n\t- In the abstract, the notation Cov[.,.] should be explained before it is used.\n\t- In line 22, the \"For example\" line provides examples of works that study methods for solving the phase retrieval problem, while the previous sentence refers to applications of phase retrieval instead. \n\t- In line 31, the authors state that the norm of $x$ is absorbed in SIM without an explanation (maybe stating that the problem is only well-defined up to that norm scaling would be beneficial to the readers).\n\t- The paper starts by associating Compressed Sensing unambiguously with the case where the underlying signal is sparse. However, this is misleading as the prior on the signal can be arbitrary but low dimensional and the signal recovery problem can still be referred to as compressed sensing.\n\n- The paper considers the case where the measurement matrix is random Gaussian. 
A discussion on the viability (or lack thereof) of the proposed algorithm for more realistic measurement models such as sub-sampled Fourier would be useful for the readers.\n\n- The paper provides limited discussion on the projection onto the range of the generative model under the conditions required for Theorem 2. That is, under conditions like an $L$-Lipschitz generative model, is there a guarantee that the gradient-based method used in the experiments in the paper will converge to the global minimum/minima? \n - Algorithm 1 part 1 uses the matrix V to get the initialization for part 2. Could you provide an intuition on what V is (or what the action of V on an arbitrary vector gives)?\n\n- Figure 1a shows the denoising property of your algorithm. Does the decay in reconstruction error match the theoretical rate of $C/\sqrt{m}$ (for fixed $k$)?\n\n- The paper focuses on the case where the exact form of $f$ is not known. This is true for the numerical experiments presented as well. How would recovery of the MNIST images change if $f(x) = |x|$ and this information was used? (for example using the algorithms in [71] or [30]). Comparison to these methods would be interesting.\n\n\n The paper adequately addresses its limitations. \n", "This work proposes a two-step algorithm for solving the misspecified phase retrieval problem. The algorithm is constructed based on two key assumptions: a) the $m$ (possibly noisy) observations $y_{i}$ are generated by a single-index model $y_{i} = f(a_{i}^{\top}x)$ (a.k.a. generalised linear model) with $a\sim\mathcal{N}(0,I_{n})$, but *crucially* the statistician does not have access to the link function $f$ - she only knows that the observations $y$ correlate with $(a^{\top}x)^2$; b) the signal $x$ is drawn from a *known* generative prior, i.e. $x=G(z)$ where $G:B(r)\to\mathcal{S}^{n-1}$ is $L$-Lipschitz and $z\in B(r)\subset\mathbb{R}^{k}$ is a latent representation ($B(r)$ is the ball of radius $r$ and $\mathcal{S}^{n-1}$ the unit sphere). The algorithm consists of a spectral initialisation-type step, based on a projected power method, plus a descent-like step.\n\nThe main theoretical contributions are:\n\n1. A bound on the reconstruction performance of the first step, stating that for sufficient data $m = \Omega(k\log(nLr))$, the first step achieves near-optimal reconstruction performance $O(\sqrt{k\log(nLr)/m})$ (up to a log factor in $m$) with probability $1-O(1/m)$ (Theorem 1).\n2. A bound on the reconstruction performance showing that for a suitable learning rate and warm initialisation (the reason why step 1 is required), the second step achieves near-optimal reconstruction with probability $1-O(1/m)$ for sufficient data $m = \Omega(k\log(nLr))$ (Theorem 2).\n\nNumerical simulations with real datasets, different link functions and trained generative models are used to illustrate the performance of the proposed algorithm, and to compare it with other methods in the literature. Phase retrieval is an important problem naturally arising in different signal processing tasks from science and engineering [SEC+]. Although it is not very different from linear reconstruction problems from the point of view of information theory (both require $m\approx n$ samples for statistically reconstructing a Gaussian signal [MLK+]), it is a notably harder problem computationally. 
Therefore, designing algorithms that exploit structure in the signal for efficient reconstruction is a significant endeavour.\n\nWhile many works have approached this problem by deriving algorithms for sparse signals (c.f. [SEC+] for a review), a more recent line of work has investigated the setting where the signal is drawn from a generative model, typically parametrised by a deep neural network (e.g. VAEs, GANs, etc.) [30], with promising computational advantages. This paper builds on this line, with the main difference with respect to the literature being the \"misspecified\" setting, i.e. the algorithm proposed does not rely on the knowledge of a particular link function for the observation likelihood.\n\n**Strengths**:\nThe contribution is timely and well-placed within the literature. As discussed above, phase retrieval is a challenging computational problem relevant to different fields. Therefore the design of efficient algorithms that exploit structure with theoretical guarantees is a significant contribution to this line of work.\n\n**Weaknesses**:\nThis work heavily builds on previous contributions - the algorithm proposed is a combination of methods from the literature (projected power method [45, 47] and projected descent [70]). While I don't think this is a problem per se, this makes the presentation hard to parse for a reader who is not familiar with this literature. For instance, the authors could better explain where the proposed algorithm comes from, and provide some intuition for Steps 1 & 2. The authors could also be more explicit about what information is assumed to be known to the statistician in the derivation of the algorithm (the term \"misspecified\" is not very precise, and can mean different things in statistics).\n\n**References** (numbered refs. are from the bibliography in the paper)\n\n[SEC+] Y Shechtman, YC Eldar, O Cohen, HN Chapman, J Miao, M Segev, *Phase Retrieval with Application to Optical Imaging: A contemporary overview*, in IEEE Signal Processing Magazine, vol. 32, no. 3, pp. 87-109, May 2015.\n\n[MLK+] A Maillard, B Loureiro, F Krzakala, L Zdeborová, *Phase retrieval in high dimensions: Statistical and computational phase transitions*, NeurIPS 2020. -**[Q1]**: The information about the generative model only appears in the bounds eqs. (17) and (18) through the Lipschitz constant $L$ and the radius $r$ (in particular their product $Lr$). Can the authors provide some intuition on how these quantities are connected with the expressiveness of the prior? For instance, for a fixed generative model architecture, how does $L$ change from random initialisation throughout training? Can this result be used to help us choose a specific prior for a given task?\n\n-**[Q2]**: The bounds in Step 1 eq. (17) and Step 2 eq. (20) scale in the same way with the quantities $(L, r, m, n, k)$ involved in the problem. Is it clear that performing Step 2 always improves over Step 1 (i.e. $\beta_{2}>\beta_{1}$)? If not, did the authors observe any example in which Step 2 after Step 1 hurts reconstruction?\n\n-**[Q3]**: In which sense is the rate $\sqrt{(k\log{L})(\log{m})/m}$ near-optimal? For a given generative prior, say a fully connected random network, how does this compare with the Bayes-optimal rate derived in [3]?\n\n-**[Q4]**: How would the algorithm perform in the matched case, where the statistician would have access to the exact link function $f$ generating the observations $y$ (i.e. we could let $\hat{\nu}^{t}\to \nu$)? 
Can a tighter bound be derived for this case? Would it consistently beat APPGD?\n\n-**[Q5]**: It would be interesting to also have an experiment with the link $y=|a^{\top}x + \eta|$ in Sec. 5, which is relevant to some experimental settings where $y\geq 0$. Some limitations of this work are discussed, e.g. the fact that the theoretical guarantees assume exact knowledge of the projection $\mathcal{P}_{G}$, which in practice needs to be approximated, and the choice of step size $\zeta$ for Step 2, which in practice also needs to be estimated by $1/\hat{\nu}^{t}$.", "This work considers the phase retrieval problem. In this model, the statistician is given \n$m$ i.i.d. observations of $y = f(\mathbf{a}^T \mathbf{x})$, in which $\mathbf{a} \sim \mathcal{N}(0,\mathrm{I}_n)$, and $\mathbf{x} \in \mathbb{R}^n$ (the vector to recover) is assumed to be generated using a generative \nprior $G : \mathbb{R}^k \to \mathbb{R}^n$ (typically with $k \ll n$).\nCrucially, she does not know the function $f$, a setting which the authors call misspecified phase retrieval.\nUnder the following fairly general condition on the function $f$: $\mathrm{Cov}[f(g), g^2] \neq 0$,\nfor $g \sim \mathcal{N}(0,1)$, the authors develop in Section 3 an algorithm that is agnostic to the function $f$, and that is made of two steps: \n- First, a general spectral method on a well-chosen matrix, mixed with a projection on the range of the generative model.\n- Secondly, a simple iterative algorithm approximating procedures used in other single-index models, again completed with a projection step.\n\nIn Section 4, the authors provide two theorems giving guarantees on the performance of each step of the algorithm described above, \nand argue that under certain conditions they reach near-optimal performance.\n\nFinally, the authors provide numerical experiments in Section 5 to illustrate how their procedure performs with respect to some previous algorithms.\n\nPlease note that given the length and available time, I did not check in full detail the proofs given in the supplementary material, and only read them superficially.\nI also read Section D of the supplementary material on additional numerical experiments. However, I did not look at the provided code. - I found the paper overall very well organized, as well as well-written and pleasant to read, and I thank the authors for that.\nIn general, the results are quite clearly stated and discussed.\nMoreover, the limitations of the results are not hidden and well discussed, e.g. the projection step in the algorithm that needs to be approximated, \nthe Lipschitz condition on the generative model, the ``informative initialization'' assumption required in Theorem 1, or the additional $\log m$ in the rates obtained with respect to the optimal ones.\n\n- The numerical results are well-presented and convincing. While the improvements over APPGD or the pure spectral method are small, \nthey seem to be significant.\n\n- On the other hand, the initialization condition of Theorem 1 seems quite limiting: it basically assumes weak recovery of the signal, while usually spectral methods are used to obtain such a weakly-correlated estimator. The theorem only covers how the method improves from weak to strong correlations. While this does not seem to be an issue in practice (if I understood correctly, the authors do not force such a weak correlation to exist in the numerical simulations), it limits the scope of Theorem 1. 
\n\n- Another criticism I have is that, from a non-specialist viewpoint, the results seem somehow incremental with respect to [1]: while I was not familiar with this paper before, a rapid read suggests that perhaps the main addition of the present paper is to consider generative models, which in the end only leads to minor changes in the algorithm with respect to Algorithm 1 of [1]. Moreover, the analysis of the spectral method relies on results of [2], and is quite classical in my eyes. Perhaps the authors should discuss the novelties of this work with respect to such previous analyses in more detail (e.g. by an exploration of the gains offered in practice by generative priors, see also my question below).\n\n[1] Neykov, M., Wang, Z., \\& Liu, H. (2020). Agnostic estimation for misspecified phase retrieval models. The Journal of Machine Learning Research, 21(1), 4769-4807.\n\n[2] Liu, Z., Liu, J., Ghosh, S., Han, J., \\& Scarlett, J. (2022). Generative principal component analysis. arXiv preprint arXiv:2203.09693. 1. From my understanding, the condition $\mathrm{Cov}[f(g), g^2] \neq 0$ is crucial for the spectral method, since it ensures that for $m$ large enough, an isolated eigenvalue \n(with eigenvector in the direction of $\mathbf{x}$) pops out in the spectrum of $\mathbf{V}$. Is this the only point in which this condition is necessary? \nThis point should be discussed in relation to another question I have on the spectral method used, see below.\n\n2. The paper does not discuss the recent literature on spectral methods for phase retrieval,\ncf. e.g. [3-6] for studies in the case of Gaussian vectors $\mathbf{a}_i$.\nIn particular, [4,6] show that, in the case in which $f(g)$ is known and $\mathbf{x}$ does not come from a generative prior, \nthe optimal spectral method is as in $(10)$, in which $y_i$ is replaced by a function $T^\star(y_i)$, with e.g. $T^\star(y) = 1 - y^{-1}$ for noiseless magnitude observations. Moreover, \nin this case, recovery can be achieved for $m = \Theta(n)$, rather than $m = \Omega(n \log^2 n)$ given by Theorem 1 here.\nWhile this setting has significant differences from the one considered here (in particular $T^\star(y)$ is not accessible), maybe the authors should also consider the impact of the use of different functions $T(y) \neq y$ in the spectral method.\nThis might allow interesting comparisons with the literature mentioned, potentially drastically improve the performance of the method, and maybe relax the main condition assumed on $f(g)$ (see my point 1)?\n\n[3] Mondelli, M., \\& Montanari, A. (2018, July). Fundamental limits of weak recovery with applications to phase retrieval. In Conference On Learning Theory (pp. 1445-1450). PMLR.\n\n[4] Luo, W., Alghamdi, W., \\& Lu, Y. M. (2019). Optimal spectral initialization for signal recovery with applications to phase retrieval. IEEE Transactions on Signal Processing, 67(9), 2347-2356.\n\n[5] Lu, Y. M., \\& Li, G. (2020). Phase transitions of spectral initialization for high-dimensional non-convex estimation. Information and Inference: A Journal of the IMA, 9(3), 507-541.\n\n[6] Maillard, A., Krzakala, F., Lu, Y. M., \\& Zdeborová, L. (2022, April). Construction of optimal spectral methods in phase retrieval. In Mathematical and Scientific Machine Learning (pp. 693-720). PMLR.\n\n3. 
If the statistician has access to $f$, the Bayes-optimal performance has been characterized in detail using message-passing algorithms\nin [7-9] (in particular [9] tackles the case of generative models, and is mentioned in the paper).\nDo the authors know how this Bayes-optimal error compares to the ones obtained in Figures 2 and 4? \nIt would be interesting to compute it, to gauge the influence of two effects: ($i$) the gain offered by the knowledge of $f$, and $(ii)$ the gain offered by the use of the generative model (e.g. by comparing the Bayes-optimal performance without a generative prior to the one reached by MPRG -- which is agnostic to $f$ and uses generative priors). In general, I believe the paper would benefit from a more detailed discussion of the practical influence of the structure induced by the generative prior on the reconstruction error.\n\n[7] Barbier, J., Krzakala, F., Macris, N., Miolane, L., \\& Zdeborová, L. (2019). Optimal errors and phase transitions in high-dimensional generalized linear models. Proceedings of the National Academy of Sciences, 116(12), 5451-5460.\n\n[8] Maillard, A., Loureiro, B., Krzakala, F., \\& Zdeborová, L. (2020). Phase retrieval in high dimensions: Statistical and computational phase transitions. Advances in Neural Information Processing Systems, 33, 11071-11082.\n\n[9] Aubin, B., Loureiro, B., Baker, A., Krzakala, F., \\& Zdeborová, L. (2020, August). Exact asymptotics for phase retrieval and compressed sensing with random generative priors. In Mathematical and Scientific Machine Learning (pp. 55-73). PMLR.\n\n4. In Remark 2: while I see that one might assume all coordinates of $\mathbf{x}$ to be positive, \ncould the authors clarify why they can assume that $\mathbf{w}^t$ has only positive coordinates? \n\n5. This paper only considers real phase retrieval. Can the authors comment on the possibility of extending their work to the (more relevant for many applications) complex case? It is likely that the results extend quite straightforwardly, which would be interesting to mention.\n\n6. Finally, I found the following typos: \n- Line 58: ``seminar work''\n- Line 73: ``an SIM''\n- Line 79: ``an $l_1$ regularized least square''\n- Line 219: ``all non-negative vectors''?\n- In Theorem 2, there is no mention of $\beta_1$ in both eqs. (19) and (20), some $\beta_2$ must surely be changed to a $\beta_1$ (at the end of (19) and in (20) ?)\n- Line 698 (supplementary): ``there exists such a net with the cardinality satisfies'' Nothing to signal on societal impacts. On limitations of this work and how the authors address them, see the previous points." ]
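To make the two-step pipeline discussed in the reviews above concrete, the following is a minimal numerical sketch of the Step 1 spectral initialization on synthetic data. It is purely illustrative and not the authors' implementation: the magnitude link f(g) = |g| + noise, the problem sizes, and, crucially, the replacement of the projection onto the range of a generative model by a plain normalization (there is no generator in this toy) are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 2000                       # ambient dimension, number of measurements

# Unit-norm ground truth and i.i.d. Gaussian sensing vectors.
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
A = rng.standard_normal((m, n))

# Misspecified link: the estimator only ever sees y, never f. Here
# f(g) = |g| + noise, which satisfies Cov[f(g), g^2] != 0 for g ~ N(0, 1).
y = np.abs(A @ x) + 0.1 * rng.standard_normal(m)

# Step 1: power iteration on V = (1/m) sum_i y_i (a_i a_i^T - I_n),
# whose expectation is nu * x x^T for some nu != 0.
V = (A.T * y) @ A / m - y.mean() * np.eye(n)
w = rng.standard_normal(n)
w /= np.linalg.norm(w)
for _ in range(50):
    w = V @ w
    w /= np.linalg.norm(w)             # stand-in for the projection onto range(G)

print("|cos(w, x)| =", abs(w @ x))     # approaches 1 as m/n grows
```

In the papers under discussion, each normalization step would instead be the projection $\mathcal{P}_G$ onto the range of the generator, which in practice is itself approximated by gradient descent in the latent space; that approximation is exactly what several of the questions above probe.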
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3, 3 ]
[ "2YnOsXRc8mH", "K_GYClagkMr", "7aHXDd-oStf", "miOX_i6vhL", "wUN_QqPkOrt", "qYOA7I0tli8", "jgyU9ZueFcS", "nips_2022_--aQNMdJc9x", "nips_2022_--aQNMdJc9x", "nips_2022_--aQNMdJc9x", "nips_2022_--aQNMdJc9x", "nips_2022_--aQNMdJc9x", "nips_2022_--aQNMdJc9x" ]
nips_2022_V_4BQGbcwFB
Positively Weighted Kernel Quadrature via Subsampling
We study kernel quadrature rules with convex weights. Our approach combines the spectral properties of the kernel with recombination results about point measures. This results in effective algorithms that construct convex quadrature rules using only access to i.i.d. samples from the underlying measure and evaluation of the kernel, and that achieve a small worst-case error. In addition to our theoretical results and the benefits resulting from convex weights, our experiments indicate that this construction can compete with the optimal bounds in well-known examples.
Accept
We thank the authors and reviewers for their work throughout the reviewing process. The paper generated detailed and interesting discussions. While some minor concerns remain, we are confident that the paper brings new elements and will generate exciting discussions in the kernel quadrature community, and we are happy to recommend acceptance. We trust the authors to use all information in the discussion threads to polish the camera-ready version of the paper.
train
[ "YubQrGn7tqp", "nIYRJNJ6eR4", "9PXbD-OCEpg", "_0xSQ3bowK", "z-m1FoRRkqT", "vz6fs5Sq24c", "AUd1FDFU8n", "nqQZ6Hk57OV", "Uct1aPwY-Hx", "Q1zT_ndtZy5", "CKquFpF2P-O", "lXO6x6hAijt", "uHM8oC9Eb_m", "Eaxwvp3MNXx" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I will keep my original score.", " Thank you to all the reviewers for constructive comments and suggestions. Although we have already replied to each reviewer, we here summarize our primary updates of the revised manuscript in two parts:\n\n- *Contribution and Limitation*: We have added two paragraphs after the paragraph starting from *Contribution.*\nThe first paragraph is about spectral decay, as it seemed to be better to provide more intuition on $\\sigma_n$ or $r_n$, which are essential in kernel quadrature but not readily available from intuition. See also Section B.3 for a more general derivation of factorial decay in the case of Gaussian kernels. The second paragraph is clarifying the *Limitation* of our proposed methods, as we have a strong theory for Mercer-based methods which is not generally available (this limitation is also shared with other theory-oriented papers based on spectral properties), while our theoretical bound for Nystrom-based methods is not computationally competitive though are algorithmically practical and empirically perform very well.\n\n- *Adding a strong baseline KT++*: Thanks to Reviewer 8cVE, we became aware of the competitive method called kernel thinning (https://arxiv.org/abs/2105.05842), and we added its recent variant (https://arxiv.org/abs/2111.07941) in our experiments. Interestingly, while KT++ or its '+ opt' outperforms most methods including ours in the case where spectral decay is moderate (= comparable to $1/n^2$), our methods becomes faster when there is a strong spectral decay (higher order Sobolev, or Gaussian RKHS). It empirically supports our explicit use of spectral decay via the Mercer/Nystrom approximation.", " I thank the authors for a speedy turnaround and a satisfactory response to my second set of comments.\n\nI will note one final comment about your response for 4: My comment was indeed about such a triangle inequality analysis---the gains (over the rate 1/n, e.g., over KT), that we see in Figure 2 would disappear once you use a test set since the error to the input points by your algorithm is likely way smaller than the error that will be introduced due to the test set of the same size. [I will assume it would be done in another revision as stated by the authors.]\n\nOnce again, thank you for writing an interesting paper, and providing timely responses to my comments. I have increased my score accordingly. ", " Thank you for additional comments.\n\n> 1. I recommend the authors clarify these limitations in their contributions---e.g., after l27.\n\nThanks, we explicitly added the paragraph *Limitation* after the *Contribution* to clarify this point.\nWe deferred the description of \"intuition behind recombination\" to Section B.6 accordingly due to the limitation of space.\n\n> 2. Can the authors provide examples when their bound of $n\\sigma_n + r_n$ would decay for distributions beyond the unit cube (except Gaussian kernel Gaussian distribution case)?\n\nThey are not readily available for general cases, but we can also say that our Nystrom approximation can make use of this implicit spectral decay. 
That said, we can say more about Gaussian kernels (than simply the case of a Gaussian distribution).\nIn Section B.3 (in the re-revised manuscript), we added a derivation of the factorial decay of eigenvalues for Gaussian kernels with a compactly supported distribution on $\mathbb{R}$ (this can extend to multivariate cases, and to sub-Gaussian distributions according to Bach [2, at the top of page 9], though we have not fully followed the proofs).\n\n> 3. How does your work relate to this paper which analyzes the quality of Caratheodory coresets? https://arxiv.org/abs/2011.04907\n\nWe were not aware of the paper, but it actually performs recombination for Fourier features (instead of the Mercer or Nystrom features in our paper). The problem setting and proof techniques seem to be very different (function approximation in $L^2$ space, information-theoretic bounds, and covering entropy based on given smoothness and compactness, etc.), but their naive use of Caratheodory's theorem (their Theorem 3.1.1) can be algorithmically greatly accelerated by using efficient recombination algorithms.\nWe additionally mentioned it in the *Related Literature* section, thanks!\n\n> 4. In Figure 2, to mimic the real-world settings, and the quantities that you are interested in, it would be better to plot the error results using a test set to construct a P.\n\nUnlike the generalization analysis in usual machine learning tasks, we here simply have the triangle inequality regarding the MMD distance (also mentioned in Remark 1 of the KT paper, https://arxiv.org/abs/2105.05842), so we can predict the outcome quite well based on the plotted data and the size of the empirical data we are using (*not* the magnitude of $N$ for recombination). But thank you for your suggestion, and we will add comments or experiments on this point in another revision to further improve the clarity of the paper.", " Thank you for your response, and for comparing with the additional baselines. \n\nI have some additional comments (I recently became aware of the ref. in point 3 that seems very relevant for this work):\n\n1. This remains my main concern, and is related to my previous comment #1. The practical strategy proposed in this paper is Nystrom + Empirical, for which the theoretical results offer no improvement over prior work. The strategies that offer theoretical improvement are not practical. I recommend the authors clarify these limitations in their contributions---e.g., after l27. This is an important point because there are numerous methods (including some baselines in this paper) that provide an improvement over Monte Carlo empirically while, theoretically, the known bounds are no better than Monte Carlo. In fact, the term $n/\sqrt{\ell}$ for the practical strategy makes the theoretical result nowhere close to even being competitive with any other method--e.g., in several settings, the bound will be decaying as $n\sigma_n$ when the runtime is _exponential in $n$_. This point needs to be highlighted, and the primary contribution needs to be worded accordingly. \n\n2. Can the authors provide examples when their bound of $n \sigma_n + r_{n+1}$ would decay for distributions beyond the unit cube (except the Gaussian kernel Gaussian distribution case)? [That is, would their results for the Nystrom strategy provide any useful bound besides the simple examples?]\n\n3. How does your work relate to this paper which analyzes the quality of Caratheodory coresets? https://arxiv.org/abs/2011.04907\n\n\n4. 
In Figure 2, to mimic the real-world settings, and the quantities that you are interested in, it would be better to plot the error results using a test set to construct a P. That is, the error is NOT measured with respect to the points that were compressed, but rather a fresh set of points. This would mean using another set of 43487 points in Fig 2(a) [since you have more data available], doing a data split for Fig 2(b) [since you used all the data], and using one-half of the data for compression, and the other half for measuring the integration error. Such a comparison would provide more trust in your experiments.\n", " Thanks for your time spent on our paper. We hope to convince you to reconsider your evaluation based on our answers and revision of the paper. \n\n> elaborate more on the need of convex quadrature\n\nWe are happy to do so, but it would be helpful if you could describe which part of the three motivations we give in the introduction, or of the details in Appendix B.3, is unclear to you. Further, we emphasize that the advantages of convex weights are well-known in the kernel quadrature literature; see the reference Bach [2].\n\n> \"choice of recombination algorithm affects the performance\"?\n\nThe choice among the algorithms itemized in our Remark 1 does not affect our theoretical guarantee regarding the convergence rate, but it affects computational complexity; some have better worst-case complexity bounds but in practice are outperformed by other methods. It is also possible that some sort of randomization can affect practical performance (like symmetrization in kernel thinning yields unbiasedness), and it is an interesting but challenging question requiring further study. Any progress on the classical recombination problem has the potential to further improve the complexity of our proposed method.", " > to elaborate the need for the convex quadrature in a more comprehensible way\n\nWe give three different motivations for convex weights in Section 1 and Appendix B.3. We are also not the first to draw attention to the advantages of convex weights; see e.g. the cited reference Bach [2, Section 3.1].\n\n> supplementary material provides substantial theoretical support for the proposed method; unfortunately, I was not able to check it in detail.\n\nThat is of course fine, but one of your criticisms (addressed below) is that convex weights do not play a major role in the analysis. If you look at the proofs in the appendix you will see that convexity is essential; many of the results are trivially wrong for non-convex weights.\n\n> Experimental Section lacks explicit comparison with the other practical methods\n\nWe compare against the Monte Carlo baseline, Quasi-Monte Carlo (Halton), and Bayes quadrature; further, we added kernel thinning in the revised version. As we emphasize in the main text, Figure 1 is directly motivated by Belhadji et al. [6, Figure 1], which includes other methods such as DPPs and herding. In particular, since none of these come close to the best performance, we don't think there's much point in overcrowding the plot.\n\n> argues that convex quadrature is essential, but later fails to confirm this empirically, and it seems positive weights do not play a major role in the theoretical analysis either.\n\nTo clarify, we never wrote that convex weights are essential, but that for many natural situations convex weights are advantageous (a toy numerical illustration of the convex '+ opt' step is given right below). 
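To make the role of convex weights concrete, here is a small self-contained sketch of what a convex quadrature rule and its '+ opt' weight optimization amount to computationally. It is only an illustration, not the paper's solver: the Gaussian kernel with a standard Gaussian base measure is chosen because its kernel mean embedding has a closed form, and the step size, iteration count, and textbook simplex projection are our own arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
x = rng.standard_normal(n)                  # i.i.d. candidate nodes from mu = N(0, 1)

# k(s, t) = exp(-(s - t)^2 / 2); embeddings of mu = N(0, 1) in closed form.
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 2)
z = np.exp(-(x**2) / 4) / np.sqrt(2)        # z_i = int k(x_i, t) dmu(t)
c = 1 / np.sqrt(3)                          # int int k(s, t) dmu(s) dmu(t)

def wce2(w):                                # squared worst-case error on the RKHS ball
    return w @ K @ w - 2 * w @ z + c

def project_simplex(v):                     # Euclidean projection onto {w >= 0, sum w = 1}
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1)[0][-1]
    return np.maximum(v - (css[rho] - 1) / (rho + 1), 0)

w = np.full(n, 1 / n)                       # start from Monte Carlo weights
for _ in range(5000):                       # the "+ opt" step: projected gradient descent
    w = project_simplex(w - 0.01 * 2 * (K @ w - z))

print("wce^2, uniform weights:  ", wce2(np.full(n, 1 / n)))
print("wce^2, simplex-optimized:", wce2(w))   # never worse, usually much smaller
```

Keeping the weights on the simplex is what makes the rule a probability measure; dropping the projection recovers the unconstrained '+ opt' over $\mathbb{R}^n$ mentioned above.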
Regarding the experiments: these show something stronger, namely that even in situations where convex weights are a priori not required (e.g. no RKHS misspecification, no iteration of quadrature, etc.), the proposed methods often outperform others that are not limited to convex weights on standard benchmarks. Regarding the theoretical analysis: no, positive weights are essential for choosing points in the proofs that make up the main part of the appendix.\n\nOnce we have chosen points for use, restricting to convex quadrature can only limit the performance in terms of WCE, as general quadrature (without any constraint on the weights) simply includes convex quadrature and so is never \"beaten\" by it. However, by limiting ourselves to convex quadrature during the algorithms, where we are still choosing a \"sparse\" set of points for use, we can benefit greatly, since convex rules are probability measures and admit efficient algorithms such as recombination.\n\n> consequences of treating a less general quadrature problem without weighting function?\n\nWe would say that they are slightly different problems, as classical \"quadrature\" concerns the approximation of measures (i.e. a set of points with *specified weights*) and not weighted integration for each weight function, where we have to determine the weights each time. Indeed, \"kernel quadrature\" has also been used to mean this sort of integration rule without weights (e.g., Fuselier et al. [2014; https://link.springer.com/article/10.1007/s00211-013-0581-1 ], or basically most of the papers other than Bach [2] or the DPP-based kernel quadratures [5, 6, 7]). So the term \"kernel quadrature\" as used by Bach [2] or Belhadji et al. [6] is more like choosing \"interpolation nodes\" (indeed Belhadji et al. [7] use the term \"kernel interpolation\" for the same problem), although \"kernel interpolation\" also has a slightly different meaning [Wilson and Nickisch, ICML 2015, http://proceedings.mlr.press/v37/wilson15.html ].\n\nOne major difference (relevant to our specific discussion here) is that in the latter case, where we only choose points (quadrature nodes), there seems to be no point in thinking about \"convex quadrature\" or \"approximation by a probability measure\" with regard to the weights, as we have to change the weights according to the weight functions. So from the viewpoint of \"kernel quadrature with a weighting function\", our method would look strange, though we want to emphasize that using discrete probability measures to approximate probability measures is itself quite common in the literature on numerical integration.\n\nAlso, for quadrature with a weighting function, we additionally have to assume that we know the exact integral of, e.g., (weighting function) * k(., x), which is not a practical assumption. So the latter problem is good for a theoretical assessment of whether the set of points captures the distribution well, but it does not generally lead to practical algorithms for the whole problem of \"kernel quadrature with a weighting function\". Our (unweighted) problem setting is less general, but it also takes the practicality of the algorithms into account. So we may say this \"less general quadrature problem\" is a practical restriction.", " Thanks for the specific questions and references. We have taken them into account as follows:\n\n> 1. Nystrom strategies (...) 
bound can provide a better than Monte Carlo rate\n\nRegarding the appearance of the $\sqrt{\ell}$ term, you are completely correct. This term doesn't appear in the experiments and should be addressed by further theoretical developments. Nevertheless, as we have added in the conclusion section, we believe our method is meaningful as the first generally applicable algorithm with a bound based on spectral decay. \nFor the eigenvalue decay in general, we have expanded on this in the contribution section. In short, whenever the eigenvalues $\sigma_n$, resp. the tail sum of eigenvalues $r_n$, decay quickly, this gives a faster rate than MC. This is known for classic examples, and our experiments (in particular Section 3.2) show that this seems to be fairly robust. Such decay conditions implicitly underpin many methods such as Nystrom that have found widespread use in applications, despite the fact that a characterisation of (kernel, measure) pairs that exhibit such a spectral decay seems currently out of reach.\n\nRelated to this: yes, KT++ has better guarantees, but in the experiments with strong spectral decay (Sobolev with higher order, or Gaussian kernels) it leads to worse performance than, for example, our proposed method. We believe this is due to the fact that our approach directly bets on/exploits the spectral decay. \n\n> 2. missing relevant references on recent new developments on Kernel thinning (KT)\n\nYes, thanks for these references! We have added KT both in Table 1 and have redone the experiments with KT, see Figure 1. \n\n> 3. useful reference for some other works like P-greedy, black box importance sampling, etc.\n\nThanks! We have added these.\n\n> 4. For experiments, I believe KT should be added (...) herding output should be combined with 'opt' to provide a fairer comparison\n\nThanks, we had missed both and have now added them in the revision.\n\n> 5. algorithms from Remark 1 (...) high level ideas\n\nThanks, we have added another paragraph in Remark 1.\n\n> 6. What is the probability in Thm 3 / Corr 4 taken over (is it the draw of Z)?\n\nYes, it is taken over the draws of $Z$.", " Thanks for the feedback. We have addressed your comments as follows in the revision:\n\n> Maybe I am missing something, but I think the bound in Theorem 1 does not advocate for faster rates than Monte Carlo.\n\nGiven a measure from which we can produce a large number $N$ of samples, our task is to construct a measure with support $n\ll N$ that integrates a class of functions to approximately the same values. In terms of the asymptotics $N \to \infty$, this can't do better than $1/N$. This has nothing to do with our specific method, but applies to any quadrature construction that requires $N$ samples, see e.g. kernel thinning. The regime where such methods are interesting is $n\ll N$. You are completely correct that, in general, the bounds we present do not beat MC, but the spectral decay in the leading constants does counteract this. We have expanded on this in the contribution paragraph in Section 1. 
The intuition is that many measure/kernel pairs exhibit such a spectral decay and one can exploit it by the methods we propose; see also question 1 by reviewer 8cVE.\n\n> clarify the dependence in number of samples in the quadrature n and dimension d in the bounds of Cor 1 and 2?\n\n> can you report the slopes for various dimensions and infer a dimension dependence experimentally?\n\nThanks, it is a very reasonable question.\nIt is rather about the interaction between dimensionality and the eigenvalues $\sigma_n$ (or the \"difficulty\" of the problem), since our algorithm itself does not depend on the dimensionality of the problems.\n\nThere should be several regimes when increasing the dimension of the problem; for example, as a multivariate version of (periodic) Sobolev spaces in our setting ($\sigma_n \sim n^{-2r}$ when $d=1$), we can not only consider a product RKHS aka Korobov spaces (in that case $\sigma_n \sim n^{-2r}(\log n)^{2r(d-1)}$), but also the classical multivariate Sobolev spaces (with $L^2$-norms with respect to all the partial derivatives up to degree $r$: then $\sigma_n \sim n^{-2r/d}$). These can be found in Bach [2, Section 2.3]. In the case of a multivariate Gaussian kernel with a Gaussian distribution, we have $\sigma_n\lesssim \exp(-cn^{1/d})$ for each dimension $d$. We will briefly add this remark when we get an additional page for the camera-ready version (if applicable).\n\n> minor typos\n\nThanks for these, all fixed.", " Thank you for taking the time to read our paper, and even the appendix, in detail. We reply to your comments below.\n\n> In Section 3, it is not clear what you mean by the opt version\n\nIf we explicitly write \"convex quadrature\", the optimization of the weights is conducted over the simplex.\nSo, in the revised manuscript, iid Bayes (= Monte Carlo + opt) and Halton are combined with the optimization over $\mathbb{R}^n$,\nand the other methods (N./M + emp, Thinning, Herding) are combined with the optimization over the simplex.\n\n> How do you explain the fact that in Corollary 2 the boundedness assumption (condition (a) in Theorem 8) is not required?\n\nCorollary 2 rather corresponds to (b) of Theorem 8. The randomness in whether the inequality is satisfied without the \"boundedness assumption\", which is treated qualitatively in Theorem 8(b), does not arise in the \"+ empirical\" case thanks to the deterministic nature of the recombination algorithm.\n\n> It would be nice to add the N-th eigenvalue as a benchmark in the graphs\n\nYes, thanks for the suggestion! We tried it, but the graph is already pretty crowded and we ran out of time playing with the latex figure spacing to fit everything into 9 pages. We will try to visibly improve the figures in the final version, either by putting some plots in the appendix or otherwise.\n\n> Theorem 8 and Theorem 10 were mentioned implicitly in Table 1. In my opinion, I believe that they deserve to get an explicit mention in the main paper.\n\nThanks, we agree, but the main body is already quite dense. We should have enough space to explain these theorems and the `non-empirical' version of the algorithm if we have another content page in case of acceptance (if applicable).", " This article studies a family of quadrature rules suited for functions that belong to an RKHS. 
The proposed construction is based on a recombination algorithm that takes a discrete measure $\nu_{N}$, which approximates the initial measure $\mu$ (for example, a Monte Carlo approximation), and outputs another discrete measure $\mu_{n}$. The weights of the quadrature are obtained by enforcing that the quadrature rule is exact on $n$ functions $\phi_{1}, \dots, \phi_{n}$ that are taken to be equal to $\phi_{i} := k_{0}(x_{i},.)$ where $k_{0}$ is a ‘low rank’ kernel. \n\nThe contributions of the paper may be summarised as follows:\n- A generic result (Theorem 1) that gives an upper bound on the worst-case error (on the unit ball of the RKHS) for the proposed algorithm for an arbitrary 'low rank' kernel $k_0$\n- The instantiation of Theorem 1 to the case when $\nu_{N}$ is the Monte Carlo approximation of $\mu$ and $k_0$ is obtained from the Mercer decomposition of $k$\n- The instantiation of Theorem 1 to the case when $\nu_{N}$ is the Monte Carlo approximation of $\mu$ and $k_0$ is obtained through the Nyström approximation\n- Several numerical simulations that illustrate the theoretical rates. \n\nThe article is well written, and the proven results are thoroughly discussed and compared to the existing results in the literature. \n\n \n\nStrengths:\n* The empirical versions of the algorithms are flexible and may be used in domains where the eigenfunctions of the integration operator are not tractable.\n* The proposed quadrature rules are convex, which is an important property in misspecified settings\n* The theoretical analysis is insightful and may be used for other applications \n\n\nWeaknesses:\n
\n* The empirical versions of the quadrature (when $\nu_N$ is the Monte Carlo approximation of $\mu$) are very useful in practice, yet they come with weak convergence rates: the second term in the r.h.s. of the bound (5) in Theorem 1 is $\mathcal{O}(1/N)$, which is typically a slow rate in the kernel-based quadrature literature.  Questions and suggestions:\n\n* In Section 3, it is not clear what you mean by the opt version: is the optimization of (15) done over the simplex or over $R^{N}$?\n\n* How do you explain the fact that in Corollary 2 the boundedness assumption (condition (a) in Theorem 8) is not required?\n\n* It would be nice to add the N-th eigenvalue as a benchmark in the graphs\n\n* Theorem 8 and Theorem 10 were mentioned implicitly in Table 1. In my opinion, I believe that they deserve to get an explicit mention in the main paper. -", "This paper focuses on the quadrature problem, more specifically on bounding the maximum mean discrepancy between a quadrature approximation mu_Q = sum w_i delta_{x_i} and a target distribution mu, where (x_i) is an n-subset of an available bigger sample DN of i.i.d. samples of mu.\nThey focus on convex rules, i.e., when the weights are positive and sum to 1.\n\nThe main idea is to consider a basis of n functions phi_1.. phi_n approximating RKHS functions, then sample N points (y1…yN), and select an n-subset (x1..xn) reweighted by (w1..wn). There are thus two sources of error: (1) approximating k/the RKHS by a finite n-dimensional one, e.g. through Nystrom or Mercer approximations, (2) the empirical measure supported on y1…yN.\n\nThey provide several results, in particular Thm 1, which becomes more explicit in Cor 2 through the Mercer approximation and Cor 4 through the Nystrom approximation. The authors evaluate their algorithm empirically on small-dimensional datasets (with either Nystrom or Mercer approximations) and compare it to the Monte Carlo rate, a uniform grid and iid Bayes (weights minimizing equation 14), and demonstrate it achieves faster rates in practice than Monte Carlo.\n Strengths\nThe paper is quite well written, and well referenced regarding kernel quadrature rules. It tackles an important problem, quadrature rules, which enable integrals to be approximated accurately, with applications in Bayesian inference for instance. The proposed algorithm has complexity O(nN + n^3 log(N/n)), which is competitive with herding in high dimensions, where the latter becomes intractable because of the global optimization subroutine. \n\nWeaknesses\n\nMaybe I am missing something, but I think the bound in Theorem 1 does not advocate for faster rates than Monte Carlo.\n\nIn Theorem 1, the expected squared MMD (where the expectation is taken wrt the available sample DN) is bounded by a constant c_1(n) (later on it will depend on n) + c_2/N. \n\nClassical bounds on the (squared) MMD (e.g. “A kernel two sample test”, Gretton et al.) between the empirical measure on DN (without quadrature weights) and the target mu are already of order c_2/N. 
Hence, from this bound it does not seem that the quadrature provides a better approximation of the target distribution mu, since, if I understood well, Qn (of size n \le N) can a priori be of the same size as DN (size N).\n\nThe bound of Theorem 1 starts to be interesting if n is always much smaller than N and the term int(k-k0)dmu is much smaller than 1/n.\nThe first condition is typically satisfied (the authors claim in l98 that they will later take N ~ n^2), so c_2/N becomes of order c_2/n^2, which is faster than Monte Carlo (c/n). However, for the second term c_1(n) it is less clear. Corollary 2 (Mercer approximation) yields a term c_1(n), indeed decreasing with n, but it is not clear how fast c_1(n) decreases; faster than 1/n? Same question for Corollary 4 (Nystrom approximation), which is not very explicit in n. \n\nIf it is not faster than 1/n, then the quadrature rule (supported on n points) does not currently have a better upper bound than any empirical measure supported on n points of DN.\n\nHowever, the experimental results show that the slope is lower (better) than the Monte Carlo rate, which encourages the method. \nIn the experiments, it would be interesting to report the slopes and understand the dependence of the rate with respect to n but also with respect to the dimension.\n \nFor these reasons, I tend to reject the paper, but I am ready to revise my opinion if the authors can clarify their results.\n - Is it possible to clarify the dependence on the number of samples in the quadrature n and the dimension d in the bounds of Cor 1 and 2?\n- can you report the slopes for various dimensions and infer a dimension dependence experimentally?\n\n\nMinor comments\nl55: “hybrid approaches” -> hybrid between what and what?\nl69: a subset (xi)_{i=1}^n (n instead of N)\nl71: (iii) in practice, constructing N>> n samples can be a challenge as well (eg MCMC methods are expensive)\nl138: Nystom The limitations of the theoretical and empirical results could be discussed much more extensively, see comments above. ", " The authors investigate the properties of kernel quadrature rules with convex weights. The authors propose new algorithms that take as input i.i.d. samples and a suitable kernel, and provide a small integration error in the worst case over the associated reproducing kernel Hilbert space. Two strategies are proposed based on kernel approximations via Mercer decompositions and Nystrom approximations, along with recombination algorithms for point measures.\n\n \nStrengths of the work include (a) a perhaps novel combination of two concepts: spectral properties of kernels, and recombination results about point measures, and (b) the ability to simultaneously analyze strategies involving Mercer decompositions and Nystrom approximations.\n\nLimitations are (a) lack of clear discussion on when the proposed methods improve over prior strategies (while a summary is provided in Table 1, it is unclear if the methods indeed provide an improvement), (b) missing prior works, and (c) lack of stronger baselines that can be used in experiments (see questions for more details).\n \n\nI have a few major concerns:\n\n1. For their Nystrom strategies (which are generically feasible for implementation), it is unclear if the mentioned bound can provide a better than Monte Carlo rate since that needs n / sqrt(l) < 1/ N which in turn means l > n^2 * N --- which is necessarily larger than N. 
On the other hand, the Mercer strategy in this work has a key limitation as it requires the knowledge of the eigenfunctions, which are unknown for almost any generic (k, P) pair (other than a few exceptional pairs). Can the authors provide a detailed discussion about this aspect---as to the achieved rate in terms of the total number of queries---and provide concrete examples where their methods provide an improvement over prior strategies? In particular, further unpacking of Table 1 is needed, and it is unclear to me if the proposed strategy provides such an improvement (e.g., see Table 2 of https://arxiv.org/pdf/2105.05842.pdf).\n\n2. This work is missing relevant references on recent new developments on Kernel thinning (KT)---which would have a tick on all of C, M, E in Table 1 of this work. Moreover, KT provides an error of order 1/n^2 + 1/N (in terms of the notation of this work) up to logarithmic factors for the WCE with an n-point output and an N-point input (typically referred to as MMD in the literature). See [Table 2 https://arxiv.org/pdf/2105.05842.pdf], [Table 3 https://arxiv.org/pdf/2110.01593.pdf] for error rates, and [Example 6 https://arxiv.org/abs/2111.07941] for the new variant that provides near-linear runtime (N log^3 N), which is significantly faster than the strategies mentioned. \n\n3. [Sec 1.2 https://arxiv.org/pdf/2105.05842.pdf] (KT paper) would also be a useful reference for some other works like P-greedy, black box importance sampling, etc., which are not referenced here and are relevant for the work.\n\n4. For experiments, I believe KT should be added. Moreover, like the strategies in this work, even the KT and herding outputs should be combined with 'opt' to provide a fairer comparison to their best-performing variant (N.+emp.+opt). \n\n5. At least one of the algorithms from Remark 1 should be discussed with some brief intuition for the readers to become aware of the high-level ideas used.\n\n6. What is the probability in Thm 3 / Corr 4 taken over (is it the draw of Z)?\n\nMinor comments:\n\n- l 38/39: A reference is needed.\n See questions.", " The paper addresses a problem of kernel quadrature rules and proposes a new quadrature with positive weights that leverages recombination algorithms (stemming from Caratheodory’s Theorem; a naive sketch of this primitive is included after the review block below) and either the Mercer or Nystrom approximation of the kernel. For each version, there is also a derivation of corresponding convergence rates. The proposed method is empirically verified on a classical setup for testing the efficiency of general kernel quadrature methods.\n Originality: \n\nThe paper addresses an established task of kernel quadrature. The method is based on well-known techniques of recombination and spectral kernel approximation combined in an unexplored way to obtain a convex quadrature method. The paper emphasizes that the major difference of their method is the convex weights of the quadrature points.\n\nQuality: \n\nI’m not entirely convinced by the very brief motivation for the convex weights; perhaps it would benefit the paper to elaborate the need for the convex quadrature in a more comprehensible way.\n\nIt seems that the extensive supplementary material provides substantial theoretical support for the proposed method; unfortunately, I was not able to check it in detail.\n\nThe Experimental Section lacks explicit comparison with the other practical methods apart from the worst and best case baselines on a known setting with Sobolev and Korobov spaces, e.g. 
there is no DPP-based method in the Figures.\n\nIt is also not immediately clear whether convexity of the weights has a crucial role in any performance advantages; thus the motivation for convex quadrature rules lacks empirical support (at least in the experiments conducted and the conclusions drawn from their results).\n\nThe only method that consistently brings advantages across all Figures is the +opt variation, which assumes knowledge of expectations and additional optimization over the weights; however, Table 1 does not include this information, which is somewhat confusing. I wish that was a little bit more clearly stated in the paper.\n\nClarity:\n\nThere is a general issue with exposition, at least for someone less acquainted with the area. The Abstract as well as the Related Work section would greatly benefit from a more extended exposition. As mentioned above, the motivation for the convex weights is also very short and could be made clearer with more explanations. The Experimental Section doesn’t make it any clearer in this regard either; perhaps adding an ablation study could offer a more transparent understanding. The paper doesn’t discuss how the derived bounds compare against other methods, even under different assumptions; perhaps there should be matching setups allowing comparison.\n\nSignificance:\n\nIt is definitely an interesting method, and perhaps will raise a good amount of interest in the kernel community. However, I’m not sure whether the paper does a good job of demonstrating the advantages of the proposed approach over existing methods. It first argues that convex quadrature is essential, but later fails to confirm this empirically, unfortunately, and it seems positive weights do not play a major role in the theoretical analysis either.\n What are the consequences of treating a less general quadrature problem without a weighting function $g$?\n\nCould the authors elaborate more on the need for convex quadrature? And how does the positivity of the weights affect practical performance?\n\nHow does the choice of recombination algorithm affect the performance?\n The potential negative societal impact is not applicable to this submission.\n\nThe paper addresses a less general case of kernel quadrature, i.e. there is no weighting function $g$ in the problem formulation (also stated in the related literature paragraph).\n\nThe method implies an ability to sample data points from the target measure for the recombination technique; however, this could be problematic (this limitation is also stated by the authors).\n" ]
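For readers unfamiliar with the recombination primitive that the paper and several questions above revolve around, the following is a deliberately naive sketch (ours): a Caratheodory-type elimination that reduces a discrete probability measure on N points to at most n + 1 points while exactly preserving the means of n test functions. The paper relies on the much faster recombination algorithms cited in its Remark 1, and the toy test functions here are arbitrary.

```python
import numpy as np

def recombine(points, weights, features, tol=1e-12):
    """Reduce (points, weights) to <= n_features + 1 atoms with nonnegative
    weights and unchanged feature means (naive Caratheodory elimination)."""
    w = np.asarray(weights, dtype=float).copy()
    n_feat = features(points[:1]).shape[1]
    while True:
        support = np.flatnonzero(w > tol)
        if len(support) <= n_feat + 1:
            return support, w[support]
        Phi = features(points[support])
        M = np.hstack([Phi, np.ones((len(support), 1))])   # also preserve total mass
        v = np.linalg.svd(M.T, full_matrices=True)[2][-1]  # a direction with M^T v = 0
        pos = v > 0                    # sum(v) = 0 and v != 0, so pos is nonempty
        alpha = np.min(w[support][pos] / v[pos])           # first weight driven to zero
        w[support] -= alpha * v                            # feature means are unchanged
        w[w < tol] = 0.0

rng = np.random.default_rng(0)
N, n = 100, 8
pts = rng.standard_normal(N)
w0 = np.full(N, 1.0 / N)
feats = lambda t: np.cos(np.outer(t, np.arange(1, n + 1)))  # toy test functions

idx, w = recombine(pts, w0, feats)
print(len(idx), "atoms; max moment gap:",
      np.abs(feats(pts).T @ w0 - feats(pts[idx]).T @ w).max())
```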
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 3 ]
[ "Q1zT_ndtZy5", "nips_2022_V_4BQGbcwFB", "_0xSQ3bowK", "z-m1FoRRkqT", "nqQZ6Hk57OV", "Eaxwvp3MNXx", "Eaxwvp3MNXx", "uHM8oC9Eb_m", "lXO6x6hAijt", "CKquFpF2P-O", "nips_2022_V_4BQGbcwFB", "nips_2022_V_4BQGbcwFB", "nips_2022_V_4BQGbcwFB", "nips_2022_V_4BQGbcwFB" ]