| Field | Type | Details |
| --- | --- | --- |
| paper_id | string | lengths 19 to 21 |
| paper_title | string | lengths 8 to 170 |
| paper_abstract | string | lengths 8 to 5.01k |
| paper_acceptance | string | 18 classes |
| meta_review | string | lengths 29 to 10k |
| label | string | 3 classes |
| review_ids | sequence | - |
| review_writers | sequence | - |
| review_contents | sequence | - |
| review_ratings | sequence | - |
| review_confidences | sequence | - |
| review_reply_tos | sequence | - |
nips_2022_Bqk9c0wBNrZ
Semi-Parametric Neural Image Synthesis
Novel architectures have recently improved generative image synthesis, leading to excellent visual quality in various tasks. Much of this success is due to the scalability of these architectures, and hence comes with a dramatic increase in model complexity and in the computational resources invested in training these models. Our work questions the underlying paradigm of compressing large training data into ever-growing parametric representations. Instead, we present an orthogonal, semi-parametric approach. We complement comparably small diffusion or autoregressive models with a separate image database and a retrieval strategy. During training we retrieve a set of nearest neighbors from this external database for each training instance and condition the generative model on these informative samples. While the retrieval approach provides the (local) content, the model focuses on learning the composition of scenes based on this content. As demonstrated by our experiments, simply swapping the database for one with different contents transfers a trained model post-hoc to a novel domain. The evaluation shows competitive performance on tasks which the generative model has not been trained on, such as class-conditional synthesis, zero-shot stylization, or text-to-image synthesis without requiring paired text-image data. With negligible memory and computational overhead for the external database and retrieval, we can significantly reduce the parameter count of the generative model and still outperform the state of the art.
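The retrieval-conditioned training loop sketched in this abstract can be made concrete with a small, hedged example. The following is an illustrative sketch only, not the authors' implementation: it assumes a FAISS inner-product index as a stand-in for the paper's retrieval backend, with database embeddings (e.g., from a frozen CLIP image encoder) L2-normalized so that inner product equals cosine similarity.

```python
import numpy as np
import faiss  # stand-in retrieval backend; the paper's actual choice may differ

d, n_db, k = 512, 100_000, 4

# External image database, encoded once and L2-normalized.
db = np.random.randn(n_db, d).astype("float32")
db /= np.linalg.norm(db, axis=1, keepdims=True)

index = faiss.IndexFlatIP(d)  # exact inner-product (cosine) search
index.add(db)

def neighbors_for_batch(query_emb: np.ndarray) -> np.ndarray:
    """Return the k nearest database embeddings for each query embedding."""
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    _, idx = index.search(q.astype("float32"), k)
    return db[idx]  # shape: (batch, k, d)

# In training, each batch of training-image embeddings would be paired with
# its neighbors, and the generative model conditioned on them (e.g., via
# cross-attention in a diffusion model's denoiser).
batch = np.random.randn(8, d).astype("float32")
print(neighbors_for_batch(batch).shape)  # (8, 4, 512)
```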
Accept
This paper tackles the general image synthesis problem (unconditional, conditional, text-guided) in a semi-parametric manner. It first retrieves relevant samples from an external dataset and uses them as additional conditions for image generation. The approach is verified with different image synthesis frameworks, e.g., diffusion-based and autoregressive models. Comprehensive experiments demonstrate the effectiveness of the proposed semi-parametric image generation method compared with baselines. The paper received uniformly positive review scores after some discussion, leading to an ``Accept'' decision overall.
train
[ "cFu7gyFVL8", "zoZBZ3JSSOs", "zok2Okjzhbs", "0t-xPHY-RE", "lnffzeMXzL", "sE1ExaQRCM-2", "xg10eqjy-S-", "pjtFQ8nK2Cf", "KjRE7lCYZT", "3qaUXIqtG0r", "xtiedZZIeYn", "ifltF6cptq", "IpJRHqnckp", "k4X14VFGaHZ", "wkHCZW12AWu", "tD0hRvAra13" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " As suggested by B19f we will include the ablations on D and X in the final version, just as all the other additional experiments presented here.", " Thank you for raising the score, we are pleased to see that the reviewer is satisfied with our answers .\nHere are two further clarifications:\n\n**Size of database:**\n\nB1f9 is correct in that additional data is needed during sampling. However as shown in Fig. 1, our model is much more leightweight than ADM. Even with large $m=0.05$\n (rightmost/bottommost dot in Fig. 1) our model has more than 50M parameters less than ADM (green diamond), while nearly bisecting FID score. Thus, it is actually better deployable than previous diffusion models, which in general have a higher parameter count than e.g. GANs. Thanks nonetheless for the suggestion. We will also add this information to Tab. 1. \n\n**Paired Data:**\n\nWe agree that training CLIP required paired data. However, when we say \"without requiring paired data\", we mean that our model itself is not trained on paired data. The important aspect of this is that there is no need to gather a large scale paired-data set to further improve the model, as opposed to common text-to-image models as Dall-E2 and Imagen. This is -- as stated by B1f9 -- caused by the usage of the multi-modal model that was trained on paired data and enables us to use this shared latent space. But since we use a publicly available model (https://github.com/openai/CLIP), there would be no need to retrain CLIP to achieve our results. However, we want to emphasize, that our work does not solely make use of the multimodal space of CLIP and, thus, can also be realized with alternative pre-trained encoders $\\phi$ which do not require paired training data. As shown in Section 4.3 and in this link (https://imgur.com/BQX0Blu), we can change properties of RDMs post-training only by replacing the train database with another one. This is a feat which is independent of the choice of $\\phi$ and so are our proposed sampling strategies which lead to very flexible sampling behavior as shown in Fig. 12 and 22. Moreover, we would like to draw the reviewers attention to Fig. 19, where we see that RDM outperform LDMs with more trainable parameters in quality (FID, Precision) as well as diversity (Recall) not only with CLIP but also with pre-trained VQGAN as encoder $\\phi$.", " Size of the database: Yes, I understand that one does not need to store the database into GPU memory during training, but besides RDM model itself, you still need to store features in disk for unconditional sampling or maybe other purposes (if I understand correctly). This is a concern in some real world applications such as deploying your approach into portable devices. Thus I still suggest to list the overall storage cost of your approach in inference time vs other approaches in inference time. \n\nPaired data: Thank you for the explanation. I understand that to solely train a RDM along, one does not need to have paired-data. But what I initially said is since you approach is built upon CLIP, which itself requires paired training data. In other words, the multimodal space is not free. If one replace CLIP with normal ViT or Resnet in your approach, I do not think RDM can still do text2img generation etc. 
Thus I feel \"without requiring paired data\" is a bit strong", " * __Societal Impact and \"true limitations\"__\nAlthough we are not sure which \"true limitations\" the reviewer refers to, we will add the following remarks to Appendix A, which already discusses some of the model's limitations.\n\n*Another limitation is an inherent tradeoff between database size (and associated storage and retrieval costs) and model performance, as evident from Fig. 1. Storing and searching indices for databases of up to billions of images can become quite costly. Furthermore, our approach depends on the image representation that is chosen to encode images from $\\mathcal{D}$ and the retrieval model. Both have significant influence on the performance of the RDM/RARM and further research needs to be done to determine the best choices here.*\n\n*Furthermore, as pointed out by XT8s one should consider the ability to curate the database to exclude (or explicitly contain) potential harmful source images. When creating a public API that approach could offer a cheaper way to offer a safe model than retraining a model on a filtered subset of the training data or doing difficult prompt engineering. Conversely, including only harmful content is an easy way to build a toxic model.*\n", " __Neighbor-based approaches vs. paired data__\n\nB1f9 asked about the purpose of lines l.279--295, while fYVT wanted to know if the results in Fig. 2 imply that neighbors are not actually needed for our model. We try to clarify these questions in the following paragraph.\n - The goal of models such as CLIP is to provide a shared space between images and captions. However, since there are always is a difference between images and their captions, e.g. images contain very specific information whereas captions may describe many different images. If one wants to use the utilize the similarity between both approaches this disconnected needs to be bridged. The need for this is demonstrated in Fig. 8: When a model is trained to reconstruct an image from its CLIP-embedding it fails when trying to generate an image based on a caption-embedding. \n- Other papers such as DALL-E 2 or LAFITE do this by training a transfer function between these two spaces, which requires paired data. We verify this by training an additional generative prior (a conditional normalizing flow) in Fig. 8 (note: we train this prior only for illustrative purposes and use paired data during training (a subset of the LAION 400M dataset)).\n- Our approach on the other hand works by providing the model with a more diffuse CLIP-image-embedding, i.e. by training the model with multiple embeddings in the form of nearest neighbors it is able to extract the high level information that is shared between all neighbors, making the model learn robust features that are also contained in captions. Therefore, retrieving multiple neighbors is a central contributor to the generalization of our model. This robust conditioning mechanism can then be used as in Fig. 2 to generate images from captions without having multiple neighbors at inference.\n- In summary, our approach provides an orthogonal approach to training a text-conditional image prior and does not require paired data. This is particularly interesting for open-source high-performance, multimodal models such as CLIP, since their training data is often not publicly available.\n", " **General Response**\n\nWe thank all reviewers for their time and helpful comments, which we believe add value to the work. 
This general response aims to address questions shared by at least two reviewers; however, we also address each comment in separate responses.\nTo fully address some of your questions and remarks we want to add additional figures. All images that we have linked in our replies can be found in this gallery: https://imgur.com/a/MoFrS5S\n\n__Training on $\\mathcal{D} = \\mathcal{X}$ (retrieval database is training data)__\n\n- B1f9 and fYVT asked about the benefit of having a retrieval database that is disjoint from the training dataset. This is an interesting question, and to assess it we have used the rebuttal period to train another model on ImageNet that uses ImageNet itself as the retrieval database. \n- In this experiment we found that sharing the train set and the database increases the visual quality (FID and Precision) on the training set. However, having a disjoint database increases the recall (diversity) of the samples as well as improves the generalization to new conditioning, e.g. COCO zero-shot text-to-image in terms of FID and CLIP similarity. We believe this is caused by the model learning a more diverse set of neighbor features during training, which increases the robustness of the feature decoding. To further back this claim we conduct a second generalization study and replace the database during inference: for the model trained with the ImageNet database we use OpenImages at test time. Similarly, we use ImageNet as a test-time database for the model trained with the OpenImages-based database. RDM-IN/OI refers to the former, RDM-OI/IN to the latter model. For RDM-IN/OI we see clearly deteriorated results, whereas RDM-OI/IN generalizes to the new database and even improves its performance compared to the standard setting, i.e. when using the train-time database also during inference. See Table 1 for the results.\n\n__Training data = $\\mathcal{D}$__:\n\n| Method | FID (train) | FID (val) | CLIP-FID (train) | CLIP-FID (val) | Precision (train) | Recall (train) |\n|------------|------|------|------|------|------|------|\n| **ImageNet** | | | | | | |\n| RDM-IN | 5.91 | 5.32 | 3.92 | 4.44 | 0.74 | 0.51 |\n| RDM-OI | 12.28 | 11.31 | 4.09 | 4.59 | 0.69 | 0.55 |\n| **ImageNet with interchanged database** | | | | | | |\n| RDM-IN/OI | 17.23 | 16.82 | 8.86 | 9.75 | 0.52 | 0.60 |\n| RDM-OI/IN | 10.81 | 12.01 | 3.84 | 4.41 | 0.81 | 0.39 |\n\n\n| **Method** | FID | CLIP-FID | CLIP Score | IS |\n|------|------|------|------|------|\n| **COCO zero-shot text2img** | | | | |\n| LAFITE | 26.94 | n/a | n/a | 26.02 |\n| RDM-IN | 27.28 | 18.12 | 0.29 | 24.17 |\n| RDM-OI | 22.08 | 13.16 | 0.30 | 24.31 |\n\n", " * __Other datasets such as FFHQ, LSUN__\nHere we provide additional samples on the FFHQ dataset using the OpenImages database, as requested by the reviewer. Since Kynkäänniemi et al. [a] showed the standard FID to be 'very insensitive to the facial region', we use their proposed CLIP-based FID [a] to measure the quality of samples. Here we outperform LDMs as well as strong GAN-based models such as StyleGAN2 and Projected GAN. For LDMs we compare with two models for better comparability: the first LDM model is the officially released model from the GitHub repository (https://github.com/CompVis/latent-diffusion) with the FID-optimal sampling parameters (250 steps, $\\eta=1.0$). As our model has slightly more parameters (due to cross-attention for the conditioning) and is trained on 2 NVIDIA A100 GPUs with a larger batch size, we additionally train an LDM with the same number of parameters, denoted LDM (same $N_{\\text{params}}$), using the same hardware and batch size. As shown below, having a database is thus also beneficial for comparably easy, aligned datasets. Samples can be found at https://imgur.com/OqrV2to\n\n| **Method** | CLIP-FID |\n|:-----------|:------------:|\n| Projected GAN | 4.87 |\n| StyleGAN2 | 2.90 |\n| LDM (from website) | 2.12 |\n| LDM (same $N_{\\text{params}}$) | 2.63 |\n| RDM | 1.92 |\n\n[a] Kynkäänniemi, Tuomas & Karras, Tero & Aittala, Miika & Aila, Timo & Lehtinen, Jaakko. (2022). The Role of ImageNet Classes in Fr\\'echet Inception Distance. \n", " We here address the individual concerns:\n\n* __It lacks novelty. The model architecture follows LDM. Conditioning on CLIP embedding to enable the power of text2image generation is similar to DALLE2, and should be attributed to CLIP.__\nThis is a call for comparison with concurrent work published on arXiv within two months of the NeurIPS 2022 submission date (DALL-E 2 appeared on April 13, 2022, see https://arxiv.org/abs/2204.06125; the submission deadline was May 19, 2022). Moreover, our method shows how to invert the multimodal space learned by CLIP without learning a specific text-image prior, and thus the statement that this \"should be attributed to CLIP\" is incorrect as written, since CLIP does not have generative text-to-image capabilities. Our approach does not require paired data for training the retrieval-augmented model and can be trained on images only, which is particularly interesting for high-performance multimodal contrastive models such as CLIP, since their training data is often not publicly available. This is also related to the section __\"Neighbor-based approaches vs. paired data\"__ in our general response.\n\n* __Ablation studies on D__\nThanks for this suggestion. Because a larger and, thus, more diverse database should contain many smaller ones as its subsets, we skipped such a study in the original submission. But since the reviewer explicitly asked for it, we provide this study in https://imgur.com/Nv9hWtp and compare the test performance of RDMs trained on the ImageNet-Dogs dataset with the databases i) WikiArt (RDM-WA), ii) COCO (RDM-COCO) and iii) OpenImages (RDM-OI) against an LDM baseline with 1.3 $\\times$ more parameters. We see that choosing a database which does not provide the model with useful information for modelling the train data, as in i), gives no improvement in FID and results in lower precision scores. However, using a small database containing useful information for the training task, as in ii), results in lowered FID scores and higher recall compared to the LDM baseline. As expected, further increasing the database size, as in iii), improves all metrics, which we attribute to the statement made at the beginning of this answer. We expect the benefits of an increased database to become more prevalent for more complex datasets, similar to Fig. 11 in the submission. \n\n* __Missing LDM in Tab. 1; size of the database__\nWe would like to point out that we show comparisons to LDM throughout the whole work, e.g. in Fig. 11, 19, and 20. As RDMs consistently achieve significantly better scores than the LDM baselines, and Fig. 11 additionally shows that using an external database is especially beneficial for increasingly difficult datasets, we did not include LDM in the evaluation on full unconditional ImageNet. Nonetheless, as requested by the reviewer, we additionally trained an unconditional LDM on ImageNet with the same number of parameters and training setting (same hardware, batch size and number of train steps) as our RDM from Tab. 1, which results in **FID: 45.89; IS: 23.46; Precision: 0.56; Recall: 0.55**. All of these results are worse than those of our RDM. We will add this to Tab. 1.\nA database consisting of 20M visual instances takes 20GB of storage space on hard disk when using the CLIP ViT-B/32 model and an FP16 representation of the encodings. Note, however, that we do not need to load this into the VRAM of the GPU and that we only use it for training. At test time, one can significantly decrease the storage requirements by using Top-M Sampling, see Figure 1. \n\n* __Need to compare to prior and paired data__\nIn l. 286-288 we explicitly state that \"our retrieval-augmented approach provides an orthogonal approach to DALL-E 2 without requiring paired data\" -- training text-to-image generative priors as in DALL-E 2 requires, in contrast to our approach, paired data. See also our general response, __\"Neighbor-based approaches vs. paired data\"__.\n\n* __I am not sure if LDM is the first one to propose cross attention. It seems GLIDE also uses cross attention to fuse image and text features.__ \nGLIDE uses self-attention on the concatenated text and image representation sequences (vanilla attention complexity O((M+N)²)), whereas LDM introduces conditioning via explicit cross-attention (vanilla attention complexity O(MN)).\n", " __Is NN not important for text-to-image synthesis?__\n\nThank you very much for this question, which touches on a central point of our paper and which we therefore try to present more clearly in our general response; see __\"Neighbor-based approaches vs. paired data\"__.\n\n__Difference between IC-GAN and our model for style transfer:__\n\nThe main difference lies in the fact that during training (and inference) RDM uses feature encodings of multiple neighbor images instead of single instances. We believe that this increases the robustness of our model at test time to changes in the database. To support this claim, we have run the following evaluations: we use WikiArt as the inference database to generate 50,000 samples for both models. With these samples we evaluate FID, CLIP-FID, precision and recall against the WikiArt dataset. Especially FID and precision demonstrate that our samples better approximate the WikiArt data manifold.\n\n| **Method** | FID / CLIP-FID | Precision | Recall |\n|----------------------|-------------------|-------------|-----------------|\n| ICGAN | 24.75/35.17 | 0.465 | 0.276 |\n| Ours | 21.50/13.01 | 0.628 | 0.336 |\n\nIn addition, we also want to explicitly measure the transfer that happens when we exchange the database. To do this, we train a linear probe on ResNet-50 features to distinguish images from ImageNet and WikiArt; the resulting classifier has an accuracy of 96% on an unseen validation set. To see how well the method models the new database, we evaluate how many samples for a given inference dataset are correctly classified (i.e. D_train as ImageNet and WikiArt as WikiArt); see here: https://imgur.com/0WBoVlm.
We find that RDM produces images that the classifier identifies as better fitting the given dataset.\n\nOf course, a qualitative side-by-side comparison is also always important when evaluating generative models. For this we additionally take highly stylized images from the PACS cartoon dataset [1] as the database to see the transfer capabilities in this extreme case. These samples can be found here: https://imgur.com/cSuuiDM. In addition, there are more WikiArt-stylized images in Fig. 23 of our appendix.\n\n[1] Li, Da, et al. \"Deeper, broader and artier domain generalization.\" __Proceedings of the IEEE International Conference on Computer Vision__. 2017.\n\n__MS-COCO zero-shot FID__:\n\nThank you for pointing this out. To show the full capabilities of our approach we evaluated the ImageNet-RDM from Fig. 2 on zero-shot COCO; see Tab. 1 in the \"__general response__\" section.\n\n__ImageNet class-conditional synthesis__:\n\nAs requested by the reviewer, we present class-conditional results on ImageNet using the strategy introduced in Sec. 3.3 of the main paper: **FID: 13.10; IS: 36.66; Precision: 0.67; Recall: 0.33**. These results are generated with 100 DDIM steps and c.f.g. scale 1.5.\n", " __Societal Impact & Limitations__:\n\nYou are right, there is always the risk of severe abuse when the ability to generate artificial images is given to humans. We as researchers need to be aware of that issue and have discussed the societal impact in Appendix B, as well as limitations of our work in Appendix A.\n\nThank you for the remark about the ability of our approach to curate the database to exclude potentially harmful source images. When creating a public API, this approach could offer a cheaper way to provide a safe model than retraining a model on a filtered subset of the training data or doing difficult prompt engineering. Conversely, this technique also allows for the inclusion of only malicious content, making it an easy way to create an explicitly toxic model.\n\nAnother (more technical) limitation is an inherent tradeoff between database size (and associated storage and retrieval costs) and model performance, as evident from Figure 1. Storing and searching indices for databases of up to billions of images can become quite costly. Furthermore, our approach depends on the image representation that is chosen to encode images from $\\mathcal{D}$ and on the retrieval model. Both have significant influence on the performance of the RDM/RARM, and further research needs to be done to determine the best choices here.\n", " __Figure 1.__\n\nThank you for your feedback on Figure 1. We have added an explicit visual marker to make it clear that the abscissa does not start at a value of 0. Additionally, we have replaced the orange markers with a line, since this is the same model for all choices of databases. The new figure can be seen here: https://imgur.com/LVmuxUl\n\nHowever, the fact that the curve approaches the ADM value is rather coincidental. Other sampling parameters/guidance scales may give a potentially different picture, and models with different values for $|\\theta|$ and $|\\mathcal{D}|$ also behave differently --- ADM-G is in fact a different model that needs labels to train the guidance classifier. Our model does not need labels at any point of training.\n\n__Time needed to encode $\\mathcal{D}$.__\n\nUsing a batch size of 100, our encoding pipeline takes 43 mins to process 1M examples from OpenImages (which corresponds to an encoding rate of ~385 samples/sec) while utilizing a single NVIDIA Quadro RTX 6000 GPU. The main bottleneck here is fetching data from hard disk, which is particularly expensive for OpenImages, since it contains large images up to 4K. Thus, with optimized dataloading (e.g. using file streams as in HDF5/WebDataset) the pipeline could be sped up even further. However, since encoding a sufficiently large train database is a one-time effort that can be performed before the actual training, and since the extracted database can be used to train many models, we did not implement advanced dataloading techniques. The encoded database can then be interpreted as part of the weights of a trained model and also used for inference. A single forward pass through CLIP (ViT-B/32) takes 43ms/batch or 0.43ms/sample on an NVIDIA RTX 2080 Ti with batch size 100.\n\n__Visually distinct domains.__\n\nWe would like to draw the reviewer's attention to Sec. 4.3 and Fig. 10, as well as Fig. 14 and 23 in the appendix, where we perform domain transfer between natural images and artistic images (via the WikiArt database). However, we also believe that this is an interesting point that deserves further investigation, and we add experiments with the ArtBench database (cf. https://imgur.com/BQX0Blu for qualitative samples) that show that the model is able to perform transfers with different visual/artistic styles. In addition, we also explore the transfer with the PACS cartoon dataset [1]. In this dataset, images are heavily stylized and have particularly distorted subjects compared to natural images. Images can be found here: https://imgur.com/cSuuiDM. We find that our model is able to stylize the images to some degree, although it does not match the style of the cartoon images perfectly. The model also struggles particularly when given the task to synthesize stylized giraffes (bottom right); a reason for this might be that the model has not seen any giraffes during training on ImageNet data.\n\n[1] Li, Da, et al. \"Deeper, broader and artier domain generalization.\" __Proceedings of the IEEE International Conference on Computer Vision__. 2017.\n\n__Training time for semi-parametric models.__\n\nYes, our semi-parametric approach to image synthesis requires less training time to achieve the same quality as its \"classic\" counterparts. More precisely, we have added https://imgur.com/Nv9hWtp, which shows how FID, IS, Precision, and Recall for three instances of RDM (with different train databases; for a detailed explanation of the different instances of RDMs see Response to B1f9, Part 1, section 'Ablation studies on D') and LDM (RDM: 400M trainable parameters, LDM: 576M parameters) behave over progressive training on the Dogs subset of ImageNet. We see that the RDM achieves an FID of 50 about 3 times faster, while the recall (diversity) is consistently better than that of the non-retrieval model.\n\n__Results not SOTA compared to classifier-guided ADM__ \n\nWe would like to point out a fundamental difference between these approaches: ADM-G __must__ train a classifier, because it __cannot__ use classifier-free guidance, as the model does not use any conditioning. In contrast, we condition on neighbors and can make use of various truncation techniques, as introduced in l.199 (Sec. 3.3) and demonstrated in Sec. 4.5. Therefore, the results are not directly comparable, and we solely included those to show that we can reach their performance even without access to labels.\n", " __Related works should be mentioned and compared__\n\nkNN-Diffusion [1] is a concurrent work which we discuss in Appendix C. Unfortunately, no code has been published so far, and we hence cannot compare against it more explicitly. We agree with the reviewer that comparing to LAFITE [2] improves our work. Therefore, we assess the common zero-shot metrics on COCO, which demonstrate that our model outperforms LAFITE in FID and achieves only slightly lower Inception Scores, despite being trained on a significantly smaller dataset. Please see the next question for the results.\n\n__Text-to-image worse than related work__\n\nWe would like to point out that Fig. 7 contains results that were produced for an ablation study and not tailored towards beating the state of the art (evaluated on 2000 captions only, shallow models on ImageNet). To show the full capabilities of our approach we evaluated the ImageNet-RDM from Fig. 2 on zero-shot COCO; see Tab. 1 in the \"__general response__\" section. Here, we improve over LAFITE in FID although we train only on ImageNet, which is less than half the size of CC3M, the train set for LAFITE, and our method does not require text prompts during training, whereas theirs does.\n\n__Diversity of generation is influenced__\n\nYes, diversity is increased, as can be seen from our comparison in Fig. 11, where we compare the retrieval-augmented models to their fully parametric counterparts. As evident from the plots, both precision and recall are higher for both the autoregressive and the diffusion models. Further, FID is consistently lower, which is better. We therefore conclude that our retrieval-based approach improves the diversity of generations while increasing visual fidelity.\n\n__Same number of sampling steps ADM vs RDM__\n\nFor ADM we report the official values from the publication, generated with 250 steps. For diffusion-based models, more sampling steps generally lead to better quality, so ADM is at an advantage. To have an absolutely fair comparison, we follow the suggestion of the reviewer and generate results with 250 sampling steps (Top-M 0.1, c.f.g. scale 2.0), achieving the following results: **FID: 12.03; IS: 79.78; Precision: 0.76; Recall: 0.55**\n", " This paper tries to use an additional database D to help generation. The idea is very simple and straightforward. During training, for a training image x, they find the k nearest images from D using the CLIP image encoder. Then they feed those retrieved images' CLIP embeddings into their generative model as extra information.\n\nBecause of the shared space between text and image, once their model is trained they can do text2image generation and class-conditional generation even though the model is trained unconditionally. \n\nThey studied what the best value of k is, and propose a top-m sampling strategy for a trade-off between diversity and fidelity. \n\n\n Strengths:\nUsing an additional database to help generation. (I have mixed feelings about this point, as using an external bank of images is not a new philosophy in image generation.)\nThe paper writing is very clear and easy to follow. \nThey achieve state-of-the-art unconditional generation results on ImageNet with fewer trainable parameters.\n\nWeakness:\n\nIt lacks novelty.
The model architecture follows LDM. Conditioning on CLIP embeddings to enable the power of text2image generation is similar to DALLE2, and should be attributed to CLIP. \n\nMy main concerns/doubts are listed in the \"Questions\" section. Here are my major concerns for this paper regarding the setting and the high level: \n\n1. The main claim of this paper is having an additional database. But they do not justify why they want a disjoint D. What if D is the same as the training data? They should have shown the benefits of using different images as D. They have something similar in L289, but in this case k=1. They should properly study a model with k=4 when D is the same as the training images. \n\n2. They do not show enough ablation studies on the database. For example, how would different databases influence the model's performance and behavior? Although they studied the patch size used in OpenImages in the supp, those database features are all from OpenImages, thus still similar in my mind.\n\n3. Similarly to the above, they should also study the effect of the same database on different training sets. They only conducted training on ImageNet; what about other datasets such as single-domain datasets (FFHQ, LSUN)? In these cases, how would the current database (OpenImages) influence the models? Does the database still provide useful information? If not, why does someone need the database? \n\n4. A lot of studies were conducted on text-to-image generation, zero-shot, etc. But actually, the reason why they can do this is mainly because of CLIP, not because of the extra dataset. Thus I do not think they present the paper appropriately. If they want to highlight these properties, then they should base their story more on CLIP. If they still want to make their main story be the database, then they need to thoroughly study the concerns I listed above. \n\n5. What is the true limitation of this paper? Although they mentioned limitations in the supp, I do not think they take them seriously. What they mentioned are general problems for all diffusion models, such as sampling speed. I hope to see them talk about the limitations unique to their approach. \n\nHere are my major questions regarding the existing experiments and statements:\n\n6. In Table 1, they miss the LDM baseline.\n\n7. In Table 1, they should also list the database size, instead of only parameters, since storage is also a main concern in real-world applications. They mentioned it in the footnote on page 4, but they should explicitly list it in Table 1 to show the extra storage cost.\n\n8. When talking about DALLE2, they said DALLE2 prior training requires paired data and theirs does not (L287), which I do not think is accurate, since the CLIP training also needs paired data. Actually, I do not fully understand the purpose of L279-L295; if the goal of this part is to show their approach does not need to train a prior, then they need to show why a feature database is better than a prior trained on the features from the database. \n\n\nMinor concerns:\n\n1. In L157, I am not sure if LDM is the first one to propose cross attention. It seems GLIDE also uses cross attention to fuse image and text features.\n\n2. For Fig. 2, they can do a user study instead of only showing 4 images. \n\n My major question for this paper was that they did not have enough study and analysis of the training data and the extra database, which is the key claim of their paper. Since they addressed them well in the rebuttal, I am happy to raise my score. \n\nI still have the same concern about the way they present the paper. The main claim of this paper is using retrieval to help generation, BUT the other very important component in their approach is CLIP, which I do not think is replaceable. In other words, if I replace CLIP with a normal ResNet trained on ImageNet, I do not think one can play around with text anymore. But I could still call my paper \"Semi-Parametric Neural Image Synthesis\" or my model \"RDM\". Thus I think their title and story are a bit of an overclaim, or imprecise if one prefers. \n\nIn summary, the contribution of this work outweighs my concern. But I highly suggest putting the ablations on D and X conducted in the rebuttal into the main paper, since these ablations back up their main claim. Also consider adding CLIP into the story when presenting the paper. ", " The paper proposes a retrieval-augmented generative model, which is a semi-parametric model conditioned on retrieved features. Strengths:\n\nThe proposed method is straightforward and effective; good performance is achieved with a smaller model size;\n\nThe paper is well-written and easy to follow;\n\nThe use of a pre-trained CLIP model enables image generation, text-to-image generation and class-conditioned generation; good experimental results are obtained on the image generation task;\n\n\nWeaknesses:\n\nSome related works [1, 2] which also perform generation conditioned on CLIP features should be discussed and compared. Specifically, [1] also retrieves image features based on the CLIP multi-modal joint feature space, and trains the model based on retrieved features. The only difference is that [1] trains a diffusion model in a latent space;\n\nThe text-to-image generation results on the MS-COCO dataset are worse than related works;\n\nDiversity of generation is influenced\n\n[1]. KNN-Diffusion: Image Generation via Large-Scale Retrieval. Oron Ashual, Shelly Sheynin, Adam Polyak, Uriel Singer, Oran Gafni, Eliya Nachmani, Yaniv Taigman.\n\n[2]. LAFITE: Towards Language-Free Training for Text-to-Image Generation. Yufan Zhou, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiuxiang Gu, Jinhui Xu, Tong Sun. In Table 1, ADM and RDM perform sampling with different hyper-parameters. Some more results are suggested for a fair comparison (e.g. ADM with 100 steps or RDM with 250 steps).\n\nThe Inception Score on the MS-COCO dataset for text-to-image generation is suggested to be reported. Yes", " This paper attempts to solve the problem of image synthesis (unconditional, conditional, text-guided) in a semi-parametric way. It extends IC-GAN [1] by introducing an external retrieval dataset (used for kNN search) and a pretrained fixed CLIP encoder (used for encoding image and text). The proposed method could be used with various image synthesis frameworks (e.g. diffusion-based and autoregressive models).\n\nA fundamentally similar concurrent work is kNN-Diffusion [2] (also mentioned by the authors in Appendix C).\n\n[1] Arantxa Casanova, Marlène Careil, Jakob Verbeek, Michal Drozdzal, and Adriana Romero Soriano. Instance-conditioned gan. *Advances in Neural Information Processing Systems*, 34, 2021.\n\n[2] Oron Ashual, Shelly Sheynin, Adam Polyak, Uriel Singer, Oran Gafni, Eliya Nachmani, and Yaniv Taigman. Knn-diffusion: Image generation via large-scale retrieval. *arXiv preprint arXiv:2204.02849*, 2022.
Strengths\n\n- The proposed method could be trained with images only, while allowing tasks including conditional, unconditional, and text-guided image synthesis. This is achieved by aligning the latent space to a pretrained fixed CLIP encoder.\n- Good experiments introducing the idea of retrieval-based image synthesis into diffusion models and autoregressive models.\n\nWeaknesses\n\n- One major difference the authors have mentioned against IC-GAN is the external retrieval dataset (as IC-GAN uses the same retrieval and training dataset). How does changing the retrieval dataset to a different one affect the results? Does the diversity matter, or is it the dataset size that matters? If we use the same training and retrieval dataset in this work, does the result differ a lot?\n- Domain transfer by replacing the retrieval dataset is also presented in IC-GAN; what is the fundamental difference here?\n- It seems that the quantitative experiments of the text-to-image task on COCO, and class-conditional synthesis on ImageNet against SOTAs, are missing.\n- In Figure 2, using the text representation only performs better than using kNN samples and the combination of the two. Does this mean NN is not important for text-to-image synthesis? Then, what is the fundamental reason to introduce NN retrieval here? Please see weaknesses above. Overall, I think this paper is a good extension of IC-GAN by introducing retrieval-based methods to diffusion and autoregressive models, along with the combination with CLIP. However, some ablation studies, quantitative experiments, and more in-depth analysis of the proposed method are missing. If the authors could address these, I am willing to raise my score. N/A", " This paper proposes a method of using an external database of images to condition a smaller generative image model for neural synthesis. Since the model is conditioned on the external database, some amount of domain transfer can be achieved by changing the exemplars in the database. This external database also tackles a scalability problem, as retrieving nearest neighbor exemplars from the database is more efficient than trying to parameterize larger and larger datasets into a generative model.\n\nPost Comments: Given the rebuttal of the authors addressing my concerns and an improvement of the ethics statement in the paper, I feel it's appropriate to increase my overall rating. I feel many of the timings per exemplar are a bit misleading, given that these are taken over large batches and amortized. Strengths:\nUse of an external database and retrieving nearest neighbor exemplars. Additionally, a database of images may not even be needed if you are pre-computing CLIP embeddings. This allows for a very efficient 2048 bits of representation per image.\n\nWith increasingly complex datasets, the semi-parametric models increase in recall performance (which is not true of the fully parametric baselines). \n\nWeaknesses:\nDomain transfer isn't explicitly shown. For example, a database of natural images used for training and cartoon images for exemplars to generate a \"cartoon tiger\" or \"animated bear\" would show a domain transfer from natural to animation images. The domain transfer examples shown are essentially the same visual domain as the training and the exemplars.\n\nFID isn't state of the art compared to classifier-guided ADM, even with about 80% of the parameters.\n\n 1. Figure 1 is very misleading. There is a missing graph discontinuity on the X axis for model params (e.g. 0 is at 325M params). Visually this is showing that the semi-parametric models are around 1/8th the size when in fact they are 2/3rds to ~5/6ths the size. Additionally, this graph shows a flattening of FID as the database increases. Is this true of a model whose |theta| + |D| is the same size as the ADM w/ classifier baseline?\n\n2. L56, retrieving the exemplars at inference time takes 0.95 ms. Since these images also need to be CLIP-encoded, what is the time or compute spent to encode |D|?\n\n3. One of the claims of this paper is domain transfer being possible with a change of the image database. Were any visually distinct domain transfers experimented with (e.g. style transfer)?\n\n4. What does the distribution of training data look like for parametric and semi-parametric approaches? Do semi-parametric models need less training time or fewer examples? As in all generative work, there is the danger of generating images that can be harmful (used to incite fear, riots, harassment). The authors mention no limitations or negative societal impact of their work, which I believe is an oversight. I don't think this research has any increased risk over other generative works; in fact, with careful curation of the database, this may actually *decrease* the potential to generate harmful content. The authors should more carefully consider the societal implications of their work." ]
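The "Neighbor-based approaches vs. paired data" discussion threaded through the responses above rests on CLIP's shared text-image space: at inference, a caption embedding can stand in for the neighbor image embeddings the model saw during training. The following hedged sketch illustrates only this substitution; it uses the public openai/CLIP package the authors link to, while `rdm.sample_with_conditioning` is a hypothetical entry point, not the authors' API.

```python
import clip
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)

with torch.no_grad():
    tokens = clip.tokenize(["an oil painting of a lighthouse"]).to(device)
    text_emb = model.encode_text(tokens)                       # (1, 512)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)  # unit norm

# The model was trained on k neighbor *image* embeddings; at inference the
# normalized *text* embedding is repeated to fill those conditioning slots.
k = 4
pseudo_neighbors = text_emb.unsqueeze(1).repeat(1, k, 1)       # (1, k, 512)

# rdm.sample_with_conditioning(pseudo_neighbors)  # hypothetical RDM call
```

This also makes the reviewer's counterpoint concrete: with an ImageNet-trained ResNet in place of CLIP, there is no text encoder producing embeddings in the same space, so the substitution above would not be possible.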
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "zoZBZ3JSSOs", "zok2Okjzhbs", "pjtFQ8nK2Cf", "xg10eqjy-S-", "sE1ExaQRCM-2", "nips_2022_Bqk9c0wBNrZ", "pjtFQ8nK2Cf", "IpJRHqnckp", "wkHCZW12AWu", "xtiedZZIeYn", "tD0hRvAra13", "k4X14VFGaHZ", "nips_2022_Bqk9c0wBNrZ", "nips_2022_Bqk9c0wBNrZ", "nips_2022_Bqk9c0wBNrZ", "nips_2022_Bqk9c0wBNrZ" ]
nips_2022_Yc4MjP2Mnob
Recommender Forest for Efficient Retrieval
Recommender systems (RS) have to select the top-N items from a massive item set. For the sake of efficient recommendation, RS usually represents users and items as latent embeddings and relies on approximate nearest neighbour search (ANNs) to retrieve the recommendation result. Despite the reduction of running time, the representation learning is independent of the ANNs index construction; thus, the two operations can be incompatible, which results in a potential loss of recommendation accuracy. To overcome the above problem, we propose the Recommender Forest (a.k.a. RecForest), which jointly learns the latent embedding and the index for efficient and high-fidelity recommendation. RecForest consists of multiple k-ary trees, each of which is a partition of the item set via hierarchical balanced clustering, such that each item is uniquely represented by a path from the root to a leaf. Given such a data structure, an encoder-decoder based routing network is developed: it first encodes the context, i.e., user information, into hidden states; then, leveraging a transformer-based decoder, it identifies the top-N items via beam search. Compared with the existing methods, RecForest brings in the following advantages: 1) the false partitioning of boundary items can be effectively alleviated by the use of multiple trees; 2) the routing operation becomes much more accurate thanks to the powerful transformer decoder; 3) the tree parameters are shared across different tree levels, making the index extremely memory-efficient. The experimental studies are performed on five popular recommendation datasets: with a significantly simplified training cost, RecForest outperforms competitive baseline approaches in terms of both recommendation accuracy and efficiency.
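A compact sketch of the indexing idea in this abstract: item embeddings are hierarchically partitioned into K balanced groups so that every item receives a unique root-to-leaf path (a K-ary code). This is an illustration under stated assumptions, not the paper's implementation; balancing is done here by a greedy capacity-constrained assignment to k-means centroids, and the paper's exact clustering procedure may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def balanced_split(emb, ids, K):
    """Split items into K near-equal groups guided by k-means centroids."""
    centers = KMeans(n_clusters=K, n_init=4, random_state=0).fit(emb).cluster_centers_
    dist = ((emb[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (n, K)
    cap = int(np.ceil(len(ids) / K))                 # balance constraint
    groups, counts = [[] for _ in range(K)], [0] * K
    for i in np.argsort(dist.min(axis=1)):           # confident items first
        for c in np.argsort(dist[i]):                # nearest centroid with room
            if counts[c] < cap:
                groups[c].append(i)
                counts[c] += 1
                break
    return [(emb[g], ids[g]) for g in (np.asarray(x, dtype=int) for x in groups)]

def build_codes(emb, ids, K, depth, prefix=(), codes=None):
    """Recursively assign each item id a K-ary root-to-leaf path."""
    codes = {} if codes is None else codes
    if depth == 0 or len(ids) == 1:
        for item in ids:
            codes[int(item)] = prefix
        return codes
    for branch, (sub_emb, sub_ids) in enumerate(balanced_split(emb, ids, K)):
        if len(sub_ids):
            build_codes(sub_emb, sub_ids, K, depth - 1, prefix + (branch,), codes)
    return codes

emb = np.random.default_rng(0).normal(size=(64, 16)).astype("float32")
codes = build_codes(emb, np.arange(64), K=4, depth=3)
print(codes[0])  # e.g. (2, 0, 3): the branch sequence for item 0
```

With such codes, recommending an item is equivalent to decoding its branch sequence, which is what enables the transformer decoder and beam search described in the meta-review below.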
Accept
The paper introduces a method for top-N item recommendation based on approximate nearest neighbor search (ANN). The authors formulate ANN as a sequence-to-sequence problem, the input being the user profile and activity, and the output being the top-N recommendations. The focus of the paper is on the computational efficiency of the ANN process. The proposed method jointly learns a tree-based index for organizing the items and a transformer-based decoder for the top-N recommendation. The index is composed of multiple trees. Experiments are performed on classical benchmarks. The reviewers consider this an original contribution with a convincing experimental evaluation. The authors added several complementary experiments, including additional baselines, during the rebuttal and answered the reviewers’ comments and questions satisfactorily. All the reviewers recommend acceptance.
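The seq2seq formulation summarized above reduces top-N retrieval to beam search over root-to-leaf branch sequences. As a hedged, self-contained illustration (not the paper's code), the sketch below uses a dummy scorer in place of the trained transformer decoder; all shapes and names are illustrative.

```python
import torch

K, depth, beam = 4, 3, 8  # 4-ary tree, 3 levels, beam width 8

def step_logprobs(prefixes: torch.Tensor) -> torch.Tensor:
    """Dummy stand-in for the decoder: log-probs over the K branches for each
    prefix. In RecForest this would be a transformer decoder conditioned on
    the encoded user history."""
    torch.manual_seed(prefixes.shape[1])  # deterministic per level, for demo
    return torch.log_softmax(torch.randn(prefixes.shape[0], K), dim=-1)

paths = torch.zeros(1, 0, dtype=torch.long)  # one empty prefix (the root)
scores = torch.zeros(1)

for _ in range(depth):
    lp = step_logprobs(paths)                     # (n_beams, K)
    cand = (scores[:, None] + lp).reshape(-1)     # expand every beam
    top = torch.topk(cand, min(beam, cand.numel()))
    parent, branch = top.indices // K, top.indices % K
    paths = torch.cat([paths[parent], branch[:, None]], dim=1)
    scores = top.values

# Each surviving path is a root-to-leaf branch sequence, i.e. an item code,
# ordered by score; mapping codes back to item ids yields the top-N list.
print(paths.tolist())
```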
train
[ "6IYbf8Kvmm", "K53RJVxQSWN", "w0l6hi_Qa6m", "bvGrTrVWU3t", "pDqEeB3hRD6", "aOlOx-G83bJ", "FUoe71pC5k_R", "FgV3oHvljA", "vfP6keg_9Sr", "YAG_bhRrLD" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors answered all my questions and addressed all my comments. Hence, I updated my overall rating.", " Thanks for clarifying the experimental settings. I would like to raise my evaluation score and vote for acceptance on this work.", " Thanks for your approval of our work and insightful suggestions.\n\nFollowing your suggestions, we have included these ablation studies into the paper, which is now located in Sec. G of the appendices of the updated submission.", " Thanks for the reply,\n\nBasically I feel good about this paper and their responses.\n\nJust please add these ablation study to the paper and hence make it stronger.\n\nAnd I would respectfully and nicely suggest the authors do not over-state this method as an end-to-end recommender, since it is highly rare to use end-to-end rather than multi-stage in real industrial world.", " Many thanks for your positive comments and helpful questions! The detailed responses to your questions are listed below:\n\n**S1: Is this the first paper that using such beam search way based on Transformer decoder to generate the Top-N results for recommendation?**\n\nYes. To the best of our knowledge, it is the first paper that transforms the recommendation task into a seq2seq task and uses the beam search to generate the sequence.\n\n**Q1: What if they use other decoders instead of Transformer? like RNN or CNN-based decoder?**\n\nMany thanks for your questions! We replace the Transformer Decoder with RNN and CNN model, the results are as follows. We can see that the model with Transformer Decoder always performs best. \n\n | (NDCG@20) | Amazon | MIND | Gowalla | Yelp |\n | :---------: | :----: | :----: | :-----: | :----: |\n | Transformer | 0.1912 | 0.6935 | 0.2795 | 0.2269 |\n | RNN | 0.1208 | 0.6912 | 0.2286 | 0.1984 |\n | CNN | 0.1218 | 0.6933 | 0.1758 | 0.1691 |\n\n**Q2: What contributes most to the improvement? the Transformer's power or the effectiveness of joint representation learning?**\n\nAccording to the above experimental results, Transformer indeed contributes to the recommendation accuracy. To further analyze the impact of joint training, we introduce another experiment, where the joint training is disabled. To be specific, we initialized the index parameter, i.e., the embedding of each branch, as the average of all the item embeddings within the corresponding branch. The index parameters are fixed afterward, such that the training will only update the remaining parameters within the recommender. Such a variation is referred as \"Fixed Index\" in our experiment. According to the experiment results presented below, the fixed index is worse than our original performance, which means the removal of joint training is unfavorable. In other words, the effectiveness of joint training can be reflected. \n\n | (NDCG@20) | Amazon | MIND | Gowalla | Yelp |\n | :---------------: | :----: | :----: | :-----: | :----: |\n | Original | 0.1912 | 0.6935 | 0.2795 | 0.2269 |\n | Fixed Index | 0.1429 | 0.6891 | 0.1604 | 0.1588 |\n\n\n**Q3: Why index size of JTM and TDM is O(ND) since they are also tree-based methods as shown in Figure 1?**\n\nThis is because we have to store the embedding of dimension D for each internal/leaf node in trees of JTM and TDM. Therefore, the index size of JTM and TDM is O(ND). 
However, RecForest only stores branch embedding and computes the node representation with the transformer decoder on the fly, so the index size of RecForest is much smaller than JTM and TDM.\n\n**Q4:Will a small K post any training issues on Transformer?**\n\nThanks for this insightful question! And the answer to your question is No. We test different values of K (see Table 4) and they are really small compared to K in NLP, but there is no issue happening during the training process.\n\n**Q5: Since K is a small number, is that necessary to share the \"word\" embeddings among the trees to save the memory?**\n\nSince the total number of branches within the tree is at the same magnitude as the number of items, sharing the embeddings of the “K branches” among different levels of trees saves a lot of model parameters. However, the branch embeddings are not shared among different trees, as shown in *Index Size* of Table 1. Sharing branch embedding among different trees leads to the degradation of retrieval accuracy according to our empirical observations.\n\n**Q6: Is it for the top-N retrieve stage but not for the rerank stage? (worse than DIN)**\n\nOur Reforest mainly targets on the retrieval of the top-N items, thanks to its high running efficiency and low memory consumption. At the same time, it is also OK to apply it as an end-to-end recommender, considering that it achieves comparable performance to DIN in many cases (as Table 2).", " Many thanks for your helpful and detailed comments. The detailed responses to your questions are listed as follows:\n\n**W1:It would be helpful to include statistical significance tests of the empirical results.**\n\nThank you for your suggestion! We conduct the significance tests between RecForest and the best index-based baseline (i.e. IPNSW in general) by repeating each method 5 times. We report the p-value of paired T-test as follows. In general, RecForest outperforms IPNSW in all cases and the superiority is significant in most cases. We need to point out that RecForest not just outperforms baselines in recommendation accuracy, but also beats the baselines in retrieval efficiency and memory cost.\n\n| | Movie | Amazon | Gowalla | Tmall | MIND | Yelp |\n| :-----------: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |\n| p for NDCG@20 | 0.00001 | 0.00549 | 0.00003 | 0.00002 | 0.00306 | 0.00107 |\n| p for NDCG@40 | 0 | 0.10965 | 0.00003 | 0.00002 | 0.00018 | 0.00309 |\n\n\n**Q1: It would be helpful to link the above statements/following features of RecForest and empirical results.**\n\nThanks for your question! We've added the linkage between our contributions to their experimental studies as follows. Besides, corresponding discussions in the Introduction (line 53-69) have also been revised accordingly. \n\n(1) Near-boundary item retrieval: Multiple trees are the key to solving this problem, so we conduct an ablation study on the Effect of Forest Construction (see section 3.6.1). The results in Figure 4 show that multiple trees achieve better results than a single tree, which confirms that a near-boundary item missed in one tree can be retrieved back on another tree.\n\n(2) Previous trajectory: The results in Table 3 show that our method performs better than previous methods which merely take account of the current node (i.e., TDM and JTM).\n\n(3) Memory-consuming tree index: In Table 1, we analyze the space complexity of indexes. In Table 3, we list their real memory cost. 
Both indicate the superiority of RecForest in memory consumption over other indexes.\n\n(4) Joint training cost (update cost): Due to the use of multiple trees, RecForest is much less sensitive to the partition of the item set. As a result, without any tree update, RecForest performs remarkably better than TDM and JTM with repetitive tree updates, as shown in Table 3. Therefore, RecForest can avoid the repetitive adaptation of the tree structure, saving a considerable portion of the training cost. In *Indexing Time* of Table 1, we analyze the index construction time for various indexing methods. JTM takes the longest time, scaling linearly with the training set size.\n\n**Limitation: The authors need to highlight the limitations of their method, preferably contrasting it with previous approaches.**\n\nThanks for your insightful suggestions; we have presented them in the conclusion and future work. More efficient decoding methods, such as non-autoregressive prediction, can further improve inference efficiency. In addition, we construct the tree based on the k-means algorithm, but there may be better ways to capture the similarity between items so that a better tree index can be generated.", " Thank you for your insightful suggestions! We appreciate your encouragement and provide detailed responses to your comments below:\n\n**W1: The detailed settings of some compared baselines, such as DIN and YoutubeDNN.**\n\nDue to the space limitation, we briefly introduce the experimental settings in Section 3.3 of the main text. For more details, please refer to Appendix E.\n\n**W2: The potential of the proposed method in tackling other types of recommendation scenarios.**\n\nThanks for the suggestions! We have indeed conducted the same experiments in the non-sequential scenarios as in the sequential scenarios, and similar conclusions can be drawn from the results. All the results can be found in Appendix F. \n\n**Q1: Compare the newly proposed RecForest approach with Transformer-based sequential recommender systems (e.g., Bert4Rec).**\n\nThanks for your insightful questions! We additionally include two Transformer-based sequential models (i.e., SASRec and Bert4Rec), which are ranking models and can be utilized to recommend items by brute-force ranking of all items. The results on all datasets are as follows. RecForest outperforms both SASRec and Bert4Rec in most cases, except on Gowalla. These results further validate the effectiveness of RecForest.\n\n\n\n| | **Movie** | | **Amazon** | |\n| --------- | ----------- | ------- | --------- | ------- |\n| | NDCG@20 | NDCG@40 | NDCG@20 | NDCG@40 |\n| SASRec | 0.402 | 0.4394 | 0.1868 | 0.2104 |\n| Bert4Rec | 0.3841 | 0.4151 | 0.1241 | 0.1419 |\n| RecForest | 0.5580 | 0.5682 | 0.2339 | 0.2570 |\n| | **Gowalla** | | **Tmall** | |\n| | NDCG@20 | NDCG@40 | NDCG@20 | NDCG@40 |\n| SASRec | 0.3955 | 0.403 | 0.0879 | 0.1026 |\n| Bert4Rec | 0.3626 | 0.3766 | 0.1057 | 0.1247 |\n| RecForest | 0.3783 | 0.3963 | 0.2059 | 0.2261 |\n| | **MIND** | | **Yelp** | |\n| | NDCG@20 | NDCG@40 | NDCG@20 | NDCG@40 |\n| SASRec | 0.6652 | 0.6682 | 0.2309 | 0.2639 |\n| Bert4Rec | 0.5024 | 0.5177 | 0.2478 | 0.2773 |\n| RecForest | 0.7583 | 0.7579 | 0.2766 | 0.3031 |\n\n**Q2: What are the specific parameter settings of some compared baselines in the evaluation section?**\n\nPlease see the above response to W1.", " In this work, a tree-based recommendation method, RecForest, is proposed to improve the efficiency of tree-based methods. Also, the model training cost can be reduced with the multiple K-ary trees, and the retrieval of near-boundary items becomes more effective. Experiments are performed on several recommendation datasets to show the model's effectiveness. Strengths:\n1. The limitation of near-boundary item retrieval has been addressed in this work.\n2. An ablation study is provided to investigate the effects of forest construction.\n3. Several widely used datasets are used for model evaluation.\n\nWeaknesses:\n1. In the evaluation section, it would be better to describe the detailed settings of some compared baselines, such as DIN and YoutubeDNN.\n2. It would be better to discuss the potential of the proposed method in tackling other types of recommendation scenarios, in addition to the sequential recommendation scenario.\n Considering that the model evaluation tackles the sequential recommendation scenario, it would be better to further compare the newly proposed RecForest approach with Transformer-based sequential recommender systems (e.g., Bert4Rec), or discuss the advantage of the new model. The new method is also built on the Transformer architecture for sequence encoding.\n\nWhat are the specific parameter settings of some compared baselines in the evaluation section?\n Please find the limitations in the weaknesses part.", " The authors are interested in efficiently retrieving personalized items for users based on users' preferences. They focus on the efficiency aspect of the recommendation problem. The empirical results cover several datasets with different ranges of # users, # items, and # interactions. They also include an analysis of the quality of suggestions.\n The efficiency of recommender systems is often a neglected problem in the traditional RecSys literature. Most works are interested in improving the quality of recommendations. However, this is a challenging and important problem, especially in practical settings.\n\nIt would be helpful to include statistical significance tests of the empirical results.
The authors listed the following limitations of previous approaches to justify their framework:\n\n- it is challenging to route to items located around the partition boundaries, given that the item set is hierarchically partitioned,\n- the accuracy of the beam search can be limited by the routing decision, which is made without consideration of the previous trajectory,\n- the tree-based index can be memory-consuming, given that the number of internal nodes is of the same magnitude as the leaf nodes,\n- the training stage of existing approaches is costly when it calls for the joint adaptation of the representation model and the tree index, given that the tree structure needs to be repetitively updated.\n\nIt would be helpful to link the above statements and the corresponding features of RecForest to the empirical results. For instance, a reference to which section contains the results that present the advantages of RecForest over baselines.\n\nThe authors correctly included the complexity analysis of the essential running times and index size. This helps understand the tradeoffs between the previous literature and the proposed framework.\n The authors need to highlight the limitations of their method, preferably contrasting it with previous approaches.\n", " Approximate nearest neighbour search (ANNS) is used to retrieve top-N results for a user from a massive item set in personalized recommendation.\nThe paper creatively formulates the ANNS problem as a sequence-to-sequence problem: the user profile and history are fed into the encoder of a Transformer, while the top-N results are obtained by beam search over the decoder of that Transformer. To support this, the items are organized into a K-ary tree. To enhance the robustness, they construct several trees to form a forest. Strengths: \n\n(1) Hats off to the core idea \"Given the hierarchical numbering of the items, the recommendation turns out to be a sequence-to-sequence problem: based on the encoded user information, paths to the most preferable items are progressively decoded via beam search, from which the top-n recommendation is made. \" Hence, the problems of ANN search and item embedding learning are merged and solved in one system.\n\nIs this the first paper to use such a beam-search approach based on a Transformer decoder to generate the top-N results for recommendation?\n\n(2) The experiments show RecForest significantly outperforms all efficient recommenders with indexes on all datasets w.r.t. NDCG@20 and NDCG@40. \n\nWeakness:\nI don't see an obvious weakness, but I hope the authors can carefully answer my questions presented in the following section because I feel some ablation study might be missing. (1) What if another decoder were used instead of the Transformer, such as an RNN- or CNN-based decoder?\n\n(2) Yes, RecForest significantly outperforms all efficient recommenders with indexes. But what contributes most to this improvement: the Transformer's power or the effectiveness of joint representation learning?\n\n(3) Why is the index size of JTM and TDM O(ND), since they are also tree-based methods, as shown in Figure 1?\n\n(4) The K-ary tree means the vocabulary of the decoder is K, right? In NLP, K is a large number like 30,000. But in your case, K should be a very small number, like 2 if binary, right? Hence, I just wonder whether such a big difference poses any training issues for the Transformer.\n\n(5) Based on (4), since K is a small number, is it necessary to share the \"word\" embeddings among the trees to save memory? 
\n\n(6) In real-world recommendation, there are two stages: top-N retrieval and reranking. So RecForest is for the top-N retrieval stage but not for the reranking stage, right? I ask since its performance is still worse than DIN's in the experiments. I don't see a negative societal impact of their work." ]
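The core idea praised in the last review — decoding root-to-leaf paths of a K-ary tree by beam search to obtain a top-N recommendation — can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; `score_step` (the decoder's per-branch log-probability) and all other names are assumptions:

```python
import heapq
from typing import Callable, List, Tuple

Path = Tuple[int, ...]

def beam_search_kary(
    score_step: Callable[[Path, int], float],  # log-prob of choosing child k after a partial path
    K: int,       # branching factor, i.e., the decoder "vocabulary" size discussed in question (4)
    depth: int,   # tree height; each complete path identifies one leaf, i.e., one item
    beam: int,    # beam width, i.e., the top-N budget
) -> List[Tuple[float, Path]]:
    """Return the `beam` highest-scoring root-to-leaf paths of a K-ary tree."""
    frontier: List[Tuple[float, Path]] = [(0.0, ())]
    for _ in range(depth):
        candidates = [
            (logp + score_step(path, k), path + (k,))
            for logp, path in frontier
            for k in range(K)  # expand every child branch of every beam entry
        ]
        frontier = heapq.nlargest(beam, candidates, key=lambda c: c[0])
    return frontier
```

With a forest of several trees, one would run this decoding once per tree and merge the resulting candidate lists before producing the final top-N.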
[ -1, -1, -1, -1, -1, -1, -1, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "aOlOx-G83bJ", "FUoe71pC5k_R", "bvGrTrVWU3t", "pDqEeB3hRD6", "YAG_bhRrLD", "vfP6keg_9Sr", "FgV3oHvljA", "nips_2022_Yc4MjP2Mnob", "nips_2022_Yc4MjP2Mnob", "nips_2022_Yc4MjP2Mnob" ]
nips_2022_lkrnoLxX1Do
Self-Supervised Image Restoration with Blurry and Noisy Pairs
When taking photos in an environment with insufficient light, the exposure time and the sensor gain usually need to be carefully chosen to obtain images with satisfactory visual quality. For example, images with high ISO usually have inescapable noise, while long-exposure ones may be blurry due to camera shake or object motion. Existing solutions generally suggest seeking a balance between noise and blur, and learning denoising or deblurring models under either full or self-supervision. However, real-world training pairs are difficult to collect, and the self-supervised methods that rely merely on blurry or noisy images are limited in performance. In this work, we tackle this problem by jointly leveraging the short-exposure noisy image and the long-exposure blurry image for better image restoration. Such a setting is practically feasible since short-exposure and long-exposure images can be either acquired by two individual cameras or synthesized from a long burst of images. Moreover, the short-exposure images are hardly blurry, and the long-exposure ones have negligible noise. Their complementarity makes it feasible to learn a restoration model in a self-supervised manner. Specifically, the noisy images can be used as the supervision information for deblurring, while the sharp areas in the blurry images can be utilized as the auxiliary supervision information for self-supervised denoising. By learning in a collaborative manner, the deblurring and denoising tasks in our method can benefit each other. Experiments on synthetic and real-world images show the effectiveness and practicality of the proposed method. Code is available at https://github.com/cszhilu1998/SelfIR.
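The abstract's claim that a noisy image can serve as supervision rests on a standard Noise2Noise-style expectation argument, which also underpins the rebuttals below. As a sketch for intuition (a generic derivation, not an excerpt from the paper; $f$ denotes a generic restoration network introduced here for illustration), write the noisy target as $\mathbf{I}+\mathbf{N}$ with zero-mean noise $\mathbf{N}$ independent of the prediction $f(\mathbf{I}_\mathcal{B})$:

```latex
\mathbb{E}\,\big\|f(\mathbf{I}_\mathcal{B}) - (\mathbf{I}+\mathbf{N})\big\|^2
= \mathbb{E}\,\big\|f(\mathbf{I}_\mathcal{B}) - \mathbf{I}\big\|^2
  \underbrace{-\, 2\,\mathbb{E}\big[(f(\mathbf{I}_\mathcal{B}) - \mathbf{I})^{\top}\mathbf{N}\big]}_{=\,0}
  \;+\; \underbrace{\mathbb{E}\,\|\mathbf{N}\|^2}_{\text{const}},
```

so, in expectation, minimizing the loss against the noisy target is equivalent (up to a constant) to minimizing it against the clean image — which is why a short-exposure noisy frame can supervise deblurring without ground truth.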
Accept
All three reviewers voted to accept the paper, and the detailed rebuttals from the authors helped to clarify the reviewers' original concerns. One remaining concern from one of the reviewers is whether this method should be referred to as "self-supervised". However, the authors clarified that it is reasonable to consider this method self-supervised for real-world data. I am therefore fine with leaving "self-supervised" in the title.
test
[ "ZMlHAarxFeW", "_O2a_5l-x6", "_SOZYsJjO_", "rcNtLLEUwOI", "syFZN9MQ_N", "-j__9z6ZPg", "OPLmQYySNXu", "hhYKpl26-ag", "oTf5v00B7F2", "idCelgilBbs", "jtGKw-IbBRq" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nThanks for your more detailed comments.\n\n1. GoPro dataset\n\nThank you for your more detailed explanation.\nWe acknowledge that the sharp images in the GoPro dataset are not defect-free, and will modify the relevant descriptions in the revision.\n\n\n2. Self-supervised learning\n\nTo begin with, we respect different opinions about self-supervised learning.\nHowever, following the setting of synthetic experiments in previous self-supervised image denoising methods, only the corrupted images (with a single degradation, $i.e.$, noise) are utilized for model training.\nSimilarly, for the image restoration task in this paper, only the corrupted images (with multiple degradations, $i.e.$, noise and blur) are utilized for model training.\nIn other words, even for synthetic experiments, the ground-truth images are not involved in the training phase.\nTherefore, it can be regarded as self-supervised learning.\nWe hope this response could address your concerns.\n\n\n3. Ghost artifacts\n\nMany thanks. We will add these works in the revision.\n", " Many thanks for your acknowledgment. We will revise the paper according to your comments.", " I thank the authors for the detailed comments.\nHere are my follow-up comments.\n\n1. GoPro dataset\n\nThe authors of GoPro dataset admit the quality limitation in their follow-up paper: \nS. Nah et al., \"NTIRE 2019 Challenge on Video Deblurring and Super-Resolution: Dataset and Study,\" CVPRW 2019\n\nOne of the main purposes of the REDS dataset was to improve the quality of reference images as quality limitations are prevalent in high-speed videos.\nTheir presentation material explains this issue in detail: https://drive.google.com/file/d/13F6UEyBDFGTiFDyxqLzrPiq4Y2-8BKQE/view?usp=sharing\n\nBesides,I agree with the authors that the proposed method works even with the inherent noise and I don't deny that. \nWhat I wanted to deliver is that GoPro should not be considered defect-free and what is written in the paper may confuse readers.\n\n2. Self-supervision\n\nI understand what the authors mean, however, I'm still reluctant to refer to this method as self-supervised, at least for the synthetic dataset experiments.\nA more precise explanation would be that this method applied to real-world datasets in a self-supervised manner.\n\n3. Real-world datasets\n\nThanks for the explanation.\n\n4. Ghost artifacts\n\nAuthors may also want to refer to additional works:\n\n* P. Wieschollet et al., \"Learning blind motion deblurring,\" ICCV 2017\n* S. Zhou et al., \"DAVANet: Stereo Deblurring with View Aggregation,\" CVPR 2019\n\n", " Thank you for the useful information provided in the rebuttal. Particularly I am happy with the explanation provided for the data acquisition. I do not have a deep understanding of photography so am not able to determine for myself how easy it is to modify a camera's internal control, but I am wiling to take your word for it that it would in principle be possible for those who do have more control over the camera mechanism to gather pairs of images according to the third method you described. On this basis I am cementing my assessment. ", " We thank the reviewer for the valuable comments and suggestions.\nWe appreciate the reviewer's questions and hope our responses could address the concerns.\n\n\n1. 
Originality and novelty (difference from Noise2Noise [15] and Neighbor2Neighbor [9])\n\nWe kindly remind the reviewer that our main contribution is to leverage the nature of blurry and noisy image pairs to learn a self-supervised model and improve restoration performance. \n\nNoise2Noise [15] demonstrates that the denoising model can be trained with a noisy-noisy pair when the noise in the target image is independent of that in the input image.\nWe show that the deblurring model can be supervised with a same-scene noisy image, which is not mentioned by Noise2Noise.\nWe further demonstrate it through experiments in Table 3, which show that taking noisy or clear images as the supervision leads to similar performance on the deblurring task.\n\nIn terms of Neighbor2Neighbor [9], we directly adopt its conclusion that the sub-sampled $g_1(I_N)$ and $g_2(I_N)$ images are approximately identical in content but have independent noise.\nThat is to say, $g_2(I_N)$ can serve as the supervision for denoising $g_1(I_N)$.\nCombined with the conclusion of Sec. 3.1 (Deblurring with Noisy Image), $g_2(I_N)$ can also serve as the supervision for deblurring $g_1(I_B)$.\nThus, we can input $g_1(I_N)$ and $g_1(I_B)$ together into the model, $i.e.$, $\\{g_1(I_N), g_1(I_B)\\}$, and take $g_2(I_N)$ as supervision information to perform self-supervised learning (see Eqns. (10) and (11)).\nIn addition, the sharp areas in $g_1(I_B)$ can also provide auxiliary supervision for model learning, and the performance can be further improved.\n\n\n\n2. More explanations of Sec. 3.2 (Denoising with Long-exposure Image)\n\nFirst, it is difficult to detect whether a single pixel is blurred or not.\nIn other words, to build the mask of sharp areas at the pixel level, the pixel and its surrounding pixels are both required to form a patch to determine whether it is blurred or not.\nTherefore, we have tried dividing the image into overlapping patches to obtain a pixel-based mask.\nIt shows a similar performance to the non-overlapping patch-based one, but requires more running time and GPU memory.\nStriking a trade-off between performance and efficiency, we adopt the non-overlapping patch-based way.\n\nSecond, the SSIM term in Eqn. (8) is mainly used to detect the ringing artifact areas (as shown in Figure 1(c)) in blurry images. \nThe variance of these areas is even larger than that of the denoised image. \nSo it is easy for these areas to be mistaken for sharp areas when only variance is used. \nWe have conducted an experiment that removes the SSIM term in Eqn. (8). \nThe PSNR metric drops by 0.1 dB, which shows the effectiveness of the SSIM term.\n\n\n3. Clarity and more explanations of complementarity\n\nThe complementarity between blurry and noisy images lies in two aspects:\n(1) The noisy image can provide supervision information for deblurring (Sec. 3.1), and the blurry image can also provide auxiliary supervision for denoising (Sec. 3.2).\nSo we can learn to deblur or denoise without ground-truths.\n(2) Better restoration performance can be obtained when jointly using the two images rather than learning the deblurring or denoising task individually. \nSo we further design the self-supervised restoration model that takes the two degraded images as input and learns from both tasks (Sec. 3.3).\nIn the revision, we will reorganize Sec. 3.3 for a clearer statement.\n\n\n\n4. Collaborative learning and self-supervised learning\n\nIn Secs. 
3.1 and 3.2, we introduce the deblurring subtask and denoising subtask individually for clarity.\nFrom the perspective of these two subtasks, it is collaborative learning, since information from the other task is utilized.\nWhen we combine them into the proposed framework in Sec. 3.3, the task of this work is image restoration (deblurring and denoising are subtasks of it), and we can regard it as self-supervised learning.\n\n\n\n5. How is the sharp patch detected in Sec. 3.2 used by co-learning in Sec. 3.3?\n\nIn Sec. 3.3, for obtaining mask $\mathit{m}^\mathit{n}$, we replace $\mathbf{I}_\mathcal{B}$ and $\mathbf{\tilde{I}}_\mathcal{N}$ in Eqn. (8) with $\mathit{g}_1(\mathbf{I}_\mathcal{B})$ and $\hat{\mathcal{D}}(\mathit{g}_1(\mathbf{I}_\mathcal{B}), \mathit{g}_1(\mathbf{I}_\mathcal{N}))$, respectively.\nNote that $\hat{\mathcal{D}}$ has the same parameters as $\mathcal{D}$ but has no gradient for back-propagation.\nThen we calculate $\mathcal{L}_\mathit{aux}(\mathcal{D}(\mathit{g}_1(\mathbf{I}_\mathcal{B}), \mathit{g}_1(\mathbf{I}_\mathcal{N})), \mathit{g}_1(\mathbf{I}_\mathcal{B}))$ according to Eqn. (9) for obtaining the auxiliary loss.\n", " 6. Is there any risk of inconsistency between the patch-dividing strategy in Sec. 3.2 and the sub-sampling operation in Sec. 3.3?\n\nAn experiment has been conducted to evaluate the impact of the sub-sampling operation on sharp-area detection.\nIn the experiment, we only replaced $\mathbf{\tilde{I}}_\mathcal{N}$ in Eqn. (8) with $\hat{\mathcal{D}}(\mathbf{I}_\mathcal{B}, \mathbf{I}_\mathcal{N})$ for obtaining the mask, and then calculated $\mathcal{L}_\mathit{aux}(\mathcal{D}(\mathit{g}_1(\mathbf{I}_\mathcal{B}), \mathit{g}_1(\mathbf{I}_\mathcal{N})), \mathit{g}_1(\mathbf{I}_\mathcal{B}))$.\nThe results show that it has a similar performance to the method described in item 5.\n\n\n7. Is it assumed that the blur kernel is known?\n\nNo, we do not assume that the blur kernel is known.\nReal-world blur is generally caused by camera shake and object motion. \nAlthough some trajectory information of the camera shake can be obtained by the gyroscope, the object motion is very complex and it is difficult to specify the motion blur kernel.\n\n\n\n8. Visual quality comparison for real-world images\n\nFigures I and J in the supplementary material show visual comparison results for real-world images.\nIt can be seen that our results are clearer and more photo-realistic.\n\n\n9. 
More explanations of limitations\n\nNeighbor2Neighbor [9] shows that the two images obtained through neighbor sub-sampling of a noisy sRGB image can be used as an input-target pair to train a self-supervised denoising model.\nBut for Bayer images in raw-RGB space, sampled pixels are not neighboring but sub-neighboring, as the neighbors of a pixel usually don't have the same color type as the pixel.\nThis leads to a larger difference in the corresponding pixel-level content of the sampled images and harms performance.\nIt is also the reason why Neighbor2Neighbor [9] and our SelfIR perform relatively worse on synthetic raw noise than on sRGB noise.\n\n\nFor the evaluation of real-world images, due to the lack of ground-truth images, we have to utilize no-reference image quality assessment (IQA) metrics ($i.e.$, NIQE [19], NRQM [17], and PI [2]) to make quantitative comparisons.\nHowever, NIQE and NRQM are sometimes too unstable to report the performance accurately.\nWe recommend mainly referring to the more stable metric PI, which is a combination of NIQE and NRQM.\nIn terms of the PI metric, our SelfIR achieves the best performance.\nFurthermore, as described in Sec. 7, we will build a real-world dataset with high-quality ground-truths for improving experimental assessments.", " \nWe thank the reviewer for the valuable comments and suggestions.\nWe appreciate the reviewer's questions and hope our responses could address the concerns.\n\n\n1. More explanations on the GoPro dataset\n\nFirst, we kindly remind the reviewer that the definition in the introduction (L23-33) only applies to low-light photography.\nThe GoPro dataset is captured in a bright environment.\nThe observation that the short-exposure image is noisy does not apply to the GoPro dataset, as a low ISO can be set in bright environments.\n\nSecond, even with some mild noise $\mathbf{N}$, it does not affect performance evaluation when taking the sharp images of the GoPro dataset as ground-truths.\nWe can denote the sharp image from GoPro as \n\n$\mathbf{I}_{GoPro}=\mathbf{I}+\mathbf{N},$\n\nwhere $\mathbf{I}$ is the ideal clean and sharp image.\nWhen calculating the MSE (the intermediate result for obtaining PSNR) between the model output $\mathbf{\hat{I}}$ and $\mathbf{I}_{GoPro}$, \n\n$[\mathbf{\hat{I}}-\mathbf{I}_{GoPro}]^2=(\mathbf{\hat{I}}-\mathbf{I})^2-2(\mathbf{\hat{I}}-\mathbf{I})\mathbf{N}+\mathbf{N}^2.$\n\nSimilar to Eqn. (5), the expectation of the second term is zero and that of the third one is constant.\nThus, the relative order of the average PSNR index for the methods will not be affected.\nIn fact, the ground-truth of many denoising datasets ($e.g.$, SIDD [1]) is obtained by averaging multiple images, where the noise is also not fully eliminated.\n\n\n2. More explanations of self-supervised learning\n\nFor synthetic experiments, we add noise to the sharp images of the GoPro dataset to generate noisy images.\nThe sharp images are only used to synthesize noisy data, but not to calculate the loss function during training.\nSuch operations are the same as in previous self-supervised denoising methods ($e.g.$, N2V [11], Laine19 [13], DBSN [38], R2R [23], Neighbor2Neighbor [9], and Blind2Unblind [37]).\nFollowing these methods, our SelfIR can also be viewed as a self-supervised method.\n\nIn addition, no ground-truths are used for experiments on the real-world dataset.\nThis further illustrates the self-supervised nature of our method.\n\n\n3. 
Experiments on the real-world dataset\n\nFor real-world dataset acquisition, we take blurry and noisy images concurrently using two adjacent cameras in this work.\nSpatial misalignment indeed exists between the image pair.\nTherefore, we exploit the optical flow network PWC-Net [8] to align the image pair.\nPlease see the detailed descriptions in Section C of the supplementary material. \n\nFor evaluation on the real-world testing set, we use the real-world training set to fine-tune the model, which is pre-trained with synthetic sensor noise in a self-supervised manner.\nOther self-supervised models to be compared are also trained in this way.\nFigures I and J show visual comparison results on real-world images, and the noise in the noisy images is real.\n\nFinally, we would like to point out that our main contribution is to exploit the complementarity of long- and short-exposure images for self-supervised image restoration and improved performance. \nFor the misalignment issue, on the one hand, notable progress has been made in optical flow estimation, and this work utilizes these advances directly.\nOn the other hand, another way of dataset acquisition can avoid this issue.\nWe can take dozens or even hundreds of burst images with a camera, where the middle frame is taken as the short-exposure image, and the average of all frames is taken as the long-exposure one.\nPlease see the detailed descriptions in `Response to Reviewer njZ6 - 1. Feasibility of real-world dataset acquisition'. \n\n\n4. Related works and typos\n\nMany thanks.\nWe will add some burst image denoising and burst image deblurring methods to the related works.\nAnd we will fix these typos in the revision.\n\n\n5. About the ghost artifacts\n\nThanks for pointing out this paper; we will cite it in the revision.\nHowever, we would like to leave the synthesis method unchanged to be consistent with the real situation, since ghost artifacts also exist in real-world blurry images (especially in areas with flickering lights).", " We thank the reviewer for the valuable comments and suggestions.\nWe appreciate the reviewer's questions and hope our responses could address the concerns.\n\n1. Feasibility of real-world dataset acquisition\n\nThe real-world data can be obtained in the following three ways:\n(1) taking two images concurrently with two adjacent cameras;\n(2) taking two images one-by-one with a single camera;\n(3) taking dozens or even hundreds of burst images with a camera, where the middle frame is taken as the short-exposure image, and the average of all frames is taken as the long-exposure one.\n\nSince we have no access to the camera's internal control, we choose the first way, which is the most feasible approach for us.\nBut for manufacturers' practical applications, the third one may be more suitable.\nIn this way, the averaged (long-exposure) image over multiple noisy images has negligible noise, as the noise in the burst images is random and generally zero-mean.\nMoreover, there is no misalignment between the long- and short-exposure images using the third way.\n\n\n2. 
More explanations on our real-world dataset.\n\nThe main purpose of collecting the dataset is to demonstrate the effectiveness of SelfIR on real-world images.\nLimited by time and equipment, the scale of the dataset is indeed small.\nIn order to avoid insufficient evaluation, we use as many images as possible ($i.e.$, half of them) for testing.\nIn order to avoid insufficient training, we first pre-train the model with synthetic sensor noise (in a self-supervised manner) and then fine-tune it on the collected training images.\nAfter fine-tuning, the PI [2] metric on the collected real dataset is improved by 0.55; we will add this result in the revision.\n\nIn addition, it is also our follow-up goal to build a larger-scale open dataset with ground-truths, and we hope it could better facilitate the development of this task.\n\n3. Could other intermediate exposure time images also be incorporated? \n\nWe believe that this has the potential to further boost the model performance, since the intermediate exposure time images may bring more useful and richer information.\nHowever, it is non-trivial to directly extend our framework to involve these images (both noisy and blurry) in a self-supervised manner.\nTherefore, we would like to reconsider the framework and explore this in future work.\n\n\n4. Weighting hyperparameters of loss terms\n\nThe weighting hyperparameters $\lambda_{reg}$ and $\lambda_{aux}$ are both set to 2 by default for experiments in sRGB space.\nHere, we have varied $\lambda_{reg}$ or $\lambda_{aux}$ to conduct experiments on Gaussian noise.\nThe following two tables show the experimental results.\nThe results show that the sensitivity of SelfIR to $\lambda_{reg}$ and $\lambda_{aux}$ is acceptable.\nWe will add the results in the revision.\n\n|$\lambda_{reg}$| PSNR / SSIM / LPIPS |\n| :----: | :----: |\n| 0 | 35.20 / 0.9473 / 0.097 |\n| 1 | 35.64 / 0.9492 / 0.082 |\n| 2 | 35.74 / 0.9499 / 0.076 |\n| 4 | 35.72 / 0.9496 / 0.075 |\n| 8 | 35.73 / 0.9497 / 0.072 |\n\n\n|$\lambda_{aux}$| PSNR / SSIM / LPIPS |\n| :----: | :----: |\n| 0 | 35.65 / 0.9492 / 0.080 |\n| 1 | 35.73 / 0.9498 / 0.078 |\n| 2 | 35.74 / 0.9499 / 0.076 |\n| 4 | 35.73 / 0.9499 / 0.076 |\n| 8 | 35.67 / 0.9496 / 0.077 |\n\n\n5. About the diffusion model\n\nWe admit that we currently do not know much about diffusion models.\nTo the best of our knowledge, most existing diffusion models in image restoration are learned in a supervised manner [a, b, c].\nSo it still requires further exploration to achieve self-supervised image restoration with blurry and noisy pairs via diffusion models.\n\n[a] Saharia, Chitwan, et al. \"Image super-resolution via iterative refinement.\" arXiv preprint arXiv:2104.07636 (2021).\n\n[b] Rombach, Robin, et al. \"High-resolution image synthesis with latent diffusion models.\" CVPR. 2022.\n\n[c] Whang, Jay, et al. \"Deblurring via stochastic refinement.\" CVPR. 2022.", " This paper considers the problem of taking high-quality clean images given the difficulty of the tradeoff between obtaining blurry images due to long exposure times, and noisy images from short exposure times. A typical way to handle the tradeoff is to find a compromise between the two, then use a deblurring or denoising model to fix up the image (the models trained using supervision or self-supervision). 
Instead, this paper proposes to make use of both a blurry and a noisy image, taking advantage of the fact that the blurry image will have very little noise, and the noisy image very little blur, in order to co-train *both* a deblurring and a denoising model. Using these two models, the authors provide a method for producing a single clean image given a blurry and a noisy image. \n \nThe main contribution of this work seems to be the idea that noisy and blurry images contain complementary information that, if both images are collected, can be used to train both a deblurring and a denoising model. Using both of these models to restore images then leads to good clean images. This message seems clear and intuitive, and the method a useful contribution. So I am happy to lean towards acceptance. But there is an important caveat: it is non-trivial, even infeasible, to take both a long- and a short-exposure image of the same scene. The authors' solution is to use two cameras side by side taking photos simultaneously. So I am quite worried about the general availability of the type of data required to use this method. \n\n---\n\nTo summarize some points:\n- The paper is clearly written and highly accessible.\n- The idea of using the complementary information in noisy vs blurry images is natural and intuitive\n- Strong quantitative and qualitative results: the proposed method seems to improve over existing supervised & self-supervised denoising methods (& more). \n\n---\n\nSome questions:\n- The real-world dataset only contains 61 images, only 30 of which are used for training. This is clearly a really small dataset. I would really like to hear from the authors on their thoughts about this data issue, and how it affects the viability of the joint-training method. \n- Could other intermediate exposure time images also be incorporated? So instead of just having very short and relatively long exposures, also include medium-length exposures.\n- The overall loss includes two weighting hyperparameters (Eqn 12). How sensitive is the method to the choice of these hyperparameters?\n- Recent works in diffusion modeling have found that when denoising a sample it is better to parameterize the denoising model to *predict the noise* instead of directly predicting the clean sample. Did you at all look into this? It could either 1) improve the current method, or 2) provide interesting counter-evidence to the generality of the observation from diffusion modeling.\n\n---\n\nA disclaimer: I do not keep up to date with the recent literature in image restoration, so I may not be aware of related work.\n\n See previous section. Yes.", " This paper proposes a self-supervised technique for image restoration from a blurry and a noisy image pair. By observing that each task can be trained from the other low-quality image, the authors propose a method that brings sharpness from the noisy image and cleanliness from the blurry image. \nHowever, I'm not sure if this method is a true self-supervised method. The training objective seems to be recovering the real noisy image from the blurry and (synthetic) augmented noisy image pair.\n [Strengths]\n\nWhile there have been attempts to use the blurry and the noisy images to achieve better-quality images, this method is novel in terms of its self-supervised design. The self-supervised learning process builds on top of observations about the nature of deblurring and denoising. The experimental results look promising, provided my concerns are resolved. 
\n\n[Weaknesses]\n\n- Related Works\n\nI think the authors should also refer to previous burst image denoising and burst image deblurring works.\nThe authors mainly describe self-supervised image denoising papers only. (+ 1 self-supervised deblurring paper)\n\n- L196 \n\nFrom the definition in the introduction (L23-33), the blurry-sharp image pair in the GoPro dataset is an I_B and I_N pair (not I), as the sharp image is taken with a short exposure. The only difference from the assumption is that the GoPro dataset was captured in a bright environment where 240 fps video capturing is possible; however, noise exists and image quality is low.\nThe authors may augment the sharp & noisy image with more aggressive noise for training purposes (L196-199); however, the sharp image should not be considered a ground-truth.\n\nOtherwise, the proposed method is not a true self-supervised method, as I_N is generated from I in the training process. Without I (which is considered GT here), training is impossible.\n\nThe authors should at least show that the proposed method could achieve a higher-quality image D(I_B, I) which is cleaner and sharper than I.\n\nIn Figures I and J, is the noise in the noisy image real or synthetic? Also, how could the blurry and the real image be well-aligned?\n\n- Basic setup \n\nAre the blurry and the noisy image pair assumed to be taken 1) at the same time from different cameras or 2) at different times from a single camera?\nFor 1), the pair should be stereo images with disparity, and for 2), there should be a spatial offset between the pair due to motion.\nHowever, training from a synthetic GoPro dataset has well-aligned image pairs. Does training with the aligned data generalize to realistic-scenario images?\n\n\n- typos\n\nL141 without than accept a shoddy option -> without (sharp areas?) than to accept a shoddy option\n\nL147 patches.For -> patches. For\n\nL172, 178 sub-sampling images -> sub-sampled images\n\nL194 Hero 4 Black -> Hero4 Black \n L158 The ghost artifact has been pointed out as an imperfect blurry image in the following paper:\n\nS. Nah et al., \"NTIRE 2019 Challenge on Video Deblurring and Super-Resolution: Dataset and Study,\" CVPRW 2019\n\nThey interpolate the frames before averaging the sharp (& noisy) images to remove the ghost artifact.\nFollowing similar approaches may benefit the detection of sharp regions in equation (8).\n Limitations described in the paper.", " This paper proposes a self-supervised image restoration method named ``SelfIR''. The basic idea is to work with a pair of short-exposure noisy and long-exposure blurry images. The complementary nature of blurry and noisy pairs makes it possible to jointly utilize the auxiliary supervision information for self-supervised denoising. The reported experimental results are promising and convincing. SelfIR can achieve better performance than competing self-supervised denoising methods on synthetic datasets and comparable performance to N2N (ref. [15]) on real-world datasets. Strengths:\n+ The idea of exploiting the complementary nature of blurry and noisy image pairs is sensible. There is some novelty with the developed co-learning of deblurring and denoising in Sec. 3.3.\n+ The reported experimental results are mostly convincing and promising. Improved denoising results over Ref. [9] (CVPR'2021) and Ref. [37] (CVPR'2022) have been reported on synthetic sRGB images.\n\nWeaknesses:\n- Originality. I have found more clarification is needed for Sec. 3.1 and Sec. 3.3. How is Sec. 3.1 different from Noise2Noise [15] and how is Sec. 
3.3 different from Neighbor2Neighbor [9]? Many technical details such as Eqs. (4)-(6) and Fig. 2(b) have strong similarity to previous works. Even though the authors are inspired by those existing works, they need to highlight what new insights this paper brings. I am afraid that the novelty of these two subsections is limited in the current version.\n- Quality. I have two major concerns over Sec. 3.2 (denoising with long-exposure image). First, I find the strategy of dividing an image into non-overlapping patches ad hoc. Even though \"it is crucial to pinpoint the sharp areas in the long-exposure images\", I don't understand why the authors take a patch-based (instead of a pixel-based) approach. Second, the use of SSIM in Eqs. (7)-(8) lacks substantial justification. Maybe it is based on heuristics, but the quality of presentation for this paper is lacking, in my biased opinion. \n- Clarity. I wish the authors could clarify more about the self-supervised learning procedure in Sec. 3.3. The discussion about \"complementarity\" (line 169) is vague because the authors did not explicitly discuss how the complementary nature was exploited by the design of the co-learning method.\n- Significance. The importance of this work seems to be to show that the proposed SelfIR works better in real-world situations, but the majority of the experimental results (especially visual comparisons) are for synthetic images only. 1. Is the proposed SelfIR \"collaborative learning\" or \"self-supervised learning\" or both (Line 170-171)? It seems to me Eq. (9) is collaborative learning since it involves the blurry-noisy image pair, while Eqs. (10)-(11) can be interpreted as self-supervised learning since they use the pseudo-GT (output of restoration network D). Please correct me if I am wrong. \n2. How is the sharp patch detected in Sec. 3.2 used by co-learning in Sec. 3.3? \n3. Is there any risk of inconsistency between the patch-dividing strategy in Sec. 3.2 and the sub-sampling operation in Sec. 3.3?\n4. Do you assume the blur process with non-uniform kernels K is known in Eq. (1)? If yes, this seems a strong assumption because non-uniform kernel estimation from real-world blurry images is a long-standing open problem.\n5. Do you have a visual quality comparison for denoising real-world raw sRGB images? Figs. 3-4 are for synthetic images only, where supervised learning (the baseline) is easy to obtain. When does the proposed SelfIR fail? For example, Table 2 shows that the proposed method falls behind N2N [15] on PSNR and Blind2Unblind [37] on NIQE. Any discussion or explanation about the limitation of SelfIR? " ]
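The training signal debated throughout this thread — sub-sampled noisy targets plus a masked auxiliary term on sharp areas — can be summarized in a short sketch. This is illustrative only, not the released code: the sub-sampler choice, mask handling, and all names are assumptions; the regularization loss is omitted; only the default `lam_aux=2` follows the rebuttal's stated setting.

```python
import torch
import torch.nn.functional as F

def neighbor_subsample(x: torch.Tensor):
    """Split an image into two half-resolution siblings with (approximately) the
    same content but independent noise, in the spirit of Neighbor2Neighbor [9].
    Here we simply take two diagonal pixels of each 2x2 cell; the actual method
    samples neighbors randomly."""
    return x[..., 0::2, 0::2], x[..., 1::2, 1::2]

def selfir_style_loss(model, I_N, I_B, sharp_mask, lam_aux=2.0):
    """Self-supervised restoration loss: the sub-sampled noisy image g2(I_N)
    supervises the restoration of the pair {g1(I_N), g1(I_B)}, and sharp patches
    of the blurry image add an auxiliary term (cf. Eqns. (8)-(11) in the thread).
    `sharp_mask` is assumed precomputed at the sub-sampled resolution, without
    gradient, as described in item 5 of the rebuttal."""
    g1_N, g2_N = neighbor_subsample(I_N)
    g1_B, _ = neighbor_subsample(I_B)
    out = model(g1_B, g1_N)                                 # restoration network D
    rec = F.mse_loss(out, g2_N)                             # noisy image as supervision
    aux = F.mse_loss(out * sharp_mask, g1_B * sharp_mask)   # only on sharp areas
    return rec + lam_aux * aux
```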
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "_SOZYsJjO_", "rcNtLLEUwOI", "OPLmQYySNXu", "hhYKpl26-ag", "jtGKw-IbBRq", "jtGKw-IbBRq", "idCelgilBbs", "oTf5v00B7F2", "nips_2022_lkrnoLxX1Do", "nips_2022_lkrnoLxX1Do", "nips_2022_lkrnoLxX1Do" ]
nips_2022_dO11Niyc225
A Non-asymptotic Analysis of Non-parametric Temporal-Difference Learning
Temporal-difference learning is a popular algorithm for policy evaluation. In this paper, we study the convergence of the regularized non-parametric TD(0) algorithm, in both the independent and Markovian observation settings. In particular, when TD is performed in a universal reproducing kernel Hilbert space (RKHS), we prove convergence of the averaged iterates to the optimal value function, even when it does not belong to the RKHS. We provide explicit convergence rates that depend on a source condition relating the regularity of the optimal value function to the RKHS. We illustrate this convergence numerically on a simple continuous-state Markov reward process.
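As a reference point for the reviews and rebuttals below, the regularized non-parametric TD(0) recursion under discussion can be written schematically as follows (reconstructed from the rebuttal's description of the update, where the regularization term $-\rho_n\lambda V_{n-1}$ sits outside the temporal difference; the indexing conventions here are assumptions):

```latex
V_n \;=\; V_{n-1} \;+\; \rho_n \Big[ \big( r(x_n) + \gamma\, V_{n-1}(x_{n+1}) - V_{n-1}(x_n) \big)\, \Phi(x_n) \;-\; \lambda\, V_{n-1} \Big],
```

where $\Phi(x) = K(x,\cdot)$ is the feature map of the RKHS $H$, $\rho_n$ is the step size, and $\lambda \ge 0$ is the regularization that lets the analysis cover $V^* \notin H$.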
Accept
The paper studies the convergence of non-parametric temporal-difference learning in the non-asymptotic regime. All referees agree that the paper is technically sound and that the results are important for furthering our theoretical understanding of reinforcement learning. The paper merits acceptance to the conference.
train
[ "OOK4cQTPZU", "PooSwFNR2N", "1Im3RJag0Rl", "C5BpPaOWcqoz", "V4G7_1YDXY5", "idXA8HdCq5P", "LDmqG6fJ_A", "k7gKyLg1dfc", "3uOZl6pYbBU", "dFwiUsE6doS" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank Reviewer XY19 for his or her further comments.\n\nConcerning the $\\ell_\\infty$-norm analysis, given the further references that you provided on stochastic approximation, we indeed believe that the analysis could be extended to this change of norm. We will add a comment on this, and cite the additional references.\n\nConcerning the link with [30], we would like to insist on the fact that, although they are related, LSTD and TD are different algorithms. TD is an instance of stochastic approximation, while LSTD is not. Therefore, the analyses of the algorithms are quite different in nature, and in particular, the finite-time analysis of TD cannot be directly deduced from the analysis of LSTD (even in the linear approximation setting). For instance, [30] mainly employ *statistical tools* (like Rademacher complexities), while we use tools from *stochastic approximation* or stochastic optimization. These are two different approaches to two different, but related, problems. Therefore, some objects will appear in both analyses: the covariance operator, the horizon of the MRP, the richness of the RKHS and the complexity of the optimal value function (also called the capacity and source condition in optimization),... Overall, to give an analogy in optimization, the difference between LSTD and TD is the same as between linear least squares (operating a system inversion) and the least-mean-square algorithm (an instance of SGD on the least squares objective). This is the same in non-parametric settings, but to the best of our knowledge, an analysis of non-parametric least-mean-square (like the one of [28]) is not a direct consequence of the statistical study of kernel least squares.\n\nThe major technical challenges in this work are twofold:\n- one is to deal with a non-invertible covariance operator $\\Sigma$. This did not occur in the linear approximation setting, e.g., in [10] or [55] who assumed that $\\Sigma$ has full rank (which we do not assume here). We deal with this issue by introducing regularization.\n- the second one is to prove the convergence of TD to $V^*$, even when $V^* \\notin H$. This is different from previous analyses which study convergence to the fixed point of the projected Bellman operator, including [30]. This is allowed by Propositions 1 and 2, which extend similar result coming from the study of non-parametric least-mean-square, to temporal differences. Furthermore, this directly provides rates that are adaptive to the regularity of $V^*$, which we believe is novel for TD.", " I thank the authors for the detailed response. Some of my concerns are still not fully addressed and I intend to keep my score.\n\nI understand that you are subtracting $\\lambda V_{n-1}$ in the entire update as opposed to just inside the temporal difference. However, I believe introducing $\\lambda$ this way also has some impact on the effective discount factor of the problem. Since Proposition 2 provides an upper bound on the difference, I think this is ok and am satisfied with the authors’ response on this point.\n\nThere are some papers that address the difficulty of dealing with algorithms under $\\ell_\\infty$-norm contraction operators. For example, the authors of [] have shown that with some additional effort the norm-square function can also be used as a Lyapunov function to study the associated ODE. As for the stochastic algorithm, [] introduced a smooth version of sup-norm square so that the decent lemma holds with the new potential function. \n\n[1] Borkar, V. 
S., & Soumyanatha, K. (1997). An analog scheme for fixed point computation. i. theory. IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, 44(4), 351-355.\n\n[2] Chen, Z., Maguluri, S. T., Shakkottai, S., & Shanmugam, K. (2020). Finite-sample analysis of contractive stochastic approximation using smooth convex envelopes. Advances in Neural Information Processing Systems, 33, 8223-8234.\n\nI understand that [30] studies off-line LSTD while this paper studies the online version. However, given that on-line and off-line TD in the tabular setting and under linear function approximation are well-studied in the literature, and [30] studied off-line non-parametric TD, the challenges in extending the results to online nonparametric TD are still not entirely clear. What is the major technical challenge in this work and what is the novel idea that is proposed to overcome the challenge? \n", " Major Comments:\n\n(1) The regularizer indeed ensures that $\\Sigma + \\lambda I$ is invertible. We analyze the regularized TD learning algorithm (12) because it allows studying to case where $||V^*||_H$ is infinite, i.e., $V^*$ does not belong to the RKHS. If $V^* \\in H$, the regularization is not necessary, as seen in Prop.4 in App.A.3.\n\nWe politely disagree with the statement: \"Introducing the regularization parameter is to some extend equivalent to solving a different MDP with a smaller discount factor\". This is not equivalent to changing the discount factor.\n\nThe updated (regularized) algorithm is written on eqn. (21). It amounts to subtracting $\\rho_n \\lambda V_{n-1}$. This term is not inside the temporal-difference because it does not depend on $\\Phi(x_n)$. This is indeed different from subtracting $\\lambda V_{n-1}(x_n)$ in the temporal difference (which we agree would be like changing the discount factor).\n\n(2) We agree that the Bellman operator is also a contraction mapping with respect to the $l^\\infty$-norm. However, we are not sure that the same proof extends to this norm for studying the ODE. First, there are technical difficulties due to the non-differentiability of this norm when we compute the derivative of the Lyapunov function. In general, the descent lemma does not hold. Furthermore, the convergence of $V^*_\\lambda$ to $V^*$ in Prop.2 is specific to the $L^2$ norm (although it can be extended to intermediate norms between the $L^2$ norm and the $H$ norm). We do not think that a similar result exists in $l^\\infty$ norm. Overall, an $l^\\infty$-norm analysis could possibly be done, but it cannot be deduced directly from our analysis. Asymptotic stochastic approximation results in $l^\\infty$-norm exist e.g. in [Bertsekas & Tsitsiklis, 1995, Neuro-dynamic programming: an overview, Prop. 4.4]. We would like to stress that former asymptotic and non-asymptotic analyses of TD are also in $L^2$ norm (see e.g., [61]).\n\n(3) We agree with Reviewer XY19 about his or her remarks on the projection step. The analysis of linear TD by [55] indeed removes the projection step. We agree that this would be a desirable feature in our case as well. We have chosen to follow the general scheme of the analysis of [10] rather than [55], because it establishes finite time bounds that are *independent of the conditioning* of the full-rank feature covariance matrix $\\Sigma$. 
This is a desirable property for our non-parametric analysis because our infinite-dimensional operator $\Sigma$ is not invertible (and cannot have full rank).\nIt could be interesting to investigate in future work whether an analysis similar to [55] using Steiner's method can be carried out in our case to remove the projection step. Note that in the experiments, we did not have to use any projections.\n\n(4) We have made the following mistake in the manuscript: references [30] and [42] have been permuted: [30] indeed refers to LSTD, and [42] to fitted Q iteration. We are sorry about that and we have updated the manuscript to correct this mistake. Kernel-based LSTD is indeed related to this work. In particular, the operators $\Sigma$ and $\Sigma_1$ are also used in [30], and the eigenvalues of the kernel are also used. However, the scope of [30] is a bit different: they study the statistical (or estimation) error of an offline algorithm (LSTD), whereas we analyze the convergence of an online algorithm (TD). Overall, we view our analysis as complementary to [30], just as studying batch least squares and SGD on least squares are related, yet different, problems.\n\nMinor Comments:\n\n(1) We agree that the "Deadly Triad" refers to function approximation in general. We have clarified that in the manuscript.\n\n(2) We agree that TD learning with either a tabular representation or linear function approximation has a $1/\sqrt n$ convergence rate. This is something we recover in Prop. 4 in Appendix A.3. We obtain the same rates for non-parametric TD when $\theta=0$ (the rate being $n^{-(1+\theta)/(2+\theta)}$). The case $\theta=0$ corresponds to $V^* \in H$, which is enforced by the minimal projected Bellman error solution for linear function approximation. We mention this in particular on lines 244-245: in this case, regularization is not necessary. Furthermore, we would like to stress that the rates under assumption (A2) can be slower than, identical to, or faster than $1/\sqrt n$ depending on the source condition. In particular, if $\theta >0$, the obtained rates are faster than $1/\sqrt n$, and can be as fast as $n^{-2/3}$.\n", " (1) In this paper, we study the *policy evaluation* problem, and in particular the TD(0) algorithm. Policy evaluation is a subproblem of reinforcement learning, but a significant one. In particular, even though the policy is fixed, the convergence of TD(0) with function approximation is a non-trivial problem. As noted in the introduction, TD(0) with linear function approximation only converges to the minimizer of the projected Bellman error. We propose an algorithm that generically converges to the value function.\nAs noted by [10], the assumption that the Markov chain begins in steady state is not essential (if it has not mixed yet, we could apply our analysis after the chain has approximately mixed), but it simplifies the presentation.\n\n(2) We do not mean the experiments to be generalizable nor to demonstrate the interest of the algorithm on real problem instances. However, the experiments are here to illustrate the theorems and give an intuition of the effect of the different parameters.\n\n(3) We agree that our convergence results are in the form of an average-case analysis. Stochastic approximation theory could also provide almost sure convergence results, but they are mostly *asymptotic* convergence results. Instead, we follow a line of work which provides average-case analyses for TD learning, such as [10], [55], [61], [64], [40]... 
Average-case analyses are also common in the study of the convergence rates of stochastic gradient descent (see e.g., [28], [49], [9]). Furthermore, we would like to stress that in the experiments (see Figure 1), we plot the mean ± one standard deviation over 10 runs. Each run is one instance of the TD algorithm, without any restarting. Even though there are some unavoidable random fluctuations, the average-case analysis seems to be relevant.\n\n(4) We agree that the i.i.d. scenario of Sec.4 is not realistic in RL applications. This is why we study the Markovian setting in Sec.5. We merely begin with the i.i.d. case as an intermediate step to make our analysis more progressive.\n\n(5) In the abstract, although its formal definition is not directly written, the "source condition" is followed by the brief description: "relating the regularity of the optimal value function to the RKHS". We give the formal definition of the source condition on page 5, line 179. We would like to stress that the source condition is existing terminology which has already been used for the study of stochastic gradient descent (see [27], [9] and references therein).\n\n(6) In Eqn. (1), $\gamma$ is supposed to be known, but the reward function $r$ and the transition probabilities in the expectation are unknown. TD learning is indeed useful in the case where $r$ and the transitions are unknown (otherwise one could use e.g., value iteration). We consider this framework (reward and transitions unknown) throughout the paper. We do not use these transitions to compute the TD updates in eqn. (4). The transitions would need to be known if we wanted to run the population version of TD in eqn. (12). However, this algorithm is never run in practice, and we only consider it as an intermediate step of our analysis. This is not the algorithm analyzed in Thms 1 and 2.\n\n(7) Once the process has mixed, the samples are drawn according to the invariant distribution $p$. This distribution does not need to be known to run TD. We could imagine learning it by observing the samples, but this would not be of any help to run TD. We do not think that, even knowing $p$, we could recover the transition probabilities. For example, in Sec.6, the invariant distribution $p$ is uniform, but this brings no information on the transition kernel.\n\n(8) We analyze the regularized TD learning algorithm (12) because it allows studying the case where $||V^* ||_H$ is infinite, i.e., $V^*$ does not belong to the RKHS. If $V^* \in H$, the regularization is not necessary, as seen in Prop.4 in App.A.3.\n\n(9) Other clarifications: \n+ l. 98: the word "but" has been removed.\n\n+ l. 106: the reward is not required to be non-negative in our analysis. More generally, there is no obstacle to having negative rewards in a discounted MRP.\n\n+ For Harris ergodicity and regeneration sets, we already refer to [1],[31]. Since the definitions are rather technical and outside of the scope of the paper, we have preferred to refer to the references. However, in the camera-ready version of the paper, since we have more room, we could add more details.\n\n+ l. 132: $n$ refers to the index of the update (4) just above. The $n$-th update has complexity $O(n^2)$. More details on the implementation are given in Appendix B.2.\n\n+ The intuition of Lemma 2 is given in the paragraph below (l. 193 -- 197). In a simplified setting, Lemma 1 and Lemma 2 are equivalent. The intuition of Lemma 1 is given on l. 167 -- 172. The main implication of Lemma 2 is to prove Lemma 3. 
We have added a mention of that on line 235.\n\n+ In Prop. 1, we are interested in the solution of (13) (the fixed point of (12)).", " We would like to thank Reviewer KMJg for the comments and questions raised. In the revised manuscript, we will add a mention of [Srikant2019] for possibly removing the projection step, and a short discussion on the dependence of $\lambda_\theta$ on $\gamma$. Please understand that we prefer not to add them to the manuscript at the moment because of the 9-page limit, which will be extended to 10 for the camera-ready version.\n\n(1) The analysis of linear TD by [Srikant2019] indeed removes the projection step. We agree that this would be a desirable feature in our case as well. We have chosen to follow the general scheme of the analysis of Ref. [10] rather than [Srikant2019], because it establishes finite-time bounds that are *independent of the conditioning* of the full-rank feature covariance matrix $\Sigma$. This is a desirable property for our non-parametric analysis because our infinite-dimensional operator $\Sigma$ is not invertible (and cannot have full rank).\nIt could be interesting to investigate in future work whether an analysis similar to [Srikant2019] using Steiner's method can be carried out to remove the projection step. Note that in the experiments, we did not have to use any projections.\n\n(2) We analyze the regularized TD learning algorithm (eqn. 12) because it allows studying the case where $||V^* ||_H$ is infinite, i.e., $V^*$ does not belong to the RKHS. If $V^* \in H$, the regularization is not necessary, as seen in Prop.4 in Appendix A.3.\nThe parameter $\lambda$ is not required for a "representer theorem" in a strict sense in our analysis. Choosing a regularization $\lambda >0$ ensures that the fixed point $V^*_\lambda$ of eqn. (13) is in the RKHS. Concerning the iterates, even for $\lambda=0$, it is straightforward from eqn. (12) that $\forall n, V_n \in \text{span}(\Phi(x_1),..., \Phi(x_n))$, which is a kind of representer theorem. This is discussed in particular in Appendix B.2. \n\n(3) In Thm. 1, we choose $\lambda$ as a constant $\lambda_0$ times a term decreasing with $n$. To obtain convergence, we need $\lambda$ to go to zero when $n$ goes to infinity. The dependence of $\lambda$ on $n$ is tuned to obtain the best rates in the theorems (given our analysis). The constant $\lambda_0$ must be chosen larger than a certain $\lambda_\theta$, of which we give an explicit expression in the proof (see pages 24-25). It behaves as a constant times $1/\bar \rho$, where $\bar \rho$ is defined in Lemma 6 (page 19). Overall, one must choose $\lambda_0 \geq \frac{c}{1-\gamma}$, where $c$ is a constant independent of $\gamma$. So this depends linearly on the horizon $\frac{1}{1-\gamma}$ of the MRP. Note that in the whole paper, we have not tried to optimize the dependencies on $\frac{1}{1-\gamma}$, so it is possible that some of them are suboptimal. In practice, $\lambda_0$ can be tuned if $M_H$ is unknown, but we have not seen much influence of $\lambda_0$ in our experiments. We do not believe it to be a clear limitation, but probably only an artifact of the proof technique (for comparison, we do not need it in Thm.2).", " We would like to thank Reviewer 15bu for the very interesting additional references that we were not aware of. In the revised manuscript, we will add a discussion on the given references, as detailed below. 
Please understand that we prefer not to add them to the manuscript at the moment because of the 9-page limit, which will be extended to 10 for the camera-ready version.\n\n(1) [Cai2019] consider TD(0) with function approximation, using a one-hidden-layer neural network with finite width. They prove $1/\sqrt n$ convergence to the solution with minimal projected Bellman error, which is an interesting extension of classical results for linear function approximation. However, in Prop. 4.7 of [Cai2019], there is still a projection error term which is equal to zero only if the value function is a function generated by a neural network. The authors mention that this function space "is a subset of an RKHS". Therefore, this essentially corresponds to the case $\theta \geq 0$ in our notation, i.e., when the value function is inside the RKHS. Our main focus in the paper is to prove convergence to $V^*$ in the case $\theta < 0$, i.e., when it is not inside the RKHS.\nMore generally, it could be interesting in future work to investigate the connections between our RKHS framework and the infinite-width limit of neural TD in [Cai2019]. Indeed, there is a parallel line of work studying the effect of stochastic gradient descent for optimizing one-hidden-layer neural networks, and we have shown some similarity between SGD and TD.\n\n(2) The analysis of [Hu2019] characterizes the exact behaviour (hence providing both upper and lower bounds) of linear TD with a finite state space. The framework of Markov jump linear systems seems strongly linked to finite state spaces. In particular, the expressions of the transition probabilities appear explicitly in the analysis. They could maybe be replaced by more general expectations in the continuous state-space case, but this does not appear straightforward. This is an interesting direction, but we believe it to be a bit outside of the focus of the current paper.\n\n(3) The recent results by [Durmus2021] provide finite bounds for linear stochastic approximation, which apply to linear TD. They relax the uniform mixing assumption that we use by replacing it with a weaker drift condition, and by removing a boundedness assumption. This appears to be a promising direction, e.g., to remove the projection step that we have to use in the Markov setting. Yet, it is not straightforward to see whether this analysis designed for "random matrix products" can be directly extended to infinite-dimensional operators in the non-parametric case. After a quick look at the proofs, some constants (see e.g., Lemma 16 in [Durmus2021]) depend on the dimension $d$ of the linear approximation (typically infinite in the non-parametric case).", " This paper studies the convergence of the regularized non-parametric TD(0) algorithm with RKHS approximation. For both the IID and Markovian noise settings, convergence rate bounds have been obtained. Numerical results have been given to support the theory. This paper is quite original. The quality is good. The paper is also well written.\n\nStrengths:\n1. Originality: This paper is original. The analysis is new and novel.\n\n2. Quality: The contributions are very solid. Theory for both the IID and Markov noise cases has been discussed. Numerical results are also provided.\n\n3. Clarity: The paper is well written.\n\n4. Significance: The contributions are significant. The analysis is very interesting. \n\nWeaknesses:\n The connections to the following relevant papers are missing, and some clarifications/discussions are needed. \n[Cai2019] Q. Cai, Z. 
Yang, J.D. Lee, Z. Wang. Neural temporal-difference learning converges to global optima. NeurIPS 2019.\n\n[Hu2019] B. Hu, U. Syed. Characterizing the exact behaviors of temporal difference learning algorithms using Markov jump linear system theory. NeurIPS 2019.\n\n[Durmus2021] A. Durmus, E. Moulines, A. Naumov, S. Samsonov, H. Wai. On the stability of random matrix product with Markovian noise: Application to linear stochastic approximation and TD learning. COLT 2021.\n\nSpecific suggestions are given as follows:\n\n1. [Cai2019] has discussed some results for the neural network approximation case. In the introduction, the authors mentioned that studying the RKHS case could bring us closer to understanding what happens with other universal approximators used in practice, like neural networks. Hence it seems quite relevant to discuss the existing convergence theory for the neural network case. \n\n2. [Hu2019] has given some exact analytical formulas for the TD error for linear approximation on countable state spaces (under both IID/Markov assumptions). Is it possible to obtain similar exact formulas for the RKHS approximation in the general state-space case? Some discussion will be helpful. \n\n3. [Durmus2021] has addressed the TD error for linear approximation on general state spaces. It will be interesting to compare the assumptions in [Durmus2021] with the Harris ergodicity assumption in this paper. In the linear approximation case, can the analysis method in this paper be used to get some improvements over [Durmus2021]?\n\n 1. [Cai2019] has discussed some results for the neural network approximation case. In the introduction, the authors mentioned that studying the RKHS case could bring us closer to understanding what happens with other universal approximators used in practice, like neural networks. Can the authors comment on the connections between their paper and [Cai2019]?\n\n2. [Hu2019] has given some exact analytical formulas for the TD error for linear approximation on countable state spaces. Is it possible to obtain similar exact formulas for the RKHS approximation in the general state-space case? \n\n3. [Durmus2021] has addressed the TD error for linear approximation on general state spaces. It will be interesting to compare the assumptions in [Durmus2021] with the Harris ergodicity assumption in this paper. In the linear approximation case, can the analysis method in this paper be used to get some improvements over [Durmus2021]? Yes, the authors have discussed the limitations.", " This paper analyzes kernel-based (on-policy) TD learning. Specifically, they consider the case where TD learning is performed with the value function in a reproducing kernel Hilbert space (RKHS) (eq 3, 4). They provide convergence guarantees when the true value function V* does not belong to the RKHS under a so-called source condition (A2). They also provide non-asymptotic convergence rates for the algorithm under the i.i.d. setting and the Markovian setting (where the state-action sequence is sampled from a fixed policy).\n Strength:\n1. The paper is well written and structured. The authors gradually build the machinery from dynamic programming, RKHS, stochastic approximation, etc., so as to introduce their main results and analysis.\n2. As far as I know, the main technical contribution of the paper is the analysis of the kernel-based on-policy TD learning setting in the framework of RKHS.
The general framework of analysis is similar to that of TD-learning with linear function approximation, but the results in terms of RKHS are technical. To my knowledge, theoretical guarantees for TD-learning with general nonlinear function approximation are still lacking. Kernel-based TD learning could be a step forward from existing TD-learning with linear function approximation, and such results can provide more insight into, and support for, kernel-based methods.\n\nWeakness:\nOne potential drawback could be that the main framework of analysis bears some resemblance to the analysis of TD learning with linear function approximation. But again, performing such analysis using tools from RKHS is still technical.\n\nBased on the results in ref. [0], the projection step for the Markovian case of TD-learning may not be necessary.\n\nref:\n[0] Finite-Time Error Bounds For Linear Stochastic Approximation and TD Learning, R. Srikant, Lei Ying. The paper analyzes the so-called regularized TD learning (eq 12). My understanding is that this $\lambda$ is for the use of the representer theorem. But this parameter needs to be tuned in practice. In the main theorem (theorem 1), the analysis needs $\lambda > \lambda_{\theta}$. Is there any insight on the parameter $\lambda_{\theta}$ here? Does it depend on $1-\gamma$? Yes", " The paper studies the temporal difference algorithm for estimating the value function of a Markov decision process, assuming that the control policy is already applied and the resulting Markov chain is homogeneous and stationary. Under certain assumptions, the rates of convergence of the weighted averages of the value functions provided by a regularized temporal difference algorithm are shown, averaging over the stochasticity as well as the state space. Strength:\nThe framework seems technically solid. The presented results are explained rigorously, and the setting is abstract enough to include non-tabular Markov decision processes, as long as the feature space is an RKHS.\n\nWeaknesses:\nThe setting is a little artificial. \nIt is not clear that the experiments generalize. \nThe presentation is too compact and a little hard to follow, and is also unclear in some places. \nThe importance of the problem is not sufficiently motivated, as the policy is already applied, the transitions are mixed and have reached stationarity, and now the goal is to find the value function. This, as well as the next point, restricts the applicability of the proposed approach for being used as the evaluation step of a reinforcement learning policy. \nThe results are in the form of an average-case analysis. Intuitively, that means that if we repeat the setting many times, we can learn the value function nicely. However, the more interesting analysis is one that can establish the accuracy of the learned value function based on a single trajectory. So, in some sense, the policy evaluation is analyzed in an offline fashion, while for offline reinforcement learning policies, evaluations are not the main obstacle. \nFinally, I do not think that Section 4 fits well in the framework, as restarting many times, together with the fact that the expected learning error is studied, defeats the purpose and limits the practicality of the approach. I would like the authors to address the points discussed under Weaknesses, and also explain how they can improve the presentation. The latter seems necessary as there is a lack of clarity and insufficient explanation in some places.
\n\nIn the abstract, 'source condition' is unclear.\n98: 'but ...' creates ambiguity. \n106: define a 'nonnegative' reward.\nIn (1), it is unclear what is known and what is not. The main interest is in the case where the Markov transitions are unknown, but from the rest of the presentation, this does not seem to be the case, as some of the quantities require the transition kernel to be computed. Note that as the process is assumed to be stationary and/or mixed, the authors need to argue why the known distribution does not provide the transition.\nHarris ergodicity, especially its regeneration set, needs to be defined, and the discussion in these lines needs more explanation.\n132: it is unclear what n is, and how these computations relate to the setting.\nIntuition for Lemma 2, namely that the operator behaves like a contraction, is required, as well as the implications of such a fact.\nThe first paragraph of Section 3 is unclear. \nOn Proposition 1, I think we are interested in the solutions of (12), and not those of (13).\nI am not convinced about the necessity and/or usefulness of regularizing the TD. Mentioned in Weaknesses. ", " This paper focuses on solving the policy evaluation problem in reinforcement learning using non-parametric TD-learning. By introducing a regularization parameter $\lambda$, the authors derive (1) the convergence rate of the associated ODE, and (2) the convergence rate of non-parametric TD-learning under either i.i.d. or Markovian sampling. Numerical experiments agree with the theoretical findings. This paper is well organized and well written. The authors start with the ODE associated with the deterministic variant of TD-learning, and use the Lyapunov function there to study the non-parametric stochastic TD-learning algorithm. While this is a highly technical paper, the structure makes the paper easy to follow.\n\nMajor Comments:\n\n(1) What is the motivation for introducing the regularizer $\lambda$? Is it because $\Sigma$ is not necessarily invertible but $\Sigma+\lambda I$ is guaranteed to be invertible? The Bellman operator is also a contraction with respect to the $\ell_\infty$-norm, regardless of whether the Markov chain has a unique stationary distribution or not. Introducing the regularization parameter is to some extent equivalent to solving a different MDP with a smaller discount factor, and I feel it should be avoided if possible.\n\nThe TD-learning algorithm with the regularizer $\lambda$ is different from the original one. What is the updated algorithm? I do not think it is simply subtracting another $\lambda V_{n-1} (x_n)$ in the temporal difference.\n\n(2) Regarding the convergence rate of the ODE: suppose we exploit the $\ell_\infty$-norm contraction instead of the weighted $L_2$-norm contraction. Do we always get geometric convergence?\n\n(3) The need for projection in TD-learning with Markovian sampling is somewhat problematic. First of all, assuming there is an oracle that gives the right projection set is not realistic. Second, the estimate of the size of the projection set depends on unknown parameters of the MDP. In [55], the authors provide a way of analyzing TD-learning with linear function approximation without a projection. Is it possible to remove the projection using similar techniques?\n\n(4) The authors listed existing literature studying non-parametric RL, but did not compare the results and the techniques with them.
I briefly checked [30], which is not for fitted Q-iteration (as claimed in this paper), but studies kernel based non-parametric LSTD and seems to be closely related to this work. The technical novelty is not entirely clear in the current manuscript.\n\nMinor Comments:\n\n(1) \"Deadly Triad\" refers to bootstrapping, off-policy, and function approximation (which does not have to be nonlinear). This should be made clear in the paper.\n\n(2) TD-learning with either tabular representation or linear function approximation has been studied extensively in the literature, and the convergence rate there is $1/\\sqrt{n}$. Some discussion seems needed to clarify why non-parametric TD has a slower convergence rate.\n\n\n\n\n\n My main concerns are the need for the regularizer and the technical novelty compared to existing literature studying kernel-based RL. This paper does not have any potential negative societal impact." ]
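To make the algorithm debated in the reviews above concrete, the following is a minimal sketch of the regularized non-parametric TD(0) update on a toy one-dimensional Markov reward process. The Gaussian kernel, step size, regularization level, and the toy dynamics are all illustrative assumptions rather than the paper's actual experimental setup, and no projection step is used.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel(x, y, bw=0.3):
    # Gaussian kernel; the bandwidth is an arbitrary illustrative choice.
    return np.exp(-(x - y) ** 2 / (2 * bw ** 2))

def step(x):
    # Toy MRP on the circle [0, 1): drift plus small noise, bounded reward.
    x_next = (x + 0.1 + 0.05 * rng.standard_normal()) % 1.0
    return x_next, np.sin(2 * np.pi * x)

gamma, eta, lam = 0.9, 0.1, 1e-3   # discount, step size, regularization
centers, coeffs = [], []           # V_n(.) = sum_i coeffs[i] * kernel(centers[i], .)

def V(x):
    return sum(a * kernel(c, x) for a, c in zip(coeffs, centers))

x = rng.random()
for _ in range(2000):
    x_next, r = step(x)
    td_error = r + gamma * V(x_next) - V(x)
    coeffs = [(1.0 - eta * lam) * a for a in coeffs]  # shrinkage from the -eta*lam*V_{n-1} term
    centers.append(x)                                 # adds eta * td_error * Phi(x_n)
    coeffs.append(eta * td_error)
    x = x_next

print("V(0.25) =", V(0.25), " V(0.75) =", V(0.75))
```

Each iterate stays in the span of the visited feature maps, which is the representer-style property mentioned in the rebuttal; the averaging of iterates used in the analyzed algorithm is omitted here for brevity.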
[ -1, -1, -1, -1, -1, -1, 7, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "PooSwFNR2N", "1Im3RJag0Rl", "dFwiUsE6doS", "3uOZl6pYbBU", "k7gKyLg1dfc", "LDmqG6fJ_A", "nips_2022_dO11Niyc225", "nips_2022_dO11Niyc225", "nips_2022_dO11Niyc225", "nips_2022_dO11Niyc225" ]
nips_2022__zPG0ShaZTc
The Unreasonable Effectiveness of Fully-Connected Layers for Low-Data Regimes
Convolutional neural networks were the standard for solving many computer vision tasks until recently, when Transformers and MLP-based architectures started to show competitive performance. These architectures typically have a vast number of weights and need to be trained on massive datasets; hence, they are not suitable for use in low-data regimes. In this work, we propose a simple yet effective framework to improve generalization from small amounts of data. We augment modern CNNs with fully-connected (FC) layers and show the massive impact this architectural change has in low-data regimes. We further present an online joint knowledge-distillation method to utilize the extra FC layers at train time but avoid them during test time. This allows us to improve the generalization of a CNN-based model without any increase in the number of weights at test time. We perform classification experiments for a large range of network backbones and several standard datasets on supervised learning and active learning. In our experiments, the augmented networks significantly outperform the networks without fully-connected layers, reaching a relative improvement of up to $16\%$ validation accuracy in the supervised setting without adding any extra parameters during inference.
Accept
The paper shows that using final fully-connected layers helps the generalization of convolutional neural networks in low-data regimes. The addition of these layers significantly improves model quality, resulting in a network with the same number of parameters and better generalization performance. Initially, the reviewers had mixed evaluations of the paper. All the reviewers saw that the proposed method is simple and easy to follow, while providing clear improvements over baselines. They also agreed that the results are "significant" and show a "surprising" effect. Some concerns were raised by the reviewers, but the authors' rebuttal addressed most of them and improved the paper with substantially more experiments and analysis supporting the main claim. Reviewer `DX6o` mentioned that a few updates promised by the authors cannot be validated until the camera-ready version, but this does not seem to warrant blocking publication. The author-reviewer discussion period was active; the authors did a great job of clearing up various concerns and questions, and all reviewers agreed to support acceptance of the paper. The paper demonstrates a simple yet effective method for the small-data regime that would be interesting to the broad NeurIPS audience, both practitioners and researchers.
train
[ "5xpGjnbpXI0", "awEMoUbhdb", "9E_O0g6kbHC", "Vjd9z4ofbokZ", "XLy_0Y4_lyZ", "fNQdvc_5JdFr", "K1ySILlZgun", "XZE5cxXhoF", "Co7yqI9Lk-lp", "HgwLBAxd8aT", "oaoe_rofvKr", "u3gCz2oCFAh", "7rqP4Wtg7oP", "PR_35vcPefv", "CQQF6n6Op0" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are glad to see that all the reviewer's concerns have been resolved and that the reviewer increased the score of the paper.\nWe assure the reviewer that we will update the paper with the new experiments for the final revision.", " Thank you for the response! The authors did a great job of dealing with all the concerns, and I am satisfied with most of the new experiments. In the SimSiam experiment, there is, of course, a clear gap in performance between training with the GT labels in a supervised manner and self-supervised learning. Therefore, I recommend the authors compare them in a pre-training and fine-tuning regime, but I don't think this comparison is within the scope of this paper. For the other experiments, It seems that new experimental results have not been updated in the paper yet, but I hope the authors add these to make the paper more solid in the final revision. I will update my score and vote for acceptance of the paper.\n", " **We thank the reviewer for the positive feedback, that they found our new experiments interesting, and that they are now inclined to accept the paper. We conducted additional experiments to address the reviewer’s remaining concerns. We will provide further discussions about these points in the final paper revision.**\n \n 1. **The method is very similar to SimSiam and BYOL.**\n\nWhile SimSiam [30] can also be directly trained on smaller datasets, its behavior on the low data-regime has not been investigated. To better underline this statement, we ran the CIFAR10 version of SimSiam (https://github.com/PatrickHua/SimSiam) on our datasplits and evaluated its performance with a kNN and with a linear classifier (averaged results of 3 runs). The training scheme follows the SimSiam [30] paper, Appendix D. Please click [SimSiam comparison](https://ibb.co/swSsbqP) to see the results.\n\nAs it can be seen, our method significantly outperforms SimSiam [30] in all data splits.\nWe mentioned that our loss is simpler, considering that we use the default classification loss (cross-entropy), and so our training regime is identical to training a ResNet. However, we agree with the reviewer that the losses used in SimSiam [30] / BYOL [31] are relatively simple too.\n\nAdditionally, we would like to highlight some further technical differences. SimSiam [30] being a self-supervised method claims that their method works because of the self-supervision scheme (Section 5.1: “SimSiam is an implementation of an Expectation-Maximization (EM) like algorithm”), and they derive the EM-like objective from their loss function. This is different from our supervised learning loss. Furthermore, there are other differences like them using multi-view images, and the stopgrad there being crucial (while gradient gating mechanism being optional for us). \n\n 3. **The authors use pretrained ResNet for Caltech datasets.**\n\nThe other experiments took longer than expected, so we were able to run Caltech101 experiments only today. We show the preliminary results (the first three cycles) in the graphs below. Please click [Caltech w/o pretraining](https://ibb.co/6sM7wvG) for the plot. For convenience we also show the results with ImageNet pretraining. Each experiment was run 5 times, we plot the mean and standard deviation.\n\nIn the 1st cycle, we are worse than ResNet18 by 1pp (with ImageNet pretraining, we were worse by 0.9pp). In the 2nd cycle, we outperform ResNet18 by 0.7pp (with ImageNet pretraining we outperformed them by 0.5pp). 
In the 3rd cycle, we outperform ResNet18 by 1.4pp (with ImageNet pretraining we outperformed them by 0.8pp). In this way, we show that while both ResNet18 and our method reach significantly lower results than when we use ImageNet pretraining, the relative improvement of our method compared to ResNet18 remains.\n\n 5. **Comparisons with MLPMixer and ViT are unfair.**\n\nWe thank the reviewer for further clarifying this point. Indeed, we used a customized ResNet (both for ResNet18 and FR-ResNet18) for the CIFAR experiments, as in the work of LLAL [20] (we are glad the reviewer checked the code!). For the ViT training, we also used customized versions, as described in [5(supp)]. \nFor the MLPMixer, in the main paper, we used the original architecture. However, based on the reviewer's advice, we have now conducted more experiments and compared multiple customized architectures. We found that MLPMixer-Nano (https://github.com/omihub777/MLP-Mixer-CIFAR) performs better than the original version on the CIFAR datasets. However, it still stays behind our method by a large margin (over 10pp in the second, and over 15pp in the third cycle). We have updated the manuscript with the new results. For convenience, please click [Fig MLPMixer](https://ibb.co/pzBPrJY), which points to a figure comparing our previous and new MLPMixer results.\n\n**Typo: Fixed in the latest revision.**", " I thank the authors for the concrete responses; the newly added robustness evaluation was very interesting. Although my concerns have not been fully addressed, I am now inclined to accept because the revised paper contains a few more essential points that adequately reflect the reviewers' concerns. However, after seeing the response and the other reviews, I would like to leave here the rest of the concerns that should be reflected in the final paper revision:\n\n> ***1. The method is very similar to SimSiam and BYOL.*** \nWhile methods like SimSiam [1] and BYOL [2] have some similarities with our method, they also have significant differences both a) conceptually and b) technically. \na) Those methods are self-supervised methods, tailored to large datasets. In contrast, our method is a fully-supervised method aimed at network generalization in low-data regimes; hence, the two setups are very distinct. \nb) SimSiam and BYOL use more complex losses than our method and work with multi-view images. In this regard, our method is much simpler than theirs (single view, cross-entropy loss). \n\n- SimSiam and BYOL mainly experimented with ImageNet and transferred pretrained backbones to small datasets, but it is known that training directly on small datasets also works well. Additionally, the SimSiam paper [1] includes CIFAR experiments in Appendix D. From this standpoint, I pointed out the similarity between the proposed method and these methods. \n\n- Additionally, I would like to make it clear that their loss is not as complicated as cross-entropy (CE): it is an l2-normalized MSE for SimSiam and BYOL [2]; in fact, CE also works for training them [1, 2].\n\n> ***3. The authors use pretrained ResNet for Caltech datasets.*** \nFor the Caltech datasets, we used a pre-trained model, because they have a much higher dimensionality. While all CIFAR images are of size 32x32, the Caltech samples have various resolutions. Those samples are resized and cropped (Section A) to the ImageNet size (224x224).
Furthermore, we wanted to showcase that our method can also work with pre-trained models.\n\n- I understood what the authors would like to stress, but what I really wanted to know is whether the Caltech datasets could also benefit from the proposed method without using the ImageNet-pretrained model. \nAlthough we don't have much time, can the authors provide some, even simple, results? \n\n> ***5. Comparisons with MLPMixer and ViT are unfair.*** \nFor the MLPMixer training, we used the timm framework (Section A.3), where the original images are upsampled to the ImageNet size, exactly as they fine-tuned the ImageNet-pretrained MLPMixer model in the official paper for the CIFAR dataset. \nAlso for training the ViT-B16, we first used the timm framework. However, we found that directly training on the CIFAR10 dataset with a smaller patch size can achieve better results [5(supp)]. Therefore we applied that training strategy. \nFinally, both papers compared the results also on the CIFAR datasets (Table 2, Avg 5 in MLPMixer [14], and in Table 2 in ViT [12]). Thus, we are using the same datasets as the authors of those papers used, so the comparison is fair.\n\n- Thank you for clarifying the training setup. However, I still have the same concern about training MLPMixer and ViT with the same image size as in ImageNet pretraining. I found that ResNet18 was modified (at the stem and the final GAP) to train with 32x32 images for the CIFAR dataset, as confirmed in your provided code, but MLPMixer and ViT were not. Why weren't they customized like that? I don't think the authors should follow the ImageNet architectures for MLPMixer and ViT, because they do not utilize the ImageNet pre-trained models and fine-tune them on the CIFAR dataset.\n\n- In fact, there are many publicly available MLPMixer and ViT architectures for CIFAR with smaller patch sizes or other modifications; this is why I raised my concerns about the fairness of comparing them in the same arena. \n\n\n**Typos**: The subfigures in Figure 7 seem to have the same legend.\n\n\n\n[1] Chen et al, Exploring Simple Siamese Representation Learning, CVPR 2021\n\n[2] Grill et al, Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning, NeurIPS 2020\n\n\n\n", " We thank the reviewer for their encouraging response and praise after the rebuttal. Indeed, the original review was enthusiastic, despite the low score. We acknowledge again that the two sets of experiments, especially the knowledge distillation one, have improved the paper and made its results more convincing. \n\nWe are thrilled to see that the reviewer plans to change their score from **Reject (3)** to **Accept**. **We hope that the reviewer can explicitly update the score so it is not missed by the Area Chair and the other reviewers!**", " Thank you for the extra experiments! After looking at the new experiments, I am convinced that your method does in fact perform better than knowledge distillation methods in low-data regimes. As my enthusiastic review indicated, I quite enjoy the ideas from this paper but was concerned that the gains were simply due to training with more parameters. I will update my score to an accept. Great job!", " We thank the reviewer for praising our rebuttal that addresses their and the other reviewers' concerns.
We also thank the reviewer for explicitly saying the paper is reasonably above the acceptance threshold.\n\nWe hope the reviewer can champion our paper, and we are willing to further discuss any doubts the reviewer (or other reviewers) may have during the review process.", " Thank you for your response and updates. The data on the number of additional training parameters shows that the quality improvements you see are likely not a result of this increase. I think this paper is reasonably above the acceptance threshold and it seems the authors did a good job of addressing the feedback from the other reviewers.", " **We thank the reviewer for finding our method very interesting, the results showing clear improvements over the base network, the paper being easy to implement, and consequently seeing the paper as potentially having a large amount of impact for the community. Considering the reviewer's praise, we were somewhat disappointed with the low grade from the reviewer. At the same time, we are thankful to the reviewer for mentioning a couple of missing experiments that will ultimately make the paper stronger. We address the reviewer's concerns and conduct experiments to compare knowledge distillation methods and evaluate the robustness of our method. Our method compares favorably to the baselines, showcasing the benefit of our approach in the low-data regime.** \n\n1. ***Experiments and comparisons with different Knowledge Distillation methods.***\n\nTo show that the main strength comes from the fully-connected layers, we follow the reviewer's recommendation and compare with different knowledge distillation modules. We present a comparison with the suggested Deep Mutual Learning [21] module. Furthermore, we also compare with its follow-up (KDCL [22]) and with the original knowledge distillation method (KD [23]). For all experiments, we use a ResNet50 teacher/mutual network for our ResNet18 student network. We have added this discussion and the experiments to the manuscript (Figure 5, lines 150-160). For convenience, please click [Figure 5](https://ibb.co/2Pns5Rb), which redirects to a figure identical to the one in the revised version of the manuscript.\n\nAs can be seen, our method (FR-ResNet18) significantly outperforms the knowledge distillation methods, by up to 8.5pp on CIFAR10 and 3.6pp on CIFAR100 in the second cycle. Furthermore, while our method comes with only 0.37% extra parameters during training (and no extra parameters during inference), the other knowledge distillation methods use 210.5% more parameters. Thus, our method both performs better and is significantly faster (using one network instead of two). \n\n2. ***Experiments on robustness benchmarks.***\n\nWe extended our paper with a new set of evaluations on robustness benchmarks. We used the CIFAR10-C dataset, which contains a total of 95 perturbed test sets from 19 corruption types with 5 severity levels. We did a thorough evaluation of our method and the baseline ResNet18 network on all the test sets in every iteration with 5 different runs. We have added the experiment to the supplementary material (Section C, Figures 2-4). For convenience, please click [Figure 2 supp](https://ibb.co/KwmfFg2), which redirects to a figure identical to the one in the revised version of the manuscript.\n\nOur method (FR-ResNet18) consistently outperforms the baseline in cases of lower corruption severity.
In cases of higher corruption severity (severity 3 and severity 4), in the very low-data regime, our method significantly outperforms the baseline. However, with more added data, the baseline starts outperforming our method. In the supplementary material, we also show a similar experiment where we aggregate results by the type of corruption, reaching the same conclusions.\n\n3. ***In figure 6f, why does the accuracy of FR-EfficientNet decrease at 10k samples while the accuracy of EfficientNet increases?***\n\nThis can be explained by the fact that we used fixed data splits over all the experiments. That specific datapoint can be considered a bad sample for that architecture. A counterexample can be seen in the same figure at 12k, where FR increases rapidly, but EfficientNet decreases slightly. \n\n4. ***Is there a general point at which there are diminishing returns in accuracy?***\n\nYes, with more labeled samples the gap between the FR method and the baseline network closes. ", " **We thank the reviewer for finding our proposed idea interesting, and for finding our paper easy to follow. We agree that the reviewer's suggestions will improve the manuscript. Below, we address the reviewer's comments.**\n\n1. ***The method is very similar to SimSiam and BYOL.***\n\nWhile methods like SimSiam [1] and BYOL [2] have some similarities with our method, they also have significant differences both a) conceptually and b) technically.\n\na) Those methods are self-supervised methods, tailored to large datasets. In contrast, our method is a fully-supervised method aimed at network generalization in low-data regimes; hence, the two setups are very distinct.\n \nb) SimSiam and BYOL use more complex losses than our method and work with multi-view images. In this regard, our method is much simpler than theirs (single view, cross-entropy loss). Furthermore, while our knowledge distillation method has some similarity with stop-gradient, there are a few key differences: (i) The most important is that our gradient gating is completely optional. We use it only to reduce the number of parameters at inference, but we reach the same results without it, albeit with a 0.37% increase in the number of parameters. (ii) At inference, their MLP serves as a prediction head, while with our knowledge distillation, we do not use the MLP at all.\n\nWe have updated the Related Work section in the manuscript with the discussion above (lines 259-264).\n\n2. ***No intuition why the method works.***\n\nWe agree that we did not provide a detailed explanation of our method, just minimal hypothetical reasoning in our Limitations section. However, we think that our findings could urge further researchers to pay even more attention to this phenomenon and bring the field closer to understanding the generalization of deep neural networks, especially in the underexplored low-data regime.\n\n3. ***The authors use pretrained ResNet for Caltech datasets.***\n\nFor the Caltech datasets, we used a pre-trained model, because they have a much higher dimensionality. While all CIFAR images are of size 32x32, the Caltech samples have various resolutions. Those samples are resized and cropped (Section A) to the ImageNet size (224x224). Furthermore, we wanted to showcase that our method can also work with pre-trained models. \n\n4. ***All the training dataset is small, so the experimental verification of the claim is limited.***\n\nWe consider the Caltech datasets large and complex enough for the verification.
The Caltech datasets contain images of much higher (and even various) resolution. Therefore the complexity of these datasets is similar to ImageNet in terms of dimensionality. \n\n5. ***Comparisons with MLPMixer and ViT are unfair.***\n\nFor the MLPMixer training, we used the timm framework (Section A.3), where the original images are upsampled to the ImageNet size, exactly as they fine-tuned the ImageNet-pretrained MLPMixer model in the official paper for the CIFAR dataset. \n\nAlso for training the ViT-B16, we first used the timm framework. However, we found that directly training on the CIFAR10 dataset with a smaller patch size can achieve better results [5(supp)]. Therefore we applied that training strategy. \n\nFinally, both papers compared the results also on the CIFAR datasets (Table 2, Avg 5 in MLPMixer [14], and in Table 2 in ViT [12]). Thus, we are using the same datasets as the authors of those papers used, so the comparison is fair.\n\n6. ***As aforementioned, the authors claim the proposed method is a joint KD…***\n\nUnder Knowledge Distillation we consider methods that optimize a smaller network to achieve similar or better performance than a larger network [23,28]. Our method can be seen as a KD method in the sense that the FR head (which is just slightly larger) is the teacher network, whose knowledge is distilled into the original head. In this case, the teacher and student networks share the backbone, but they have different heads. \n\nHowever, we would like to emphasize that although we consider our OJKD as an additional contribution, this is not the main contribution of our work since our aim is to improve generalization in the low data regime. In order to make our results comparable to the baseline networks, we developed OJKD; however, we see our results without the OJKD as also quite promising since by adding as few as 0.37% extra parameters, we can achieve a significant improvement.\n\n7. ***Potential negative social impacts***\n\nOur work aims to improve the generalization of deep neural networks, thus we did not consider any negative social impact specific to our work. \n\n*[1] Chen et al, Exploring Simple Siamese Representation Learning, CVPR 2021*\n\n*[2] Grill et al, Bootstrap Your Own Latent - A New Approach to Self-Supervised Learning, NeurIPS 2020*", " **We thank the reviewer for finding our paper well-written, with significant results and for the overall positive grade. Below, we further clarify the reviewer’s concerns.**\n\n1. ***Percentage increase in the parameter count.***\n\nWe have updated our supplementary material to include the number of extra parameters for all our experiments. We would like to emphasize that these parameters are used only at train time, and not used during inference. \n\n| | CIFAR | Caltech |\n|-------------------|----------------|-----------------|\n| ResNet18 | 11173962 | 11228325 |\n| FR_ResNet18 | +42058 (0.38%) | +289893 (2.58%) |\n| ResNet34 | 21282122 | x |\n| FR_ResNet34 | +42058 (0.20%) | x |\n| DenseNet121 | 6964096 | x |\n| FR_DenseNet121 | +74826 (1.07%) | x |\n| EfficientNetB3 | 10711602 | x |\n| FR_EfficientNetB3 | +82660 (0.77%) | x |\n\n2. ***Why are ResNets characterized as fully-convolutional?***\n\nWith the term fully convolutional architectures, we mean neural networks which contain no fully-connected layers except the output layer (the classifier). 
This way, architectures such as ResNet or EfficientNet are fully convolutional, since no non-linear layer is used after the convolutional backbone, except the final linear output layer. This is different from earlier architectures, such as VGG or AlexNet, where the convolutional backbone was followed by a series of FC layers with non-linearities. \n\n3. ***Caltech101 in Figure 3 appears to be a counterexample.***\n\nIn Figure 3c, our method consistently outperforms the baseline network, except in the first stage. Unfortunately, we do not know the exact reason behind that. One explanation is that some data splits are more beneficial for the baseline network than for our approach, which causes the baseline network to generalize better. However, since this is the only such datapoint across many experiments, it shows that our method can in general utilize the data splits better, and we consider that datapoint a statistical outlier. \n\n4. ***Comparing with an FC stack.***\n\nIn Figure 4, we compare with MLPMixer, the flagship of fully-connected neural networks. If the reviewer meant something else, we will be happy to discuss this during the reviewer-author discussion period.\n\n5. ***Rewriting the sentence “We show in our experiments that the network…”***\n\nWe reformulated this sentence for clarification: \n“We show in our experiments that this reduced network achieves the same test accuracy as the larger (teacher) network and thus significantly outperforms the equivalent architecture that does not use our method.” \n\n6. ***Some numbers called out in the text are very specific when I would expect them to vary depending on the target architecture.***\n\nWe introduced variables for those numeric values and defined their values in the experiment section. \n\n7. ***Experiments on knowledge distillation.***\n\nWe conducted new experiments and compared our method (FR-ResNet18) with three knowledge distillation methods. Deep Mutual Learning (DML) [21] and its follow-up KDCL [22] are online methods; KD is the original offline method [23]. In each experiment, ResNet50 was used as the teacher/mutual network for the baseline ResNet18. \n\nOur method significantly outperforms the knowledge distillation methods, by up to 8.5pp on CIFAR10 and 3.6pp on CIFAR100 in the second iteration. Furthermore, the other methods use 210.5% more parameters during training. We have added this discussion and experiments to the manuscript (Figure 5, lines 150-160). For convenience, please click [Figure 5](https://ibb.co/2Pns5Rb), which redirects to a figure identical to the one in the revised version of the manuscript.", " **We thank all reviewers for their valuable suggestions and constructive feedback.** We are happy that they found the paper “clear and well-written” (yAFj, DX6o, jkGN), the **“results significant”** (yAFj), the proposed method **“very interesting”** (jkGN) and **“easy to implement”** (jkGN), which shows **“clear improvement”** (jkGN), and that they can see the potential of **“having a large amount of impact to the community”** (jkGN). \n\nWe would like to highlight that our method is **not only a knowledge distillation method**. Our work aims to improve the generalization of deep neural networks in low-data regimes and to show how adding fully-connected layers significantly improves the results. Our knowledge distillation is only to show that we can use the original backbone (trained with knowledge distillation) during inference.
However, even without knowledge distillation, the method reaches the same results, albeit with a marginal increase in the number of parameters (0.37%).\n\n### Summary of new experiments \n\nReviewer jkGN found the paper interesting and potentially high-impact, but gave it the lowest score. They had concerns about the lack of comparison with knowledge distillation methods (explicitly mentioning one) and about missing robustness experiments. In the rebuttal, we have **compared our method with 3 knowledge distillation methods** (including the one mentioned by the reviewer) and included **experiments on 95 corrupted CIFAR test sets** for the robustness analysis. In these experiments, our method compares favorably against the baselines and shows its benefit in the low-data regime. Please see Points (1) and (2) under reviewer jkGN for the experiments. For convenience, the knowledge distillation experiment is also provided under Point (7) of reviewer yAFj.\n\nWe updated the manuscript based on the reviewers' suggestions and put the new experiments in the main manuscript and supplementary material. ", " The authors investigate the value of fully connected layers at the end of convolutional neural networks in the small-data regime. They demonstrate that the addition of these layers significantly improves model quality in this regime. Strengths\n1. I found the paper to be very clear and well written. It was easy to understand what the authors were doing and why.\n2. The results seem significant. This is not a phenomenon that I was aware of previously, although I am not an expert in the use of deep learning in the small-data regime.\n\nWeaknesses\n1. Distillation is known to improve quality for a constant parameter count, so I'm not convinced that the distillation experiments disprove the hypothesis that the quality gains of adding fully connected layers come from the increased parameter count. A more convincing argument is that you're not increasing the parameter count much because of the dimensionality reduction in your FC stack. I'd encourage the authors to include data on the % increase in model parameters with their proposed addition.\n\n 1. I'd like for the authors to include data on the % increase in parameter count with their mechanism in all experiments, as I stated above.\n2. I'm confused by the authors' characterization of ResNets and EfficientNets as fully convolutional. These architectures are typically drawn with a single fully connected layer following the convolutional component of the model.\n3. Caltech101 in Figure 3 appears to be a counterexample. I'm wondering if you can explain this more. Your explanation claims you see similar behavior on this dataset, but that is not what I see looking at Figure 3c.\n4. One ablation that would be interesting is only training an FC stack on the target task and showing its quality relative to the results in Figure 3. I am curious if it would outperform on very small datasets and then be surpassed as the dataset grows.\n5. I was not able to understand the sentence \"We show in our experiments that the network with the train time added fully connected layers still significantly outperforms the original architecture, even if both have an equal number of weights\". I'd like for the authors to clarify this in the text.\n6. Some numbers called out in the text are very specific when I would expect them to vary depending on the target architecture. For example, the \"512\" and \"64\" in Figure 1 and the \"42k\" in section 2.
I think the text might be clearer if the authors described these dimensions in the abstract and stated their exact parameterization in the experiment details.\n The above sections detail the limitations/questions I'd like to address.", " This paper proposes a training method that involves a module plugged into the final features of a backbone as an additional classification head for joint training with the existing classification head. Along with the original head, the newly added head also passes through the softmax for the cross-entropy loss; training is performed with the summation of the two cross-entropy losses. The proposed module is dubbed Feature Refiner (FR), consisting of two fully connected layers followed by layer normalizations and operating with the gradient gate (GG) right before the classification head. GG acts exactly like a stop-gradient technique, which is widely used in self-supervised learning, to train the original head only based on the output of the frozen backbone, which is the input for the head, while the backbone is trained only with the extra head with FR. At the inference phase, the original backbone with the original classification head is used for the forward propagation. The authors claim that this training method works well in low-data regimes (for me, it is limited to low-data regimes), which may not be expected. Some experimental results on small datasets, including the CIFAR datasets and the Caltech datasets for supervised learning, active learning (only on CIFAR), and semi-supervised learning (presumably on CIFAR), are provided to show the effectiveness of the proposed method. The authors try to show the universality of the proposed training method with several backbones, not constrained to ResNets.\n ### Strengths\n- This paper is easy to follow. \n- The proposed idea looks somewhat interesting.\n\n### Weaknesses\n- The main concern with the proposed method is that it seems to have a very similar training pipeline to SimSiam [1], which showed that using the stop-gradient is a key to training a backbone in self-supervised training. Specifically, the Feature Refiner (FR) and the classification heads seem to be the predictor and the projector in those methods [1, 2], respectively (the order is reversed, but that does not matter from my standpoint). A difference is in the loss, but as the authors claim (in line 83, p.3), assuming the network is trained in a KD way, the proposed method is a supervised SimSiam (with a single view). I hardly agree with the authors' claim that the method performs like a KD, except for using only a single-view image; the training procedure is very close to SimSiam. Therefore, the authors should argue the difference between the proposed method and SimSiam.\n- There is no intuition as to why the proposed method has the benefit of training with small data. \n- The experimental setup is somewhat unconvincing; the setup is inconsistent and does not follow the authors' claim of requiring pre-training for the method (in line 234, page 9). The authors specify that they use an ImageNet-pretrained ResNet for training on the Caltech datasets, only providing a seemingly inappropriate reason.\n- All the training datasets are small, so the experimental verification of the claim is limited. Using small data for training and using a small dataset are technically different.
Therefore, it would be better to justify the proposed method on a larger-scale dataset such as ImageNet with a small-data regime for a stronger claim.\n- Comparison of the proposed method with MLPMixer and ViT-B16 in Figure 4 seems unfair. MLPMixer and ViT-B16 have a stem that performs non-overlapping patchification of the input, so training them with 32x32 images from the CIFAR datasets degrades the model accuracy regardless of the size of the training data.\n- As aforementioned, the authors claim the proposed method is a joint KD. However, the loss is not a straight KD-based loss (e.g., the KLD loss), and training a backbone with the extra head may not leverage the KD concept, in my opinion. Can the authors elaborate on the concept?\n\n\n### Pre-rebuttal comment\nThis work presents a training method using the stop-gradient technique with an extra FC head for model training in a supervised manner. Except for using a single-view image and a supervised loss, the overall training pipeline is quite similar to the previous self-supervised methods [1, 2], so the authors should elaborate on the difference and provide some intuition for why the proposed method could work well. Another concern is that the experiments are not convincing because of the inconsistent experimental setups and small-scale experiments. Therefore, I am leaning towards rejection but would like to see the authors' response and the other reviewers' comments for my final decision.\n\n[1] Chen et al, Exploring Simple Siamese Representation Learning, CVPR 2021\n[2] Grill et al, Bootstrap your own latent: A new approach to self-supervised learning, NeurIPS 2020\n - Why is the ImageNet-pretrained model used only for training on the Caltech datasets?\n - Limitations are provided, but potential negative social impacts do not seem to be discussed.
\n\nOverall, I think that this paper has the potential to have high impact, as it is well written and has a simple, easy-to-follow method that is effective across several tasks. I am giving it a reject as I don't believe the current experiments are sufficient to prove that this method is more advantageous than other knowledge distillation methods, but I am eager to have the authors quell my doubts with a more thorough evaluation. \n\n[1] Zhang et al. \"Deep Mutual Learning\"\n\nIn figure 6f, why does the accuracy of FR-EfficientNet decrease at 10k samples while the accuracy of EfficientNet increases? It seems strange, as this effect seems to only happen at this one datapoint. \n\nIn figures 4b, 5(c,d), and 6(c,d,e,f), the FR method has higher accuracy than the baseline for all amounts of data; did you run experiments where these networks are trained on the entire dataset? Is there a general point at which there are diminishing returns in accuracy? \n\n**Update**\n\nAfter the authors provided new experiments to show that their method does in fact outperform knowledge distillation methods in low-data regimes, I am satisfied with the evaluation and have changed my review from reject to accept. n/a" ]
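The reviews above describe the training pipeline in enough detail to sketch it: an extra fully-connected head trained jointly with the original classifier, with a gradient gate acting as a stop-gradient, and a sum of two cross-entropy losses. The following PyTorch sketch is a hypothetical reconstruction of that setup, not the authors' released code; the hidden sizes (512 and 64, taken from the reviewer's reference to Figure 1) and all class and variable names are assumptions.

```python
import torch
import torch.nn as nn

class FCHeadWithGradientGate(nn.Module):
    # Sketch of the described Feature Refiner (FR) plus gradient gate (GG):
    # the backbone receives gradients only through the FC branch, while the
    # original classifier learns from detached (gated) backbone features.
    def __init__(self, feat_dim=512, hidden=64, n_classes=10):
        super().__init__()
        self.refiner = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.LayerNorm(hidden),
            nn.Linear(hidden, hidden), nn.LayerNorm(hidden),
        )
        self.fc_head = nn.Linear(hidden, n_classes)      # train-time-only branch
        self.orig_head = nn.Linear(feat_dim, n_classes)  # kept at inference

    def forward(self, feats):
        logits_fc = self.fc_head(self.refiner(feats))
        logits_orig = self.orig_head(feats.detach())     # gradient gate
        return logits_fc, logits_orig

# Joint training with the sum of the two cross-entropy losses, as the reviews state.
head = FCHeadWithGradientGate()
feats, labels = torch.randn(8, 512), torch.randint(0, 10, (8,))
logits_fc, logits_orig = head(feats)
loss = nn.functional.cross_entropy(logits_fc, labels) \
     + nn.functional.cross_entropy(logits_orig, labels)
loss.backward()
```

At test time only `orig_head` applied to the backbone features would be used, matching the claim that no extra parameters remain at inference.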
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "awEMoUbhdb", "9E_O0g6kbHC", "Vjd9z4ofbokZ", "HgwLBAxd8aT", "fNQdvc_5JdFr", "Co7yqI9Lk-lp", "XZE5cxXhoF", "oaoe_rofvKr", "CQQF6n6Op0", "PR_35vcPefv", "7rqP4Wtg7oP", "nips_2022__zPG0ShaZTc", "nips_2022__zPG0ShaZTc", "nips_2022__zPG0ShaZTc", "nips_2022__zPG0ShaZTc" ]
nips_2022_rUc8peDIM45
The alignment property of SGD noise and how it helps select flat minima: A stability analysis
The phenomenon that stochastic gradient descent (SGD) favors flat minima has played a critical role in understanding the implicit regularization of SGD. In this paper, we provide an explanation of this striking phenomenon by relating the particular noise structure of SGD to its \emph{linear stability} (Wu et al., 2018). Specifically, we consider training over-parameterized models with square loss. We prove that if a global minimum $\theta^*$ is linearly stable for SGD, then it must satisfy $\|H(\theta^*)\|_F\leq O(\sqrt{B}/\eta)$, where $\|H(\theta^*)\|_F, B,\eta$ denote the Frobenius norm of Hessian at $\theta^*$, batch size, and learning rate, respectively. Otherwise, SGD will escape from that minimum \emph{exponentially} fast. Hence, for minima accessible to SGD, the sharpness---as measured by the Frobenius norm of the Hessian---is bounded \emph{independently} of the model size and sample size. The key to obtaining these results is exploiting the particular structure of SGD noise: The noise concentrates in sharp directions of local landscape and the magnitude is proportional to loss value. This alignment property of SGD noise provably holds for linear networks and random feature models (RFMs), and is empirically verified for nonlinear networks. Moreover, the validity and practical relevance of our theoretical findings are also justified by extensive experiments on CIFAR-10 dataset.
Accept
The paper investigates the important question of why SGD converges to flat minima. Overall, the reviewers felt that this is a nicely written paper that makes a solid contribution to the state of the art.
test
[ "GF-ItH1NC0x", "oZBxQX5Xqj6", "ZDUhVQGhorb", "TKpvsrf-gpe", "NdzThbOBv8", "oAzU-qxmMU9", "nIC2VFvJYnY", "s7z2AvfPLQY", "82YsEyHgU0F", "HX13j1t7OxR", "jY_TjjKEX1", "SnHw2PTXL4L", "_dDVsiMs_v4", "GMT7By_-dnb", "XLRA9tAYX4M", "Tgkm-OYaoIV", "IuekMrQS3Aw", "45jEQ9uAYkA" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nWe truly appreciate your comment and partially understand your considerations. However, we respectfully disagree with you on most points as explained below. \n\n---\n\n > \"stability analysis seems to provide only a small picture on why GD/SGD generalises well\"\n\n We agree that the stability analysis provides only a **small picture** of the implicit regularization. However, this does not mean that the stability analysis is *not convincing and relevant*. A detailed characterization of one of the small pictures can be critical for revealing and understanding the big picture, let alone that the big picture has not emerged. \n\n \n We clearly demonstrate that stability plays an critical role in explaining why SGD selects flat minima in the large learning rate (LR) and small batch regime. If one agrees that the **flat minima argument** is important, then the stability analysis presented here is obviously useful. However, the flat minima argument itself is sort of not convincing. From a theoretical perspective, we agree on this point since most existing arguments of the generalization of flat minima are rather hand-waving not sufficiently scientific. Even so, the practical experiments already sufficiently justified that the relevance of flat minima argument. Therefore, in our opinion, the importance and relevance currently are supported empirically instead of theoretically. \n\n&nbsp;\n\n\n\n > \"they are all written as if the trajectory implicitly minimises the curvature of the loss (stability criterion) to enhance stability.\"\"\n\nTo be honest, in our understanding, existing related works did not claim that the SGD/GD trajectory implicitly \"minimizes\" curvatures of the loss landscape. The stability only ensures that SGD stays in a flat region where the stability condition is satisfied. In other words, the stability imposes constraints on loss curvatures instead of actively minimizing them. In fact, as shown in (Cohen et al., ICLR 2021), **GD dynamics itself, in fact, keeps progressively increasing the curvature until the edge of stability is reached**, after which the curvature hovers there and does not increase any more because of the stability constraint. This increasing curvature nature of GD provides an explanation of why the flatness of GD solutions are close to 2/learning_rate instead of away from it, after all the stability condition is only necessary. We suspect that similar phenomena also happen to SGD but need much more investigations, which we leave to future work. \n\n&nbsp;\n\n> \"given step-size there are a lot of stable global minima that generalise poorly\"\n\nIn general, this might be true and specifically, we can detail the situations where this happens as follows. \n\n - When the learning rate is infinitesimal, SGD becomes gradient flow; obviously, all minima are stable now but there definitely exist ones that generalize poorly. In a word, in a small LR regime, stability cannot distinguish different minima at all. \n - When the learning rate is large and batch size is small, our paper shows that the stable minima must be flat as measured by the Fro-norm of Hessian. However, the flat minima do not necessarily generalize well for general networks and data distributions. \n\nTherefore, in the large LR and small-batch regime, stability is useful in distinguishing flat minima and sharp minimal. What remains is to understand when and why flat minima generalize well. 
As mentioned previously, in a separate work, we already proved that the Fro-norm of Hessian can bound the path norm of two-layer nets, thereby controlling the generalization gap. Therefore, **in such a case, one can conclude that all stable minima generalize well** as long as the LR and batch size of SGD are independent of model size and sample size. But we agree that stability cannot explain the generalization of SGD in a small LR regime, where SGD becomes close to gradient flow (GF). (GF does find generalizable minima, though they generalize worse than SGD).\n\n&nbsp;\n\n**The role of SGD in the training process?**\nFor simplicity, we compare GD and SGD from the same initialization. Assume that GD quickly converges to a sharp minimum that generalizes relatively bad. This suggests that there are many bad minima around the specific initialization. For SGD, the noise-induced stability prevents SGD from entering the basin of these bad minima, thereby enforcing SGD to continue to explore in flat regions. It is currently unclear what kind of forces guide SGD to travel to flatter regions with better generalization properties. However, it is clear that it is the the noise-induced stability that provides a possibility for SGD; otherwise, SGD will get stuck in bad minima. \n\n---\n\nLastly, we would like to thank you again for the comment, which helps make our work more solid, . We will particularly emphasize in the revision that the stability analysis is more relevant in a large LR and small-batch regime.", " Dear authors, \nthis was simply a conclusive thought on linear stability. Obviously this is a powerful tool and the linear approximation allows gently to derive theorems and provide some *necessary* conditions for convergence. However, to detail my thinking, I have to say that these stability analysis seem to provide only a small picture on why GD/SGD generalises well: for example, I know well a lot of the cited papers of the rebuttal and they are all written *as if* the trajectory implicitly minimises the curvature of the loss (stability criterion) to enhance stability. This is absolutely not the case, and at a given step-size there are a lot of stable global minima that generalise poorly. On this perspective the presented paper is less overclaiming than the cited ones.\n", " Dear reviewer,\n\nThanks for your quick feedback. We would appreciate it if you could be more specific on the points that you find not convincing. \n", " Thanks for pointing out precise training dynamics in the case of cyclical step sizes.\n\nFor the other arguments on linear stability, I understand the simplicity of such an analysis. However, I am sorry to say that I am still not convince neither by the arguments presented in the comments nor by the cited papers that I find fuzzy on numerous points.\n\nFor the other points, I thank the authors for the answer and will not change my score.\n", " We thank all the reviewers for their encouraging and insightful comments. We greatly appreciate the time and effort you spent on our paper. We will incorporate these valuable suggestions into the final version of our paper. ", " **Q: The analysis only yields a sufficient condition for instability, not a necessary-and-sufficient condition.**\n\nYes, you are correct. To obtain a sufficient-and-necessary condition, we must consider both the curvature-induced and noise-induced instability simultaneously. 
In higher dimensions, however, this is rather complicated, e.g., we may need to track the instabilities along different eigen directions instead of only the average ones as done currently. For this reason, we decided to leave this to future work. \n\nOn the other hand, we stress that our condition is **necessary for stability**, which allows us to conclude that SGD tends to select flat minima (under the condition that the noise aligns with the local landscape). This is one of the key differences/improvements over (Wu et al., 2018). \n\n&nbsp;\n&nbsp;\n---\n\n**Q: While in general the stability will depend on both the full-batch component and the noise component, the analysis here considers only the noise component in isolation.**\n\nWe have added a remark on this point in lines 265-268 of the revised submission. ", " Indeed, in this paper, we only performed the linear stability analysis (LSA) near global minima, but we do not suggest that LSA is irrelevant in other regions, in particular during SGD training. Define the following stability factor at the SGD solution $\theta_t$ by\n$$\n\gamma_t = \frac{||H(\theta_t)||_F}{\sqrt{B/\mu(\theta_t)}/\eta}.\n$$\nThe condition of linear stability is $\gamma_t\leq 1$. The following table reports the values of $\gamma_t$ along the SGD trajectory for the case of batch size = 5 (second column) and batch size = 40 (third column). Here the model is the fully-connected network, with the other settings the same as those provided in the submission.\n\n| steps | B=5 | B=40 |\n| ----- | ---- | ---- |\n|1|0.34| 0.09|\n|401|0.31| 0.15|\n|801|0.33| 0.13|\n|1201|0.39| 0.13|\n|1601|0.25| 0.12|\n|2001|0.25| 0.12|\n|2401|0.25| 0.11|\n|2801|0.25| 0.11|\n|3201|0.26| 0.11|\n|3601|0.25| 0.10|\n\nWe see clearly that $\gamma_t<1$, i.e., the Hessian's Fro-norms are smaller than the bounds predicted by LSA, during the whole SGD training. Moreover, as expected, the bounds in the small-batch case are tighter. We can thus conclude that SGD stays in the stable region predicted by LSA. \n\nTheoretically speaking, the validity of LSA near fixed points (i.e., global minima) only requires that the landscape **locally** behaves like a quadratic function, which is often true since the first-order coefficients are zero at global minima. However, *for non-fixed points, the validity of LSA needs the landscape to behave like a quadratic function on a **non-local** scale*, at least in the most-unstable directions (typically, the leading eigen directions of the Hessian).\n\nHowever, due to the page limit, we leave the discussion of the implications, and the explanation of why LSA is valid beyond global minima (i.e., the justification of the non-local quadratic behavior of the neural network landscape), to future work. ", " Yes, you are absolutely correct. We have done some extra experiments to see how the batch size affects the tightness of our bound, and the experimental results (given by the two tables below) confirm your conjecture. We also refer the reviewer to the descriptions in lines 329-336 and Figure 5c of the revised submission for more details. Note that due to the page limit, the result of VGG16 is not added to the revised submission. Notice that here we only conduct the experiment for classifying a two-class subset of CIFAR-10 due to the time limit. We will update it to the full CIFAR-10 experiment in the future.
\n\n\n&nbsp;\n\n**ResNet 38**\n\n| __batch size__ | 4 | 8 | 16 | 32 | 64 |\n|-|-|-|-|-|-|\n| __bound__ | 62.92 | 92.02 | 122.48 | 231.13 | 306.64 |\n| __flatness__ | 18.34 | 33.38 | 34.18 | 30.61 | 32.15 |\n\n&nbsp;\n\n**VGG16**\n\n| __batch size__ | 8 | 16 | 32 | 64 | 128 |\n|-|-|-|-|-|-|\n| __bound__ | 39.59 | 48.41 | 88.18 | 139.16 | 165.71 |\n| __flatness__ | 27.11 | 33.90 | 45.95 | 34.18 | 23.01 |", " A short explanation: The size-independence of flatness is crucial for ensuring the complexity of SGD solutions to be bounded independently of the model size, thereby ensuring the generalization in the over-parameterized regime. A detailed explanation is given below. \n1. First, if we view the flatness, which is Hessian's fro-norm in the current paper, as a complexity measure that can effectively control the model's capacity, then the size independence implies that the complexity of SGD solutions does not increase as increasing the model size. Therefore, it is crucial for arguing that SGD finds generalizable solutions, in particular in the over-parameterized regime. Specifically, in a separate work (we will appropriately cite it in the final version of this submission), we already proved that for two-layer neural nets (under certain conditions), \n\n$$\n \\qquad\\qquad\\qquad \\qquad gen-gap (\\theta^*) \\leq O\\left(\\frac{||H(\\theta^*)||_F}{\\sqrt{n}}\\right) \t\\qquad \\qquad \\qquad \\qquad \\qquad (P),\n$$\n\nwhere the hidden constant only depends on the input dimension linearly. Together with the size independence of $||H(\\theta^*)||_F$ guaranteed by the linear stability, we can conclude that SGD finds generalizable solutions, thereby explaining the **implicit Regularization**. \n\n2. Secondly, this size independence of Hessian's Fro-norm is also a major difference between SGD and GD. For GD, the stability only guarantees that $\\lambda_\\max(H(\\theta^*))\\leq 2/\\eta$. Therefore, in general, GD finds minima, where only the curvature of the sharpest direction is controlled. By contrast, SGD tends to find minima, where the landscape is uniformly flat in different directions due to that the whole spectrum of Hessian is controlled. A naive bound of the Hessian's Fro-norm of GD solution is \n\n$$\n\\qquad\\qquad\\qquad ||H(\\theta^*)||_F\\leq \\sqrt{\\text{rank}(H(\\theta^*))} \\lambda_\\max (H(\\theta^*))\\leq \\frac{2\\sqrt{\\min(n,p)}}{\\eta},\n $$\n\nwhere $n,p$ denotes the sample and model size, respectively. Plugging this into the preceding generalization bound (P) only yields a vacuous/trivial bound of generalization gap. This partially explains why SGD generalizes better than GD. \n\nIn a word, the size independence of Hessian's Fro-norm is critical to explaining why SGD selects generalizable minima and distinguishes SGD from GD. \n", " First, we would like to point out that the escape phenomenon in Figure 4 does occur in real training of neural networks. This is the typical case of training with a *cyclical learning rate (LR)*. We refer the reviewer to Figure 2 in (Smith, 2017), Figure 2 in (Huang et al., 2017), and Figure 2 in (Izmailov et al., 2018), where one can see that the training loss **suddenly** increases (within a few iterations) to $\\Omega(1)$ when increasing the learning rate. This intriguing phenomenon in cyclical LR training can be explained by the exponential escape behavior investigated in our paper. 
We believe that one can at least partially explain the implicit regularization of cyclical LR training by using the stability argument, which is one of our ongoing projects. \n\nSecond, even for the normal training of neural networks, the stability argument is still relevant. It can explain why SGD does not enter sharp regions during the whole training process, since SGD is (exponentially) unstable there. Specifically, in the current paper, we only focus on the end of training (i.e., around global minima). We also refer the reviewer to our response to Reviewer 9D5P (click [the link](https://openreview.net/forum?id=rUc8peDIM45&noteId=nIC2VFvJYnY)), where we provide a preliminary experiment showing that the Hessian's Fro-norm is below the upper bound predicted by the stability argument during the entire SGD trajectory. We also kindly refer the reviewer to (Cohen et al., 2021), which studies the whole GD trajectory using linear stability and finds that the leading eigenvalue of the Hessian is close to 2/learning_rate. All these indicate that the stability argument is also relevant for studying the training process. However, a systematic study needs much more work, which we leave to future work. \n\nThird, stability arguments can also explain why SGD/GD solutions generalize well for some simplified models. For example, recent work (Mulayoff et al., 2021; Nacson et al., 2022) shows that the largest eigenvalue of the Hessian can bound the generalization gap for *univariate* two-layer networks and two-layer diagonal linear networks. Combined with the stability condition of GD, these works imply that GD only selects generalizable minima. For SGD, (Ma et al., 2021) shows that linear stability can ensure the boundedness of the Sobolev seminorm of the implemented functions for MLPs, thereby explaining the generalization. However, it is well known that the boundedness of the Sobolev seminorm cannot explain the generalization in high dimensions, since the corresponding generalization bound suffers from the curse of dimensionality. In a separate work (we will appropriately cite it in the final version of this submission), we already proved that the Fro-norm of the Hessian can effectively control the generalization gap of two-layer neural networks (2LNN); see our response to Reviewer *9D5P* on the importance of the size independence of flatness (click [the link](https://openreview.net/forum?id=rUc8peDIM45&noteId=82YsEyHgU0F)). Combining this with the stability analysis in the current paper, we can conclude that SGD only selects minima that provably generalize for 2LNN. \n\nWe emphasize that stability analysis is a simple but very powerful tool for analyzing general nonlinear dynamics. The major advantage of stability analysis is its generality. For example, our linear stability analysis applies to the training of real deep networks and yields meaningful characterizations of the dynamical behavior of SGD, e.g., explaining the selection of flat minima. In contrast, analyses that rely on precise descriptions of the dynamic processes only work for some toy models, such as linear networks. \n\nLastly, we do agree with the reviewer that many issues cannot be explained by the stability argument alone. For example, the stability argument cannot give us a precise description of the training process, such as the convergence rate.\n\n&nbsp;\n---\n**Reference**\n\nLeslie N. Smith, Nicholay Topin, *Exploring loss function topology with cyclical learning rates*, ICLR 2017 workshop track.
\n\nGao Huang, et al., *Snapshot Ensembles: Train 1, Get M for Free*, ICLR 2017.\n\nPavel Izmailov, et al., *Averaging Weights Leads to Wider Optima and Better Generalization*, UAI 2018.\n\nJeremy M. Cohen, et al., *Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability*, ICLR 2021.\n\nRotem Mulayoff, et al., *The Implicit Bias of Minima Stability: A View from Function Space*, NeurIPS 2021.\n\nMor Shpigel Nacson, et al., *Implicit Bias of the Step Size in Linear Diagonal Neural Networks*, ICML 2022.\n\nChao Ma, et al., *On Linear Stability of SGD and Input-Smoothness of Neural Networks*, NeurIPS 2021.", " The decoupling approximation is introduced to heuristically explain the geometry awareness of SGD noise and to motivate our definitions of the alignment factors. We stress that we are not trying to say that the decoupling approximation itself is valid. \n\n1. *In our opinion, instead of showing the validity of the decoupling approximation in equation (2), we relax it to the* nondegeneracy *of the alignment between the noise covariance and the local Hessian.* The latter is much weaker than the former, which implies a complete alignment. Note that the alignment nondegeneracy has been numerically verified for practical models and theoretically justified for some toy models. In contrast, the former, i.e., the decoupling approximation, is never exactly valid. For example, the CIFAR-10 experiments **numerically** show that $\alpha(\theta)$ and $\mu(\theta)$ are not close to $1$; Theorem 2.1 shows that the decoupling approximation **provably** loses a rank-1 term for linear networks. \n\n2. However, we do agree with the reviewer that a fine-grained characterization of the noise covariance in a strong sense is also important. One particular example is mentioned by the reviewer: why are the alignment factors close to 1 for small-scale problems but somewhat away from $1$ for the CIFAR-10 problem? A systematic study of how different factors, such as model architectures, model size, and sample size, affect the alignment strength is beyond the scope of this paper. We leave it to future work. Here, we particularly mention that the boundedness of the alignment defined by us only implies an average concentration between the noise covariance and the local Hessian. As a consequence, we can only show that the expected loss blows up exponentially in the escape process. If we want to characterize properties such as the escape directions, a stronger characterization of the noise covariance will be needed. \n\nLastly, we emphasize that the most important contribution of this paper is to show that stability can ensure that SGD only selects flat minima if the SGD noise aligns with the local landscape. A more fine-grained characterization of SGD noise is definitely important and helpful for understanding SGD dynamics, but it is beyond the scope of the current paper. \n\n\n\n&nbsp;\n&nbsp;\n---\n**Q: Maybe the authors could try to provide an estimation of how far each approximation is from the real covariance.**\n\nWe feel that this is beyond the scope of this paper, though we agree that this is a very interesting and important question. Note that the analysis of (average) linear stability is valid as long as the alignment is non-degenerate. In other words, our stability analysis works even if the decoupling approximation is invalid. Therefore, estimating the error of the decoupling approximation is irrelevant.
More importantly, one of the most important messages hidden in our analysis is that it is unnecessary to pursue a precise/strong characterization of the noise covariance for analyzing some properties of SGD; certain weak characterizations, such as the alignments, are sufficient. \n\n\n&nbsp;\n&nbsp;\n---\n**Q: In Figure 1, $\alpha$, $\beta$ and $\mu$ are close to 1. Figure 5 is not, as the scale of these constants can be (way) smaller than 1.**\n\nThe finding that these alignments, in particular the angle alignment $\alpha$, are close to $1$ for those small-scale experiments is striking and unexpected; it might be very important for understanding SGD dynamics. However, as explained above, in this paper we only need $\mu$ to be bounded below, rather than close to 1, and this has been sufficiently supported by the current experiments. ", " We completely agree that this claim is not well supported and is even inaccurate. We have rewritten this sentence and added more experiments and discussions; we refer the reviewer to lines 337-346 (marked in blue) in the revised submission. The following tables show how the values of $\alpha,\beta,\mu$ of convergent solutions change as the number of classes increases. Here, we only report the alignments at convergence for simplicity, and we checked that similar patterns also hold during training. One can see a clear trend that $\beta$ and $\mu$ decrease with the number of classes, but the angle alignment $\alpha$ is more robust to the number of classes. These results have not been added to the revised paper only because of the page limit. \n\nVGG16 for classifying MNIST\n\n| __#class__ | 2 | 5 | 8 | 10 |\n|-|-|-|-|-|\n| $\mu$ | 0.22 | 0.17 | 0.11 | 0.10 |\n| $\alpha$ | 0.79 | 0.71 | 0.86 | 0.82 |\n| $\beta$ | 0.28 | 0.24 | 0.12 | 0.12 |\n\n\nVGG16 for classifying CIFAR-10\n\n| __#class__ | 2 | 5 | 8 | 10 |\n| ---------- | ---- | ---- | ---- | ---- |\n| $\mu$ | 0.85 | 0.14 | 0.08 | 0.06 |\n| $\alpha$ | 0.91 | 0.87 | 0.73 | 0.81 |\n| $\beta$ | 0.94 | 0.17 | 0.10 | 0.08 |\n\nHowever, obtaining conclusive results still needs much more work, such as discussing the influence of model architectures. Considering the page limit of NeurIPS, we prefer to leave the systematic study to future work. ", " \n- **figure 5, it could be useful to add VGG11 to the right panel**\n\nThanks for the suggestion. We have added the VGG11 result to the right panel of Figure 5b. Please take a look at the revised submission. \n\n- **the paragraph \"notion of flatness\" at l110 may be made more clear. Also, the role of the norm in flatness notions is not clear.**\n\nSorry for that. We refer the reviewer to lines 111-119 (marked in blue) of the revised submission, where the paragraph is rewritten to contain more details and make the statement clearer. ", " It really depends on which flatness notion is used. For example, the largest eigenvalue of the Hessian should remain nearly unchanged as the model size increases, since stability ensures $\lambda_{\max}(H)\leq 2/\eta$. However, stability cannot provide direct control of the trace of the Hessian, and the trace may depend on the model size more significantly. The following tables compare how the Fro-norm and trace of the Hessian change with model size for linear networks and fully-connected networks. Each cell reports both the average and the standard deviation (the value in parentheses) of the trace and Fro-norm over 10 independent runs.
\n\nFully-connected networks.\n\n|network width/fully-connected nets|Fro-norm|Trace|\n|---|---|---|\n|10|3.4 (1.2)|6.0 (2.5)|\n|20|2.5 (0.3)|5.0 (0.9)|\n|40|2.9 (0.2)|7.4 (1.4)|\n|80|3.1 (0.1)|9.7 (1.6)|\n|160|3.1 (0.1)|12.6 (1.3)|\n|320|3.5 (0.1)|16.1 (1.0)|\n\nLinear networks.\n\n|network width/linear net|Fro-norm|Trace|\n|---|---|---|\n|10|2.0 (0.1)|5.3 (0.4)|\n|20|2.1 (0.1)|6.0 (0.4)|\n|40|2.3 (0.1)|7.7 (0.5)|\n|80|2.4 (0.1)|9.5 (0.5)|\n|160|2.4 (0.1)|10.7 (0.2)|\n\nWe see that, as expected, the trace indeed increases much more significantly than the Fro-norm. However, it is worth noting that the trace itself does not increase too much either, which cannot be explained by our stability argument. This is probably attributed to the particularity of neural network models. A plausible explanation is that SGD tends to find minima of neural networks where the Hessian is low-rank. In such a case, the trace is close to the Fro-norm, and the latter is provably bounded independently of the model size. But explaining why SGD tends to find low-rank minima is beyond the scope of this paper. ", " A short explanation: The size-independence of flatness is crucial for ensuring that the complexity of SGD solutions is bounded independently of the model size, thereby ensuring generalization in the over-parameterized regime. A detailed explanation is given below. \n1. First, if we view the flatness, which is the Hessian's Fro-norm in the current paper, as a complexity measure that can effectively control the model's capacity, then the size independence implies that the complexity of SGD solutions does not increase as the model size increases. Therefore, it is crucial for arguing that SGD finds generalizable solutions, in particular in the over-parameterized regime. Specifically, in a separate work (we will appropriately cite it in the final version of this submission), we already proved that for two-layer neural nets (under certain conditions), \n\n$$\n\text{gen-gap}(\theta^*) \leq O\left(\frac{||H(\theta^*)||_F}{\sqrt{n}}\right), \qquad (P)\n$$\n\nwhere the hidden constant only depends linearly on the input dimension. Together with the size independence of $||H(\theta^*)||_F$ guaranteed by the linear stability, we can conclude that SGD finds generalizable solutions, thereby explaining the **implicit regularization**. \n\n2. Secondly, this size independence of the Hessian's Fro-norm is also a major difference between SGD and GD. For GD, the stability only guarantees that $\lambda_{\max}(H(\theta^*))\leq 2/\eta$. Therefore, in general, GD finds minima where only the curvature of the sharpest direction is controlled. By contrast, SGD tends to find minima where the landscape is uniformly flat in different directions, because the whole spectrum of the Hessian is controlled. A naive bound on the Hessian's Fro-norm of a GD solution is \n\n$$\n||H(\theta^*)||_F\leq \sqrt{\text{rank}(H(\theta^*))}\,\lambda_{\max}(H(\theta^*))\leq \frac{2\sqrt{\min(n,p)}}{\eta},\n$$\n\nwhere $n,p$ denote the sample and model size, respectively. Plugging this into the preceding generalization bound (P) only yields a vacuous/trivial bound on the generalization gap. This partially explains why SGD generalizes better than GD. \n\nIn short, the size independence of the Hessian's Fro-norm is critical to explaining why SGD selects generalizable minima and distinguishes SGD from GD.
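For concreteness, one can chain the two displayed bounds (constants suppressed; a sketch, using the stability bound $||H(\theta^*)||_F \leq \sqrt{B/\mu}/\eta$ from the stability-factor discussion above):\n$$\n\text{gen-gap}(\theta^*) \lesssim \frac{||H(\theta^*)||_F}{\sqrt{n}} \leq \frac{\sqrt{B/\mu}}{\eta\sqrt{n}},\n$$\nwhich is independent of the model size $p$; plugging the GD bound into (P) instead gives only $O\big(\sqrt{\min(n,p)}/(\eta\sqrt{n})\big)$, which is vacuous in the over-parameterized regime $p\geq n$.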
\n", " The paper provides an explanation of the phenomenon of SGD selecting flat minima by relating the noise structure of SGD to its linear stability. In over-parameterized models with the square loss, it shows, exploiting the geometry awareness of SGD noise (that is provable for linear networks and RFMs), that the Hessian of an accessible minimum for SGD is bounded by a term depending only on the batch size an the learning rate. Strenghts:\n- the topic of study is relevant\n- the quantitative analysis is extensive, going from simple models s.t. RFMs and linear models to modern deep neural networks\n- the paper is well written and clear - figure 3: are the results robust w.r.t. other flatness notions?\n- figure 5, it could be useful to add VGG11 to the right panel\n- the paragraph \"notion of flatness\" at l110 may be made more clear. Also the role of the norm in the flatness notions is not clear. also, size-independence plays an important role\n\nMinors:\n- l96: initiated studied The role of overparameterization and the specific loss type (quadratic) are not discussed\n\npotential negative societal impact are not discussed", " The authors of the paper present the following contributions:\n- They provide an in-depth study of the importance of the SGD noise both in term of geometry and in terms of scale.\n- They show that, if a certain alignment property is satisfied, a global minimum is linearly stable if and only if the Froebinius norm of the Hessian at optimum is upper bounded by constant independent of model and sample size ### **Strengths**\n\n- The first thing I have to say is that the paper is very well written. The exposition is clearly conducted and all assumptions and estimations are detailed, announced and commented. One may or may not validate the authors' noise model, but at least it is not hidden (as too often) and keys are given to appreciate the results\n- The main strength and difference of the article is the particular attention given on the noise model both in terms of noise and of geometry. From this, even if the results seem easy to prove, the exposition is cristal clear and more convincing with the previous literature.\n\n### **Weaknesses**\n\n- There are a lot of results and approximation stated in the article, but overall, the crux of the paper is two show that the approximations of the equation (2) at the beginning of the paper is valid: that is to say that either $\\alpha, \\beta$ or $\\mu$ are close to $1$. If Figure 1 is pretty convincing from this point of view, I have to say that Figure 5 is not as the scale of these constant can be (way) smaller than 1. I would really appreciate if the authors comment more on this point because the sentence \"*This comparison suggests that the alignment strength significantly depends on the intrinsic complexity of the problem, (nearly) independent of the model size*\" is not very convincing.\n- This is a minor weakness but overall, even if the paper is not overselling and clear about their study, I am still not convinced by the stability argument of SGD. Indeed, on never see plots like Figure 4 in real training dynamics of neural networks and this suggests -at least to me- that the taking into account the noise is not a stability issue but a dynamical one. Maybe the authors could comment a bit on this fact. 
\n\n\n### **Minor typos**\n\n- line 65: $\lambda_i$ and not $\lambda_1$\n- line 227: In **the** current paper\n- lines after 235: there is a confusion between $v$ and $\nu$\n\nIn my opinion, the crux of the paper is to give a precise (or experimental) sense to the approximations of equation (2). Maybe the authors could try to provide an estimation of how far each approximation is from the real covariance: \n- first, regarding the influence of the loss\n- second, regarding the independence between $\nabla f(x_i, \theta)$ and $L_i$\n\nThis corresponds to the constants $\mu_1$ and $\mu_2$ defined in the paper, but no experiments are shown concerning these variables. I use this paragraph to conclude, as I already discussed the limitations in the previous boxes. The paper tries to conduct both theoretical and experimental explanations of the wide-minima selection of SGD. I'll weakly accept the paper for its clarity and detail about the noise. To raise my score, I would like the authors to further justify the importance of the escaping phenomenon and the alignment phenomenon of the covariance structures. \n", " Deep learning folklore holds that the gradient noise in SGD causes it to prefer \"flat minima.\" An important open question in deep learning theory is to make this folklore mathematically precise. Taking a step in that direction, this paper describes a mechanism by which SGD escapes exponentially quickly from minima where the Frobenius norm of the Hessian is too large. This mechanism is orthogonal to the curvature-driven process that causes full-batch gradient descent to escape from minima where the Hessian spectral norm exceeds 2 / step size.\n\nStrengths:\n\n -- To the best of my knowledge, the idea of an exponentially fast escape that is driven purely by noise is novel. It is interesting that a required condition for this phenomenon (noise magnitude proportional to loss value) is provably satisfied in linear nets and random feature models.\n\nWeaknesses:\n\n -- The analysis only yields a sufficient condition for instability, not a necessary-and-sufficient condition.\n\n -- While in general the stability will depend on both the full-batch component and the noise component, the analysis here considers only the noise component in isolation. Accounting for both simultaneously is going to be hard, so I don't begrudge the authors for this simplification. \n\n -- I think the paper would be clearer if the authors first presented the escape analysis (section 3) before the sufficient conditions (section 2). When I read section 2, I spent a lot of time scratching my head wondering why alpha, beta, and mu are defined the way they are. Later, when I got to section 3, I realized that $\mu$ is precisely what is needed to trigger exponentially fast escape.\n\n -- I didn't understand why the submission keeps mentioning that the flatness is independent of model size. Could the authors clarify why this is an important point?\n\n -- Are the authors sure that the linear stability analysis is only valid near a local minimum? I ask because in the full-batch case, the curvature-driven escape when sharpness exceeds 2 / (step size) is valid generically, not just near a local minimum.\n\n -- I wonder if the bound would be tighter on the real networks (VGG, ResNet) if you considered a batch size smaller than 64. Intuitively, I would expect the noise-driven escape to dominate when the batch size is small.\n\nYes, the authors adequately addressed the limitations." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "oZBxQX5Xqj6", "ZDUhVQGhorb", "TKpvsrf-gpe", "HX13j1t7OxR", "nips_2022_rUc8peDIM45", "45jEQ9uAYkA", "45jEQ9uAYkA", "82YsEyHgU0F", "45jEQ9uAYkA", "IuekMrQS3Aw", "IuekMrQS3Aw", "IuekMrQS3Aw", "Tgkm-OYaoIV", "Tgkm-OYaoIV", "Tgkm-OYaoIV", "nips_2022_rUc8peDIM45", "nips_2022_rUc8peDIM45", "nips_2022_rUc8peDIM45" ]
nips_2022_DSoFfnmUSjS
Recommender Transformers with Behavior Pathways
Sequential recommendation requires the recommender to capture the evolving behavior characteristics from logged user behavior data for accurate recommendations. However, user behavior sequences are viewed as a script with multiple ongoing threads intertwined. We find that only a small set of pivotal behaviors can evolve into the user's future action. As a result, the future behavior of the user is hard to predict. We term this characteristic of each user's sequential behaviors the \textit{Behavior Pathway}. Different users have their own unique behavior pathways. Among existing sequential models, transformers have shown great capacity in capturing globally dependent characteristics. However, these models mainly provide a dense distribution over all previous behaviors using the self-attention mechanism, making the final predictions overwhelmed by trivial behaviors not adapted to each user. In this paper, we build the \textit{Recommender Transformer} (RETR) with a novel \textit{Pathway Attention} mechanism. RETR can dynamically plan the behavior pathway specified for each user, and sparingly activate the network through this behavior pathway to effectively capture evolving patterns useful for recommendation. The key design is a learned binary route to prevent the behavior pathway from being overwhelmed by trivial behaviors. We empirically verify the effectiveness of RETR on seven real-world datasets, and RETR yields state-of-the-art performance.
Reject
This paper presents Recommender Transformer (RETR) with a pathway attention mechanism that can dynamically zero out the interactions (e.g., the trivial/noisy ones) in transformer-based sequential recommender systems. Extensive experimental results demonstrate the effectiveness of the proposed architecture. Overall this paper received mixed reviews with borderline scores. The reviewers raised concerns around baselines and evaluations, some of which the authors promptly addressed in the revision during the rebuttal period. I also read the paper in detail myself. I do agree with some of the concerns from the reviewers, but I don't think a method needs to beat every other published paper to be published (and I think the current baselines are more than thorough enough). My biggest complaint about the paper is around the writing, specifically, how the proposed idea is presented. This paper tries to tackle an important question, which is that in sequential recommendation, not every interaction is useful in helping predict future interactions. The self-attention mechanism in transformers kind of addresses this problem, but in a \"softer\" fashion with attention weights. This paper presents a simple yet effective method to introduce a pathway mechanism that adaptively zeroes out some of the interactions via a binary pathway router. In order to train such a model end-to-end, Gumbel-softmax sampling is utilized. The most important part of the contribution to me is that this is an improvement to the transformer architecture, as opposed to a new model, which is what this paper's writing suggests -- the proposed approach is effectively model-agnostic and doesn't marry itself to a particular loss function or finer-grained architectural choices (number of layers, etc.). Currently there are many baselines in the paper, but each made some different model/architecture choices, which could contribute to the difference in performance (or not, but we wouldn't know). An ideal evaluation would have been to take all the transformer-based baselines that are currently in the paper, add this pathway mechanism without changing anything else, and show that the results improve over the transformer architecture. In this way, we know the improvements are exactly coming from introducing the pathway. The authors might argue some of the current results already support this argument, but my point is to emphasize this point very explicitly rather than leaving it for the readers to infer. From what I read in this paper, I truly believe this pathway idea has its potential. Therefore, I would especially want the authors to further refine the presentation to better convey the idea, which in turn will hopefully increase the impact of this paper once it is eventually published. Some minor comments: * The way the paper is currently written seems to suggest there are only three types of pathways and the network is capable of capturing all of them. I am personally not a big fan of over-interpreting what a neural net is trying to do. Therefore, I wouldn't overly focus on the characterization of different pathways and would only show the qualitative examples at the end as a high-level demonstration. * In Eq 2, \"softmax\" should really be \"sigmoid\" if a 0-1 prediction is made there. Then, in the following line, \"logit\" is probably not the right word. * The qualitative examples at the end (figure 3) can be more carefully examined/labeled.
For example, the current categorization is quite ambiguous -- "Indie" refers to the type of developer while "JPG" refers to the genre of the game; they are certainly not mutually exclusive.
train
[ "OyTyyFxFg8T", "eTCW1-PTT45", "M6T4QDHnx-e", "XphfpovZP6m", "D3X61dzZzOh", "pvdGHBNZflV", "HOfwrrKEyyt", "Wz5DpVDGW-1", "A63DIh6aa0_", "HhY7CtxlkgP", "8uLC8gDdkDW", "ebsumYgZ77D", "VpzkKJJBzq3", "dcTPOURa5CG", "RrmwhJNYTUJ", "NDRCG5J6_5N", "yyw3FJodPHI", "kGNUEpkZKjL", "VfnnsSrHmB", "OoQ970eZ6CA" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nWe are sincerely looking forward to your efforts in reviewing our paper. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to have a further discussion with you about whether your concerns have been clarified or not. Please let us know if you still have any unclear issues with our work.\n\n\nThanks again for your time and reviews.\n", " Dear Reviewer\n\nYour academic suggestions inspire us a lot and make our paper strong. We appreciate your great efforts and really enjoy having a discussion with you. \n\n\nThanks again for your time and reviews", " Thanks again for your kind suggestions and insightful comments, which have greatly inspired us to improve our work.\n\nQ1. Further clarify the difference between RETR ( $L=1$ ) and SASRec ( $L=1$ ).\n\nWe clarify that the difference between the pathway attention in RETR ( $L=1$ ) and self-attention in SASRec ( $L=1$ ) is the query, while the key and value are the same in both attention methods. We adaptively route the query tokens for the pathway attention with the behavior pathway to make the RETR concentrate more on the behavior pathway rather than trivial behaviors.\n\nHowever, the last item prediction is also influenced by other prediction pairs. Technically, we train both the RETR ( $L=1$ ) and SASRec ( $L=1$ ) using the pairwise ranking loss in Eq. (7). These predictive models are trained with different prediction pairs $1 \\rightarrow 2$, $ 2 \\rightarrow 3$, ... , $N-1 \\rightarrow N$. Here $t-1 \\rightarrow t$ means predicting the $t$-th item only conditioned on the first $t-1$ items.\n\nThus, the training process of the representation $\\mathcal{Z}_N$ of the last item ( $N$ is the length) is also influenced by the training process of different $\\mathcal{Z}_t$, $t=1,2,3,..,N-1$. (The superscript $L=1$ omitted for clarity). \n**By incorporating the pathway module, our pathway attention is trained differently compared to the self-attention using different prediction pairs, leading to different weight parameters from the RETR ( $L=1$ ) and SASRec ( $L=1$ ).** The $\\mathcal{Z}_N$ in the RETR ( $L=1$ ) will be different from $\\mathcal{Z}_N$ in the SASRec ( $L=1$ ) with different weight parameters.\n\n\nQ2: For Table 2, how many seeds have you conducted?\n\nFor all baselines and RETR in Table 2, we have conducted three random seeds and reported the average results.", " Dear Reviewer,\n\nThank you once again for your review of our work. As the discussion period is approaching its end, we would be grateful if you could confirm whether our responses and the additions we have made to the manuscript addressed your concerns, and let us know if any issues remain.\n\nThanks again for your time and reviews.", " It is very grateful for the detailed answers to each question. Most of the questions have been resolved. Therefore, I raised the score.\n\nAdditionally, I still have some questions.\n\nQ1. Could you clarify the difference between RETR(L=1) and SASRec(L=1)? \n\nI understand that the different part between SASRec and RETR is that RETR uses pathway attention instead of self-attention. Specifically, the difference is the query, while the key and value are the same in both attention methods.\n\nIn particular, when L=1, for the key and value, the two methods are exactly the same. (since they use raw behavior embedding Z^0=X_s)\n\nBoth use Z_t^1 as the final user representation. 
Z_t^1 is the same in the two methods if RETR processes the last item (the t-th item) on-pathway, but it is different if the last item is processed off-pathway.\n\nQ2. For Table 2, how many seeds did you use?\n", " Dear Reviewer,\n\nThe final stage of discussion is ending soon, so please kindly let us know if our response has addressed your concerns. We will be happy to answer any additional issues/questions.\n\n\n\nThanks again for your time and reviews.", " Dear Reviewer,\n\nMany thanks for your time and effort in reviewing our paper. Your reviews are highly instructive and help improve our paper greatly.\n\nWe kindly remind you that we are at the **final stage of discussion (Aug 3rd-9th)** and have only a few days left for the discussion. We have made an exhaustive effort to address your concerns and answer your questions by providing all the supporting experiments you requested and clarifying all the questions you asked. **We have shown that the recommender still needs to deal with various behavior pathways and can be overwhelmed by trivial behaviors, whether the behavior sequence is short or long. MLP-based models like FMLP-Rec can also benefit from our pathway module.**\n\nIf you have any further concerns or questions, please do not hesitate to let us know, and we will be happy to answer them in a timely manner.\n\nAll the best, \nAuthors", " Dear Reviewer,\n\nThanks again for the time and reviews. Since the final stage of discussion is ending soon, please let us know if our response has addressed your concerns.\n\nAs requested, we have made every effort to show that our RETR has the ability to capture various behavior pathways effectively. With more supporting experiments, we showed that our RETR could be further enhanced using advanced backbones as alternatives to the vanilla Transformer. \n\nBesides, we provide quantitative results to validate that our RETR can effectively capture various behavior pathways, following your request. More supporting qualitative visualization results are also provided in $\underline{\textrm{Figure 3 of the revised paper}}$. We further included comparisons with S3-Rec, SINE, and TGSRec on all benchmarks.\n\n\nWe will be happy to answer any additional issues/questions.", " We thank all reviewers for their constructive comments. Accordingly, we have revised the paper substantially and uploaded the new version. Please check out the following changes (highlighted in green) in the revised paper: \n\n1. We compare RETR with more recent SOTA sequential recommendation models, including S3-Rec, SINE, TGSRec, Jodie, and TGN. Quantitative results are shown in Table 2, and detailed analyses are given in $\underline{\textrm{Section 4.1}}$. \n\n2. We add quantitative experimental results on large-scale real-world datasets (Netflix, MSD and Taobao), each of which contains a large number of users. Results can be found in $\underline{\textrm{Table 2}}$. \n\n3. We provide three typical examples corresponding to the casual, correlated and drifted behavior pathways, respectively, in $\underline{\textrm{Figure 3}}$. The detailed explanations are in $\underline{\textrm{Section 4.3}}$. We also add visualizations in $\underline{\textrm{Figure 1 of the Appendix}}$, which show that the recent MLP-based method FMLP-Rec is still overwhelmed by trivial behaviors.\n\n4. We provide the ablation study for multi-head attention in $\underline{\textrm{Appendix A}}$.\n\n5. 
We moved some content from the original paper to the appendix in the revision: the experimental results on the Beauty, Sports and Toys datasets (now in $\underline{\textrm{Appendix B}}$).\n\n\nIf there are any additional comments on the revision, please do not hesitate to let us know. We are glad to answer any further questions.\n", " **Q9:** Results on large-scale datasets.\n\nAs suggested by the reviewer, we conduct experiments on three large datasets: Netflix, MSD and Taobao.\nThe detailed statistics of these datasets can be seen in Table 1 of the revised paper. Netflix, MSD and Taobao have 463,435, 571,355 and 987,994 users, respectively, which clearly qualifies them as large-scale datasets.\n\nWe here give snapshot results on Netflix, MSD and Taobao (full comparisons can be found in $\underline{\textrm{Table 2 in the revision}}$). From these results, we can see that our RETR remarkably outperforms the recent SOTA recommendation methods. \n\n\n**Netflix**:\n\nMethod | NDCG@10 ($\uparrow$) | HR@10 ($\uparrow$) | MRR ($\uparrow$)\n---- | --- | --- | ---\nS3-Rec [1] | 0.3571 | 0.4917 | 0.2819\nSINE [2] | 0.3601 | 0.4902 | 0.2796\nTGSRec [3] | 0.3512 | 0.4887 | 0.2778\nLightSANs [4] | 0.3441 | 0.4852 | 0.2785\nRETR | 0.3725 | 0.5142 | 0.3134\n\n\n**MSD**:\n\nMethod | NDCG@10 ($\uparrow$) | HR@10 ($\uparrow$) | MRR ($\uparrow$)\n---- | --- | --- | ---\nS3-Rec [1] | 0.5381 | 0.5315 | 0.3494\nSINE [2] | 0.5304 | 0.5264 | 0.3667\nTGSRec [3] | 0.5279 | 0.5137 | 0.3612\nLightSANs [4] | 0.5163 | 0.4994 | 0.3451\nRETR | 0.5981 | 0.5912 | 0.3901\n\n\n**Taobao**:\n\nMethod | NDCG@10 ($\uparrow$) | HR@10 ($\uparrow$) | MRR ($\uparrow$)\n---- | --- | --- | ---\nS3-Rec [1] | 0.0827 | 0.1336 | 0.0919\nSINE [2] | 0.0873 | 0.1580 | 0.0934\nTGSRec [3] | 0.0745 | 0.1537 | 0.0802\nLightSANs [4] | 0.0694 | 0.1590 | 0.0741\nRETR | 0.1195 | 0.1768 | 0.1117\n\n\n**Q10:** Minor issues.\n\nThanks for your valuable suggestions. We have fixed these issues in the revised paper and added the ablation study of multi-head attention in Appendix A. \n\n---\n\n[1] Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen, \"S3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization,\" CIKM 2020.\n\n[2] Qiaoyu Tan, Jianwei Zhang, Jiangchao Yao, Ninghao Liu, Jingren Zhou, Hongxia Yang, Xia Hu, \"Sparse-Interest Network for Sequential Recommendation,\" WSDM 2021.\n\n[3] Ziwei Fan, Zhiwei Liu, Jiawei Zhang, Yun Xiong, Lei Zheng, Philip S. Yu, \"Continuous-Time Sequential Recommendation with Temporal Graph Collaborative Transformer,\" CIKM 2021.\n\n[4] Xinyan Fan, Zheng Liu, Jianxun Lian, Wayne Xin Zhao, Xing Xie, and Ji-Rong Wen, \"Lighter and Better: Low-Rank Decomposed Self-Attention Networks for Next-Item Recommendation,\" SIGIR 2021.\n\n[5] Shiyang Li, et al., \"Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting,\" NeurIPS 2019.\n\n[6] Rewon Child, et al., \"Generating Long Sequences with Sparse Transformers,\" arXiv preprint arXiv:1904.10509, 2019.\n\n[7] Srijan Kumar, Xikun Zhang, and Jure Leskovec, \"Predicting Dynamic Embedding Trajectory in Temporal Interaction Networks,\" KDD 2019.\n\n[8] Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, and Michael Bronstein, \"Temporal Graph Networks for Deep Learning on Dynamic Graphs,\" 
arXiv preprint arXiv:2006.10637, 2020.", " **Q3:** In Line 172, what's the meaning of \"lose the privilege to be part of the behavior pathway\"? Since the off-pathway tokens are also considered, it seems that all items will always be considered whether they are in the pathway or not.\n\nAs described in $\underline{\textrm{Line 172-176}}$ of the revised paper, the behavior pathway is updated hierarchically in the subsequent feed-forward procedure. Once a behavior token fails to be routed as part of the pathway in a certain block, this token cannot be part of the final behavior pathway. This is the meaning of \"losing the privilege to be part of the behavior pathway\". We further clarify this in the revised paper.\n\nOur RETR does not directly consider all items. As described in **Q1**, our pathway attention is a cross-attention between the pathway and off-pathway behavior tokens. The query for pathway attention only contains the pathway information, while the key and value are the whole input tokens. This cross-attention is shown by ablations to be the most effective way to make precise recommendations. \n\n\n**Q4:** Why is it necessary to design a pathway?\n\nAs described in **Q1**, we show that SASRec using the pathway as the input can achieve better performance compared with the original SASRec using the whole behavior sequence as the input. This indicates that the previous self-attention mechanism makes the pathway overwhelmed by other trivial behaviors.\n\nThus, we design a router to capture the accurate pathway and propose a novel pathway attention, which is a cross-attention mechanism between the pathway and off-pathway tokens. This cross-attention prevents our RETR from being overwhelmed by trivial off-pathway behaviors.\n\n\n**Q5:** Compare with recent baselines.\n\nWe add comparisons with the recent sequential recommendation models S3-Rec, SINE, TGSRec and LightSANs on all benchmarks. The full table is included in the revised paper. \n\nWe here give snapshot results on Tmall (full comparisons can be found in $\underline{\textrm{Table 2}}$ in the revision). From these results, we can see that our RETR remarkably outperforms the SOTA methods. \n\n\nMethod | NDCG@10 ($\uparrow$) | HR@10 ($\uparrow$) | MRR ($\uparrow$)\n---- | --- | --- | ---\nS3-Rec [1] | 0.5423 | 0.6687 | 0.5194\nSINE [2] | 0.5411 | 0.6512 | 0.5147\nTGSRec [3] | 0.5372 | 0.6506 | 0.5121\nLightSANs [4] | 0.5415 | 0.6399 | 0.5119\nRETR | 0.6103 | 0.7138 | 0.5822\n\n\n\n**Q6:** Replace the proposed pathway-based method with other sparse attention methods.\n\nAs suggested by the reviewer, we replace the proposed pathway-based method with two sparse attention methods: LogSparse [5] and sparse attention [6]. We conduct experiments on Tmall:\n\nMethod | NDCG@10 ($\uparrow$) | HR@10 ($\uparrow$) | MRR ($\uparrow$)\n---- | --- | --- | ---\nRETR w/ LogSparse [5] | 0.4923 | 0.6015 | 0.4735\nRETR w/ Sparse attention [6] | 0.4871 | 0.5873 | 0.4620\nRETR | 0.6103 | 0.7138 | 0.5822\n\nThe above results show that our RETR with pathway attention remarkably outperforms the two competing variants with sparse attention. These sparse attention methods cannot capture the exact behavior pathway and show worse performance than RETR.\n\n\n\n**Q7:** Compare with graph-based methods.\n\nWe add comparisons with the recent graph-based sequential recommendation models Jodie [7] and TGN [8] on all benchmarks. The full table is included in the revised paper. 
\n\nWe here give snapshot results on Tmall (full comparisons can be found in $\underline{\textrm{Table 2}}$ in the revision). From these results, we can see that our RETR remarkably outperforms the SOTA graph-based methods. \n\n\nMethod | NDCG@10 ($\uparrow$) | HR@10 ($\uparrow$) | MRR ($\uparrow$) \n---- | --- | --- | ---\nJodie [7] | 0.5307 | 0.6384 | 0.5003\nTGN [8] | 0.5198 | 0.6362 | 0.4997\nRETR | 0.6103 | 0.7138 | 0.5822\n\n\n**Q8:** More justifications to show that RETR will not be overwhelmed by trivial behaviors. \n\nWe further provide visualization results for the casual, correlated and drifted behavior pathways, respectively, in $\underline{\textrm{Figure 3 of the revised paper}}$. These three random samples from the Steam dataset provide strong evidence that our RETR can capture various pathways and can avoid being overwhelmed by trivial behaviors. \n\nAs suggested by Reviewer sJpS, we evaluate our RETR using the captured behavior pathway as the input and obtain comparable results on the Tmall dataset below. This provides quantitative evidence that our RETR can effectively capture various behavior pathways and is not overwhelmed by trivial behaviors. \n\nMethod | NDCG@10 ($\uparrow$) | HR@10 ($\uparrow$) | MRR ($\uparrow$)\n---- | --- | --- | ---\nSASRec | 0.5049 | 0.6275 | 0.4804\nRETR w/ pathway inputs | 0.6112 | 0.7142 | 0.5831\nRETR | 0.6103 | 0.7138 | 0.5822\n", " **Q1:** What's the essential difference between RETR and sequential models with an attention mechanism, putting aside the concept of the pathway?\n\nOur RETR has two essential differences compared with sequential models with an attention mechanism:\n\n- Our RETR designs the pathway router to capture the behavior pathway, while other sequential models have not considered it before. \n\n - As shown in $\underline{\textrm{Figure 3}}$ of the revised paper, the previous self-attention mechanism mainly focuses on the recent behaviors and cannot capture the accurate behavior pathway. The detailed analysis can be seen in **Q2**. Only our RETR can capture the precise behavior pathway. \n\n- The pathway attention of RETR is a cross-attention between the pathway behavior tokens and the off-pathway tokens. Our pathway cross-attention mechanism can avoid trivial interactions among the off-pathway tokens. \n\n - As described in $\underline{\textrm{Line 183-188}}$, we route the query using the captured pathway, which masks the off-pathway tokens as 0. Thus, the query for the pathway attention only contains information from the behavior pathway. \n\n - This cross-attention mechanism forces the pathway attention to attend to the behavior pathway; to ensure that the contextual information from off-pathway behavior tokens can be captured, the key and value of the cross-attention are the original input behavior tokens. \n\n - Our pathway cross-attention mechanism avoids trivial interactions among the off-pathway tokens. In contrast, the previous attention mechanism for sequential models is self-attention, which will be overwhelmed by the trivial information in the off-pathway behavior tokens. \n\nWe further conduct evaluation experiments on Tmall. Firstly, we train RETR on Tmall. Secondly, we use the trained RETR to capture the behavior pathway for each user in Tmall. 
Finally, we use the pathway behaviors and the off-pathway behaviors as inputs to train SASRec, respectively.\n\nMethod | NDCG@10 ($\uparrow$) | HR@10 ($\uparrow$) | MRR ($\uparrow$)\n---- | --- | --- | ---\nSASRec | 0.5049 | 0.6275 | 0.4804\nSASRec w/ pathway inputs | 0.5778 | 0.6812 | 0.5425\nSASRec w/ off-pathway inputs | 0.4383 | 0.5697 | 0.4215\nRETR | 0.6103 | 0.7138 | 0.5822\n\nFrom the above results, we can see that SASRec achieves better performance using the behavior pathway as the input compared with the original SASRec using the user's whole behavior sequence as the input. On the contrary, the off-pathway inputs seriously hurt SASRec's performance.\n\nFinally, our RETR achieves the best performance, indicating that the pathway-to-off-pathway cross-attention is more effective than the pathway self-attention.\n\n\n\n**Q2:** In the introduction, the authors list three kinds of behavior pathways, so how can RETR capture them to make precise recommendations in each case?\n\nThe pathway router in RETR is designed to detect the accurate behavior pathway. As described in $\underline{\textrm{Line 152-153}}$, the pathway router embeds global information from the whole behavior sequence. \n\nCapturing these three kinds of behavior pathways poses diverse challenges. For the **casual behavior** pathway, the recommender needs to capture the global interest of the whole sequence to avoid forgetting the early interests; for the **correlated behavior** pathway, even though previous methods can focus on the recent behaviors, these models also take the off-pathway tokens into consideration; for the **drifted behavior** pathway, the recommender needs to make decisions from a global view without over-focusing on the old, drifted pathway.\n\nTo overcome these challenges, the pathway router embeds global information from the whole behavior sequence and maintains the original information from the input representation. This makes the router capture the global trend of the user's behaviors while remembering the recent behavior information. \n\nWe have provided additional visualization results for the casual, correlated and drifted behavior pathways, respectively, in $\underline{\textrm{Figure 3 of the revised paper}}$. These three random samples from the Steam dataset provide strong evidence that our RETR can capture various pathways. \n\n\n\n", " **Q1:** The authors state that user pathways fall into three categories (correlated, casual, and drifted behavior pathways), but on what evidence? Justification is needed.\n\nWe use these three categories to cover the representative types of user behaviors.\n\n- A user may be randomly or regularly interested in a particular item. When the interest is random, we define this phenomenon as the Casual Behavior Pathway: the random interests lead to the casual behavior pathway. \n\n- If a user is regularly interested in a particular item, the user will be interested in it for a certain period. We define this phenomenon as the Correlated Behavior Pathway.\n\n- Otherwise, the user's interest evolves over time, which is widely observed in previous recommendation work such as SMRec [1]. A user's behaviors in a particular period might drift over time, and the user will become interested in another item. We refer to this phenomenon as the Drifted Behavior Pathway.\n\nWe further showcase these three types of user behaviors from the datasets in $\underline{\textrm{Figure 3 of the revised paper}}$. 
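To make the three types concrete before the formal definitions below, here are hypothetical toy click sequences (purely illustrative; the letters are stand-in item categories, not samples from our benchmarks):

```python
# Hypothetical toy behavior sequences illustrating the three pathway types
# ("." marks unrelated, trivial clicks; letters denote item categories).
casual     = ["A", ".", "A", ".", ".", "A", ".", "A"]  # "A" recurs at scattered, casual times
correlated = [".", ".", "B", "B", "B", "B", ".", "B"]  # "B" is clicked continuously in one period
drifted    = ["C", "C", "C", "C", "D", "D", "D", "D"]  # interest drifts from "C" to "D"
```

In the drifted case, for example, the pivotal tokens that the pathway router should keep are the recent "D" clicks rather than the older, drifted "C" ones.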
\n\n\n\n**Q2:** The types and definitions of pathways should be better justified.\n\nWe give the **detailed definition** of each behavior pathway:\n\n- **Casual behavior pathway**: The user clicks a particular class of items randomly at casual times; these click behaviors do not occur continuously. \n\n- **Correlated behavior pathway**: The user clicks a particular class of items continuously for a certain period. \n\n- **Drifted behavior pathway**: The user clicks a particular class of items continuously for a certain period. After that period, the user starts to click another particular class of items continuously for a certain period. \n\n**Q3:** What is the difference between the Correlated behavior pathway and the Drifted behavior pathway?\n \nThe drifted behavior pathway is different from the correlated behavior pathway because it considers evolving interests in long-range behaviors. On the contrary, the correlated behavior pathway mainly focuses on stable interests in short-range behaviors. \n\n\n---\n\n[1] Chao Chen, Haoyu Geng, Nianzu Yang, Junchi Yan, Daiyue Xue, Jianping Yu, and Xiaokang Yang. Learning self-modulating attention in continuous time space with applications to sequential recommendation. ICML 2021", " **Q1:** It is unclear why we need to use this switch router in sequential recommendation. \n\nPrevious sequential recommendation methods have proved that the recommender can benefit a lot from the user's historical behaviors, even though the behavior sequence may be short. However, even when faced with a short behavior sequence, the recommender still needs to deal with various behavior pathways and can be overwhelmed by trivial behaviors.\n\nIn $\underline{\textrm{Appendix Figure 1}}$ of the revised paper, we show the last 10 behaviors of a random user in the Steam dataset. Further, in $\underline{\textrm{Appendix Figure 1}}$ we show that the state-of-the-art MLP-based model, FMLP-Rec [1], is still overwhelmed by the old drifted behaviors (simulation games). \n\nTo avoid the recommender being overwhelmed by trivial behaviors, we design the pathway router to capture the pivotal behavior pathway that explains the user's preferences, whether the behavior sequence is short or long. It is crucial to develop the pathway router to capture the behavior pathway for making precise recommendations.\n\nOur pathway router is a **general module** that is designed not only for Transformers: it can also enhance MLP-based models. We apply the pathway router to the state-of-the-art MLP-based model, FMLP-Rec [1], and evaluate its performance on Taobao, MovieLens1M and Yelp. \n\n\n\nTaobao:\n\nMethod | NDCG@10 (higher is better) | HR@10 (higher is better) | MRR (higher is better)\n---- | --- | --- | ---\nFMLP-Rec [1] | 0.0678 | 0.1421 | 0.0603\nFMLP-Rec + pathway router | 0.0893 | 0.1659 | 0.0834\nRETR | 0.1195 | 0.1768 | 0.1117\n\nMovieLens1M:\n\nMethod | NDCG@10 (higher is better) | HR@10 (higher is better) | MRR (higher is better)\n---- | --- | --- | ---\nFMLP-Rec [1] | 0.5948 | 0.6043 | 0.5519\nFMLP-Rec + pathway router | 0.6217 | 0.8293 | 0.5704\nRETR | 0.6351 | 0.8467 | 0.5921\n\nYelp:\n\nMethod | NDCG@10 (higher is better) | HR@10 (higher is better) | MRR (higher is better)\n---- | --- | --- | ---\nFMLP-Rec [1] | 0.5024 | 0.7720 | 0.4299\nFMLP-Rec + pathway router | 0.5225 | 0.7946 | 0.4502\nRETR | 0.5136 | 0.7730 | 0.4354\n\n\nFrom the tables above, we can see that FMLP-Rec can benefit a lot from our pathway router. 
The pathway router is important for sequential recommendation models to avoid being overwhelmed by trivial user behaviors.\n\n\n\n**Q2:** In the experiment, \"we pair the ground-truth item with 100 randomly sampled negative items that the user has not interacted with.\" Does this raise a sampling bias?\n\nIn the previous literature, this sampling strategy is widely used. To avoid heavy computation on all user-item pairs, we followed the strategy used in SASRec. For each user, we randomly sample 100 negative items, and rank these items together with the ground-truth item. According to the rankings of these 101 items, HR@10 and NDCG@10 can be evaluated. This is a *de facto* configuration for sequential recommendation.\n\nFor fairness, we adopt the same sampling strategy for all compared models to evaluate the performance. All in all, this sampling strategy will not raise a sampling bias. \n\n\n\n**Q3:** Clarify the novelty of the architecture.\n\n- Previous MoE-style Transformers like Switch Transformer [3] usually adjust the pathway towards different feedforward networks (FFNs). On the contrary, our RETR routes the query for the pathway attention instead of routing the FFNs. Note that RETR does not route the network but routes the exact user behaviors. Furthermore, the behavior pathway is routed hierarchically towards the feedforward procedure, while this hierarchical procedure is not used in previous MoE-style Transformers.\n\n- Previous methods like [2] often use the Gumbel-Softmax to decide which task to choose. Our RETR is the first to route the user's behavior pathway using the pathway router. Specifically, the Gumbel-Softmax is adopted in our work to decide whether each behavior can be selected as part of the behavior pathway. Since reasoning about the exact behavior pathway is crucial to the sequential recommendation performance, our use of the Gumbel-Softmax strategy to capture the behavior pathway brings new ideas and practical guidance to the recommendation literature. \n\n\n---\n[1] Zhou, Kun, et al. \"Filter-enhanced MLP is all you need for sequential recommendation.\" Proceedings of the ACM Web Conference, 2022.\n\n[2] Shen, Jiayi, et al. \"Variational multi-task learning with gumbel-softmax priors.\" NeurIPS, 2021.\n\n[3] Fedus, William, Barret Zoph, and Noam Shazeer. \"Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity.\" JMLR, 2021.", " **Q5:** Quantitative results on whether the proposed model effectively captures a useful pathway. \n\nAs suggested by the reviewer, we give quantitative results to validate that our RETR can effectively capture various behavior pathways. We evaluate our RETR using a subset of sequences derived from the obtained behavior pathway on Tmall. \n\nTechnically, we first train RETR on Tmall. For each user, we take the captured behavior pathway from our RETR as the inputs to retrain a RETR, rather than using the user's whole behavior sequence. From the results below, we find that using the behavior pathway as the inputs achieves results comparable to the original RETR that uses complete user behaviors. This provides evidence that our RETR can aptly capture the useful pathway for each user. \n\nMethod | NDCG@10 ($\\uparrow$) | HR@10 ($\\uparrow$) | MRR ($\\uparrow$)\n---- | --- | --- | ---\nSASRec | 0.5049 | 0.6275 | 0.4804\nRETR w/ pathway inputs | 0.6112 | 0.7142 | 0.5831\nRETR | 0.6103 | 0.7138 | 0.5822\n\n\n\n**Q6:** If the proposed model is fairly compared with the existing model.\n\nOur RETR (L=1) is different from SASRec (L=1). 
Each RETR block has a pathway router to route the query from the original inputs. The pathway attention of RETR is cross-attention between the pathway tokens and off-pathway tokens, while SASRec uses self-attention without a pathway router. \n\nIn $\\underline{\\textrm{Table 3}}$, SASRec achieves the best performance with L=2 on Yelp. For fairness, the hyperparameters of all compared models are carefully tuned to achieve their best performance. The proposed model is fairly compared with the existing model.\n\nMethod | MRR ($\\uparrow$)\n---- | ---\nSASRec (L=1) | 0.3813\nSASRec (L=2) | 0.3927\nSASRec (L=3) | 0.3922\nSASRec (L=4) | 0.3919\n\n\n**Q7:** Minor typos.\n\nWe have carefully fixed these typos in the revised paper.\n\n\n---\n[1] Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen, “S3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization,” CIKM 2020 \n\n[2] Qiaoyu Tan, Jianwei Zhang, Jiangchao Yao, Ninghao Liu, Jingren Zhou, Hongxia Yang, Xia Hu, “Sparse-Interest Network for Sequential Recommendation,” WSDM 2021 \n\n[3] Ziwei Fan, Zhiwei Liu, Jiawei Zhang, Yun Xiong, Lei Zheng, Philip S. Yu, “Continuous-Time Sequential Recommendation with Temporal Graph Collaborative Transformer,” CIKM 2021\n\n", " **Q1:** Is $v_t$ in Eq. (4) the same as $\\widehat{\\mathcal{R}}_{t}^{l}$ in Eq. (5)? If so, please clarify this equation.\n\nNo, $v_t$ in Eq. (4) is not the same as $\\widehat{\\mathcal{R}}_{t}^{l}$ in Eq. (5). \n\nAs described in the main text around Eq. (3), $\\widehat{\\mathcal{R}}_{t}^{l}$ is obtained by the argmax operation during the feedforward procedure. \n\nHowever, the argmax operation is non-differentiable. To make the backward procedure differentiable, the Gumbel-Softmax calculates the gradient from Eq. (4), which is a differentiable approximation that relaxes $\\widehat{\\mathcal{R}}_{t}^{l}$ to $v_t$. The whole procedure can be regarded as the reparameterization trick widely used in deep learning. \n\n\n**Q2:** The proposed idea seems to be model-agnostic. Could you apply the pathway module for other sequential recommender models, e.g., BERT4Rec?\n\nAs suggested by the reviewer, we apply the pathway module to BERT4Rec and S3-Rec on MovieLens:\n\nMethod | NDCG@10 ($\\uparrow$) | HR@10 ($\\uparrow$) | MRR ($\\uparrow$)\n---- | --- | --- | ---\nBERT4Rec | 0.5965 | 0.8269 | 0.5614\nBERT4Rec + Pathway | 0.6376 | 0.8491 | 0.5972\nS3-Rec [1] | 0.6103 | 0.8312 | 0.5729\nS3-Rec [1] + Pathway | 0.6482 | 0.8577 | 0.6048\n\nIn the above table, we observe that our pathway module can improve the performance of BERT4Rec and S3-Rec substantially. RETR can be further enhanced by using advanced backbones as alternatives to the vanilla Transformer. \n\n\n**Q3:** In Figure 1, the authors mention various behavior pathways. Does the proposed model capture various pathways or focus on capturing drifted behavior pathways?\n\nOur RETR can capture various pathways. We further provide qualitative results for casual, correlated, and drifted behavior pathways respectively in $\\underline{\\textrm{Figure 3 of the revised paper}}$:\n\n- **Casual behavior pathway**: An example of the casual behavior pathway is shown in Figure 3(a). The RPG game is randomly clicked at casual times. Our RETR captures the whole RPG behavior pathway, while SASRec wrongly focuses on the recent adventure games. 
SASRec cannot capture the early clicked RPG game.\n\n- **Correlated behavior pathway**: An example of the correlated behavior pathway is shown in Figure 3(b). The indie game is clicked many times recently, leading to the final decision for an indie game. Our RETR can effectively capture the correlated behavior pathway. However, SASRec assigns higher attention scores to the recent RPG games. On the contrary, our RETR pays no attention to these wrong results.\n\n- **Drifted behavior pathway**: An example of the drifted behavior pathway is shown in Figure 3(c). The user was initially interested in the indie game, but suddenly became interested in simulation games recently and finally chose an indie game. Our RETR captures the drifted behavior pathway for the indie game and does not concentrate on the old drifted pathway -- simulation games. However, SASRec gives more attention to the simulation games, making it overwhelmed by trivial user behaviors. \n\nThe above three random examples from the Steam dataset provide strong evidence that our RETR can capture various pathways. \n\n\n**Q4:** Compare with recent sequential recommendation models.\n\nWe add comparisons with the recent sequential recommendation models S3-Rec, SINE, and TGSRec on all benchmarks. The full table is included in the $\\underline{\\textrm{revised paper}}$. \n\nHere we give snapshot results on Yelp, MovieLens1M, and Tmall (full comparisons can be found in $\\underline{\\textrm{Table 2}}$ in the revision). It can be observed that our RETR remarkably outperforms the SOTA methods. \n\n\nYelp:\n\nMethod | NDCG@10 ($\\uparrow$) | HR@10 ($\\uparrow$) | MRR ($\\uparrow$)\n---- | --- | --- | ---\nS3-Rec [1] | 0.4937 | 0.7597 | 0.4107\nSINE [2] | 0.4902 | 0.7564 | 0.4093\nTGSRec [3] | 0.4887 | 0.7533 | 0.4072\nRETR | 0.5136 | 0.7730 | 0.4354\n\n\nMovieLens1M:\n\nMethod | NDCG@10 ($\\uparrow$) | HR@10 ($\\uparrow$) | MRR ($\\uparrow$)\n---- | --- | --- | ---\nS3-Rec [1] | 0.6172 | 0.8352 | 0.5812\nSINE [2] | 0.6134 | 0.8311 | 0.5801\nTGSRec [3] | 0.6081 | 0.8303 | 0.5734\nRETR | 0.6351 | 0.8467 | 0.5921\n\n\n\nTmall:\n\nMethod | NDCG@10 ($\\uparrow$) | HR@10 ($\\uparrow$) | MRR ($\\uparrow$)\n---- | --- | --- | ---\nS3-Rec [1] | 0.5423 | 0.6687 | 0.5194\nSINE [2] | 0.5411 | 0.6512 | 0.5147\nTGSRec [3] | 0.5372 | 0.6506 | 0.5121\nRETR | 0.6103 | 0.7138 | 0.5822\n\n\n\n\n\n
Please refer to the following references.\n[1] Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, Ji-Rong Wen, “S3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization,” CIKM 2020\n[2] Qiaoyu Tan, Jianwei Zhang, Jiangchao Yao, Ninghao Liu, Jingren Zhou, Hongxia Yang, Xia Hu, “Sparse-Interest Network for Sequential Recommendation,” WSDM 2021\n[3] Ziwei Fan, Zhiwei Liu, Jiawei Zhang, Yun Xiong, Lei Zheng, Philip S. Yu, “Continuous-Time Sequential Recommendation with Temporal Graph Collaborative Transformer,” CIKM 2021\n- (Quality) In Section 4.3, the visualization result shows that the proposed model effectively captures the pathways in user sequences. However, it is wondering if this result is generalized for other cases. It is necessary to show a quantitative result on whether the proposed model effectively captures a useful pathway. One possible evaluation is that the proposed model also shows a comparable result using a subset of sequences derived from a pathway.\n- (Quality) In Table 3, RETR(L=1) is similar to SASREC(L=1). However, RETR(L=1) shows a better performance than SASRec(L=1). It is wondering if the proposed model is fairly compared with the existing model.\n- (Clarify) There are some minor typos.\n- 1p 37line: use’s -> user’s\n- 2p 41line: second -> first\n- 2p 44line: first -> second\n - Q1) Is V^t in Eq. (4) the same as hat(R^l) in Eq. (5)? If so, please clarify this equation.\n- Q2) The proposed idea seems to be model-agnostic. Could you apply the pathway module for other sequential recommender models, e.g., BERT4Rec?\n- Q3) In Figure 1, the authors mention various behavior pathways. Does the proposed model capture various pathways or focus on capturing drifted behavior pathways?\n This paper does not address the negative societal impact. However, this paper seems not to have any negative impact.", " In this paper, the authors propose the Recommender Transformer (RETR) with a novel Pathway Attention mechanism. RETR can dynamically plan the behavior pathway specified for each user, and sparingly activate the network through this behavior pathway to effectively capture evolving patterns useful for recommendation. Strength:\n\n1. The paper is well written and easy to follow. Basically, the authors try to use the behavior pathway in the Transformer.\n\n2. The experimental results are good compared with the baselines.\n\n\nWeakness:\n1. The novelty of this work is not very high. The mechanism of router is wildey used in the MoE-style Transformer. It seems that this work applied it to the sequential recommendation. The Gumbel-softmax is also widely used to optimize the discrete binary variable.\n\n2. It is unclear why need use this swith rounter in sequential recomemndation. Usually, the length of users' behaviors is very short, like less than 25 for most of transaction. Do we really need this router in the Transformer? \n\nActually, recent work verify that a simle MLP can outperform the Transformer in the sequential recommendation.\n\nZhou, Kun, et al. \"Filter-enhanced MLP is all you need for sequential recommendation.\" Proceedings of the ACM Web Conference 2022. 2022.\n\nThe motivation of this work is thus not very strong. \n\n3. In the experiment, \"we pair the ground-truth item with 100 randomly sampled negative items that the user has not interacted with.\" Does this raise a smapling bias? It is unclear why need use this swith rounter in sequential recomemndation. 
Usually, the length of users' behaviors is very short, like less than 25 for most of transaction. Do we really need this router in the Transformer? \n\nActually, recent work verify that a simle MLP can outperform the Transformer in the sequential recommendation.\n\nZhou, Kun, et al. \"Filter-enhanced MLP is all you need for sequential recommendation.\" Proceedings of the ACM Web Conference 2022. 2022.\n\n\n3. In the experiment, \"we pair the ground-truth item with 100 randomly sampled negative items that the user has not interacted with.\" Does this raise a smapling bias? Yes", " This paper proposes a recommender transformer with a pathway attention mechanism. It is characterized by its ability to capture three types of user pathways and predict user action sequences with high accuracy. The paper demonstrates the usefulness of the proposed method in comparison with several state-of-the-art methods using several types of real data. - Strengths\n - Starting with the actual example in Figure 1, the motivation for proposing the method is well explained, making it easy to understand the content of the proposed technique.\n - The authors have conducted prediction experiments using seven different behavioral log datasets from various sites. They compared the accuracy of the proposed method with seven existing methods and confirmed that the proposed method outperforms them.\n - Experiments are conducted using real data, not artificial data.\n- Weaknesses\n - The meaning and boundaries of the three types of pathways are vague. The definition of each should be clearly stated. Also, are these three types sufficient?\nFor example, what is the difference between the Correlated behavior pathway and the Drifted behavior pathway? They seem to have similar properties in the local and short-term. The types and definitions of pathways should be better justified, such as providing references that support the authors' definitions.\n - Although the experimental results quantitatively demonstrate the effectiveness of the proposed method, the architecture in Figure 2 is straightforward and somewhat lacking in technical novelty. The authors state that user pathways fall into three categories (correlated, casual, and drifted behavior pathways), but on what evidence? Justification is needed. There is no mention in the paper of negative impacts on society. Also, I can't think of any.", " The authors propose Recommender Transformer (RETR) with a Pathway Attention mechanism which can generate the behavior pathway hierarchically and capture the evolving patterns dynamically through the pathway. The key design is a learned binary route to prevent the behavior pathway from being overwhelmed by trivial behaviors. The authors also show RETR has high accuracy and efficiency compared with other self-attention or transformer based sequential recommendation methods through experiments. Pros:\n1. The paper is generally easy to follow.\n2. The idea of using pathway in recommendation algorithms seems to be new. \n3. The authors conducted extensive experiments on seven datasets to prove RETR can make accurate recommendations.\n\nCons:\n1. The essential difference between RETR and other sequential recommendation methods with attention mechanism is not clear. It seems to me that RETR not only utilizes the “on-pathway tokens” but also leverages the “off-pathway tokens” as they “also convey contextual information” and the difference between the two kinds of tokens is their weight. 
However, in SASRec and all the other self-attention-based methods, different tokens already have varying attention weights, so that more important historical items may have higher attention weights and less important historical items may have lower attention weights. I don't understand why it is necessary to design a pathway.\n2. The targeted problem is not new, and several recent works have been proposed to address the same issue of self-attention. Apart from reference [6] in the paper, there are several others trying to improve self-attention-based recommendation methods, for example, the LightSANs work published in SIGIR ’21. It would be useful to also conduct experimental comparisons with these more recent baselines in addition to [6]. Besides, there are many sparse attention works in the literature and it would be interesting to replace the proposed pathway-based method with other sparse attention methods to see if the pathway-based method is superior.\n3. Besides self-attention-based methods, other types of sequential recommendation methods, e.g., temporal graph-based sequential recommendation methods, have also achieved state-of-the-art results. Considering that attention weights can be regarded as edge weights in a graph, it might be useful to compare with some recent temporal graph-based sequential recommendation methods, e.g., Jodie in KDD 19 and TGN in ICML 20. Especially, TGN also adopted attention in its model.\n4. There is no evidence/experiment to show that RETR will not be overwhelmed by trivial behaviors. As this is the main claim of the paper, it is necessary to have more justification. The case study in Section 4.3 seems to be a cherry-picked result.\n5. All the datasets used in this paper are relatively small. Larger datasets such as MSD and Netflix would be more desirable.\n\nMinor issues:\n1. “in the second line” in Line 41 and “in the first line” in Line 44 should be exchanged. \n2. In column “Actions” of Table 1, the commas are not consistent. \n3. The authors should use “GRURec” or “GRU4Rec” consistently in the paper to avoid misunderstandings.\n4. The ablation study of multi-head attention is missing.\n - What is the essential difference between RETR and a sequential model with an attention mechanism when putting aside the concept of the pathway? \n- In the introduction, the authors list three kinds of behavior pathways, so how can RETR capture them to make precise recommendations in each case? \n- In Line 172, what is the meaning of “lose the privilege to be part of the behavior pathway”? Since the off-pathway tokens are also considered, it seems that all items will always be considered whether they are in the pathway or not. \n The authors have adequately addressed the limitations and potential negative societal impact of their work" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 3 ]
[ "kGNUEpkZKjL", "D3X61dzZzOh", "D3X61dzZzOh", "kGNUEpkZKjL", "pvdGHBNZflV", "Wz5DpVDGW-1", "kGNUEpkZKjL", "yyw3FJodPHI", "nips_2022_DSoFfnmUSjS", "8uLC8gDdkDW", "ebsumYgZ77D", "OoQ970eZ6CA", "VfnnsSrHmB", "kGNUEpkZKjL", "NDRCG5J6_5N", "yyw3FJodPHI", "nips_2022_DSoFfnmUSjS", "nips_2022_DSoFfnmUSjS", "nips_2022_DSoFfnmUSjS", "nips_2022_DSoFfnmUSjS" ]
nips_2022_VYYf6S67pQc
Mildly Conservative Q-Learning for Offline Reinforcement Learning
Offline reinforcement learning (RL) defines the task of learning from a static logged dataset without continually interacting with the environment. The distribution shift between the learned policy and the behavior policy makes it necessary for the value function to stay conservative such that out-of-distribution (OOD) actions will not be severely overestimated. However, existing approaches, penalizing the unseen actions or regularizing with the behavior policy, are too pessimistic, which suppresses the generalization of the value function and hinders the performance improvement. This paper explores mild but enough conservatism for offline learning while not harming generalization. We propose Mildly Conservative Q-learning (MCQ), where OOD actions are actively trained by assigning them proper pseudo Q values. We theoretically show that MCQ induces a policy that behaves at least as well as the behavior policy and no erroneous overestimation will occur for OOD actions. Experimental results on the D4RL benchmarks demonstrate that MCQ achieves remarkable performance compared with prior work. Furthermore, MCQ shows superior generalization ability when transferring from offline to online, and significantly outperforms baselines. Our code is publicly available at https://github.com/dmksjfl/MCQ.
Accept
All reviewers are generally positive or borderline about this paper. Reviewers note that the method is theoretically sound and practical to implement. Even though all of the components have been explored previously, the authors combine them in a novel approach that convincingly improves over prior works. Major concerns have been addressed by the authors' response; however, I agree with reviewer fVHB that per-dataset tuning of $\lambda$ muddies the comparison with previous approaches that do not do the same. I would encourage the authors to additionally report the best performance with a single setting across datasets to make the comparison clearer.
train
[ "ypA5GtDqVr7", "2GuLaPPxO23", "KRleZFwWeqk", "-7SBQjaf_06", "kiiCzzSgY7u", "pF_XLYDaMrj", "bF4CSSWbGxd", "o_wyfat7EZ", "CeYgin_IRf8", "NRgVb0baYUD", "QdRDMtPlRyj", "DBVROBYbvFJ", "camBAIVvBX", "mXjlR7JWkHw", "rckYN96AOBhD", "fZGFLJluKdJ", "t-QEeKXSLD", "ywstjKTuh4e", "3XX0PKsyDpd", "E0lc9o6_UgO", "Pr-ET5z8cn3", "7JZH26DsJ6S", "kxA_ykyvzHV", "6gjmp8wu473", "3ExOO6GRtn_", "Esk2paR47UX", "NeJy062t6gK" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the kind reply! We think many of the suggestions and comments from the reviewer are of great value to make our paper stronger. We are more than happy to include the discussion part of the CVAE into our revision.\n\nWe apologize that we misunderstand the comments from the reviewer (we think the reviewer comments that we need to know the support of behavior policy for constructing the pseudo target values). Indeed, we need to know the support of the behavior policy in the practical MCB operator (as we need to examine whether the sampled action from the learned policy is OOD). In practice, it is hard to determine whether the sampled action from the learned policy is OOD, we therefore resort to equally assigning them pseudo target values. We would like to clarify that UWAC tunes its parameter accordingly while the hyperparameters of the baseline methods seems unified (please refer to Section 5.5 in UWAC paper http://proceedings.mlr.press/v139/wu21i/wu21i.pdf). We follow the instructions of hyperparameter setup in the UWAC paper when reproducing it with its official codebase. We leave the automatic tuning of $\\lambda$ in MCQ as future work (we believe the concerns of the reviewer can be further mitigated then).\n\nWe really enjoy having such insightful discussions with the reviewer. Again, thanks for your time and efforts in making our paper better!", " \nDear authors,\n\nThank you for your timely and detailed response.\n\nThe discussion on the CVAE part is now much clear. \nI hope that the authors could add into revision the requirement of an empirically-good fitted behavior policy to mitigate the possible negative overestimation impacts from OOD actions.\n\nI agree with the authors that the whole logic of the paper is clear. \nBut when going over the described logic, there are several practical compromise especially when stepping into the practical implementation, which make the theory part less significant.\n\nAs an aside, I think the definition of practical MCB operator (L124) requires differentiating the case of $\\mu(a \\mid s) > 0$ and else. So it requires knowing the support of the behavior policy?\n\nI would like to clarify that there is a misunderstanding to my previous response.\nI did not mean that the authors should \"unify parameters across all datasets,\" but per-dataset tuning the hyperparameters via online evaluation, as in this paper, seems unfair compared with the baselines.\n\nThe reference of MOPO seems inappropriate, as it belongs to another realm of *model-based* offline RL, where per-dataset tuning of a small amount of hyperparameters seems common.\nThis is different from the baselines in the MCQ paper.\n\nI highly appreciate the efforts the authors put into making this paper better, and will remain my neutral rating for this work.\n\n", " Dear reviewer fVHB,\n\nWe thank the efforts and time you spend in reviewing our work. We really appreciate your thoughtful comments on our manuscript. Hopefully our response has addressed your concerns. As the author-reviewer discussion period is ending soon, we wonder whether there are some remaining concerns or questions. We will be glad to have a further discussion. More discussions and suggestions on further improving our paper are always welcomed!\n\nBest regards,\n\nThe authors", " We are happy that the concerns from the reviewer are addressed. We thank the reviewer for raising the score! 
", " Yes, my concerns have been addressed, I will raise my score accordingly.", " Dear reviewer EAmG, \n\nWe first would like to thank the reviewer's efforts and time in reviewing our work. We were wondering if our responses have resolved your concerns. We will be happy to have further discussions with the reviewer if there are still some remaining questions! More discussions and suggestions on further improving the paper are also always welcomed! We sincerely look forward to your kind reply!\n\nBest regards,\n\nThe authors", " We would like to thank the reviewer for the positive comments on our manuscript! Thanks for thinking that our work is solid and is worth being accepted by NeurIPS!", " Thanks for the results and they are very clear. I think this work is solid and I will vote for acceptance. ", " We thank the reviewer for the kind reply. We are happy to include the results of $\\lambda=1$ in the table, which can be found below. We want to note here that taking $\\lambda=1$ will make our MCQ degenerates into the vanilla SAC (since no weight is assigned to the auxiliary loss), which is why we write $\\lambda\\in[0.7,1)$ works fairly well instead of $[0.7,1]$. We find that the performance of MCQ degrades with $\\lambda=1$ for most of the cases, indicating the benefits and effectiveness of the auxiliary loss term (i.e., actively training OOD actions by assigning them proper pseudo target values).\n\nFor MCQ, we ought to assign a comparatively large weight over in-distribution actions (the bellman error) rather than the auxiliary loss to ensure that they are properly trained. Hopefully this response can address your concern. If there are still any remaining questions, please let us know!\n\n| Task Name | $\\lambda=0$ | $\\lambda=0.1$ | $\\lambda=0.3$ | $\\lambda=0.5$ | $\\lambda=0.7$ | $\\lambda=0.9$ | $\\lambda=1.0$ |\n| ---- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| halfcheetah-random | 2.2$\\pm$0.6 | 4.6$\\pm$1.3 | 5.5$\\pm$0.9 | 6.3$\\pm$2.6 | 19.5$\\pm$0.6 | 27.2$\\pm$0.9 | 29.7$\\pm$1.4 |\n| hopper-random | 0.7$\\pm$0.0 | 1.2$\\pm$0.5 | 10.8$\\pm$12.1 | 31.0$\\pm$0.7 | 31.4$\\pm$0.4 | 29.4$\\pm$4.3 | 9.9$\\pm$1.5 |\n| walker2d-random | -0.1$\\pm$0.0 | -0.1$\\pm$0.0 | 0.2$\\pm$0.2 | 8.7$\\pm$7.3 | 14.4$\\pm$7.4 | 4.5$\\pm$1.3 | 0.9$\\pm$0.8 |\n| halfcheetah-medium | -0.3$\\pm$0.3 | 38.2$\\pm$0.9 | 41.4$\\pm$0.7 | 43.9$\\pm$0.5 | 49.8$\\pm$0.4 | 61.2$\\pm$0.3 | 55.2$\\pm$27.8 |\n| hopper-medium | 1.7$\\pm$0.9 | 21.5$\\pm$5.5 | 27.1$\\pm$6.9 | 56.7$\\pm$18.6 | 78.4$\\pm$4.3 | 48.6$\\pm$13.2 | 0.8$\\pm$0.0 |\n| walker2d-medium | -0.1$\\pm$0.1 | 1.5$\\pm$1.2 | 60.1$\\pm$12.2 | 68.3$\\pm$2.8 | 72.8$\\pm$5.8 | 91.0$\\pm$0.4 | -0.3$\\pm$0.2 |\n| halfcheetah-medium-replay | -1.6$\\pm$3.7 | 17.6$\\pm$3.8 | 38.3$\\pm$0.5 | 40.9$\\pm$2.0 | 41.3$\\pm$1.7 | 55.1$\\pm$2.0 | 0.8$\\pm$1.0 |\n| hopper-medium-replay | 1.8$\\pm$0.8 | 2.3$\\pm$2.1 | 3.7$\\pm$3.2 | 4.9$\\pm$2.6 | 80.7$\\pm$20.4 | 101.6$\\pm$0.8 | 7.4$\\pm$0.5 |\n| walker2d-medium-replay | -0.2$\\pm$0.1 | 0.0$\\pm$0.2 | 0.3$\\pm$0.6 | 1.2$\\pm$1.0 | 32.2$\\pm$30.9 | 91.3$\\pm$5.7 | -0.4$\\pm$0.3 |\n\nTable 1. Normalized average score of MCQ over different choices of $\\lambda$ on MuJoCo \"-v2\" datasets. The results are averaged over 4 different random seeds.", " Thank the authors for the response and the additional results. \n\nI want to apologize for making a mistake in my review. 
What I want to say is that when $\lambda$ approaches 1 (I regard $1-\lambda$ as $\lambda$ in my original review), the performance seems to always increase, which confuses me about the role of the auxiliary loss. This table gives me a clearer view of the trend, where I can tell the optimal $\lambda$ is not at 0.9 for hopper-random, walker2d-random, and hopper-medium. \n\nI want to know if the authors could also include the results for $\lambda=1$ in this table. From the current results, simply taking $\lambda=0.9$ gives optimal performance in most cases.", " Thanks for keeping the positive score! We also thank the reviewer for the high-quality and positive comments on our manuscript!", " **Q3: On the weighting coefficient**\n\n**A3:** We do understand this concern from the reviewer. MCQ exhibits superior performance on many non-expert datasets at the cost of tuning the weighting coefficient $\lambda$, as there is no free lunch. We have discussed the need to tune the weighting coefficient $\lambda$ as a limitation of our work. We empirically find that $0.7\le \lambda<1$ can guarantee a good performance, which we believe can provide some aid when applying our MCQ algorithm. \n\nWe want to clarify that offline RL defines the setting of learning without interactions with the environment, while it does not necessarily mean that one needs to unify parameters across all datasets. Due to the limited coverage of datasets, distribution shift, and extrapolation errors, it is hard to say that unifying hyperparameters can always guarantee a good performance when encountering a new unknown dataset. It is actually common and valid that we tune parameters for specific datasets in real-world applications. The role of offline RL leans towards providing a pre-trained policy, which is fine-tuned with limited interactions with the environment. Under such a setting, hyperparameter tuning is feasible and necessary to guarantee a good pre-trained policy. Moreover, as we show in the paper, our MCQ exhibits superior offline-to-online fine-tuning performance compared to prior methods thanks to the *mild conservatism*.\n\nThere are also many existing offline RL algorithms that tune their hyperparameters for each dataset. For example, MOPO [1], as a typical model-based offline RL algorithm, tunes its hyperparameters per dataset (please see https://github.com/tianheyu927/mopo/tree/master/examples/config/d4rl). We also follow the authors' instructions and tune the parameters of UWAC when reproducing it with its official codebase.\n\nWe hope our responses can address the reviewer's concern. Again, we are willing to have further discussions with the reviewer, and we are grateful for the efforts and time that the reviewer spends to help us improve our paper.", " We appreciate the reply from the reviewer and are happy to have a further discussion with the reviewer. We give point-to-point clarifications below, which we hope can mitigate your concerns. If there are still some unaddressed questions, please let us know!\n\n**Q1: On Proposition 5 and CVAE implementation**\n\n**A1:** We appreciate the recommended paper [1] from the reviewer. There may exist some situations where, e.g., the dataset is highly multi-modal; then we can simply replace the CVAE with a conditional GAN (CGAN) to better capture the different modes in the dataset. 
As depicted in [1], CGAN is beneficial for multi-modal datasets (though it seems GAN with the \"state-action joint matching strategy\" can best fit the toy eight Gaussian dataset, we think CGAN also exhibits good performance). As for the assumption on the fitted behavior policy, our assumption requires a comparatively well-fitted $\hat{\mu}$. In most cases, the CVAE can already fit the dataset well and guarantee a good performance. For some hard cases, we can adopt CGAN as the density model. We will cite the recommended paper in our revision and add some notes on how to deal with multi-modal datasets (or when CVAE fails). We expect a good empirical behavior policy such that most of the actions sampled from it will be in-distribution, which helps average out and mitigate the possible negative overestimation impacts from OOD actions.\n\nAs for the reviewer's concern on whether CVAE can work well in practice, we want to clarify that many algorithms that leverage CVAE, e.g., BCQ [2], PLAS [3], exhibit very good performance on complex non-Markovian datasets like Adroit (please refer to Table 5 in [3]). Our MCQ also shows good performance on maze2d and Adroit datasets. We believe all the evidence can mitigate some concerns on the feasibility of the CVAE, i.e., CVAE can lead to good performance on many datasets (and especially MuJoCo as shown in our experimental results). And if CVAE fails, one can replace it with CGAN or other generative models.\n\nWe also would like to clarify here that ***our key intuition and innovations are consistent*** for both theoretical forms and practical algorithms, i.e., we construct and assign pseudo target values for OOD actions. Our practical implementation is motivated by our theoretical analysis of the MCB operator. The introduced auxiliary loss significantly boosts the performance of the base SAC algorithm, which indicates that the performance improvement comes from the theoretical insights of MCQ.\n\n[1] Yang, Shentao, et al. A Regularized Implicit Policy for Offline Reinforcement Learning. ArXiv.\n\n[2] S. Fujimoto, D. Meger, and D. Precup. Off-Policy Deep Reinforcement Learning without Exploration. ICML.\n\n[3] Zhou, W., Bajracharya, S., & Held, D. PLAS: Latent Action Space for Offline Reinforcement Learning. CoRL.\n\n**Q2: Why do we need the theory part?**\n\n**A2:** Thanks for the comment. As we discussed above, the intuition of our MCQ algorithm comes from the theoretical analysis in the tabular MDP setting. The theoretical analysis provides basic insights and foundations for our proposed auxiliary loss. We always follow the practical application of our MCB operator in the paper. For the initial version of the MCB operator, we cannot directly utilize it since it may be intractable to acquire the maximum over a continuous action space, and the behavior policy is often unknown. Then, we propose the practical MCB operator, where we fit an empirical behavior policy $\hat{\mu}$ and construct the pseudo target values based on it. We present theoretical analysis on the practical MCB operator in Propositions 4 and 5. Furthermore, we extend the practical MCB operator into the deep RL setting, and propose the MCQ algorithm. In deep RL, it is challenging to figure out whether the learned policy will execute OOD actions. We therefore regularize all actions sampled from the learned policy. We deem that the whole logic of our paper is clear. 
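To make this pipeline concrete, below is a minimal sketch of how the pseudo target of the practical MCB operator can be computed with the fitted CVAE. It is only an illustration under the description above, not our released MCQ implementation; the interface names (`cvae.decode`, `latent_dim`) are assumptions.

```python
# Illustrative sketch, not the official MCQ code; `cvae.decode`/`latent_dim`
# are assumed interfaces of the fitted behavior model mu_hat.
import torch

@torch.no_grad()
def mcb_pseudo_target(state, cvae, q1_target, q2_target, num_samples=10):
    """Sample N candidate actions from the empirical behavior policy mu_hat
    (the CVAE) and return max_i min(Q1', Q2')(s, a'_i), used as the pseudo
    target y' that supervises (possibly OOD) actions from the learned policy."""
    batch_size = state.shape[0]
    states = state.repeat_interleave(num_samples, dim=0)   # (B * N, state_dim)
    z = torch.randn(states.shape[0], cvae.latent_dim, device=state.device)
    actions = cvae.decode(states, z)                       # a' ~ mu_hat(. | s)
    q = torch.min(q1_target(states, actions), q2_target(states, actions))
    return q.view(batch_size, num_samples).max(dim=1, keepdim=True).values
```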
We also note here that we actually *do not assume the prior knowledge* about the support of the in-distribution actions for the practical MCB operator (as we construct the pseudo target values based on the empirical behavior policy).", " Thanks for your response to my comments.\n\nMy rating for this paper was already positive and I remain positive, so I leave my rating unchanged.", " Dear authors,\n\nThank you for your detailed response, which addresses several of my previous questions/concerns.\n\nThe following questions/concerns remain.\n\n> Under the assumption of Proposition 5, is it still possible that the Q-values of OOD actions are higher than the supremum of the in-distribution Q-values?\n\nThe response to this question does not convince me. Proposition 5 appears in Section 3.2 where the CVAE structure has not been introduced. \nThis question/concern is to challenge the authors on the theory/method development, not on the practical implementation via CVAE.\n\nAs I discussed in the original/main review, there is a significant gap between the theory/method and the actual implementation. \nI have no doubt that the actual implementation **works well under sufficient fine-tuning**. But does that really come from the proposed method and the theoretical insights?\n\nAs a side note, it is well-known that CVAE can exhibit a mode-covering behavior, as shown in Section 2.4 of [1]. \nTherefore, there is no guarantee that CVAE fits the behavior policy well, not to mention the theoretical assumption that $D_{\mathrm{TV}} < \frac{1}{2}$.\n\n[1] Yang, Shentao, et al. \"A Regularized Implicit Policy for Offline Reinforcement Learning.\" arXiv preprint arXiv:2202.09673 (2022).\n\n> we actively train OOD actions by assigning them proper pseudo target values ... natural and practical solution for applying the practical MCB operator.\n\nI think this paragraph summarizes this paper pretty well. In fact, without the theory part, the proposed method can be understood intuitively as you described.\n\nIn this regard, why do we need the theory part that cannot be implemented in practice and that does not add to the understanding of the actual implementation? \nFor example, the theory part seems to assume knowing the support of the in-distribution actions, which as you discussed \"is still an open problem\" and is circumvented by \"regularize the actions sampled from the learned policy equally\".\n\n> The proposed method requires per-dataset tuning of the weighting coefficient\n\nBased on my calculation, in Table 1 of the response (part 3/3), the average score of $\lambda = 0.7$ variants on the listed nine datasets is $46.7$, $\lambda = 0.9$ variants $56.7$, while the per-dataset tuned variant in Table 1 of the manuscript is $62.3$.\nIn this regard, per-dataset tuning of hyperparameters is \"vital\" for the empirical results of the proposed method, since a unified hyperparameter setting will lead to a significant drop of the performance.\nThis may not align with \"offline RL\", whose purpose is to learn a good policy *without* environmental interaction.\n\nFurther, as I discussed in the original/main review, the significance of the experimental results in this paper is harmed by the per-dataset tuning of the weighting coefficient on a fine grid. 
\nMany of the compared baselines actually unify hyperparameters across the MuJoCo datasets, e.g., CQL, IQL and TD3+BC.\nIn this regard, the comparison between the proposed method and the baselines seems unfair.\nTherefore, it is hard to judge the empirical effectiveness of the proposed implementation, which also deviates from the theoretical analysis and is thus less supported.\n\n\n\n", " **Q6: The proposed method requires per-dataset tuning of the weighting coefficient**\n\n**A6:** The weighting coefficient $\\lambda$ is a vital hyperparameter for MCQ, which balances the training of in-distribution actions and OOD actions. MCQ exhibits superior performance on non-expert datasets, which significantly outperforms IQL, TD3+BC, and CQL. Unfortunately, there is no free lunch. Nevertheless, we experimentally find that $\\lambda\\in[0.7,1)$ can generally guarantee good performance. When practically applying MCQ, one ought not to use small $\\lambda$ as we always want a comparatively large weight on in-distribution actions (i.e., standard bellman error). We demonstrate this by conducting empirical experiments on 3 *random* datasets and 6 *medium* datasets, where we search over $\\lambda\\in\\\\{0.0,0.1,0.3,0.5,0.7,0.9\\\\}$. We observe that the performance of MCQ drops and can hardly learn useful policies with a small $\\lambda$, while a large $\\lambda$ works fairly well. \n\n| Task Name | $\\lambda=0$ | $\\lambda=0.1$ | $\\lambda=0.3$ | $\\lambda=0.5$ | $\\lambda=0.7$ | $\\lambda=0.9$ |\n| ---- | :---: | :---: | :---: | :---: | :---: | :---: |\n| halfcheetah-random | 2.2$\\pm$0.6 | 4.6$\\pm$1.3 | 5.5$\\pm$0.9 | 6.3$\\pm$2.6 | 19.5$\\pm$0.6 | 27.2$\\pm$0.9 |\n| hopper-random | 0.7$\\pm$0.0 | 1.2$\\pm$0.5 | 10.8$\\pm$12.1 | 31.0$\\pm$0.7 | 31.4$\\pm$0.4 | 29.4$\\pm$4.3 |\n| walker2d-random | -0.1$\\pm$0.0 | -0.1$\\pm$0.0 | 0.2$\\pm$0.2 | 8.7$\\pm$7.3 | 14.4$\\pm$7.4 | 4.5$\\pm$1.3 |\n| halfcheetah-medium | -0.3$\\pm$0.3 | 38.2$\\pm$0.9 | 41.4$\\pm$0.7 | 43.9$\\pm$0.5 | 49.8$\\pm$0.4 | 61.2$\\pm$0.3 |\n| hopper-medium | 1.7$\\pm$0.9 | 21.5$\\pm$5.5 | 27.1$\\pm$6.9 | 56.7$\\pm$18.6 | 78.4$\\pm$4.3 | 48.6$\\pm$13.2 |\n| walker2d-medium | -0.1$\\pm$0.1 | 1.5$\\pm$1.2 | 60.1$\\pm$12.2 | 68.3$\\pm$2.8 | 72.8$\\pm$5.8 | 91.0$\\pm$0.4 |\n| halfcheetah-medium-replay | -1.6$\\pm$3.7 | 17.6$\\pm$3.8 | 38.3$\\pm$0.5 | 40.9$\\pm$2.0 | 41.3$\\pm$1.7 | 55.1$\\pm$2.0 |\n| hopper-medium-replay | 1.8$\\pm$0.8 | 2.3$\\pm$2.1 | 3.7$\\pm$3.2 | 4.9$\\pm$2.6 | 80.7$\\pm$20.4 | 101.6$\\pm$0.8 |\n| walker2d-medium-replay | -0.2$\\pm$0.1 | 0.0$\\pm$0.2 | 0.3$\\pm$0.6 | 1.2$\\pm$1.0 | 32.2$\\pm$30.9 | 91.3$\\pm$5.7 |\n\nTable 1. Normalized average score of MCQ over different choices of $\\lambda$ on MuJoCo \"-v2\" datasets. The results are averaged over 4 different random seeds.\n\n**Q7: Connections to CQL**\n\n**A7:** We want to argue here that MCQ is different from CQL. The main differences lie in: (1) CQL penalizes the Q-values of the actions sampled from the learned policy and maximizes the Q-values of the in-distribution actions; while MCQ **assigns pseudo target values for the OOD actions** such that they can be properly and actively trained. (2) CQL injects too much conservatism into the policy learning, while MCQ reserves \"mild\" conservatism as the Q-values of the OOD actions are not penalized to be small. 
(3) MCQ exhibits much better performance than CQL when transferring from offline to online.", " **Q4: Why in L183-184 \"in-distribution actions are still trained to approximate the optimal batch-constraint Q value\"? Is it possible that the target value of such in-distribution actions is inflated by the pseudo-label?**\n\n**A4:** For the OOD actions, we actively train them by assigning them pseudo target values. While for in-distribution actions, they are still trained by leveraging the standard Bellman error (which approximates the optimal Q-value). Recall that the loss function for MCQ is given below:\n$$\n\mathcal{L}\_{\rm{critic}} = \lambda \mathbb{E}_{(s,a,r,s^\prime)\sim\mathcal{D}}[(Q\_{\theta\_i}(s,a)-y)^2] + (1-\lambda)\mathbb{E}\_{s^{\rm{in}}\sim\mathcal{D},a^{\rm{ood}}\sim\pi}[(Q\_{\theta\_i}(s^{\rm{in}},a^{\rm{ood}}) - y^\prime)^2],\n$$\nwhere $y$ is the target value for the in-distribution actions and $y^\prime$ is the pseudo target value for OOD actions.\n\nFor the case where in-distribution actions exist among the sampled actions from the trained policy, the corresponding Q-value will approximate $\lambda y + (1-\lambda)y^\prime$ (by taking derivatives w.r.t. Q). We see that this is a convex combination of the in-distribution target value and the pseudo target value. Then, by assigning a large weighting coefficient $\lambda$, which is suggested in our experiments, we can still make the Q-values of in-distribution actions approximate the optimal batch-constraint Q-value. Meanwhile, as we explained above, the sampled actions from the CVAE are mostly in-distribution, and will not incur severe overestimation. The target value of the in-distribution actions is therefore less likely to be inflated by the pseudo target values. Note that with the auxiliary loss, the OOD actions sampled from the trained policy are projected onto the identical pseudo target value, i.e., $y^\prime$, while in-distribution actions approximate the convex combination of the in-distribution target value and the OOD pseudo target value. Therefore, the Q-values of the in-distribution actions will not be identical to the Q-values of OOD actions. In this way, MCQ can still select good and safe in-distribution actions.\n\n**Q5: The practical implementation of the method diverges from the theory**\n\n**A5:** We would like to argue that many offline RL algorithms also have discrepancies between the theoretical algorithm and the theoretically less-supported practical algorithm, e.g., BCQ [1]. BCQ involves a convex combination of double critics when calculating the target values, and also incorporates a perturbation network to increase the diversity of the generated actions. Both lack theoretical support and diverge from the theory of BCQ. It is hard to keep the theoretical algorithm unchanged when combined with deep neural networks. Nevertheless, our key intuition and innovations are consistent, i.e., we actively train OOD actions by assigning them proper pseudo target values. In practice, it is hard to determine whether an action lies in the OOD region (and it is still an open problem). We thus regularize the actions sampled from the learned policy equally. To balance the training of in-distribution samples and OOD actions, we introduce a weighting coefficient $\lambda$, which we believe is a natural and practical solution for applying the practical MCB operator.\n\n[1] S. Fujimoto, D. Meger, and D. Precup. Off-Policy Deep Reinforcement Learning without Exploration. 
ICML 2018.", " Thank you for your insightful comments. We give point-to-point response below. We sincerely hope you can re-evaluate our work based on the updated information. If you have any additional questions, we will be happy to have further discussions.\n\n**Q1: Can MCQ work well on higher-dimensional and/or non-Markovian datasets?**\n\n**A1:** Our empirical evaluation on maze2d and Adroit datasets show that MCQ can exhibit good performance on these datasets, where learning a good generative model can be difficult. Compared to some *common* baselines, MCQ achieves the highest average score over all datasets. We thank the reviewer for mentioning OptiDICE [1] and One-step RL [2]. We will cite these papers in our revision. But we think that it is unfair to compare to some strong baselines on some specific datasets. It is quite natural that some methods work well on some specific datasets, e.g., TD3+BC behaves well on MuJoCo dataset but fails on Adroit datasets, OptiDICE shows good performance on maze2d datasets but is not satisfying on MuJoCo datasets, etc. It is also very hard for a single algorithm to outperform all other strong baselines on all types of the datasets. We think that our selected baselines are reasonable as they can typically represent different categories of offline RL algorithms, such as CQL (value penalization method), TD3+BC (policy constraint method), IQL (that learns without querying OOD actions), etc.\n\n[1] Lee, Jongmin, et al. OptiDICE: Offline policy optimization via stationary distribution correction estimation. ICML 2021.\n\n[2] Brandfonbrener, David, et al. Offline rl without off-policy evaluation. NeurIPS 2021.\n\n**Q2: Under the assumption of Proposition 5, is it still possible that the Q-values of OOD actions are higher than the supremum of the in-distribution Q-values?**\n\n**A2:** It is an interesting question. In Proposition 5, we require that $D\\_{TV}(\\hat{\\mu}(\\cdot|s)||\\mu(\\cdot|s))\\le \\epsilon<\\frac{1}{2}$. Such assumption generally requires that the empirical density distribution fits well the true behavior policy. We want to note here that $D\\_{TV}(\\hat{\\mu}(\\cdot|s)||\\mu(\\cdot|s))\\in[0,1]$. Then ensuring that $D\\_{TV}(\\hat{\\mu}(\\cdot|s)||\\mu(\\cdot|s))<\\frac{1}{2}$ can be satisfied for most situations as CVAE fits the behavior policy in many datasets well in practice. Under such assumption and based on the theoretical results in Proposition 5, the pseudo target value has a chance to exceed $\\max\\_{a\\in\\rm{support}(\\mu)}Q(s,a)$. However, that does not indicate that bad OOD actions will be executed in practice. The reasons lie in two aspects: (1) the theoretical bound is an *upper* bound, and it does not necessarily mean that the pseudo target value will exceed $\\max\\_{a\\in\\rm{support}(\\mu)}Q(s,a)$; (2) if the learned behavior policy (CVAE) fits well the true behavior policy, most of the sampled actions from the density model (CVAE) will be in-distribution that are well-trained, i.e., they will not exceed $\\max\\_{a\\in\\rm{support}(\\mu)}Q(s,a)$. If OOD actions are involved in the actions sampled from the CVAE, its negative impact can be *averaged* and mitigated by these in-distribution actions. 
Therefore, the pseudo target values for the OOD actions sampled from the trained policy will not be overwhelmed by the overestimated values.\n\nEmpirically, we find MCQ exhibits good performance on non-expert datasets and behaves fairly well on expert datasets, which we believe can ease this concern to some extent.\n\n**Q3: Issues on the proof of Proposition 4.**\n\n**A3:** Thanks for pointing that out. That ought to be $\\\\{a\\_i^\\prime\\\\}^N\\sim\\rm{support}(\\hat{\\mu})$. That is, the sampled actions are from the fitted behavior policy $\\hat{\\mu}$, which is then consistent with the definition of $\\hat{\\mathcal{T}}_1$. We apologize for the typo and will revise it in our revision.", " **Q5: The offline to online experiments miss essential details.**\n\n**A5:** For the offline-to-online experiments, as is stated in the main text (line 267-269), we first train baseline methods (TD3+BC, CQL, etc.) and our MCQ for 1M gradient steps offline, and then perform online fine-tuning for another 100K gradient steps for all of them. The online samples are put into the offline buffer directly, where experiences are sampled for online adaptation. The results of baseline methods are acquired by running their official codebases, i.e., CQL (https://github.com/aviralkumar2907/CQL), TD3+BC (https://github.com/sfujim/TD3_BC), IQL (https://github.com/ikostrikov/implicit_q_learning), AWAC (https://github.com/vitchyr/rlkit). All methods are run over 4 different random seeds. We will add the missing details to the appendix.\n\nWe chose a subset of tasks for offline-to-online fine-tuning different from IQL and AWAC to ensure that our empirical experiments on offline-to-online fine-tuning are consistent to the offline experiments (just like IQL does, where it conducts offline learning in antmaze domain and performs offline-to-online fine-tuning on some datasets from antmaze). Meanwhile, we deem that the offline-to-online fine-tuning is not limited to the datasets that are adopted by previous studies. In our experiments, we observe superior performance of MCQ on non-expert datasets such as random and medium-replay in the offline stage. We then want to show that MCQ can exhibit good generalization capability on these non-expert datasets compared with prior methods when performing offline-to-online fine-tuning. We believe it is reasonable that we utilize *random* datasets and *medium-replay* datasets from D4RL MuJoCo locomotion tasks for such evaluation.", " **Q3: The method is evaluated only on the locomotion tasks from D4RL**\n\n**A3:** To show the effectiveness of MCQ, we additionally compare MCQ against baseline methods on maze2d and Adroit datasets in Appendix H. 
We attach the comparison results below, where we observe that MCQ is competitive or better than prior methods on these tasks.\n\n| Task Name | BC | BEAR | CQL | BCQ | TD3+BC | IQL | MCQ (ours) | \n| ---- | :---: | :---: |:---: |:---: |:---: |:---: |:---: |\n| maze2d-umaze | -3.2 | 65.7 | 18.9 | 49.1 | 25.7$\\pm$6.1 | 65.3$\\pm$13.4 | 81.5$\\pm$23.7 |\n| maze2d-umaze-dense | -6.9 | 32.6 | 14.4 | 48.4 | 39.7$\\pm$3.8 | 57.8$\\pm$12.5 | 107.8$\\pm$3.2 |\n| maze-medium | -0.5 | 25.0 | 14.6 | 17.1 | 19.5$\\pm$4.2 | 23.5$\\pm$11.1 | 54.8$\\pm$14.1 |\n| maze-medium-dense | 2.7 | 19.1 | 30.5 | 41.1 | 54.9$\\pm$6.4 | 28.1$\\pm$16.8 | 33.6$\\pm$2.9 |\n| Average Above | -2.0 | 35.6 | 19.6 | 38.9 | 35.0 | 37.2 | **69.4** |\n| pen-human | 34.4 | -1.0 | 37.5 | 68.9 | 0.0$\\pm$0.0 | 68.7$\\pm$8.6 | 68.5$\\pm$6.5 |\n| door-human | 0.5 | -0.3 | 9.9 | 0.0 | 0.0$\\pm$0.0 | 3.3$\\pm$1.3 | 2.3$\\pm$2.2 |\n| relocate-human | 0.0 | -0.3 | 0.2 | -0.1 | 0.0$\\pm$0.0 | 0.0$\\pm$0.0 | 0.1$\\pm$0.1 |\n| hammer-human | 1.5 | 0.3 | 4.4 | 0.5 | 0.0$\\pm$0.0 | 1.4$\\pm$0.6 | 0.3$\\pm$0.1 |\n| pen-cloned | 56.9 | 26.5 | 39.2 | 44.0 | 0.0$\\pm$0.0 | 35.3$\\pm$7.3 | 49.4$\\pm$4.3 |\n| door-cloned | -0.1 | -0.1 | 0.4 | 0.0 | 0.0$\\pm$0.0 | 0.5$\\pm$0.6 | 1.3$\\pm$0.4 | \n| relocate-cloned | -0.1 | -0.3 | -0.1 | -0.3 | 0.0$\\pm$0.0 | -0.2$\\pm$0.0 | 0.0$\\pm$0.0 |\n| hammer-cloned | 0.8 | 0.3 | 2.1 | 0.4 | 0.0$\\pm$0.0 | 1.7$\\pm$1.0 | 1.4$\\pm$0.5 |\n| Average Total | 7.2 | 13.9 | 14.3 | 22.4 | 11.7 | 23.8 | **33.4** |\n\nTable 2: Normalized score comparison of different baseline methods on D4RL benchmarks. 0 corresponds to a random policy and 100 corresponds to an expert policy. The results are averaged over 4 different random seeds.\n\n**Q4: The method is evaluated using only 4 seeds which might be insufficient**\n\n**A4:** We understand the concern. We run MCQ on MuJoCo datasets for another 4 seeds, yielding a total 8 random seeds, which we believe is comparatively sufficient for reliable evaluation. We summarize the results below. We observe that MCQ exhibits similar performance as reported in the main text.\n\n| Task Name | MCQ (4 seeds) | MCQ (8 seeds) |\n| ---- | :---: | :---: | \n| halfcheetah-random | 28.5$\\pm$0.6 | 28.6$\\pm$0.5 |\n| hopper-random | 31.8$\\pm$0.5 | 31.5$\\pm$0.7 |\n| walker2d-random | 17.0$\\pm$3.0 | 19.1$\\pm$5.1 |\n| halfcheetah-medium | 64.3$\\pm$0.2 | 64.2$\\pm$0.3 |\n| hopper-medium | 78.4$\\pm$4.3 | 75.6$\\pm$7.4 |\n| walker2d-medium | 91.0$\\pm$0.4 | 89.7$\\pm$1.5 |\n| halfcheetah-medium-replay | 56.8$\\pm$0.6 | 56.5$\\pm$0.8 |\n| hopper-medium-replay | 101.6$\\pm$0.8 | 101.8$\\pm$1.1 |\n| walker2d-medium-replay | 91.3$\\pm$5.7 | 91.2$\\pm$4.8 |\n| halfcheetah-medium-expert | 87.5$\\pm$1.3 | 86.4$\\pm$2.4 |\n| hopper-medium-expert | 111.2$\\pm$0.1 | 108.5$\\pm$4.6 |\n| walker2d-medium-expert | 114.2$\\pm$0.7 | 113.8$\\pm$1.9 |\n| Average Above | 72.8 | 72.2 |\n| halfcheetah-expert | 96.2$\\pm$0.4 | 95.9$\\pm$0.6 |\n| hopper-expert | 111.4$\\pm$0.4 | 111.3$\\pm$0.6 |\n| walker2d-expert | 107.2$\\pm$1.1 | 107.8$\\pm$2.3 |\n| Average Total | 79.2 | 78.8 |\n\nTable 3: Normalized average score of MCQ on D4RL benchmarks. 0 corresponds to a random policy and 100 corresponds to an expert policy. The experiments are run on MuJoCo \"-v2\" datasets.", " Thanks for your detailed and valuable comments. We provide clarification to your questions and concerns as below. 
If you have any further questions or comments, we will be happy to have further discussions.\n\n**Q1: The comparison to BCQ is missing.**\n\n**A1:** We actually compared our MCQ against BCQ in Appendix D, where we also compare MCQ against other recent baselines like Decision Transformer (DT), MOPO, etc. We defer these comparison to the appendix due to page limit. We attach the comparison results below. We observe that MCQ consistently outperforms BCQ on 14 out of 15 datasets.\n\n| Task Name | BCQ | MCQ (ours) | \n| ---- | :---: | :---: |\n| halfcheetah-random | 2.2 $\\pm$0.0 | **28.5$\\pm$0.6** |\n| hopper-random | 7.8$\\pm$0.6 | **31.8$\\pm$0.5** |\n| walker2d-random | 4.9$\\pm$0.1 | **17.0$\\pm$3.0** |\n| halfcheetah-medium | 46.6$\\pm$0.4 | **64.3$\\pm$0.2** | \n| hopper-medium | 59.4$\\pm$8.3 | **78.4$\\pm$4.3** |\n| walker2d-medium | 71.8$\\pm$7.2 | **91.0$\\pm$0.4** |\n| halfcheetah-medium-replay | 42.2$\\pm$0.9 | **56.8$\\pm$0.6** |\n| hopper-medium-replay | 60.9$\\pm$14.7 | **101.6$\\pm$0.8** |\n| walker2d-medium-replay | 57.0$\\pm$9.6 | **91.3$\\pm$5.7** |\n| halfcheetah-medium-expert | **95.4$\\pm$2.0** | 87.5$\\pm$1.3 |\n| hopper-medium-expert | 106.9$\\pm$5.0 | **111.2$\\pm$0.1** |\n| walker2d-medium-expert | 107.7$\\pm$3.8 | **114.2$\\pm$0.7** |\n| Average Above | 55.2 | **72.8** |\n| halfcheetah-expert | 89.9$\\pm$9.6 | **96.2$\\pm$0.4** |\n| hopper-expert | 109.0$\\pm$4.0 | **111.4$\\pm$0.4** |\n| walker2d-expert | 106.3$\\pm$5.0 | **107.2$\\pm$1.1** |\n| Average Total | 64.5 | **79.2** |\n\nTable 1: Normalized average score comparison of MCQ against BCQ on D4RL benchmarks. 0 corresponds to a random policy and 100 corresponds to an expert policy. The experiments are run on MuJoCo \"-v2\" datasets over 4 random seeds.\n\nWe also want to note here that our method, MCQ, is different from BCQ. The main differences lie in: (1) MCQ is built upon SAC while BCQ is built upon TD3; (2) MCQ properly trains OOD actions by assigning them pseudo target values while BCQ does not; (3) BCQ adds perturbation noise to increase the diversity of actions while MCQ does not.\n\n**Q2: The practical implementation of the method diverges from the theory. Did you try implementing the version of the method that regularizes only OOD actions?**\n\n**A2:** This is an interesting and important question. We would like to argue that many offline RL algorithms have this issue, e.g., BCQ [1], MOPO [2], etc. The practical implementation of BCQ involves convex combination of double critics (in target value calculation), and perturbation noise in actions. The error estimator in MOPO is set to be the maximum standard deviation of the learned models in the ensemble, which also lacks theoretical guarantee and diverges from its theory. The involvement of neural networks makes it hard for us to implement MCQ that follows its original theoretical form.\n\nAs for MCQ, if the behavior policy $\\mu(\\cdot|s)$ is previously known, then we can implement MCQ that exactly follows its theory (i.e., Definition 1). Unfortunately, we often do not have prior knowledge about the data-collecting policy $\\mu(\\cdot|s)$. We then resort to fitting an empirical distribution $\\hat{\\mu}(\\cdot|s)$, and follows Definition 2 (practical MCB operator). However, we cannot directly apply the practical MCB operator in deep RL since it is challenging to evaluate whether an action is OOD (and we cannot say that the action that does not exist in the batch is OOD, especially for continuous action space). 
We therefore simply assign pseudo target values for all actions sampled from the trained policy such that OOD actions are properly trained.\n\nThe actions sampled from the trained policy become less likely to be OOD as training progresses, but the risk of being OOD still exists. To mitigate such potential threats, we need to regularize actions sampled from the trained policy. In our experiments, we assign a large weighting coefficient $\lambda$ to in-distribution samples, which ensures sufficient training on in-distribution transitions. The empirical success of MCQ on non-expert datasets shows that MCQ is less likely to over-penalize the optimal actions.\n\n[1] S. Fujimoto, D. Meger, and D. Precup. Off-Policy Deep Reinforcement Learning without Exploration. ICML 2019.\n\n[2] T. Yu, G. Thomas, L. Yu, S. Ermon, J. Y. Zou, S. Levine, C. Finn, and T. Ma. MOPO: Model-based Offline Policy Optimization. NeurIPS 2020.\n\n[3] I. Kostrikov, A. Nair, and S. Levine. Offline Reinforcement Learning with Implicit Q-Learning. ICLR 2022.", " Thanks for your inspiring and thoughtful comments, and thanks for commenting that our paper is \"well communicated\". We provide clarifications to the concerns below and hope our responses address them.\n\n**Q1: The algorithm should be tested on a different style of tasks as well**\n\n**A1:** To show the effectiveness of our MCQ algorithm, we provide additional empirical experiments on other datasets in D4RL, namely maze2d and Adroit, in Appendix H. We attach the comparison results below (one can also refer to Appendix H), where we observe that MCQ outperforms baseline methods on many datasets, and is the best in terms of the average normalized score.\n\n| Task Name | BC | BEAR | CQL | BCQ | TD3+BC | IQL | MCQ (ours) |\n| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| maze2d-umaze | -3.2 | 65.7 | 18.9 | 49.1 | 25.7$\\pm$6.1 | 65.3$\\pm$13.4 | 81.5$\\pm$23.7 |\n| maze2d-umaze-dense | -6.9 | 32.6 | 14.4 | 48.4 | 39.7$\\pm$3.8 | 57.8$\\pm$12.5 | 107.8$\\pm$3.2 |\n| maze2d-medium | -0.5 | 25.0 | 14.6 | 17.1 | 19.5$\\pm$4.2 | 23.5$\\pm$11.1 | 54.8$\\pm$14.1 |\n| maze2d-medium-dense | 2.7 | 19.1 | 30.5 | 41.1 | 54.9$\\pm$6.4 | 28.1$\\pm$16.8 | 33.6$\\pm$2.9 |\n| Average Above | -2.0 | 35.6 | 19.6 | 38.9 | 35.0 | 37.2 | **69.4** |\n| pen-human | 34.4 | -1.0 | 37.5 | 68.9 | 0.0$\\pm$0.0 | 68.7$\\pm$8.6 | 68.5$\\pm$6.5 |\n| door-human | 0.5 | -0.3 | 9.9 | 0.0 | 0.0$\\pm$0.0 | 3.3$\\pm$1.3 | 2.3$\\pm$2.2 |\n| relocate-human | 0.0 | -0.3 | 0.2 | -0.1 | 0.0$\\pm$0.0 | 0.0$\\pm$0.0 | 0.1$\\pm$0.1 |\n| hammer-human | 1.5 | 0.3 | 4.4 | 0.5 | 0.0$\\pm$0.0 | 1.4$\\pm$0.6 | 0.3$\\pm$0.1 |\n| pen-cloned | 56.9 | 26.5 | 39.2 | 44.0 | 0.0$\\pm$0.0 | 35.3$\\pm$7.3 | 49.4$\\pm$4.3 |\n| door-cloned | -0.1 | -0.1 | 0.4 | 0.0 | 0.0$\\pm$0.0 | 0.5$\\pm$0.6 | 1.3$\\pm$0.4 |\n| relocate-cloned | -0.1 | -0.3 | -0.1 | -0.3 | 0.0$\\pm$0.0 | -0.2$\\pm$0.0 | 0.0$\\pm$0.0 |\n| hammer-cloned | 0.8 | 0.3 | 2.1 | 0.4 | 0.0$\\pm$0.0 | 1.7$\\pm$1.0 | 1.4$\\pm$0.5 |\n| Average Total | 7.2 | 13.9 | 14.3 | 22.4 | 11.7 | 23.8 | **33.4** |\n\nTable 1: Normalized score comparison of different baseline methods on D4RL benchmarks. 0 corresponds to a random policy and 100 corresponds to an expert policy.\n\n**Q2: Why do you think TD3+BC seems to be better for expert-level demonstrations (for most tasks)?**\n\n**A2:** We summarize the performance comparison of our MCQ against TD3+BC on *medium-expert* and *expert* datasets in Table 2. 
We find that MCQ is actually competitive with TD3+BC on most of the datasets that contain expert demonstrations. MCQ achieves a better average score on 3 out of 6 datasets and is also better in terms of the overall mean score. TD3+BC naturally performs well on expert-level datasets with the aid of the behavior cloning (BC) term (BC itself can perform well on expert datasets), while MCQ achieves competitive performance by properly training OOD actions.\n\n| Task Name | TD3+BC | MCQ (ours) |\n| ---- | :---: | :---: |\n| halfcheetah-medium-expert | **90.7$\\pm$4.3** | 87.5$\\pm$1.3 |\n| hopper-medium-expert | 98.0$\\pm$9.4 | **111.2$\\pm$0.1** |\n| walker2d-medium-expert | 110.1$\\pm$0.5 | **114.2$\\pm$0.7** |\n| halfcheetah-expert | **96.7$\\pm$1.1** | 96.2$\\pm$0.4 |\n| hopper-expert | 107.8$\\pm$7 | **111.4$\\pm$0.4** |\n| walker2d-expert | **110.2$\\pm$0.3** | 107.2$\\pm$1.1 |\n| Average | 102.25 | **104.62** |\n\nTable 2. Normalized average score comparison between TD3+BC and MCQ on datasets that contain expert demonstrations.\n\n**Q3: Code for the MCQ**\n\n**A3:** We apologize for missing the code for MCQ. To make sure that our results are reproducible, we include thorough instructions for implementing MCQ in Appendix C.2 along with the detailed hyperparameter setup. We have also uploaded our anonymous code at https://anonymous.4open.science/r/MCQ-BE79/. Our code will be open-sourced and a formal GitHub link will be added to the manuscript upon acceptance.", " We thank the reviewer for thinking that our method \"is very innovative and elegant in its processing of OOD actions\". We also thank the reviewer for the thoughtful comments. We hope our responses below can address your concerns. \n\n**Q1: A recent paper [1] also addresses the over-pessimism of offline RL algorithms with an adaptive method**\n\n**A1:** We thank the reviewer for recommending this interesting paper. We have in fact been paying close attention to this paper since it appeared on arXiv. Unfortunately, it only became publicly available in July 2022, so we could not access it before the submission deadline of this venue. We will cite this paper in our revision.\n\n[1] Ghosh, D., Ajay, A., Agrawal, P., \\& Levine, S. Offline RL Policies Should be Trained to be Adaptive. ICML 2022.\n\n**Q2: Show the benefits of the auxiliary loss with a small scale of $\lambda$**\n\n**A2:** As a key component of our MCQ algorithm, the auxiliary loss term is important to the final performance of the agent. From Figures 2(a) and 2(b) of the main text, we see that decreasing $\lambda$ negatively affects the performance of MCQ.\n\nRecall that the loss function for MCQ is:\n$$\n\mathcal{L}\_{\rm{critic}} = \lambda \mathbb{E}\_{(s,a,r,s^\prime)\sim\mathcal{D}}[(Q\_{\theta\_i}(s,a)-y)^2] + (1-\lambda)\mathbb{E}\_{s^{\rm{in}}\sim\mathcal{D},a^{\rm{ood}}\sim\pi}[(Q\_{\theta\_i}(s^{\rm{in}},a^{\rm{ood}}) - y^\prime)^2],\n$$\nwhere $y$ is the target value for the in-distribution samples, and $y^\prime$ is the pseudo target value for the OOD actions.\n\nHence, $\lambda = 0$ makes MCQ assign no weight to in-distribution samples. The critic would be trained only with the pseudo target values for the OOD actions, which corrupts the performance of the agent since no reward information is included in the auxiliary loss. We report the results of $\lambda=\\{0, 0.1, 0.3, 0.5, 0.7, 0.9\\}$ on 6 medium-level datasets and 3 random datasets from D4RL MuJoCo \"v2\" datasets. 
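Before the ablation table below, a minimal PyTorch-style sketch may help make the $\lambda$-weighted critic loss above concrete. All names and interfaces here (`q_net`, `q_target`, `policy`, the batch layout, and the precomputed pseudo targets `y_pseudo`) are illustrative assumptions, not the authors' actual implementation:

```python
import torch

def mcq_critic_loss(q_net, q_target, policy, batch, y_pseudo, lam=0.9, gamma=0.99):
    """Lambda-weighted MCQ critic loss: a standard Bellman term on
    in-distribution transitions plus an auxiliary term that trains actions
    sampled from the current policy (possibly OOD) toward pseudo targets."""
    s, a, r, s2, done = batch  # states, actions, rewards, next states, done flags
    with torch.no_grad():
        a2 = policy(s2)                                   # next action from the policy
        y = r + gamma * (1.0 - done) * q_target(s2, a2)   # in-distribution TD target
        a_ood = policy(s)                                 # possibly-OOD actions
    in_dist_term = (q_net(s, a) - y).pow(2).mean()
    ood_term = (q_net(s, a_ood) - y_pseudo).pow(2).mean() # pseudo targets y'
    return lam * in_dist_term + (1.0 - lam) * ood_term
```

The default `lam=0.9` mirrors the recommendation in the rebuttal that values in roughly $[0.7, 1)$ work well.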
The results are shown below, where we observe that a small $\lambda$ is a bad choice for all datasets, especially $\lambda=0$. In practice, we find that $\lambda\in[0.7,1)$ works fairly well for many of the tasks. As discussed in the main text (lines 245-247), a small $\lambda$ lets the OOD actions overwhelm the critic loss, so in-distribution actions cannot be well trained, which harms the performance of the agent.\n\n| Task Name | $\lambda=0$ | $\lambda=0.1$ | $\lambda=0.3$ | $\lambda=0.5$ | $\lambda=0.7$ | $\lambda=0.9$ |\n| ---- | :---: | :---: | :---: | :---: | :---: | :---: |\n| halfcheetah-random | 2.2$\\pm$0.6 | 4.6$\\pm$1.3 | 5.5$\\pm$0.9 | 6.3$\\pm$2.6 | 19.5$\\pm$0.6 | 27.2$\\pm$0.9 |\n| hopper-random | 0.7$\\pm$0.0 | 1.2$\\pm$0.5 | 10.8$\\pm$12.1 | 31.0$\\pm$0.7 | 31.4$\\pm$0.4 | 29.4$\\pm$4.3 |\n| walker2d-random | -0.1$\\pm$0.0 | -0.1$\\pm$0.0 | 0.2$\\pm$0.2 | 8.7$\\pm$7.3 | 14.4$\\pm$7.4 | 4.5$\\pm$1.3 |\n| halfcheetah-medium | -0.3$\\pm$0.3 | 38.2$\\pm$0.9 | 41.4$\\pm$0.7 | 43.9$\\pm$0.5 | 49.8$\\pm$0.4 | 61.2$\\pm$0.3 |\n| hopper-medium | 1.7$\\pm$0.9 | 21.5$\\pm$5.5 | 27.1$\\pm$6.9 | 56.7$\\pm$18.6 | 78.4$\\pm$4.3 | 48.6$\\pm$13.2 |\n| walker2d-medium | -0.1$\\pm$0.1 | 1.5$\\pm$1.2 | 60.1$\\pm$12.2 | 68.3$\\pm$2.8 | 72.8$\\pm$5.8 | 91.0$\\pm$0.4 |\n| halfcheetah-medium-replay | -1.6$\\pm$3.7 | 17.6$\\pm$3.8 | 38.3$\\pm$0.5 | 40.9$\\pm$2.0 | 41.3$\\pm$1.7 | 55.1$\\pm$2.0 |\n| hopper-medium-replay | 1.8$\\pm$0.8 | 2.3$\\pm$2.1 | 3.7$\\pm$3.2 | 4.9$\\pm$2.6 | 80.7$\\pm$20.4 | 101.6$\\pm$0.8 |\n| walker2d-medium-replay | -0.2$\\pm$0.1 | 0.0$\\pm$0.2 | 0.3$\\pm$0.6 | 1.2$\\pm$1.0 | 32.2$\\pm$30.9 | 91.3$\\pm$5.7 |\n\nTable 1. Normalized average score of MCQ over different choices of $\lambda$ on MuJoCo \"-v2\" datasets. The results are averaged over 4 different random seeds.", " This paper points out that previous offline reinforcement learning methods are too conservative about out-of-distribution (OOD) actions and instead proposes a mildly conservative algorithm. It introduces an auxiliary loss term to properly train the value function for OOD actions. The proposed method, MCQ, is shown to outperform previous methods empirically, is theoretically proven to behave at least as well as the behavior policy, and incurs no erroneous overestimation. Pros:\nThis paper is clearly written, and the whole structure is organized and easy to follow. The method is well-motivated and the claims in the paper are all supported by either theoretical analysis or experimental results. Although over-pessimism is not a new problem in offline RL, this method is very innovative and elegant in its processing of OOD actions. In comparison with the baselines, MCQ achieves a remarkable improvement on random or medium datasets. The authors also make a careful analysis of the sensitivity of hyperparameters.\n\nCons:\nI want to draw the authors' attention to a recent paper that also addresses the over-pessimism of offline RL algorithms (https://arxiv.org/pdf/2207.02200.pdf), which uses an adaptive method. Besides, I am also confused about the results in Figure 2. It seems that decreasing $\lambda$ is always beneficial to the final performance, and there is no clear trend showing that introducing the auxiliary loss helps the final performance. I think the authors should use a small scale of $\lambda$ (0-0.3) to explicitly show the benefits of the auxiliary loss.\n Please see the main review. 
The authors have addressed the limitations in the paper.", " Offline RL is a topic of significant interest. One common class of approaches is to learn an action-value function but to enforce that the function is `conservative' so that it does not result in a policy which takes actions that were not in the training data (and therefore of unknown value).\n\nThis work introduces a ``mildly'' conservative Bellman operator. In particular, for actions in the support of the behavior policy the operator behaves like the standard Bellman operator, but for actions outside the behavior policy support it assumes the value is delta less than that of an action in the support.\n\nThey show that this operator will always result in a conservative Q estimate (that is, it will not over-estimate the value of any action). They then introduce a practical approximation (where the behavior policy is estimated by a CVAE) and test on a set of offline RL control tasks. It performs notably better than prior work on poor demonstrations, but does not consistently outperform TD3+BC when expert demonstrations are available.\n Strengths:\n- Well communicated.\n\n- Principled explanation of an algorithm that empirically performs well on a set of benchmark tasks.\n\n- Offline RL is a topic of significant interest to the community and an active research area.\n\nWeaknesses:\n\n- Ideally, the algorithm would be tested on a different style of tasks as well (e.g. perhaps Atari), rather than only MuJoCo control tasks.\n\n- This approach does not seem to perform as well when expert-level demonstrations are available.\n\nMinor:\n\nI found definition 2 (line 123) confusing since it refers to $\mu(a|s)$ which, as stated above, you are trying to avoid. It is explained further what the actual \"practical\" solution is when $\mu$ is not known, but I found this a bit confusing on first read.\n Why do you think TD3+BC seems to be better for expert-level demonstrations (for most tasks)?\n\nThe checklist for this paper indicates the code is available for reproducing the experiments but I didn't see a link anywhere to the code?\n Yes", " The paper presents a method for offline reinforcement learning, called MCQ, that involves assigning pseudo-Q-values to OOD actions. The method's main idea is to modify the Q-targets by detecting the out-of-distribution actions via a density model and assigning the Q-values to these actions similarly to BCQ, by taking a maximum of a Q function over N samples from a density model. The main difference to BCQ is that the authors propose to use this backup operator only for OOD actions instead of all actions and use an actor to recover optimal actions from the modified Q-function. The method is evaluated on several datasets from D4RL where it outperforms the baselines. # Strengths\n* The method is theoretically sound and practical. Even though all method components have been explored in prior work, the idea of not penalizing the values of OOD actions but using a BCQ-style value estimate to impute the values for these actions is novel.\n* The practical version of the algorithm (Eqn. 11) is easy to implement. \n* MCB demonstrates good empirical results on a subset of D4RL tasks.\n* The paper is well written and easy to follow.\n\n# Weaknesses\n* The method can be seen as an extension of BCQ. However, the comparison to BCQ is missing.\n* The practical implementation of the method diverges from the theory. 
In particular, the practical implementation of the method omits OOD evaluation and instead regularizes all actions sampled from the training policy, which are not necessarily OOD and, throughout training, will certainly become less OOD, which can result in over-penalizing the optimal actions due to value underestimation caused by the BCQ-style operator.\n* The method is evaluated only on the locomotion tasks from D4RL, which do not require stitching [1] (dynamic programming).\n* Also, the method is evaluated using only 4 seeds which might be insufficient.\n* The offline-to-online experiments miss essential details. In particular, it is unclear how the authors obtained results for other methods. Also, the authors pick a different subset of tasks for offline-to-online finetuning than considered in the original papers (AWAC, IQL).\n\n[1] RvS: What is Essential for Offline RL via Supervised Learning?\nS Emmons, B Eysenbach, I Kostrikov, S Levine * Did you try implementing the version of the method that regularizes only OOD actions?\n* How well does the method perform on other D4RL datasets (for example, antmaze, adroit and kitchen tasks)?\n* What implementations of baselines did you use for offline-to-online experiments? Right now, the main limitation of this work is limited experimental evaluation. In particular, the method is evaluated only on locomotion tasks using an insufficient number of runs. The paper considers a set of tasks different from the standard tasks used in prior work for offline-to-online experiments. I will raise my score if these concerns are addressed.", " This paper proposes using mild conservatism in offline RL to benefit generalization and to avoid being overly conservative on OOD actions.\nSpecifically, this paper develops a Mildly Conservative Bellman (MCB) operator for offline RL, where OOD actions are actively trained and their Q values are actively queried.\nTheoretical results under the tabular setting and a practical MCB operator are provided.\nEmpirically, combining the practical MCB operator with SAC performs well on the D4RL MuJoCo locomotion tasks and when transferring from offline learning to online. ### Strengths\n1. The proposed method is well-motivated by theory.\n2. The idea of actively training the Q-values of OOD actions is interesting, though in some sense similar to the CQL paper [1].\n3. Experiments are extensive and the proposed method generally performs well.\n\n### Weaknesses\n1. Since the proposed method requires behavior cloning (BC), it is doubtful if the proposed method can work well on higher-dimensional and/or non-Markovian datasets where BC can be difficult. Theoretical results such as Proposition 5 require sufficiently accurate BC, which may not be possible on harder datasets. \\\nIn fact, from Table 8, the proposed method performs less favorably on maze2d datasets compared with other stronger baselines such as OptiDICE [2], and is slightly worse than one-step RL [3] on the Adroit datasets.\n2. There seem to be discrepancies between the impractical theoretical algorithm and the theoretically less-supported practical algorithm, in particular the practical loss functions (L167-193). \n3. From Table 4 & 9, the proposed method requires per-dataset tuning of the weighting coefficient $\lambda$ on a relatively fine grid, which muddies the empirical significance since many of the compared baselines actually *unify* hyperparameters across the MuJoCo datasets, *e.g.*, CQL, IQL and TD3+BC.\n\n\\\n[1] Kumar, Aviral, et al. 
\"Conservative q-learning for offline reinforcement learning.\" Advances in Neural Information Processing Systems 33 (2020): 1179-1191.\n\\\n[2] Lee, Jongmin, et al. \"Optidice: Offline policy optimization via stationary distribution correction estimation.\" International Conference on Machine Learning. PMLR, 2021.\\\n[3] Brandfonbrener, David, et al. \"Offline rl without off-policy evaluation.\" Advances in Neural Information Processing Systems 34 (2021): 4933-4946. Below are my questions and concerns on this paper.\n\n1. From Proportion 5, even under the rather strong assumption on the discrepancy between $\\hat \\mu$ and $\\mu$, is it still possible that the OOD actions will Q-values higher than the supremum of the in-distribution Q-values? If so, how the proposed Q-learning-based method avoids the detrimental impact of those \"overestimated\" OOD actions? \n2. Why in the proof of Proportion 4 (Below L72 of Appendix), we have $\\\\{a_i'\\\\}^N \\sim \\mathrm{support}(\\mu)$? And the $\\hat T_1$ here seems inconsistent with Eq. (9).\n3. Why in L183-184 \"in-distribution actions are still trained to approximate the optimal batch-constraint Q value\"? Is it possible that the target value of such in-distribution actions are inflated by the psedo-label? \\\nSince Eq. (13) is independent of $a^{ood}$ and $\\pi$, will the psedo-target value Eq. (13) collapese $Q(s^{in}, a^{ood})$ at different $a^{ood}$ into a same value, even those in-distribution? If so, how could the proposed the method select out good in-distribution actions? The authors briefly addressed the limitations.\nNo potential negative societal impacts is discussed." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "2GuLaPPxO23", "DBVROBYbvFJ", "NeJy062t6gK", "kiiCzzSgY7u", "pF_XLYDaMrj", "Esk2paR47UX", "o_wyfat7EZ", "CeYgin_IRf8", "NRgVb0baYUD", "kxA_ykyvzHV", "mXjlR7JWkHw", "camBAIVvBX", "rckYN96AOBhD", "7JZH26DsJ6S", "fZGFLJluKdJ", "t-QEeKXSLD", "ywstjKTuh4e", "NeJy062t6gK", "E0lc9o6_UgO", "Pr-ET5z8cn3", "Esk2paR47UX", "3ExOO6GRtn_", "6gjmp8wu473", "nips_2022_VYYf6S67pQc", "nips_2022_VYYf6S67pQc", "nips_2022_VYYf6S67pQc", "nips_2022_VYYf6S67pQc" ]
nips_2022_kOIaB1hzaLe
Contrastive Neural Ratio Estimation
Likelihood-to-evidence ratio estimation is usually cast as either a binary (NRE-A) or a multiclass (NRE-B) classification task. In contrast to the binary classification framework, the current formulation of the multiclass version has an intrinsic and unknown bias term, making otherwise informative diagnostics unreliable. We propose a multiclass framework free from the bias inherent to NRE-B at optimum, leaving us in the position to run diagnostics that practitioners depend on. It also recovers NRE-A in one corner case and NRE-B in the limiting case. For fair comparison, we benchmark the behavior of all algorithms in both familiar and novel training regimes: when jointly drawn data is unlimited, when data is fixed but prior draws are unlimited, and in the commonplace fixed data and parameters setting. Our investigations reveal that the highest performing models are distant from the competitors (NRE-A, NRE-B) in hyperparameter space. We make a recommendation for hyperparameters distinct from the previous models. We suggest a bound on the mutual information as a performance metric for simulation-based inference methods, without the need for posterior samples, and provide experimental results.
Accept
The three reviewers agreed that the work is a valuable contribution to its field, and presents extensive experiments. For the readers' benefit, I kindly ask the authors to take into account the reviewers' comments while preparing the camera-ready version. In particular, the revised version should include: - the updated results table (across seeds, per dataset); - a clearer formatting of Figure 5; - expanded discussion points on (i) how their method compares against learning the bias of NRE-B (Ma and Collins, 2018) and (ii) clarifying the part on sequential vs. amortized methods.
train
[ "qYlHxpLa2js", "A8qE-TRVEXt", "2qomx5vJA8", "kM7EsBT6K7v", "-_v160043W", "5xT8SfxBvYZ", "6rYxWp0mWyL4", "KYpLgNSy2b", "2H-UDrkBoJt", "_9rJJ8Gr0vk", "pkCqexw5lHw", "xyHuBNmqJux", "HcHqPEXBJV", "fk-Qoh_W8M9", "MCrx4moNLnV" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their clarifications of my issues and misunderstandings. I remain positive about this paper, and I keep my score. I suggest acceptance of this paper.", " **Conclusion**: I trust the reviewers will address the points above in the final version of the paper. Apart from the updated table which they have already started computing, the remaining points are text and figures (easily actionable) however I believe they are necessary to add substantial clarity, more complete references, and important discussion points to the final version. On that basis, I have raised my score as I think this is an interesting contribution:\n- for different fields (SBI, Contrastive Learning, Density-Ratio Estimation)\n- which raises interesting future questions, e.g. the impact of K >> 1 on statistical properties (robustness of the estimation to range of gamma values) and optimization properties (quicker convergence)", " I appreciate the authors' responsiveness! Apologies for the delay.\n\n- Updated results table\n\nThank you for providing the intermediate results: I understand the experiments are still running over 10 seeds and I trust the table with the variability across seeds *per* task will be added to Appendix C in the final version, as the authors said. Though it might not change the overall conclusion, I think this allows to check that there is not some \"compensation effect\" across tasks and that the \"source of variability\" is indeed sampling and training (different seeds, posterior models, stochastic optimization).\n\n- Clarification on sequential vs. amortized\n\nAs far as my understanding goes, the big picture is you're comparing different methods for estimating a posterior for SBI. You do that by comparing statistical properties (averaging across seeds in the table), optimization properties (convergence rates, Fig 5), and you mention that computational properties (amortized vs. sequential) are important as well and favor NRE-C over others methods in SBI. I am unclear however on which methods are amortized (all the NREs?) and which are sequential. This is discussed at a conceptual level in the end of the intro + the appendix, but it is not clear to me for which models and at which steps this specifically intervenes in the comparisons you're carrying out. I think it is worth clarifying, especially if your model (and the other NREs) benefit from being amortized. \n\n- Writing and relevant literature \n\nI appreciate the authors are expanding their referencing of relevant literature: links between SBI (NRE-A, NRE-B), Estimation (NCE [1], RankingNCE [2]), and Representation Learning (NCE and Negative sampling [3, 4], InfoNCE [5]) are longstanding. These links have been noted for SBI in [6]. I think this should be added to the introduction (instead of the Appendix) as it is relevant literature: it positions NRE-C in a wider context.\n\n- Clarity of Fig 5\n\nToo many contrastive parameters (all the different values of K) as well as different panels (different gammas) makes it hard to read. I would keep only two panels: left NRE-B, right NRE-C. The NRE-C single panel (bigger, given more space) would have: gamma = 1e-3 and 1 and 100 (1e-3 and 1 are the important values given Fig 1) denoted with a hue (e.g. triangle, circle, and cross markers, or different linestyles), and K=2 and 150 with two colors (e.g. blue and green). \n\nThe rest could be included in the Appendix. 
\n\n- Comparing NRE-B and NRE-C\n\nIn my view, this is very important to discuss further as it is the premise of this paper: c_w(x) is problematic and NRE-C avoids that altogether by changing the loss.\nFor NRE-B, as you said, the Importance Sampling diagnostic fails because the bias C(x) is not constant. However, as you said, *you could learn C(x)*, which should de-bias the estimation of w: this is exactly what is done in [2] for RankingNCE (your NRE-B), at the end of Assumption 2.2! However, this works under certain identifiability conditions, see Assumption 4.1 in [2], which seems to connect with what you are saying "c_w(x) - \hat{c}_w(x) is nearly constant over all x" if I am not mistaken. \nIf you believe this assumption is unrealistic and that this makes NRE-B fragile, as the estimate of w depends on how well C_w(x) is estimated, that is worth saying. Or if you think this should be investigated in future work, that is worth saying as well. \n\nNRE-B or InfoNCE is the workhorse of many practical breakthroughs in contrastive learning and has been applied to representation learning, SBI, and estimation. However, it relies on being able to estimate c_w(x) adequately, and this discussion has largely been ignored. Because NRE-B is so widely used, this makes the discussion on c_w(x) important to bring to the table, and it directly influences the estimation of w and its diagnostic. This discussion on learning c_w(x), how its estimation impacts that of w, how realistic it is to expect that it is well estimated, and referencing the prior work [2] which exactly formalizes that, is a perfect discussion point. And it provides a way to frame your method NRE-C, which basically says: instead of dealing with the c_w(x) term in NRE-B, we can change the loss to remove that obstacle. \n\n\n[1] Gutmann and Hyvarinen 2012 JMLR\n\n[2] Ma and Collins 2018 (Noise Contrastive Estimation and Negative Sampling...)\n\n[3] Mnih and Teh, 2011\n\n[4] Mikolov et al. 2013 (Distributed Representations of Words...)\n\n[5] Van den Oord et al. 2018 (Representation Learning with Contrastive Predictive Coding)\n\n[6] On Contrastive Learning for Likelihood-free Inference. Conor Durkan, Iain Murray, George Papamakarios.", " We included new results in the table displayed at the link:\nhttps://openreview.net/forum?id=kOIaB1hzaLe&noteId=-_v160043W\n\nAs requested, we trained with 4 different seeds and present the average in the table. We will continue the experiments to get to 10, but some tasks are rather slow.", " Dear reviewers,\n\nWe present here a new version of table 1 where the average has been taken over seeds (with four initializations), various posteriors (data), and tasks. This was requested by reviewer 1x3V.\n\n| Simulation budget | $10^3$ | $10^4$ | $10^5$ |\n|-------------------|--------|--------|--------|\n| Algorithm | | | |\n| \NREC (ours) | 0.856 | 0.760 | 0.682 |\n| \REJABC | 0.965 | 0.920 | 0.871 |\n| \NLE | 0.826 | 0.753 | 0.723 |\n| \NPE | 0.838 | 0.736 | 0.654 |\n| \NRE (\NREB) | 0.867 | 0.811 | 0.762 |\n| \SMCABC | 0.948 | 0.873 | 0.816 |\n| \SNLE | 0.783 | 0.704 | 0.655 |\n| \SNPE | 0.796 | 0.677 | 0.615 |\n| \SNRE (\SNREB) | 0.788 | 0.703 | 0.610 |\n\nWe are happy to present a version of table 3 with the results for each task individually, but it is quite long. If any reviewer would like this, we'd be happy to provide the table.", " We appreciate the time taken to review our paper and the insights that you provided with your discussion. 
We’re glad about the positive assessment of the paper and wish to clarify any issues in our response!\n\n### Summary\n\n- We find your summary to be accurate, thank you for reading and responding to the paper! We do see improvement over NRE-A in some contexts as well, in particular the infinite joint draws in the C2ST hyperparameter search.\n \n\n### Strengths And Weaknesses\n\n- Indeed, we agree that the generalization of the two methods is our fundamental contribution and we are glad that you also find it to be a step forward. You are correct that we introduce the two hyperparameters, but we see them as “interpolating” between NRE-A and NRE-B. I.e., before, only two settings of these hyperparameters were possible; now any selection is allowed. \n \n We hope that our recommendations about how to set the hyperparameters will be useful to practitioners as they improve the performance of the algorithm across the SBI benchmark. We have expanded this recommendation in the revised version of our paper and also been more specific about when certain hyperparameters might be valuable--primarily discussing the effects on the diagnostic and normalizing constant. A higher K increases the normalizing constant; gamma reduces the convergence rate but can sometimes lead to higher-accuracy models, and generally gamma=1.0 is a safe choice. \n \n In SBI, data is only limited by the simulation budget of the user. Therefore, if our suggested hyperparameters are not satisfactory for a user, a hyperparameter search like the one we conducted should be possible in the SBI setting.", " ### Questions\n\n- You are correct that equation (2) is not influenced by K and we’re happy to discuss it further! Equation (2) represents the optimal ratio estimated by optimally minimizing the NRE-B loss function with an infinitely flexible neural network. It is not a function of K because no matter the value of K >=1, the optimal minimizer of the NRE-B loss converges to the same ratio. The paper introducing NRE-B does an empirical study of how changing K affects the convergence rate with local optimizers, finite networks, and limited training data. (Just like we do in our paper!) Please let us know if this adequately clarifies the question.\n \n- The motivation for the investigation was primarily to investigate the properties of a generalization of these two methods. Motivation for increasing the number of contrastive examples comes partially from the introduction of the idea in the paper by Durkan et al., but it is also a natural extension of a binary classifier and goes along with advances in contrastive representation learning where the number of classes can affect performance. We also emphasize that the weighting of the classes with gamma is equally explored in our paper; both of these searches come naturally from the generalizing framework that we found for NRE-A and NRE-B. We report the simulation efficiency along with other empirical results in different regimes. \n \n We claim this simulation-efficiency motivation is inherited from Durkan, but we want to clarify what we mean there. Durkan’s paper is primarily designed to point out a connection between posterior estimation and ratio estimation and introduce a “more simulator efficient” sequential algorithm. 
We changed our language to read `As a part of an effort to unify different \SBI methods and to improve simulation-efficiency, \citet{Durkan2020} reformulated the classification task to identify which of $K$ possible $\btheta_k$ was responsible for simulating $\bx$.`\n \n- Excellent find! Indeed, that is a typo. We changed it to read `\citet{Durkan2020} train a classifier that selects from among $K$ parameters $(\btheta_1, \ldots, \btheta_K)$ which could have generated $\bx$, in contrast with \NREA's binary possibilities. One of these parameters $\btheta_k$ is \emph{always} drawn jointly with $\bx$.`\n \n- Great question! Although (14) is generally intractable to compute exactly, it is possible to estimate (14) using Monte Carlo. We do exactly that for NRE-B and NRE-C in the new version of Appendix D, Figure 13. Estimating the partition function (this quantity) is an entire field of research within statistical physics and energy-based models. Nested sampling provides one method, but there are many different possibilities. Since we can easily sample from the joint distribution, we computed it using simple Monte Carlo (this approach is generally applicable outside of SBI).\n \n\n### Conclusion\n\nSharp eye finding those typos and asking informative questions to improve the paper! We appreciate your time and expertise in your reply. If you find our changes adequate, we hope you’ll consider improving your score to reflect the improvement in the paper itself.", " Thank you for reading our paper and providing a lot of insight into its relationship with noise contrastive estimation. We also appreciate that your questions and requests were highly actionable. We found our paper has improved by incorporating your suggestions; we hope you think so too! \n\n### Summary\n\n- We agree with the summary and want to note that the reviewer even pointed out an important connection to NCE and InfoNCE explicitly, which is quite nice. We are expanding our references to the contrastive learning literature and we will make sure this is made explicit in the text. We added a sentence about it in appendix A. `Specific connections are in the loss functions with \NREA closely corresponding to noise-contrastive estimation (NCE) \cite{gutmann2010noise, gutmann2012noise} and \NREB with InfoNCE \cite{van2018representation}.`\n \n\n### Strengths And Weaknesses\n\nStrengths\n\n- We appreciate the compliments on the strengths and the connections with contrastive learning. We are happy to see that the strengths of our method align with what we see as our main contributions.\n \n\nWeaknesses\n\n- You correctly pointed out that we average across data for every individual task and also across tasks (task averaging is not addressed in our response here). In this case, the data average is over 10 different simulated posteriors, namely, the mean C2ST reported is computed as 1/N \sum_{i=1}^N C2ST[p_w(\theta \mid x_i), p(\theta \mid x_i)] with N=10. \n \n In the SBI benchmark it is computed 1/N \sum_{i=1}^N C2ST[p_{w_i}(\theta \mid x_i), p(\theta \mid x_i)] where w_i implies that the weights have been trained differently due to different seeds (and also different simulated training data). \n \n That means the SBI benchmark trains 10 times as many estimators as we do, generates 10 times the amount of training data, and averages over 10 posteriors for each observation. One huge advantage of NRE-C’s amortized estimation is the ability to train just one estimator and average over 10 different posteriors. 
(Many estimators in the SBI benchmark are sequential and do NOT have this property.) We find this way of averaging to be in line with the mutual information bound that we apply in figure 5 in the revised version of the paper, i.e. it tells us about the performance of the specific estimator. That’s why we did it this way. \n \n That being said, we understand that averaging over seeds is also an interesting result to present. We just started running the benchmark again so that we can report an average with an evaluation technique similar to the SBI benchmark's: over seed AND data. When it is finished, we will update our tables to represent this averaging over both seed and data (just like in the SBI benchmark). When it’s done, we will add clarifying information about this to Appendix C.2. We do not expect significant changes in the reported results, since we already average across posteriors (data) and tasks (in the main c2st result table).\n \n- We believe that indeed the plot is a bit hard to read. We think the lack of clarity is due to the offset between different values of K making it difficult to compare the minima of different colors (aka K) in the same plot (same gamma). In the new version of our paper we use a bound on the mutual information to validate models and select between them where the offsetting does not appear. \n \n When gamma=0.001 the effect is very small, arguably non-existent (all models converge around 5*10^2 epochs). However, in all other plots the grey line (200 contrastive parameters) achieves a minimum in fewer epochs (earlier convergence) than any other colored line (sometimes it looks more-or-less tied with pink). Hopefully this clarifies what we mean. If it is not clarified and you think we should soften / change our language, please let us know! For now, we changed the sentence in the figure so it reads `A grid search indicates that increasing $K$ leads to earlier convergence at fixed $\gamma$, when $\gamma > 0.001$.`", " ### Questions\n\n- We are not quite sure what is meant by [1]. If you mean “massive optimal data compression…” by J. Alsing et al., we are not sure where in that paper they learn the partition function (c_w(x)). Perhaps the link was forgotten? \n \n To answer your question, you are correct that learning the unnormalized posterior is not suitable for the importance sampling diagnostic. If it were possible to learn c_w(x), then it seems likely that this could de-bias the estimate, and the result `log \frac{p(x \mid \theta)}{p(x)} + c_w(x) - \hat{c}_w(x)` (where the hat denotes the estimate) could be closer to a normalized posterior. Yet unless `c_w(x) - \hat{c}_w(x)` is nearly constant over all x, we will still have the issues we raise in Appendix B.2 \n \n We show in appendix B.2 that the unknown bias drops out only when C(x_n) \approx C(x_i) for all i and n, which occurs only when C is nearly constant. Since NRE-B does not restrict it, that means C is probably not constant in x and is unlikely to drop out by normalizing (although learning it may bring this debiased estimate closer to constant). This means the importance sampling diagnostic will still fail after normalizing. For this reason, we added the sentence in the “importance sampling diagnostic” paragraph `... with normalized importance weights is ill-posed, i.e., the problem is \emph{not} solved by estimating the partition function.` We also added a similar sentence to appendix B.2. \n \n Furthermore, NRE-B does not outperform NRE-C in our investigations. 
Therefore, we still would recommend using NRE-C. We show in appendix D in an updated version of the paper that the mean E[c_w(x)] in NRE-B can get quite large; thus, we wonder whether learning it is a better approach than just encouraging c_w(x) \approx 1 from the start, as we do in NRE-C.\n \n\n### Conclusion\n\nWe thank you again for your important clarifying questions. We are happy to discuss any further questions or follow up if you find that our response could use further explanation.", " ### Questions\n\n- We want to thank you for the questions about the limitations of NRE-C, which are always important to clarify. We break our response down point by point. Before that, one important theme in your question is about the efficiency of NRE-C compared to NRE-B regarding the generation of training data. In all training regimes presented in this paper, NRE-C and NRE-B use the same methods for simulating data. The fundamental difference lies in which training data is shown to the classifier and how the loss is computed. \n \n Since this is a natural question, we added a paragraph to Appendix A where we discuss how to sample from the distributions. This will hopefully increase the clarity of the paper and contribute to its self-containment.\n \n\n- The introduction of the independently drawn class introduces only negligible computational complexity. You raise a good question about how one samples from p(x), noting that it may be difficult. This is a natural conclusion because computing the value of p(x) is incredibly expensive. However, sampling is, luckily, not difficult. Consider two pairs (t0, x0) and (t1, x1) both sampled from p(t, x). If the t’s are swapped then the sample looks like (t1, x0) and (t0, x1) and the new pairs are sampled from p(t)p(x). This is exactly what we mean when we say “bootstrap” to generate the samples we need in the loss function. It is also a standard technique used by both NRE-A and NRE-B. For the price of reordering an array, we can sample from the independent distribution! \n \n We added an entire paragraph about this in Appendix A to clarify. It’s called `Sampling in \SBI`. It will be uploaded with the new PDF.\n \n- I’m not sure where we said that sampling from the prior is slow; would you mind pointing it out? What you read is most likely a typo and we’d be happy to correct it! It is often the case that drawing from the prior is trivial and simulating the data x can be very expensive. We note this in the first sentence of Section 3.2 “Leveraging fast priors (drawing theta).” \n \n- This is a good question to check the efficiency of NRE-B and NRE-C. If the efficiency is defined as `posterior accuracy / simulations`, then NRE-B is less efficient with the right hyperparameters. The construction of this extra independently drawn class does not introduce any new simulations since the budget is (in the standard case and Section 3.3 “Simulation-based inference benchmark”) fixed beforehand and the independently drawn samples are bootstrapped. The fundamental difference is in the loss functions. If the efficiency were computed as `posterior accuracy / total computation`, then we’d note that computing the loss for NRE-C is negligibly slower than NRE-B; however, this is likely on the order of nanoseconds to milliseconds per epoch.\n \n\n \n\n### Limitations\n\n- We will make the broader impact statement less broad by giving a few examples. 
We changed the last sentence to `This nuance can be missed by practitioners doing inference in any field; however, special care should be taken when producing inference results that may be used for making decisions in areas like predicting hidden variables that describe human behavior or determining what factors are responsible for climate change. This list is non-comprehensive and not specific to \SBI.`\n \n\n \n\n### Conclusion\n\nThank you again for your insight and commitment to clarity and improving the quality of our work. We hope that by addressing your concerns about self-containment and clarifying the figures and notation you will consider raising your score to reflect the improvements that we’ve made. Furthermore, we hope that we adequately addressed your questions. We’d be happy to comment further if there are remaining points!", " Thank you very much for your time and insight into our work. We hope the changes we made to the paper address your concerns. We see the changes as helping clarify the message.\n\n### Summary\n\n- Thank you for the clear and accurate summary.\n\n \n\n### Strengths And Weaknesses\n\nWeaknesses\n\n- We acknowledge the point that the work is not completely self-contained, especially regarding the details of the related work and simulation-based inference benchmark.\n \n\n- We added more detail regarding the relevant simulators from the benchmark to Appendix C.2. Given space, we can improve the descriptions with more detail in the “experiment” section in paragraph 2. (Although there is not much space now. Upon acceptance with another content page, we will try to fit this.)\n \n- We agree that moving some of the information about related work, such as sequential / deep learning approaches, into the main text (intro) would be helpful. (Upon acceptance we can use some of the additional content page for this.)\n \n- To make the relationship between the ratios clearer we introduced a sentence to the intro `an appealing alternative for practitioners is estimating a \emph{ratio} between distributions. Specifically, the likelihood-to-evidence ratio $\frac{p(\btheta \mid \bx)}{p(\btheta)} = \frac{p(\bx \mid \btheta)}{p(\bx)} = \frac{p(\btheta, \bx)}{p(\btheta) p(\bx)}$. It relates the prior to the posterior as can be shown using Bayes' rule.`\n \n- In order to make it clearer that we want to estimate the posterior efficiently, we added the following sentence to the first introductory paragraph `Our design goal is to produce a surrogate model $\phat(\btheta \mid \bx)$ approximating the posterior for any data $\bx$ while limiting excessive simulation.`\n \n\n- We are glad that you brought up these concerns because we think that Figure 1 and Figure 2 are very important to make clear to the reader. Figure 1 sketches the performance of our algorithm over the hyperparameters we investigate in the paper. Figure 2 shows the relationship of our general NRE-C to the two previous methods NRE-A and NRE-B. \n \n\n- We changed the first sentence in Figure 1 to `Conceptual, interpolated map from investigated hyperparameters of proposed algorithm \NREC to a measurement of posterior exactness using the Classifier Two-Sample Test.` Hopefully this helps to clarify its purpose. Please let us know if you want something else specific to be changed to help clarify the plot. \n \n- It seems that the issue with Figure 2 is the lack of clarity in the notation in the loss functions. 
To solve that issue we added the sentence `Notation is defined in Section~\ref{sec:nrec}.` which refers to where the notation is introduced. We attempted to define the relevant symbols in the caption but it took up too much space. Please let us know if that addresses your concerns or if there is another way we can clarify the figure.\n \n\n- You are absolutely correct that the construction with sigma \circ f_w is used before f_w or \circ are introduced. We have fixed this by explaining what those symbols represent in the same paragraph in which they are introduced, using the standard `where f_w is a neural network with weights w` style. Regarding explaining x and theta, we generalized the examples in the first paragraph of the introduction by modifying the sentence such that it reads `This problem setting occurs across scientific domains \cite{cole2021fast, alsing2018massive, brehmer2018constraining, hermans2020towards, lensing} where $\btheta$ generally represents input parameters of the simulator and $\bx$ the simulated output observation.`", " We want to thank all three reviewers for sharing their expertise and impressions of our work along with taking the time to read our paper. The positive impressions of the reviewers heartened us. We believe that the suggestions that were made in the reviews were thoughtful and relevant, thereby improving the paper further.\n\nWe took action by changing the text as suggested by the reviewers. Since the requests were primarily to include more information / clarification or to move information from the supplemental material into the main text, we moved the broader impact section to Appendix A to free up space. Our revised paper still fits within the required 9 page limit and the pdf has been updated (along with an updated appendix).\n\nThe only matters raised by reviewers that we have not finished our actions on yet are:\n1. The averaging of the C2ST over both data and seed, suggested by reviewer 1x3V. The reason is that the additional experiments are still running. We expect that the new results will not significantly change our reported C2ST as we have already averaged over other aspects, namely across posteriors (data) and across tasks (in the main c2st result table).\n2. Transferring details about other papers' benchmark experiments into our main text. Assuming both acceptance and the allowance of an additional content page, we will address this matter as best we can given the space provided.\nThe first matter is still open. We will update the results when the experiments finish.\n\nIn the meantime, we look forward to a fruitful discussion of the paper.\n", " This paper introduces contrastive neural ratio estimation (NRE-C) to develop a better estimator for the likelihood-to-evidence ratio p(x | \theta)/p(x) for simulation-based inference (SBI). They first cast the parameter estimation problem as a classification problem, then augment the K-class classification problem (where the goal is to select the best parameter \theta_k among K options) in Durkan et al. with another (K+1)th class which contains samples that were generated independently from p(\theta)p(x). They demonstrate that NRE-C is consistent and outperforms existing baselines on simulated benchmarks. Strengths:\n- NRE-C is a clean and straightforward generalization of Durkan et al. 
and the binary classification “trick” for estimating likelihood ratios.\n- The authors also performed extensive experiments and empirical evaluations on the SBI benchmark suite.\n\nWeaknesses:\n- It’d be helpful if the paper was a bit more self-contained. There is a lot of information in the supplementary that would benefit from being moved to the main text (e.g. related works, further details on the SBI benchmark). As another example, l. 35 (equivalence of the 2 ratios) should be better explained. Why do we want these two quantities to be equivalent? The text should explicitly mention that it is to try to make posterior inference easier in SBI.\n- The presentation was also unclear. For example, the figures are pretty confusing – I’m not sure what we are supposed to be taking away from this image/caption. Figure 1 was unclear, and Figure 2 was also not clear with the different notations (e.g. b, b’). \n- There is notation (e.g. f_w, x, theta, theta_0, p, etc.) that is not explained. It’d be clearer if this notation was collected in the following section (or introduced properly before its usage) to make it easier to follow. This causes confusion, such as \sigma \circ f_w(\theta, x) on l. 41.\n\n - When constructing this “hybrid” objective function (that blends NRE-A and NRE-B), what exactly are the limitations? For example, is it computationally expensive to generate samples x^{(b)} \sim p(x)p(\theta) to set up the classification problem? Given that the simulator takes a prior p(\theta) and a way to sample from the likelihood p(x|\theta), it seems like sampling from p(x) would be very hard. The paper also mentioned that sampling from the prior is slow. If we take into consideration this sampling procedure for constructing the (K+1)th class, is NRE-C still more efficient than NRE-B? The Broader Impact statement is too broad; it should be more specific about what “decisions about matters that affect living creatures” are, etc.", " Density-ratio estimation can be performed via classification tasks (binary, multisample) and used to learn a potentially unnormalized model of the posterior distribution. \n\nThis submission highlights a simple yet important flaw: when the posterior is parameterized as a softmax (this is the case for NRE-B a.k.a. InfoNCE), the scaling indeterminacy translates to a bias in the density-ratio estimate which trickles down to the rest of the analysis. The submission proposes a new loss that removes this bias. It carefully examines how that impacts posterior estimation in practice, through a series of controlled experiments. \n **Strengths**\n\nThis work is interesting, as it highlights a simple yet important flaw in a popular loss (InfoNCE) and shows how a simple fix can enable better posterior learning.\n\nThe connections with NRE-A (essentially NCE, depending on how the discriminator is parameterized) and NRE-B (InfoNCE) are appreciated. Given that the authors' contribution, NRE-C, can be seen as a generalization of the previous losses, it is interesting to see in Figure 1 what the best setup is: while it has been noted in previous work that (i) using many negative samples is advantageous, the insight that (ii) imbalanced classification is favorable when the tuples of entirely noise samples are over-represented seems novel. \n\nThe diagnostics section is appreciated as it is an important and rather open question when estimating conditional density models, such as the posterior distribution. 
\n\n**Weaknesses**\n\nAs I understand it, the simulations evaluate the quality of estimation by averaging across datasets (Figure 4). Is it common to report the variance *across* datasets? Could you report the variance *per* dataset across many (data-generation) *seeds*?\n\nThe effect of the hyperparameters on the convergence speed (Figure 5) is interesting; however, it is not clear to me from the plot \"that increasing K leads to earlier convergence at fixed gamma\". Could the authors explain further?\n\n\n\n It is known from [1] that the NRE-B loss recovers the density ratio + a bias term. However, in [1] the bias is learnt as part of the estimation procedure. This way, the *un-normalized* density ratio (and, by extension, its numerator, the conditional density model, in your case the posterior) can be recovered without bias. \n\nCould you clarify why learning the bias does not solve the problem? Is it because you would only recover the *unnormalized* posterior whereas you need the *normalized* posterior in the diagnostics? I see no particular negative societal impact. ", " Authors present a generalising framework of (what they call) NRE-A and NRE-B. They argue that it has clear advantages, especially over NRE-B. In the experiment section they show some good values of hyperparameters K and gamma, found on a benchmark dataset. I think the major strength is the generalising framework the authors present. This type of work usually helps fields move forward. \nThe weakness, in my eyes, is the introduction of hyperparameters, especially gamma. I am unsure how easy it is to find good values of this if data is limited. I really enjoyed your introduction and motivation; it helped me a lot. However, I could not understand why equation 2 is not influenced by K. Can you elaborate here?\n\nIn general, I think the motivation to have multiple k could be elaborated further. You mention it improves simulation-efficiency; is there more to it?\n\nLine 89: \"In contrast with NRE-A, one of these parameters θ_k is always drawn jointly with x.\". As far as I could tell, one of the two classes in NRE-A is exactly drawn jointly with x, so why is it in contrast?\n\nIn Equation 14: is Z_w computable? Or should it be approximated, and how good can such an approximation be? And, in the data limit (only), how quickly does it converge to 1? A vague answer is accepted here.\n\n\n The authors include a section on societal limitations. I agree completely with the authors here." ]
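One point from the exchange above worth making concrete is the Monte Carlo estimate of the partition function that the authors describe for diagnosing NRE models (their discussion of estimating (14) with simple Monte Carlo). A minimal sketch, where `log_ratio` and `prior_sample` are assumed interfaces for the trained ratio estimator and the prior:

```python
import torch

def partition_function_mc(log_ratio, prior_sample, x, n_samples=10_000):
    """Simple Monte Carlo estimate of Z(x) = E_{theta ~ p(theta)}[exp f_w(theta, x)].
    For a well-calibrated likelihood-to-evidence ratio estimator, Z(x) should be
    close to 1; for NRE-B the unknown bias c_w(x) can push it far from 1."""
    theta = prior_sample(n_samples)                       # draws from the prior
    x_rep = x.unsqueeze(0).expand(n_samples, *x.shape)    # broadcast the observation
    return torch.exp(log_ratio(theta, x_rep)).mean()
```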
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "6rYxWp0mWyL4", "2qomx5vJA8", "2H-UDrkBoJt", "fk-Qoh_W8M9", "nips_2022_kOIaB1hzaLe", "MCrx4moNLnV", "MCrx4moNLnV", "fk-Qoh_W8M9", "fk-Qoh_W8M9", "pkCqexw5lHw", "HcHqPEXBJV", "nips_2022_kOIaB1hzaLe", "nips_2022_kOIaB1hzaLe", "nips_2022_kOIaB1hzaLe", "nips_2022_kOIaB1hzaLe" ]
nips_2022_QotmVXC-8T
Muffliato: Peer-to-Peer Privacy Amplification for Decentralized Optimization and Averaging
Decentralized optimization is increasingly popular in machine learning for its scalability and efficiency. Intuitively, it should also provide better privacy guarantees, as nodes only observe the messages sent by their neighbors in the network graph. But formalizing and quantifying this gain is challenging: existing results are typically limited to Local Differential Privacy (LDP) guarantees that overlook the advantages of decentralization. In this work, we introduce pairwise network differential privacy, a relaxation of LDP that captures the fact that the privacy leakage from a node u to a node v may depend on their relative position in the graph. We then analyze the combination of local noise injection with (simple or randomized) gossip averaging protocols on fixed and random communication graphs. We also derive a differentially private decentralized optimization algorithm that alternates between local gradient descent steps and gossip averaging. Our results show that our algorithms amplify privacy guarantees as a function of the distance between nodes in the graph, matching the privacy-utility trade-off of the trusted curator, up to factors that explicitly depend on the graph topology. Remarkably, these factors become constant for expander graphs. Finally, we illustrate our privacy gains with experiments on synthetic and real-world datasets.
Accept
The paper eventually received a perfectly consistent evaluation from all the reviewers (4 times "accept"), so I can only recommend acceptance.
test
[ "xEPnu8tHqWl", "wqJFwV0JyZu", "AozFp0HTU3b", "0mvslFIC-kZ", "_sW5dRImSf", "wID1iLRvmvn", "_3Paw2ftGCD", "kBm9l3V731H", "WWpQQm-u2Tb", "1XzKVVN3Fe", "nUZSEMxiXpz", "ZR-ec3vvHEh" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I really appreciate the authors for their response. I think they have answered my questions. \n\nI believe the reason for no group privacy result for the shuffle model also follows from a lack of proper adversarial definition. It has been true even in cryptography from where the shuffle model is borrowed (IKOS paper).\n\nFor composition, I might be wrong, but I think the authors are considering non-adaptive composition and settings where the underlying graph topology remains static. \n\nI am increasing the score because I really think that the privacy notion is interesting. \n\nFor my own understanding (the authors can respond even after the entire reviewing process): is there something fundamentally different or intuition to be achieved by using gossip protocols to understand the privacy on rings and complete graph?\n\n", " Thank you for addressing my concern, I've raised my score.", " I thank the authors for clarifying my questions and addressing my concerns, and I have raised my review accordingly.", " We thank all the reviewers for their useful feedback and their positive comments. In particular, we are happy that reviewers think that our relaxation offers a \"new perspective of designing privacy-oriented algorithms for decentralized optimization\" (Reviewer sTzh) and provides \"a big improvement over LDP in terms of accuracy\" (Reviewer MF8v) and a \"strong improvement in privacy utility tradeoffs over LDP\" (Reviewer BroW), with \"realistic assumptions\" (Reviewer MF8v). \nWe address the concerns and questions raised by the reviewers in separate comments below and will remain available during the discussion period if any point remains unclear.", " We thank the reviewer for stressing the novelty of our analysis and the fact that it leads to \"strong improvement in privacy utility tradeoffs over LDP\" and allows \"much tighter privacy guarantees\".\n\nRegarding the experiments, we stress that we already use real world datasets, both in term of graphs (Facebook Ego dataset) and with the (admittedly toyish) Housing dataset for gradient descent. These experiments are mostly meant to illustrate our theoretical results: we do provide theoretical convergence guarantees in the paper and run all experiments until the theoretical values of $T^{stop}$. Then, by design, our approach will provide the same improvement between LDP and our relaxation for other learning tasks. Our experiments also illustrate the improvements as the number of nodes increases, and the natural correlations between communities in the graph and the privacy losses. We also added new experiments with the time-varying graphs modeling user dropouts as requested by Reviewer MF8v. Finally, we will release the code on GitHub once published, allowing anyone to experiment further. ", " We thank the reviewer for his/her feedback and address his/her concerns below.\n\n**On the novelty with respect to [15]**\nWhile [15] indeed derives guarantees in the case of the ring and the complete graph, we stress that the studied algorithm is not the same: they study random walks where a single token walks along the edges, whereas our analysis holds for gossip and randomized gossip algorithms, which enable parallel computation and thus better scalability. 
Hence, even for the complete graph, the derivation of the bound is new, as we explain in the related work, lines 96-99.\n\n\n**On completing our definition with group privacy, composition and post-processing**\nWe thank the reviewer for raising this interesting point, although we believe that this is not always needed for having a meaningful relaxation; for instance, we are not aware of specific composition/group privacy/post-processing results for the shuffle model or privacy amplification by decentralization: they often follow from general differential privacy properties. Specifically:\n- For composition, the privacy losses from a node $u$ to a node $v$ in each computation simply sum up (as in classic Rényi DP), and this holds even if the graph is different in each computation. This is what we use to analyze our gradient descent algorithms across the different gradient steps.\n- Post-processing directly follows from the post-processing property of Rényi DP.\n- Group privacy generally relies on the worst database pair at distance $k$ for the adjacency relation considered, see for instance Proposition 2 in the paper introducing Rényi DP [43]. In our setting, this arbitrary choice is not well adapted to the structure of the graph (where we could imagine that a group corresponds to users closely related in the graph) and the fact that privacy losses are different for each pair of nodes. We agree with the reviewer that we could alternatively analyze group privacy similarly to how we study collusion. We will write it down properly in the final version.\n\n**On the interpretation of Theorem 1**\nTheorem 1 involves the **Euclidean norm of columns** of the gossip matrix, not the spectral norm of the matrix. We will make this clearer.\nWe can give a nice interpretation of the result for constant gossip matrices. In this case, the upper bound (right-hand side of Equation 6) can be thought of as a distance between nodes $u$ and $v$ in the graph, as explained in Corollary 1 (proved in Appendix C.2). More precisely, $f(u,v)$ in Theorem 1 can be expressed using the probability of being in $v$ when performing a random walk from $u$, which intuitively decreases as the distance between $u$ and $v$ increases. The intuition is thus that nodes that are at some non-negligible distance from each other have stronger pairwise privacy guarantees.\n\n**Mean privacy loss as comparison metric**\nIn lines 174 and below, we acknowledge the limitations of the mean privacy loss and stress that it should not be taken as a privacy guarantee by itself. However, our pairwise notion of privacy keeps track of privacy losses for each pair of nodes, giving us $\mathcal{O}(n^2)$ numbers compared to a single one for central DP. The mean privacy loss allows us to make a comparison and has two advantages:\n- it can be interpreted as the level of \"attack\" that a node can perform on all the other nodes of the graph. If the target is chosen randomly, then the expected privacy loss is the mean privacy loss.\n- it is computationally tractable and gives us closed-form formulas that are easy to compare.\n\nWe emphasize that all our results not only bound the mean privacy loss but also provide formulas for each pairwise privacy loss that can be evaluated numerically, but these are less interpretable (*cf* the previous point on the interpretation of Theorem 1).\n\n**Improvements of Algorithm 2 for arbitrary graphs**\nIndeed, Algorithm 2 does not improve upon Algorithm 1 for all values of $d$ and spectral gap, as highlighted in Table 1. 
However, and as we explain in Footnote 2 (page 7), this is due to the fact that our randomized Muffliato algorithm is not accelerated, while Algorithm 1 uses Chebychev acceleration. Algorithm 2 can be accelerated by using the continuized version of Nesterov acceleration [23]. Doing so, Algorithm 2 would provide improvements over Algorithm 1 for all values of $d$ and spectral gap.\nWe did not pursue accelerating Algorithm 2 in an effort to keep the paper easy to follow, due to the technicality of introducing the acceleration presented in [23]. We will expand Footnote 2 into a remark where we clarify this.\n\nWe hope that we addressed the main reviewer's concerns, and that our explanations and the improvements we described will convince him/her to increase his/her score.\n", " We thank the reviewer for the positive feedback and address his/her questions below.\n\n**Clarification in theorems 2 & 3**\nTheorems 2 and 3 are convergence results; they indeed hold for all time steps greater (and not lower) than the particular value of $T^{\rm stop}$ given by the theorem. We will correct our formulation.\n\n**On realistic implementations in building graphs/communications**\nGossiping with a constant gossip matrix $W$ and performing synchronous decentralized algorithms with such a constant matrix (e.g., [7,48]) indeed requires costly synchronization. However, this problem is typically addressed using randomized communications (randomized gossip matrices, as in Algorithm 2, or [10,23]): they do not require any global coordination. Indeed, randomized communications can be generated through the use of local Poisson point processes at each node, as explained in references [10,23].\n \n\n**Offline / unavailable users**\nAs wisely stressed by the reviewer, user dropouts can be dealt with using time-varying matrices, which is well captured by the general analysis of Muffliato (Theorem 1). In randomized Muffliato, users that may be offline may also be modeled by very small activation probabilities (possibly equal to 0) when they are offline. Then, using the generality of our analysis (in Theorem 1, e.g.), time-varying activation probabilities can be handled in the same way as time-varying gossip matrices. Hence, our results already include tools to handle dropouts correctly, which of course can be modeled in different ways.\nEven though our analysis of Muffliato-GD in the main text uses a constant communication matrix $W$ in an effort to ease notations, time-varying communication matrices $(W_t)$ are used in the analysis in Appendix F for Muffliato-GD. However, the gossip matrix needs to remain constant between each gradient step for the utility analysis we perform (in order to have a Chebychev acceleration). Using time-varying matrices for every communication with acceleration could be done using the results of [37], rather than a more classical Chebychev acceleration, at the cost of a more complex algorithm. In the GD experiments, we do change the sampling of the Erdös-Renyi graph after each gradient descent step, thus only assuming that coordination is possible for the time of a single step, and not the whole algorithm. \n\nTo complement our experiments, we added new plots for decentralized averaging in the supplemental material (see files error_dropout.pdf and privacy_dropout.pdf) where we change the graph at each iteration and we model dropouts explicitly. 
Specifically, at each time step, the availability of each node is modeled by an independent Bernoulli random variable and we draw a new Erdös-Renyi graph over the set of available nodes. We vary the expected level of available nodes at each step from 10 to 90% and observe that the convergence time increases with the proportion of inactive users, but the achievable privacy-utility trade-off remains approximately the same (the plot shows a single random run, hence the minor variations). We will add an appendix section explaining these new simulations with more extensive parameter exploration and averaging over several random seeds.\n\nWe hope that our answers completely lift the reviewer's concerns and that these details will convince him/her to increase his/her score.", " We thank the reviewer for her/his meticulous review, the time spent reviewing the paper and checking the proofs. It will allow us to correct typos and make our proofs clearer.\n\n**On the comparison with classical $(\epsilon, \delta)$-DP**\nWe stated all our results in Rényi DP (RDP) because RDP is increasingly popular in differential privacy papers (due to its nice properties) and appears to be becoming the new standard. Note however that we briefly recap at line 133 the conversion from RDP to classical $(\epsilon, \delta)$-DP, and we can of course compute explicitly the guarantees in $(\epsilon, \delta)$-DP for a given result. For instance, a randomized gossip on the complete graph, being $(\alpha, \epsilon)$-RDP with a utility of $\alpha \Delta^2/ n^2 \epsilon$, means that for a utility of $u = \mathbb{E}(\lVert \bar{x} - x^{out} \rVert^2)$ the algorithm is $(\Delta \sqrt{\ln(1/\delta)} / n \sqrt{u}, \delta)$ for all $\delta >0$, compared to $(\Delta \sqrt{\ln(1/\delta)} / \sqrt{nu}, \delta)$ for the local DP counterpart. Hence, we match the classical amplification of order $\sqrt{n}$ that central DP provides over local DP.\nWe can add a remark or an annex with a conversion of Table 1 to $(\epsilon, \delta)$-DP.\n\n\n**On the technical details of proofs**\nWe clarify the parts pointed out by the reviewer.\n- Points 1 & 2: We start the proof at line 590, which writes $\frac{1}{2}\mathbb{E}\|x^t-\bar x\|^2 = \frac{1}{2}\mathbb{E}\|P_t(W)(x^{(0)}-\bar{x})\|^2$. Since we want to use the property that we mention at line 587 of the paper (borrowed from Berthier et al. [7]), and since $\frac{1}{n}\sum_{v\in\mathcal{V}}x^{(0)}_v=\bar x + \bar \eta$ (due to $x^{(0)}_v=x_v+\eta_v$), we write $\frac{1}{2}\mathbb{E}\|P_t(W)(x^{(0)}-\bar{x})\|^2=\frac{1}{2}\mathbb{E}\|P_t(W)(x^{(0)}-\bar{x} -\bar \eta +\bar \eta)\|^2=\frac{1}{2}\mathbb{E}\|P_t(W)(x+\eta-\bar{x} -\bar \eta +\bar \eta)\|^2$. Then, because $x+\eta-\bar x-\bar \eta$ is 0-mean with respect to the summation of the coordinates of these vectors (not with respect to $\mathbb{E}$), $P_t(W)(x+\eta-\bar x-\bar \eta)$ is also 0-mean, and since $P_t(W)\bar \eta=\bar \eta$, we use a bias-variance decomposition (with respect to the summation and the coordinates, not wrt $\mathbb{E}$) to obtain $\frac{1}{2}\mathbb{E}\|x^t-\bar x\|^2=\frac{1}{2}\mathbb{E}\|P_t(W)(x+\eta-\bar{x}-\bar\eta)\|^2+\frac{1}{2}\mathbb{E}\|\bar \eta\|^2$, which answers point 1. 
Then, we use $\frac{1}{2}\mathbb{E}\|\bar \eta\|^2=\frac{\sigma^2}{2n}$ and the property of $P_t(W)$ to get $\frac{1}{2}\mathbb{E}\|P_t(W)(x+\eta-\bar{x}-\bar\eta)\|^2\leq (1-\sqrt{\lambda_W})^t\mathbb{E}\|x+\eta-\bar{x}-\bar\eta\|^2$, leading to the third line in the proof. To obtain the final result, we use again a bias-variance decomposition, with respect to $\mathbb{E}$, to obtain $\mathbb{E}\|x+\eta-\bar{x}-\bar\eta\|^2=\|x-\bar{x}\|^2+\mathbb{E}\|\eta-\bar\eta\|^2\leq \|x-\bar{x}\|^2+\mathbb{E}\|\eta\|^2=\|x-\bar{x}\|^2+n\sigma^2$, concluding this proof. \n- Point 3: there is indeed an expectation missing, and you are also right about the other typo.\n- Point 4: the inequality in line 595 is obtained using the Cauchy-Schwarz inequality as written in the paper, in the following way: \n $\big( \sum\_{w'}(W^t)\_{ww'}\big)^2\leq \big(\sum\_{w'}(W^t)\_{ww'}^2\big)\big(\sum\_{w'} 1\big)$, which can be rewritten further as $ n (\sum_{w'}(W^t)_{ww'}^2)=n \lVert (W^t)_w \rVert ^2$.\n\nWe hope that the above details fully answer the four questions, and thank the reviewer for helping us clarify the proof. We will make sure to include this level of detail in the final version.", " This paper proposes pairwise network differential privacy (PNDP) in decentralized optimization, which relates the privacy loss between two nodes to their communication weights after running the proposed algorithms for T steps. Several mixing matrices are considered and the paper analyzes both simple gossip averaging and baseline decentralized optimization problems. Utility and privacy analyses are conducted for different mixing matrices and problems. The new notion of differential privacy averages the privacy loss over all nodes and thus gives a new perspective for designing privacy-oriented algorithms for decentralized optimization rather than only looking at the worst-case privacy loss. The idea of PNDP is a pairwise version of network DP from ref. [16], which highlights an averaged privacy loss for each node instead of the worst-case privacy loss. This perspective is original. This paper has quality analyses and experiments to support its claims and is clearly written. For each case considered, the paper shows clear and solid theoretical guarantees. I think this paper is significant in introducing this new idea of a pairwise privacy notion, but my main confusion when reading the paper is how to explicitly compare the utility loss given the same privacy constraints, since there is a conversion between DP and Rényi DP for which I didn't figure out how these results are compared in detail. I read the proof of Synchronous Muffliato and have the following questions. 1. In line 590, how do we get the first bias-variance decomposition? It seems the cross term has dependent parts and cannot cancel. 2. The first inequality here seems to rely on some properties of $P_t(W)$ while dealing with the 2nd term in the 2nd line. 3. The first term in the 2nd line misses an expectation symbol. In the first line, I guess there should be $\|x^t - \mathbb{1} \bar{x}^\top\|$, otherwise the dimension of $\bar{x}$ is confusing. 4. How to reach the inequality in line 495? Yes. ", " The authors propose a relaxed notion of local differential privacy in a distributed model of local optimization in which users share data across a network over rounds. 
The key difference from LDP is that for a particular round, a user only sees data from his neighbors, and thus data from many hops away is aggregated more and is more private. They propose Muffliato, an algorithm for computing noisy averages with an improved accuracy guarantee over LDP for several important types of network topologies. They show how to use Muffliato to perform SGD privately.\n + Muffliato offers a big improvement over LDP in terms of accuracy and realistic assumptions about the network in which the computations are done.\n\n+ The authors provide a tight privacy analysis of Muffliato when the Gaussian mechanism is used.\n\n+ The experiments are of good quality and indicate the improvements of Muffliato.\n\n- Muffliato, and especially Muffliato-SGD, still require many rounds of coordination between users, which is often hard to achieve in the decentralized setting.\n Theorems 2 and 3 hold for a particular value of T^{stop} which is upper bounded as stated, and not for all values of T^{stop} up to that bound, right? The second case seems incorrect as early iterates would not be accurate. Please make this clearer in the theorem statements.\n\nIn real life, the communication networks often change between rounds of interaction, and users drop out and come back online. This seems to be captured well by the general analysis of Muffliato, which allows W to change over time. However, for random Muffliato, how realistic is it to assume that some mechanism exists to sample edges randomly and query the users? Does this require a central authority to do the sampling, and what happens if the users are offline?\n\nThis extends to Muffliato-GD, where it appears one must assume a fixed communication matrix, or use randomized Muffliato, many times.\n The communication/coordination limitation could be addressed by running experiments with different communication graphs each round.\n\n(Edit) I thank the authors for addressing this concern.
Theorem 1 is stated with respect to the norm of the gossip matrix (I am guessing it is the spectral norm). It would be nice if we could have a sense of this theorem with an example. \n\nThe mean privacy loss as the baseline to compare privacy does not make sense. \n\nI would expect Algorithm 2 to show an improvement for all values of d and spectral gap. This does not seem to be the case for arbitrary graphs. Please take a look above. Yes.", " This paper introduces Pairwise network DP, a relaxation of local DP that allows the privacy constraint to vary as a function of nodes in the decentralized graph (i.e., it may allow close nodes to lose more privacy than distant nodes). Given this setting, the work introduces Muffliato, which combines local noise injection and gossip protocols, and it further derives DP optimization algorithms. The theoretical analysis demonstrates the magnitude of privacy amplification for the protocols. Intuitively, since nodes only interact directly with their neighbors, this formulation allows for much tighter privacy guarantees for distant nodes, and network topology can have a large impact on privacy-utility tradeoffs. Finally, they also demonstrate privacy gains on synthetic and graph datasets. To the best of my knowledge, this work offers a novel analysis to quantify the privacy amplification of the decentralized protocol for arbitrary graph types and demonstrates a strong improvement in privacy-utility tradeoffs over LDP, both theoretically and empirically. \n Suggestion: Additional federated learning experiments on a real-world dataset (with a real-world graph) demonstrating the practical utility of this method would be helpful. The paper adequately notes its limitations and discusses its social impacts." ]
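The dropout experiment described in the rebuttal above (Bernoulli node availability plus a fresh Erdös-Renyi graph at every step) is easy to sketch. The following is a minimal illustration only, not the authors' code; the function and parameter names are assumptions.

```python
import numpy as np

def noisy_gossip_average(x, sigma, T, p_edge=0.3, p_active=0.8, seed=0):
    """Gossip averaging of locally noised values with user dropouts.

    Each node perturbs its value once with Gaussian noise, then repeatedly
    averages with neighbors on a fresh Erdos-Renyi graph drawn over the
    nodes that are active (available) at that step.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    z = x + rng.normal(0.0, sigma, size=n)  # local noise injection
    for _ in range(T):
        active = rng.random(n) < p_active            # Bernoulli availability
        A = rng.random((n, n)) < p_edge              # random symmetric graph
        A = np.triu(A, 1)
        A = (A | A.T) & np.outer(active, active)     # drop inactive nodes' edges
        deg = A.sum(axis=1)
        W = np.zeros((n, n))
        for u, v in zip(*np.nonzero(A)):             # Metropolis weights make W
            W[u, v] = 1.0 / (1 + max(deg[u], deg[v]))  # symmetric, doubly stochastic
        np.fill_diagonal(W, 1.0 - W.sum(axis=1))
        z = W @ z                                    # one synchronous gossip round
    return z

vals = np.random.default_rng(1).random(50)
out = noisy_gossip_average(vals, sigma=0.1, T=200)
print(out.std(), abs(out.mean() - vals.mean()))  # consensus spread, mean shift
```

Because every round uses a doubly stochastic matrix, the average of the noised values is preserved exactly, and lowering `p_active` slows convergence without changing the limit, which is the qualitative behavior reported in the rebuttal.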
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 1 ]
[ "wID1iLRvmvn", "_sW5dRImSf", "_3Paw2ftGCD", "nips_2022_QotmVXC-8T", "ZR-ec3vvHEh", "nUZSEMxiXpz", "1XzKVVN3Fe", "WWpQQm-u2Tb", "nips_2022_QotmVXC-8T", "nips_2022_QotmVXC-8T", "nips_2022_QotmVXC-8T", "nips_2022_QotmVXC-8T" ]
nips_2022_IPcgkUgw3t1
UniGAN: Reducing Mode Collapse in GANs using a Uniform Generator
Despite the significant progress that has been made in the training of Generative Adversarial Networks (GANs), the mode collapse problem, which refers to a lack of diversity in generative samples, remains a major challenge in training GANs. In this paper, we propose a new type of generative diversity named uniform diversity, which relates to a newly proposed type of mode collapse named $u$-mode collapse, in which the generative samples are distributed nonuniformly over the data manifold. From a geometric perspective, we show that the uniform diversity is closely related to the generator uniformity property, and the maximum uniform diversity is achieved if the generator is uniform. To learn a uniform generator, we propose UniGAN, a generative framework with a Normalizing Flow-based generator and a simple yet sample-efficient generator uniformity regularization, which can be easily adapted to any other generative framework. A new type of diversity metric named udiv is also proposed to estimate the uniform diversity given a set of generative samples in practice. Experimental results verify the effectiveness of our UniGAN in learning a uniform generator and improving uniform diversity.
Accept
This paper proposes UniGAN to alleviate mode collapse in GANs. They encourage the uniform distribution by arguing that samples on the manifold are equally accepted as real samples for training GANs. The paper is comprehensive in both theory and experimental results. It received an average rating score of 6, leading to an ``Accept'' decision. To further improve the impact of this paper, I suggest the authors study it in the context of modern SoTA image generation models in the future. Hopefully, it may help the GAN-based model family [1,2,3] to improve performance in the competition with diffusion models and auto-regressive models. References: - [1] Alias-Free Generative Adversarial Networks (StyleGAN3) - [2] LAFITE: Towards Language-Free Training for Text-to-Image Generation - [3] ViTGAN: Training GANs with Vision Transformers
train
[ "0etdHGExFvA", "etzdautH8bl", "I7SZo037XEL", "qtmx3I2MNzP", "EkB0jdaIAO5", "odMnfhFaBX4", "tNW-Gvk0YYR", "_hIcZnLecnT", "1nL94wTTmf6", "l-KnXYFu58c" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your questions. \n\nIn terms of the FID across different datasets, we provide quantitative results on natural image datasets in Table 12-17 in supplementary. Our NF-based model can achieve the FID scores of 8.22 (CelebA), 11.22 (FFHQ), 9.16 (LSUN Car), 8.20 (LSUN Bedroom), 9.83 (LSUN Church). Though StyleGAN2 can achieve the FID scores <5 on these datasets, the total number of parameters of the StyleGAN2 (18.828M) is much larger than that of our generator (8.882M), hence the comparison between the two models is not very fair. In terms of the FID across different scales, due to limited amounts of GPUs, we will add the experimental results on high-resolution image datasets to the future version of our work.\n\nIn terms of why methods handling disconnected manifolds could achieve better performance, the BourGAN [1] paper can provide some inspiration. Specifically, the authors of BourGAN demonstrate that using a Lipschitz continuous generator to map a Gaussian prior distribution over a connected latent space to the data distribution with disconnected modes may lead to arbitrarily large gradients and hence unavoidably results in unwanted samples (see Fig. 1 of [1]). In our paper, we also adapt our generator uniformity regularization to the setting of multiple disconnected modes, see Line 147-176 in our supplementary.\n\n[1] BourGAN: Generative Networks with Metric Embeddings. NIPS, 2018. \n", " I thank the authors for the response!\n\nRegarding the high quality of CelebA dataset having low FID score and justifying lowering the power of the generator to NF-based can still result in high-quality samples if the discriminator is powerful enough:\n\nThe current paper's best FID on CelebA in the appendix is 9.16; however, StyleGAN2 itself archives the FID score of 5.2 on CelebA. It is important to consider the FID across different datasets with different scales. And I would appreciate it if the authors could help me to understand better.\n\nRegarding the usage of uniform generation for image data, based on one manifold hypothesis,\n In fact, in GAN literature, papers that design their method to handle disconnected manifolds could achieve better performance, such as: ( Diverse Image Generation via Self-Conditioned GANs) or PGMGAN therefore, I do not see why it is helpful to have that kind of generator in GAN literature, which their main advantage is high-quality samples.\n\n\n\n\n\n\n", " Thanks for your questions.\n\nQ: Evaluation on high-resolution image datasets?\n\nA: We will conduct experiments on high-resolution image datasets to further evaluate the model. Since experiments on high-resolution image datasets require a lot of time and computational resources, due to limited time before the end of the author response period and limited amounts of GPUs, we cannot immediately provide the experimental results, and please allow us to add the results to the future version of our work.\n\nQ: Analysis similar to Fig. 5 on other datasets like FFHQ, AFHQ, etc?\n\nA: We present qualitative results in supplementary (see Fig. 4-15) to observe the uniformity of the generated distribution. Fig. 5 visually demonstrates the effect of our proposed uniformity regularization on a 2D synthetic dataset. However, for natural image datasets, due to the high dimensionality of images, it is impossible to directly visualize the uniformity of the generated distribution over the entire dataset like the 2D synthetic dataset. 
Therefore, we randomly select a generated sample and use latent traversal to observe the uniformity of the generated distribution in a neighboring area: if the generated samples of the latent traversal vary uniformly, it is considered that the generated distribution in this neighboring area is uniform. Through experiments, when the uniformity regularization is not imposed, we observed that generated samples in some regions collapsed (the samples generated by latent traversal remained the same and did not change at all, see subfigure (a) for each of Fig. 4-15), while in some regions generated samples changed drastically (see subfigure (b) for each of Fig. 4-15), which reflects that the generated distribution is not uniform. However, when the uniformity regularization is imposed, the generated samples of latent traversal in different regions vary more uniformly, which reflects that the generated distribution is more uniform than that obtained without imposing the uniformity regularization.\n\nQ: The relations between different types of mode collapses?\n\nA: As we analyze in Section 3.1 of the main text, $u$-mode collapse is a new mode collapse that focuses on the generated distribution uniformity, which cannot be captured by the $\left(\varepsilon,\delta\right)$-mode collapse. Similar to the PPL metric of StyleGAN2, our proposed udiv metric and the traditional LPIPS metric cannot reflect each other, so the $u$-mode collapse and the $\left(\varepsilon,\delta\right)$-mode collapse are two different types of mode collapses, and the proposed uniformity regularization may not improve the traditional diversity measured by LPIPS, but the uniform diversity. However, our analysis in Section 3.1 of the main text shows that when $u$ is fixed for the $u$-mode collapse, $\varepsilon$ is lower bounded for the $\left(\varepsilon,\delta\right)$-mode collapse for a given $\delta$. Moreover, when the generated distribution is sufficiently uniform, the $\left(\varepsilon,\delta\right)$-mode collapse also ceases to exist. Therefore, improving uniform diversity can theoretically improve the lower bound on the extent of the $\left(\varepsilon,\delta\right)$-mode collapse.", " Thanks for your questions.\n\nQ: Negative societal impact of our work?\n\nA: Techniques that generate high-quality fake images (especially human face images) such as DeepFake may be used for malicious purposes, which brings negative societal impacts. 
Although our method can improve the uniformity of generated distributions, the quality of generated images may be more important in practical applications of fake image generation. Since improving generative quality is not the purpose of our work, our work may not pose a challenge to DeepFake detection that prevents malicious use of high-quality fake images, and hence the negative societal impacts may not be particularly applicable to our work.\n\nQ: Further evaluation on the FashionMNIST and partial MNIST dataset as well as the stacked-MNIST dataset?\n\nA: We provide further evaluation on the two mentioned datasets; see Tables 9\\&10 for quantitative results and Fig. 8 for qualitative results in our revised supplementary. Similar to datasets that provide class labels such as MNIST, FashionMNIST and CIFAR, the two mentioned datasets have multiple discrete modes, with each mode corresponding to one class. As we mentioned in Lines 147-176 of the supplementary, we adopt a conditional generation setting (i.e., using $g\\left(z;y\\right)$ to generate an image, where $g$ is the generator, and $z$ and $y$ are the latent code and the class label, respectively) for datasets that provide class labels, because different classes (modes) correspond to different disjoint submanifolds, and the union of all the disjoint submanifolds cannot be homeomorphic to a continuous Euclidean latent space. Therefore, under the conditional generation setting $g\\left(z;y\\right)$, ideally, we can cover all the discrete modes by traversing all the class labels $y$ for $g\\left(z;y\\right)$. In our experiments, for the model trained on each dataset, we first randomly sample 10000 class labels $y^{\\left(i\\right)}$ and latent codes $z^{\\left(i\\right)}$, then obtain generated samples $\\left\\\\{x^{\\left(i\\right)}=g\\left(z^{\\left(i\\right)};y^{\\left(i\\right)}\\right)\\right\\\\}_{i=1}^{10000}$ for evaluation. Our model can cover all 11 modes of the FashionMNIST and partial MNIST dataset and most of the 1000 modes of the stacked-MNIST dataset.", " Thanks for your questions.\n\nQ: How to address the limitation of inferior generative quality of NF-based models?\n\nA: We observe that using a strong discriminator can lead to high-quality generated samples. Specifically, when we use the powerful StyleGAN2 discriminator (see Table 4 of the supplementary), the FID scores can be reduced to a very low level (e.g., FID<10 on the CelebA dataset; see more quantitative results in Tables 12-17 of the supplementary), which shows that our NF-based generator can also generate high-quality samples when the discriminator is powerful enough.\n\nQ: How to understand the zero-padding manner of the proposed NF-based generator?\n\nA: Each layer of the generator consists of a padding module for padding zeros to boost the dimensionality of input features, and a flow module for nonlinear transformation. Corresponding to the traditional convolutional network-based generator, the padding module can be considered as an $\\mathrm{Upsample}$ layer and the flow module can be considered as a $\\mathrm{Conv}+\\mathrm{BN}+\\mathrm{ReLU}$ layer.", " Thanks for your questions. \n\nQ: Why does UniGAN achieve low IS scores on the CIFAR dataset?\n\nA: Regarding the difference in IS scores between UniGAN and PGMGAN on the CIFAR dataset, in addition to being likely caused by the different generator architectures of the two models (we use an NF-based generator, while PGMGAN uses a ResBlock-based generator), it is more likely caused by the different discriminator capabilities of the two models. As we show in Table 3 of the supplementary, the architecture of the discriminator we used for training on the CIFAR dataset is very simple: it consists of only a few vanilla convolutional layers and the total amount of model parameters is only 0.188M. However, the discriminator of PGMGAN consists of multiple ResBlocks, which is relatively more capable. 
In addition, it can be seen from the supplementary that for the natural image datasets, when we use the powerful StyleGAN2 discriminator (see Table 4 of the supplementary), the FID scores that measure the quality of generated samples can be reduced to a very low level (e.g., FID<10 on the CelebA dataset; see more quantitative results in Tables 12-17 of the supplementary), which shows that our NF-based generator can also generate high-quality samples when the discriminator is powerful enough.\n\nQ: Is a uniform distribution necessarily better?\n\nA: Regarding the concern that a uniform distribution is not necessarily better, it is indeed not ideal for 1D data with the support being the entire $\mathbb{R}$ to have a uniform distribution over the entire infinite $\mathbb{R}$ space. However, for natural image datasets such as human faces, a uniform distribution over the manifold is reasonable, because all human face images fall on a manifold restricted to a bounded region $\left[0,255\right]^{C\times H\times W}$ rather than extending to the entire infinite $\mathbb{R}^{C\times H\times W}$ space, where $\left[0,255\right]$ is the range of pixel values and $C\times H\times W$ is the dimensionality of the image. Therefore, it is reasonable to adopt a uniform distribution on a finite manifold. In addition, the choice of which kind of distribution to adopt over the support set is subjective. Although one may prefer some samples to others, we adopt the uniform distribution over the manifold because we take into account that every sample on the manifold can be equally accepted as a real image, which should also be acceptable.", " This paper proposes UniGAN, a new approach to alleviate mode collapse in GANs. Assuming the manifold hypothesis, the authors motivate training a generator with a uniform distribution over the data manifold M. They encourage the uniform distribution by arguing that samples on the manifold M are equally accepted as real samples for training GANs. UniGAN restricts the generator to be Normalizing Flow (NF) based to perform effective and simple regularization in pursuit of a uniform generator. The authors also propose a new measure of performance of GANs and show the effectiveness of their methods on several benchmark datasets experimentally. Pros:\n\n1- The idea of the paper has been clearly explained.\n\n2- The idea is novel.\n\n3- The paper has solid theoretical results.\n\n4- A large set of experiments has been done.\n\nCons:\n\nOne of the main reasons GANs have been preferred to other generative models despite mode collapse is their sample quality. However, the current paper restricts the generator to an NF-based model, which restricts the flexibility of the generator and consequently lowers the quality of samples. This can also be seen by looking at the Inception score of the trained model on CIFAR: UniGAN achieves an IS of less than 3.9, whereas PacGAN achieves an IS of more than 6 (please refer to Self-Cond-GAN or PGMGAN) when used with a proper architecture.\n\nAlso, I am not convinced that a uniform distribution is necessarily better. For instance, consider toy data from a 1D mixture of normals. The support of this distribution is the entire R, and it is not ideal to have a uniform distribution over the entire space. Instead, one may prefer some samples to others. Please look at my above comments. Yes", " The paper proposes a simple yet effective way to mitigate the mode collapse issue of GANs. 
To this end, the authors introduce the generator uniformity property, which is utilized to regularize a flow-based generator. Lastly, a new form of diversity is introduced, labelled uniform diversity. Strength:\n\nAlthough I am not particularly experienced in this particular sub-area (i.e., mitigating mode-collapse in GANs), I appreciate the effort the authors put into the technical part of the motivation. The main idea is clear and well motivated. The authors also made sure to thoroughly evaluate their model against a number of baselines and metrics, which highlight the efficacy of the method.\n\nWeakness: \n\nAlthough I don't think the paper has any strong weakness, I would like to see how the approach performs on other mode-coverage benchmarks, e.g.,\n- Fashion-MNIST and partial MNIST of [1]\n- Stacked MNIST of [2]\n\nLastly, I would also appreciate a discussion on the ethical considerations of this work, especially since it touches the task of generative modelling of faces.\n\n[1] Rethinking Generative Mode Coverage: A Pointwise Guaranteed Approach\n\n[2] VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning I suggest the authors follow my suggestions regarding further evaluation as well as the inclusion of a discussion on ethical considerations. The authors have provided adequate analysis of the limitations of the work. However, I would like to see a discussion on the societal impact of the work.", " In this paper, the authors propose the maximization of the uniform diversity of the generative distribution as a means for avoiding mode collapse. The main idea relies on the diffeomorphism between the input uniform distribution and the samples that can be obtained using a normalizing flow-based generator. The key insight is that, because this property is preserved, one can maximize the uniform diversity. \n\nThe paper provides useful theoretical and empirical analysis results that validate their work Strengths:\n1) The use of uniform diversity as a measure to evaluate mode collapse in GANs is interesting. The ideas are also related to the Epsilon-Delta analysis of PacGAN\n2) Thorough evaluation of their work by considering various existing techniques to avoid mode collapse and showing that incorporating uniform diversity improves their performance\n3) Detailed theoretical analysis of their work with detailed results being provided in the supplementary material.\n\nWeakness:\n1) The main observation is that the existing uniform diversity can be achieved if one obtains the diffeomorphism between the distribution and the sample space. However, this exists only in cases such as normalizing flows. In general, normalizing flows have invertible properties but suffer from being inferior in terms of sample generation. It is not clear how exactly this limitation is addressed in this work.\n2) As observed, normalizing flows are limited in terms of the size of the latent space being exactly the same as the original sample space. The method adopted to solve this by zero-padding appears to be a heuristic. This needs to be better understood. It will be useful if the authors could comment on the weaknesses mentioned above. The negative societal impacts are not particularly applicable.\n\nThe limitations are to some extent addressed in the supplementary material. ", " This paper proposes a new type of generative diversity named uniform diversity, and thus introduces the corresponding $u$-mode collapse. And the authors propose a new framework called UniGAN to learn the uniform generator. 
This framework is formulated with a normalizing flow based generator and a uniformity regularization. Moreover, the authors also propose a new metric called udiv to estimate the uniform diversity. Strengths:\nThis paper proposes a new type of uniform diversity and analyzes its importance and behavior. And the proposed UniGAN shows great performance under the udiv metric.\n\nWeakness:\n1. The experiments are conducted at small image resolutions. And the uniform diversity on high-resolution images is not analyzed. The authors can combine the framework with some up-sampling layers to analyze the uniform diversity as the image resolution increases.\n\n2. The analysis of Fig. 5 can be repeated on other datasets to demonstrate the correctness of the conclusion on different datasets, like FFHQ, AFHQ, etc.\n\n3. Will the proposed uniformity regularization be suitable for dealing with other types of mode collapses? The relations with other types of mode collapses can be analyzed.\n\n Please refer to the weakness. More experimental results, especially visual analysis, can be provided." ]
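The latent-traversal probe that recurs in the rebuttals above can also be sketched compactly. This is a toy illustration under assumed names; the real generator is the paper's NF-based model, not the linear stand-in used here.

```python
import numpy as np

def traversal_step_sizes(generator, z0, direction, radius=1.0, steps=8):
    """Walk along one latent direction around z0 and measure how much the
    generated image changes at each step. Roughly constant step sizes suggest
    a locally uniform generator; near-zero steps suggest a collapsed region."""
    ts = np.linspace(-radius, radius, steps)
    imgs = np.stack([generator(z0 + t * direction) for t in ts])
    diffs = (imgs[1:] - imgs[:-1]).reshape(steps - 1, -1)
    return np.linalg.norm(diffs, axis=1)

# Toy check with a linear "generator": all steps are equal, i.e. perfectly uniform.
g = lambda z: np.outer(z, np.ones(4))
print(traversal_step_sizes(g, np.zeros(3), np.array([1.0, 0.0, 0.0])))
```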
[ -1, -1, -1, -1, -1, -1, 4, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 2, 3, 3 ]
[ "etzdautH8bl", "odMnfhFaBX4", "l-KnXYFu58c", "1nL94wTTmf6", "_hIcZnLecnT", "tNW-Gvk0YYR", "nips_2022_IPcgkUgw3t1", "nips_2022_IPcgkUgw3t1", "nips_2022_IPcgkUgw3t1", "nips_2022_IPcgkUgw3t1" ]
nips_2022_1WZyphXPLwC
Split-kl and PAC-Bayes-split-kl Inequalities for Ternary Random Variables
We present a new concentration of measure inequality for sums of independent bounded random variables, which we name a split-kl inequality. The inequality combines the combinatorial power of the kl inequality with the ability to exploit low variance. While for Bernoulli random variables the kl inequality is tighter than the Empirical Bernstein, for random variables taking values inside a bounded interval and having low variance the Empirical Bernstein inequality is tighter than the kl. The proposed split-kl inequality yields the best of both worlds. We discuss an application of the split-kl inequality to bounding excess losses. We also derive a PAC-Bayes-split-kl inequality and use a synthetic example and several UCI datasets to compare it with the PAC-Bayes-kl, PAC-Bayes Empirical Bernstein, PAC-Bayes Unexpected Bernstein, and PAC-Bayes Empirical Bennett inequalities.
Accept
This meta review is based on the reviews, the authors' rebuttal and the discussion with the reviewers, and ultimately my own judgement on the paper. There was a consensus that the paper contributes an interesting new concentration of measure inequality and derives a useful PAC-Bayes inequality. I feel this work deserves to be featured at NeurIPS and will attract interest from the community. I would like to personally invite the authors to carefully revise their manuscript to take into account the remarks and suggestions made by reviewers. Congratulations!
train
[ "PMVWj1D8O_t", "x24RGeTkp-", "sHWM1XeHyjm", "eDgKMH-eRB3", "CkGhOeHDRSw", "iibT8VcK1Z3", "3B-ve311uRZ", "YdibMKOq_Vf", "s6EPBuWhoI9", "_jF2YYdoMrp", "H49zjkyUXdk", "kP0NaxeyLF3", "AfiSdGZind", "16VjHPwOpBj" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The following points raised in the discussion are not within the main focus of the paper.\n\nContinuous distributions: while the split-kl can be applied to continuous distributions, it is not designed for them, just as the kl is not designed for them. If a continuous distribution happens to be close to ternary it will be tight, but it is possible to design distributions where for any choice of $\\mu$ the resulting $Z^+$ and $Z^-$ will be far from Bernoilli and the bound may potentially be loose. We note that ternary random variables have a broad range of applications and tight inequalities for them are interesting even if they are suboptimal for continuous distributions.\n\nComparison of the first and second order bounds for the weighted majority vote: a significant part of the difference between the first and second order bounds for the weighted majority vote comes from the difference between the first and second order *oracle* bounds. This has been studied in depth by Masegosa et al. (NeurIPS, 2020) and Wu et al. (NeurIPS, 2021). In their Figure 1, Wu et al. depict the regions, where the *oracle* first order bound is better than the *oracle* second order bound and where it is the other way around. A quote from their paper accompanying the figure: “The region below the black line, where $\\mathbb E_{\\rho^2} [L(h,h’)] < \\mathbb E_\\rho[L(h)]$, is the region of low correlation of errors. In this region the second order oracle bounds are tighter than the first order oracle bound.” [and in the region above the black line the first order oracle bound is tighter]. Therefore, there will always be situations where the first order empirical bound is tighter and situations where the second order empirical bound is tighter, no matter how tight is the PAC-Bayes bound for the oracle quantities. Nevertheless, it was shown that minimization of the first order bounds almost always increases the test error, whereas minimization of the second order bounds does not lead to this effect. Therefore, the state-of-the-art approach to analysing the weighted majority vote are the second order bounds, even if they are not always the tightest. Since the contribution of our work is not at the oracle level, it makes little sense to compare to the first order bound, since most of the difference will come from the oracle level. \n\nThe natural baselines in our case are the tandem bound (TND), which uses PAC-Bayes-kl with a relaxed version of Chebyshev-Cantelli, the CCTND bound, which uses Chebyshev-Cantelli with PAC-Bayes-kl, and the CCPBB bound, which uses Chebyshev-Cantelli with PAC-Bayes-Empirical-Bennett. We introduced the CCPBUB bound, which uses Chebychev-Cantelli with PAC-Bayes-Unexpected-Bernstein and CCPBSkl, which uses Chebyshev-Cantelli with PAC-Bayes-split-kl. \n\nThe two new bounds, CCPBUB and CCPBSkl, are based on the same form of the second order oracle bound as CCPBB, $L(MV_\\rho)\\leq \\frac{\\mathbb E_{\\rho^2}[L_\\alpha(h,h’)]}{(0.5-\\alpha)^2}$. CCPBUB and CCPBSkl consistently outperform CCPBB, and are comparable among each other. To repeat the main point of our paper: the PAC-Bayes-split-kl is guaranteed to be tight for ternary random variables, in the sense that it will never be much worse than any alternative bound. And this claim is supported by the experiments. No prior bound for ternary random variables has this guarantee, and indeed the CCPBB is considerably weaker than CCPBSkl in some cases. 
We observed no cases where CCPBUB would be considerably weaker than CCPBSkl, and it is not trivial to construct them artificially, but we cannot exclude their existence.\n\nRegarding the comparison of our new bounds, CCPBUB and CCPBSkl, with TND and CCTND: the TND and CCTND are based on alternative ways of writing the second order oracle bound. The TND is based on a relaxation of Chebyshev-Cantelli, $L(MV_\rho)\leq 4\mathbb E_{\rho^2}[L(h,h')]$, and CCTND is based on a different way of writing the Chebyshev-Cantelli, $L(MV_\rho)\leq \frac{\mathbb E_{\rho^2}[L(h,h')] - 2 \alpha \mathbb E_\rho[L(h)] + \alpha^2}{(0.5-\alpha)^2}$. Both forms involve only binary losses. Therefore, application of PAC-Bayes-kl in their context is tight. Our contribution demonstrates that the alternative way of writing the second order bound proposed by Wu et al., $L(MV_\rho)\leq \frac{\mathbb E_{\rho^2}[L_\alpha(h,h')]}{(0.5-\alpha)^2}$, leads to comparably tight empirical bounds and that the weakness of CCPBB was caused by the weakness of PAC-Bayes-Empirical-Bennett and not by a weakness of this form of the oracle bound. It also shows that we cannot hope to get much tighter bounds out of this form of the second order oracle bound, because we know that PAC-Bayes-split-kl is tight. Note that we could not make this claim when using either PAC-Bayes-Empirical-Bennett or PAC-Bayes-Unexpected-Bernstein. So this is an important contribution.", " Dear Reviewers, dear Area Chair,\n\nThank you for the engaging discussion.\n\nWe feel that several discussion threads have focused on questions that depart from the main focus of the paper. Therefore, we would like to take the opportunity to reiterate the main focus of the paper and delimit it from the sideline discussions.\n\nThere is the kl inequality, which is tight for Bernoulli, and there are the Empirical Bernstein and Unexpected Bernstein, which can exploit small variance. And there are the ternary random variables, which appear, in particular, with the introduction of excess loss in classification and in second order bounds for the weighted majority vote. All prior inequalities are potentially loose for ternary random variables (as we demonstrate in Figure 1), and for each of the inequalities there exist distributions for which they are highly suboptimal. The question of finding a better inequality for such variables has been open for about a decade and was explicitly mentioned in Tolstikhin and Seldin (NeurIPS, 2013) and Mhammedi et al. (NeurIPS, 2019).\n\nThe proposed split-kl inequality with $\mu$ set to the middle value is tight for ternary random variables (in the same sense as the kl is tight for Bernoulli), because $Z = \mu + Z^+ - Z^-$ and $Z^+$ and $Z^-$ are Bernoulli, so the kl inequalities for them are tight. For some distributions other inequalities have minor advantages, for example, if the ternary random variable happens to have a distribution close to a Bernoulli, then the kl will be slightly tighter, but the split-kl never falls far behind any alternative. This is not true for prior inequalities, because for each of them there is a distribution for which it falls far behind (see Figure 1). 
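The identity $Z = \mu + Z^+ - Z^-$ used above has a direct positive/negative-part realization; the following is a minimal sketch (the function name is illustrative, and this realization is an assumption consistent with the stated identity, not code from the paper).

```python
import numpy as np

def split(z, mu):
    """Split Z = mu + Z+ - Z- around the centering constant mu. For a ternary
    variable with middle value mu, both parts take only two values each,
    i.e. they are (scaled) Bernoulli, which is what makes the kl bounds tight."""
    z_plus = np.maximum(z - mu, 0.0)
    z_minus = np.maximum(mu - z, 0.0)
    assert np.allclose(z, mu + z_plus - z_minus)
    return z_plus, z_minus

# Ternary example with values {-1, 0, 1} and mu = 0 (the middle value):
z = np.random.default_rng(0).choice([-1.0, 0.0, 1.0], size=10)
z_plus, z_minus = split(z, mu=0.0)
print(z_plus, z_minus)  # both take values in {0, 1}
```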
While *in hindsight* the solution is admittedly simple, the question has been open for about a decade and we have discussed it in depth with several leading PAC-Bayes researchers and no one told us “oh, yeah, just split the variable and you will get it”.\n\nWhen we go to the PAC-Bayes level, there are several additional effects coming into play from the PAC-Bayes level. For example, the PAC-Bayes-split-kl has the $\\sqrt{n}$ term under the logarithm (Foong et al. (NeurIPS, 2021) have an open question on whether it can be reduced), whereas PAC-Bayes-Unexpected-Bernstein has a union bound over the grid of $\\gamma$, so sometimes the comparison goes one way, and sometimes another, but as before we can be sure that irrespective of the distribution we will never fall far behind any alternative, which is not true for prior work. And we also get some computational advantage over Empirical Bernstein and Unexpected Bernstein, because we have no parameter grids.", " We thank the reviewer for the comments.\n\nThe split-kl inequality was designed for ternary random variables, which naturally appear in multi-class classification with introduction of excess losses or in second order bounds for the weighted majority vote. And for ternary random variables we have the optimal choice of $\\mu$ - it is the middle value (we explain further below). While split-kl can be applied to continuous random variables, this is not a natural application domain, nor a domain we focus on in the paper. So we do not see this concern as a weakness of the paper.\n\nWe think it would be helpful to make a parallel with the kl inequality. The kl inequality is the ultimate choice for Bernoulli random variables in the sense that irrespective of the Bernoulli distribution it provides the tightest or almost the tightest concentration guarantees (well, the binomial bound is slightly tighter, but it does not combine with PAC-Bayes). Therefore, the kl inequality is a very powerful tool for classification problems. But it is not the ultimate choice for regression: depending on a distribution, in some cases it may be considerably tighter than alternative inequalities for continuous random variables, whereas in other cases it may potentially be much looser.\n\nSimilarly, the split-kl inequality is the ultimate choice for ternary random variables. When we take $\\mu$ to be the middle value, the kl inequalities for $Z^+$ and $Z^-$ are the tightest and we get the most out of the bound. As we demonstrate in our experiments, irrespective of the ternary distribution the split-kl is never much weaker than any alternative, and in some regimes it is much tighter. \n\nBut, as the kl is not the ultimate choice for continuous random variables, the split-kl is not the ultimate choice for them either. It is somewhat better than the kl, because it allows to exploit part of the variance, but depending on whether after the split $Z^+$ and $Z^-$ are close to Bernoulli or not, it may be tighter than other alternatives or not. And it is possible to construct distributions, where for any choice of $\\mu$ the resulting $Z^+$ and $Z^-$ are far from Bernoulli. We can add examples to the paper.\n\nFinally, we note that the natural application domain for split-kl is more or less the same as for kl, so we do not see the fact that it is not a natural choice for continuous random variables as a limitation of our work.\n", " We thank the reviewer for the reply. 
We would like to emphasize that the contributions of our work are not confined to numerical improvements in empirical experiments. They also include the following points:\n\n1. Design of an inequality that would simultaneously match the tightness of kl and Empirical Bernstein is a decade-old open problem, going back to the work of Maurer and Pontil (2009), who proposed the Empirical Bernstein inequality, but did not compare it to kl, on to the work of Tolstikhin and Seldin (2013), who compared PAC-Bayes-kl with PAC-Bayes-Empirical-Bernstein and showed that in some regimes one is better and in other regimes the other is better, and on to the works of Mhammedi et al. (2019), who proposed PAC-Bayes-Unexpected-Bernstein and also observed that it is sometimes tighter and sometimes weaker than PAC-Bayes-kl, and Wu et al. (2021), who proposed PAC-Bayes-Empirical-Bennett, which improved on PAC-Bayes-Empirical-Bernstein, but still had a mixed comparison with PAC-Bayes-kl. The latter three papers were published in NeurIPS. We propose the split-kl and the PAC-Bayes-split-kl inequalities and show that the base split-kl inequality is always competitive with all the alternatives, irrespective of the distribution. When it comes to PAC-Bayes and optimization of PAC-Bayes bounds, there are additional effects coming from the PAC-Bayes level, including excess losses and informed prior constructions, so the comparison is not as clear-cut as at the base level and it is challenging to give a complete guide on when it will provide the best performance and when not, because there are multiple factors involved. But we know for sure that it comes from the PAC-Bayes level, unlike prior work, where already at the base level there was no clear winner. We do not think that the hindsight simplicity of our solution can be held against us.\n\n2. We also provide an empirical comparison of the Empirical Bernstein and Unexpected Bernstein inequalities and their PAC-Bayes extensions. The two inequalities have never been compared before.\n\n\n[1] Andreas Maurer and Massimiliano Pontil. Empirical Bernstein bounds and sample variance penalization. In Proceedings of the Conference on Learning Theory (COLT), 2009.\n\n[2] Ilya Tolstikhin and Yevgeny Seldin. PAC-Bayes-Empirical-Bernstein inequality. In Advances in Neural Information Processing Systems (NeurIPS), 2013.\n\n[3] Zakaria Mhammedi, Peter Grünwald, and Benjamin Guedj. PAC-Bayes un-expected Bernstein inequality. In Advances in Neural Information Processing Systems (NeurIPS), 2019.\n\n[4] Yi-Shan Wu, Andres Masegosa, Stephan Lorenzen, Christian Igel, and Yevgeny Seldin. Chebyshev-Cantelli PAC-Bayes-Bennett inequality for the weighted majority vote. In Advances in Neural Information Processing Systems (NeurIPS), 2021.\n\n", " Thank you for answering the questions and clarifying the literature on informed priors. My remaining concern is the significance of the proposed approach in the PAC-Bayes context mentioned in the Strengths And Weaknesses section. This might be due to my lack of expert knowledge of the domain, but in summary I am still not fully sure how to interpret the significance of the improvement in PAC-Bayes bound values demonstrated in the experiment. My score has therefore not been changed at the moment.\n\nAs Paper2345 pointed out, the substance of the new methodological idea is a simple decomposition of a random variable into positive and negative parts around some centering constant. 
A simple approach could make a broader impact if proven innovative, but I would assume that it needs to be proven strongly effective given that NeurIPS is a top-tier conference.\n\nIn the first experiment, the proposed bound seems to outperform existing bounds on many datasets. However, I was not fully sure how we should interpret the level of improvement: for example, with the Haberman dataset the proposed bound seems a few % smaller than others; how difficult/significant would it be to achieve this improvement? How should we take or interpret this percentage of improvement in this linear classifier case? In the second experiment, it was mentioned that the proposed approach was “competitive” with the kl and Unexpected Bernstein inequalities and outperformed both in “certain regimes”. I was again not fully sure how to interpret the significance of this improvement there, although I also understand this may potentially be due to my lack of expert knowledge. My whole point is how we should justify and support the empirical significance of the proposed approach from these results.\n\nIn Section 2, the authors clarified a situation where the split-kl concentration inequality works. I would personally like to see a similarly detailed explanation and demonstration of the situations in which the split-kl PAC-Bayes bound works effectively and those in which it does not. A proposal does not have to work in all situations because there is no free lunch, and therefore I would like to see a clear demonstration of a situation where we would clearly want to use the split-kl PAC-Bayes bound. I assume that such elaboration would help justify the significance/strength of the proposed approach.\n
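To make the question about regimes concrete, below is a minimal, self-contained sketch of how an empirical split-kl upper bound can be computed. This is an illustration rather than the authors' code: it assumes the standard $\\ln(2\\sqrt{n}/\\delta)/n$ kl budget with a union bound over the two halves, so the exact constants in the paper may differ.

```python
import math

def kl_div(p, q):
    # Bernoulli KL divergence kl(p || q), with the 0 * log 0 = 0 convention.
    eps = 1e-12
    q = min(max(q, eps), 1.0 - eps)
    val = 0.0
    if p > 0.0:
        val += p * math.log(p / q)
    if p < 1.0:
        val += (1.0 - p) * math.log((1.0 - p) / (1.0 - q))
    return val

def kl_inv_upper(p_hat, budget):
    # Largest q >= p_hat with kl(p_hat || q) <= budget, found by bisection.
    lo, hi = p_hat, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if kl_div(p_hat, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

def kl_inv_lower(p_hat, budget):
    # Smallest q <= p_hat with kl(p_hat || q) <= budget, by kl symmetry.
    return 1.0 - kl_inv_upper(1.0 - p_hat, budget)

def split_kl_upper(sample, mu, delta):
    # Upper bound on E[Z] for Z in [mu - 1, mu + 1] (e.g. ternary {mu-1, mu, mu+1}):
    # an upper kl inversion for Z^+ and a lower kl inversion for Z^-,
    # each at confidence delta / 2 (union bound over the two halves).
    n = len(sample)
    budget = math.log(2.0 * math.sqrt(n) / (delta / 2.0)) / n
    p_plus = sum(max(0.0, z - mu) for z in sample) / n   # empirical mean of Z^+
    p_minus = sum(max(0.0, mu - z) for z in sample) / n  # empirical mean of Z^-
    return mu + kl_inv_upper(p_plus, budget) - kl_inv_lower(p_minus, budget)

# A ternary sample concentrated on the middle value, the regime where
# split-kl is claimed to have the largest advantage over the plain kl bound:
sample = [0.0] * 900 + [1.0] * 60 + [-1.0] * 40
print(split_kl_upper(sample, mu=0.0, delta=0.05))
```

Varying the mass on the middle value in the toy sample (versus pushing it to the endpoints, where the variable becomes Bernoulli-like) makes the regime dependence discussed above directly visible.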
", " Thank you for your detailed response.\n\n- My main concern remains the choice of $\\mu$. As you said, choosing a good $\\mu$ could be really difficult for the general case of bounded random variable in $[a, b]$. Taking a grid of values of $\\mu$ and taking a union bound is an interesting idea but I believe this will have adverse effect on computation time. \n- To show the dependency of the bound on choice of $\\mu$, you can re-run your experiments with different choices of $\\mu$. For example: you can choose $\\mu$ to be in $\\\\{ -0.75, -0.5, -0.25, 0.25, 0.5, 0.75 \\\\}$ in Figure 1 and run the experiments again. This would, at least empirically, show how good the bounds are if we have no prior information to choose $\\mu$. You can also add the bound you get by choosing a grid of values of $\\mu$ and taking the union bound.\n- Yes, Figure 4 and 5 are interesting. As $n$ increases, all the bounds get better in general (and Emprical Bernstein in particular). You could perhaps fix the value of $p_0 \\in \\\\{ 0.1, 0.5, 0.9 \\\\}$ and rerun the experiments for $n \\in \\\\{ 100, 1000, 5000, 10000 \\\\}$. \n(A related comment: I believe there is a typo in the caption of Figure 7 (b)) ", " We thank the reviewer for their time and feedback.\n\n“Experimental results”\n\n“... improvement as optimisation objective …”\n\nWe note that PAC-Bayes-split-kl provides a computational advantage over PAC-Bayes-Unexpected-Bernstein, because the latter uses a grid of parameters $\\gamma$, whereas the former has no parameters. Thus, the computation time is lowered by a multiplicative factor proportional to the size of the grid, in our experiments roughly 3-10, depending on the dataset.\n\n“Question 1. Is the bound stated in Theorem 3 equivalent to the different form given by [1]? It would be nice to show this.”\n\nMhammedi et al. [1] presented the Unexpected Bernstein Lemma, which is a bound on the moment generating function (Lemma 10 in Appendix A in our work and Lemma 13 in [1]). Theorem 3 is a concentration of measure inequality, which follows from the Unexpected Bernstein Lemma by a standard proof technique (see the proof of Theorem 3).\n\n“Question 2. In the experiments in section 4.2, it seems all of the bounds are based on the Cantelli-Chebyshev relaxation (with the tandem bound being ). Why have you not also compared to other bounds for the weighted majority vote, in particular the first order bound with the small-kl, which is often the tightest?”\n\nIt has been shown in prior work that minimization of the first order bound deteriorates the test error of a majority vote, because it ignores correlation of errors and overconcentrates the posterior mass on the best performing classifiers [2,3]. Therefore, even though the first order bound may be tighter than the second order bound in some cases, it is not the right tool for analysing the majority vote. Masegosa et al. [3] provided an extensive comparison of the first and second order bounds and we felt that repeating it here would overload the readers, but we could add it, if the reviewers find necessary.\n\n[1] Zakaria Mhammedi, Peter Grünwald, and Benjamin Guedj. PAC-Bayes un-expected Bernstein inequality.\n\n[2] Andrés R. Masegosa, Stephan S. Lorenzen, Christian Igel, and Yevgeny Seldin. Second order PAC-Bayesian bounds for the weighted majority vote.\n\n[3] Stephan S. Lorenzen, Christian Igel, and Yevgeny Seldin. 
On PAC-Bayesian bounds for random forests.\n", " We thank the reviewer for their time and feedback and for the suggestion to add synthetic experiments in the PAC-Bayes setup.\n", " We thank the reviewer for their time and feedback.\n\n“I was also not familiar with how commonly or frequently \"informed prior\" is used in the PAC-Bayes domain. It may be helpful to more strongly justify that \"informed prior\" is a reasonable approach to use in practice with additional references.”\n\nInformed priors were used in [1,2,3,4,5,6]. The goal of “informed priors” is to reduce the KL divergence between the posterior and the prior, which otherwise frequently dominates the PAC-Bayes bounds. This comes at the price of using some of the data to learn the “informed prior”, thus reducing the number of samples $n$ used for computing the bound (the remaining $n$ samples that were not used for learning the prior). Whether this price is worth paying or not depends on the data, and there are examples in both directions, as shown in the references above.\n\n“How crucial is \"informed prior\" to produce meaningful generalisation bounds in this context? — Would it be possible to see how much of the bound improvement is due to \"informed prior\"?”\n\nWe note that “excess losses” are formed by training a reference prediction rule $h^*$ on part of the data and then computing the “excess losses” relative to the reference prediction rule. The data used for training the reference prediction rule cannot be used for computing the bound, but it can be used for constructing an “informed prior”, so it is not that “informed priors” are “crucial”, but since there is anyway data that can be used to construct them at no extra cost, it makes a lot of sense to use it.\n\n“The models dealt with in the experiments in this paper seem relatively simple. I understand that proving generalisation bounds for complex models is challenging, but I was personally interested in seeing if the generalisation bound still works for more complex models, e.g. LeNet for MNIST.”\n\nMany previous works [1,2,3,4] studied generalization using PAC-Bayes bounds with data-dependent priors under relatively simple models. [5,6] studied the generalization of neural networks trained by specific algorithms with a different family of data-dependent priors. Therefore, we see the potential of applying it to more complex models, which we leave for future work.\n\n[1] Amiran Ambroladze, Emilio Parrado-Hernández, and John Shawe-Taylor. Tighter PAC-Bayes bounds. In Advances in Neural Information Processing Systems (NeurIPS), 2007.\n\n[2] Emilio Parrado-Hernández, Amiran Ambroladze, John Shawe-Taylor, and Shiliang Sun. PAC-Bayes bounds with data dependent priors. Journal of Machine Learning Research, 13, 2012.\n\n[3] Omar Rivasplata, Emilio Parrado-Hernandez, John Shawe-Taylor, Shiliang Sun, and Csaba Szepesvari. PAC-Bayes bounds for stable algorithms with instance-dependent priors. In Advances in Neural Information Processing Systems (NeurIPS), 2018. \n\n[4] Zakaria Mhammedi, Peter Grünwald, and Benjamin Guedj. PAC-Bayes un-expected Bernstein inequality. In Advances in Neural Information Processing Systems (NeurIPS), 2019.\n\n[5] Gintare Karolina Dziugaite and Daniel M. Roy. Data-dependent PAC-Bayes priors via differential privacy. In Advances in Neural Information Processing Systems (NeurIPS), 2018. \n\n[6] Gintare Karolina Dziugaite, Kyle Hsu, Waseem Gharbieh, Gabriel Arpino, and Daniel M. Roy. On the role of data in PAC-Bayes bounds. 
In International Conference on Artificial Intelligence and Statistics (AISTATS), 2021. \n", "The authors present a new concentration of measure inequality for the sum of independent bounded random variables, namely the split-kl inequality. They derive this new inequality by combining kl-inequalities (1 and 2) in a clever way. They provide an empirical comparison of this new inequality with the existing concentration inequalities such as the kl inequality, the Empirical Bernstein inequality and the Unexpected Bernstein inequality. 
They show that their new inequality is tighter than all of these inequalities in some regimes.\n\nThey further extend their contribution to the PAC-Bayes setting and derive the PAC-Bayes-split-kl inequality. Again, they empirically (on synthetic and real-world data) identify regimes where their inequality performs better than other existing inequalities such as PAC-Bayes-kl, PAC-Bayes-Empirical-Bernstein, PAC-Bayes-Unexpected-Bernstein, and PAC-Bayes-Empirical-Bennett. Strengths:\nThe paper is easy to follow and the claims stem from logical arguments. The experiments are extensive and support the claims made by the authors. Theoretically, the idea is simple but, interestingly, it leads to good empirical results.\n\nWeaknesses:\nIt is difficult to understand how this new inequality is fundamentally different from the kl inequality. Without a careful choice of $\\mu$, I am not sure if this new inequality would always be tighter than the kl inequality in all the regimes. My observation comes from the following argument: consider $Z \\in [a, b]$. Take $\\mu = a$, then $Z^+ = Z-a$ and $Z^- = 0$. Similarly, take $\\mu = b$, then $Z^+ = 0$ and $Z^- = b - Z$. In both these cases, we are just translating $Z$, and both the kl inequality and the split-kl inequality should behave similarly for these choices of $\\mu$. Of course, there might be a clever choice of $\\mu$ which makes one perform better than the other, but I am not sure how to make that choice. \n - Can you add some experiments to show the dependency of the bound on the choice of $\\mu$?\n- It would also be helpful to discuss the tightness of various bounds as we increase $n$.\n The limitations are discussed adequately.", "The authors introduced a new approach to a concentration inequality for random variables over a bounded interval called the \"split kl inequality\", which first decomposes the original random variable into three terms and then applies an existing bound, the \"kl inequality\", to the decomposed terms. Then the authors proposed to use the split kl inequality for PAC-Bayes bounds on the generalisation error of learning algorithms, as well as to combine it with the existing approaches of excess loss and informed prior. The derived PAC-Bayes generalisation error bounds were compared and examined in a few different experiments. The reviewer is personally very much fond of the authors' writing in this paper, which explains important matters of this work / other existing works in an intuitive and comprehensive manner. For example, the motivation of this work is nicely lined up with a proper technical level to wide audiences in the introduction. In addition, the advantage of the split kl inequality has been made clear in Figure 1. Comprehensive presentation and simplicity of the idea is a clear strength of this work. My main concern is the significance / impact when we combine this idea with PAC-Bayes bounds. The derived new generalisation bounds in Figures 2, 3 seemed similar to the other existing bounds at first glance, or it was unclear how to interpret the improvement level. For the first experiment, for example, since the authors combined their idea of the split kl inequality with the existing approach of \"informed priors\", some might get an impression from these figures that the \"informed prior\" part has already done the majority of the work to lower each bound, and they may wonder how critical the improvement by the split kl part is.\n - How crucial is \"informed prior\" to produce meaningful generalisation bounds in this context? 
— Would it be possible to see how much of the bound improvement is due to \"informed prior\"? \n- I was also not familiar with how commonly or frequently \"informed prior\" is used in the PAC-Bayes domain. It may be helpful to more strongly justify that \"informed prior\" is a reasonable approach to use in practice with additional references.\n- It would be visually helpful to make clear which bound is the proposed one in Figures 2 and 3, e.g. by adding \"(Ours)\" or something to the name label of the proposed one.\n- The models dealt with in the experiments in this paper seem relatively simple. I understand that proving generalisation bounds for complex models is challenging, but I was personally interested in seeing if the generalisation bound still works for more complex models, e.g. LeNet for MNIST. There would be no concern about potential negative societal impact. To me personally, the current limitation is that it is difficult to interpret from the experiments or equations whether the proposed PAC-Bayes-split-kl inequalities have improved the generalisation bounds to a fair degree or not. For example, would the difference in the numbers in the figures be significant in the context of PAC-Bayes? The reviewer's position on this paper is neutral and the reviewer is happy to increase the score if the technical or practical impact is well justified.\n", "The paper introduces a new concentration inequality for the sum of iid bounded random variables. \nThe paper uses a technique of splitting the samples with a threshold and then using a kl-inequality on each part. This splitting allows using both the lower- and upper-bound kl-inequalities. \nThe resulting bound enjoys both the tightness of the kl-inequality and the ability to exploit the lower variance of r.v.s that take values within a segment.\nThe empirical comparison clearly shows the tightness of the new split-kl bound in different regimes, compared to the Empirical Bernstein and the standard kl inequalities.\n\nThe paper then derives the PAC-Bayes-split-kl inequality\nand applies it to the excess loss of a binary classification problem.\nThe new bound exploits the lowered variance of the excess losses compared to the binary losses, and therefore the overall split-kl-PB bound can be competitive with the standard kl-PB bound, as demonstrated on synthetic and real-world data.\n \n### Strengths\n1. I believe the work is original and well-motivated. \n2. The use of the splitting technique is clever and novel, as far as I know. \n3. The paper is well-written and clear.\n4. The authors provide an adequate survey of related work.\n5. The empirical evaluation of the split-kl inequality clearly shows its merits.\n\n### Weaknesses\n1. The empirical evaluation of the split-kl-PAC-Bayes bound does not seem to give definitive conclusions, besides the looseness of PAC-Bayes-Empirical-Bennett on certain datasets. I suggest adding more controlled synthetic experiments, as was done in Fig. 1 for the concentration bounds, since they can give good intuition about when certain bounds are preferable. \n No additional questions. No additional limitations.", "The authors address the question of providing PAC-Bayes bounds for losses when the (empirical) variance is low, as previously addressed by e.g. [1, 2].\n\nA special case of this is finding bounds for ternary losses in {-1,0,1}, which arises in two important ways:\n1. bounds on the excess misclassification loss, which can also be used as per [1] to tighten PAC-Bayes bounds on the non-excess loss\n2. 
in conjunction with the Cantelli-Chebyshev relaxation given by [3] to provide bounds on the (non-randomized) weighted majority vote via PAC-Bayes.\n\nFor losses in {0, 1} the small-kl PAC-Bayes bound [e.g. 4] is usually the tightest, even when the variance is low, but not for losses in [-1, 1] (after rescaling the bound). In order to leverage this, the authors translate each random variable in the sum before decomposing it into positive and negative parts,\n$$Z_i = \\mu + Z_i^+ - Z_i^- = \\mu + \\max(0, Z_i-\\mu) - \\max(0, -Z_i+\\mu),$$\nbefore applying the small-kl bound to the sums of $Z_i^+$ and $Z_i^-$ separately (which are both {0, 1} valued in the ternary untranslated case). This is called the *split-kl* (PAC-Bayes) bound.\n\nThis is used to prove new concentration and PAC-Bayes bounds. These are further combined with the excess risk and informed prior ideas from [1], or the Cantelli-Chebyshev relaxation from [3], and evaluated in experimental setups taken from the above.\n\n\n\n-----\n\n[1] Zakaria Mhammedi, Peter Grünwald, and Benjamin Guedj. PAC-Bayes un-expected Bernstein inequality.\n\n[2] Ilya Tolstikhin and Yevgeny Seldin. PAC-Bayes-Empirical-Bernstein inequality.\n\n[3] Yi-Shan Wu, Andres Masegosa, Stephan Lorenzen, Christian Igel, and Yevgeny Seldin. Chebyshev-Cantelli PAC-Bayes-Bennett inequality for the weighted majority vote.\n\n[4] John Langford. Tutorial on practical prediction theory for classification.\n\n\n----\n\nUPDATE:\n\nOverall I am not satisfied with the quite limited evaluation of this bound, which does not show clear improvements over previous results. This weakens the motivation for the paper too, because of the limited number of new technical ideas.\n\nTherefore I find myself much more on the borderline than my original review and I do agree with some of the criticisms of reviewer nL9t. However, given that related work has previously appeared at NeurIPS with similarly negligible empirical improvements, I will keep my \"weak accept\" score. ### Strengths\n\n**Clarity and motivation**: the paper is very well written and was a pleasure to read. The relationships to previous works [1, 2] were very well explained and the incorporation of ideas from [1] was well motivated. The alternative form of the main result from [1] is an improvement in clarity over how it is stated therein, and the situation of this work within its wider context was reasonably clear. My only minor criticism is that the experiments in section 4.2 do not sufficiently explain the use of the Chebyshev-Cantelli bound and majority votes as used there. This is a shame as I think the use of the split-kl bound for majority votes is a good use case.\n \n**Relevance**: I think that the paper makes a contribution to an important and highly-active area of machine learning, improving PAC-Bayes bounds, which are among the most useful in contemporary learning theory. They bring some ideas from [1] to a wider application, which is a valuable contribution.\n\n\n### Weaknesses\n\n**Technical contribution and originality**: here I think the paper falls down a bit. The main technical result is simply a decomposition of a random variable into positive and negative parts, combined with an application of the small-kl PAC-Bayes inequality. This is combined with the excess loss idea from [1] and the experimental setup therein, or the Cantelli-Chebyshev bound from [3] and their experimental setup, all of which is straightforward. 
Such simple ideas can be very valuable when they lead to breakthroughs, but that does not seem to be the case here, and most of the ideas used in the paper and discussed at length were originated by [1].\n\n**Experimental results**: in the more important PAC-Bayes setting the new results are quite weak, with the new bound giving very similar results to those of [1]. The bound is not shown to be an improvement as an optimization objective either. The simpler concentration inequality setting is not particularly interesting except as a motivation, and for the ternary r.v.s used an even better bound would be obtained by applying the test set bound (Th. 8) to the decomposition $Z = Z^+ - Z^-$ (i.e. a \"split-Binomial\" bound). 1. Is the bound stated in Theorem 3 equivalent to the different form given by [1]? It would be nice to show this.\n2. In the experiments in section 4.2, it seems all of the bounds are based on the Cantelli-Chebyshev relaxation (with the tandem bound being $\\alpha = 0$). Why have you not also compared to other bounds for the weighted majority vote, in particular the first order bound $L(MV) \\le 2 L(\\rho)$ with the small-kl, which is often the tightest? N/A the results are primarily of a theoretical nature." ]
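Schematically, the bound that results from the split discussed above inverts the kl upper bound for $Z^{+}$ and the kl lower bound for $Z^{-}$,

$$\\mathbb{E}[Z] \\le \\mu + \\mathrm{kl}^{-1,+}\\big(\\hat{p}^{+}, \\varepsilon\\big) - \\mathrm{kl}^{-1,-}\\big(\\hat{p}^{-}, \\varepsilon\\big), \\qquad \\varepsilon = \\frac{\\ln(4\\sqrt{n}/\\delta)}{n},$$

where $\\hat{p}^{\\pm}$ are the empirical means of $Z^{\\pm}$ (written here for the ternary case with unit half-ranges; the paper pins down the exact constants).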
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "x24RGeTkp-", "nips_2022_1WZyphXPLwC", "iibT8VcK1Z3", "CkGhOeHDRSw", "s6EPBuWhoI9", "_jF2YYdoMrp", "16VjHPwOpBj", "AfiSdGZind", "kP0NaxeyLF3", "H49zjkyUXdk", "nips_2022_1WZyphXPLwC", "nips_2022_1WZyphXPLwC", "nips_2022_1WZyphXPLwC", "nips_2022_1WZyphXPLwC" ]
nips_2022_CTqkruS5Bb
Unsupervised Object Detection Pretraining with Joint Object Priors Generation and Detector Learning
Unsupervised pretraining methods for object detection aim to learn object discrimination and localization ability from large amounts of images. Typically, recent works design pretext tasks that supervise the detector to predict the defined object priors. They normally leverage heuristic methods to produce object priors, \emph{e.g.,} selective search, which separates the prior generation and detector learning and leads to sub-optimal solutions. In this work, we propose a novel object detection pretraining framework that could generate object priors and learn detectors jointly by generating accurate object priors from the model itself. Specifically, region priors are extracted by attention maps from the encoder, which highlights foregrounds. Instance priors are the selected high-quality output bounding boxes of the detection decoder. By assuming objects as instances in the foreground, we can generate object priors with both region and instance priors. Moreover, our object priors are jointly refined along with the detector optimization. With better object priors as supervision, the model could achieve better detection capability, which in turn promotes the object priors generation. Our method improves the competitive approaches by \textbf{+1.3 AP}, \textbf{+1.7 AP} in 1\% and 10\% COCO low-data regimes object detection.
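As a rough illustration of the interplay the abstract describes between the attention map (region priors) and the decoder's boxes (instance priors), here is a small, self-contained sketch; the helper names, the mean-attention score, and the 0.5 threshold are placeholders for exposition, not the authors' exact criterion:

```python
import numpy as np

def foreground_score(attn, box):
    """Mean attention inside an axis-aligned box (x0, y0, x1, y1) on a 2D
    attention map with values in [0, 1]; higher means more foreground mass."""
    x0, y0, x1, y1 = box
    region = attn[y0:y1, x0:x1]
    return float(region.mean()) if region.size else 0.0

def select_instance_priors(attn, pred_boxes, tau=0.5):
    """Keep decoder-predicted boxes that land on highlighted foreground
    regions of the attention map; these act as instance priors that
    complement the region priors derived from the map itself."""
    return [b for b in pred_boxes if foreground_score(attn, b) > tau]

# Toy example: a synthetic attention map with one bright foreground blob.
attn = np.zeros((64, 64))
attn[20:40, 20:40] = 1.0                     # pretend the encoder highlights this area
preds = [(22, 22, 38, 38), (50, 5, 60, 15)]  # one box on the blob, one on background
print(select_instance_priors(attn, preds))   # -> [(22, 22, 38, 38)]
```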
Accept
The paper received mixed reviews. Three reviewers rated it borderline accept and one reviewer rated it borderline reject. The authors provided detailed responses to the raised concerns/questions and supported their responses with additional ablation studies and experimental results on a new dataset (e.g., VOC). For reviewer fpzy (who gave borderline reject), the requested additional analyses have been provided by the authors. The major remaining issue is "The improvement over DETReg is somewhat limited". The results presented in the paper did show consistent improvement over DETReg on three settings, with at least 1 mAP improvement. After reading the reviews and the responses, while there is no enthusiastic support from the reviewers, the AC does not find sufficient grounds to reject the paper. This paper introduces new ideas for unsupervised object detection pretraining and shows consistent improvement over the baselines across three evaluation settings. The AC believes that this work would benefit the community and thus recommends acceptance.
train
[ "YL_om5K4GnC", "WHWDYmoFg1U", "Xs_9cuktjD4", "BzZgece8050", "IeB9talZ2RX", "640H0jAukClf", "0I_SeBO-yhd", "vQMD4yoe7oM3", "UBctz5KNh1Py", "j156RgO_Pi7", "sUZh5uwjY3z", "FrISZOINau", "91chIjSkys8", "fSYYf8bI2pK", "GAvIsoJW7oA", "j5e9zzbwX2t" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your time and efforts in reviewing our paper!\n\nWe kindly remind you that the discussion period will end in half a day, and thus we just wonder whether we could have the last chance to address your further concerns or questions (if you have any). We are sincerely glad to improve our paper under your suggestions!\n\nBest, \n\nAuthors of Paper 2341", " >Q: For A1, what is the performance of [6] on the low-data regime COCO fine-tuning tasks? \n\nA: JoinDet still achieves **considerable performance gains** compared with SoCo[6]. Specifically, we download the pretrained model from the official repo, and then finetune the whole detection model with the pretrained backbone. Experimental results show that our JoinDet improves SoCo[6] by **+4.40 AP** and **+4.46 AP** on 1% and 10% COCO.\n\nWe would suggest SoCo[6] belongs to type (2), and type (2) pretraining methods are generally worse than type (3) pretraining methods, e.g., our JoinDet (as shown in Tab.1 of our paper and Tab.5 of DETReg) as discussed in **A1** of the **Responses to your review (Response to Reviewer EWLn)**, please refer to it for detailed explanations.\n| Method | Pretrain | COCO 1% | COCO 10% |\n|---------|-------------|:-------------:|:-------------:|\n| SoCo[6] | Backbone | 11.49 | 26.41 |\n| JoinDet | Whole model | **15.89** (+4.40) | **30.87** (+4.46) |\n\n>Q: And do you have comparisons with type (2) on normal COCO setting (not low-data regime)?\n\nA: Our JoinDet generally has significant performance gain (about **+8 AP50**)  compared with type (2) on normal COCO setting. Specifically, we evaluate the performance of JoinDet (see Fig. 4 of our paper) as well as ReSim and PixPro (see the following table). For ReSim and PixPro, we download the pretrained model from the official GitHub repo, and then fine-tune the detector with the pretrained backbone on normal COCO setting with 10 epochs due to the time limit. The performance drop of type (2) compared with our JoinDet results from neglecting the detector head during pretraining. Detailed discussion about the difference between type (2) and type (3) can be also found in **A1** of the **Responses to your review (Response to Reviewer EWLn)**.\n| Method | Pretrain | COCO AP50 |\n|-----------|-------------|:---------:|\n| ReSim[1] | Backbone | 44.2 |\n| PixPro[2] | Backbone | 44.8 |\n| JoinDet | Whole model | **53.0** |", " Thank you for your responses. Generaly, I think you addressed most of my concerns. But I still have some questions.\nFor A1, what is the performance of [6] on the low-data regime COCO fine-tuning tasks? And do you have comparisons with type (2) on normal COCO setting (not low-data regime)?\n", " We sincerely thank you for the reviews and comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work. We would be happy to address any follow-up questions. \n\nBest,\n\nAuthors of Paper 2341", " >**Q7:** Have you tried other unsupervised pretraining models as initialization and embedding loss, e.g., BYOL, do you have any ideas or perspectives on this aspect?\n\n**A7:** Thanks for the comment. Using BYOL as the backbone brings **more improvement** to JoinDet. Following the ablation setting in Sec. 4, the experimental results below show that BYOL achieves better performance (**+0.6AP**) on full data VOC finetuning. 
In this paper, as in all previous works, we use SwAV to initialize the backbone for fair comparisons.\n| Method | 10 epochs VOC | 25 epochs VOC |\n|-------------------------|:-------------:|:-------------:|\n| JoinDet(SwAV) - default | 49.0 | 55.3 |\n| JoinDet(BYOL) | 49.8 | 55.9 |\n\n**Our perspectives:**  \n\n(1) Usually, a powerful representation for image classification **benefits** downstream detection tasks. Concretely, on ImageNet classification, MoCo v2[5] shows +1.8 and +10.5 top-1 accuracy gains over SimCLR[6] and MoCo[7], respectively. Accordingly, on COCO detection, MoCo v2[5] achieves +0.6 AP and +1.0 AP performance gains when compared with SimCLR[6] and MoCo[7], respectively (reported in [8]). Furthermore, in JoinDet, a powerful image-classification representation can produce better eigen attention maps and more accurate object priors to supervise the detector learning. Meanwhile, a better pretraining for backbone initialization can provide better target features in the embedding loss (L176) during pretraining[1].  \n\n(2) However, the average accuracy on ImageNet may **NOT** be an absolute (though relatively good) metric for choosing better unsupervised pretrainings for backbone initialization on the unsupervised object detection pretraining task. Specifically, SwAV shows better performance (+1.0 top-1 accuracy) than BYOL, but shows lower performance (**-0.6 AP**) on JoinDet. We suggest that, as the downstream datasets (COCO, Pascal VOC) contain mostly scene images, the accuracy on uncommon categories in ImageNet may not help the model represent objects in scene images. We will explore it in our future work.\n\n>**Q8:** Minor comments on writing. \n\n**A8:** We appreciate the reviewer's valuable writing comments and we will revise and update these parts in our future version.\n\n>**Q9:** In Table 4, I suggest adding the IoU metric used for calculating Recall, which is clearer.\n\n**A9:** Yes, we will add the IoU metric used for calculating Recall. Compared with other methods, the higher Average Recall of JoinDet in Tab. 4 shows that it detects more objects without supervision from the ground truth. \n\n>**Q10:** The submission does not discuss the limitations; it is encouraged to include more discussion on the limitations and possible future works and improvements that could be done.\n\n**A10:** We mention the limitations at the end of Sec. 6 and we are willing to include more discussion in our future version. \n\n**Reference:**\n\n[1] Dai, Zhigang, et al. \"Up-detr: Unsupervised pre-training for object detection with transformers.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\n[2] Bar, Amir, et al. \"Detreg: Unsupervised pretraining with region priors for object detection.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[3] Zhong, Yuanyi, et al. \"Dap: Detection-aware pre-training with weak supervision.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\n[4] Ghiasi, Golnaz, et al. \"Simple copy-paste is a strong data augmentation method for instance segmentation.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\n[5] Chen, Xinlei, et al. \"Improved baselines with momentum contrastive learning.\" arXiv preprint arXiv:2003.04297 (2020).\n\n[6] Chen, Ting, et al. \"A simple framework for contrastive learning of visual representations.\" International conference on machine learning. 
PMLR, 2020.\n\n[7] He, Kaiming, et al. \"Momentum contrast for unsupervised visual representation learning.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.\n\n[8] Xie, Zhenda, et al. \"Propagate yourself: Exploring pixel-level consistency for unsupervised visual representation learning.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.", " >**Q5:** Although the work does not employ low-level unsupervised proposal methods like Selective Search, would it be more advantageous to incorporate these proposals which are generated from low-level cues? The intuition is that they may provide better or complementary coverage to some objects such as small ones, or others that have clear boundaries to the background.\n\n**A5:** **NO**. **Simply introducing low-level cues** leads to performance drops on downstream tasks. Following the ablation setting in Sec. 4, we add 30 Selective Search boxes with originally generated object priors as supervision during pretraining and find that additional selective search proposals lead to **-2.5AP** and **-0.5AP** performance drops with 10 epochs and 25 epochs finetuning on PASCAL VOC, respectively. We suggest that there are two reasons. (1) **Low-level cues lack semantic information** and will introduce lots of non-object-related supervision, which is harmful to detector pretraining when more accurate self-supervised information is presented. We refer the reviewers to see Fig.1 in the supplementary for visualizations. (2) JoinDet is already able to generate useful supervision for small objects and objects that have clear boundaries to the background. We use large-scale jittering mentioned in [4] to provide supervision for small objects.\n\n| Method | 10 epochs VOC | 25 epochs VOC |\n|----------------------------|:-------------:|:-------------:|\n| JoinDet | 49.0 | 55.3 |\n| JoinDet + selective search | 46.5 | 54.8 |\n\n>**Q6:** I'm interested in how the pre-trained object feature embedding loss affects the method, have you tried removing the feature embedding loss? Do you have any perspectives, e.g., it is a regularization that avoids the feature representation deviating too much from the pre-trained representation, or maybe it is the major driven force in the proposed object detection pre-training, as it enforces the region features to be close to an image-level pre-trained model like SwAW. Could you provide more ablation study on this part?\n\n**A6:** The embedding loss encourages the detector to capture useful information for classification[1,2], and removing the embedding loss leads to **slight performance drops** on JoinDet, which is consistent with the ablation in Table 5 in UP-DETR[1]. Specifically, the table below shows that removing the embedding loss leads to a **-0.5AP** performance drop in early fine-tuning epochs. When fine-tuning 25 epochs, the drop decreases to **-0.1AP**. Similar to UP-DETR, when the backbone is frozen during pretraining, the embedding loss (named feature reconstruction loss in UP-DETR) only influences the finetuning performance in early epochs (shown in Fig.4 and Tab.5 in UP-DETR). As we do not claim the contribution on the loss design, we directly use this loss from UP-DETR[1] and DETReg[2]. 
\n| Method | 10 epochs VOC | 25 epochs VOC |\n|----------------------|:-------------:|:-------------:|\n| JoinDet w/ emb loss | 49.0 | 55.3 |\n| JoinDet w/o emb loss | 48.5 | 55.2 |\n", " >**Q1:** The approach is simple but it seems the technical contributions are sparse, the major components of attention map generation and region proposal generation are lent from prior work.\n\n**A1:** Thanks for your review. We do **NOT** lend region proposal generation from prior works. **INSTEAD**, our novelty is to generate proposals (object priors) from **both region priors and instance priors** (L118), where instance priors are generated by the matchings between attention maps and the predicted bounding boxes of the detector (L128), providing complementary foreground instances for supervision. \n\nInstance priors are very important for detector pretraining. Region priors alone provide only comparable performance with DETReg as shown in the ablation study in Tab.5. Compared with using only region priors (**53.5 AP** on PASCAL VOC), adding instance priors (**55.4 AP** on PASCAL VOC) boosts JoinDet for **+1.9AP** , showing the importance of instance priors in bounding box generation.\n\nFurthermore, we propose two other contributions. First, we propose to **jointly** generate object priors and learn object detection which can provide progressively refined supervision. Second, we propose a **Box Smooth** method for box refinement, which stabilizes the pretraining during the evolvement of object prior generation. \n\n>**Q2:** In section 2.4, the authors mention it uses a pre-trained SWAV model to extract features, however, the details are not given. Specifically, is the regions cropped from the image and fed to the pre-trained SWAV backbone, or some feature cropping method is used, e.g., ROIalign.\n\n**A2:** Yes, we follow the process in UP-DETR[1] and DETReg[2], the regions cropped from the image are fed into the pretrained SwAV backbone.\n\n>**Q3:** In the introduction, the authors mention they are inspired by DINO and referred to the attention maps in the figures, however in the method section, it actually uses the Normalized Cuts algorithm to obtain the attention map. I believe it is better to rewrite the part in the introduction to be more clear and indicate in the figure captions how the attention map is obtained.\n\n**A3:** Thanks for your suggestion. We will revise this part in the introduction.\n\n>**Q4:** I actually have a question regarding the effectiveness of detection pre-training. For the Full-data finetuning experiment, the supervised training is compared as a baseline, however, would it be fair to add the pre-training time to the baseline? I guess when you extend the baseline training time, it would easily catch up with the detection pre-training methods, which would pose a concern on the effectiveness of detection pre-training for full-data finetuning.\n\n**A4:** Thanks for the comment. (1) Lots of previous papers [1,2,3] in this field use the supervised pretraining on ImageNet as a baseline pretraining method. In this paper, we follow this widely used baseline. (2) Following your suggestions, we extend the training epochs to 200 epochs for the supervised pretraining on PASCAL VOC and achieve 59.3 AP, which is still lower (**-5.1AP**) than our JoinDet (**64.4 AP**). On the relatively small dataset, PASCAL VOC, the supervised pretraining shows to be **over-fitting** with 200 epochs. 
We suggest that the performance gap between supervised pretraining and JoinDet verifies the effectiveness of detection pre-training.\n\nIn the **special** case where pretraining data and fine-tuning are **exactly** the same in COCO, extending the training epochs for supervised pretraining to 100 epochs, we get a detection performance of 45.6 AP. Considering our JoinDet reaches **45.6 AP** using **only** 50 epochs of fine-tuning, we suggest that **JoinDet has reached the upper bound of deformable DETR on COCO** when we fine-tune the pretrained detector with 50 epochs. \n\nStill, we suggest that **fine-tuning using less data or fewer epochs** is a more common and valuable setting because it is closer to real needs and collecting larger-scale unlabeled pretraining data through the Internet is easy and feasible.\n| Finetune dataset | Methods | Pretrain epochs on COCO without labels | Full-data Finetune epochs | AP |\n|------------------|------------|:-------------------------------:|:------------------:|:----:|\n| PASCAL VOC | Supervised | 0 | 100 | 59.5 |\n| PASCAL VOC | Supervised | 0 | 200 | 59.3 |\n| PASCAL VOC | JoinDet | 50 | 100 | **64.4** |\n| COCO | Supervised | 0 | 50 | 44.5 |\n| COCO | Supervised | 0 | 100 | 45.6 |\n| COCO | JoinDet | 50 | 50 | 45.6 |", " >**Q5:** The improvement over DETReg is somewhat limited.\n\n**A5:** JoinDet shows considerable performance gain on **three evaluation benchmarks** in unsupervised learning, e.g., low-data regimes object detection on COCO, few-epochs full-data finetuning on COCO, and full-data fine-tuning on PASCAL VOC. Specifically, when 10% COCO data are used for fine-tuning, JoinDet shows a **+1.75 AP** performance gain on DETReg (Tab. 1). When fine-tuning with fewer (10) epochs on COCO, DETReg achieves a **+3.3 AP50** performance gain when compared with DETReg (shown in Fig. 4). On full-data PASCAL VOC fine-tuning, JoinDet improves DETReg by **+1.0 AP** (Tab. 2).\n\nOnly the improvement on the special full-data fine-tuning on COCO is +0.1% (Tab. 3).  In this **special** setting, the pretraining data and fine-tuning data are **exactly** the same, and the pretrained models are finetuned by **sufficiently long epochs** (i.e., deformable DETR almost converges with 50 epochs training [3]).  We consider this setting unsuitable to evaluate the performance of pretrainings because sufficiently long epochs eliminate the performance difference among different pretrainings. More importantly, when we extend the fine-tuning epochs for supervised pretraining to 100 epochs, we get a detection performance of 45.6 AP. Considering our JoinDet reaches **45.6 AP** using **only** 50 epochs of fine-tuning, we suggest that **JoinDet has reached the upper bound of deformable DETR on COCO** when we fine-tune the pretrained model with 50 epochs. We provide these results just for making the evaluation more comprehensive following existing works. \n\nWe would call attention to one more common evaluation setting where **fine-tuning data is less or epochs are fewer** as the \"pretraining - finetuning\" paradigm always uses a large dataset to pretrain and small epochs to finetune. 
Because it is closer to real needs and collecting larger-scale unlabeled pretraining data through the Internet is easy and feasible.\n| Finetune dataset | Methods | Pretrain epochs on COCO without labels | Full-data Finetune epochs | AP |\n|:----------------:|:----------:|:-------------------------------:|:------------------:|:----:|\n| COCO | Supervised | 0 | 50 | 44.5 |\n| COCO | Supervised | 0 | 100 | 45.6 |\n| COCO | JoinDet | 50 | 50 | 45.6 |\n\n**Reference:**\n\n[1] Dai, Zhigang, et al. \"Up-detr: Unsupervised pre-training for object detection with transformers.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.\n\n[2] Bar, Amir, et al. \"Detreg: Unsupervised pretraining with region priors for object detection.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[3] Zhu, Xizhou, et al. \"Deformable DETR: Deformable Transformers for End-to-End Object Detection.\" International Conference on Learning Representations. 2020.", " >**Q1:** In Section 3, authors should claim which detection framework they used for pre-training in the begging of section 3, which will make the paper clearer. Authors should highlight the setting of the pre-training methods, supervised/SwAV only pre-trains backbone, while JoinDet pre-trains whole network.\n\n**A1:** Thanks for the reviewer's writing suggestions. We will revise these parts in our future version to make the paper clearer.\n\n>**Q2:** The proposed method highly rely on the pre-trained backbone, authors initialize the ResNet50 backbone of JoinDet with SwAV, did authors explore other unsupervised pre-training methods, e.g. BYOL, SimCLR?\n\n**A2:** Following the ablation setting in Sec. 4, we change the pretraining from SwAV to BYOL and find that BYOL brings more improvements, which **further demonstrates the compatibility of our method** to backbone pretraining methods and the effectiveness of the detector pretraining paradigm. As shown in the table below, JoinDet using BYOL achieves a **+0.6 AP** performance gain in the ablation study. We only use SwAV as the backbone pretraining method in this paper for making a fair comparison because previous works (UP-DETR[1], DETReg[2]) conduct their experiments using the SwAV pretraining. There may be some other powerful unsupervised pre-training methods for this task, but it is out of the paper's scope and we will leave them for future works.\n\n| Method | 10 epochs VOC | 25 epochs VOC |\n|-------------------------|:-------------:|:-------------:|\n| JoinDet(SwAV) - default | 49.0 | 55.3 |\n| JoinDet(BYOL) | **49.8** | **55.9** |\n\n>**Q3:** There are experiments conducted in the setting where pre-training is performed on COCO/ImageNet and finetuning is performed on PASCAL VOC (Table2). Why authors did not do experiments in the setting where pre-training is conducted on ImageNet and fine-tuning is performed on COCO. I hope to see this experiment in the rebuttal.\n\n**A3:** Thanks for your review. (1) **Reasons:** our JoinDet mainly focuses on pretraining detectors on scene images that contain multiple objects, which are easier to obtain on the Internet. The region priors can highlight the foreground regions and instance priors can predict multiple object bounding boxes from foreground regions. However, ImageNet is an object-centric dataset, which mostly contains only one object, on which our design is not specifically targeted. As shown in Tab. 
2, JoinDet pretrained on COCO performs better (+0.7 AP) than JoinDet pretrained on ImageNet when evaluated on PASCAL VOC because JoinDet can **mine more diverse objects** in the pretraining dataset. \n\n(2) **Experiments:** following your suggestion, we pretrain our model on ImageNet-1K and fine-tune it on COCO. The performance improvement still EXISTS, i.e., JoinDet improves DETReg for **+0.46 AP** and **+0.69 AP** on 1% and 10% COCO fine-tuning settings, respectively. \n| Method | Pretraining dataset | 1% COCO | 10% COCO |\n|---------|:-------------------:|:-------:|:--------:|\n| DETReg | ImageNet-1K | 14.76 | 29.36 |\n| JoinDet | ImageNet-1K | **15.22** | **30.05** |\n\n>**Q4:** Why authors only do experiments under low-data regime on COCO, is it consistent on PASCAL VOC?\n\n**A4:** (1) We did not do the low-data regime PASCAL VOC experiments because PASCAL VOC is already a **relatively small** downstream detection dataset when compared with COCO. Concretely, PASCAL VOC has only 20K images which are about **1/6** of the images in COCO. And previous works (UP-DETR[1], DETReg[2]) did not do these experiments.  \n\n(2) Following your suggestions, we do experiments on PASCAL VOC in low-data regimes. **Consistent performance improvement** can be found in the low-data regime object detection setting on Pascal VOC. As shown in the table below, compared with DETReg, JoinDet achieves **+2.29 AP**, and **+1.60 AP** performance gains on 1%, and 10% VOC fine-tuning settings.\n| Method | Pretrain | 1% VOC | 10% VOC |\n|---------|:-----------:|:------------:|:------------:|\n| SwAV | Backbone | 14.02 | 33.80 |\n| DETReg | Whole model | 21.12 | 44.37 |\n| JoinDet | Whole model | **23.41**(+2.29) | **45.97**(+1.60) |", " >**Q1:** Missing important comparisons: For a rough categorization, the previous contrastive pre-training methods focused on (1) image classification (e.g., Mocov2 and SimCLR) (2) general dense prediction tasks[1, 2, 3, 4] (such as object detection and segmentation) (3) detr-like object detectors. This paper belongs to (3), and only compared previous works of (1) and (3). But the detailed comparisons to the (2) are missing. It would be much better if the authors can provide deep discussions. Compare with general dense prediction tasks.\n\n**A1:** Thanks for your suggestion. We will add the results of the papers you mentioned in type (2) in our paper. The difference between type (2) and type (3): First, type (2) **only pretrains the backbone** with dense prediction tasks, while type (3) directly pretrains **all the detection components**. Type (2) neglects the detector heads, when transferred to downstream detection tasks, the transformer part in deformable DETR is initialized from scratch and does not benefit from pretraining. Second, type (2) focuses on dense contrastive learning which can learn more fine-grained features but can **NOT** empower the model to learn the location of objects. Instead, type (3) pretrains the whole detector which targets on **spatial localization learning** which is important for downstream detection tasks.  \n\nThe experimental results below show that type (2) methods have lower performance (about **-4 AP** on low-data regime COCO detection) than our JoinDet. Specifically, we evaluate the performance of ReSim[3] (which is shown in Table 1 of the paper) and PixPro[1] (added in the rebuttal) on the low-data regime COCO fine-tuning tasks. 
We only choose these two methods as they achieve relatively better performance than DenseCL[2] (reported in [6]) and we can not find released checkpoints for DUPR[4]. Compared with PixPro, JoinDet achieves **+4.05 AP** and **+4.63 AP** performance gains when fine-tuning on 1% and 10% COCO data, respectively.\n| Method | Pretrain | COCO 1% | COCO 10% |\n|-----------|:-----------:|:---------:|:----------:|\n| ReSim[3] | Backbone | 11.07±0.4 | 26.56±0.3 |\n| PixPro[1] | Backbone | 11.84±0.3 | 26.24±0.2 |\n| DETReg[5] | Whole model | 14.58±0.3 | 29.12±0.2 |\n| JoinDet | Whole model | 15.89±0.2 | 30.87±0.1 |\n\n>**Q2:** The advantage of pre-training is the generalization to various downstream tasks. I wonder if the representation learned in this method can be also used for other tasks such as segmentation.\n\n**A2:** **YES**, our method can be used for segmentation. There are two directions to extend our method for segmentation tasks. (1) After pretraining, given foreground object bounding boxes learned by JoinDet, the segmentation masks in the bounding boxes can be easily achieved by adding a mask branch like that in Mask-RCNN and DETR. (2) During pretraining, as the eigen attention maps and object priors are progressively refined, highlighted foreground regions in the eigen attention maps bounded by object priors can be treated as the supervision for segmentation. We will explore this extended version in our future works.\n\n**Reference:**\n\n[1] Xie, Zhenda, et al. \"Propagate yourself: Exploring pixel-level consistency for unsupervised visual representation learning.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\n[2] Wang, Xinlong, et al. \"Dense contrastive learning for self-supervised visual pre-training.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\n[3] Xiao, Tete, et al. \"Region similarity representation learning.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\n\n[4] Ding, Jian, et al. \"Deeply Unsupervised Patch Re-Identification for Pre-training Object Detectors.\" IEEE Transactions on Pattern Analysis and Machine Intelligence 2022.\n\n[5] Bar, Amir, et al. \"Detreg: Unsupervised pretraining with region priors for object detection.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[6] Wei, Fangyun, et al. \"Aligning pretraining for detection via object-level contrastive learning.\" Advances in Neural Information Processing Systems. 2021.", " >**Q2:** Marginal improvement when more annotated data is available. From Table 3, when using the full COCO training set, only marginal improvement (0.1%) is obtained by the proposed approach comparing to the previous state-of-the-art object prior based pre-training approach DETReg. (1) Is there any explanation about why this happens and (2) any thought on how to improve it?\n\n**A2:** **Explanation:** Experiments in Tab.3 are conducted in a **special** setting, where the pretraining data and fine-tuning data are **exactly** the same and the pretrained models are finetuned by **sufficiently long epochs** (i.e., deformable DETR almost converges with 50 epochs training [2]). More importantly, when we extend the fine-tuning epochs for supervised pretraining to 100 epochs, we get a detection performance of 45.6 AP. 
Considering our JoinDet reaches **45.6 AP** using **only** 50 epochs of fine-tuning, we suggest that **JoinDet has reached the upper bound of deformable DETR on COCO** when we fine-tune the pretrained model with 50 epochs.\n| Finetune dataset | Methods | Pretrain epochs on COCO without labels | Full-data Finetune epochs | AP |\n|:----------------:|:----------:|:-------------------------------:|:------------------:|:----:|\n| COCO | Supervised | 0 | 50 | 44.5 |\n| COCO | Supervised | 0 | 100 | 45.6 |\n| COCO | JoinDet | 50 | 50 | 45.6 |\n\nWe consider such a specific setting unsuitable for evaluating the performance of pretraining methods because sufficiently long fine-tuning epochs lead to less disparity among different pretrainings. We provide these results just for making the evaluation more comprehensive following existing works. \n\nWe would like to further highlight that **fine-tuning with less data or fewer epochs** is a more common and valuable evaluation setting, with which our JoinDet achieves considerable improvements compared with DETReg. Specifically, on full-data PASCAL VOC fine-tuning, JoinDet improves DETReg by **+1.0 AP**. When 1% COCO data are used for fine-tuning, JoinDet shows a **+1.31 AP** performance gain on DETReg. When fine-tuning with fewer (10) epochs on COCO, DETReg achieves a **+3.3 AP50** performance gain when compared with DETReg (shown in Fig. 4).\n\n**Methods to improve the finetuning results on full coco data:** we would suggest using larger unlabeled pretraining datasets, such as COCO+ and OpenImages, which are easy to obtain on the Internet and bring out the full potential of JoinDet.\n\n**Reference**\n\n[1] Zhong, Yuanyi, et al. \"Dap: Detection-aware pre-training with weak supervision.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021.\n\n[2] Zhu, Xizhou, et al. \"Deformable DETR: Deformable Transformers for End-to-End Object Detection.\" International Conference on Learning Representations. 2020.", " >**Q1:** Novelty of region prior. There is already published work which uses attention maps to generate object bounding boxes to pre-train object detector [1]. The authors should discuss and compare with this paper.\n\n**A1:** (1) Thanks for your comments. We would like to clarify that the region prior generation alone by the attention map is **NOT** our claimed novelty. Instead, our novelty is to **generate bounding boxes from both region priors and instance priors** (Line118), where instance priors that are generated by the matchings between attention maps and the predicted bounding boxes of the detector are new to the best of our knowledge. \n\n**Instance priors can provide complementary information for region priors** and are very important for detector pretraining. Specifically, region priors only provide comparable performance with DETReg as shown in the ablation study in Tab.5. Compared with using only region priors (**53.5 AP** on PASCAL VOC), adding instance priors (**55.4 AP** on PASCAL VOC) boosts JoinDet for **+1.9 AP**, showing the importance of instance prior in bounding box generation. The joint generation of region prior and instance prior corresponds to our claimed **contribution #1** in L59-61. 
Furthermore, our paper makes two other contributions: \n\n**Contribution #2** (L61-63): our design can generate object priors (including region priors and instance priors) and learn object detection **synchronously**, which brings **mutual improvement** to the object prior generation and detector optimization. \n\n**Contribution #3** (L63-64): a **Box Smooth Module** is proposed to stabilize the pretraining during the refinement of generated object priors. \n\n(2) Generally, our novelty is **independent** of DAP[1], but we appreciate the reviewer's reminder and will add it to the related works. Their differences are threefold. First, JoinDet is totally **unsupervised** while DAP **<u>relies on image labels</u>** to generate Class Activation Maps (CAMs), and hence cannot work without ground-truth class labels. Second, JoinDet further generates instance priors via matching the attention maps with the predicted bounding boxes and can optimize object priors and detector learning synchronously, while DAP does **NOT** generate instance priors. Third, JoinDet designs a Box Smooth Module while DAP does **NOT**. \n\nThese differences bring three advantages. First, as an unsupervised method, JoinDet can **be easily adapted to** newly collected, unlabelled datasets. Second, JoinDet progressively refines boxes to provide more and more accurate supervision during pretraining. Third, the generated instance priors in JoinDet divide foreground regions into foreground instances, which provides more **object-related boxes** for supervision and is more suitable for scene image datasets.\n", " This paper focuses on unsupervised object detection pre-training using object priors. Specifically, two kinds of object priors are used in this paper. The first one is the region prior, which is generated from the encoder attention maps. The second one is the instance prior, which is obtained by selecting high-quality outputs from the detector decoder. Experiments on COCO and PASCAL VOC show that the proposed approach obtains better results than previous unsupervised object detection pre-training approaches when only a limited amount of annotated data is available. ### Strengths\n- The proposed approach is interesting.\n- Promising results are obtained when only a limited amount of annotated data is available.\n\n### Weaknesses\n- Novelty of region prior. There is already published work which uses attention maps to generate object bounding boxes to pre-train object detectors [a]. The authors should discuss and compare with this paper.\n- Marginal improvement when more annotated data is available. From Table 3, when using the full COCO training set, only a marginal improvement (0.1%) is obtained by the proposed approach compared to the previous state-of-the-art object prior based pre-training approach DETReg.\n\n[a] DAP: Detection-Aware Pre-training with Weak Supervision\n\nFrom Table 3, when using the full COCO training set, only a marginal improvement (0.1%) is obtained by the proposed approach compared to the previous state-of-the-art object prior based pre-training approach DETReg. Is there any explanation of why this happens and any thoughts on how to improve it? Only a marginal improvement (0.1%) is obtained by the proposed approach compared to the previous state-of-the-art object prior based pre-training approach DETReg.", " This paper presents an unsupervised pre-training strategy for DETR-like object detectors.
The core idea is to generate object priors from the encoder to guide the self-supervised pre-training.\n\n\n Strengths:\n* Using the encoder itself to generate object priors is a good idea.\n* The proposed method outperforms other methods designed for pre-training DETR-like object detectors.\n\nWeaknesses:\n* Missing important comparisons: For a rough categorization, previous contrastive pre-training methods focused on (1) **image classification** (e.g., MoCov2 and SimCLR), (2) **general dense prediction tasks** [1, 2, 3, 4] (such as object detection and segmentation), and (3) **DETR-like object detectors**. This paper belongs to (3), but only compares with previous works from (1) and (3); detailed comparisons to (2) are missing. It would be much better if the authors could provide a deeper discussion.\n* The advantage of pre-training is generalization to various downstream tasks. I wonder if the representation learned by this method can also be used for other tasks such as segmentation.\n\n[1] Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning. CVPR 2021.\n[2] Dense Contrastive Learning for Self-Supervised Visual Pre-Training. CVPR 2021.\n[3] Region Similarity Representation Learning. ICCV 2021.\n[4] Deeply Unsupervised Patch Re-Identification for Pre-training Object Detectors. TPAMI 2022.\n See weakness. Yes. The authors mentioned \"there still exists a large gap from supervised training on class agnostic object proposal evaluation, which calls for further studies.\"", " This submission proposes a pre-training approach for object detection transformers; it generates pseudo bounding box regression targets by leveraging attention cues from the pre-trained backbone network. Specifically, both object proposals generated from the encoder attention map and object instance predictions from the decoder head are used as pseudo boxes. The method is evaluated on low-data and full-data fine-tuning, as well as object proposal generation. \n**Strengths**\n- The submission is well-written and easy to follow. The math definitions and theories are clear and concise.\n- It is surprising that bounding box prediction supervised merely by attention cues and a self-supervising signal could achieve such superior object proposal results, which has not been done before.\n\n**Weaknesses and parts to be improved**\n- I listed some of my concerns in the questions, about which I'm not very sure; the authors are encouraged to address my concerns and I would consider raising my score.\n- The approach is simple but the technical contributions seem sparse: the major components of attention map generation and region proposal generation are borrowed from prior work.\n- In section 2.4, the authors mention it _uses a pre-trained SWAV model to extract features_, however, the details are not given. Specifically, are the regions cropped from the image and fed to the pre-trained SwAV backbone, or is some feature cropping method used, e.g., ROIAlign?\n- In the introduction, the authors mention they are inspired by DINO and refer to the attention maps in the figures; however, in the method section, it actually uses the Normalized Cuts algorithm to obtain the attention map.
I believe it is better to rewrite the part in the introduction to be more clear and indicate in the figure captions how the attention map is obtained.\n\n**minor comments**\n- In line 84, can then **be** generated\n- In equation 3, FB is not defined\n- In equation 8, B and R should be defined properly, I guess R should be replaced with other characters as it is already used in equation 1.\n- In line 144, SWAV is not cited\n- In line 212, Detreg should be DETReg\n- In Tab.2 and Tab.3, citations are missing for the prior works.\n- In Tab.4, I suggest adding the IOU metric for calculating Recall, which is more clear. 1. I actually have a question regarding the effectiveness of detection pre-training. For the **Full-data finetuning** experiment, the supervised training is compared as a baseline, however, would it be fair to add the pre-training time to the baseline? I guess when you extend the baseline training time, it would easily catch up with the detection pre-training methods, which would pose a concern on the effectiveness of detection pre-training for full-data finetuning.\n2. Although the work does not employ low-level unsupervised proposal methods like Selective Search, would it be more advantageous to incorporate these proposals which are generated from low-level cues? The intuition is that they may provide better or complementary coverage to some objects such as small ones, or others that have clear boundaries to the background.\n3. I'm interested in how the pre-trained object feature embedding loss affects the method, have you tried removing the feature embedding loss? Do you have any perspectives, e.g., it is a regularization that avoids the feature representation deviating too much from the pre-trained representation, or maybe it is the major driven force in the proposed object detection pre-training, as it enforces the region features to be close to an image-level pre-trained model like SwAW. Could you provide more ablation study on this part?\n4. Have you tried other unsupervised pretraining models as initialization and embedding loss, e.g., BYOL, do you have any ideas or perspectives on this aspect? The submission does not discuss the limitations, it is encouraged to include more discussion on the limitation and possible future works and improvements that could be done. ", " This paper proposes an unsupervised object detection pre-training framework that can generate object priors and learn detector jointly. This work is inspired by DINO, the self-attention maps can generate region prior bounding boxes. Authors utilize Deformable DETR as their main framework. The proposed method achieves good results on low data-regime object detection on COCO and full-data finetuing on PASCAL VOC. Strengths:\n1) The motivation is good. Generating object priors by attention maps is an alternative of the popular selective search algorithm. However, a pre-trained backbone is important and necessary.\n\nWeaknesses:\n1) In Section 3, authors should claim which detection framework they used for pre-training in the begging of section 3, which will make the paper clearer. \n2) The proposed method highly rely on the pre-trained backbone, authors initialize the ResNet50 backbone of JoinDet with SwAV, did authors explore other unsupervised pre-training methods, e.g. BYOL, SimCLR?\n3) Authors should highlight the setting of the pre-training methods, supervised/SwAV only pre-trains backbone, while JoinDet pre-trains whole network. 
\n4) There are experiments conducted in the setting where pre-training is performed on COCO/ImageNet and finetuning is performed on PASCAL VOC (Table 2). Why did the authors not run experiments in the setting where pre-training is conducted on ImageNet and fine-tuning is performed on COCO? I hope to see this experiment in the rebuttal.\n5) Why do the authors only run experiments under the low-data regime on COCO; is the result consistent on PASCAL VOC?\n6) The improvement over DETReg is somewhat limited. My biggest concern comes from the experiments and the limited improvements under the fine-tuning setting, see weaknesses above. Authors have discussed broader impacts in their supplementary material.
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "nips_2022_CTqkruS5Bb", "Xs_9cuktjD4", "j156RgO_Pi7", "nips_2022_CTqkruS5Bb", "640H0jAukClf", "0I_SeBO-yhd", "GAvIsoJW7oA", "UBctz5KNh1Py", "j5e9zzbwX2t", "fSYYf8bI2pK", "FrISZOINau", "91chIjSkys8", "nips_2022_CTqkruS5Bb", "nips_2022_CTqkruS5Bb", "nips_2022_CTqkruS5Bb", "nips_2022_CTqkruS5Bb" ]
nips_2022_2EufPS5ABlJ
Spherical Sliced-Wasserstein
Many variants of the Wasserstein distance have been introduced to reduce its original computational burden. In particular the Sliced-Wasserstein distance (SW), which leverages one-dimensional projections for which a closed-form solution of the Wasserstein distance is available, has received a lot of interest. Yet, it is restricted to data living in Euclidean spaces, while the Wasserstein distance has been studied and used recently on manifolds. We focus more specifically on the sphere, for which we define a novel SW discrepancy, which we call spherical Sliced-Wasserstein, making a first step towards defining SW discrepancies on manifolds. Our construction is notably based on closed-form solutions of the Wasserstein distance on the circle, together with a new spherical Radon transform. Along with efficient algorithms and the corresponding implementations, we illustrate its properties in several machine learning use cases where spherical representations of data are at stake: density estimation on the sphere, variational inference or hyperspherical auto-encoders.
Reject
This paper has generated a long discussion and, although it has strong theoretical merits, we all concur that the paper lacks empirical motivation as well as a strong empirical evaluation, both with respect to distances that do not exploit the manifold structure and to those defined on a manifold. Hence, we believe that at this point it would be preferable to have such empirical evidence (ideally with quantitative results on real-world problems) before accepting the paper. Given that, we are sure that the paper will be much stronger and of broader interest to the ML community.
test
[ "cRH9RMOp31", "dNwnRqc6u1B", "qsHaJcMp7Rs", "z1X4aAbrOJp", "ZSt3kcwv8D2", "LGoymtd-MkP", "GBue9mdgco", "849608OqCzR", "5pl2ad_ugwe", "bL6DxqGgeG", "iWA6ch7jlko", "eFJBUYzZ1m", "nggcRNFe9Cb", "M1FZmheaQd", "Rk714vC5SSK" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their replies. In summary, I like the theory, even it is not mathematically challenging to build up those theory. For the practical side, the theory needs good examples to demonstrate its advantages, which is not shown in the paper. Hence, I would like to keep my score unchanged. ", " We thank you for the references.\n\nWe will look into these applications for further work.", " For toy synthetic datasets that (partially) lie on spherical manifolds, you can look at the disentanglement literature (e.g. Locatello et al. http://proceedings.mlr.press/v119/locatello20a/locatello20a.pdf, or Fumero et al. http://proceedings.mlr.press/v139/fumero21a/fumero21a.pdf), in particular, Shapes3D and Cars3D. Moreover, disentanglement, and in particular the later work of Fumero et al., could be an interesting practical application for the proposed latent space structure.", " Thank you for your reply.\n\nFor the results of FID, we believe that we cannot really compare with other works in the literature since we do not necessarily use the same architectures for the encoder and decoder. Here, we use the same architecture when we compare the methods and hence results are more comparable and it shows that SWAE and SSWAE give comparable results in term of FID.\n\nNote also that similar method which use sliced-Radon Sobolev instead of SWAE [1] report higher results than us in CIFAR10 (see Table 1, 120 for SWAE).\n\nWe report here the reconstruction loss on training and validation set (averaged over 5 trainings):\n\n|Method\\Dataset|MNIST|Fashion|CIFAR10|\n|--------------|-----|-------|-------|\n|SSWAE (Train) |4.8$\\pm$ 0.04|6.67 $\\pm$ 0.04|13.68 $\\pm$ 0.11|\n|SWAE (Train) | 4.96 $\\pm$ 0.02 |6.96 $\\pm$ 0.02|13.62 $\\pm$ 0.04|\n|SSWAE (Test) | 7.31 $\\pm$ 0.04 | 8.01 $\\pm$ 0.04 | 30.29 $\\pm$ 0.15 |\n|SWAE (Test) | 7.56 $\\pm$ 0.07 | 8.25 $\\pm$ 0.03 | 30.27 $\\pm$ 0.09 |\n\nReal datasets on the sphere are often on $S^2$. We wanted to take advantage of the slicing process and of proposition 1. Hence, we focused on experiments in higher dimension and with enforcing uniformity. We will look for applications on real datasets in future works.\n\n[1] Turinici, Gabriel. \"Radon–Sobolev Variational Auto-Encoders.\" Neural Networks 141 (2021): 294-305.", " Thank you for your responses.\n\nFrom the result of the hyperspherical autoencoder, SSWAE seems to be only comparable to SWAE. Also, the FID scores are high compared to works in the literature. I guess that the CIFAR10 dataset does not have an underlining manifold which is a hypersphere. Could authors provide the reconstruction losses on the training set and the test set on reported datasets?\n\nAlso, I wonder which application on real datasets that SSW can show the benefit of using the geodesic distance on the hypersphere.\n\nMinor: I am not sure if having the paper and the appendix in a single file violates the 9 pages limit of the rebuttal.\n\nI will ask if I have other questions.\n\nBest regards,", " We thank the reviewers for their positive and encouraging comments on our work. We added a revised version of the paper and we will sum up here the changes we have made.\n\nThe main issue raised by the reviewers is that experiments are not intensive enough. We recall here that our main objective is to propose **the first version of a sliced optimal transport divergence between measures supported on manifold**, in our case an hypersphere. 
Our contribution mostly lies in the construction of this divergence, and the associated properties, and less toward applications which would benefit from it. As such, our goal was to show that this divergence works in practice on selected applications that directly model data (or their representations) on hyperspheres.\n\n Nevertheless, we understand the need to show that our spherical sliced wasserstein (SSW) works better in practice than sliced wasserstein (SW) in the embedding space. We therefore conducted several complementary experiments: First, we added results on the Fashion MNIST dataset for the autoencoder experiment. Second, we completed some results that were available in the appendix of the document. In particular, we used SSW to prevent collapsing representations in self supervised learning, more specifically in the contrastive learning framework where the representations are projected on the sphere $S^2$.\nThe results obtained with SSL are competitive while slightly underperforming compared to other state of the art contrastive methods which enforce uniformity of the data (see Table 2) using an explicit interaction term between batch samples with a complexity of $O(dn^2)$. We added in the revision of the paper more comparisons using SW and SSW to enforce uniformity. In these experiments, not only does SW perform worse than SSW, but it also requires sampling and sorting a uniform distribution on the hypersphere to compute SW whereas the closed-form of SSW with the uniform distribution allows for a more efficient computation, **thanks to our new result in Proposition 1**. \n\n| Method | Encoder output | $S^2$ |\n|--------|----------------|-------|\n| Supervised | 82.26 | 81.43 |\n| SimCLR[1] | **66.55** | 59.09 |\n| _Wang and Isola._[2] | 60.53 | 55.86 |\n| SW-SSL, $\\lambda = 1, L = 10$ | 62.65 | 57.77 |\n| SW-SSL, $\\lambda = 1, L = 3$ | 62.46 | 57.64 |\n| SSW-SSL, $\\lambda = 20, L = 10$ | 64.89 | 58.91 |\n| SSW-SSL, $\\lambda = 20, L = 3$ | 63.75 | **59.75** |\n\n\nIf the paper is accepted, we will move some of these results in the main body of the paper as requested by one of the reviewers, as well as FID for the autoencoder experiment on CIFAR10 and CelebA.\n\nWe also would like the reviewers to take into account that one of the main interests of this work is to define a sliced-wasserstein based discrepancy which involves theoretically only objects which are well defined on manifolds, and we hope to pave the way towards defining such discrepancies on arbitrary manifolds, which are not necessarily embedded in Euclidean spaces, and on which the regular SW cannot be used.\n\n[1] Chen, Ting, et al. \"A simple framework for contrastive learning of visual representations.\" International conference on machine learning. PMLR (2020).\n\n[2] Wang, Tongzhou, and Phillip Isola. \"Understanding contrastive representation learning through alignment and uniformity on the hypersphere.\" International Conference on Machine Learning. PMLR (2020).", " We thank the reviewer for his/her comments. Below, we address his/her questions and concerns.\n\n**Mathematical Background.** We agree that the mathematical background is quite demanding. 
To make the basic understanding of the method more intuitive, we propose to add in the next version of the paper a figure and/or a table in which we would compare the main ingredients of the classical SW distance and of SSW.\n\n\n\n| |Geodesics | Projection | Integration Set |\n|-|----------|------------|-----------------|\n|SW| Lines | $P^\theta(x) = \langle x, \theta\rangle$ | $S^{d-1}$ |\n| SSW | Great circles |$P^U(x)=\frac{U^Tx}{\|\|U^Tx\|\|_2}$ | $\mathbb{V}_{d,2}$ |\n\n\n**Expanding the analysis on the benefits of using a spherical distribution prior with autoencoders.** \n\nThe benefit of using a spherical distribution prior with autoencoders has been thoroughly studied in related works such as [1,2,3]. We agree that it would be interesting to study the latent space and the performance obtained on datasets with more complex structure, e.g. by using data which have a known spherical latent space (such as MNIST with rotated digits). This is a nice idea that we will consider in further work. \n\nFor existing datasets with complex structure (e.g. hierarchical), this has already been done in part in several related works such as [4,5], which compare the performance obtained for different datasets and different latent spaces. Hence, here we focus on showing the ability of SSW to capture a nice latent space with a uniform prior for simple datasets such as MNIST, and we expect to obtain similar behavior as in related work on datasets with more complex structure. \n\n[1] Davidson, Tim R., et al. \"Hyperspherical variational auto-encoders.\" arXiv preprint arXiv:1804.00891 (2018).\n\n[2] Xu, Jiacheng, and Greg Durrett. \"Spherical latent spaces for stable variational autoencoders.\" arXiv preprint arXiv:1808.10805 (2018).\n\n[3] Zhao, Deli, Jiapeng Zhu, and Bo Zhang. \"Latent variables on spheres for autoencoders in high dimensions.\" arXiv preprint arXiv:1912.10233 (2019).\n\n[4] Grattarola, Daniele, Lorenzo Livi, and Cesare Alippi. \"Adversarial autoencoders with constant-curvature latent manifolds.\" Applied Soft Computing 81 (2019): 105511.\n\n[5] Skopek, Ondrej, Octavian-Eugen Ganea, and Gary Bécigneul. \"Mixed-curvature variational autoencoders.\" arXiv preprint arXiv:1911.08411 (2019).\n\n", " We thank the reviewer for their comments. We address their concern below.\n\n**The applications of SSWD shown in the paper are limited.**\n\nWhile we agree that the results on the different applications for SSW are only competitive with, and not significantly outperforming, SW, we would like to emphasize that SSW is of interest in its own right, being a discrepancy that is intrinsic to the sphere, in the sense that we only use objects defined on the manifold. Hence, it is a first step toward defining geometric versions of SW on other manifolds which are not necessarily embedded in $\mathbb{R}^d$ and on which we could therefore not use SW.\n\nWe also experimented with self-supervised learning, where we obtained fairly competitive results compared with state-of-the-art contrastive methods. We will add in the revised version of the paper more comparisons between SW and SSW on this task. In particular, we observed that using SSW instead of SW to enforce uniformity gives better performance at a lower computational cost, using the closed-form provided in Proposition 1.
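For illustration, here is a minimal sketch of how such a uniformity term can be computed; the function and variable names are ours for exposition only, and we assume that the closed form of Proposition 1 reduces, for an empirical measure on the circle with coordinates in $[0,1)$, to the quantile form $W_2^2 = \int_0^1 (F^{-1}(t) - t - \alpha)^2 \, dt$ with the optimal shift $\alpha$ given by the mean offset (a sketch under these assumptions, not our released code):

```python
import torch

def ssw2_to_uniform(x, num_projections=10):
    """Sketch: SSW_2^2 between an empirical measure on S^{d-1} and the
    uniform measure, averaging W_2^2 over random great circles."""
    n, d = x.shape
    total = x.new_zeros(())
    for _ in range(num_projections):
        # Random 2-frame U in the Stiefel manifold V_{d,2} via QR of a Gaussian matrix.
        U, _ = torch.linalg.qr(torch.randn(d, 2, device=x.device, dtype=x.dtype))
        p = x @ U
        p = p / p.norm(dim=1, keepdim=True).clamp_min(1e-12)  # geodesic projection onto the great circle
        t = (torch.atan2(p[:, 1], p[:, 0]) / (2 * torch.pi)) % 1.0  # angular coordinates in [0, 1)
        t, _ = torch.sort(t)
        alpha = t.mean() - 0.5  # assumed optimal rotation against the uniform measure
        i = torch.arange(1, n + 1, device=x.device, dtype=x.dtype)
        a, b = (i - 1) / n, i / n  # quantile bins of the empirical measure
        c = t - alpha
        # Sum over bins of \int_a^b (c - s)^2 ds, i.e. the quantile form above.
        total = total + (((c - a) ** 3 - (c - b) ** 3) / 3).sum()
    return total / num_projections
```

The point of the closed form is visible here: no sample from the uniform reference is needed, only one sort of the projected angles per great circle.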
We report in the following table the best performances obtained (by trying several regularization parameters $\lambda$ for both SW and SSW, and denoting by $L$ the number of projections).\n\n\n| Method | Encoder output | $S^2$ |\n|--------|----------------|-------|\n| _Wang and Isola._ [1] | 60.53 | 55.86 |\n| SW-SSL, $\lambda = 1, L = 10$ | 62.65 | 57.77 |\n| SW-SSL, $\lambda = 1, L = 3$ | 62.46 | 57.64 |\n| SSW-SSL, $\lambda = 20, L = 10$ | 64.89 | 58.91 |\n| SSW-SSL, $\lambda = 20, L = 3$ | 63.75 | 59.75 |\n\n\nWe added these results in the revised version of the paper, and if accepted, we will move them to the main body of the paper.\n\n[1] Wang, Tongzhou, and Phillip Isola. \"Understanding contrastive representation learning through alignment and uniformity on the hypersphere.\" International Conference on Machine Learning. PMLR (2020).", "**Dealing with continuous measures.**\n\nFinding a closed-form for the Wasserstein distance between continuous measures is a hard problem, in practice solved, to our knowledge, only for Gaussians and elliptical distributions. Distributions on the sphere are in general more complicated, and it is therefore still an open question whether or not we can derive closed-forms for the Wasserstein distance between them, even on the circle. We do not know of any results on von Mises distributions, for example. Moreover, deriving a closed-form between projected measures is even more difficult since the projected measures can follow a different distribution. For example, the projection of a von Mises-Fisher distribution on a great circle does not follow a von Mises distribution in practice [1], but an infinite mixture of von Mises distributions. The Wasserstein distance between other generalizations of Gaussians on the sphere still needs to be studied (e.g. the Riemannian normal distribution [2] or the wrapped normal distribution [3]), as does that between other distributions such as the power spherical distribution [4].\n\nNote however that we derived in Proposition 1 a closed-form for the Wasserstein distance on the circle between an arbitrary distribution and the uniform measure.\n\n**Additional reference.** We thank the reviewer for pointing us to this relevant work on mini-batch versions of optimal transport (MBOT) and will add it to the camera-ready version of the paper. MBOT is definitely a strong competitor to sliced versions of OT, but to our knowledge it has not yet been studied for measures living on manifolds, though we can expect some of its properties to carry over to this type of data. We will also add [5], in which it is proposed to integrate over a von Mises-Fisher distribution or a mixture of von Mises-Fisher distributions instead of the uniform one.\n\n\n[1] Jung, Sungkyu. \"Geodesic projection of the von Mises–Fisher distribution for projection pursuit of directional data.\" Electronic Journal of Statistics 15.1 (2021): 984-1033.\n\n[2] Hauberg, Søren. \"Directional statistics with the spherical normal distribution.\" 2018 21st International Conference on Information Fusion (FUSION). IEEE, 2018.\n\n[3] Galaz-Garcia, Fernando, et al. \"Wrapped Distributions on homogeneous Riemannian manifolds.\" arXiv preprint arXiv:2204.09790 (2022).\n\n[4] De Cao, Nicola, and Wilker Aziz. \"The power spherical distribution.\" arXiv preprint arXiv:2006.04437 (2020).\n\n[5] Nguyen, Khai, et al.
\"Improving relational regularized autoencoders with spherical sliced fused Gromov Wasserstein.\" arXiv preprint arXiv:2010.01787 (2020).", " \nWe thank the reviewer for their thorough review. We address their concern and questions below.\n\n**SSW is not a metric.** We would like to emphasize that we do not know yet whether or not SSW is a metric. We conjecture that it is one, but we would need an expertise in measure theory to find out whether the set of injectivity of the spherical Radon transform is null or not. Moreover, even if SSW is not a metric, it is still a pseudo distance, and we showed in our experiments that it gives performances comparable to SW, which is promising as it is defined only by using objects well defined on manifolds.\n\n**The experiments are not intensive.**\n\nWe added in the revised version of the paper an experiment on the Fashion MNIST dataset which is slightly more complicated than MNIST. We still choose a uniform prior on $S^{10}$. We obtain on this dataset better results with SSWAE compared to other methods. We report in the next table these results. We also provide here results on CIFAR10 obtained for $\\lambda=0.1$, a latent space of dimension 64, 100 epochs, and averaged over 5 runs. We see that the results are pretty closed and it is hence difficult to conclude on which is the better metric for this task.\n\n| Method \\ Dataset | Fashion | CIFAR10 |\n|------------------|---------|---------|\n| SSWAE | **43.94 $\\pm$ 0.81**| 98.57 $\\pm$ 0.35 |\n| SWAE | 44.78 $\\pm$ 1.07 | **98.5 $\\pm$ 0.45** |\n| WAE-MMD-IMQ| 68.51 $\\pm$ 2.76 | 100.14 $\\pm$ 0.67 |\n| WAE-MMD-RBF | 70.58 $\\pm$ 1.75 | 100.27 $\\pm$ 0.74 |\n| SAE | 56.75 $\\pm$ 1.7 | 99.34 $\\pm$ 0.96 |\n| Circular GSWAE | 44.65 $\\pm$ 1.2 | - |\n\nWe did not have time to run the full experiments on other datasets such as CelebA, given it requires to find the right learning parameters and regularization strengths. Nevertheless we will include them in the camera ready version of the paper. \n\n\nAbout the tiff dataset suggested by the reviewer, or other real dataset on earth. Thank you for the suggestion, as those data are naturally embed on $S^2$. Yet, how to define a meaningfull learning task from them, as well as being able to compare quantitatively competing methods, is not clear for us, and we think the corresponding illustrations would have been redundant with other results presented in the paper. Therefore, we chose to focus on tackling higher dimensional experiments such as learning the latent space of autoencoders or the self-supervised learning experiment. \n\n\n\n**The experiments on self-supervised learning should be moved to the main text.** If accepted, we will add the SSL experiment in the main text. \n\n**Sample Complexity.** We do not have a theoretical result about the sample complexity of SSW. However, we conjecture that the same type of results as for SW holds. We will add in Appendix a plot of the empirical sample complexity of W with geodesic distance and SSW (see Figure 11 of the revised version or https://ibb.co/Y05W9yS). We observe that, contrary to W and similarly to the classical SW distance, the sample complexity of SSW seems not to depend on the dimension.\n\n", " \nWe thank the reviewer for their comments and their kind words about this work. We answer their questions and concerns below.\n\n**Link with GSW.**\n\nSSW is not a particular instance of GSW for different reasons. 
First of all, GSW is based on using a defining function $g:\\mathcal{X}\\times (\\mathbb{R}^n\\setminus\\{0\\})\\to\\mathbb{R}$ where $\\mathcal{X}\\subset\\mathbb{R}^d$ and which needs to satisfy different properties. By analogy, our defining function would be defined on $\\mathcal{X}\\times\\mathbb{V}_{d,2}$ as $g(x,U)=P^U(x)$ (with value in $S^1$). \n\nBut, the defining function needs to satisfy 4 properties. One of these properties is homogeneity, i.e. that for all $\\lambda\\in\\mathbb{R}$ , $g(x,\\lambda\\theta) = \\lambda g(x,\\theta)$. In our case, if we relax $U\\in\\mathbb{V}_{d,2}$ by $U\\in\\mathbb{R}^{d\\times 2}$, then we have for all $\\lambda\\in\\mathbb{R}$, $g(x,\\lambda U) = P^{\\lambda U}(x) = \\lambda \\frac{U^T x}{\\|\\lambda U^T x\\|_2} = P^U(x)$ (using the closed-form of Lemma 1). Hence, it is not homogeneous but scale invariant (which makes sense since we are restricted on the sphere). Therefore, this Radon transform is not a generalized Radon transform and SSW does not enter in the framework of GSW. \n\nMoreover, another difference is indeed that the projection is on $S^1$ while the projection of GSW in on $\\mathbb{R}$. However, it would be interesting to see if we can define analogous generalized Radon transform on manifolds by using only objects well defined on those.\n\n**Comparison with GSW.** We compare with GSW for autoencoders and find that SSWAE performs slightly better than GSWAE. We also compared on the variational inference experiment and found similar performances between SW, SSW and Circular GSW.\n\n**Was the improved separation of the latent space for SSWAE consistent among seed?** The improved separation over the latent space for a uniform prior was observed consistently by running several time the experiments with different seeds (see e.g. https://ibb.co/k0BFx3D for 4 different latent spaces obtained with different seeds). We added this observation in the revised version of the paper.\n\n**What is the measure of variability in Figure 5? Standard deviation? Standard error?** Thank you for pointing this out. As a matter of fact, we did not specify the measure of variability and will add it in the camera ready version of the work. We plot in Figure 5 a 95% confidence interval. \n\n**Claim about performances for Variational inference.** We will change it in “SSWVI performs as well as SWVI.”\n\n**Dimension issue.** We thank the reviewer for this suggestion and will add it to the next version of the paper. Moreover, we note that we could also define the same type of variants as SW to alleviate dimensional issues related to projections (e.g. max-SSW…).", " The paper introduces a new formulation to compute the distance between probabilities distribution on the hyper-sphere, namely, the Spherical Sliced-Wasserstein (SSW). The proposed formulation involves a novel spherical Radon transform (to project to great circles) and the computation of the WD on the circle.\nThe method is validated on two tasks: variational inference (matching a target distribution up to a scale) and autoencoders with a prior distribution on the spherical latent space. The paper is quite difficult to follow in the mathematical formulation without the proper background. If possible, it would be helpful to add some intuitive explanation or figure to better understand the main steps of the proposed formulation. 
Nevertheless, the paper tackles an interesting problem, and being able to compare density distributions on a hyperspherical domain could be useful for some machine learning tasks, as shown with the (toy) applications of variational inference and autoencoders.\n\nOn this latter point, I found the experimental setup not properly explained. Most of the details are deferred to the supplementary material, which makes it difficult to understand what was done without jumping back and forth. It would also be nice to see more examples and analyses of the spherical latent space of the autoencoders (for instance, the different behaviour between datasets for which the data manifold is known to be spherical (e.g. rotations) and datasets where the data manifold is presumed to be more complex or hierarchical). I would like the authors to address my previous two concerns:\n- trying to add an intuitive layer to the mathematical formulation (possibly with supporting figures)\n- expanding the analysis on the benefits of using a spherical distribution prior with autoencoders on datasets with different characteristics. yes", " The paper proposes a new version of the Sliced Wasserstein Distance on the sphere as a first step to deal with manifold data. Section 2 recalls the definitions of the Wasserstein distance and the Sliced Wasserstein distance. In Section 3, the authors introduce the spherical sliced Wasserstein distance, mainly based on the property that the sphere can be described by a spherical coordinate system. They also define the Radon transform for the SSWD and show some properties of the Wasserstein distance on the circle. The implementation is shown in Section 4. Section 5 is devoted to applications, which include variational inference, generative modeling and density estimation. Strengths: The idea of the Spherical Sliced Wasserstein Distance is nice and interesting. The SSWD is well-defined and some of its properties are explored.\n\nWeaknesses: The applications of SSWD shown in the paper are limited. In those experiments, the SSWD does not really outperform the SWD. The authors need to find applications with spherical data, where the advantages of SSWD could be shown clearly. No. It is fine.", " The paper proposes the spherical sliced Wasserstein discrepancy, a variant of sliced Wasserstein on the sphere. To construct the discrepancy, the authors develop a new variant of the Radon Transform, the spherical Radon Transform, and utilize the closed-form solutions of the Wasserstein distance on the circle. In more detail, the spherical sliced Wasserstein is defined as the Wasserstein distance between geodesically projected measures on great circles, averaged over the uniform distribution over all projections on the Stiefel manifold. On the application side, the authors apply the new discrepancy to hyperspherical auto-encoders and density estimation on the sphere. # Strengths\n## Originality \n* The paper proposes the first variant of sliced Wasserstein on the hypersphere.\n* The spherical Radon Transform is new.\n\n## Quality\n* The authors derive the metricity of spherical sliced Wasserstein, the connection of the spherical Radon Transform and geodesic projection, and the kernel of the spherical Radon Transform. \n\n## Clarity\n* The paper is well-written and easy to follow.\n\n## Significance\n* The contribution of the paper is significant for estimating densities and modeling on the hypersphere and for the community of sliced Wasserstein and optimal transport.
\n* Spherical sliced Wasserstein is better than sliced Wasserstein on variational inference task on the sphere.\n\n# Weaknesses\n* The injectivity of spherical Radon Transform has not been established yet hence spherical sliced Wasserstein is not a metric.\n* The experiments are not intensive. The only experiment on real data in the main paper is hyper-spherical AE on MNIST. However, the improvement is not very significant compared to the conventional sliced Wasserstein. Other datasets such as CIFAR10 and CelebA should be considered. \n* The experiments on self-supervised learning should be moved to the main text. * One benefit of sliced Wasserstein is a good sample complexity which escapes the curse of dimensionality of the Wasserstein distance. Is this still hold for spherical sliced Wasserstein?\n* It seems that both spherical sliced Wasserstein and sliced Wasserstein need to use empirical samples when dealing with continuous measures in hyperspherical autoencoder. This can be seen as the usage of mini-batch OT [1,2,3,4,5]. Can we derive some special cases of spherical sliced Wasserstein where the Wasserstein distance between projected continuous measures has closed-form e.g, von Mises Fisher distribution?\n* One paper might be related [6].\n\nI will raise my score if the authors can add additional experiments on real datasets e.g., hyperspherical autoencoder on CIFAR10, CelebA, estimating density on Tiff dataset [7], and so on.\n\n[1] \"Learning with minibatch Wasserstein : asymptotic and gradient properties\" \n[2] \"Minibatch optimal transport distances; analysis and applications\"\n[3] \"Improving Mini-batch Optimal Transport via Partial Transportation\"\n[4] \"On Transportation of Mini-batches: A Hierarchical Approach\"\n[5] \"Unbalanced minibatch Optimal Transport; applications to Domain Adaptation\"\n[6] \"Improving Relational Regularized Autoencoders with Spherical Sliced Fused Gromov Wasserstein\"\n[7] \"https://sedac.ciesin.columbia.edu/data/set/gpw-v4-population-density-adjusted-to-2015-unwpp-country-totals-rev11/data-download#\" The paper develops a fundamental tool hence there is no foreseen negative societal impact.", " The paper introduces sliced Wasserstein discrepancies defined on the hypersphere $S^{d-1}$. \n\nFrom a computational perspective, this corresponds to computing an expected Wasserstein distance on great circles. \n\nFrom a theoretical perspective, the authors introduce a new spherical Radon transform that allows to formally define the proposed discrepancy. Further, this formal treatment allows to show that the proposed discrepancy is at least a pseudo-distance on the set of distributions on $S^{d-1}$ with finite $L_p$-norm.\n\nFinally, the authors conclude with various experiments ranging from simple gradient-flow examples to more sophisticated variational inference or auto encoder experiments. The paper is overall well-written and easy to follow. The theoretical claims sound reasonable to me, although I am not a mathematician and not particularly specialized in the theory of Radon transforms. Finally, the experiments are insightful and demonstrate that the discrepancy measure behaves as expected.\n\nWhile the experiments do not show a strong performance increase over \"naively\" using the standard sliced Wasserstein distance, I still believe that the paper makes an important contribution as the introduced discrepancy is clearly more suited to the space $S^{d-1}$. 
Further, Figure 6 seems to show an improved class separation in the autoencoder experiment.\n\nI see the following points for further improving the paper:\n * The authors could more clearly highlight the reason why their paper is not just a particular instance of the generalized sliced Wasserstein distance (GSW) by Kolouri et al. If I understand correctly, GSW treats the \"projected particles\" as members of the real line, and hence cannot take the particular geometry of a circle into account. Or would it be possible to define the proposed discrepancy as a particular instance of a GSW distance?\n * Following up on the previous point, the authors could additionally compare to GSW in their experiments. If the proposed method can indeed not be defined as a GSW instance, the GSW approach would I guess correspond to an intermediate version between regular SW and SSW, which uses the projection onto the great circles but does not respect the circular nature of the projected space. * Was the improved separation of the latent space for SSWAE consistent among seeds? (Figure 6)\n* What is the measure of variability in Figure 5? Standard deviation? Standard error? And how many intervals are visualized? Two times? In general, the authors do not make exaggerated claims about applicability or potential of the method. In my opinion, only the statement in line 289/290 regarding improvement in the context of VI may be a bit exaggerated/unclear, as it is not indicated which variability measure is visualized in Figure 5 and further the measures overlap. Consequently, the authors could simply say that SSWVI performs just as well as SWVI.\n\nFinally, a common problem of sliced Wasserstein distances is that the required number of projections potentially grows exponentially with dimension (as e.g. argued for in the \"Generalized Sliced Wasserstein Distances\" paper by Kolouri et al.). The authors could mention this shortcoming and further link it to Figure 8b) in the appendix, as the reduction of the $SSW_2^2$ distance with increasing dimension may indeed be explained by the aforementioned dimensionality issue of any sliced Wasserstein distance." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 3 ]
[ "849608OqCzR", "qsHaJcMp7Rs", "GBue9mdgco", "ZSt3kcwv8D2", "5pl2ad_ugwe", "nips_2022_2EufPS5ABlJ", "eFJBUYzZ1m", "nggcRNFe9Cb", "bL6DxqGgeG", "M1FZmheaQd", "Rk714vC5SSK", "nips_2022_2EufPS5ABlJ", "nips_2022_2EufPS5ABlJ", "nips_2022_2EufPS5ABlJ", "nips_2022_2EufPS5ABlJ" ]
nips_2022_xvLWypz8p8
On Margins and Generalisation for Voting Classifiers
We study the generalisation properties of majority voting on finite ensembles of classifiers, proving margin-based generalisation bounds via the PAC-Bayes theory. These provide state-of-the-art guarantees on a number of classification tasks. Our central results leverage the Dirichlet posteriors studied recently by Zantedeschi et al. (2021) for training voting classifiers; in contrast to that work our bounds apply to non-randomised votes via the use of margins. Our contributions add perspective to the debate on the ``margins theory'' proposed by Schapire et al. (1998) for the generalisation of ensemble classifiers.
Accept
All reviewers uniformly agree on the paper being interesting and worth publishing -- a very fine read. While the authors have already uploaded an updated version of their paper with minor revisions, I encourage them to use the camera-ready version to carry out further improvements taking into account all reviews.
train
[ "Vjmxlj0Qi6A", "rDancjdY8tw", "6Rn6uOxWsyO", "VrBS5Ja8h39", "AEJc-8C8QBJ", "v6QZpr6ZQ2K", "R1BFkTfGPui", "vaPHJxQNAom" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for answering my question and I am glad that my suggestion could improve the results! I am happy to see this paper accepted. ", " We thank the reviewer again for the strong support shown for our paper. Please see also our general response.\n\nWe will tidy up the references (thanks for pointing these out) and incorporate answers to the questions below:\n\n1. We used this initialisation scheme to replicate the setup in Zantedeschi et al. (2021). However, re-running experiments with the well-motivated uniform initialisation you suggested actually leads to improved results! Both test error and bound are improved in a variety of cases so we will change all results to this scheme for the revision. Thank you for this suggestion!\n\n2. Using the sklearn random forest on MNIST (as in, e.g. [this kaggle example](https://www.kaggle.com/code/ashwani07/mnist-classification-using-random-forest/notebook)) by default uses 100 trees, while we use only 10. We also differ in that the trees are trained using half of the dataset rather than full-dataset bag sizes. In this setup, random forest (with uniform $\\theta$) attains an error of 0.073 with 10 trees which is improved by most of the PAC-Bayes algorithms we study, with the exception of FO, which attains a (significantly worse) error of around 20%. This occurs because FO tends to put most of its weight on a single best tree (as observed by e.g. Wu et al., 2021, and Masegosa et al., 2020). We will be happy to report also our bound values for this vanilla random forest in Figure 1.\n\n3. You are correct that our bound in Theorem 2 can be used as a plug-in for any value of $\\theta$. However the classifiers/voters being weighted must be fixed based on a subset of the training set, whereas using Adaboost each new voter is chosen based on the entire dataset. Therefore some additional form of capacity control on these voters would be needed: if they overfit the training set generalisation may not be possible.\nAn extension to our work could consider freely choosing the voters from a class of fixed VC dimension (as in Schapire et al.), allowing bounds for e.g. boosting algorithms. Extending PAC-Bayes bounds to general VC classes is highly technical (see e.g. [1]), but is an important direction for future work.\n\n[1] PAC-Bayes, MAC-Bayes and Conditional Mutual Information: Fast rate bounds that handle general VC classes. Grunwald, Steinke, Zakynthinou, 2021.", " We thank the reviewer again for the positive evaluation of our paper. Please see also our general response.\n\nConcerning the originality of the contribution, while the use of Dirichlet posteriors has been introduced in Zantedeschi et al. (2021), their de-randomisation using margins is completely novel. The main technical result (Theorem 4) that is used for this combines the idea of sub-Gaussian concentration (more common in margins literature) with the special aggregation properties of the Dirichlet distribution. This results in a de-randomisation step which surprisingly does not depend on the number of base voters, a limitation of previous works. The tightness of this step leads to the great improvement in risk certificates compared with prior work.\n", " We thank the reviewer again for the strong support shown for our contributions. Please see also our general response.\n\nWe note that the exact expression of the bound in Zantedeschi et al. (2021) is included in the appendix (l. 
661) due to space considerations, but we will be glad to include it in the main paper in case of acceptance with the additional space given.\n\nIn Figure 2, we evaluate our bound for the different models learned by each baseline. So the $\\theta$ used is different in each case and minimises the baseline bound (given in blue) and has given test error (green). Applying our bound Th. 2 in a plug-in fashion to this $\\theta$ gives the bounds in orange, which vary as they depend on the different $\\theta$ values used, which have different training errors and generalisation capabilities. Remarkably, our bound gives tighter values here than the bounds which have been directly optimised, likely due to their loose de-randomisation.\n\n----\n\nApologies, we now realise we missed one of your questions: \"the use of such Dirichlet distributions...allows the correlation between voters to be more carefully considered.\" This comes from Zantedeschi et al., where the idea is that with a categorical distribution the Gibbs risk is simply an average of the losses of individual predictors, without talking into account how well the combination of their predictions performs. Conversely, the Dirichlet distribution gives a (stochastic) majority vote of predictors, so if the errors of base voters are de-correlated, the better performance that results can be accounted for in the bound.\n", " We warmly thank all the reviewers for their time, positive evaluation and useful feedback that is invaluable to improve the quality of our manuscript.\n\nWe are glad all reviewers found our contributions significant and appreciated the clarity and soundness of our analyses. Reviewer KkjL noted the “important contributions to the understanding of ensemble classifiers” and called our majority vote result “very insightful”, noting that overall our results “turn[s] out tighter than prior work as well as empirically perform[s] well makes it very strong (and cool!)”. Reviewer hEGP noted that empirical results are “significant improvement compared to the state of the art” and called the manuscript “very well written (clear)”. This sentiment was shared by Reviewer uYQj calling it “well-written and easy to follow”, noting that the work “combines different aspects from theory into a non-trivial, novel bound”.\n\nAs an additional note to all reviewers, we would like to emphasise that our empirical results are not only a considerable improvement over existing ones but in many cases are effectively sharp, i.e. tightly bound the true test error. Therefore they cannot actually be further improved. We see our work therefore not only as a theoretical contribution, but one which could in certain cases be practically used, since it may be possible to avoid the use of a hold-out test set entirely. This would allow more data to be allocated to training.\n\n\n-----\n\nUPDATE: We thank all the reviewers again for their helpful comments. We have uploaded a minor revision with the margin bound figures updated to the new initialisation suggested by Reviewer uYQj. Additional commentary taking up additional space will be added in the camera ready in case of acceptance.", " On Margins and Generalisation for Voting Classifiers\n\nIn this paper, the authors take a generalization bound for stochastic majority votes from prior work and show how to derive a new margin bound for majority vote algorithms using Dirichlet priors. As stepping stone they prove a new result relating the misclassification loss to the stochastic voting loss. 
This bound is empirically verified to have better characteristics in that it comes closer to accurately predicting test set error than related work. The authors also derive an optimizable target version of their bound which appears to yield good results compared to other similar approaches in the literature on a variety of datasets. This paper, although very dense, makes important contributions to the understanding of ensemble classifiers, in particular to the margin argument for why such classifiers generalize. It struck me as very insightful to be able to use the stochastic voting bound to prove the majority voting result. That the result also turns out to be tighter than prior work, as well as empirically performing well, makes it very strong (and cool!). \nOn the whole, I think this is a strong paper that uses very insightful connections from recent prior work to improve the state-of-the-art generalization bounds on voting classifiers. Two minor points: I would have liked to see a precise statement of Zantedeschi et al.'s bound in section 2.3. In particular, I missed the intuition behind \"the use of such Dirichlet distributions...allows the correlation between voters to be more carefully considered.\" \nSecond, I missed why in figure 2 the \"our bound\" results change for every baseline. There is no specific limitations section, but the authors do describe gaps between their results and empirical results as well as cases when their bounds may not be accurate (albeit very briefly)", " This paper deals with the theory of (multi-category) pattern classification. The authors introduce a new PAC-Bayes bound for majority voting of finite ensembles of classifiers. It is based on Dirichlet posteriors. A variant is proposed to serve as the objective function of a training algorithm (to train the model itself). Those two bounds are compared with the state-of-the-art ones in the framework of an empirical study. The contribution appears to bring a significant improvement compared to the state of the art, even though it is not that original, since it is close to a result by Zantedeschi et al. (2021). I found it technically sound. It is noticeable that the paper is very well written (clear). A few typos should be corrected. Two examples:\n- line 93: complimentary -> complementary\n- line 470: pac-bayesian -> PAC-Bayesian\n\nNot applicable", " The paper \"On Margins and Generalisation for Voting Classifiers\" proposes a novel generalization bound for voting classifiers (i.e. ensembles) using a Dirichlet posterior over the ensemble members. The novel bound explores a new direction for generalization bounds for ensembles and also seems to have decent results in practice. The paper is generally well-written and easy to follow. The theoretical contribution is clearly stated and the experimental evaluation seems decent. I like the paper and I do not have much to discuss about it, to be honest. The paper combines different aspects from theory into a non-trivial, novel bound. It is well-written and easy to follow. The quality of the math seems sound, although I admit that I only skimmed the appendix. I believe the use of the Dirichlet posterior is an interesting idea and can be used as a starting point for future research. I have some minor remarks regarding the references that can easily be fixed for a camera ready, and some questions w.r.t. the weights of each member in the ensemble (see below).
In short: \n\n- (+) Novel and non-trivial combination of different results from literature\n- (+) Generally well-written and comparably easy to follow\n- (+) Decent empirical evaluation \n- (-) High test error in the empirical evaluation (see my question below)\n- (-) The citations should be improved for more consistency. I suggest to double-check references {1,14,16,17,23,24,26,36,47} via https://dblp.uni-trier.de/ and copy/paste the extended Bibkeys 1) In the appendix you write that you initialize $\\theta$ uniformly in $[0.01, 1]$ and then optimize over it. I assumed that one would initialize $\\theta$ as the weight of each ensemble member for random forests, i.e. $\\theta_i = 1/|\\mathcal H|$ in the case of random forest. Why do you use different values here? Did I miss something?\n2) The results in Fig 1. / 2. are comparably bad to e.g. classic RF. For example, a vanilla RF has a performance around $> 97 $% on mnist whereas you report errors in the range of $10-20 $%. I understand that raw performance is not the main focus of this paper (and hence I dont really care), but I am curios why there is this large performance gap. Do you think this is because $\\theta$ is optimized afterwards?\n3) I expected a generalization bound to be used as a \"plug in\" explanation for any voting classifier, e.g. as in Theorem 2. Now I wonder what the impact of $\\theta$ is here. From my point of view, $\\theta$ are simply the weights of the individual classifiers (e.g. equal for Random Forests or (normalized) floats for boosting). Is there any harm in using Theorem 2 to compute the error of, e.g. AdaBoost and if not, did you try that? There might be some limitations wrt. to $\\theta$, but I am not sure about this. Please see my question above." ]
[ -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "rDancjdY8tw", "vaPHJxQNAom", "R1BFkTfGPui", "v6QZpr6ZQ2K", "nips_2022_xvLWypz8p8", "nips_2022_xvLWypz8p8", "nips_2022_xvLWypz8p8", "nips_2022_xvLWypz8p8" ]
nips_2022_LC1jyMUalIA
Transferring Textual Knowledge for Visual Recognition
Transferring knowledge from task-agnostic pre-trained deep models to downstream tasks is an important topic in computer vision research. Along with the growth of computational capacity, we now have open-source Vision-Language pre-trained models at large scales, both in model architecture and in amount of data. In this study, we focus on transferring knowledge for vision classification tasks. Conventional methods randomly initialize the linear classifier head for vision classification, but they leave the usage of the text encoder for downstream visual recognition tasks unexplored. In this paper, we revise the role of the linear classifier and replace the classifier with the embedded language representations of the object categories. These language representations are initialized from the text encoder of the vision-language pre-trained model to further utilize its well-pretrained language model parameters. The empirical study shows that our method improves both the performance and the training speed of video classification, with a negligible change in the model. In particular, our paradigm achieves the state-of-the-art accuracy of 87.3% on Kinetics-400.
Reject
The paper aims to study the idea of transferring textual knowledge from vision-language pretrained models to visual recognition, or specifically the adaptation of CLIP for downstream visual recognition tasks. The authors proposed to revise the role of the linear classifier and replace the classifier with the embedded language representations of the object categories. The idea is simple (and somewhat trivial) and the authors demonstrated some promising results in experiments. Despite the positive aspects, there are several major concerns with this paper: 1) the technical depth of the method is weak (the paper only made a minor change to the paradigm of using vision-language pretrained models), 2) the novelty of the idea is limited; in fact the idea of transferring text knowledge or zero-shot/few-shot adaptation of CLIP for downstream visual recognition tasks has been extensively studied in CLIP and many of its variants, but there is a lack of comparisons with those works. 3) the empirical study is not convincing and comparisons are not extensive (many CLIP variants and related baselines are not compared); also the related work section was poorly written, with much related work missing on recent advances of CLIP and video-related CLIP variants. Overall, the paper has a simple, interesting idea that may be worth further investigation, but the paper is not strong enough for publication.
train
[ "ACsO_ukx6Y", "O4zIrsOlnLt", "Onbjioc9EP6", "3G-usGXgFHl", "zvVUBZrF2KM", "1I1eqPxwr0z", "ntsFd6fJfgE", "6deBIOduCBn", "acrOG2KO5IM1", "uPxRObFqIPf", "jyRpDCMioFO", "ynEPHFueR1", "TWK_fWZTy9i", "ZlhQgurCs1y", "v2XIz1L4zQn" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer L6WZ:\n\nWe are glad that our responses addressed all your concerns and resolved the questions! And you said you will update your review accordingly.\n\nHowever, we observe that the score has not been updated yet.\n\nJust a friendly reminder that the deadline for updating your review is in two hours.\n\n\nBest,\n\nPaper 2329 Authors.", " Dear Reviewer yU63:\n\nWe are glad that our responses addressed your concerns and provided more clarity! \n\nThanks for your recognition of our work. Please let us know if you have any unclear parts of our work on the last day.\n\nBest,\n\nPaper 2329 Authors.", " Dear Reviewer L6WZ:\n\nWe are glad that our responses addressed all your concerns and resolved the questions! !\nThank you for updating your review accordingly. Please let us know if you have any unclear parts of our work on the last day.\n\n\n\nA minor note — There may be something wrong with the system since the score has not been updated yet. Could you kindly edit the review again?\n\nBest,\n\nPaper 2329 Authors.", " Thank you for your detailed responses. I believe all my questions have been clarified and will update my review accordingly.", " Hi,\n\nThe provided comments mainly answer my concerns and provide more clarity. Please include all the details in the revised paper. Based on the response, I raise my score to Borderline Accept. ", " Dear reviewer KzRg:\n\nThank you for your reply!\nWe will include the limitation and result of efficiency and in our final version. \n\nWe really appreciate your recognition of this work!\n\n\n\nBest,\n\nPaper 2329 Authors.", " Dear reviewer L6WZ:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work. Since there is only one day left in the discussion window, we sincerely hope that you would not miss our response.\n\nBest,\n\nPaper 2329 Authors.", " Dear reviewer yU63:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest,\n\nPaper 2329 Authors.", " Thank you for thoroughly addressing my concerns. My concerns are covered by the responses.\nI would like to encourage the authors to include the results of efficiency (in A1) in the final revision, which demonstrates the superior efficiency of the model. Thanks for the limitation in A3, I think the response makes sense and is reasonable.\nAfter reading through this and other responses, the extra experimental results and explanations make the paper better. In my opinion, this paper is simple yet effective, provides significant performance, and could serve as a new standard for more future works in the video recognition field. I would like to raise my confidence to 5 and keep my rating at 7-accept.", " Thank you for your comments. We would like to answer your concerns.\n\n---\n\n> Q1: There's some prior work transferring text embeddings from vision-language models to visual recognition tasks: [1]\n\n> [1] Gupta et al. Aligned Image-Word Representations Improve Inductive Transfer Across Vision-Language Tasks. 
ICCV 2017\n\n\nA1: Thanks for letting us know about this paper. We will cite it in the revision. Although the method described in [1] employs a learning objective similar to ours, we are addressing different problems: [1] works on jointly learning aligned feature embeddings for 3 specified tasks (i.e., object recognition, attribute recognition, and visual question answering), and they embed the text labels using Word2vec. While we are focusing on:\n\n1) how to properly finetune visual recognition tasks from the large-scale pre-trained image-text model. \n\n2) exploring what types of the fixed classifier are optimal and examining how to leverage the inter-class correlation among the semantic information of different action categories.\n\n\n\n---\n\n> Q2: The description of the LDA formulation was a bit difficult to follow. It references a pretrained model on a certain training split, but it's not clear which model and which dataset this is referring to.\n\nA2: The details are provided in Section 4.2 (lines 202-206). We use the **CLIP-pretrained visual encoder** to extract video embeddings of the training split of the **Kinetics-400 dataset**.\n\n---\n\n> Q3: Is a vision-only model first trained on the data, after which LDA is fit, then used to initialize W for a newly trained vision model?\n\n\nA3: We directly use the official CLIP-pretrained visual encoder to extract video embeddings, and the visual encoder is `not finetuned` on Kinetics-400. Then we perform LDA on the pre-extracted video embeddings of the training set in Kinetics-400 to initialize W and freeze it for finetuning the visual encoder on the Kinetics-400 dataset.\n\n\n\n---\n\n> Q4: The authors should probably verify whether there's any overlap in training data between the pretrained CLIP model and the downstream task's data.\n\n\nA4: In this paper, we mainly focus on the video recognition task with the Kinetics dataset. As shown in Fig. 17 of the official CLIP paper, CLIP performed a data overlap analysis on the Kinetics-700 dataset. They observe that there is less than 1% overlap, and many of the overlapping samples on Kinetics-700 are in fact all-black transition frames. They then evaluate on the overlapping data. The results show no performance improvement on Kinetics-700, and even an apparent 20% accuracy drop on the overlapping data.\n\n\n---\n\n> Q5: For correctness' sake, I don't believe it's accurate to state the dimensionality reduction as part of the LDA algorithm, but rather something that can be done within the LDA framework.\n\n\nA5: Thanks for the comment! First, our previous version states that \"LDA first projects the feature vectors into a lower dimension space\" because of the first sentence \"One way to view a linear classification model is in terms of dimensionality reduction.\" from the LDA introduction in Chapter 4.1.4, page 186 of the renowned textbook [2].\n\nTo avoid being controversial, we will change the sentence \"Intuitively, the LDA first projects the feature vectors into a lower dimension space that maximizes the inter-class covariance and then estimates the likelihood of a sample to the class distributions.\" to \"Intuitively, the LDA simultaneously maximizes the inter-class covariance and minimizes the intra-class covariance.\"\n\n\n[2] C. M. Bishop. Pattern recognition and machine learning. Vol. 4. No. 4. 
New York: Springer, 2006.\n\n---\n\n\n> Q6: If dimensionality reduction is performed with LDA, it would be necessary to ablate whether the results would be better or worse without it, as the overall results seem really close to that of the proposed initialization method.\n\n\nA6: Sorry for the confusion. LDA is commonly used for feature classification or feature dimensionality reduction. However, in this work, we only use LDA for `feature classification` (in order to get the \"discriminant coefficients\" as the classifier), not for feature dimensionality reduction. \n\n\nFor better understanding, we show the code which generates the LDA coefficients; note that `there is no dimension reduction`.\n\n```\nimport numpy as np\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA\n\n# load pre-extracted CLIP video embeddings and their class labels\ndata = np.load('feats_labels_400class.npz')\nfeats = data['feats']    # size: [24000, 512]\nlabels = data['labels']  # size: [24000,]\nlda = LDA()\nlda.fit(feats, labels)\n# the discriminant coefficients serve as the (frozen) classifier weights\nclassifier = lda.coef_   # size: [400, 512]\n```\n\n---\n\nAs the major concerns come from the confusion about the LDA classifier, we respectfully ask the reviewer to consider increasing the score if satisfied with the explanations above. Please do not hesitate to contact us if there are other clarifications or experiments we can offer.", " Thank you for your appreciation of this work.\n\n---\n\n> Q1: The authors may include more discussion about the computational cost and efficiency issues.\n\nA1: Thank you for your suggestion. In the following table, we present the computational cost and efficiency of our models, where **\"vid/s\"** represents the average number of videos processed per second; a larger \"vid/s\" means higher efficiency. We follow the common inference setting and measure throughput on a single NVIDIA A100 GPU with a batch size of 16. \n\nOur models achieve superior throughput and fewer FLOPs compared with previous transformer-based methods on the Kinetics-400 dataset.\n\n\n\n\n\n| Method | Frames | Top-1 | FLOPs | Params | Throughput |\n| --- | --- | --- | --- | --- | --- |\n| ViViT-L/16-320 | 8 | 81.3% | 3992G | 310.8M | 4.2 vid/s |\n| **Ours ViT-B/32** | 8 | 78.5% | 23.7G | 71.6M | 322.5 vid/s |\n| **Ours ViT-B/16** | 8 | 81.5% | 90.3G | 69.9M | 126.5 vid/s |\n| **Ours ViT-L/14** | 8 | 85.4% | 415.4G | 230.4M | 35.5 vid/s |\n\n\n\n---\n\n> Q2: What are the training details for few-shot video recognition?\n\nA2: Details are provided in Section 4.3, lines 268-274. In this paper, we study a more challenging K-shot, all-way situation. For the results in Table 8, we scale the few-shot task up to categorize all the action categories in the three datasets (i.e., Kinetics-400, UCF-101, HMDB-51) with just one sample (i.e., K=1) per category for training. All training details are the same as for all-shot video recognition (those in other experiments) except for more epochs (i.e., 200 for few-shot video recognition vs. 30 for other experiments).\n\n\n---\n\n> Q3: The authors may include the study of failure cases to help the readers to understand the limitation of this work.\n\nA3: We will discuss the following limitation in the revision:\n\nWhen the annotated category labels do not contain semantic information, their textual features will obviously not contain semantic knowledge. 
For example, if the class labels are numerical values such as 0, 1, 2, etc., rather than semantic descriptions such as \"eating burger\", \"eating cake\", \"eating chips\", etc., the textual classifier will not contain knowledge, and it is better to use the LDA classifier.\n\n\n---\n\n> Q4: What's the benefit of using randomized orthogonal matrix?\n\nA4: \n1) We would like to clarify that the randomized orthogonal matrix is just one of the four possible initialization methods. The randomized orthogonal matrix is not what we advocate. Our proposed initialization is the fourth, i.e., textual embedding vectors.\n2) Benefits of the randomized orthogonal matrix: we remove the inter-class correlation of the classifier by using a randomized orthogonal matrix. As expected, this initialization has inferior performance.", " Thank you for your comments. We would like to answer your questions.\n\n---\n\n> Q1: I find that the paper is quite confusing and hard to read. I would suggest re-writing some parts by specifying clearly what is the input, output of the model and what aims to be learnt or not (Section 2, lines 169-181, but not only limited to this).\n\n> Also, please be more specific about the downstream task and clearly specify what is the goal of the task, the input and output of the model.\n\n---\nA1: Sorry for the confusion. **Video recognition** is our downstream task: it takes a video as input, which is then fed into a learned model to estimate the action category of the video. The default pipeline of video recognition is described as follows.\n\n\n* `Input:` The input has a size of 8x224x224x3 for 8 frames sampled from the video.\n* `Video encoder:` The input above is fed into the learnable visual encoder to get the video embedding (e.g., of size 1x512).\n* `Output:` The model's output is a vector (size: 1x400) which provides the prediction value for each class. Specifically, the video embedding (size: 1x512) from the video encoder is passed to a classifier (size: 400x512) to produce the output vector.\n\n\n**The learnable part**: The classifier in our paradigm is initialized from the textual embedding of the class names and then frozen (fixed), leaving only the parameters in the **video encoder** to be learned. Our novelty is in appropriately initializing the classifier.\n\n\n---\n\n> Q2: Suggestions: Add some visual highlights to the tables (bold, italic, etc) so that the viewer knows what to look at.\n\nA2: Thank you for your suggestions. We will highlight the key results of the tables in the revision.\n\n---\n> Q3: Any insights on why DistilBERT performs the same as CLIP in Table 1?\n\nA3: Both DistilBERT and CLIP are pre-trained with large-scale data, so they both have strong language modeling capabilities and can generate **good semantic targets**. Although the good semantic targets generated by DistilBERT are not aligned with the visual features of CLIP, it is easy to fit them with trainable visual encoders. Our observations in the experiment also validate this: the loss of DistilBERT is higher than that of CLIP in the early stage, but it quickly decreases to the same level.\n\n\n\n---\n\n> Q4: Why is there no comparison with other methods on the Image Recognition task?\n\nA4: In the ImageNet experiment of the original submission, our main purpose is to show that our approach can significantly speed up model convergence, so training the ViT-L/14 takes only 10 epochs to achieve 87.12%. 
Notably, instead of a large number of epochs (i.e., 300), complex augmentation (i.e., Gaussian Blur, Solarization, MixUp, etc.) and various optimization tricks, we train models on ImageNet with only 10 epochs and the most basic augmentation (i.e., Random Crop).\n\nHere, we show the results of the ViT-L model trained with our method for 10/30 epochs compared with other methods trained for 300 epochs, as follows.\n\n\n| Method | Top-1 | Epochs | \n| --- | --- | --- |\n| DeiT | 84.9% | 300 |\n| MLP-Mixer | 85.3% | 300 |\n| Meta Knowledge Distillation | 86.5% | 600 |\n| **Ours** | **87.1%** | **10** |\n| **Ours** | **87.9%** | **30** |\n\n---\n\nAs the major concerns come from the question of what the video recognition task is, the missing visual highlights in the experiment tables, and the comparisons on image recognition, we respectfully ask the reviewer to consider increasing the score if satisfied with the explanations above.", " This paper revises the role of widely used classifiers from a novel perspective that has been overlooked. Based on the vision-language pre-trained model, this work introduces text priors into the standard recognition framework in a fixed-classifier fashion without additional training cost. A variety of different fixed classifiers are also discussed, and qualitative and quantitative results are provided. This work presents a simple yet effective paradigm for visual recognition. Prior to this, the traditional recognition paradigm often used pre-trained visual encoders with a randomly initialized classifier to finetune downstream recognition models without considering pre-trained text encoders. This study showed how properly using visual-language pre-trained models can significantly improve recognition performance and speed up model convergence.\n Strengths:\nThis work democratizes training on large-scale video/image datasets (i.e., Kinetics400, ImageNet) to a certain extent, achieving good accuracy with only a few epochs and also performing well in few-shot/zero-shot scenarios. As far as I know, it seems to be the first work to achieve 87+% on Kinetics400 using only publicly available pre-trained models, indicating that it is reproducible and can inspire future works. And it even performs better than works that used large-scale pre-trained models (e.g., JFT-3B/JFT-300M/FLD-900M) that were never open source.\n\nWeakness:\n- The authors may include more discussion about the computational cost and efficiency issues.\n- What are the training details for few-shot video recognition?\n- The authors may include the study of failure cases to help the readers to understand the limitation of this work.\n- What's the benefit of using randomized orthogonal matrix?\n Please refer to my questions in the weakness part for details. Yes.", " This work proposes a method of transferring vision-language pre-training models to downstream visual recognition tasks. To do so, they propose to initialize the final linear classification layer with corresponding embedding values from a pretrained CLIP model, generating embeddings by passing the class label as a prompt through CLIP. The classification layer is frozen after initialization, forcing the vision pipeline to optimize around the pretrained embeddings. Experiments demonstrate improvements in video and few-shot image recognition downstream tasks. strengths\n- Proposed method is simple yet an effective means of repurposing trained embeddings for a different downstream task\n- Large improvement in experimental settings over the standard pipeline. 
\n- Notably, they also propose a strong baseline based on linear discriminant analysis\n\nweaknesses\n- There's some prior work transferring text embeddings from vision-language models to visual recognition tasks: [1]\n- The description of the LDA formulation was a bit difficult to follow. It references a pretrained model on a certain training split, but it's not clear which model and which dataset this is referring to.\n- The authors should probably verify whether there's any overlap in training data between the pretrained CLIP model and the downstream task's data.\n- For correctness' sake, I don't believe it's accurate to state the dimensionality reduction as part of the LDA algorithm, but rather something that can be done within the LDA framework.\n- If dimensionality reduction is performed with LDA, it would be necessary to ablate whether the results would be better or worse without it, as the overall results seem really close to that of the proposed initialization method.\n\n\n\n\n[1] Gupta et al. Aligned Image-Word Representations Improve Inductive Transfer Across Vision-Language Tasks. ICCV 2017\n - It would be great if the authors could clarify the LDA formulation, as it wasn't very clear to me. Is a vision-only model first trained on the data, after which LDA is fit, then used to initialize W for a newly trained vision model? limitations not discussed", " The paper tackles the problem of transferring knowledge from pre-trained vision-language models to other downstream tasks. It proposes a new paradigm that makes use of the pre-trained text embedding to initialize the weights of the linear classifier head when testing on downstream tasks. Finally, the authors test their approach on Visual recognition and Image recognition. The paper tackles an important problem and seems to have promising results.\n\nI find that the paper is quite confusing and hard to read. I would suggest re-writing some parts by specifying clearly what is the input, output of the model and what aims to be learnt or not (Section 2, lines 169-181, but not only limited to this).\n\nAlso, please be more specific about the downstream task and clearly specify what is the goal of the task, the input and output of the model. \n\nSuggestions\nAdd some visual highlights to the tables (bold, italic, etc) so that the viewer knows what to look at. 1. Any insights on why DistilBERT performs the same as CLIP in Table 1?\n\n2. Why is there no comparison with other methods on the Image Recognition task?\n\n There is no discussion related to the societal impact" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "3G-usGXgFHl", "zvVUBZrF2KM", "3G-usGXgFHl", "uPxRObFqIPf", "ynEPHFueR1", "acrOG2KO5IM1", "ZlhQgurCs1y", "v2XIz1L4zQn", "jyRpDCMioFO", "ZlhQgurCs1y", "TWK_fWZTy9i", "v2XIz1L4zQn", "nips_2022_LC1jyMUalIA", "nips_2022_LC1jyMUalIA", "nips_2022_LC1jyMUalIA" ]
nips_2022_NL05_JGVg99
Open-Ended Reinforcement Learning with Neural Reward Functions
Inspired by the great success of unsupervised learning in Computer Vision and Natural Language Processing, the Reinforcement Learning community has recently started to focus more on unsupervised discovery of skills. Most current approaches, like DIAYN or DADS, optimize some form of mutual information objective. We propose a different approach that uses reward functions encoded by neural networks. These are trained iteratively to reward more complex behavior. In high-dimensional robotic environments our approach learns a wide range of interesting skills including front-flips for Half-Cheetah and one-legged running for Humanoid. It is the first skill discovery algorithm that can learn such skills without relying on any form of feature engineering. In the pixel-based Montezuma's Revenge environment our method also works with minimal changes and it learns complex skills that involve interacting with items and visiting diverse locations.
Accept
After a strong rebuttal from the authors and an extensive discussion among the reviewers, I believe the paper's pros outweigh its cons and this paper will be a valuable contribution to NeurIPS. I recommend it for acceptance and encourage the authors to address the reviewers' comments for the camera-ready version of the paper, especially regarding the newly added baselines and other comparisons to SOTA approaches listed by the reviewers.
train
[ "lTiBmhc4k9J", "Vh87CsY9FCa", "Eleen-Hzvx", "ULSg9jBPqts", "aGkfLxzJ8ge", "kQDMm2GYomX", "1RvKXNmM9Sw", "T49wu7SFhzFn", "9IWGK-2IYYvf", "-sIn_t_Rnx8", "Xxn2dR6f4eO", "zjZChC2B8XWl", "txkvjuM64kg", "Nw1JwwvdI8o", "LLsbYkdKD4o", "TgQvw8hdmKd", "ykWrjbSYuJD", "S6cSnxiJBg" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate that reviewers have thoroughly gone over the paper, our responses and gave very useful feedback. We'll continue working hard to add another Intrinsic Motivation baseline with checkpointing for the final version of the paper. On top of improving our baselines, we believe this builds new connections between the skill discovery literature and the Intrinsic Motivation one.", " Thank you to the authors for the thorough response and clarifications to my questions. After reading the discussion, I'm inclined to agree with tSXg that checkpointing to extract skills from exploration style rewards is an interesting direction, to my knowledge unexplored in prior work. Since the authors have addressed many of my concerns and provided additional helpful baselines, I have increased my score. ", " > By design, these exploration methods cover a wide region of the state space. This is because they reward all states that were not visited yet and thus many different parts of the state space are rewarding. On the contrary, our approach only rewards a specific and different region of the state space at each iteration. This can also be seen in the new plots for the RND baselines in Appendix J. Especially in the Ant environment each checkpoint covers a very large part of the state space and different checkpoints overlap to a very high degree.\n\nI looked at the new plots in the appendix, and for Ant it does seem the skills are better clustered than for RND.\n\n> No previous work that we are aware of used this checkpointing mechanism for skill discovery. Thus, the checkpointing is a modification of previous work. This checkpointing method could also be used as a baseline for all other skill discovery works. We propose a method that connects objectives similar to intrinsic motivation and skill discovery. But, we agree with the reviewer that the comparison to checkpointing these prior work is interesting. This is why we added the RND checkpoint baseline and believe that the results highlight the differences to mere checkpointing.\n\nOk this makes sense.\n\nOverall the above responses do address my main concerns about the contributions of the work, and I'll raise my score to a borderline accept. I would strongly encourage the authors to add a comparisons to the SOTA approaches I listed earlier in the camera ready if the paper is accepted.", " We thank the reviewer for their additional response. We are happy to hear that our responses and additional experiments are improving the paper. In light of the new results we want to respond to a few of the points you raised.\n\n> And intuitively I would expect that saving checkpoints in this way for any of the exploration methods I listed in the original review would probably also give skills which correspond to different regions of the state space. \n\nBy design, these exploration methods cover a wide region of the state space. This is because they reward all states that were not visited yet and thus many different parts of the state space are rewarding. On the contrary,our approach only rewards a specific and different region of the state space at each iteration. This can also be seen in the new plots for the RND baselines in Appendix J. Especially in the Ant environment each checkpoint covers a very large part of the state space and different checkpoints overlap to a very high degree. \n\n> I think it's great the authors are adding an RND baseline. 
I would also add that there are much more recent and better performing approaches like SMM [7] and LEXA [4] …\n\nWe decided to implement RND because RND is probably the most prominent recent intrinsic curiosity method and has been reused by other researchers to yield impressive results in large-scale experiments, e.g. Agent57 and NeverGiveUp. \n\n> My concern was more that if this is how skills are recovered, this same process can be applied to pretty much any prior work including the intrinsic motivation papers. \n\n> and the claim in the original work that \"However, these approaches do not learn discrete skills that can be composed or fine-tuned for fast learning of new tasks.\" is not accurate.\n\nNo previous work that we are aware of used this checkpointing mechanism for skill discovery. Thus, the checkpointing is a modification of previous work. This checkpointing method could also be used as a baseline for all other skill discovery works. We propose a method that connects objectives similar to intrinsic motivation and skill discovery. But, we agree with the reviewer that the comparison to checkpointing these prior works is interesting. This is why we added the RND checkpoint baseline and believe that the results highlight the differences to mere checkpointing.\n\nTo our knowledge, our work is the first one to showcase good performance on the Humanoid environment without any form of feature engineering. The checkpointing algorithm also outperforms DIAYN and DADS. This is also an interesting finding on its own. But, as already discussed, our method has additional advantages over mere checkpointing. We hope that our response and the new results can improve your evaluation of our work.\n", " Since our first response we worked hard on preparing additional results for the paper. Here we post these results and discuss them. We will add these new results to the paper. Additionally, we will adapt the text of the paper according to the useful feedback of the reviewers. This includes adding the Limitations section and updating the Related Work section to discuss the similarities and differences to work in intrinsic motivation in more detail. We will also take into account the smaller suggestions made by individual reviewers.\nDue to time constraints, we cannot update the manuscript until the end of the rebuttal period but will work on it in the coming weeks. Nonetheless we hope that the strengthening of our results by the additional experiments and the proposed improvements to the text lead the reviewers to vote favorably for the paper.\n\n**DADS baseline** \n\n\n| Method | MI-metric |\n|--------|----------|\n| Ant full method | 1.33 ± 0.11 | \n| Ant DADS full | 0.33 ± 0.06 | \n| Humanoid full method | 1.29 ± 0.25 | \n| Humanoid DADS full | 0.23 ± 0.05 | \n\nWe ran a DADS baseline with the hyperparameters tuned by the original authors of the DADS paper. We ran seven seeds in each environment and will run three more seeds to match the ten seeds of our experiment. In the Humanoid environment the DADS agent does not learn to survive at all, i.e. it never reaches more than 200 reward in the original task, compared to the 9k of our approach. We will add this baseline to the paper. 
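\n\nFor reference, the MI-metric above estimates the mutual information between the skill index and the (discretized) x velocity of evaluation rollouts. A minimal sketch of such an estimate with scikit-learn (the rollout counts, the binning scheme and all variable names are illustrative assumptions, not our exact evaluation code):\n\n```\nimport numpy as np\nfrom sklearn.metrics import mutual_info_score\n\n# placeholder data: 50 skills with 20 evaluation rollouts each\nskills = np.repeat(np.arange(50), 20)\n# stand-in for the measured mean x velocity of each rollout\nx_vels = np.random.randn(skills.size)\n# discretize the continuous velocity before estimating MI\nbins = np.histogram_bin_edges(x_vels, bins=20)\nprint(mutual_info_score(skills, np.digitize(x_vels, bins)))  # MI in nats\n```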
\n\n**Random Network Distillation baseline** \n\n| Method | MI-metric | zero-shot transfer | \n|--------|----------|----------|\n| Ant full method | 1.33 ± 0.11 | 2506 ± 511 |\n| Ant RND | 0.27 ± 0.15 | 50 ± 189 |\n| Humanoid full method | 1.29 ± 0.25 | 9092 ± 1063 | \n| Humanoid RND | 0.94 ± 0.17 | 7734 ± 1977 | \n\n\nWe implemented Random Network Distillation (Burda et al. 2018) for the robotic environments. We ran a hyperparameter search for the RND-specific hyperparameters with Optuna. We searched for 100 (Ant) and 60 (Humanoid) trials of 200M timesteps, optimizing the MI metric of the saved checkpoints. We train both the agent and the prediction networks online and save a checkpoint every 20M timesteps. This is the same number of steps as we use for each skill. \n\nThen, we ran the best hyperparameters for 6 (Ant) and 10 (Humanoid) random seeds until they created 50 skills each. In the above table we report the results and refer to Figures 12 and 13 in Appendix J, where we added plots in the style of Figure 6 for each of the seeds. We will run four additional seeds in the Ant environment to match the ten seeds used in all other experiments.\n\nWe can see that our method outperforms RND heavily on the Ant environment. In the Humanoid environment the difference is smaller, but still noticeable. When looking at the plots in Appendix J we can also see quite a noticeable difference between the two environments. In the Ant environment, the states visited by the policy overlap almost completely and each skill covers a large region of the state space. In the Humanoid, they still overlap and seem to be broader than the ones for our method (cf. Figure 9), but the difference is less striking. \n\nAs we already discussed in our first response, the goal of methods like RND and ICM (Pathak et al. 2017) is to give rewards in all novel states. At any point in time the reward function rewards many different behaviors. This leads to broader skills compared to our method, which only rewards a specific region of the state space in each iteration. We believe that these RND experiments highlight this difference between our method and the intrinsic motivation literature. In the Humanoid environment, there is a greater pressure from the environment for deterministic skills. Skills with higher entropy result in less stable gaits. Thus the skills are forced to be more narrow by the environment. In contrast, the robot in the Ant environment is more stable and thus this pressure is not as strong. We believe that this difference between the two environments results in the significant difference in empirical performance and the qualitative evaluation of the skills. \n\n\n**Ablations for the robotic environments**\n\n\n| Method | MI-metric | zero-shot transfer | \n|--------|----------|----------|\n| Ant full method | 1.33 ± 0.11 | 2506 ± 511 |\n| Ant policy ablation | 1.09 ± 0.16 | 1731 ± 634 |\n| Ant value ablation | 1.28 ± 0.15 | 2246 ± 794 |\n| Humanoid full method | 1.29 ± 0.25 | 9092 ± 1063 | \n| Humanoid policy ablation | 1.01 ± 0.09 | 8906 ± 616 | \n| Humanoid value ablation | 0.88 ± 0.19 | 7356 ± 816 | \n \nIn this table we report the full results of the ablation experiment discussed in the first response, with five additional seeds to match the ten seeds of the original method.\n", " Thanks to the authors for the additional information. 
\n\n> The reviewer is right to point out that our method has quite some resemblance to methods used in intrinsic motivation like Random Network Distillation.... Our method takes turns in learning a new reward function until convergence. It uses a previously sampled data set of experience....\n\nGot it, I understand that in the proposed approach you are iteratively learning a reward function from scratch with negatives as seen data and positives as random exploration from the boundary. And I agree that implementing it this way, with re-learning a new reward and policy from scratch iteratively, is different from the way it is implemented in prior work. My point is more that at convergence this objective is maximizing state-space coverage - something that many prior works also optimize for. And many of those works [4, 7] both maximize state-space coverage *and* learn skills in some form, either through a mixture of policies or through goal-conditioning.\n\n> As already discussed in the general response, we believe that this approach has some advantages for learning skills. In particular, our reward functions correspond to a specific region in state space, cf. Figure 4d). When using intrinsic motivation, everything which the agent has not seen would still be rewarding. This means that check-pointing methods from intrinsic motivation would not yield reward functions which only correspond to the currently learned skill. \n\nI'm not fully convinced by this line of reasoning. Again, the way the proposed approach gets \"skills\" is by saving model checkpoints. I agree this is a totally valid way of getting skills, but it is also applicable to *all* prior methods as well. And intuitively I would expect that saving checkpoints in this way for any of the exploration methods I listed in the original review would probably also give skills which correspond to different regions of the state space. For methods like [7] this is almost certainly the case since they use a mixture of policies. And again I think this is something that would be addressed by more comprehensive experiments.\n\n> As already discussed above, we will highlight the difference from our method to these intrinsic motivation works more clearly. We are working on adding an RND baseline for the robotic environments...\n\nI think it's great the authors are adding an RND baseline. 
We believe that it is an interesting line of investigation. As our method has completely decoupled reward training and policy training, the exploration for the reward training can be substituted by different methods....We also added a 2d grid experiment with a narrow gap to showcase that our method can learn to go through these, cf. Figure 11c) in Appendix I.\n\nAgreed this is an interesting direction for investigation. My sense is that this is perhaps where model error might be a better indicator of the frontier of exploration, as is done in works like LEXA [4] or ICM [1]. \n\nOverall, I appreciate the authors' responses in the rebuttal. However I still find the positioning of the paper with respect to the prior work, and the missing comparisons to newer works, a major limitation, and lean towards rejection. I do think the ongoing RND experiment is a great first step towards addressing this. I think if the paper can better explain (and show empirically) why its exploration objective is better than similar methods like SMM [7], or Plan2Explore/LEXA [3,4], it'd make for a good paper. \n\n", " We thank the reviewer for their additional response. We will update the paper's related work section to highlight the similarities and distinctions to work in intrinsic motivation.\n\nYes, we modified RND such that we save checkpoints every 20M environment steps, which is the same number of steps we train each of the skills in our method. We ran a hyperparameter search for the RND-specific hyperparameters and are now evaluating the best found hyperparameters with more seeds. Our preliminary impression is that this works to some extent, i.e. this method extracts different behaviors at different points in time. But, it seems that the policies are less narrow than the ones learned by our method. It looks like the MI metric is considerably lower than our method but higher than the DIAYN baseline. We will report our detailed results and add plots like Figure 6 by tomorrow.\n", " Thank you for the responses and added material. Reviewing the other reviews, I see other reviewers had concerns about connections to intrinsic motivation (IM) approaches. I agree that the paper needs to improve the discussion on the connections to IM approaches since that line of work appears similar. However, I agree with the authors and tSXg that this paper presents an interesting connection between IM and unsupervised skill discovery. To my knowledge, no prior work has demonstrated how IM-like objectives can extract skills for downstream tasks. IM objectives have only been shown when learning a single policy.\n\nHow are the authors comparing to the RND baseline? The paper states \"However, these approaches do not learn discrete skills that can be composed or fine-tuned for fast learning of new tasks.\" Has RND been modified to save checkpoints as skills like the proposed method? ", " Thank you for the response. I think the planned edits will improve the paper. \n\nI also took a look at some of the more negative reviews, and it appears that the main point of contention is that the work is very similar to prior work in exploration objectives for RL (such as RND, ICM, etc.). I agree that this is the case, but, to the best of my knowledge, this is the first work that extracts several skills from an environment using exploration-style objectives for reward learning, which makes this paper interesting. 
In a way, this paper ties together some of the research threads in unsupervised skill discovery and intrinsic motivation / exploration, and while this connection seems a little obvious in retrospect, I haven't seen this connection explored before. As a result, I plan to keep my rating. ", " We want to thank the reviewer for carefully going through our work and the feedback. In the following, we try to further clarify some aspects of our method. We hope that we are able to lift some of the criticisms and highlight some positive aspects of our work. Of course, we are happy to respond to any additional questions the reviewer has.\n\n**'From what I can tell, the method is training a reward to explore the boundary of states, and a policy to maximize that reward, both being updated online.'**:\n\nThe reviewer is right to point out that our method has quite some resemblance to methods used in intrinsic motivation like Random Network Distillation. What makes our method differ from them is that we do not update our reward network online. We apologize that we did not make this clear enough in the paper and will update it accordingly. Our method takes turns in learning a new reward function until convergence. It uses a previously sampled data set of experience. Then, we train the next skill for many gradient steps on the fixed reward function. \n\nAs already discussed in the general response, we believe that this approach has some advantages for learning skills. In particular, our reward functions correspond to a specific region in state space, cf. Figure 4d). When using intrinsic motivation, everything which the agent has not seen would still be rewarding. This means that check-pointing methods from intrinsic motivation would not yield reward functions which only correspond to the currently learned skill. We will update the discussion of our method in the paper to distinguish ourselves more from these works.\n\n**'While certainly not all are necessary, the paper should at least compare to some of these baselines, and currently none are.'**:\n\nAs already discussed above, we will highlight the difference from our method to these intrinsic motivation works more clearly. We are working on adding an RND baseline for the robotic environments. To the best of our knowledge, none of these intrinsic curiosity approaches have been shown to work on a robotic task with a similar number of joints as the Humanoid environment. If the reviewer is aware of such work, we would appreciate a pointer to it.\n\n**'So it is less that the method learns a multi-skill conditioned policy'**:\n\nWe believe that it is also a valid choice in skill discovery to have independent policy networks for different skills. It is unclear that having a single skill-conditioned policy is actually advantageous for previous methods like DIAYN or DADS. In the paper Wasserstein Unsupervised Reinforcement Learning (cf. https://ojs.aaai.org/index.php/AAAI/article/view/20645 ) they note that both DIAYN and DADS performed better when the actor networks are independent. Additionally we want to point to Enhanced POET (cf. http://proceedings.mlr.press/v119/wang20l/wang20l.pdf ) where they also use a collection of independent actors to learn very complex behavior. \n\n**'For example, why does random exploration from the current policy's states produce a good set of positive examples for exploring beyond it.'**:\n\nThe reviewer raises a valid concern whether random exploration from the current boundary is enough. 
We believe that it is an interesting line of investigation. As our method has completely decoupled reward training and policy training, the exploration for the reward training can be substituted by different methods. In this work we chose the simplest exploration strategy possible. Even with random exploration, our method is able to find skills which need a precise sequence of actions to not fail. This can be seen in all environments we considered, including the death zone in the 2d grid. Random exploration is able to push the boundary a few actions at a time, which then can be learned by our policy. We also added a 2d grid experiment with a narrow gap to showcase that our method can learn to go through these, cf. Figure 11c) in Appendix I. \n\nAs already stated above, we are happy to discuss any additional questions the reviewer has about specific design choices of our method.\n", " We want to thank the reviewer for carefully going through our work and the useful comments. We try to address your concerns below point-by-point. We hope that these comments can lift some of the criticisms and lead the reviewer to lean towards accepting the paper. Of course, we are happy to respond to any additional questions during the discussion.\n\n**'It seems that this work falls into the category of exploration with intrinsic curiosity. '**:\n\nWe agree that our method resembles work from intrinsic motivation. Though, we believe it is sufficiently different from the methods currently employed. To showcase this, we add a more detailed discussion of the differences to it and are implementing RND as a baseline. We refer the reviewer to the general response.\n\n**'...the skills learned in this work aren't necessarily disentangled...'**:\n\nIt is correct that our method does not directly optimize the disentanglement of the skills. The surprising thing is that our method produces skills which are more disentangled than skills learned by DIAYN. The mutual information metric reported measures the disentanglement of the skills and the x velocity. Our method outperforms the DIAYN baseline even when they use feature engineering. As mentioned in the general response, we are currently running the DADS baseline to strengthen this claim.\n\n**'...it would be helpful to see more comparisons in the experimental section not only to additional unsupervised skill learning methods like DADS...'**:\n\nWe agree with the reviewer and will add the two already mentioned baselines.\n\n**'Is the memory consumption of the method limiting?'**\n\nThis is a valid concern. In the robotic environments we only stored 1% of the total number of negative samples (cf. Table 6 in the Appendix). In Montezuma's Revenge we only use the negative samples from the last 15 iterations (cf. Appendix H, line 661). As the agent always starts in the same position, negative samples are quite repetitive. Thus, it is enough to only store a fraction of all examples. We believe that one could reduce the number of negative samples stored even further.\n\n**'For Figure 2, I don't quite understand what the meaning of the colored circles is.'**:\n\nWe apologize that the explanation was not precise enough. For a specific skill we sample some number of trajectories. Then we only consider locations which were visited at least x times in total by this skill. After this, the remaining locations usually all have high reward. Then, we take the average of these locations. This means yes, the agent goes a bit deeper into the maze than the circle. 
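\n\nAs a rough sketch of this filtering-and-averaging step (the grid size, the visit threshold and the arrays below are illustrative placeholders, not the exact plotting code):\n\n```\nimport numpy as np\n\n# placeholder: all (x, y) grid cells visited by one skill's sampled trajectories\nvisits = np.random.randint(0, 8, size=(1000, 2))\nmin_visits = 5  # illustrative value for the threshold 'x' mentioned above\ncells, counts = np.unique(visits, axis=0, return_counts=True)\nfrequent = cells[counts >= min_visits]  # keep cells visited at least min_visits times\ncircle_center = frequent.mean(axis=0)   # the colored circle is drawn at this mean location\n```\n\n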
We will update the explanation to make it more clear.\n\n**'it could be nice if the authors addressed the limitations of the current method more explicitly.'**:\n\nWe agree with the reviewer that limitations should be discussed more thoroughly. We will add a limitation section discussing and investigating the biggest limitation as outlined in the general response.\n\n", " We want to thank the reviewer for carefully going through our work and the overall good feedback. We will try to address your remaining concerns below point-by-point. We hope that these comments can lift some of the criticisms. Of course, we are happy to respond to any additional questions during the discussion.\n\n**'The authors only show results on zero-shot transfer...'**:\n\nWe agree that tuning/composing our learned skills for downstream tasks is another important use case. In the considered task the zero-shot performance is already strong, hence we did not fine-tune it. Because we do not use any feature engineering, our method is not inherently biased to perform well on this specific task.\n\n**'The method is prone to learning uninteresting skills / getting stuck in local minima...'**:\n\nWe agree that this is the major drawback of our current method. As outlined in the general response, we will add a new section discussing and investigating this issue. In particular, we already added additional experiment plots to Appendix I to showcase this issue. These pinpoint the exact scenario where our method fails, i.e. when there is a region of states from which backtracking is impossible.\n\n**'“In expectation, 7 · 10^27 episodes are needed to do so. This shows that this maze is difficult to navigate.” How did the authors arrive at this number?'**:\n\nTo come to this number we used dynamic programming. We computed the random walk state distribution after n+1 steps from the distribution after n steps. With this we can compute the probability that the agent reaches the bottom right by the end of one episode.\n\n**'Is there a reason why they were not done for the robotics domain?'**:\n\nWe originally only ran the ablations on the 2D navigation domain due to computational constraints. We agree with you that the robotic domain is quite different, hence we also ran the value and policy ablation for Ant and Humanoid. We point to the general response for the results.\n\n**'The paper lacks a section on limitations'**:\n\nWe agree that it makes sense to discuss limitations more thoroughly. We believe that the failure mode you mentioned above is the major limitation. We point to the general response for an overview of how we investigate this more closely. \n\n\n", " We want to thank the reviewer for carefully going through our work and the overall good feedback. We will try to address your remaining concerns below point-by-point. We hope that these comments can lift some of the criticisms. Of course, we are happy to respond to any additional questions during the discussion.\n\n **'the proposed method learns skills in a depth-first rather than breadth-first way':**\n\nThis is a valid concern about our method. As outlined in the general response, we will add a more detailed discussion of this limitation. We especially refer to the new plots we added to Appendix I which investigate this limitation.\n\n**'The impact of adaptive entropy regularization and the three forward transfer methods are not ablated.'**:\n\nWe ablate these mechanisms in the 2d maze, where we can run enough seeds to properly evaluate the choices made. 
Additionally we ran the policy and value transfer ablations for the Ant and Humanoid environments, cf. the table in the general response.\n\n**'Furthermore, there are several arbitrary hyperparameters related to the guiding phase like the duration of the guiding phase and the time spent collecting negative actions.'**: \n\nYes, these choices are not properly evaluated. Due to computational constraints, we cannot tune these parameters. E.g., in the robotic environments we fixed the duration of both phases at the beginning and only evaluated the choice used in the final experiments. We believe that at least in the robotic environment the exact choices do not matter too much. Essentially all skills exhibit very repetitive behaviors and thus there are many very similar states in the trajectories.\n\n**'It is unclear if the method helps in other tasks like maze navigation or Montezuma's revenge.'**:\n\nWe want to point out that previous skill discovery literature like DIAYN or DADS already relied heavily on feature engineering in the robotic environments. Most commonly, they decrease the dimension to two. Pixel-based environments have very high-dimensional states. Thus, these methods are impractical for environments like Atari.\n\n**'Why use a regression loss for training the reward function?'**:\n\nYou raise a valid point. We never tried it with a classification task and used regression from the beginning. We believe that both should work.\n\n**'Could the negative rewards be important penalties...'**:\n\nWe agree that it would be great to be able to penalize old states with negative rewards. Unfortunately, negative rewards generally lead to degenerate behavior if the agent can terminate the episode itself by dying. It learns to die as quickly as possible. This is why many environments, e.g. the original Humanoid, give a small positive reward at each timestep.\n\n**'Do these baselines also do such extensive hyperparameter tuning...'**:\n\nOur hyperparameter tuning for the base PPO learning algorithm is of similar scale to the one used in Gu et al. 2021. Unfortunately they do not report how extensively they tuned the method-specific hyperparameters.\n\n**'What is the strategy used to adapt the number of training steps to ensure each skill has been learned in Montezuma's revenge?'**:\n\nWe refer the reviewer to Appendix F where we explain the strategy in detail.\n\n\n\n", " General response:\nWe thank all the reviewers for their detailed and constructive comments on the paper. We have worked intensively (and still are working) on addressing the reviewers' suggestions and criticisms, which we cover in the individual responses. Here we discuss new additional experiments and respond to some points raised by multiple reviewers. We outline what we will change in the manuscript. We hope that these additional experiments and changes to the manuscript will lead the reviewers to vote for accepting our work. We are happy to engage in additional discussions with the reviewers. Here is a short overview of what we add before we discuss each point separately:\n- We are adding DADS as an additional baseline for the robotic environments. \n- We perform ablations for the forward transfer mechanisms in the robotic environments.\n- We are implementing Random Network Distillation for the robotic environments and will report our findings as soon as we have them.\n- We ran additional experiments on modified versions of the 2d maze to investigate how narrow our skill search is. 
\n- We will add a more extensive discussion of limitations to the paper.\n\n\n**DADS baseline** \n\nWe are currently running DADS experiments on both the Ant and the Humanoid environment. We will post the results as soon as possible during the discussion phase.\n\n**Ablations for the robotic environments**\n\n\n| Method | MI-metric | zero-shot transfer | \n|--------|----------|----------|\n| Ant full method | 1.33 ± 0.11 | 2506 ± 511 |\n| Ant policy ablation | 1.11 ± 0.15 | 1716 ± 628 |\n| Ant value ablation | 1.23 ± 0.18 | 1924 ± 651 |\n| Humanoid full method | 1.29 ± 0.25 | 9092 ± 1063 | \n| Humanoid policy ablation | 1.01 ± 0.08 | 8913 ± 236 | \n| Humanoid value ablation | 0.91 ± 0.24 | 7361 ± 959 |\n\nIn the robotic environments we ran the ablation for the two forward transfer mechanisms on Ant and Humanoid. We report the results for five seeds here and are running five additional seeds to match the ten seeds used in the original experiment. The forward transfer mechanisms seem to also help in the robotic environments. The MI-metric of all ablations is lower, but the magnitude of the impact depends on the environment.\n\n**New section 'Limitations'**\n\nThe largest limitation of our current method is the narrow, deep search we are conducting. If the method comes to a state where there are two possible paths, it explores along one of the paths. The drawback is that it potentially never returns to explore the other path. To investigate this, we ran additional experiments in different 2d grids. We added the figures to Appendix I such that the reviewers can view them. \n\nFirst, we investigate what happens when the two paths are dead ends. In each iteration, the reward training increases the reward in some region. As can be seen in Figure 11a), this leads the agent to backtrack from a dead end. While backtracking is not very efficient, our method is able to deal with dead ends. We also saw similar backtracking in our experiments in Montezuma's Revenge. \n\nA more challenging scenario is when backtracking is not possible. A good example for this is losing a life in Montezuma's Revenge. The agent cannot gain lives back. To simulate this, we added traps in both possible paths, i.e. as soon as the agent crosses a line, it cannot cross the line again to go back. See Figure 11b) for the result. The second time a skill goes into one of the traps, the method does not recover and all future skills wander into the trap. \n\nThese two experiments characterize the failure mode of our method more precisely. We believe that using a population of skills to search from can help to combat this issue, as it is possible to explore both paths simultaneously.\n\n**Random Network Distillation and other intrinsic curiosity methods**\n\nMultiple reviewers pointed out the relation to intrinsic motivation and asked about comparisons to these works. We are implementing Random Network Distillation (RND) for the robotic environments and will report the result as soon as possible. \n\nThe major difference from our method to RND and other intrinsic motivation methods is that we do not train the reward module online. Our method takes turns of learning a reward function and a policy. We will update the related work section to make this important difference more clear.\n\nCompared to the intrinsic motivation methods, each of our reward functions rewards a specific region of the state space instead of everything the agent has not seen yet. This can be seen in Figure 4d). Methods like RND would also reward all states deeper in the maze. 
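\n\nTo make this concrete, here is a minimal sketch of one of our reward-training iterations (the network size, optimizer settings, regression targets and placeholder state buffers are illustrative assumptions, not our exact implementation):\n\n```\nimport torch\nimport torch.nn as nn\n\nobs_dim, batch_size = 17, 256  # illustrative sizes\n# placeholder buffers: negatives are states visited by previous skills, positives\n# are states reached by short random exploration from the current boundary\nnegatives = torch.randn(10000, obs_dim)\npositives = torch.randn(1000, obs_dim)\n\nreward_net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, 1))\nopt = torch.optim.Adam(reward_net.parameters(), lr=1e-4)\nfor _ in range(1000):\n    neg = negatives[torch.randint(len(negatives), (batch_size,))]\n    pos = positives[torch.randint(len(positives), (batch_size,))]\n    # regress visited states to a low target and boundary states to a high one\n    loss = (reward_net(neg) + 1.0).pow(2).mean() + (reward_net(pos) - 1.0).pow(2).mean()\n    opt.zero_grad(); loss.backward(); opt.step()\n# the next skill is then trained (e.g. with PPO) on this fixed network's output,\n# clipped to [0, a] as mentioned in the individual responses\n```\n\n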
Our narrower reward functions encode more specific behaviors/skills. Due to this, we can train on a specific reward function for many steps to master some specific behavior.\n\nFinally, to the best of our knowledge, these methods have not been showcased to work on robotic environments of a scale similar to the Humanoid environment. \n", " The paper introduces a method for unsupervised skill discovery using learned reward functions. Reward-function and policy pairs are iteratively learned. The reward function evolves by predicting negative rewards for states the agent has visited and positive rewards for states just outside the agent's reach. The policy is then transferred to the new reward using three forward skill transfer methods.\n\nThis new method is the first unsupervised skill discovery approach to work in high-dimensional environments without expert knowledge. The paper first analyzes a 2d maze task. Next, the paper shows results in continuous control tasks and compares to 3 baselines. Finally, the paper shows results on Montezuma's revenge from pixel inputs. Strengths: \n- To the best of my knowledge, learning a reward function for unsupervised skill discovery is novel. I believe this is an important step forward from prior work that relies on more hand-crafted reward objectives. \n- Unlike other skill discovery methods like DIAYN, the proposed method does not require engineering only part of the observation space as input for good performance.\n- The method demonstrates superior performance over relevant unsupervised skill discovery algorithms in terms of the MI metric.\n- Qualitative analysis demonstrates the behavior of skills learned in all three benchmarks.\n- The algorithm does not need a specified number of skills ahead of time. New skills can continuously be added.\n\nWeaknesses:\n- While this weakness was also acknowledged in the limitations section, the proposed method learns skills in a depth-first rather than breadth-first way. This narrow search of skills could result in a lack of skill diversity. Imagine a version of the 2d maze environment where there are multiple possible paths. I think the proposed method would find skills that only go along one path, but never discover skills down the other path because the skills progressively build off each other.\n- Several design decisions are not quantitatively evaluated. The impact of adaptive entropy regularization and the three forward transfer methods is not ablated. Furthermore, there are several arbitrary hyperparameters related to the guiding phase, like the duration of the guiding phase and the time spent collecting negative actions.\n- Only the continuous control results include a comparison to baselines. It is unclear if the method helps in other tasks like maze navigation or Montezuma's revenge. \n- Why use a regression loss for training the reward function? Isn't this a classification problem for distinguishing the already visited negative states from the new positive states?\n- I do not understand the justification for clipping the rewards to [0,a]. Could the negative rewards be important penalties for avoiding exploring parts of the state space that were already visited? \n- How does the training compare to the baselines from Gu et al. 2021? Do these baselines also do such extensive hyperparameter tuning to make this comparison fair? \n- What is the strategy used to adapt the number of training steps to ensure each skill has been learned in Montezuma's revenge? 
The authors mention limitations of the work with the potential for better forward transfer methods and using a population of skills rather than the narrow deep search strategy.\n", "The authors propose a method for unsupervised skill discovery via reinforcement learning. The method involves training a sequence of neural network-based reward functions, and using RL to learn policies that optimize these reward functions. The reward functions are trained using positive and negative examples, where negative examples are the states reached by the current policy, and positive examples are states reached by executing random actions. This encourages learning skills that cannot be visited by the current policy (and all previous policies). The authors apply this method for learning dozens of skills in simulated physics environments (half-cheetah, humanoid, etc.), and hundreds of skills in Montezuma’s revenge. The skills often (but not always) recover interesting behaviors, including a humanoid hopping on one leg and the agent in Montezuma’s revenge being able to explore new rooms (beyond the first room). Strengths\n- Simple, straightforward method. \n- Impressive results in a few different domains. \n- Clear writing and presentation. \n\nWeaknesses\n- The authors only show results on zero-shot transfer (i.e. how well a learned skill performs on some standard reward function), and there are no experiments on using the learned skill(s) for quickly learning a new task (i.e. improving beyond the pre-trained skill).\n- The method is prone to learning uninteresting skills / getting stuck in local minima. For example, in the experiments for Montezuma’s revenge, the authors noted that one of the first skills learned by the agent was to get the life counter to zero, which made it very difficult to do any further exploration (since the episode can terminate very easily once the life counter is at 0). This led the authors to manually intervene in the learning process and hide the part of the screen that shows remaining lives.\n Questions\n- “In expectation, 7 · 10^27 episodes are needed to do so. This shows that this maze is difficult to navigate.” How did the authors arrive at this number? \n- The ablations were only performed for the 2D navigation domain. Is there a reason why they were not done for the robotics domain? Since the two domains are fairly different, it is not necessary that all conclusions from the 2D navigation experiments will hold. The paper lacks a section on limitations (there are maybe only one or two lines in the conclusion that mention limitations, and they only do so in the context of future work that could be done). I would encourage the authors to come up with a more comprehensive discussion of limitations (I mention some in the weaknesses above, but I would guess the authors are probably aware of more limitations). ", " The paper studies the problem of unsupervised reinforcement learning, where an agent aims to explore its environment without a reward signal and learn diverse skills. The proposed approach amounts to learning a reward function which encourages exploration, and a policy trained with A2C to maximize this reward. The exploration reward function itself is essentially a classifier trained to predict visited states (negatives) from states near the boundary of the visited states (positives). The policy is trained to maximize the score (go near the boundary). \n\nTo recover \"skills\", the policy checkpoints at various stages of training are used. 
At each phase, the reward function and then the policy are trained using the newly collected data. To stabilize learning, the policy/value functions are reused from the previous skill when learning the new skill. \n\nExperiments show this can explore a maze environment as well as ant/humanoid environments. The proposed approach outperforms DIAYN in terms of mutual information between skill and state distribution. *Strengths*\n- The paper studies an important problem in unsupervised RL and skill discovery. \n- Expanding the policy to cover the state space is a well-motivated objective for unsupervised RL.\n\n*Weaknesses*\n- First, the way the proposed method decomposes \"skills\" is a bit unclear. From what I can tell, the method is training a reward to explore the boundary of states, and a policy to maximize that reward, both being updated online. Then \"skills\" as denoted in this work are just the policy/reward at different checkpoints in training. So it is less that the method learns a multi-skill conditioned policy; rather, it is a collection of checkpoints (which get manually selected during downstream 0-shot adaptation). \n\nIf this is the case, then the claim in the related work that intrinsic motivation papers are not comparable is not quite accurate.\n\"Another related line of research to our approach is intrinsic motivation (Stadie et al., 2015; Bellemare et al., 2016; Pathak et al., 2017; Burda et al., 2018b,a; Raileanu & Rocktaschel, 2020). These approaches have managed great success in hard-exploration Atari games. However, these approaches do not learn discrete skills that can be composed or fine-tuned for fast learning of new tasks.\"\nAny of these works could also produce skills via the same checkpointing procedure used here and thus are valid comparisons. \n\n- Moreover, the core exploration objective proposed in this work is not particularly new compared to prior work. There's a large literature of works that use state-space coverage as their exploration+skill learning objective, including model error/disagreement [1,2,3], pseudo counts [5,6], state distribution matching [7], and exploring the boundary of states [4,8], to name a few. While certainly not all are necessary, the paper should at least compare to some of these baselines, and currently none are included. And the paper should do a better job of explaining how exactly their instantiation is different from what is being proposed in these works.\n\n1. Pathak et al. Curiosity-driven Exploration by Self-Supervised Prediction. \n2. Pathak et al. Self-Supervised Exploration via Disagreement. \n3. Sekar et al. Planning to Explore via Self-Supervised World Models.\n4. Mendonca et al. Discovering and Achieving Goals via World Models.\n5. Tang et al. #Exploration: A Study of Count-Based Exploration for Deep Reinforcement Learning.\n6. Burda et al. Exploration by Random Network Distillation.\n7. Lee et al. Efficient Exploration via State Marginal Matching.\n8. Bharadhwaj et al. LEAF: Latent Exploration Along the Frontier. \n\n- Finally, there are a number of design choices in the proposed approach that are unclear. For example, why does random exploration from the current policy's states produce a good set of positive examples for exploring beyond it? I can think of cases (like going through a narrow gap) where such exploration will not hit the states relevant for pushing the boundary. 
Approaches like those used in [4,7,8] seem like they could be more effective at identifying the boundary to explore.\n - If the policies at different stages of training are taken as \"skills\", why are all of the listed intrinsic motivation papers not valid comparisons?\n- How does the proposed method for state coverage differ from or compare to the related work? There should be comparisons to at least some recent work which also does exploration+skill discovery by optimizing for state coverage. Limitations are not really discussed.", " This paper presents a method for unsupervised skill discovery in reinforcement learning environments. The authors propose a method which iteratively learns policies for new skills by defining new reward functions for each new skill. New reward functions are constructed by taking the states visited by the previously learned skill, as well as all previously learned skills, and setting a low reward at those states. Rewards are higher for states which are close to being reached by the previous policy. The method takes advantage of some policy and value function transfer techniques to make learning more efficient. Experiments are conducted using a 2D navigation environment, robotics environments simulated using BRAX, and Montezuma's Revenge.\n Strengths of the paper:\n- The method is straightforward and appears to make fewer assumptions than other methods like DIAYN do in terms of the x-y position prior, which is a very nice property. \n- Evaluations are performed on sensible tasks and can obtain favorable results compared to DIAYN and GCRL, which is quite impressive given that the method is conceptually simple. \n- Generally the paper is clearly written; I note some suggested small changes below, but there are some other minor grammatical errors which do not impact the clarity of the text but could benefit from another round of proofreading.\n\nWeaknesses/clarifications:\n- It seems that this work falls into the category of exploration with intrinsic curiosity. The reason is that while this method does generate pairs of discrete skills and policies whereas most exploration algorithms do not, the skills learned in this work aren't necessarily disentangled. And while the authors note that \"However, these approaches do not learn discrete skills that can be composed or fine-tuned for fast learning of new tasks\", it's not clear from this work how the discrete skills learned here can be composed or fine-tuned for completing a new task. The zero-shot transfer task investigated in the experiments requires that a *single* learned skill can solve the entire task. \n- As a follow-up to the previous point, it would be helpful to see more comparisons in the experimental section not only to additional unsupervised skill learning methods like DADS, but also to intrinsic exploration methods like those mentioned in the related work section. \n - Is the memory consumption of the method limiting? The reward function fitting seems to require keeping track of *all* of the previously seen negative states for all skills, which would grow as training continues. Additionally, as the number of negative states grows, does the objective in the supervised loss for the reward function need to be rebalanced for positive and negative classes?\n- For Figure 2, I don't quite understand what the meaning of the colored circles is. My understanding is that for each skill, a policy is learned which brings the agent from the top left further and further through the maze. 
But “The circles represent the average position of the most visited locations for each skill.” Does this mean that for a particular skill, the agent actually goes further into the maze than its circle is, and the circle just represents the average position for states visited during that trajectory?\n\nNitpicks/typos:\n- 268: Unsupervised -> unsupervised\n- Left quotes in several places in the manuscript are flipped\n- Can the number of samples used to learn for Montezuma’s revenge/wall clock time be reported?\n The limitations are mentioned in the conclusion, but it could be nice if the authors addressed the limitations of the current method more explicitly. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "aGkfLxzJ8ge", "Xxn2dR6f4eO", "ULSg9jBPqts", "kQDMm2GYomX", "Nw1JwwvdI8o", "ykWrjbSYuJD", "T49wu7SFhzFn", "txkvjuM64kg", "TgQvw8hdmKd", "ykWrjbSYuJD", "S6cSnxiJBg", "TgQvw8hdmKd", "LLsbYkdKD4o", "nips_2022_NL05_JGVg99", "nips_2022_NL05_JGVg99", "nips_2022_NL05_JGVg99", "nips_2022_NL05_JGVg99", "nips_2022_NL05_JGVg99" ]
nips_2022_AdK9_GTEvG
LeRaC: Learning Rate Curriculum
Most curriculum learning methods require an approach to sort the data samples by difficulty, which is often cumbersome to perform. In this work, we propose a novel curriculum learning approach termed Learning Rate Curriculum (LeRaC), which leverages the use of a different learning rate for each layer of a neural network to create a data-free curriculum during the initial training epochs. More specifically, LeRaC assigns higher learning rates to neural layers closer to the input, gradually decreasing the learning rates as the layers are placed farther away from the input. The learning rates increase at various paces during the first training iterations, until they all reach the same value. From this point on, the neural model is trained as usual. This creates a model-level curriculum learning strategy that does not require sorting the examples by difficulty and is compatible with any neural network, generating higher performance levels regardless of the architecture. We conduct comprehensive experiments on eight datasets from the computer vision (CIFAR-10, CIFAR-100, Tiny ImageNet), language (BoolQ, QNLI, RTE) and audio (ESC-50, CREMA-D) domains, considering various convolutional (ResNet-18, Wide-ResNet-50, DenseNet-121), recurrent (LSTM) and transformer (CvT, BERT, SepTr) architectures, comparing our approach with the conventional training regime. Moreover, we also compare with Curriculum by Smoothing (CBS), a state-of-the-art data-free curriculum learning approach. Unlike CBS, our performance improvements over the standard training regime are consistent across all datasets and models. Furthermore, we significantly surpass CBS in terms of training time (there is no additional cost over the standard training regime for LeRaC). Our code is freely available at: http://github.com/link.hidden.for.review.
Reject
The paper proposes a model-level curriculum learning strategy, which assigns higher initial learning rates to shallow layers than to deep ones and continues increasing all learning rates until they reach the same value during the training process. It is a model- and task-agnostic approach. Reviewers appreciated the simplicity of the approach, as well as its effectiveness on multiple domains and different neural networks. The main concerns which remain after the rebuttal are: - There is insufficient analysis of why the method works. Some intuition has been given in the rebuttal, but all reviewers felt more analysis should be given to reach NeurIPS standards. - There is insufficient comparison with previous work on optimization. In addition, a more minor concern (in the AC's opinion) remains: - On vision data, evaluations using augmentations are missing. While it can be argued that the effects of augmentations and LeRaC might be orthogonal, it remains unclear how one can improve on strong baselines there.
train
[ "A0VVxrBFgJW", "DyUmYVDGGsK", "INgXwBvjk-rt", "NibZxik_ezpl", "JJlVny7Fjo1", "DjemtwtuUVB", "vKTb3_86uSv", "5PLuUQxHU_", "P1H-gRb4qUd", "zTGkdPy_ld-", "oHN2mSRsyIH", "8UhSYbeWXCz", "SDetO11sdpE", "TMmDnXOtthQ", "4Ts1fDU6UZ", "czGPqu3OEg6" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for taking the time to read our rebuttal. We address the additional concerns below:\n- Explore the best possible performance for the chosen Dataset-DNN combination and push to improve over it. \nRe: We thank the reviewer for this suggestion. We will use it to improve our results in the final paper version.\n- Is modifying the relative distance between the learning rates sufficient?\nRe: First, we would like to underline that the delta between the initial learning rates of consecutive layers is auto-set based on the range η_1^(0) and η_n^(0). For example, let us consider a network with 5 layers. If we choose η_1^(0)=10^-1 and η_5^(0)=10^-2, than the intermediate initial learning rates are η_2^(0)=10^-1.25, η_3^(0)=10^-1.5, η_4^(0)=10^-1.75, i.e. delta is used in the exponent and is equal to -0.25 in this case. Please note that, to obtain the intermediate learning rates, we use an exponential scheduler (similar to Eq. (8)), as mentioned in the rebuttal to reviewer mKJ3. In general, we underline that it is sufficient to set two of the three hyperparameters (η_1^(0), η_n^(0), delta), as the third one can be directly inferred from the other two. In our case, we opted to infer delta from η_1^(0) and η_n^(0). Based on our understanding, the reviewer is asking if it is sufficient to set delta. This is basically equivalent to keeping the difference between η_1^(0) and η_n^(0) fixed, while changing only η_1^(0). To test this scenario, we perform experiments with Wide-ResNet-50 on CIFAR-100 and present the results below. The results indicate that keeping delta fixed while changing η_1^(0) can lead to different accuracy rates. Hence, we conclude that tuning at least two of the three hyperparameters (η_1^(0), η_n^(0), delta) is necessary.\n\nWide-ResNet-50+LeRac on CIFAR-100 {format is (η_1^(0), η_n^(0), acc)}:\n(10^-1, 10^-7, 69.25±0.37);\n(10^-2, 10^-8, 68.51±0.52);\n(10^-3, 10^-9, 68.38±0.06).", " I appreciate the authors' detailed response to my comments. I would like to add a few additional comments based on the responses provided in the rebuttal.\n- The additional evaluations on vision data using augmentations helps highlight how much the proposed work can improve on stronger baselines. As currently provided, the additional evaluations and their relevant significance tests should help showcase the potential of the proposed method. I would encourage the authors to explore the best possible performance for the chosen Dataset-DNN combination and push to improve over it. \n- It is indeed interesting that across ablation studies on k, the proposed method consistently performs better than the baseline methods while the impact of varying $\\eta_1^{0}$ is much more significant, possibly indicating a weaker threshold on k and allowing it to be freely set. Could the authors confirm whether modifying the relative distance between the learning rates of each layer leads to the same impact, as opposed to the selection of specific values, i.e., the $\\delta$ between learning rates of layers is more critical than the exact choice of value? The reason this could be integral is, if the difference is key then the idea can be adopted across any/all setups without having to re-evaluate specific new learning rates.", " We once again thank the reviewer for reading and acknowledging all our clarifications and for deciding to further increase the rating. 
We do believe the discussion was very fruitful, which will lead to significant improvements to our final manuscript.", "Thanks for the additional experiment again. Considering the satisfying answers from the authors, I'd like to increase my score to 5.", " We thank the reviewer for their patience and we apologize for misunderstanding the issue regarding the distances. We have now computed the distances for the low-level (first conv) and high-level (last conv) layers between the values at iteration 0 (based on random values) and the last iteration (based on values optimized until convergence) for ResNet-18 on CIFAR-10 (same model as before). The computed distances (shown below) confirm our conjecture: shallow layers contain less noise than deep layers. We hope our current answer can satisfy the reviewer's concern.\n\nResNet-18: (low-level distance = 38.36, high-level distance = 709.93).\n", "Thanks again to the authors for the reply. The distances shown should be the initial weights compared to the final weights, because the authors state that the shallow layers contain less noise. It would be better to show how the distances change across layers.", " We thank the reviewer for reading our rebuttal and increasing their score based on the concerns that were addressed in the rebuttal. The reviewer still considers that there are some important points that require further attention. We address these concerns below:\n\n- Quantify noise of shallow layers using distance to the final model.\nRe: In our reply to reviewer YyP2 on training dynamics, we computed the entropy of the low-level and high-level layers after k=6 epochs, before and after using LeRaC to train ResNet-18 on CIFAR-10. We agree that the idea of using the distance to final feature maps provides additional useful insights about how LeRaC works. To this end, we computed the Euclidean distances of the low-level features between epoch k and the final epoch, before and after using LeRaC. We did the same for the high-level features. The distances are shown below. The computed distances confirm our previous observations: LeRaC seems to balance the training pace of low-level and high-level layers.\n\nResNet-18 {format is (1st layer distance, last layer distance)}:\n(0.60, 0.37).\n\nResNet-18+LeRaC: \n(0.61, 0.66).\n\n- The work seems to introduce more hyperparameters.\nRe: It is true that LeRaC adds three additional hyperparameters compared to the conventional training regime. These are the initial highest and lowest learning rates, η_1^(0) and η_n^(0), and the number of iterations k to employ LeRaC. We tried our best to minimize the number of hyperparameters that require tuning by using fixed rules to adjust intermediate learning rates (e.g. Eq. (8)) or by fixing less important hyperparameters, e.g. c=10. As shown in Table 1, CBS has an identical number of additional hyperparameters to LeRaC. Furthermore, we note that data-level curriculum methods also introduce additional hyperparameters. Even a simple method that splits the examples into easy-to-hard batches that are gradually added into the training set requires at least two hyperparameters: the number of batches (should we use 3 batches - easy, medium and hard - or more?), and the number of iterations before introducing a new training batch. We thus believe that, in terms of the number of additional hyperparameters, LeRaC is comparable to CBS and other curriculum learning strategies. Please note that the same happens if we look at optimizers, e.g. 
Adam adds two additional hyperparameters compared to SGD.\n\n- Confusion on how the method works.\nRe: From a technical point of view, we note that our approach can also be regarded as a way to guide the optimization, which we see as an alternative to loss function smoothing. The link between curriculum learning and loss smoothing is mentioned in [12], where the authors suggest that curriculum learning strategies induce a smoothing of the loss function, where the smoothing is higher during the early training iterations (simplifying the optimization) and lower to non-existent during the late training iterations (restoring the complexity of the loss function). LeRaC is aimed at producing a similar effect, but in a softer manner, by dampening the importance of optimizing the weights of high-level layers in the early training iterations. Additionally, please note that, considering the reply to reviewer YyP2 on training dynamics, we observe that LeRaC tends to balance the training pace of low-level and high-level features, while the conventional regime seems to update the high-level layers at a faster pace. This could provide an additional intuitive explanation of why our method works. We will include our additional comments and results in the final paper to improve clarity.\n", "Thanks to the authors for their effort in the rebuttal. The authors address some of my concerns, like the differences between this work and the given related works, and also some conflicts with the pretrain-finetune paradigm. Despite the addressed concerns, there are still some important points in this paper which remain vague, like whether the noise of the shallow layers is larger in the beginning, which might be shown by the distance to the final model? Additionally, since the initial learning rate for each layer and the number of curriculum epochs need tuning, this work seems to introduce more hyper-parameters to achieve promising performance, and the role of curriculum learning seems to become smaller? The most confusing part of this paper is still how this method works, although it achieves promising performance in various scenarios. Given all these considerations, I think this is a borderline paper that may bring performance improvement through tuning the proposed method, but the mechanism of this work needs more explanation and exploration. I will increase my score to 4.", " We thank the reviewer for the positive feedback, appreciating our clear presentation, our simple approach, as well as our empirical proof of generalization. \n\nWe next address the concerns raised by the reviewer:\n- Optimization vs. curriculum learning.\nRe: We consider Adam and related optimizers as orthogonal approaches that perform the optimization itself. Our approach, LeRaC, only aims to guide the optimization during the initial training iterations by reducing the relevance of deeper network layers. As shown in Table 1, most of the baseline architectures used in our experiments are already based on Adam or some of its variations, e.g. AdaMax, AdamW. LeRaC is applied in conjunction with these optimizers, showing improved performance over various architectures and application domains. This supports our claim that LeRaC is an orthogonal contribution to the family of Adam optimizers. Nevertheless, as also suggested by reviewer mKJ3, we will add a discussion about the relationship to optimizers in our related work section. As recommended by the reviewer, we also perform a comparison between SGD+LeRaC vs. Adam. 
The corresponding results of ResNet18 and Wide-ResNet-50 on CIFAR-100 are shown below. We observe that our training strategy offers significantly better performance in the presented cases.\n\nResNet18+Adam: 57.90±0.21;\nResNet18+SGD+LeRaC: 66.02±0.17.\n\nWide-ResNet-50+Adam: 66.48±0.50;\nWide-ResNet-50+SGD+LeRaC: 69.38±0.26.\n\n- Benefit is faster training or better generalization?\nRe: As shown in Fig. 1, we achieve faster training compared to CBS. The training time of LeRaC is identical to that of the conventional regime. Compared to the conventional regime, we demonstrate better generalization via the accuracy rates reported in Tables 2, 3, 4. After analyzing our logs, we confirm that the loss values decrease for both training and validation when we introduce LeRaC. We will introduce the corresponding figures in the final paper version.\n- Schedule Tuning.\nRe: Aside from the exponential update from Eq. (8), we also tested a linear update. The corresponding results are shown in Table 5. We observe that the linear update produces slightly lower accuracy rates. Additional experiments with various hyperparameter settings are presented in our response to reviewer YyP2/oPup.\n- Why c can be set to 10 without validation.\nRe: Learning rates are usually expressed as a power of 10, e.g. 10^-4. If we start with a learning rate of 10^-8 for some layer j and we want to increase it to 10^-4 during the first k=5 epochs, the intermediate learning rates are 10^-7, 10^-6 and 10^-5, making it more intuitive to understand what happens than using some other value for c. To this end, we refrained from tuning c.\n- Number of training steps.\nRe: Although we train all models with early stopping, the plots included in Fig. 1 are obtained by training with the maximum number of epochs for both models, as we believe this provides a fairer way to compare the wall-clock time of LeRaC and CBS (early stopping would introduce more variation to the time comparison). Please kindly double-check Fig. 1(d) and observe that LeRaC achieves a better accuracy after 20h of training (much earlier than CBS).\n- Justification of the noise argument.\nRe: We would like to point out that the output feature maps of a layer j are affected (a) by the initial random weights (noise) θ_j^(0) of the respective layer, and (b) by the input feature maps, which are in turn affected by the random weights of the previous layers θ_1^(0), …, θ_j-1^(0). Hence, the noise affecting the feature maps increases with each layer processing the feature maps, being multiplied with the weights from each layer along the way. Our curriculum learning strategy imposes the training of the earlier layers at a faster pace, transforming the noisy weights into discriminative patterns. As noise from the earlier layer weights is eliminated, we train the later layers at faster and faster paces, until all learning rates become equal at epoch k. We will include this explanation in the camera ready.", " The reviewer appreciates our layer-wise learning rate strategy as important, our empirical experiments as comprehensive, and our paper as well written and easy to read.\n\nIssues:\n- Relation between LeRaC and curriculum learning.\nRe: As detailed in our reply to reviewer YyP2, our method can be seen as a curriculum learning strategy that simplifies the optimization in the early training stages by restricting the model updates (in a soft manner) to certain directions (corresponding to the weights of the earlier layers). 
Due to the imposed soft restrictions (lower learning rates for deeper layers), the optimization is easier at the beginning. As the training progresses, all directions become equally important, and the loss function is permitted to be optimized in any direction. As the number of directions grows, the optimization task becomes more complex (it is harder to find the optimum). The relationship to curriculum learning can be discovered by noting that the complexity of the optimization increases over time, just as in curriculum learning.\n- Extend related work.\nRe: We thank the reviewer for pointing out related work on adaptive learning rates and warmup strategies (we relabel the indicated works as [R1,R2,R3,R4,R5] to avoid confusion). We agree that [R1-R5] are related to our work and we explain the differences below. In [R1], the main goal is increasing the learning rate of certain layers as necessary, to escape saddle points. Different from [R1], our strategy reduces the learning rates of deeper layers, introducing soft optimization restrictions in the initial training epochs. In [R2,R4], the authors proposed to train models with very large batches using a learning rate for each layer, by scaling the learning rate with respect to the norms of the weights / gradients. The goal of [R2,R4] is to specifically learn models with large batch sizes, e.g. formed of 8K samples. Unlike [R2,R4], we propose a more generic approach that can be applied to multiple architectures (convolutional, recurrent, transformer) under unrestricted training settings. In [R3], the authors point out that learning rate warmup with restarts is an effective strategy to improve the stability of training neural models using large batches. Different from LeRaC, this approach does not employ a different learning rate for each layer. Moreover, the strategy restarts the learning rate at different moments during the entire training process, while LeRaC is applied only during the first few training epochs. Aside from these technical differences, our experiments already include a comparison of the two strategies (LeRaC vs. Linear Warmup with Cosine Annealing), as also pointed out in the response to reviewer YyP2. Indeed, the CvT results presented in Table 2 show that introducing LeRaC brings consistent improvements. We thus conclude that our strategy is a viable and distinct alternative to learning rate warmup. The relationship to Adam [R5] is detailed in the reply to reviewer Z9wJ.\n- Paper lacks theoretical analysis.\nRe: We explained the intuition behind our approach in our answer to the first point. We would like to add that many contributions in the area of machine learning are not always supported by theoretical proof, and are only proven to work empirically. In some cases, the theory is developed after the empirical evidence is observed. We strongly believe that empirical studies backed by a good intuition offer a path forward that can raise the interest of many researchers.\n- Choosing initial learning rates for all layers. Sensitivity of hyperparameters.\nRe: In general, we note that our hyperparameters are tuned on validation data. Upon setting the range for the initial learning rates, i.e. η_1^(0) and η_n^(0), we set the intermediate initial learning rates using a formula similar to Eq. (8), the main difference being that the initial learning rates vary according to the layer j instead of the training iteration l. 
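To illustrate this layer-wise initialization, the intermediate values follow a log-linear (geometric) interpolation between η_1^(0) and η_n^(0). The following sketch reproduces the 5-layer worked example given in the response above; the function name is our own and the exact formula in the paper's Eq. (8) may differ in detail:

```python
import numpy as np

def initial_learning_rates(eta_first: float, eta_last: float, n_layers: int):
    """Per-layer initial learning rates: layer 1 (closest to the input)
    gets eta_first, layer n gets eta_last, and the base-10 exponent
    decreases by a constant delta for the layers in between."""
    return np.logspace(np.log10(eta_first), np.log10(eta_last), n_layers)

# 5 layers with eta_1 = 1e-1 and eta_5 = 1e-2 give the exponents
# -1, -1.25, -1.5, -1.75, -2, i.e. delta = -0.25 in the exponent.
print(initial_learning_rates(1e-1, 1e-2, 5))
```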
To observe the sensitivity of the results to hyperparameters, we kindly ask the reviewer to see the reply to reviewers YyP2/oPup.\n- Higher learning rate to deeper layers.\nRe: We refer to this strategy as “anti-curriculum”. We present some results in the reply to reviewer YyP2.\n- For pre-trained models, fine-tuning should use the opposite strategy.\nRe: Our principle is to first let the generic (low-level) features get adapted to the new task, then optimize the deeper (high-level) features to the new task. If the generic features do not need to be adapted, their gradients will be small and the use of the usual learning rate will not affect the model. However, we do not argue against using “anti-curriculum” [12], which worked well in multiple cases. We agree with the general perspective, which we believe is valid for the entire duration of the training process. We note that our principle is only applied for the first few training iterations.\n- Figure for Eq. (8). Re: We will add it.\n- Limitations. Re: Please see reply to reviewer oPup.", " We thank the reviewer for the positive feedback, appreciating our clear and understandable presentation of curriculum learning, and our comprehensive experimental evaluation. \n\nWe next address the identified issues:\n- Not using augmentation on vision data offers a gap in exploring the behavior of LeRaC.\nRe: Following [21], we did not use data augmentation for the vision datasets. We consider data augmentation as an orthogonal method for improving results, expecting improvements for both baseline and LeRaC models. Furthermore, since we extended the experimental settings to other domains, we took the liberty to use data augmentation in the audio domain. The same augmentations (noise perturbation, time shifting, speed perturbation, mix-up and SpecAugment [61]) are used for all audio models, ensuring a fair comparison. As the reviewer suggested, we present new results with ResNet-18 and Wide-ResNet-50 on CIFAR-100 using the following augmentations: horizontal flip, rotation, solarize, blur, sharpness, auto-contrast. The results confirm that the performance gaps in the vision domain are in the same range after introducing data augmentation. We will add the extra results in the final version.\n\nResNet-18+data augmentation:\n(baseline: 71.25±0.04);\n(LeRaC: 71.52±0.22).\n\nWide-ResNet-50+data augmentation:\n(baseline: 65.42±0.66);\n(LeRaC: 67.00±0.55).\n\n- Significance testing.\nRe: As the reviewer suggested, we applied McNemar significance testing to determine if the differences with respect to the baseline are significant. In 17 of 22 cases, we found that our results are significantly better than the corresponding baseline, at a confidence threshold of 0.001. We will mark the significantly better results in the final paper version.\n\n- Exploration of the two key parameters.\nRe: In general, we note that our hyperparameters are tuned on the validation data. As the reviewer suggested, we present additional results with different ranges for η_1^(0) and η_n^(0) in the response to reviewer YyP2. Below, we also present results with ResNet-18 and Wide-ResNet-50 on CIFAR-100, considering various values for k. We observe that all configurations surpass the baselines on CIFAR-100. Moreover, we observe that the optimal values for k (k=7 for ResNet-18 and k=7 for Wide-ResNet-50) obtained on the validation set are not the values producing the best results on the test set. 
We will add these new results in the final ablation study.\n\nResNet-18+LeRaC {format is (k,acc)}:\n(5, 66.25±0.07);\n(6, 66.20±0.05);\n(7, 66.02±0.17);\n(8, 66.91±0.23);\n(9, 66.59±0.47).\n\nWide-ResNet-50+LeRaC:\n(5, 68.86±0.76);\n(6, 69.78±0.16);\n(7, 69.38±0.26);\n(8, 69.30±0.18).\n\n- Discuss plausible schemes taking into account other factors (like data, gradient flow, etc.) that could provide more consistent gains?\nRe: Our aim was to propose a simple and generic curriculum learning scheme, which can be integrated into any model for any task. To this end, we tried to avoid relying on task-dependent information (e.g. data). In Table 5, we showed that combining LeRaC and CBS can boost performance. In a similar fashion, LeRaC can be combined with data-level curriculum strategies for improved performance. We leave this exploration for future work. Further performance gains can be obtained by introducing orthogonal approaches, e.g. data augmentation.\n\n- Address the potential negative implications / limitations.\nRe: One limitation, indicated by reviewer YyP2, is the need to disable other learning rate schedulers while using LeRaC. We already tested this scenario with CvT (the baseline CvT uses Linear Warmup with Cosine Annealing, which is removed when using LeRaC), observing performance gains (see Table 2 from the paper). However, disabling alternative learning rate schedulers might bring performance drops. Hence, this has to be decided on a case-by-case basis. Another limitation is longer training times / poor convergence if the hyperparameters are not properly configured. We recommend hyperparameter tuning on validation to avoid this outcome. Another limitation is that we tested our approach on mainstream classification tasks involving mainstream classification losses (multi-class / binary cross-entropy). We leave the integration with additional losses for future work.\n", " The reviewer appreciates our simple, novel, and effective method, our good literature review, and our consistent performance improvements over multiple architectures and datasets. \n\nIssues:\n- Motivation for setting different learning rates per layer.\nRe: Note that the output feature maps of a layer j are affected by (a) the initial random weights (noise) θ_j^(0) of the respective layer, and by (b) the input feature maps, which are in turn affected by the random weights of the previous layers θ_1^(0),…,θ_j-1^(0). Hence, the noise affecting the feature maps increases with each layer processing the feature maps, being multiplied with the weights from each layer along the way. Our curriculum learning strategy imposes the training of the earlier layers at a faster pace, transforming the noisy weights into discriminative patterns. As noise from the earlier layer weights is eliminated, we train the later layers at faster and faster paces, until all learning rates become equal at epoch k. We will include this explanation in the camera ready.\n\n- Optimization method for η_j, j={1...n}.\nRe: Note that the different learning rates η_j are not optimized during training. We set the initial learning rates η_j^(0) through validation, such that η_n^(0) is around five or six orders of magnitude lower than η^(0) and η_1^(0)=η^(0). After initialization, we apply the scheduler defined in Eq. (8).\n\n- Model sensitivity to different settings of η_j.\nRe: Since our goal was to perform curriculum learning, we restricted the settings for η_j according to Eqs. (6) and (7). 
As also suggested by reviewer mKJ3, another strategy is to consider the opposite setting, where we use higher learning rates for the deeper layers. We tested this approach, which we call "anti-curriculum", in a set of new experiments with ResNet-18 and Wide-ResNet-50 on CIFAR-100 and SepTr on CREMA-D. The results are shown below. Although anti-curriculum, e.g. hard negative sample mining, was shown to be useful in other tasks [12], our results indicate that learning rate anti-curriculum attains inferior performance. In another set of experiments, we show results with LeRaC using different ranges for η_1^(0) and η_n^(0). We observe that there are multiple hyperparameter configurations that surpass the baseline.\n\nAnti-curriculum:\nResNet-18 on CIFAR-100: 64.76±0.17;\nWide-ResNet-50 on CIFAR-100: 67.47±0.15;\nSepTr on CREMA-D: 68.33±0.61.\n\nResNet-18+LeRaC on CIFAR-100 {format is (η_1^(0),η_n^(0),acc)}:\n(10^-1,10^-6,65.82±0.08); \n(10^-1,10^-7,65.80±0.16);\n(10^-1,10^-9,65.59±0.49);\n(10^-1,10^-10,65.76±0.22);\n(10^-2,10^-8,65.71±0.08);\n(10^-3,10^-8,65.25±0.12).\n\nWide-ResNet-50+LeRaC on CIFAR-100:\n(10^-1,10^-6,68.64±0.52);\n(10^-1,10^-7,69.25±0.37);\n(10^-1,10^-9,69.26±0.27);\n(10^-1,10^-10,69.66±0.34);\n(10^-2,10^-8,68.51±0.52);\n(10^-3,10^-8,68.71±0.47).\n\nSepTr+LeRaC on CREMA-D: \n(10^-2,10^-8,70.74±0.55); \n(10^-3,10^-8,70.61±0.49);\n(10^-5,10^-8,70.32±0.57);\n(10^-4,10^-7,70.49±0.44);\n(10^-4,10^-9,70.58±0.48).\n\n- Training dynamics. Quality of feature maps of earlier layers vs. under-trained later layers.\nRe: We showed a few examples of training dynamics in Fig. 1. All four graphs exhibit a higher gap between standard training and LeRaC in the first half of the training process, suggesting that LeRaC has an important role towards faster convergence. To assess the comparative quality of low-level vs. high-level feature maps obtained with conventional vs. LeRaC training, we compute the entropy of the first and last conv layers of ResNet-18 on CIFAR-10 after k=6 iterations. Conventional training seems to update deeper layers faster, as we observe a higher difference between the entropies of low-level and high-level features obtained with conventional training than with LeRaC. Hence, LeRaC balances the training pace of low-level and high-level features.\n\nResNet-18 {format is (1st layer entropy, last layer entropy)}:\n(0.99646, 0.99050).\nResNet-18+LeRaC: \n(0.99706, 0.99683).\n\n- Interaction of LeRaC with adaptive optimizers.\nRe: Our initial learning rates and scheduler are used independently of the optimizers. As shown in Table 1, we used different optimizers, opting in each case for the best optimizer for each baseline architecture (without LeRaC). We kept the same optimizers when introducing LeRaC. Our learning rate scheduler updates the learning rates at the beginning of every iteration. We did not observe any stability / interaction issues.\n\n- Learning rate schedulers may disturb the proposed model.\nRe: Whenever a learning rate scheduler was used for training a model, we simply replaced the scheduler with LeRaC. For example, all the baseline CvT results are based on Linear Warmup with Cosine Annealing (LWCA), this being the recommended scheduler for CvT. When we introduced LeRaC, we simply deactivated LWCA. In general, we recommend deactivating other schedulers while using LeRaC for simplicity in avoiding stability issues; a minimal sketch of how the per-layer schedule attaches to a standard optimizer is shown below. 
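The sketch uses one parameter group per layer, raising each group's learning rate geometrically toward the shared target over the first k iterations. The variable names and the exact update rule are our assumptions, modeled on the exponential scheduler described in the rebuttal rather than copied from the paper's Eq. (8):

```python
import torch

def build_optimizer(layers, init_lrs, momentum=0.9):
    # One parameter group per layer, each starting at its own learning rate.
    groups = [{"params": layer.parameters(), "lr": lr}
              for layer, lr in zip(layers, init_lrs)]
    # The default lr is overridden by the per-group values above.
    return torch.optim.SGD(groups, lr=init_lrs[0], momentum=momentum)

def lerac_warmup_step(optimizer, init_lrs, target_lr, it, k):
    """Called at the start of iteration `it`: moves each group's learning
    rate along a geometric path from its initial value to target_lr,
    reaching target_lr at iteration k and staying constant afterwards."""
    for group, lr0 in zip(optimizer.param_groups, init_lrs):
        if it >= k:
            group["lr"] = target_lr
        else:
            # Equals lr0 at it=0 and target_lr at it=k.
            group["lr"] = lr0 * (target_lr / lr0) ** (it / k)
```

After iteration k, all layers share the single learning rate η^(0) and training proceeds as in the conventional regime.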
We will mention this limitation in the final version of the paper.", " This paper proposes setting different learning rates for each layer of a deep neural network such that the first layers (closer to the input) have a much higher learning rate than the last layers. Learning rates are all gradually increased until they reach the same value $\\eta_0$ (the optimal learning rate of no-curriculum models). Strengths:\n- Simple, novel, and effective method. \n- Good review of related work and reasoning for using model-level curriculum learning.\n- Consistent performance improvements across eight audio, image, and text datasets and seven convolutional, recurrent, and transformer architectures. \n\n\nWeaknesses:\n- Insufficient analysis on why the method works. \n- The motivation for setting different learning rates per layer is not sound. The authors state that random initialization of parameters causes the propagation of noise during the forward pass, and that assigning higher learning rates to first layers and lower learning rates to last layers prevents this propagation of noise. But there is no proof for this argument. \n - What is the method for optimizing $\\eta_j$, j = {1 ... n}?\n- How sensitive is the model to different settings of $\\eta_j$?\n- How do the training dynamics compare to standard training where all layers are trained at the same rate? Would the earlier layers generate qualitatively different feature maps than under-trained later layers? - The interaction of the model with adaptive optimizers that scale the learning rate of each parameter is not well justified. \n- Learning rate schedulers may disturb the proposed model. If the base learning rate is reduced before the learning rate of the last layer $\\eta_n$ is sufficiently increased, the last layer may never be trained enough, training may take much longer or may become unstable.", " The manuscript proposes a model-level curriculum learning framework that uses variable learning rates across a DNN to jump-start the learning process. The core idea revolves around compensating for the difference in learning across the depth of a DNN by using larger learning rates closer to the input and gradually decreasing their value towards the output layers. As intended, this curriculum is data-free and removes the need to assess the difficulty of samples, similar to how data-level curriculum operates, and instead can be applied across a range of architectures and tasks. The proposed idea improves on the performance of standard training regimes across vision, language and audio tasks, using residual, recurrent and transformer-based architectures. Strengths\n- The writing offers a clear and understandable representation of the curriculum learning approach. More importantly, the categorization of various levels of curriculum learning provides important context to the ideas being discussed.\n- The depth of experimental evaluation, stretching across datasets, tasks and architectures, is commendable.\n\nWeaknesses\n- Data augmentation has proven valuable to the training process, be it commonly used approaches like random cropping/flipping or more intricate mixed approaches. They offer substantial improvements in performance as well as alternative properties of a DNN. 
The proposed method is not evaluated alongside any common preprocessing techniques (vision datasets), thus offering a gap in the exploration of the behavior of the proposed algorithm alongside them.\n- The set of results posted across various tables in the manuscript is commendable. However, in certain cases the average performance and standard deviation across multiple methods are extremely close. While this does not take away from the final conclusions, the use of significance tests/metrics could help further distinguish improvements.\n- The descriptions of the preprocessing and training regimes offer extremely important context to the results presented throughout the paper.\nHowever, an exploration of the two key parameters that make up the proposed method, various update settings for the learning rate and the choice of iteration/epoch at which to converge to the standard learning rate, would offer more insight into the proposed method. - Specific to the vision datasets and results, could the authors provide insights into (a) how the use of common preprocessing techniques affects the behavior of the proposed approach, and (b) the balance in peak performance achievable in standard training vs. the proposed method when preprocessing is included. While preprocessing techniques add some element of stochasticity, it is important to understand an algorithm in that context.\n- The addition of significance tests/metrics would be beneficial to further emphasize the benefits of the proposed algorithm as well as other areas for potential improvement. Could the authors reassess their results using significance metrics?\n- Moving a small portion of the experimental setup to the appendices/supplementary material and including a study of how k (iterations/epochs), from small to absurdly large, affects the improvement in performance offered by the proposed algorithm would add another dimension to the evaluations presented in this work. The results themselves should be a by-product of the suite of experiments already executed, so hopefully there is minimal overhead.\n- Given the variable levels of improvement, even within the scope of the same task, could the authors discuss plausible schemes that take into account other factors (like data, gradient flow, etc.) that could provide more consistent gains?\n- At a more abstract level, could the authors address the potential negative implications/limitations of their approach. While there aren't glaring limitations or issues with the proposed method, the authors can consider potential negative effects in alternative properties of the DNN, as a by-product.\nSince there isn't an evaluation of the calibration quality or adversarial robustness of the final solutions, these could be potential issues or limitations of the proposed algorithm. \nKeeping robust/adversarial training or alternative objective functions to classification in mind could lead to unexpected outcomes.\n", "The paper proposes a kind of model-level curriculum learning strategy, which assigns higher initial learning rates to shallow layers than deep ones and continues increasing all learning rates until they reach the same value during the training process. It is a model- and task-agnostic approach. To verify its effectiveness, the authors apply it to multiple domains and different neural networks. 
Strengths:\n\n1. The proposed strategy concerning layer-wise learning rates is important.\n\n2. The empirical experiments are sufficient and comprehensive to prove the generality and effectiveness of the strategy.\n\n3. This paper is well written and easy to read.\n\nWeaknesses:\n\n1. The relation between this work and curriculum learning is vague. Curriculum learning is a training strategy that learns from easy to hard, or more generally, in a certain kind of meaningful order. For example, CBS, which is mentioned in the paper, anneals the standard deviation of Gaussian kernels to pass more high-frequency information. However, the relation between this work and CL is not so clear.\n\n2. The discussed related work is not sufficient. Since this work focuses on the layer-wise learning rate, it should be discussed more clearly how this work differs from the works on layer-wise (or adaptive) learning rates [1][2][5], which is a common and widely adopted strategy. Additionally, training with an increasing learning rate in the early stages is a kind of warmup [3][4], so the relations between this work and warmup strategies are required to be discussed. The vague position of this work makes its contribution vague.\n\n3. This paper lacks theoretical analysis and does not explain why increasing the learning rates until reaching the same value is reasonable. It is not convincing only with the empirical results. The designed mechanism is neither intuitive nor well theoretically supported, making its justification questionable. The only way for justification is through experiments, which may be tricky, because the proposed method needs more hyper-parameters than the conventional methods.\n\n[1] Singh, B., De, S., Zhang, Y., Goldstein, T., & Taylor, G. (2015, December). Layer-specific adaptive learning rates for deep networks. In 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA) (pp. 364-368). IEEE.\n\n[2] Ginsburg, B., Gitman, I., & You, Y. (2018). Large batch training of convolutional networks with layer-wise adaptive rate scaling.\n\n[3] Gotmare, A., Keskar, N. S., Xiong, C., & Socher, R. (2018). A closer look at deep learning heuristics: Learning rate restarts, warmup and distillation. \n\n[4] You, Y., Gitman, I., & Ginsburg, B. (2017). Large batch training of convolutional networks. \n\n[5] Kingma, D. P., & Ba, J. (2015). Adam: A Method for Stochastic Optimization. ICLR.\n 1. There are many hyperparameters in this strategy, so I'd like to know how to choose proper initial learning rates for all layers and how the sensitivity of the hyperparameters affects the experimental results?\n2. For pre-trained models, this paper states that "The same issue can occur if the weights are pre-trained on a distinct task, where the misalignment of the weights with a new task is likely higher for the high-level feature layers. " However, it is generally believed that the learning rates of the layers close to the outputs are relatively high and the learning rates of the layers close to the inputs are low or even equal to zero in the process of fine-tuning, which seems to be the opposite of the strategy proposed in this paper. How to explain the difference between them?\n3. It may be easier to understand Eq. (8) if the authors can plot a figure for μ(j).\n4. Can assigning higher learning rates to the deeper layers also obtain better performance through hyper-parameter tuning? The learning landscape of the models is not clear. The authors do not discuss the limitations. 
However, there is a lack of both an intuitive explanation and a theoretical analysis for the proposed method, making it hard to find the work convincing. A large line of related work is missing, making the contributions vague.", " This paper proposes to modify neural net training by applying a different learning rate to each layer: the deeper layers are assigned a lower learning rate that progressively grows to match the first layer learning rate. The paper is clear, the approach is simple and can be implemented easily. The experiments show better generalization when adopting the proposed learning rate schedule. The paper could be improved on two main axes:\n(i) presenting the approach as an optimization method seems more appropriate than the curriculum learning angle, (ii) more analysis and justifications of why the method works seem necessary.\nOverall, I feel that the paper does not explain how/if the proposed differentiated learning rate schedule is different from the effective learning rates reached by different settings of adaptive optimizers (momentum, Adadelta, Adam, SM3...).\n == Optimization vs curriculum learning ==\n\nThe proposed method uses a different learning rate per layer. This endeavor has been widely explored in the literature on optimization of neural nets, while the name curriculum learning is usually employed for methods modifying the ordering of the examples during training. I agree that the name curriculum could make sense in your context, but the paper needs to spend more time on previous work on optimization. This means expanding the related work section and the empirical analysis.\n\nThe Adadelta paper (Zeiler 2012) observes the benefit of using different learning rates for different layers, and Adadelta results in "effective learning rates [...] are larger for the lower layers of the network and much\nsmaller for the top layer at the beginning of training." Similar observations can be made for other adaptive learning rate algorithms like Adagrad, Adam, Adafactor, Adamax and AdamW, or even later algorithms like e.g. SM3 (Memory-Efficient Adaptive Optimization, NeurIPS'19, see Figure 1.).\n\nTo justify your approach, it seems necessary to compare your method at least with Adam and report the effective average learning rate (e.g. see Zeiler 2012's definition) per layer for both methods and how their parameters control the differentiation of learning rates across layers during training. The fact that your method is used in conjunction with Adam (or variants) could mean that either (i) the same type of learning rate schedule could be attained by changing Adam's parameters or (ii) your method's benefit comes from schedules unreachable by Adam. It seems very important to explore both methods' parameters to distinguish these two cases.\n\n== Benefit is faster training or better generalization? ==\n\nModifying the training algorithm requires reporting training curves with *both* validation and training loss over time. The paper does not report any training loss, which left us wondering if the generalization benefit comes from better optimization (lower training and validation loss) versus a better generalization trajectory (reaching a lower validation loss at the same training loss). \n\n== Schedule Tuning ==\n\nCould you explain how the exponential scheduler of Eq. 8 has been selected? Is it the only option that you tried? How is the training speed affected by the choice of the additional hyperparameters (c, \\eta^{(0)}_n, k)? 
In particular, compared to a robust baseline like Adam, would most hyperparameter choices yield faster or slower training? I do not understand how c can be set to 10 without validation.\n\n== Number of training steps ==\n\nThe experimental section should spend more time describing how the number of training steps was selected. In which cases would the baselines reach the same or better validation loss with more steps? E.g. Figure 1.d. seems to suggest that a better validation loss with CBS can be reached with more training. This improvement does not seem to agree with the sentence \"all models are trained with early stopping\", does it?\n\n== Justification of the noise argument ==\n\nAdaptive methods (Adam etc.) justify higher learning rates for first layers with a curvature argument: in layman's terms, to get the same impact on the output the first layers' updates require a larger learning rate than the last ones. Adadelta (Zeiler 2012) also mentions the larger learning rate in light of the vanishing gradient effect (Bengio 2014). Your motivation (from [21]) is to reduce the effect of \"noise due to untrained parameters\". This argument is unclear to the reader and it would help if you could find a way to measure/characterize that \"noise\" and show how your method reduces its effect.\n This work does not raise concerns on societal impact. \n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "DyUmYVDGGsK", "oHN2mSRsyIH", "NibZxik_ezpl", "JJlVny7Fjo1", "DjemtwtuUVB", "vKTb3_86uSv", "5PLuUQxHU_", "zTGkdPy_ld-", "czGPqu3OEg6", "4Ts1fDU6UZ", "TMmDnXOtthQ", "SDetO11sdpE", "nips_2022_AdK9_GTEvG", "nips_2022_AdK9_GTEvG", "nips_2022_AdK9_GTEvG", "nips_2022_AdK9_GTEvG" ]
nips_2022_pcgMNVhRslj
Alignment-guided Temporal Attention for Video Action Recognition
Temporal modeling is crucial for various video learning tasks. Most recent approaches employ either factorized (2D+1D) or joint (3D) spatial-temporal operations to extract temporal contexts from the input frames. While the former is more efficient in computation, the latter often obtains better performance. In this paper, we attribute this to a dilemma between the sufficiency and the efficiency of interactions among various positions in different frames. These interactions affect the extraction of task-relevant information shared among frames. To resolve this issue, we prove that frame-by-frame alignments have the potential to increase the mutual information between frame representations, thereby including more task-relevant information to boost effectiveness. Then we propose Alignment-guided Temporal Attention (ATA) to extend 1-dimensional temporal attention with parameter-free patch-level alignments between neighboring frames. It can act as a general plug-in for image backbones to conduct the action recognition task without any model-specific design. Extensive experiments on multiple benchmarks demonstrate the superiority and generality of our module.
Accept
The paper was reviewed by four reviewers, receiving 2 x Borderline Reject and 2 x Weak Accept ratings. Importantly, post rebuttal, [1mVh] mentioned upgrading the rating from Borderline Reject to Borderline Accept (though this is not reflected in the final ratings). The general concerns raised by the reviewers included limited improvements over the baselines and a lack of certain ablations / comparisons. Many of these concerns were addressed by various new experiments and discussions provided during the rebuttal period. [1mVh], [Ppgh] and [Fgtq] have all acknowledged that their concerns were largely resolved. [Shks], who remained the only negative reviewer, did not participate in the discussion nor acknowledge reading the author responses. The AC has gone through the responses to the comments of [Shks] and found them partially convincing. Overall, given the positive assessment of the reviewers and the generality of the proposed approach, which can be combined with a variety of architectures, the work will make a fine contribution to NeurIPS.
train
[ "rSvwyf9AEd6", "b0psO9G7rLZ", "-yO3DWKDtP", "p4z5VLrQhoE", "2K2U1TO0wa", "eqiHNNRMDcb", "N1P8dQfPYjH", "2gOZ_O-bMYi", "6PwBXkVpsBI" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the response from the authors who answer my questions. I have a favorable opinion of the paper and will keep my initial \"Weak Accept\" rating. I think it is a solid enough submission.\n\n", " We really appreciate your valuable comments. Below please find our specific answers to the questions. We will remain committed to clarifying any further questions that may arise during the discussion period.\n\n**The effect of temporal sampling rates.** For video recognition task, the input frames are sampled with even intervals. When the sampling rate is high, there are more input frames. Then, in order to better learn the temporal reasoning, the more frames input, the richer the video information obtained by the model, and the better the model's performance. We conducted experiments on three different input numbers in the temporal dimension: 32, 16 and 8, as shown in the following table. As the results in our paper use the input of 32 frames, the input of 16 frames and 8 frames corresponds to the downsampling rates of x2 and x4, respectively. The results demonstrate that the model's performance improves as the number of input frames increases.\n\n| Downsampling Rate | Input Frames | Top-1 Acc | Top-5 Acc | FLOPs (G) | Params (M) |\n| :---: | :---: | :---: | :---: | :---: | :---: |\n| x1 | 32 | 81.4 | 95.5 | 792.9 | 121.8 |\n| x2 | 16 | 80.9 | 95.1 | 398.3 | 121.8 |\n| x4 | 8 | 79.6 | 94.3 | 201.8 | 121.8 |\n\n**More analysis on what ATA has learned.** This is really a good suggestion. We have provided more qualitative analysis in Section D of our supplementary material. Figure 6 and Figure 7 show the feature map responses of temporal attention and ATA on Kinetics-400 and SSv2, respectively. In Figure 6, the feature maps generated by temporal attention and ATA both pay more attention to the areas where the action occurs. However, temporal attention lacks the ability of relating the corresponding regions with inconsistent contents, such as the head shown in $F_7$ and $F_8$. This trend is similar in the results of SSv2 in Figure 7. This demonstrates that ATA has learned not only the motion regions but also their changing trends.\n\n**The effect of global and local motion.** SSv2 contains videos with more complex motions than Kinetics-400. Besides global motion, there are also other types such as fast motions and inconsistent motions. We think that is the main reason why the performance of ATA is better on SSv2 than Kinetics-400. Therefore, to our understanding, ATA could handle more motion types than temporal attention. \n\n**Additional computational costs compared with TimeSformer baseline.** More input frames will lead to more computational costs. Under the same conditions (views and input frames), there are no additional computational costs for ATA compared with TimeSformer. \n\n", " We really appreciate your valuable comments. Below please find our specific answers to the questions. We will remain committed to clarifying any further questions that may arise during the discussion period.\n\n**More discussion and comparison with attention-based methods.** This is really a good suggestion. Our theoretical analysis shows that neither factorized (2D+1D) attention nor joint (3D, or global) attention obtains sufficient task-relevant information, thereby suffering from a relatively low performance. In existing methods, factorized attention and join (3D, or global) attention have fully exploited, such as TimeSformer, ViViT and Swin. 
In most cases, the two kinds of attention show similar performance. The following table shows the comparison of spatial attention, joint attention, factorized attention and our approach on TimeSformer based on the ViT-Base model. Compared with joint attention, our ATA still has superior performance. We will include more discussion and comparison in our updated version. \n\n**The complexity analysis.** The complexity of the Kuhn-Munkres algorithm is only cubic in $HW$, rather than in $THW$ as in joint (global) attention. This keeps the complexity of our ATA of the same order as factorized attention while providing better results, as shown below. Also, when the size ($hw$) or number ($t$) of input frames grows, the computational overhead of ATA will increase more slowly than that of joint (global) attention. In practice, we find that joint (global) attention costs more memory since it takes all $THW$ patches for attention. Therefore, our ATA is more efficient than other attention-based methods while bringing higher performance. We will further clarify this in our updated version. \n\n| Image Backbone | Temporal Modeling | Top-1 Acc | Top-5 Acc | FLOPs (G) | Params (M) | Complexity |\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| ViT-Base | Spatial Attention | 76.0 | 92.6 | 140.3 | 86.2 | $TH^3W^3$ |\n| ViT-Base | Joint Attention | 77.4 | 92.8 | 179.7 | 86.2 | $T^3H^3W^3$ |\n| ViT-Base | Factorized Attention | 78.0 | 94.3 | 201.8 | 121.8 | $TH^3W^3 + T^3HW$ |\n| ViT-Base | ATA (Ours) | 79.6 | 94.3 | 201.8 | 121.8 | $TH^3W^3 + T^3HW$ |\n\n**Computational complexity with different numbers of input frames.** Your understanding is correct. According to the complexity analysis, more input frames lead to higher FLOPs while the number of parameters stays the same.\n", " We really appreciate your valuable comments. Below please find our specific answers to the questions. We will remain committed to clarifying any further questions that may arise during the discussion period.\n\n**Relationship with Reference 4.** We would like to clarify the novelty of our method. The primary goal of this work is to prove the effectiveness of temporal alignment, theoretically and practically, on existing video and image backbones without any model-specific design, rather than to propose specific models. In other words, we hope to find a generic plug-in temporal alignment module to assist existing models. The method from reference [4] is a classic algorithm for solving the bipartite matching problem rather than the alignment problem. We adopt it only as an interchangeable implementation for patch-wise feature matching between adjacent frames. Therefore, our approach borrows the implementation of KMA from reference [4] and solves a different problem. We will include this clarification in our updated version. \n\n**Further clarification of Figure 1.** The task & representation shared information is obtained from two separate processes: task-relevant information learning and representation information learning. The former is the information needed by a specific task, e.g., video recognition, which is obtained in the training process in the form of the aggregation of multiple frames. The latter is the information extracted from a particular frame. In the training process of video learning, the network learns to embed more task-relevant information in frame-wise representations. However, it cannot be guaranteed that the two processes harmonize. In other words, the shared part of these two kinds of information is variable.
That is why we illustrate it as a trapezoid shape. Our motivation is to increase this shared information, so the polygon with the red frame in Figure 1(a) expands to the rectangle in Figure 1(b).\n\n**Comparison with other SOTA results.** This is really a good question. In our paper, we only compared our approach with ViT-based architectures, because the gaps between different architectures, e.g., network structure and pretraining models, may affect the evaluation of our concept. Swin and Uniformer have superior performance and pretraining models compared to ViT under the same conditions, so we may need to employ additional tuning tricks in the training process to fill such gaps. Therefore, we prefer to demonstrate the effectiveness of our approach separately by integrating it into different architectures in Table 4. However, we agree that we should include these SOTA results in our updated version to provide more information to readers. \n\n**Choice of similarity function.** We choose cosine similarity in the Kuhn-Munkres algorithm since it has been widely used to measure feature similarity in visual learning. However, the idea of ablating the similarity function is interesting. We will be more than excited to conduct the ablation and include the results in our updated version.\n\n**Comparison with spatial-temporal self-attention.** Indeed, spatial-temporal (or joint) attention can model the relationship among all the pixels along the time axis. This would bring more task-relevant information to the representations of single frames. However, more redundant task-irrelevant information would also be introduced. According to the experiments of TimeSformer and ViViT, spatial-temporal attention shows inferior performance compared to factorized (2D+1D) attention. Moreover, joint spatial-temporal attention suffers from high computational and memory costs. In contrast, ATA shares the same complexity order with factorized attention while introducing more task-relevant information for performance improvement, which shows its potential. \n\n**Performance of ATA in Table 1.** In Table 1, ATA outperforms ViViT-L, MViT-B and Mformer-HR with small margins. However, ViViT-L has much higher complexity than ATA; MViT-B employs a joint attention structure with higher memory cost than ATA; and Mformer-HR uses high-resolution inputs, which also increases complexity compared to ATA. We will include more detailed analysis in the updated version.\n", " We appreciate your valuable comments. Below please find our specific answers to the questions. We will remain committed to clarifying any further questions that may arise during the discussion period. \n\n**Ablation study for the de-alignment component.** We have conducted the ablation study for the de-alignment component on several mainstream video backbones in Table 6 of our supplementary material. We hereby summarize the results in the following table, which provides both performance and complexity comparisons between using and not using de-alignment. According to the table, the de-alignment component brings performance improvements for all backbones at zero cost. We totally agree that this ablation will make our approach better understood. We will include this ablation in the updated main paper. 
\n\n| Image Backbone | Pretrain | De-alignment | Top-1 Acc | FLOPs (G) | Params (M) |\n| :---: | :---: | :---: | :---: | :---: | :---: |\n| CycleMLP-B5 | IN-1k | ✗ | 72.7 | 122.4 | 102.8 |\n| CycleMLP-B5 | IN-1k | ✓ | 77.7 | 122.4 | 102.8 |\n| ConvNeXt-Base | IN-22k | ✗ | 76.0 | 198.0 | 140.5 |\n| ConvNeXt-Base | IN-22k | ✓ | 80.5 | 198.0 | 140.5 |\n| ViT-Base | IN-1k | ✗ | 79.3 | 201.8 | 121.8 |\n| ViT-Base | IN-1k | ✓ | 79.6 | 201.8 | 121.8 |\n\n**Justification of improvements.** In Table 1, the result of our ATA surpasses all other approaches on Kinetics-400, even with relatively fewer temporal clips (views) and without high-resolution (HR) inputs. Furthermore, in Table 7 of our supplementary material, better results are achieved with more temporal clips, i.e., 81.9% top-1 accuracy. This leads to a 0.6% top-1 accuracy gain over the second best. In Table 2, our model obtains comparable results with others under the same inference setting (32 frames, $3\times 1$ views). The performance gap between our ATA and Mformer-L is mainly because the latter uses Kinetics-400, besides ImageNet, as pretraining data. Moreover, the primary goal of this work is to prove the effectiveness of temporal alignment on existing video and image backbones without any re-pretraining. Therefore, the reason why our approach sometimes shows marginal gains mainly lies in the gap between the performance of our baseline and other models. We agree that we should provide a more thorough discussion of the performance part for further insights. We will include these discussions in our revised manuscript.\n\n**Independent performance of the ATA module in different locations.** We have investigated this in Table 5 of our supplementary material, showing that independently inserting ATA in encoder blocks helps increase mutual information for adjacent frames and benefits the performance. We summarize it in the following table. It can be observed that inserting ATA in an early stage of the model achieves a better performance improvement. Considering both Table 3 in the main paper and Table 5 in the supplementary material, we find that the selection of features to align and de-align also makes a difference, besides the operations themselves. We totally agree that this ablation study would provide further insights and will include this part in the updated main paper. \n\n| Block 0-2 | Block 3-5 | Block 6-8 | Block 9-11 | Top-1 Acc | FLOPs (G) | Params (M) |\n| :---: | :---: | :---: | :---: | :---: | :---: | :---: |\n| - | - | - | - | 78.0 | 201.8 | 121.8 |\n| ✓ | - | - | - | 79.3 | 201.8 | 121.8 |\n| - | ✓ | - | - | 79.6 | 201.8 | 121.8 |\n| - | - | ✓ | - | 79.3 | 201.8 | 121.8 |\n| - | - | - | ✓ | 79.2 | 201.8 | 121.8 |\n| ✓ | ✓ | ✓ | ✓ | 79.6 | 201.8 | 121.8 |\n\n\n**The highlighted results in Table 4.** The highlighted row shows the architecture we use in comparison with other state-of-the-art results, which is the same in the other tables. We agree that the highlight in Table 4 somewhat leads to confusion and will remove it in our updated version. \n\n**Symbols in Figure 5.** We appreciate your recommendation and will improve our drawings accordingly.\n", " This paper tackles the video action recognition task focusing on temporal modeling. 
An Alignment-guided Temporal Attention (ATA) module with similarity-based feature-level alignment is proposed to enlarge the mutual information among consecutive frames, and theoretical analysis shows that increasing the mutual information in neighboring frames' feature representations can increase task-relevant information and benefit the model performance. Experiments are conducted on two video action recognition datasets, i.e., Kinetics-400 and Something-Something V2, and show that the proposed ATA module can be a general plug-in module applied to any image backbone for the video recognition task.\n\n\n - Strengths:\nThe theoretical analysis is a plus, showing that increasing mutual information in neighboring frames' feature representations can increase task-relevant information and bring performance improvement.\n\n- Weaknesses:\n1. The proposed ATA module contains one alignment component and one de-alignment component. Though it is motivated in L183 that \"To keep the original location-based information and facilitate subsequent spatial operations, a recovery with de-alignment as in Eq. (7) is thus needed.\", how will the de-alignment component affect the performance?\n\n2. The main results shown in Table 1 and Table 2 do not show significant improvement compared to other models, without convincing justification for the relatively inferior results. Some explanations, like in L214 that \"Our approach can achieve better performance if using pretrained models on larger datasets, e.g., JFT-300M.\", merely state an expectation without supporting results.\n\n3. For the ablation study of the ATA module in different locations shown in Table 3, the ATA module is inserted in an increasing order. How about the independent performance of inserting the ATA module at different locations?\n\n\n 1. For the results in Table 4, why are the results in the last line highlighted? The numbers in that line are not the best compared to others.\n\n2. It would be better to mark the symbols from equations (17) and (18) in Figure 5 for easy understanding. The authors mention the privacy issue of using these video action recognition models in certain circumstances, e.g. hotels and homes,\nin the part on potential negative societal impacts. While the general solution of restricting the model usage permission is pointed out, the specific policies still depend on multiple factors, e.g. social science, etc. ", " This paper presents a method for video action recognition. It discusses the problems of factorized (2D+1D) and joint (3D) spatial-temporal operations and proposes Alignment-guided Temporal Attention (ATA) for temporal modeling. ATA achieves strong performance while maintaining low computational overheads. What's more, this paper theoretically proves that the frame-by-frame alignment increases mutual information between neighboring frame representations, thereby potentially including more task-relevant information. Extensive experiments show the effectiveness and generality of ATA, which not only achieves satisfying results on multiple video action recognition benchmarks, but also can act as a general plug-in to extend any image backbone for video learning tasks. Strengths:\nThe paper is well organized and the writing is clear. The figures are well-understood. The experiments seem to be limited but convincing. \nWeaknesses:\nThis paper doesn't propose a model which is original and innovative enough. 
The main contribution is the aligned temporal attention operation; however, the alignment operation seems to be the same as the alignment method in reference [4].\n 1.\tThe idea of aligned temporal attention is reasonable; however, the alignment method does not seem innovative enough. It seems that the pixel alignment method in this paper is the same as that in reference [4]. Could you please clarify the difference between these two alignments?\n2.\tCould you please explain more about Figure 1(a)? How can the task & representations shared information (illustrated by the red line) be obtained? Why is it a trapezoid? Is the task-relevant information obtained from RGB videos?\n3.\tIt seems that the paper doesn't cite the SOTA results for comparison; for example, the articles below should be cited and compared.\n-\tLiu Z, Ning J, Cao Y, et al. Video swin transformer. CVPR, 2022.\n-\tLi K, Wang Y, Gao P, et al. Uniformer: Unified transformer for efficient spatiotemporal representation learning. ICLR, 2022.\n4.\tWhen calculating the alignment with the Kuhn-Munkres algorithm, cosine similarity is employed. Why is cosine similarity chosen? Could you give some ablation studies based on the choice of similarity?\n 1.\tThis paper indicates that the aligned temporal attention can mine more task-relevant information from the aligned pixels. However, the spatial-temporal self-attention structure can model the attention among all the pixels along the time axis. Consequently, it seems that the alignment doesn't have much significance.\n2.\tIt seems that the results do not show a large improvement. For example, in Table 1, there is no large difference between the results of MViTv1-B, ViViT-L, Mformer-HR and ATA for Top-1 Acc.\n
I find the discussion of the limitations is not sufficiently detailed. ", " This paper introduces a novel alignment-guided transformer architecture for video action recognition. Specifically, the proposed method makes use of KMA to obtain the alignment matrix. Then a 1D temporal operation is further used to extract a better video representation. The proposed method achieves competitive results on multiple action recognition datasets. +The core idea of using KMA to align spatial information for the temporal operation is very interesting and novel to the best of my knowledge.\n\n+The paper is overall well-written.\n\n+The mathematical derivation is solid, and the results are strong.\n\n+The proposed method is robust and can achieve consistent performance improvement using different backbones.\n\n\n-The analysis of what has been learned by the model is not thorough enough, see the questions section\n\n-Some details should be presented with more clarity, see the limitations section\n\n Q1. Have the authors investigated how the temporal sampling rate affects the proposed method? There are two ways to examine this.\n1. If the input is temporally downsampled by x2, x4 or x8, how does this affect the model performance?\n2. If ATA is done on every other frame, what is the performance difference?\n\nQ2. Does the model learn a better motion representation? This can be done by conducting the arrow-of-time experiment as in [A,B]\n[A] Xie, Saining, et al. \"Rethinking spatiotemporal feature learning: Speed-accuracy trade-offs in video classification.\" Proceedings of the European conference on computer vision (ECCV). 2018.\n[B] Zhou, Bolei, et al. \"Temporal relational reasoning in videos.\" Proceedings of the European conference on computer vision (ECCV). 2018.\n\nQ3. How do global and local motion affect the method? According to the authors, the model performs better on SSv2. Is this because the videos in SSv2 have more global motion? It would be great to see some analysis on this. 1. This paper needs some additional analysis of what has been learned by the proposed method. See the questions raised in the Questions section.\n2. This paper should better describe the efficiency part. Though the proposed method is parameter-free, it does incur considerable additional computational cost compared with the TimeSformer backbone, as shown in Table 2. The authors should adjust the tone to help readers understand the additional computational cost" ]
[ -1, -1, -1, -1, -1, 4, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, 5, 5, 4 ]
[ "-yO3DWKDtP", "6PwBXkVpsBI", "2gOZ_O-bMYi", "N1P8dQfPYjH", "eqiHNNRMDcb", "nips_2022_pcgMNVhRslj", "nips_2022_pcgMNVhRslj", "nips_2022_pcgMNVhRslj", "nips_2022_pcgMNVhRslj" ]
nips_2022_WyQAmQ8WIU
SlateFree: a Model-Free Decomposition for Reinforcement Learning with Slate Actions
We consider the problem of sequential recommendations, where at each step an agent proposes some slate of $N$ distinct items to a user from a much larger catalog of size $K \gg N$. The user has unknown preferences towards the recommendations and the agent takes sequential actions that optimise (in our case minimise) some action-related cost, with the help of Reinforcement Learning. The number of possible item combinations for a slate is $\binom{K}{N}$, an enormous number rendering value iteration methods intractable. We prove that the slate-MDP can actually be decomposed using just $K$ item-related $Q$ functions per state, which describe the problem in a more compact and efficient way. Based on this, we propose a novel model-free SARSA and Q-learning algorithm that performs $N$ parallel iterations per step, without any prior user knowledge. We call this method SlateFree, i.e. free-of-slates, and we show numerically that it converges very fast to the exact optimum for arbitrary user profiles, and that it outperforms alternatives from the literature.
Reject
This paper considers reinforcement learning with unordered slate recommendations and shows that this problem can be decomposed into one Q-value per available item as compared to one value per possible slate in existing work. The authors derive a Bellman equation for this formulation and propose model-free algorithms based on it. They show on small synthetic tasks that these methods converge and perform favorably compared to existing methods. The reviewers appreciated the new decomposition and its potential to enable significantly more efficient algorithms. However, several reviewers also voiced concerns about the clarity of the presentation, the practicality of the approach and the strength of the assumptions. The authors were able to remove a key limiting assumption (costs only depend on state) in the rebuttal revision of the paper. This was viewed very positively and alleviated the concerns about the strong assumptions. However, the concerns about clarity and practicality could not be fully addressed by the authors' response. For this reason, the paper is recommended to be rejected. Based on the reviewers' comments, the discussions and the AC's own reading of the paper, the following suggestions would make this a very strong paper: * In the fully general setting where costs are action-dependent, the costs in the Bellman equations are policy dependent and therefore change throughout the execution of the algorithms. As the authors acknowledge, this makes it unclear whether the algorithms provably converge. The authors demonstrate good empirical behavior but their experiments are limited to small toy problems in the absence of function approximation. However, the combination of function approximation and changing (policy-dependent) cost functions may lead to less stable algorithms, a major concern in practice. A theoretical convergence analysis or empirical results with function approximation on more realistic problems would be extremely valuable here. * The paper lacks a more thorough discussion of the relation to prior works and settings. The questions raised in the reviews around generality and assumptions in this paper show that readers are left wondering what exactly enables the results as compared to prior work. A better discussion of the exact setting and comparison to other works would be very valuable and a better use of space in the main paper than the short proofs (which could be moved to the appendix). * The addition of simple illustrative examples would greatly help convey the intuition behind the decomposition. * The setting in this paper considers the slates being unordered, and ordering effects seem to not be captured by this formulation. This is in contrast to existing work. The reader may wonder whether this is crucial for the proposed decomposition. Unordered slates are certainly a deviation from previous works in this area and most practical recommender system settings, which would limit the applicability of this approach. Carefully discussing this and, if necessary / possible, extending this to ordered slates would strengthen the paper.
train
[ "8g89dNE-98t", "khJ61qWjc5", "PcLhW95zkqP", "Bjl_ELZZmfd", "3XduNou_bgT", "19_C-aUvMuL", "5d5180FiQfC", "ON874jfFsEI", "CvwUcDCfdf", "beN5PpN3RI", "6SAgFcKTOyV", "WujJHVMsZUo" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are grateful to the reviewer for the very interesting comments. \n\n- We understand the reviewer's worries in the case when the cost depends on both state and action. But, in fact, the situation is very simple:\n\n(case SARSA) Indeed, in the case of SlateFree-SARSA one needs to calculate $c(s,j)$ at each step. We have mentioned in Section 3 (page 7, paragraph under eq.(25) in blue) that the item-cost from Definition 3 needs to keep in memory estimates of the frequencies $r_{s,j}$ and $\\pi_s(\\omega)$ which are just the frequencies to recommend item $j$ when visiting state $s$ and the frequencies to recommend slate $\\omega$ when visiting state $s$, respectively. Obviously, these frequencies are not related to the user preferences, they are just related to the policy updates. \n\n(case Q) Luckily, as shown in eq.(26), for the SlateFree Q-updates we do not need to use the item-cost $c(s,j)$ in the updates, but rather the original slate-cost $c(s,\\omega)$ at each step. This comes directly from the updated Theorem 2 in the rebuttal version. Hence for the SlateFree Q-updates one does not need to calculate anything new related to item-costs, just use the known slate-costs in the original formulation. \n\n- Considering the follow-up question about the definition of the slate-MDPs: The user's response determines the probability to transit to state $S'$ given current state and recommendation slate $S\\times A$. This means, if the user watches video $S$ and is suggested slate $A$ of say three items (I_a, I_b, I_c), the probability distribution to go to next state S' depends on the user's behaviour. For example, a user might choose item I_a with probability 1. Some other user might chose between I_b, or I_c with probability 50-50. Another user might decide that the suggested items (I_a, I_b, I_c) are not interesting and can choose item I_d and I_e from the catalog (outside the recommendations) with probability 75-25. In general, the reaction of some user to the recommended slate is summarised in the transition probability to S', given the current state and the slate. This probability is sampled in the Q-updates eq.(25) and eq.(26) of our SlateFree method. The recommender gets feedback from users by the transitions from S to S'. In the SlateFree, which updates all items in the slate per slot, we do not care exactly which item is the one the user will chose, we rather care which slate of items is the best to be recommended, so that the user will minimize the cost with their special behaviour. \n\n- Following the reviewer's comments we plan to update notation in the final version of our paper. We can use e.g. \\tilde{Q} and \\tilde{c} for the newly introduced item-values and item-costs.\n\n- Also, we agree with the reviewer that the text should be more consistent related to K or K-1 over N combinations and we will do the appropriate corrections in the final version. \n\n- We will definitely include the example requested by both reviewer 8dBR and CrKM in the final version of our work. We have confirmed this also in our response to the reviewer CrKM.\n\nWe would like to thank again the reviewer 8dBR for the careful reading and the constructive comments. \n", " We sincerely thank the reviewer for the constructive comments. Indeed it is a good idea to include a simple example that can provide intuition and clarity. 
In the final version of our paper we plan to move some proofs to the Appendix, as suggested, and include an example case of a sequential recommendation system and a user with preferences towards some category, e.g. sports. Using such a scenario we will explain intuitively how the algorithm updates the Q-values and finds the optimal policy.\n\nConcerning the comment about the practicality of our approach: Current sequential recommendation systems already need to cope with such large catalogs (K=1e7) as the reviewer mentions, and do so through a combination of deep reinforcement learning and policy gradients. Such variations are in essence value approximations of temporal difference updates coming from the basic Bellman equations. \n\nBut, it is exactly these Bellman equations that we show in our work can be written in a more compact way using our suggested SlateFree decomposition. In this way we already show that our method can solve much larger problems than the ones treated by the original tabular Q-learning. Our method is the first step, which establishes the decomposition and the new Q-learning algorithm. \n\nAs a follow-up to this work we plan to extend the suggested SlateFree Q-learning algorithm to incorporate value approximation algorithms (using Deep RL and/or policy gradients) in order to support real-world-size catalogs (K=1e7) and also compare our method with benchmark tasks, like the simulation in SlateQ for very large datasets.\n\nThank you very much again for your careful evaluation. ", " Thanks to the authors for their response and clarification.\n\nRemoving the assumption of dependence on the state is nice. However, I guess for implementing the SlateFree algorithm, in particular for the case when the cost depends on both states and actions, it still needs to calculate $c(s,j)$ at each step. This calculation also needs a large space based on Definition 3. I am not even sure how to do the calculation in a completely model-free and data-driven approach like vanilla Q-learning or SARSA, because it appears that just the cost for the current $c(s, w)$ is insufficient.\n\nI also have a follow-up question about the definition of the slate-MDPs. The transition kernel is a function from $S\times A \rightarrow S'$. The action space is the unordered slate of the recommended items. How is the users' behavior modeled in the definition? Does that mean the users' response has nothing to do with the MDP? Then how does the recommendation system get feedback from users?\n\nI strongly recommend using different notations to differentiate between $Q(s, w)$ and $Q(s, j),$ as well as the definitions of the cost function. It is very confusing.\n\nThe reason why I asked how the 126 is calculated is that in the introduction you said K choose N but you used K-1 choose N in the simulation section instead. It is preferable to maintain consistency. I also agree with reviewer CrKM that it would be quite helpful if you could include a simple example (maybe the YouTube example you mentioned) in the definition of the slate-MDPs.", " Firstly, thanks to the authors for responding substantively and updating their proofs to remove the most unrealistic assumption. The decomposition they propose is interesting.\n\nBecause of this I am raising my score. 
I still do think the paper is a little hard to follow, and in particular it would be great to provide some simple examples (but I also acknowledge space is tight; maybe some proofs could be moved to an appendix).\n\nMy main remaining concern is the practicality of this approach. Many (most?) recommender systems have an extremely large number of items. The simulation here only goes up to K=100 (many real recommender systems might have K=1e7 or more items), and this approach seems to require a lot of observations under a stationary policy (to avoid imposing a user behavior model like in SlateQ).\n\nI think it would be improved by evaluating on some external benchmark tasks, such as the simulation in SlateQ (and ideally a real recommender system, although that may not be practical).", " We thank the reviewer for the critical reading of our manuscript. Since more than one reviewer raised questions about the generality of the decomposition related to the state-dependent cost assumption, we first provide a general answer and then respond to the reviewer's comments in detail. Our method actually solves the general problem without independence assumptions and below are the arguments.\n\nTwo important remarks - TO ALL REVIEWERS: \n\n[a] In the rebuttal version we have completely removed the assumption about state dependent cost. This was included in the submitted version as a strong -- but unnecessary -- assumption in our analysis. Besides, a similar assumption was used in [Ie et al, IJCAI'19]. As it turns out, this was a very conservative choice for SlateFree: we have found out that the proofs of Theorem 1 and Theorem 2 do not actually need such an assumption (see rebuttal version, modifications in blue). Instead, we need to introduce in the Theorems the marginal cost-item from Definition 3 $c(s,j)$, and use equality (21) for deterministic policies. (Both the cost-item definition and eq.(21) were already presented in the first submitted version.) In this way we can easily generalize the proofs. The incorporation of the marginal cost-item $c(s,j)$ in the Theorems 1 and 2 slightly modifies the SlateFree-SARSA and -Q methods. Eventually, this explains why, when we use slate-dependent costs in the numerical evaluation section, the SlateFree version converges to the vanilla tabular-Q solution, see last subsection and Fig.2 (right).\n\n[b] Definition 2 is a probability mass function (see Property 3) that results as the marginal of the transition probability given the full slate. This is clearly shown in Property 1, and Property 2, where all the remaining recommendation entries apart from the one occupied by item j are taken in expectation. We do not make any independence assumptions here, we just do a transformation of the original distribution based on marginals.\n\nAll these small modifications and comments can be found in the rebuttal version of our work marked with color blue. \n\n------------------------------------------\nSpecific to reviewer MKZi\n------------------------------------------\nAs explained above, the assumption made about the state-dependent cost in the original submission has now been removed in the rebuttal version. Hence the MDP-decomposition is general for any cost depending on both states and actions. This is why SlateFree can solve the general problem in the last subsection of the numerical evaluations, treating the whole content of the slate jointly. 
\n\n- As the reviewer well observed, the cost assumption was not necessary, and the SlateFree method works in the numerical evaluations also for costs that depend on the whole action-slate. We verify this in the rebuttal version, where both Theorems 1 and 2 are now proved for general costs, and the SlateFree-SARSA and -Q have been modified to treat such costs. \n\n- Considering the comparison with SlateQ: SlateQ essentially uses item-Q values (as in our SlateFree method), but with a different definition. In their approach only one item is updated per step, the one that the user chooses out of the recommended slate. SlateQ completely disregards the rest of the unselected items inside the recommendation slate. This is why, in most cases, also for the Bernoulli distribution, our SlateFree converges faster than SlateQ, because it updates all items in the recommended slate, not just the one selected. We have included in blue in the rebuttal version a comment about this at the end of Section \"4.A Small scenario\".", " We thank the reviewer for the critical reading of our manuscript. Since more than one reviewer raised questions about the generality of the decomposition related to the state-dependent cost assumption, we first provide a general answer and then respond to the reviewer's comments in detail. Our method actually solves the general problem without independence assumptions and below are the arguments.\n\nTwo important remarks - TO ALL REVIEWERS: \n\n[a] In the rebuttal version we have completely removed the assumption about state dependent cost. This was included in the submitted version as a strong -- but unnecessary -- assumption in our analysis. Besides, a similar assumption was used in [Ie et al, IJCAI'19]. As it turns out, this was a very conservative choice for SlateFree: we have found out that the proofs of Theorem 1 and Theorem 2 do not actually need such an assumption (see rebuttal version, modifications in blue). Instead, we need to introduce in the Theorems the marginal cost-item from Definition 3 $c(s,j)$, and use equality (21) for deterministic policies. (Both the cost-item definition and eq.(21) were already presented in the first submitted version.) In this way we can easily generalize the proofs. The incorporation of the marginal cost-item $c(s,j)$ in the Theorems 1 and 2 slightly modifies the SlateFree-SARSA and -Q methods. Eventually, this explains why, when we use slate-dependent costs in the numerical evaluation section, the SlateFree version converges to the vanilla tabular-Q solution, see last subsection and Fig.2 (right).\n\n[b] Definition 2 is a probability mass function (see Property 3) that results as the marginal of the transition probability given the full slate. This is clearly shown in Property 1, and Property 2, where all the remaining recommendation entries apart from the one occupied by item j are taken in expectation. We do not make any independence assumptions here, we just do a transformation of the original distribution based on marginals.\n\nAll these small modifications and comments can be found in the rebuttal version of our work marked with color blue. 
\n\n---------------------------------------------\nSpecific to reviewer 8bDr\n---------------------------------------------\n\n- We understand that the decomposition is not straightforward and have tried in the rebuttal version to include certain small additions (space permitting) to help the reader; see text in blue.\n\nTo provide a bit of intuition about our method: since the SlateFree-Q algorithm will update the Q-values of all items which take part inside the slate, this will have an additive effect on the item Q-value each time an item appears in some slate. As an example: if the specific 4-slate (i,j,k,l) is recommended at state s, then updating item i's value in Q(s,i) will provide information also about other slates where i appears, such as (i,j,l,m) or (i,l,m,f), which have not been tested before. Hence, item-Q values are updated more efficiently than in the vanilla-Q case.\n\nRegarding the reviewer's questions:\n\n- (1.) We have corrected Line 124. \n\n- (2.) The reviewer's comment about state-space S and catalog K being identical is valid. We use two different symbols because the state can be more general, e.g. a history of viewed items within a window of some size (say, the last five viewed videos on YouTube). Then the state-space would be different from the catalog. We also want to consider such cases, and a comment is included in the rebuttal version (in blue).\n\n- (3.) In the MDP formulation we consider that the catalog is fixed, so that the viewed item will be one from the catalog K. The state-space is the catalog, but as we briefly mention in the rebuttal version, it can be all the combinations of m consecutive viewed items. \n\nWhat we mean in lines 109-110 is that the user has the option either to select one item from the recommended slate, or to completely ignore the slate and pick at random (going to the search bar and following their own preferences) one item from the general catalog. This is also shown in the User-examples of the numerical evaluation. The MDP does not stop when the user picks an item, and we do not need to include a terminal state. Actually, the length of each session follows a Geometric distribution of parameter lambda (the discount). We have included in blue a modification in the text of the rebuttal version to clarify.\n\n- (4.) In the case K=10 and N=4, tabular Q-learning needs to store per state all combinations of items from the catalog, excluding the currently viewed item. So the number of these combinations is 9-over-4, i.e. binomial(9,4) = 126 per state, so the Q-table size is 10x126, where 10 is the number of catalog items (states). The SlateFree algorithm needs only 10x9 entries in the table of item Q-values, due to the decomposition we have performed. So, we obtain an important reduction in memory space.", " We thank the reviewer for the critical reading of our manuscript. Since more than one reviewer raised questions about the generality of the decomposition related to the state-dependent cost assumption, we first provide a general answer and then respond to the reviewer's comments in detail. Our method actually solves the general problem without independence assumptions and below are the arguments.\n\nTwo important remarks - TO ALL REVIEWERS: \n\n[a] In the rebuttal version we have completely removed the assumption about state dependent cost. This was included in the submitted version as a strong -- but unnecessary -- assumption in our analysis. Besides, a similar assumption was used in [Ie et al, IJCAI'19]. 
As it turns out, this was a very conservative choice for SlateFree: we have found out that the proofs of Theorem 1 and Theorem 2 do not actually need such an assumption (see rebuttal version, modifications in blue). Instead, we need to introduce in the Theorems the marginal cost-item from Definition 3 $c(s,j)$, and use equality (21) for deterministic policies. (Both the cost-item definition and eq.(21) were already presented in the first submitted version.) In this way we can easily generalize the proofs. The incorporation of the marginal cost-item $c(s,j)$ in the Theorems 1 and 2 slightly modifies the SlateFree-SARSA and -Q methods. Eventually, this explains why, when we use slate-dependent costs in the numerical evaluation section, the SlateFree version converges to the vanilla tabular-Q solution, see last subsection and Fig.2 (right).\n\n[b] Definition 2 is a probability mass function (see Property 3) that results as the marginal of the transition probability given the full slate. This is clearly shown in Property 1, and Property 2, where all the remaining recommendation entries apart from the one occupied by item j are taken in expectation. We do not make any independence assumptions here, we just do a transformation of the original distribution based on marginals.\n\nAll these small modifications and comments can be found in the rebuttal version of our work marked with colour blue. \n\n---------------------------------------------\nSpecific to reviewer CrKM\n---------------------------------------------\n\n- The proposed decomposition is indeed not straightforward, but it is original; it makes use of marginal quantities (in cost, transition, Q-value) that we define and work with. These quantities do not constitute independence assumptions; rather, they use the original model elements to reformulate the MDP, taking expectations. We have included in the rebuttal version certain short (space permitting) additions that may help in clarifying this.\n\nTo provide a bit of intuition about our method: since the SlateFree-Q algorithm will update the Q-values of all items which take part inside the slate, this will have an additive effect on the item Q-value each time an item appears in some slate. As an example: if the specific 4-slate (i,j,k,l) is recommended at state s, then updating item i's value in Q(s,i) will provide information also about other slates where i appears, such as (i,j,l,m) or (i,l,m,f), which have not been tested before. Hence, item-Q values are updated more efficiently than in the vanilla-Q case.\n\n- The simulation setup is indeed simple with small catalog sizes; it does, however, sufficiently serve the purpose we want, that is, to validate our decomposition, especially for small scenarios where the tabular-Q version is tractable and we can find the optimum. Note that with large real datasets no tabular-Q method can be used, because the action-space would be immense, making the problem intractable. Deep-RL should be used for a large state-space rather than a large action-space; hence other approximative methods such as amortized inference (Wiele et al 2020) should be applied. The decomposition we propose here is exact and can be used as a prior transformation in Slate problems, before applying more elaborate deep architectures like the ones mentioned. 
This is an interesting topic for future work.\n\n- We agree with the reviewer that the entire slate should be accounted for, and this is exactly what the SlateFree decomposition does using marginal quantities for transitions, Q-values and costs, rather than independence assumptions: It takes into account the content of all items (as the sports example given by the reviewer suggests). The item-transition probability, i.e. the transition from state s to s' given item j in the slate (see Def.2) is actually the marginal of the slate-transition probability; hence item-probabilities summarize information over all slates including item j, but they do not assume any independence. ", " We thank the reviewer for the critical reading of our manuscript. Since more than one reviewer raised questions about the generality of the decomposition related to the state-dependent cost assumption, we first provide a general answer and then respond to the reviewer's comments in detail. Our method actually solves the general problem without independence assumptions and below are the arguments.\n\nTwo important remarks - TO ALL REVIEWERS: \n\n[a] In the rebuttal version we have completely removed the assumption about state dependent cost. This was included in the submitted version as a strong -- but unnecessary -- assumption in our analysis. Besides, a similar assumption was used in [Ie et al, IJCAI'19]. As it turns out, this was a very conservative choice for SlateFree: we have found out that the proofs of Theorem 1 and Theorem 2 do not actually need such an assumption (see rebuttal version, modifications in blue). Instead, we need to introduce in the Theorems the marginal cost-item from Definition 3 $c(s,j)$, and use equality (21) for deterministic policies. (Both the cost-item definition and eq.(21) were already presented in the first submitted version.) In this way we can easily generalize the proofs. The incorporation of the marginal cost-item $c(s,j)$ in the Theorems 1 and 2 slightly modifies the SlateFree-SARSA and -Q methods. Eventually, this explains why, when we use slate-dependent costs in the numerical evaluation section, the SlateFree version converges to the vanilla tabular-Q solution, see last subsection and Fig.2 (right).\n\n[b] Definition 2 is a probability mass function (see Property 3) that results as the marginal of the transition probability given the full slate. This is clearly shown in Property 1, and Property 2, where all the remaining recommendation entries apart from the one occupied by item j are taken in expectation. We do not make any independence assumptions here, we just do a transformation of the original distribution based on marginals.\n\nAll these small modifications and comments can be found in the rebuttal version of our work marked with color blue. \n\n---------------------------------------------\nSpecific to reviewer 1QMW\n---------------------------------------------\n\n- Following the reviewer's suggestion, we have compared our method also against vanilla-SARSA. The plots are in cyan (behind the blue colour of vanilla-Q), and the SARSA code is included in the updated version of the google colab file. We have not observed substantial differences compared to the vanilla-Q method. 
The text in the rebuttal version is updated accordingly.\n\n- In L151, there is no missing summation; here we use the fact that the probability of the sum of disjoint events (slates) is the probability of the union of these events.\n\n- Indeed the state is the currently (or last) viewed item, so the transition is from the current item to the next one, given the recommended slate. The next state need not belong to the set of recommended items, but can be any item inside the catalog K.\nBut, more generally, the state can be the sequence of the x (say five) last viewed items. Such an option can be included in our approach, so the state-space would be different from the catalog in this case. We also include a comment about this in the revision.\n\n- The two probabilities mentioned by the reviewer are very different: The probability $P[\omega|s,j]$ is the probability of recommending the specific slate $\omega$, given that we are at state s and that item j should be included in the recommended slate (remember there is more than one slate containing item j). The probability $P[\omega\in A(s;j)|s]$ is the probability that ANY slate $\omega$ that includes item j is recommended, given that we are at state s.\n\nWith the above clarifications we hope to have convinced the reviewer that the two Theorems 1 and 2 now have full generality to solve the Slate-MDP. We have updated, as mentioned, the Theorems and RL updates in the rebuttal version to include state and action-slate dependent costs.", " This paper tackles the problem of sequential slate recommendations, where at each step the agent has to choose from a combinatorially large action set. It defines some assumptions for the structure of the MDP (called SlateFree-MDP) and then derives the Bellman equations for this setup. The modified Q-learning and SARSA algorithms then operate on these modified Bellman equations. Strengths:\n- The decomposition of the original MDP results in significantly reduced computation (due to the independences introduced)\n- Short proofs help understand the properties and definitions as the reader proceeds.\nWeaknesses:\n- The assumptions, particularly Definition 2 along with the state-(only-)dependent cost, essentially reduce the problem to (almost) K independent sequential slot problems. This is far from solving the original slate problem. This is further highlighted by the fact that this method needs to store at most NK q-values overall.\n- Experiments: No comparison against vanilla-SARSA. Given that the reward depends only on the state (viewed item), along with Definition 2, I believe that SARSA would perform better than vanilla-Q and should be an essential benchmark.\nMinor typos/Corrections:\n- L54: "in the exploitation"\n- L151: Missing summation in line 2\n - The definition of the state is not quite clear to me. If the state is the last-viewed item, is the transition the next viewed state under a given policy?\n- What is the difference between the terms $P^\pi[\omega|s,j]$ and $P[\omega \in A(s;j)|s]$, given that the policy depends only on the current state? \n- How applicable are these structural assumptions to some real-world data/application?\n The limitations are partially unaddressed as mentioned above, and the societal impact of using this on recommender systems is not discussed.\n", " This work proposes (under some assumptions) a novel approach to RL for slate based recommendations. 
In general, standard RL approaches are not tractable since the action space when considering slates of recommendations is combinatorially large.\n\nHere, they introduce assumptions that allow them to decompose the problem into learning action-values on each item in a slate.\n\nThey test the approach on a simulated environment they designed. Strengths:\n\nRL for slate-based recommendations is an interesting topic.\n\nWeakness:\n\nI found the decomposition proposed here difficult to follow. It would be helpful to try and provide more intuition behind this approach.\n\nI have some concerns about the realism of the assumptions and whether it's solving the main issues of interest in slate-based recommendations (see questions).\n\nThe approach is only tested on a simple simulation tailored to this approach. There are a number of public (real) datasets that can be used for evaluating this approach using off-policy estimators, and SlateQ released a simulation with their method. Testing on a broader range of environments (particularly with real data) would help better understand the usefulness and realism of this approach. Isn't the assumption that the cost depends only on the state and not the action, $c(s, \\omega)=c(s)$, very unrealistic? In particular, in almost all recommendation problems the reward is likely to be something like the expected user engagement, which is action-dependent. I'm aware you relax this assumption in a specific numerical experiment (line 318), but this is still very different from the general case of action-dependence.\n\nRelated to the above point: much of the interest in slate-based approaches is that the value of recommending an item depends on other items in the slate. For example, if the state allows us to infer that the user is interested in either basketball or baseball, showing a (slate size 2) basketball recommendation and a baseball recommendation is probably a good slate. But showing two (potentially relevant) basketball videos is likely to be suboptimal since it is less diverse, so the additional value of the second basketball video is much lower. It seems that the approach proposed here does not address this problem of diversity, and that the value of an item in a slate may depend on the other items in the slate?\n\nWhy use $\\omega$ to denote the action instead of $a$?\n I don't think this work discusses the limitations of this approach clearly. It would be helpful to test on some other environments and explain more clearly the limits of the assumptions needed for this decomposition.\n", " This paper studies the sequential recommendation problem and shows that the Q functions can be decomposed based on the state-dependent cost function assumption. Evaluation results are also presented in the paper to verify the efficiency of the algorithm. **Strengths**\n1. The sequential recommendation problem is interesting.\n\n**Weaknesses**\n\n1. The presentation is a little hard to follow.\n2. The decomposition is only possible when the cost is state-dependent; no further discussions are included for other cases. For example, what if the Q function can only be decomposed approximately, or is local convergence achievable in other scenarios?\n3. The technical contributions seem limited. 1. Line 124, reward→cost.\n2. If $S=K,$ why not just use one notation? \n3. I have a question about the model. From the definition in Sec.2, the state s is the currently viewed item. Can the viewed items change over time or do they remain the same available $K$ items? 
If the states can change, I didn’t see the formal definition of the state space. If not, what do you mean (lines 109–110) by saying \"the user moves to state $s’$ by either picking one of the recommended items or selecting something else from the catalog\"? Will the MDP stop if the user picks a recommended item? If so, the MDP should include a terminal state, and all the proofs need to be reconsidered. \n4. In the case where K=10 and N=4, how is the reduction of the Q-table memory from 10x126 to 10x9 calculated? I don't think this paper has any potential negative societal impact.", " This paper considers the problem of N-item recommendation systems, which is intractable due to the combinatorial nature of the problem. The authors introduce a slate-MDP for solving such problems. Through theoretical analysis, the authors show that the slate-MDP can be decomposed using just item-related Q functions per state. This paper also proposes a SlateFree algorithm which is shown to be insensitive to the slate size. \nSome numerical experiment results are presented to show the effectiveness of the proposed algorithm. The proposed algorithm converges faster compared to the SOTA algorithms. \n The problem considered in this paper is promising since it is a generalization of the standard recommendation system problem. Since the action space is extremely large, it is important to have a time-efficient algorithm. The proposed algorithm is shown to achieve good performance for such problems. Sufficient analysis is provided to support the theorems in this paper. The idea of decomposing the slate-based Q function into individual item-based Q functions makes sense to me. Overall, this paper is well written and easy to follow. The second assumption seems to be essential. By assuming the reward is only a function of individual items, the combinatorial problem is simplified. I can understand this assumption is essential for getting the theorems. The numerical evaluations show this assumption can be relaxed. But a similar result does not hold for the SlateQ algorithm. Can you provide more details about this comparison? The theorems of both algorithms assume independence. \nIn this experiment, the SlateQ algorithm converges faster than the proposed one for the User1 setup. What is the reason that SlateQ performs better when the user selects the item by sampling from a Bernoulli distribution? It will be great if you can provide more analysis of the experiment results. \n Yes. The authors provide adequate information about this." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "PcLhW95zkqP", "Bjl_ELZZmfd", "19_C-aUvMuL", "5d5180FiQfC", "WujJHVMsZUo", "6SAgFcKTOyV", "beN5PpN3RI", "CvwUcDCfdf", "nips_2022_WyQAmQ8WIU", "nips_2022_WyQAmQ8WIU", "nips_2022_WyQAmQ8WIU", "nips_2022_WyQAmQ8WIU" ]
nips_2022_zzDrPqn57DL
BEVFusion: A Simple and Robust LiDAR-Camera Fusion Framework
Fusing the camera and LiDAR information has become a de-facto standard for 3D object detection tasks. Current methods rely on point clouds from the LiDAR sensor as queries to leverage the feature from the image space. However, people discovered that this underlying assumption makes the current fusion framework unable to produce any prediction when there is a LiDAR malfunction, whether minor or major. This fundamentally limits the deployment capability in realistic autonomous driving scenarios. In contrast, we propose a surprisingly simple yet novel fusion framework, dubbed BEVFusion, whose camera stream does not depend on the input of LiDAR data, thus addressing the downside of previous methods. We empirically show that our framework surpasses the state-of-the-art methods under the normal training settings. Under the robustness training settings that simulate various LiDAR malfunctions, our framework significantly surpasses the state-of-the-art methods by 15.7% to 28.9% mAP. To the best of our knowledge, we are the first to handle realistic LiDAR malfunctions, and our framework can be deployed to realistic scenarios without any post-processing procedure.
Accept
The paper proposes a method to fuse two sources of information for Bird’s Eye View (BEV) detection, namely multi-view images and LIDAR data, in a way that any data defects in one source of information do not affect the other. Most existing camera-lidar fusion works decorate lidar points with image features and then perform detection in 3D/BEV space. This work leverages the recent Lift-Splat-Shoot work for cameras, which allows one to map both camera and lidar inputs to BEV space, before fusing and applying the detection head. The reviewers appreciate the identification of the problem that present fusion methods are susceptible to damage in one of the two sources of information, the simplicity of the method, and its good empirical performance. They raise concerns regarding its novelty, given the fairly obvious design choices of the present method. The rebuttal submitted by the authors presents more empirical results and ablations. Most reviewers appreciate the contribution of the paper, and the paper is suggested for publication.
train
[ "xsk31P8dGT", "pqUNCBau8I", "6xbmrA2vqKY", "3XFfOqOiijp", "SOvZaGyaBk", "xncPIadu0wG", "lHXMdlupFF", "M4JNZ-SY-vp", "Hd6Ce57Ircd", "uVn6Us0aP_", "HNAOCsCSzZw", "B8g021IaaAm", "VrENUT1h6-" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " #### `Q6: Can you please provide results on at least one more dataset with high quality Lidar such as the Waymo Open Dataset?`\nA6: Thanks for your suggestion. We train BEVFusion equipped with PointPillars as LiDAR stream on WaymoD5-3classes and it barely improves the baseline. Due to the time constraints of rebuttal, we could not finetune the hyperparameters in detail and we will stress the problem in the future.\nAs discussed in BEVFormer [1] and TransFusion [2], we suspect the reason is that the camera system of Waymo can not capture the whole scene around the ego car, and thus the camera stream cannot perform full camera BEV space as it does on Nuscenes. Furthermore, the camera-only detectors usually do not reach their potential on Waymo, i.e., BEVFormer achieves 0.069 L2/mAPH (IoU=0.7).\n\n[1] Zhiqi Li, et al. BEVFormer: Learning Bird's-Eye-View Representation from Multi-Camera Images via Spatiotemporal Transformers. In European Conference on Computer Vision (ECCV), 2022.\n\n[2] Xuyang Bai, et al. Transfusion: Robust lidar-camera fusion for 3d object detection with transformers. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022. \t\t\n#### `Q7: What is the latency of the approach and how does it compare to the baselines?`\nA7: Thanks, we show the latency and memory usage of BEVFusion and its LiDAR and camera streams in the Table below. Latency is measured on the same machine with an Intel CPU (Xeon Gold 6126 @ 2.60GHz) and an Nvidia V100 GPU with a batch size of 1. Note that our latency bottleneck is the camera stream rather than our fusion framework (i.e., dynamic fusion module). In our camera stream, the 2D->3D projector adopted from LSS costs more than 957 ms, which can be improved through engineering deployment, i.e., concurrent processing.\n\n| Modality | | &#124; | PointPillars | | &#124; | CenterPoint | | &#124; | TransFusion-L | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Camera | LiDAR | &#124; | Memory (MB) | Latency (ms) | &#124; | Memory (MB) | Latency (ms) | &#124; | Memory (MB) | Latency (ms) |\n| | ✓ | &#124; | 7190 | 189.38 | &#124; | 12468 | 199.87 | &#124; | 12536 | 263.61 |\n| ✓ | | &#124; | 7948 | 1264.55 | &#124; | 7956 | 1278.94 | &#124; | 7950 | 1264.45 |\n| ✓ | ✓ | &#124; | 7968 | 1513.65 | &#124; | 18078 | 1464.21 | &#124; | 18086 | 1529.66 |", " We thank all the reviewers for their time, insightful suggestions, and valuable comments. We are glad that all reviewers find our work is well motivated with valuable task definition (w9Mt, 2Tdc, 6d2t, CaaL), simple and effective (w9Mt, fBas, 2Tdc, 6d2t, CaaL),with clear explanation of the previous work (w9Mt, 2Tdc, CaaL), with comprehensive experiments on robustness (w9Mt, 2Tdc, 6d2t, CaaL), and detailed model explanation (6d2t, CaaL). \n\nBefore we respond to each reviewer's comments in detail, we revise the manuscript according to their suggestions, and we believe this makes our paper much stronger. Here is a list of changes we made:\n\n1. In Abstract and Sec.2, we fix the typo and related work discussion.\n2. In Appendix C.1, we add an ablation study of Dynamic Fusion Module under robustness settings.\n3. In Appendix C.2 and C.3, we add experiments for robustness analysis on both modality malfunctions and inferior image conditions.\n4. In Appendix D, we add experiments on the performance gain based on the object distance range.\n5. In Appendix E, we provide latency and memory footprint comparisons.\n6. 
In Appendix F, we provide more visualization results for failure cases and analysis.\n\nNote that, in the revised version, we mark the text modifications in blue.", " We appreciate the detailed and constructive feedback from reviewer 6d2t. Please see our detailed responses below.\n#### \`Q1: Related work section is confusing in a few places and can be streamlined further. The camera detectors section...Range images...There is earlier work to do this...a lot of this work came before camera started exploiting the BEV view.\`\nA1: Thanks. We have fixed that in the revision.\n#### \`Q2: Work does not explain the intuition why the model needs to be trained in two stages. What happens if it's trained in a single stage?\`\nA2: Thanks for the suggestion. In the second stage of training, we can freeze the image-view encoder and 3D backbone to avoid accumulating gradients and save GPU memory and training time. For example, with the pre-trained TransFusion-L and TransFusion-camera, our BEVFusion needs only 8 hours of training on 8 V100 (32G) GPUs, while training from scratch requires at least 6 days on 8 A100 (80G) GPUs. Therefore, such two-stage training is easy to re-implement and memory-efficient.\n#### \`Q3: 14: \"Note that we do not conduct data augmentation when multi-view image input is involved...\" Is this a limitation of camera fusion methods in general or something specifically lacking in your case? Can you please clarify?\`\nA3: Adding camera augmentation requires further alignment between augmented RGB features and 3D ground-truth boxes, and a joint LiDAR-camera augmentation is even more complicated. To maintain simplicity and fairly show the effectiveness and generalization ability of our framework, we do not rely on special training tricks. It is worth noting that one setting of our method can achieve state-of-the-art results without such augmentation. \n#### \`Q4: It is unclear whether the approach is SOTA ... please explicitly contrast your performance relative to the nuScenes leaderboard (at least for the published approaches) ... your naming may be confusing / too generic.\`\nA4: Thanks. Our result, 69.2% mAP and 71.8 NDS, achieves SOTA compared to published approaches and is publicly available on the nuScenes leaderboard. We cannot reveal more information due to the double-blind policy. We kindly disagree with the comment about naming; it is a lovely coincidence that these two papers are concurrent works and released their results almost at the same time. Notably, our paper proposes a robust and general framework for LiDAR-camera fusion rather than a specific 3D perception method. \n#### \`Q5: Can you provide an analysis of performance as a function of distance to object, and compare to a standard lidar approach and TransFusion/DeepFusion etc?\`\nA5: Thanks for the suggestion. We show the mAP results on different subsets based on the object distance range in the two tables below. We compare BEVFusion equipped with CenterPoint, PointPillars, and TransFusion-L to its single-modality streams below. BEVFusion boosts its camera stream by 10%-35.4%, 14.8%-50.2%, and 16.3%-44.3% mAP for the regions <15m, 15-30m, and >30m, respectively. 
BEVFusion boosts its LiDAR stream by 1%-4.6%, 3.3%-7.2%, and 5.8%-9.3% mAP for the regions <15m, 15-30m, and >30m, respectively.\n\n| Modality | | &#124; | | PointPillars | | &#124; | | CenterPoint | | &#124; | | TransFusion-L | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Camera | LiDAR | &#124; | <15m | 15-30m | >30m | &#124; | <15m | 15-30m | >30m | &#124; | <15m | 15-30m | >30m |\n| | ✓ | &#124; | 28.2 | 21.2 | 15.1 | &#124; | 73.1 | 57.8 | 33.6 | &#124; | 76.3 | 66.1 | 43.2 |\n| ✓ | | &#124; | 22 | 12.9 | 4.6 | &#124; | 49.1 | 23.1 | 5.8 | &#124; | 41.9 | 19.2 | 4.9 |\n| ✓ | ✓ | &#124; | 32.5 | 27.7 | 20.9 | &#124; | 77.7 | 65 | 42.9 | &#124; | 77.3 | 69.4 | 49.2 |\n\nWe compare BEVFusion with the TransFusion results reported in the original paper (12e + 6e training) and our re-implemented results (20e + 6e training) in the Table below, where our BEVFusion surpasses TransFusion by 1.5% mAP for the >30m regions. The results show that our fusion framework gives a larger performance boost for distant regions, where 3D objects are difficult to detect or classify in the LiDAR modality.\n\n| Method | overall | <15m | 15-30m | >30m |\n| --- | --- | --- | --- | --- |\n| TransFusion | 65.6 | 75.5 | 66.9 | 43.7 |\n| TransFusion (our implementation) | 66.9 | 77.6 | 68.3 | 47.7 |\n| BEVFusion | 67.9 | 77.3 | 69.4 | 49.2 |\n", " #### \`Q5: More limitation discussions. The potential negative social impact is well discussed. But the limitation... For example, would the late-fusion style miss the opportunity to fuse intermediate LiDAR and camera features, and thus make the pipeline suffer a potential performance drop?\`\nA5: Thanks for the insightful suggestions. Our fusion focuses on high-level feature representations, and it might miss the opportunity to fuse intermediate LiDAR and camera features. The alignment between intermediate LiDAR and camera features is an interesting topic and can be addressed in the future. We will add this discussion in the revision.", " We sincerely appreciate the positive feedback from the reviewer CaaL and provide detailed responses below.\n#### \`Q1: ...(dynamic fusion module) ... provide some insights and analysis into the design itself... how would the fusion module work when facing incomplete LiDAR or camera inputs...\`\nA1: Thanks for the suggestion. To better show the effectiveness of each part of the dynamic fusion module, we test BEVFusion equipped with TransFusion-L and show the results under the robustness settings against LiDAR and camera malfunctions in Sec.4.4.1 and Sec.4.4.2. We show the results in the Table below. \n\n| | | \\| | clean | | \\| | Object | Failure | \\| | Missing | F | \\| | Preserving | F | \\| | Stuck | |\n|:---:|:---:|:--:|:-----:|:----:|:--:|:-------:|:-------:|:--:|:--------:|:----:|:--:|:----------:|:----:|:--:|:-----:|:----:|\n| CSF | AFS | \\| | mAP | NDS | \\| | mAP | NDS | \\| | mAP | NDS | \\| | mAP | NDS | \\| | mAP | NDS |\n| - | - | \\| | 64.9 | 69.9 | \\| | 34.6 | 53.6 | \\| | - | - | \\| | - | - | \\| | - | - |\n| ✓ | - | \\| | 67.3 | 70.5 | \\| | 50.1 | 57.5 | \\| | 65.4 | 70.5 | \\| | 63.5 | 68.7 | \\| | 65.6 | 69.9 |\n| ✓ | ✓ | \\| | 67.9 | 71.0 | \\| | 50.3 | 57.6 | \\| | 65.9 | 70.7 | \\| | 65.1 | 69.9 | \\| | 66.3 | 70.2 |\n\n\nWe can see that when LiDAR fails to receive points from the object, with a simple channel & spatial fusion (CSF), BEVFusion greatly improves its LiDAR stream by 15.5% mAP. 
When adaptive feature selection (AFS) is adopted, the mAP can be further improved by 0.2%. Under camera missing scenarios, AFS improves CSF-only by 0.5-1.6% mAP. The results show that our dynamic fusion module is still able to select whatever BEV information is available to feed the final detection results under input malfunctions.\n#### \`Q2: The experiments section does not provide runtime analysis, like inference time and memory footprint, and its comparison with other methods. \`\nA2: Thanks, we show the latency and memory usage of BEVFusion and its LiDAR and camera streams in the Table below. Latency is measured on the same machine with an Intel CPU (Xeon Gold 6126 @ 2.60GHz) and an Nvidia V100 GPU with a batch size of 1. Note that our latency bottleneck is the camera stream rather than our fusion framework (i.e., the dynamic fusion module). In our camera stream, the 2D->3D projector adopted from LSS costs more than 957 ms, which can be improved through engineering deployment, e.g., concurrent processing.\n\n| Modality | | &#124; | PointPillars | | &#124; | CenterPoint | | &#124; | TransFusion-L | |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| Camera | LiDAR | &#124; | Memory (MB) | Latency (ms) | &#124; | Memory (MB) | Latency (ms) | &#124; | Memory (MB) | Latency (ms) |\n| | ✓ | &#124; | 7190 | 189.38 | &#124; | 12468 | 199.87 | &#124; | 12536 | 263.61 |\n| ✓ | | &#124; | 7948 | 1264.55 | &#124; | 7956 | 1278.94 | &#124; | 7950 | 1264.45 |\n| ✓ | ✓ | &#124; | 7968 | 1513.65 | &#124; | 18078 | 1464.21 | &#124; | 18086 | 1529.66 |\n\n\n#### \`Q3: How does the baseline method in Table 7 fuse the features?\`\nA3: Thanks. In Table 7, the baseline is the LiDAR stream where only LiDAR is input; therefore, no fusion is conducted. We will be more explicit in the revision.\n#### \`Q4: ... would BEVFusion still work when facing both camera and LiDAR malfunction?\`\nA4: Thanks. We evaluate our BEVFusion equipped with TransFusion-L under the 'LiDAR fails to receive object reflection points' (as in Table 4), 'missing front camera', and 'preserving front camera' (as in Table 5) malfunctions and report the results in the Table below. The results show that BEVFusion remains effective in the face of a certain degree of two-modality malfunction. However, conceptually speaking, if one object is never captured by the camera or LiDAR, our framework will not be able to identify the object.\n\n| Method | | clean | Object Failure | Missing F | Preserving F | Stuck | Object Failure + Missing F | Object Failure + Preserving F | Object Failure + Stuck |\n|-------------|-----|-------|----------------|-----------|--------------|-------|----------------------------|--------------------------------|-------------------------|\n| BEVFusion | mAP | 67.9 | 50.3 | 65.9 | 65.1 | 66.2 | 47.8 | 39.3 | 43.6 |\n| | NDS | 71.0 | 57.6 | 70.7 | 69.9 | 70.3 | 56.2 | 52.9 | 54.8 |\n| Transfusion | mAP | 66.9 | 34.6 | 65.3 | 64.4 | 65.9 | 34.2 | 33.6 | 33.9 |\n| | NDS | 70.9 | 53.6 | 70.1 | 69.3 | 70.2 | 52.7 | 51.0 | 52.4 |\n\n\n", " Thank you for your great efforts in the review of this paper. We are encouraged that the reviewer found our approach to be valuable to the community. We address the remaining concerns below.\n\n#### \`Q1: Novelty and model design. The paper is novel in identifying the problem... However... it is mostly a combination of existing methods utilizing single sources without much modification, thus diminishing the merit of the proposed framework. \`\nA1: With respect, our framework is not a simple combination of existing methods. \nTo improve the performance of the camera stream, we design the adaptive module in FPN, a simple BEV encoder, and CB-Swin-T as the 2D backbone. To improve the performance of the fusion framework, we propose the dynamic fusion module to dynamically select features for fusion. Their improvements are shown in Tables 6 and 7. Furthermore, our framework generalizes to multiple modern architectures, where the two streams can be replaced with various existing and future architectures. In the paper, we show the generalization ability of BEVFusion over different LiDAR streams, i.e., PointPillars, CenterPoint, and TransFusion.\n#### \`Q2: The design to handle one or two sources in the framework is naive... A more sophisticated design could be: when, for example, the camera stream is dropped for a few frames, is there a chance to stick to the fusion detector but utilize temporal information to compensate for the missing RGB data, instead of simply dropping the RGB branch and the fusion and running the LiDAR branch alone, which will likely result in a sudden drastic change to the detections?\`\nA2: In our camera malfunction experiments, the setting 'stuck' denotes that 50% of camera frames are stuck. In such robust experiments, our framework runs inference on both streams, where the camera input is the previous multi-view images from the latest available frame. \n\n#### \`Q3: Simulation for data corruption...However more effort can be done to boost the robustness: e.g. looking for real driving sequences in extreme weather or with bad data...\`\nA3: Thanks. We follow the suggestion and report the mAP of BEVFusion equipped with TransFusion-L under different lighting conditions in the Table below. Compared with CenterPoint and TransFusion, BEVFusion shows the best robustness under different lighting conditions. \nIn the future, we will try to look for real driving sequences in extreme weather or with bad data, and train/evaluate on those data.\n\n| Method | Modality | Daytime | Nighttime |\n| --- | --- | --- | --- |\n| CenterPoint | L | 62.8 | 35.4 |\n| TransFusion-L | L | 64.8 | 36.2 |\n| TransFusion | LC | 67.0 | 41.8 |\n| BEVFusion | LC | 68.0 | 42.4 |\n", " We thank reviewer fBas for the thoughtful comments. Please see our responses below.\n#### \`Q1: Considering the small performance improvement and the simple methodology, I would think that the contribution of this method is relatively limited.\`\nA1: To the best of our knowledge, this paper is the first to identify the downside within current LiDAR-camera fusion models that inevitably fail when LiDAR input is missing, and it proposes a framework that disentangles the dependency between the two sources, thus being more robust in the case of data unavailability. In this sense, the task identification itself is valuable to the community in defining and bringing attention to the task. Furthermore, we propose extensive evaluation designs, yielding SOTA results under both clean and robust settings. Despite the conceptual simplicity of the proposed framework, it is general, effective, and robust. The simplicity does not diminish the contribution.\n#### \`Q2: I am wondering why this simple fusion method is better than the other methods compared in this paper. If there are some results and discussions that make sense would be helpful for the readers.\`\nA2: Thanks for your good question. 
Actually, the fact that our simple framework can surpass the state-of-the-art methods also surprised us, but this is exactly why our work is valuable to the research community. The essential reason for the superiority of our framework is not its simplicity, but its novelty. In fact, this simple approach had never been tried in the current literature, so no one knew it would work well until our paper. \nWe suspect one reason why this simple approach was never tried before. When MVX-Net [1] first attempted to fuse camera and LiDAR information by using a LiDAR point to query the corresponding camera features, people tried to improve the original framework by proposing superior components. In essence, almost all previous works can be categorized into that framework, as shown in Figure 1 of the main paper. \nAs we noticed that the previous frameworks have the downside of relying heavily on LiDAR input, we propose a completely different framework that fuses information in the BEV feature space. Moreover, we empirically show that a simple fusion module is adequate to surpass previous complex approaches, further evidence of the effectiveness of our proposed framework. \n\n[1] Vishwanath A Sindagi, et al. MVX-Net: Multimodal VoxelNet for 3D Object Detection. In International Conference on Robotics and Automation (ICRA), 2019.\n#### \`Q3: I would suggest showing some failure cases and discussions about them because it will contribute to the community.\`\nA3: Thanks. We show the failure cases of BEVFusion equipped with PointPillars as the LiDAR stream in Fig. 6 of the revised Appendix F. The blue boxes are bounding boxes and the red-circled boxes are failed predictions. In Fig.6.(b), the camera stream fails to predict the objects at the bottom-left of the BEV map; in (c), the LiDAR stream fails to predict the objects at the bottom and detects a false positive sample at the top-left; and in (d), BEVFusion fails to predict the objects at the bottom-left. These results imply that the proposed method can balance the two streams well and successfully detect objects when one stream fails and the other succeeds, but fails accordingly when both streams fail.", " We thank reviewer w9Mt for the insightful comments and time spent reviewing the paper.\n#### \`Q1: The dynamic fusion module should be explained in more detail ... with a scenario where data is missing from either of the streams...will still be able to select the BEV information... \`\nA1: Thanks for the suggestion! To better show the effectiveness of each part of the dynamic fusion module, we test BEVFusion equipped with TransFusion-L and show the results under the robustness settings against LiDAR and camera malfunctions in Sec.4.4.1 and Sec.4.4.2. 
We show the results below, where TransFusion-L is the baseline.\n\n| | | \\| | clean | | \\| | Object | Failure | \\| | Missing | F | \\| | Preserving | F | \\| | Stuck | |\n|:---:|:---:|:--:|:-----:|:----:|:--:|:-------:|:-------:|:--:|:--------:|:----:|:--:|:----------:|:----:|:--:|:-----:|:----:|\n| CSF | AFS | \\| | mAP | NDS | \\| | mAP | NDS | \\| | mAP | NDS | \\| | mAP | NDS | \\| | mAP | NDS |\n| - | - | \\| | 64.9 | 69.9 | \\| | 34.6 | 53.6 | \\| | - | - | \\| | - | - | \\| | - | - |\n| ✓ | - | \\| | 67.3 | 70.5 | \\| | 50.1 | 57.5 | \\| | 65.4 | 70.5 | \\| | 63.5 | 68.7 | \\| | 65.6 | 69.9 |\n| ✓ | ✓ | \\| | 67.9 | 71.0 | \\| | 50.3 | 57.6 | \\| | 65.9 | 70.7 | \\| | 65.1 | 69.9 | \\| | 66.3 | 70.2 |\n\nWe can see that when LiDAR fails to receive points from the object, with a simple channel & spatial fusion (CSF), BEVFusion greatly improves its LiDAR stream by 15.5% mAP. When adaptive feature selection (AFS) is adopted, the mAP can be further improved by 0.2%. Under camera missing scenarios, AFS improves CSF-only by 0.5-1.6% mAP. The results show that our dynamic fusion module is still able to select whatever BEV information is available to feed the final detection results under input malfunctions.
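\n\nFor intuition, the module can also be sketched in a few lines of PyTorch. This is only an illustrative reading of the idea behind Eq.1 and Eq.2 (concatenate the two BEV maps, mix them with a convolution, then gate channels via global average pooling and a 1x1 convolution); the kernel size and the sigmoid gate are our simplifications here, not our exact implementation:\n\n\`\`\`python\nimport torch\nimport torch.nn as nn\n\nclass DynamicFusionSketch(nn.Module):\n    def __init__(self, c_cam, c_lidar, c_out):\n        super().__init__()\n        # Channel & spatial fusion (CSF): mix the concatenated BEV features\n        self.fuse = nn.Conv2d(c_cam + c_lidar, c_out, kernel_size=3, padding=1)\n        # Adaptive feature selection (AFS): global average pooling + 1x1 conv\n        self.select = nn.Sequential(\n            nn.AdaptiveAvgPool2d(1),\n            nn.Conv2d(c_out, c_out, kernel_size=1),\n            nn.Sigmoid(),\n        )\n\n    def forward(self, bev_cam, bev_lidar):\n        f = self.fuse(torch.cat([bev_cam, bev_lidar], dim=1))\n        return f * self.select(f)  # channel-wise re-weighting before the head\n\`\`\`\n\nWhen one input degrades, the learned gates can down-weight the channels dominated by the failing modality, which is consistent with the behavior in the table above.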
\n#### \`Q2: Citations 44-45, 32-33, 57-58 are repeated citations...grammar..\`\nA2: Thanks, we have fixed that in the revision.\n#### \`Q3: For training the camera stream...How is this camera stream trained as this may require a ground truth with camera based BEV 2D points and detections on that?\`\nA3: Thanks. Training the camera stream relies on the same ground-truth annotation as the LiDAR stream, i.e., 3D bounding boxes, and the training does not require extra data. In the camera stream, the camera-view features are projected to the 3D ego-car coordinate frame to generate pseudo voxels through the 2D->3D projector. In this way, the camera BEV feature lies in the same feature space as the LiDAR BEV feature, so the two can share the same prediction head. \n#### \`Q4: The authors could discuss and show some failure cases of their work.\`\nA4: Thanks. We show the failure cases of BEVFusion equipped with PointPillars as the LiDAR stream in Fig. 6 of the revised Appendix F. The blue boxes are bounding boxes and the red-circled boxes are failed predictions. In Fig.6.(b), the camera stream fails to predict the objects at the bottom-left of the BEV map; in (c), the LiDAR stream fails to predict the objects at the bottom and detects a false positive sample at the top-left; and in (d), BEVFusion fails to predict the objects at the bottom-left. These results imply that the proposed method can balance the two streams well and successfully detect objects when one stream fails and the other succeeds, but fails accordingly when both streams fail.", " The authors propose a method to fuse two sources of information for BEV detection, namely multi-view images and LIDAR data, in such a manner that any data defects in one source of information do not affect the network for the other. Previous methods have combined the information from the two sources at different stages of the network pipeline, but they are prone to having their inference results affected when the data is corrupted from either source. This paper delays the combination of information to an even later part of the pipeline, thereby\nmitigating the effect of bad/corrupted/unavailable data. They do so by generating a pseudo BEV point cloud just from multi-view cameras and combining that information with LIDAR BEV. The combination part is based on a dynamic fusion method which selects important fused features, be they from the camera-based BEV or the LIDAR-based BEV. The results show that missing data in the LIDAR or camera image does not affect the detection unless both are missing for the same object in the scene, e.g., a car. Strength:\nThe paper addresses a problem which could be a real-life problem in LIDAR- or image-based data capture, where the LIDAR data is missing due to 3D scene material/reflectance properties or the image could be missing from a video stream. These problems can cause existing networks to fail. This paper addresses this problem, which can make the commercial deployment of such systems more doable. The paper is well written with a clear explanation of the previous work. The results are detailed and show scenarios where the method is better than previous best results in challenging situations.\n\nWeakness:\n1. The dynamic fusion module should be explained in more detail as it's one of the contributions of this paper. It should be explained with a scenario where data is missing from either of the streams and how the formulation in Eq.1 and Eq.2 will still be able to select the BEV information which exists to feed the final detection result.\n2. Citations 44-45, 32-33, 57-58 are repeated citations. Please fix it.\n4. grammar: #92 start->started, #3 discover->discovered For training the camera stream, a camera-based BEV 2D point cloud is created. How is this camera stream trained, as this may require a ground truth with camera-based BEV 2D points and detections on that?\n The reviewer didn't find any major limitations. The authors could discuss and show some failure cases of their work.", " This paper introduced a method for point cloud object detection based on LiDAR-camera fusion. The main contribution of this method is the fusion framework that combines the camera and LiDAR streams. This fusion module is very simple because it mainly consists of the concatenation of LiDAR and camera streams and a typical feature selection with an average pooling and 1x1 convolution. The results show that this method slightly outperformed the other methods in the comparison. Moreover, the robustness against camera or LiDAR malfunctions is shown in the results. The ablation study shows that each module employed in this method improves performance. \n Strength\n- The performance is slightly improved. \n- The methodology is very simple. \n\nWeakness\n- Considering the small performance improvement and the simple methodology, I would think that the contribution of this method is relatively limited. I am wondering why this simple fusion method is better than the other methods compared in this paper. Some results and discussions that make sense of this would be helpful for the readers. I would suggest showing some failure cases and discussions about them because it will contribute to the community.\n", " The paper proposes a framework for 3D detection from RGB and LiDAR inputs in autonomous driving scenes. The pipeline includes separate networks reasoning from RGB and LiDAR inputs independently, and uses a fusion network for refined detection when both sources are available. Also, the paper considers situations of data corruption and proposes to boost robustness in the model design. 
The proposed method is evaluated in the standard settings of object detection and compared with baseline methods both qualitatively and quantitatively. Strength:\n\n[1] The task identification. As mentioned above, the paper is the first to identify the issue within the current literature and models, and proposes a pipeline accordingly which reasons from two sources independently and is thus more robust when data unavailability occurs. In this sense, the task identification itself is valuable to the community in defining and bringing attention to the task.\n\n[2] Extensive design choices and evaluation. Although the proposed pipeline is mostly based on existing methods, the paper is able to evaluate various design choices to demonstrate the flexibility of the proposed framework, as well as provide an extensive evaluation of the results, yielding SOTA results with both sources and robust results when only one is available.\n\nWeakness:\n\n[1] Novelty and model design. The paper is novel in identifying the problem, which is legit and valuable. However, for the proposed method itself, it is mostly a combination of existing methods utilizing single sources without much modification, thus diminishing the merit of the proposed framework. Also, the design to handle one or two sources in the framework is naive, basically running the first-stage network only if only one source is available, and running both stages when two are available. A more sophisticated design could be: when, for example, the camera stream is dropped for a few frames, is there a chance to stick to the fusion detector but utilize temporal information to compensate for the missing RGB data, instead of simply dropping the RGB branch and the fusion and running the LiDAR branch alone, which will likely result in a sudden drastic change to the detections?\n\n[2] Simulation for data corruption. The paper proposes to augment the data to simulate possible data corruption scenarios, via dropping points and limiting FOV. However, more effort can be done to boost the robustness: e.g. looking for real driving sequences in extreme weather or with bad data, and training/evaluating on those data. Please see the points in the Weakness section above. N/A", " Most existing camera-lidar fusion work decorates lidar points with image features and then performs detection in 3D/BEV space. This work leverages the recent Lift-Splat-Shoot work for cameras, which allows one to map both camera and lidar inputs to BEV space, before fusing and applying the detection head. Strengths: \n- The proposed idea and its realization make sense, and I am not aware of such published work (even though there seems to be concurrent similar work, since this seems a logical next step given the existence of LSS [52]). \n- Details in the model seem well thought out. This includes the extensions to LSS (Dual-Swin-Tiny architecture, ADP), as well as the layers in the dynamic fusion module. \n- The experimental results show that this work is close to SOTA on nuScenes and that it affords significant model robustness in the case of missing lidar information compared to existing methods. \n- The model details are pretty clearly explained. \n\nWeaknesses: \n- Related work section is confusing in a few places and can be streamlined further. Examples: \n1) The Camera detectors section contains a discussion of PointPillars, which is a purely Lidar method. 
\n2) Range images are not really Euclidean space (see line 88) \n3) 89: \"Recently, people start to exploit these two feature modalities to increase the representation power\" --> There is earlier work to do this, if I understand the statement correctly. E.g. [5] from the paper, or End-to-End Multi-View Fusion for 3D Object Detection in LiDAR Point Clouds, by Yin Zhou et al, CoRL 2019. \n4) 90: \" Another line of work is to exploit the benefit of the bird's eye view plane similar to the camera perception\" --> a lot of this work came before cameras started exploiting the BEV view. \n\n- Intuitive explanations are lacking in a couple of instances: \n1) Work does not explain the intuition why the model needs to be trained in two stages. What happens if it's trained in a single stage? \n2) 14: \"Note that we do not conduct data augmentation when multi-view image input is involved, while data augmentation plays a critical part in other cutting edge methods.\" Is this a limitation of camera fusion methods in general or something specifically lacking in your case? Can you please clarify? \n\n- It is unclear whether the approach is SOTA on nuScenes or not. Can you please explicitly contrast your performance relative to the nuScenes leaderboard (at least for the published approaches)? When exploring that leaderboard myself, I see mentions of a method called BEVFusion that is SOTA but seems to be a different method? Assuming that method is different and already on the leaderboard, your naming may be confusing / too generic. \n\n- nuScenes is a dataset with particularly poor lidar (compared to other public datasets, such as the Waymo Open Dataset, Argoverse2.0, etc.). Results on at least one more dataset with high-quality and longer-range lidar are highly desirable. The core issue of missing lidar points may be a lot less pertinent for more modern lidars. Also, as range increases beyond ~40m to 70-200m, the approach here may actually underperform lidar-painting approaches, since the BEV view can start containing errors > 10m in the camera case, making fusion in BEV space difficult. To this effect, an analysis of the method's performance as a function of object distance, relative to SOTA fusion methods at long distances, will help. \n\n\nLanguage: \nThere are minor language issues and typos in the paper; it would benefit from another proofreading pass. \n - Is the approach SOTA on nuScenes or not? Can you please explicitly contrast your performance relative to the nuScenes leaderboard (at least for the published approaches)? \n\n- Can you provide an analysis of performance as a function of distance to object, and compare to a standard lidar approach and TransFusion/DeepFusion etc? \n\n- Can you please provide results on at least one more dataset with high-quality Lidar such as the Waymo Open Dataset? \n\n- What is the latency of the approach and how does it compare to the baselines? See comment on weaknesses. Some core potential limitations of the existing method have not been fully explored. \n\nMy current rating is predicated on the assumption that a similar idea has not been published yet (not completely certain) and that I will receive reasonable responses to my questions. ", " Towards the problem that current methods tend to fail in situations where hardware malfunctions, this paper presents a simple yet effective LiDAR-camera fusion framework, namely BEVFusion. 
By disentangling the camera pipeline from the LiDAR network and using a dynamic fusion module, BEVFusion achieves SOTA performance and shows robustness against LiDAR or camera malfunctions at the same time. An effective modification of the camera pipeline is also proposed to boost the final performance. ## Strength\n1. The paper is well written and easy to read.\n2. Robustness of autonomous driving algorithms should be paid more attention to. This paper raises the issue and makes an attempt to address it.\n3. Thorough experiments are performed. Claims are well-supported. SOTA performance is achieved on both normal and robust settings of nuScenes.\n4. The clean design of the framework makes it easy to use any camera or LiDAR framework.\n\n\n## Weaknesses\n1. It is nice to see a simple yet effective module (the dynamic fusion module) being proposed. But it would be nicer to provide some insights and analysis into the design itself. For example, by analyzing how the fusion module would work when facing incomplete LiDAR or camera inputs, we might gain some insights into the module design of CSF and AFS.\n2. The experiments section does not provide runtime analysis, like inference time and memory footprint, and its comparison with other methods. 1. How does the baseline method in Table 7 fuse the features?\n2. This does not affect my rating of the paper. Just out of curiosity, would BEVFusion still work when facing both camera and LiDAR malfunction? The potential negative social impact is well discussed. But the limitation should be discussed more. For example, would the late-fusion style miss the opportunity to fuse intermediate LiDAR and camera features, and thus make the pipeline suffer a potential performance drop?" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4, 3 ]
[ "B8g021IaaAm", "nips_2022_zzDrPqn57DL", "B8g021IaaAm", "VrENUT1h6-", "VrENUT1h6-", "HNAOCsCSzZw", "uVn6Us0aP_", "Hd6Ce57Ircd", "nips_2022_zzDrPqn57DL", "nips_2022_zzDrPqn57DL", "nips_2022_zzDrPqn57DL", "nips_2022_zzDrPqn57DL", "nips_2022_zzDrPqn57DL" ]
nips_2022_-V1ITIKPH6
Active Learning for Multiple Target Models
We describe and explore a novel setting of active learning (AL), where there are multiple target models to be learned simultaneously. In many real applications, the machine learning system is required to be deployed on diverse devices with varying computational resources (e.g., workstations, mobile phones, edge devices, etc.), which leads to the demand for training multiple target models on the same labeled dataset. However, it is generally believed that AL is model-dependent and untransferable, i.e., the data queried by one model may be less effective for training another model. This phenomenon naturally raises the question \"Does there exist an AL method that is effective for multiple target models?\" In this paper, we answer this question by theoretically analyzing the label complexity of active and passive learning under the setting with multiple target models, and conclude that AL does have the potential to achieve better label complexity under this novel setting. Based on this insight, we further propose an agnostic AL sampling strategy to select the examples located in the joint disagreement regions of different target models. The experimental results on the OCR benchmarks show that the proposed method can significantly surpass the traditional active and passive learning methods under this challenging setting.
Accept
This paper studies a novel active learning setting adapted to learning multiple target models. The authors propose a setting that can benefit all tasks by focusing on regions with high disagreement. This contribution shows, in a sense, that the active learning procedure can be transferable to multiple tasks. A theoretical analysis is provided in the form of a bound on label complexity. Experimental results support the claims. The reviewers have globally appreciated the contribution, and most of the comments raised in the reviews have been addressed in the rebuttal. The overall evaluation of the paper is positive and I propose acceptance. I recommend that the authors take into consideration the last comments of the reviewers, in particular for improving the presentation of the paper.
train
[ "QDb499zjsFs", "bF1zVNvy40X", "0zsszkObiX", "l_Ebu2_Uu47", "DFQoi1zcTlb", "bBWtcq4Duj", "symrIuZYaKc", "4ouN-UijRv7", "ePfJKVYItT3", "yen6I419NJX" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. I want to keep my initial rating.", " Thank you for the detailed response. The authors appropriately addressed my questions and I updated my rating from 4 to 5 accordingly. I believe that larger-scale experiments would strengthen the algorithmic side of this paper, although this is not a main concern and merely a suggestion. The reason I didn't update my rating further is that I still think the presentation can be improved much more although it appears that this would take quite some effort. ", " > Q: It seems that the specific form of $\\sigma$ in DIAM method is not given in the paper, the authors only say that it has the same form with RobustCAL method. I suggest to add this part for a more self-contained presentation.\n\nA: Thanks for the suggestion. We have formally defined $\\sigma$ in the revised paper.\n\n> Q: It is better to introduce the structure of target models in the paper rather than in the supplementary materials for better understanding the results.\n\nA: Thanks for the suggestion. We have introduced more details of the target models in the revised paper.\n\n> Q: Theorem 1 also works for passive learning. How to guarantee the superiority of active learning?\n\nA: Many AL methods are guaranteed to have better label complexity than passive learning, e.g., CAL, which means they need less labels to obtain a $\\epsilon/2$-good classifier from the combined hypothesis space. Thus, although Theorem 1 works for both AL and passive learning, AL is still with more potential.\n\n> Q: Intuitively, the target models should share some specifications to make the active selection feasible. However, the proposed method works fine with very different target models according to the results in supplementary materials. The authors may explain more about this phenomenon.\n\nA: Thanks for the insightful question. We speculate that the proposed DIAM method queries the data falls into the joint disagreement region, which guarantees the utility to each of the target models. Thus, even the target models have very different architectures, the average accuracies of them can be improved effectively. This phenomenon also reveals that DIAM has the potential to tackle more challenging situations, i.e., improving heterogeneous target models effectively.\n", " > Q: it might be better to include evaluation other than OCR benchmark datasets.\n\nA: Thanks for the comment. We motivate the multi-model setting from the case of developing machine learning systems for diverse devices. OCR is one of the representative task with such requirement. Thus, we primarily validate our method on these tasks, in which, Kuzushiji-MNIST dataset has 70,000 data points. More practical machine learning tasks will also be considered in our future work.\n\n> Q:It is better to explicitly define the terms \"realizable and agnostic cases\" for improving clarity and readability.\n\nA: Thanks for the advice, we have made this clearer in the revised paper, e.g., in L133, L196 in the revised paper.\n\n> Q: In Theorem 1, each classifier $\\hat h_i$ is obtained by $\\min_{{h_i}\\in \\mathcal{C}_i}d(h_i, h_A)$. However, can such $\\hat h_i$ be learned from the same data? This point is unclear to me.\n\nA: This is a good question. Unfortunately, learning a good model from different hypothesis classes may require very different data. On the contrary, finding a closest classifier for a given hypothesis usually can be done by existing data. 
We have added a discussion of this in the revised version.\n\n> Q: In Line 230, it would be better to explain the hyperparameter q.\n\nA: Thanks for the suggestion. We have added more explanation about this hyperparameter q in Sec. 5 of the revised paper. More concretely, we state that _The hyperparameter $q$ controls the conservativeness of the algorithm. With a larger $q$, it will reject more of the less informative unlabeled data in the online setting. When $q=1$, the algorithm degenerates to querying the data points which fall into any disagreement region._\n\n> Q: The paper does not mention any potential negative social impact.\n\nA: Thanks for pointing out this problem. We have discussed this in the revised paper, e.g., in the last paragraph of Sec. 5.", " > Whether it's possible that a stronger translation bound in place of Theorem 1 is likely.\n\nThanks for this insightful question. We believe there does exist room for improvement of Theorem 1, since it does not make additional assumptions on how the hypothesis classes are related or satisfy certain conditions. Improvement should be obtainable in more specific settings.\n\nFor the general bound, we are still looking for a better solution. Specifically, we are currently working on the shattering technique, which exploits the probability that a data point can be shattered by the current version space V (cf. [14]). We will report our latest findings on arXiv once we have noticeable progress, thanks.\n\n> Whether the bound translation for PL is perhaps the tightest possible.\n\nThis is a very interesting question. The bound translation for PL is actually quite tight. However, it depends on the properties of the given model (e.g., $\\nu$), as stated in L181 in the paper. On the other side, we note that Theorem 1 can also be applied to PL. Consider the case where the target concept falls into one hypothesis space j which is far away from another one k; in this case, we may expect that finding an $\\epsilon/2$-good classifier from j (cf. Theorem 1) requires less data than finding an $\\epsilon$-good classifier from k (by comparing Eq. (5) and (6)).\n\nNevertheless, we emphasize that this does not mean that AL can hardly surpass PL in the multiple-models setting, because many AL methods are guaranteed to have better label complexity than PL, e.g., those analyzed by Hanneke, which means they need fewer labels to obtain an $\\epsilon/2$-good classifier from the combined hypothesis space. Thus, although Theorem 1 works for both AL and PL, AL still has more potential.
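\n\nIn compact (and slightly informal) notation -- writing $\\Lambda_i(\\epsilon)$ for the single-model label complexities and $\\Lambda^M$ for the multi-model one, as in the revised paper -- the comparison above reads\n\n$$\\Lambda^M_{\\mathrm{PL}}(\\epsilon)\\;=\\;\\max_{1\\le i\\le m}\\Lambda^{\\mathrm{PL}}_i(\\epsilon),\\qquad\\Lambda^M_{\\mathrm{AL}}(\\epsilon)\\;\\le\\;\\sum_{i=1}^{m}\\Lambda^{\\mathrm{AL}}_i(\\epsilon)\\ \\text{(naive translation)},$$\n\nwhile Theorem 1 replaces the naive sum for AL by the cost of finding a single $\\epsilon/2$-good classifier in the combined class $\\bigcup_i \\mathcal{C}_i$, which is where the potential advantage of AL over PL comes from.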
\n\n> Presentation could be improved significantly.\n\nThanks for the comment. We have carefully proofread the paper and optimized the presentation. Specifically, we define and change some notations for clarity; for example, the label complexity of multiple models $\\Lambda_m$ has been replaced by $\\Lambda^M$. The symbols $D_X, D_y$, $\\nu$ are formally defined in Sec. 3.1. To improve the paper structure, Theorem 2 is removed and the label complexity of PL for multiple models is presented in-text in L130 in the revised paper. We rearrange Sec. 3 and Sec. 4 according to the comments to improve readability. We also remove the redundant terms in Theorem 3, and augment the introduction of the implementation of DIAM for deep models at the end of Sec. 5.\nBesides, we rectify many grammatical errors, which include removing redundant prepositions (e.g., tackle with), moving the parenthesis before the period, an hypothesis -> a hypothesis, etc. \n\n> I would like to see experiments on at least one larger dataset.\n\nThanks for the comment. We motivate the multi-model setting from the case of developing machine learning systems for diverse devices. OCR is one of the representative tasks with such a requirement. Thus, we primarily validate our method on these tasks, in which the Kuzushiji-MNIST dataset has 70,000 data points. More practical machine learning tasks will also be considered in our future work.\n\n> DIAM probably doesn't work in the batch-setting, the authors should mention this limitation.\n\nThanks for the comment. We have also pointed out this limitation and stated the future plan in the revised paper. \nIn this work, we define the problem setting of AL for multiple models and propose an informativeness-based criterion for AL under the multiple-models setting; the study on representativeness is scheduled as future work. More concretely, we are going to exploit the multiple feature representations of the data (obtained with different model architectures), which may relate to multi-view clustering. For the current version of the method, we heuristically sample a subset from the top-rated unlabeled data to introduce diversity for batch-mode data selection (as stated in L269 in the original paper).\n\n> The computational complexity should also be mentioned.\n\nThe running times of the compared methods are reported in the supplementary materials (cf. Sec. B.2), which show that our proposed heuristic is efficient since it only requires predicting on the unlabeled data at each of the later epochs. We have also discussed it at the end of Sec. 5 in the revised paper, thanks.\n\n> In Theorem 2, the fact that the target concept h* may not be included in every hypothesis may be problematic for AL as well.\n\nYes, the same problem is also confronted by AL. However, according to Theorem 1, we do not care about the label complexity for a specific hypothesis space, but try to find a good classifier from the combined hypothesis space; thus it is not problematic for AL.\n\n> why is the last sentence in Section 4.1 included here? It seems that this would better fit under Section 4.2.\n\nWe have rectified this by rearranging Sec. 4 in the revised paper, thanks for the advice.", " > Q: Theorem 3 is based on hyperparameter q=1. Is there a tighter result when q>1?\n\nA: We are still working on a tighter result for q>1. Since this setting requires a discussion of the connection between the hypothesis classes and the data distributions, more effort is needed to obtain such a result. We will continuously work on improving this theorem, thanks.\n", " It is generally believed that active learning is model-dependent. This paper studies the problem of active learning for multiple target models. The authors first analyze the label complexity for active learning under the setting with multiple target models, then find that the label complexity of a single model is an upper bound of that for multiple models under the realizable case. They also propose an agnostic disagreement-based selection criterion for the agnostic case. Experiments verify the effectiveness of the proposed approach. This paper explores a new setting of active learning where there are multiple target models to be learned. 
How to design effective active learning algorithms for it is practical and important to the machine learning community. The paper first analyzes the label complexity of passive learning and active learning for multiple target models under the realizable case, and argues that active learning has potential for improvement under this setting. For the agnostic setting, the authors propose a disagreement-based selection criterion to query examples located in the joint disagreement regions. Theoretical analysis shows that the proposed method achieves better label complexity than that of CAL (a representative approach) under some ideal situations. The writing is good and clear. The theoretical analysis and empirical experiments are solid. Theorem 3 is based on hyperparameter q=1. Is there a tighter result when q>1? None", " This work investigates an interesting problem, i.e., how to actively select informative data to improve multiple target models simultaneously. First of all, the authors formally define the active learning (AL) for multiple target models problem and bridge the label complexity between AL for single and multiple models. After revealing the potential improvement of AL for this setting, a novel query strategy DIAM for multiple target models is further proposed, which prefers the data located in the joint disagreement regions. An efficient implementation for deep models is also provided. Both theoretical and empirical studies are conducted to validate the proposed approach. This work is of high quality in my view due to the following reasons:\n\n1.\tThe problem explored by this study is very important and meaningful. In many machine learning tasks, the devices to be deployed usually have varying computational resources, and there is a heavy burden of data labeling to train multiple models. These challenges become tougher with the flourishing applications of deep learning in recent years. \n2.\tFrom the technical perspective, active learning is usually believed to be model-dependent. This work presents a nice attempt towards multiple target models from both theoretical and empirical views. It broadens the learning scenarios of active learning and may inspire future work on this problem. This contribution is significant in my opinion.\n3.\tThe paper is well-structured and easy to follow. The authors also provide enough supplementary materials and code for reproduction.\n\nWeaknesses:\n1. It seems that the specific form of \sigma in the DIAM method is not given in the paper; the authors only say that it has the same form as the RobustCAL method. I suggest adding this part for a more self-contained presentation.\n2. It is better to introduce the structure of the target models in the paper rather than in the supplementary materials, to better understand the results.\n 1. Theorem 1 also works for passive learning. How to guarantee the superiority of active learning?\n\n2. Intuitively, the target models should share some specifications to make the active selection feasible. However, the proposed method works fine with very different target models according to the results in the supplementary materials. The authors may explain more about this phenomenon.\n Yes", " This paper studies an important setting of active learning, motivated by the need to efficiently acquire a single dataset that is useful to simultaneously train various models/architectures. 
Contrary to common observations against a possible improvement of AL over PL under this setting, the authors theoretically prove a label complexity bound of active learning for multiple models which, combined with a bound on passive learning, implies that active learning may also be efficient under this setting. The authors then modify a disagreement-based algorithm used for single-model AL to one for multiple models by taking their average disagreement, and prove its label complexity in the agnostic setting. Lastly, this paper proposes a heuristic to circumvent the computational complexity when employing neural networks as the hypothesis class, and empirically demonstrates that this method outperforms well-known active learning algorithms designed for single-model AL, extended using the same average over multiple models, under this multiple-target setting. The main (and only) major theoretical contribution in this paper seems to be Theorem 1: It is an improved analysis method over a straightforward proof technique that translates the sample complexity of AL applied to a single hypothesis class indexed by i to that for multiple hypothesis classes as the sum of each complexity, as the authors mention (line 125). For PL, one can easily obtain a stronger translation since, by definition of passive learning (random sampling), the complexity of PL for multiple hypothesis classes is the maximum over each sample complexity. I would think that for PL, this is the only guarantee possible without additional assumptions on how the hypothesis classes are related. Theorem 1 is a tool that is used to get a stronger translation from AL bounds for a single model to those for multiple models, and the authors use this to show that AL can potentially achieve an exponential improvement over PL in the multiple-model setting.\n\nThis is an interesting conclusion, but the reason I say this is the only theoretical contribution is that these bounds and corresponding comparisons between AL and PL are well-known for the single-model case, which this work uses extensively. In proving the theoretical statements in the paper, the only non-trivial modification to proofs by Hanneke under this multiple-model setting is to use Theorem 1 in place of the sum of sample complexities for single-model AL. This isn’t necessarily bad, but I would appreciate it if the authors mention whether it’s possible that a stronger translation bound in place of Theorem 1 is likely. Additionally, I would appreciate it if the authors remark on whether the bound translation for PL (as I mentioned above) is perhaps the tightest possible, and if not, it would be nice to see an effort to improve the PL bound for multiple models from max_i \Lambda_i via a method analogous to Theorem 1 for AL.\n\nDespite these concerns on the theoretical statements, it is nice to see Theorem 4, which shows that DIAM-online is better than CAL applied to multiple target models. However, its presentation could be improved by instead writing the result as a direct comparison of sample complexities (e.g., by writing \Lambda_m (DIAM) < \Lambda_m (CAL)), as the terms are redundant.\n\nHere are a few major comments\n* Presentation could be improved significantly. Many statements are defined in-text much after they are used, and some notations are never defined formally. Word choices are also poor, making it hard to infer what the authors attempt to say. 
There are some redundant statements that make the reader confused about whether the statements are regarding different situations, and the assumed settings go back and forth between agnostic and realizable. How the disagreement regions are to be computed in practice is described briefly in the last paragraph, whereas this is the algorithm used throughout the experiments. \n* I like how, despite this paper being mostly theoretical, the experiments are done using somewhat large models and resemble the motivation mentioned in the introduction. However, I don’t think classification tasks on MNIST variants are convincing, and I would like to see experiments on at least one larger dataset (even CIFAR-10 would be fine, or others used in the base code by Cai et al. 2020).\n* The proposed problem setting is interesting and well-motivated. This work is the first to theoretically analyze the sample complexity of active learning for multiple target models, when the hypothesis classes are fixed (due to different target deployments). Section 2 compares with related work that studies similar but different problems and helps understand this problem setting’s position within the literature.\n* Algorithmically, the downside is that DIAM probably doesn’t work in the batch-setting, i.e., the algorithm would collect many similar samples when present. This is of large interest in the single-model AL setting, and I think the authors should mention this in the paper as a limitation.\n* Also, the computational complexity should be mentioned. As the authors note, computing the disagreements is computationally prohibitive for deep models. It would be nice to see how the heuristic overcomes this difficulty and what the resulting runtime is for practically-sized neural networks.\n\nMinor comments\n* Many grammatical errors should be fixed, e.g. remove “with” in line 83, must has -> is in line 125, and move the parenthesis before the period or, even better, remove the parenthesis brackets and fix accordingly.\n* Notations could be improved, but this might require much effort. For example, \Lambda_i is not defined other than in the description in parenthesis in lines 125-126, although I understand it to be the single-model AL complexity for class C_i (explained later in the draft). The fact that \Lambda_m is the notation for multiple-model AL, where m has a different meaning than the “i” in \Lambda_i, makes it confusing on first read.\n* I don’t think the statement in Theorem 2 should be stated as a “Theorem”. I recommend changing this to a proposition or writing it as an in-text sentence after line 125, since the two statements are related in how to obtain a multiple-model sample complexity for AL (line 125) and PL (this statement). \n\nIn summary, two major concerns I have are originality and presentation. For originality, I would like to hear the authors' remarks on whether Theorem 1 is tight and how a bound on single-model PL may or may not be improved. Could the authors elaborate on the remark after Theorem 2? The fact that the target concept h* may not be included in every hypothesis space is not unique to PL, and is problematic for AL as well, no? Further, why is the last sentence in Section 4.1 included here? It seems that this would better fit under Section 4.2.\n\nPlease correct me if there are any misunderstandings in my comments above.\n This draft does not clearly state apparent limitations, but does mention future work. 
Please incorporate my comments on the algorithmic and computational limitations and add others as appropriate.", " This paper proposes a novel problem setting for active learning, where the aim is to query samples to improve multiple target models' performance simultaneously. This paper theoretically analyzes the label complexity of active learning under this setting and shows that active learning has the potential to achieve better label complexity. In addition, based on this analysis, this paper proposes an active learning strategy to query samples in the region of joint disagreement of the different target models, and experimentally shows the effectiveness of this strategy on two OCR datasets. ### Strengths\n* The problem setting tackled in this paper seems to be interesting and practically useful, although it is difficult for me to judge the novelty correctly because I'm not an expert in this field.\n* There is a theoretical analysis. It seems intuitive and reasonable to select samples in the disagreement area of multiple target models. However, since I'm not an expert in the field, I was not able to follow all the proofs and statements.\n* Experimental results show the effectiveness of this method.\n\n### Weaknesses\n* Since the proposed method seems to be a general algorithm, it might be better to include evaluations beyond OCR benchmark datasets.\n* It is better to explicitly define the terms \"realizable and agnostic cases\" to improve clarity and readability.\n* In Theorem 1, each classifier $\hat{h} _i$ is obtained by ${\rm min} _{C_i} \ d(h_i, h_A)$. However, can such $( \hat{h} _i ) _i$ be learned from the same data? This point is unclear to me. \n* In Line 230, it would be better to explain the hyperparameter $q$. The paper addressed the limitations on page 6 but does not mention any potential negative social impact." ]
[ -1, -1, -1, -1, -1, -1, 6, 8, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 3, 1 ]
[ "l_Ebu2_Uu47", "DFQoi1zcTlb", "4ouN-UijRv7", "yen6I419NJX", "ePfJKVYItT3", "symrIuZYaKc", "nips_2022_-V1ITIKPH6", "nips_2022_-V1ITIKPH6", "nips_2022_-V1ITIKPH6", "nips_2022_-V1ITIKPH6" ]
nips_2022_bZzS_kkJes
Neural Matching Fields: Implicit Representation of Matching Fields for Visual Correspondence
Existing pipelines of semantic correspondence commonly include extracting high-level semantic features for invariance against intra-class variations and background clutter. This architecture, however, inevitably results in a low-resolution matching field that additionally requires an ad-hoc interpolation process as post-processing to convert it into a high-resolution one, certainly limiting the overall performance of the matching results. To overcome this, inspired by the recent success of implicit neural representations, we present a novel method for semantic correspondence, called Neural Matching Field (NeMF). However, the complicacy and high dimensionality of a 4D matching field are major hindrances, for which we propose a cost embedding network that processes a coarse cost volume to use as guidance for establishing a high-precision matching field through the following fully-connected network. Nevertheless, learning a high-dimensional matching field remains challenging, mainly due to computational complexity, since a na\"ive exhaustive inference would require querying all pixels in the 4D space to infer pixel-wise correspondences. To overcome this, we propose adequate training and inference procedures: in the training phase, we randomly sample matching candidates, and in the inference phase, we iteratively perform PatchMatch-based inference and coordinate optimization at test time. With these combined, competitive results are attained on several standard benchmarks for semantic correspondence. Code and pre-trained weights are available at~\url{https://ku-cvlab.github.io/NeMF/}.
Accept
The paper concerns itself with computing high-resolution matchings. The authors propose to represent matchings as maxima of neural "matching" fields, which is a novel and interesting theoretical contribution that allows one to obtain high-resolution matchings with a fixed representation size of the neural field. The matchings are extracted from the neural field via coordinate optimization. State-of-the-art performance is attained on a variety of semantic correspondence benchmarks. Reviewers also acknowledge that the paper is well written and easy to follow. On the downside are larger computational costs. Also, newer versions of CATs, i.e., CATs++ (which is concurrent work), outperform the presented paper. Overall, the interesting theoretical contribution, which might be useful in other domains, and the strong empirical performance make the paper a good fit for NeurIPS. In the final version the reviewer recommendations must be taken into account.
train
[ "Bl2rIqY8__", "lAkJFw7axgg", "gGqQzsDoKt9y", "8oOtz7mKmc3", "CJhVjSnGRJE", "eriM8cVkEM", "svlPLHZYana", "wPqusEvVrX", "bcgNmwuE0tm", "mxiqeWtajp3", "WCk0WCu-O96", "ft_ME3go5uS", "jJkA9bAdLPb" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewers,\n\nSince the rebuttal discussion is about to end soon, if there is any other concern that we did not adequately address or is not resolved, please let us know, and we will come back to you as soon as possible if we can. \n\nThank you and best regards,\n\nThe authors of Paper 2300.\n", " Thanks to the authors for detailed answers. My questions are adequately addressed and the authors do acknowledge limitations (and benefits): _\"we wish to highlight that the proposed method may suffer from relatively larger computation and memory consumption than CATs, but the proposed approach has an advantage when the resolutions of images are high.\"_. Please discuss this statement in the camera ready version as well.\n\nI am continuing to recommend acceptance. \n", " \n\nWe thank all the reviewers for their comments, and are glad that the reviewers found our work \"novel and useful in other applications” (a2FL, vfUA), “easy to follow and understand\" (JRSW, iiVf), and “supported by extensive experiments” (a2FL, iiVf). In this rebuttal, we mainly addressed following points:\n\n1. Detailed explanation of NeMF as a conditional INR based framework (JRSW). \n2. Memory consumption comparison to show that the proposed method takes an advantageous approach to represent high-resolution matching cost (a2FL, iiVf).\n3. Details of training and cost embedding network (a2FL, vFUA) and clarified the motivations.\n4. Contributions and methodology (iiVf). \n5. Additional minor comments.", " Thanks for the comments. Our responses are in the following. If our responses do not adequately address your concerns, please let us know and we will get back to you as soon as possible.\n\n> **Concrete architecture of the cost embedding network**\n\nWe apologize for the absence of such an important detail. Our cost embedding network builds upon CATs [8], while removing appearance affinity modelling and adding convolution prior to Transformer module. We will include this in the supplementary material.\n\n> **Meanings of different colors**\n\nDifferent colors of flow maps indicate directions and magnitudes of the flows. This visualization has been widely adopted in the optical flow literature, as well as FlowNet2.0 [23]. We will include this in the caption of Figure 6.\n\n>**Figure 7 clarification**\n\nTraining and evaluation resolutions have been one of the issues present in the semantic correspondence literature along with the evaluation for PF-WILLOW. The error in PCK threshold for PF-WILLOW is addressed in CHMNet, a journal extension for CHM. The resolution issue is addressed in CATs++ and PWarpCNet. Both works argue that to\nmake a fair comparison, the training resolutions can be different\namong works, but the evaluation resolutions should be the same among works. Note that CATs++ explicitly mentions this and PWarpC details its experimental setting in the supplementary material where training resolutions are different and mentions in section 5.3 that the original image size is used during evaluation for a fair comparison.\n\n\nFollowing this protocol, we reported both quantitative and qualitative results from images at original resolutions, which would have led to different results to those in the original CATs [8] paper. We will clarify this in Figure 7 caption. Thanks for the comment.\n\n> **PF-WILLOW wrong results**\n\nRecently, CHMNet [45], a journal extension uploaded on arxiv, pointed out that since DHPF [48], the code implementations were wrong for PF-WILLOW, and newly proposed bbox-kp threshold. 
This is why the quantitative results of CATs on PF-WILLOW (measured using bbox-kp) are different from those of the original CATs paper. After re-computing the results, we obtained: \n\n\n|Model | PCK 0.01 | PCK 0.03 | PCK 0.05 | PCK 0.1 |\n|---|:---:|:---:|:---:|:---:|\n| CATs | 2.9 | 20.4 | 40.7 | 69.0 |\nWe will replace the values. Our sincere apologies for this.\n\n> **Generalization power**\n\nConventionally, PF-WILLOW is evaluated using the model trained on PF-PASCAL, which shows the generalization power of the model. In Table 1, CATs [8], trained and evaluated on PF-PASCAL, achieves 92.6 for PCK at $\alpha = 0.1$, which is 1.0 point lower than NeMF. In contrast, on PF-WILLOW, NeMF significantly outperforms CATs for all $\alpha$ thresholds, indicating its generalization power. Nevertheless, almost all matching networks, including NeMF, do not particularly propose or address any means of regularization or loss for the zero-shot setting, and this may make them incapable of performing well on unseen categories. Zero-shot semantic matching is indeed an interesting and promising future direction. Thanks for the comment.", " > **Relatively high computational costs**\n\nAlthough NeMF has higher computational costs than CATs [8], a simple approximation of the memory consumption of conventional approaches would be (h x w)$^2$, which NeMF avoids with the proposed approach. More specifically, we designed the inference strategy in a way that significantly reduces the time taken, as shown in Table 3. Also, NeMF can infer correspondences at any arbitrary resolution of images with fine-grained details preserved, which is the major contribution of this paper. \n\n\nNote that COTR [31] and DMP [19] are examples that require highly non-trivial computation and time consumption at inference. Concretely, COTR requires (H x W) / 35 seconds according to the statement \"non-optimized prototype implementation queries one point at a time, and achieves 35 correspondences per second on a NVIDIA RTX 3090 GPU.\", and DMP requires more than a minute for the complete optimization iterations. From this, the proposed method has an advantage over them in terms of inference time.\n\nMoreover, we report the memory consumption of our method in comparison to CATs [8] and CHM [45]. Evaluating with four different resolutions of the cost volume, namely 16$^4$, 32$^4$, 64$^4$, 128$^4$, we summarize the memory consumption of both approaches during training and inference below:\n\n\n| Model | 16$^4$ | 32$^4$ | 64$^4$ | 128$^4$ |\n|---|:---:|:---:|:---:|:---:|\n| Training CHM | 708 | 1538 | OOM | OOM |\n| Inference CHM | 371 | 433 | OOM | OOM |\n| Training CATs | 454 | 3523 | OOM | OOM |\n| Inference CATs | 188 | 302 | 1882 | OOM |\n| Training NeMF | 4205 | 4205 | 4205 | 4205 |\n| Inference NeMF | 1528 | 1528 | 2443 | 6309 |\n\nAs shown in the table, CATs [8] and CHM [45] suffer from OOM (Out of Memory) as the resolution increases, demonstrating the advantage of the proposed approach of implicitly representing matching costs. \n\n>**Biggest difference between NeMF and others**\n\nAs explained in L136 to L141, we propose an INR-based learnable framework that implicitly represents a high-dimensional matching field to infer correspondences at arbitrary scales. Other works typically perform matching at low resolution (L33) and require hand-crafted interpolation techniques. We will highlight our contribution at the end of the introduction, as reviewer a2FL suggested.
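To illustrate what "implicitly representing the matching field" means in practice, here is a minimal PyTorch-style sketch; the class name, layer sizes, and the way the cost features are fed in are illustrative placeholders, not our exact architecture:

```python
import torch
import torch.nn as nn

class ToyMatchingField(nn.Module):
    """Maps a continuous 4D coordinate (x_s, y_s, x_t, y_t) in [0, 1]^4,
    together with an interpolated cost feature, to a matching score."""

    def __init__(self, feat_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords, cost_feat):
        # coords: (B, 4) continuous match candidates; cost_feat: (B, feat_dim)
        # features sampled from the embedded coarse cost volume at coords.
        return self.mlp(torch.cat([coords, cost_feat], dim=-1)).squeeze(-1)
```

Because such a field is queried per coordinate, its memory footprint is fixed by the network size rather than by the H_s x W_s x H_t x W_t size of an explicit cost volume, which is why the NeMF training memory in the table above stays constant across resolutions.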
", " We highly appreciate the comments; our responses are below. If any of our responses do not adequately address your concerns, please let us know and we will get back to you as soon as possible.\n\n> **Cost volume construction widely used in other literature**\n\nWe wish to highlight that our main contribution does not lie in the \"usage\" of cost volumes but in the seamless incorporation of implicit neural representations (INR) into the semantic correspondence task, where a coordinate-based neural network allows us to model a continuous matching field. With the learned network, we can implicitly represent a high-dimensional 4D matching field to infer high-precision correspondences at arbitrary scales without any post-processing procedure.\n\n> **Matching ambiguities caused by interpolation.**\n\nWe assume that the reviewer is concerned with L168, where we say \"use a quadlinear interpolation on C' ...\". We agree with the reviewer that ambiguities may exist when hand-crafted interpolation is used to query the cost feature vector. However, similar to Convolutional Occupancy Networks [E], Plenoxels [F] and NSVF [G], the fully connected layer is the key that alleviates the ambiguities induced by interpolation, because the learning signal, defined at the original resolution, rectifies the ambiguously interpolated cost features, which helps alleviate the matching ambiguities initially present. \n\n\n[E] Peng, S., Niemeyer, M., Mescheder, L., Pollefeys, M., & Geiger, A. (2020, August). Convolutional occupancy networks. In European Conference on Computer Vision (pp. 523-540). Springer, Cham. \n\n[F] Fridovich-Keil, S., Yu, A., Tancik, M., Chen, Q., Recht, B., & Kanazawa, A. (2022). Plenoxels: Radiance Fields Without Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 5501-5510).\n\n[G] Liu, L., Gu, J., Zaw Lin, K., Chua, T. S., & Theobalt, C. (2020). Neural sparse voxel fields. In Advances in Neural Information Processing Systems, 33.\n\n> **Random sampling during training**\n\nAs L180-L182 state, we use the ground-truth matching score for computing the loss. More specifically, given a keypoint p_s in the source image, p_t (positive sample) in the target image and a fixed number (e.g., 99) of randomly sampled keypoints p_rand (negative samples), we use a cross-entropy loss to maximize the probability of predicting the ground truth. If we sampled too many negative samples, we agree that a long-tail problem could be induced, but based on our empirical observations, we did not observe such an issue. We wish to highlight that, as the reviewer suggested, we included a discussion regarding advanced sampling strategies. We would be glad to attempt better sampling strategies in our future work. Thanks for the comment.\n\n> **Dense flow map to sparse keypoints conversion**\n\nThanks for the comment; we will include one or two sentences to indicate that sparse keypoints are converted to a dense flow map following the protocol of CATs [8] (keypoints-to-dense-flow implementation). \n\n> **Is the inference conducted on the validation or test set?**\n\nDuring training and validation, we do not use PatchMatch-based sampling and coordinate optimization. These two strategies are designed for the test phase.
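To preview what these two test-phase strategies look like in code, here is a minimal sketch of the gradient-based coordinate refinement; the PatchMatch-style initialization is assumed given, and the optimizer choice, step count, learning rate and sigmoid likelihood are illustrative stand-ins rather than our exact procedure:

```python
import torch

def refine_targets(field, cost_feat_fn, src, tgt_init, steps=50, lr=1e-2):
    """Test-time refinement of target coordinates by gradient descent.

    field:        trained matching-field network (differentiable).
    cost_feat_fn: differentiable sampler of cost features at 4D coords.
    src:          (B, 2) fixed source coordinates in [0, 1]^2.
    tgt_init:     (B, 2) initial targets, e.g. from PatchMatch sampling.
    """
    tgt = tgt_init.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([tgt], lr=lr)
    for _ in range(steps):
        coords = torch.cat([src, tgt], dim=-1)        # (B, 4) candidates
        score = field(coords, cost_feat_fn(coords))   # matching scores
        # Decrease the negative log-likelihood of the match, here with a
        # sigmoid likelihood as a simple stand-in.
        loss = -torch.log(torch.sigmoid(score) + 1e-8).mean()
        opt.zero_grad()
        loss.backward()  # the back-propagation performed at inference
        opt.step()
    return tgt.detach()
```

Only the query coordinates are updated here; the network weights stay frozen throughout.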
\n\n> **Why is there back-propagation during the inference process?**\n\nThe coordinate optimization requires back-propagation, as detailed in Section 4.4. Concretely, because the network is naturally differentiable, we use gradient descent to optimize the target coordinate **y** in the direction that decreases the negative log-likelihood of the matching score with respect to the corresponding source coordinate **x**. A high-level explanation is that, using the learned network, we want to find the target coordinate that best fits the given source coordinate to be matched. To this end, we only move the coordinates in a direction that finds the most corresponding target coordinates.\n\n>**Does NeMF use the output of PatchMatch as pseudo labels to train the model?**\n\nWe wish to clarify that the proposed PatchMatch-based sampling is not used as pseudo-labels for training; it is only used at test time. We perform test-time optimization because we optimize the coordinates in a direction that maximizes the correctness of the correspondence using the learned network (L222). As detailed in Section 4.4, although the proposed PatchMatch-based inference strategy can prevent exhaustive searching, it may degrade the performance for several reasons, including an insufficient number of iterations and a limited search range. To address this issue, we mitigate the potentially erroneous inference by adopting a test-time optimization strategy that directly optimizes the coordinates **y** that maximize the correctness of the correspondence using the learned network.", " > **Concrete components to be held in Table 2**\n\nWe will include pseudo-code or a table as in R-MVSNet [C] for Table 2 in the supplementary material, as the reviewer kindly suggested. Note that for the concrete components in Table 2, NeMF was applied to the 4D volume obtained in the following cases:\n\n(I). A raw correlation map constructed from a pair of feature maps,\n\n(II). A 4D volume obtained by applying center-pivot convolutions (HSNet [D]) to the raw correlation map,\n\n(III). A 4D volume obtained by applying CATs (without appearance affinity modeling) to the raw correlation map,\n\n(IV). A combination of (II) and (III), i.e., a 4D volume obtained by applying the center-pivot convolutions and CATs (without appearance affinity modeling) sequentially.\n\nNote that the coordinate optimization was commonly applied to all four cases.\n\n[C] Yao, Y., Luo, Z., Li, S., Shen, T., Fang, T., & Quan, L. (2019). Recurrent mvsnet for high-resolution multi-view stereo depth inference. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 5525-5534).\n\n[D] Min, J., Kang, D., & Cho, M. (2021). Hypercorrelation squeeze for few-shot segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 6941-6952).\n\n> **Stress test**\n\nWe would like to thank the reviewer for suggesting an interesting direction. Although we were unable to obtain the complete quantitative results on SPair-71k for the stress test, we measured the memory consumption, and the summary is shown below. 
Note that we conducted this experiment with 128$^4$ resolution and a batch size of 100,000 for the coordinates:\n\n\n| Phase | Resolution | Memory [MiB] |\n|---|:---:|:---:|\n| Training | $12^4$ | 3766 |\n| Training | $16^4$ | 4205 | \n| Inference | $12^4$ | 4048 | \n| Inference | $16^4$ | 6308 |\n\nWe observed that by reducing the resolution of the cost volume, the memory is greatly reduced both in training and inference. Once the training finishes, we will compare with the best results to see if this stress test substantiates its effectiveness. \n\n> **D4 and D5**\n\nThanks for the valuable suggestion and for pointing out the absence of the year field. We can add in the broader impact that the proposed approach, which implicitly represents high-dimensional matching fields, has the potential for general use in works that employ high-resolution cost volumes with fine-grained information preserved, e.g., semantic segmentation and optical flow. ", " We thank the reviewer for the thorough comments, to which we respond below. If any of our responses do not adequately address your concerns, please let us know and we will get back to you as soon as possible.\n\n\n> **NeMF using 5D cost tensor while CATs uses 4D**\n\nIn fact, CATs [8] utilizes 5D tensors as it uses multi-level cost maps, resulting in 8 x H_s x W_s x H_t x W_t when SPair-71k is used. This means that the overall computation of the cost embedding network would be roughly x2 higher than CATs. Our sincere apologies for the ambiguous statements and notations. In addition, we wish to highlight that the proposed method may suffer from relatively larger computation and memory consumption than CATs, but the proposed approach has an advantage when the resolutions of images are high.\n\n> **Memory consumption for training and inference**\n\nBefore the response, we wish to correct some errors in L301 and L310, where we say resolution \"512 x 512\" and \"assuming N = 5\", when the actual values are \"original\" and \"N = 25\", in line with our implementation details. Sincere apologies for our mistakes; we will correct this.\n\nFollowing the reviewer's suggestion, we report the memory consumption of our method in comparison to CATs [8] and CHM [45]. Evaluating with four different resolutions of the cost volume, namely 16$^4$, 32$^4$, 64$^4$, 128$^4$, we summarize the memory consumption of both approaches during training and inference below:\n\n| Model | 16$^4$ | 32$^4$ | 64$^4$ | 128$^4$ |\n|---|:---:|:---:|:---:|:---:|\n| Training CHM | 708 | 1538 | OOM | OOM |\n| Inference CHM | 371 | 433 | OOM | OOM |\n| Training CATs | 454 | 3523 | OOM | OOM |\n| Inference CATs | 188 | 302 | 1882 | OOM |\n| Training NeMF | 4205 | 4205 | 4205 | 4205 |\n| Inference NeMF | 1528 | 1528 | 2443 | 6309 |\n\nAs shown in the table, CATs and CHM suffer from OOM (Out of Memory) as the resolution increases, demonstrating the advantage of the proposed approach of implicitly representing matching costs. We will include this in the paper. Note that at inference, 16$^4$ and 32$^4$ share the same memory consumption, which shows that at very low resolutions only a trivial difference is observed. \n\nAdditionally, we wish to emphasize that, without affecting the performance, we can reduce the computational burden and memory consumption at the inference phase by optimizing only the coordinates of interest used for evaluation in the coordinate optimization phase, and by tuning the batch size of the input coordinates to the PatchMatch-based sampling, which together determine the memory consumption and run-time. 
Assuming a set of keypoints for querying is available, we can optimize only the coordinates of the keypoints whose corresponding keypoints in the target image we want to find. This way, we can significantly reduce the run-time. The results are shown below:\n\n| Inference Strategy | Inference Time [s/img] | Memory [MiB] |\n|---|:---:|:---:|\n| PatchMatch | 1.42 | 6307 |\n| PatchMatch + Optimize all coordinates | 2.21 | 6309 | \n| PatchMatch + Optimize only keypoints | 1.65 | 6308 | \n\nFor this experiment, we assumed NeMF is representing a cost volume of size 128$^4$, to allow the memory comparison presented below this paragraph. From the table above, we observe a negligible change in memory, but a significant reduction in run-time when only the keypoints of interest are optimized. With this strategy, we can benefit from the reduced run-time, and by reducing the batch size of the input coordinates to the PatchMatch-based sampling we can also control the memory consumption. For the following experiment, we use the default coordinate batch size, which is 100,000, and a cost volume size of 128$^4$:\n\n| Batch Size | Inference Time [s/img] | Memory [MiB] |\n|---|:---:|:---:|\n| 100000 | 1.65 | 6308 |\n| 50000 | 2.27 | 4301 | \n| 25000 | 4.43 | 2730 | \n| 10000 | 9.18 | 1789 |\n\nThis table shows that tuning the batch size can reduce the memory consumption at the cost of run-time, meaning that users can choose to infer with higher/lower memory and faster/slower run-time.\n\n> **$y$ initialized similar to PatchMatch?**\n\nAs stated in L204, we utilize an average-pooled cost feature volume for the initialization step, which is different from PatchMatch. \n\n> **L272 wording**\n\nAs mentioned in the comment above, we actually utilize the 4D cost tensor obtained by average-pooling the 5D cost feature. We will make this clearer in the paper. Thanks for the comment. \n\n> **Missing details**\n\nWe train our networks on a GeForce RTX 3090 with a batch size of 20 for 20 epochs, which takes around 16 hours. Regarding the implementation details, we build our cost embedding network upon the architecture of CATs. Specifically, notable modifications include convolutions prior to the CATs architecture, as justified by the experiments in Table 2, and the exclusion of appearance affinity modelling, which was adopted in CATs (concatenation of projected feature maps to the cost volume). We will clarify this in the implementation details and Section 4.2.
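To make these architectural modifications concrete, an illustrative skeleton of such a cost embedding pipeline is shown below; the layer sizes, token layout, and class name are placeholders rather than our exact configuration:

```python
import torch
import torch.nn as nn

class ToyCostEmbedding(nn.Module):
    """Convolution over the cost volume followed by a Transformer block,
    mirroring the 'convolutions prior to CATs' design described above."""

    def __init__(self, dim=256, heads=4):
        super().__init__()
        # Treat the flattened target grid (H_t * W_t = dim) as channels of
        # a 2D convolution over the source grid, a common way to process
        # 4D correlation volumes with standard layers.
        self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1)
        self.attn = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                               batch_first=True)

    def forward(self, corr):
        # corr: (B, H_t * W_t, H_s, W_s) raw correlation volume.
        x = self.conv(corr)                    # local aggregation first
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H_s * W_s, C) tokens
        tokens = self.attn(tokens)             # global self-attention
        return tokens.transpose(1, 2).view(b, c, h, w)
```

Here the convolution smooths the raw correlations locally before the Transformer layer aggregates them globally; variations of this combination are what Table 2 ablates.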
", " We thank the reviewer for the positive assessment of our paper. If any of our responses below do not adequately address your concerns, please let us know and we will get back to you as soon as possible.\n\n> **Too many quantitative comparisons and more visual experiments**\n\nWe agree that it is a good idea to provide more visual experimental results. Nevertheless, we wish to mention that Fig. 2 and Fig. 6 visualize matching fields and flow maps to help readers' understanding. Furthermore, qualitative results on each benchmark are given in the supplementary material. However, following the reviewer's suggestion, we will reduce the number of compared methods in Table 1 and replace them with other visual experiments, including warped source images over evolving iterations, as in DMP [19].\n\n> **Does NeMF take a NN to represent a semantic correspondence or a matching cost?**\n\nThanks for the question. First, we wish to clarify that the original NeRF adopts an *unconditional* implicit neural representation (INR) to encode a single scene or image. On the other hand, the literature that utilizes *conditional* INRs, e.g., PixelNeRF [77], IBRNet [A], and MVSNeRF [B], overcomes this shortcoming by conditioning a NeRF on an input image to learn a scene prior.\n\nRegarding the question, NeMF represents the matching cost based on the *conditional* INR framework, where a matching prior can be learned by conditioning the network on the matching cost features.\n\n[A] Wang, Q., Wang, Z., Genova, K., Srinivasan, P. P., Zhou, H., Barron, J. T., ... & Funkhouser, T. (2021). Ibrnet: Learning multi-view image-based rendering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 4690-4699).\n\n[B] Chen, A., Xu, Z., Zhao, F., Zhang, X., Xiang, F., Yu, J., & Su, H. (2021). Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 14124-14133).\n\n> **How much time does the proposed method take for training?**\n\nEquipped with 4 GeForce RTX 3090 GPUs, a single epoch with a batch size of 20 on the SPair-71k dataset takes about 50 minutes, resulting in about 16 hours to complete the whole training. We will include this in the implementation details.\n\n>**Different matching cost for different image pairs. How does NeMF cope with this?**\n\nOur method can adaptively infer an intrinsic matching prior no matter what image pair is given as input, by conditioning NeMF on the matching cost features computed from the input images. This is in contrast to previous approaches based on unconditional INRs, where an image-specific prior can disrupt the generalization capacity. ", " This paper introduces neural matching fields into semantic correspondence. To the best of my knowledge, this approach should be the first method to do the task using implicit neural representation. There are two problems: the computation of the 4D matching field and the inference efficiency. The authors provide effective methods to address the two problems. This paper employs implicit neural representation to do semantic correspondence. This should be the major contribution. According to the statements of the authors, I can follow the idea easily, and this idea should work.\n\nThe disadvantage of this work is the experiments. There are too many quantitative comparisons. According to the data, the performance of this method seems OK. However, the authors should provide more visual experiments to convince readers. I only have one concern. Traditional implicit neural representation methods such as LIIF and NeRF record images into the weights of a neural network. One neural network represents one image or one scene. Does NeMF take a neural network to represent a semantic correspondence or a matching cost? If so, how much time will your method cost to train a network? If not, what is the difference between your method and other semantic correspondence methods? According to my understanding, NeMF takes a network to represent a matching cost. In practice, people need a method to compute a different matching cost for different image pairs. How does NeMF deal with this situation?", " Deep learning based image matching algorithms rely on creating a 4D tensor representing the cost of matching each pair of pixels. 
Due to large memory consumption, this 4D tensor has to be kept at a coarse scale, which affects the quality of matching at a finer scale (i.e., small localization errors are prominent). Inspired by the concept of implicit neural representations, this paper proposes to learn an MLP which can efficiently interpolate in the space of matching costs. To recover the matching during inference, the paper proposes two ways: sampling and gradient-based optimization. Experiments show that this approach can indeed match images significantly better at a finer scale than competitive approaches. Overall, this idea of implicit neural representation combined with test-time optimization can also have applications in other tasks which compute cost volumes, e.g., depth estimation, optical flow, etc. A. Strengths:\n1. The idea of using an MLP for recovering fine details from matching costs is novel and possibly useful in other applications as well.\n2. The paper is mostly written well and easy to understand.\n3. Extensive experiments do show the improvement of this approach over competitive approaches on several datasets. \n4. Even though this approach incurs a slightly larger memory burden than the best competitive approach (CATs[8]), it allows computing the matching costs at any arbitrary resolution, foregoing the need for upsampling of the cost tensor.\n\nB. Weakness:\n1. The concept of implicit representation was motivated by the need for reducing memory consumption. However, this approach ended up ultimately using a 5D cost tensor requiring 16x more memory than the competitive approach of CATs[8], which uses 4D. \n2. Computation: The inference time of this approach is significantly more than competitive approaches. Although the authors do mention this limitation, memory consumption is not reported (for both training and testing). \n C. Major questions and suggestions:\n1. In equation 6, is $y$ initialized from $F_{pred}$ similar to PatchMatch? \n2. In line 272, CATs[8] are said to have a cost tensor of size $16^4$, which at first glance seems to be more than this approach of NeMF. However, in line 9 of the Appendix it is mentioned that NeMF needs even more ($16^5$). I would suggest changing the wording in line 272.\n3. I did not find details about training time, batch size, compute hardware used for training, etc. Also, peak memory consumption during training and inference needs to be compared with CATs. I think NeMF would need more due to the 5D cost representation instead of 4D.\n4. The details of the cost embedding network 4.2 are fuzzy. In Table 2 please define the components concretely. See Table 1 in [R-MVSNet](https://openaccess.thecvf.com/content_CVPR_2019/supplemental/Yao_Recurrent_MVSNet_for_CVPR_2019_supplemental.pdf) for an example.\n\nD. Minor comments and suggestions (do not need to be addressed in rebuttal):\n1. It would be better to have a list of contributions at the end of the introduction. This can help the reader in exactly understanding what is novel and what is not in this article. \n2. Continuing further on point B1, have the authors tried making the cost tensor even smaller (e.g. $12^5$) to 'stress-test' the MLP? How much worse are the results? It might decrease memory consumption and inference time. I understand if these results cannot be reported within the rebuttal period, but it would be good to have these for the final version.\n3. Relating to point C4 above, it would be good to know the exact details of architectural changes done on top of CATs, possibly in the Appendix. 
Lines 7-9 of the Appendix are not detailed enough in my opinion.\n3. I assume the concept of neural fields presented in this paper is also interesting for optical flow and depth estimation applications. Thus, these applications can also be added to the broader impact. \n4. Reference [30] is missing the year field.\n The only limitation is the larger computational burden (compute time and memory) compared to competitive approaches. This is not a major limitation because the idea itself is novel. Moreover, the increase in memory consumption is still linear, instead of $O(M^4)$ for $M \times M$-sized images as in conventional approaches. Nonetheless, these limitations need to be made clearer in the final manuscript. ", " This paper proposes a new matching field generation method called a neural matching field (NeMF). It uses a cost embedding network consisting of convolution and self-attention layers to process the coarse cost volume to obtain a cost feature representation. Through such a mechanism, NeMF can realize high-resolution semantic matching. During training, to reduce the computational cost of the 5-D matching field, NeMF combines convolution operations with self-attention operations to get initial matching tensors. NeMF also incorporates random sampling when calculating the local matching fields. ***Strength***\n1. This paper is well-organized and can be easily understood by readers. The technical details are introduced clearly.\n2. The authors conducted extensive experiments on multiple benchmarks to investigate the effectiveness of different modules and designs in this paper.\n3. NeMF aims to get matching fields at high resolutions, which can resolve matching ambiguity during semantic matching.\n\n***Weakness***\n1. Some core techniques of this paper have been used in some disparity or optical flow estimation papers. In Figures 3 and 4, the way to construct 5-D matching cost volumes has been widely used in disparity estimation. Besides, although the 5-D matching cost volumes can get richer matching information, they incur a huge computational cost. When the resolution of the matching field is very large, it's too expensive to process these 5-D matching volumes.\n2. In lines 137-138, to get the semantic matching field in high resolution, NeMF uses interpolation methods (as in MMNet, ICCV 2021) to upscale the matching fields. But such an interpolation will obviously lead to matching ambiguity. Can these ambiguities brought by interpolation methods be eliminated in subsequent modules?\n3. In section 4.3 (lines 180-182), if NeMF uses random sampling, the training will encounter the long-tail problem, because most of the point pairs will be classified as negative matches, and only very few matched point pairs can be classified as positive matches. In fact, the negative point pairs are much more numerous than the positive pairs. If a purely random sampling strategy were adopted, there would be no positive samples, and this would lead to the crash of the model training. I feel a little confused about the random sampling.\n4. In lines 185-190, how to generate a dense matching flow from sparse annotations is not well introduced.\n5. In Figure 5, is the inference conducted on the validation or test set? Why is there back-propagation during the inference process? Does NeMF use the output of PatchMatch as pseudo labels to train the model?\n6. It seems that NeMF uses the output of PatchMatch as the pseudo label. But there is no ablation study to support it. 
It will make readers confused about whether the performance gain comes from the model itself or its supervision.\n7. It seems that the computational cost of NeMF is relatively high. 1. What is the biggest difference between NeMF and other semantic matching methods based on 5-D matching cost volumes?\n2. It seems that the output of PatchMatch is used as pseudo labels to guide the training. \n3. During the inference, it's unclear why NeMF needs back-propagation. The computational cost of NeMF is relatively high. I don't find any other negative societal impact.", " This paper proposed a novel method based on implicit neural representation (INR) for semantic correspondence, named neural matching field (NeMF). Also, a cost embedding network containing convolution and self-attention layers is established in order to better represent the complicated and high-dimensional continuous field. During the inference phase, this paper introduced PatchMatch-based sampling and coordinate optimization, which make NeMF perform better and more efficiently. Experiments on several standard benchmarks and an ablation study validate the proposed architecture with full-resolution input image pairs. Originality: Compared to existing methods, this paper is the first to leverage implicit neural representation in semantic correspondence.\n\nQuality: Benefiting from INR, which is not coupled to spatial resolution, images can be input to the proposed network at their original resolution, and SOTA performance is attained with competitive inference time. \n\nClarity: The paper is somewhat clear, but some important details are missing or unclear\n\nSignificance: The paper is likely to have moderate impact on semantic correspondence\n Figure 3 illustrates the proposed network architecture but it is not clear, especially the concrete architecture of the cost embedding network.\n\nFigure 6 demonstrates the flow maps for different N iterations; the authors should give more explanation of the meanings of the different colors.\n\nFigure 7 shows several semantic matches on PF-PASCAL; however, it seems that the results of the third pair in the first two columns and the second pair in the last two columns look almost the same, while some correspondences predicted by CATs are marked red. Additionally, the correspondence marked red of the last image pair in the third column looks different from that reported in the original paper (CATs).\n\nThe quantitative results of CATs on PF-WILLOW are different from those reported in the original paper (i.e. 50.3 when α=0.05 and 79.2 when α=0.1).\n This paper formulates predicting semantic correspondence as a classification problem, which lacks generalization between datasets. Concretely, the network trained on one dataset with a fixed number of categories may not perform well on other datasets with unseen categories" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "nips_2022_bZzS_kkJes", "wPqusEvVrX", "nips_2022_bZzS_kkJes", "jJkA9bAdLPb", "ft_ME3go5uS", "ft_ME3go5uS", "WCk0WCu-O96", "WCk0WCu-O96", "mxiqeWtajp3", "nips_2022_bZzS_kkJes", "nips_2022_bZzS_kkJes", "nips_2022_bZzS_kkJes", "nips_2022_bZzS_kkJes" ]
nips_2022_ReB7CCByD6U
Beyond Mahalanobis Distance for Textual OOD Detection
As the number of AI systems keeps growing, it is fundamental to implement and develop efficient control mechanisms to ensure the safe and proper functioning of machine learning (ML) systems. Reliable out-of-distribution (OOD) detection aims to detect test samples that are statistically far from the training distribution, as they might cause failures of in-production systems. In this paper, we propose a new detector called TRUSTED. Different from previous works, TRUSTED's key components (i) include a novel OOD score relying on the concept of statistical data depth, and (ii) exploit the full potential of the idea that all hidden layers of the network carry information regarding OOD. Our extensive experiments, comparing over 51k model configurations including different checkpoints, seeds and various datasets, demonstrate that TRUSTED achieves state-of-the-art performance by producing an improvement of over 3 AUROC points.
Accept
The paper proposes an out-of-distribution detection approach using the integrated rank-weighted (IRW) depth. Its main novel feature is leveraging the information from all layers of the model for this task. The detector can be applied to new transformer models without any training, as opposed to data-driven methods. The method is assessed in a comprehensive evaluation, and code is provided for reproducibility. One of the limitations, however, is the difficulty of comparing the proposed method with related work. The presentation, especially of the technical content, should be improved in the final version. The AC disagrees with the authors' complaint about the biasedness of some reviews. Indeed, two reviewers had critical remarks on several aspects of the paper, yet this criticism appears to be fair and driven by scientific discourse. The answers provided in the rebuttal have clarified most of the reviewers' concerns.
train
[ "d1L-1Ox2f0l", "rAG1fgcKgKY", "J0KVVOI3tId", "GACG2RnqUXA", "GhThF3ZnGcF", "1wKbi3XvSv6", "0cUHrj3GuT2", "GCU8yl9nYK5", "EqF3V14QmAB", "zrYNrqNvnCf", "uIcbAnI9-E" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you, the authors have addressed my questions during rebuttal.\n", " Let us thank reviewer o5Ud for their detailed answer to our response. We are glad they are acknowledging that a direct comparison against [36] and [85] is either not realistic in our setting, or outside of the scope of the paper.\n\n\nIn the following, we would like to (i) _clarify our positioning as suggested by reviewer o5Ud_ and (ii) _provide them with additional experiments we run to take into account its suggestions._\n\n\n**Clarification of our positioning.** We are addressing the OOD detection problem on pre-trained and fine-tuned classifiers. _We believe this setting is particularly well suited for real-world applications where practitioners_ usually have already access to a trained model and cannot afford to do a retraining from scratch. In order to avoid confusion, we have added a clarification both in the abstract and in the introduction of the updated manuscript.\n\n**Extending our experimental setting to LM.** Although we think extending the use of TRUSTED to Language Models (such as in Arora et al.) is somewhat outside the scope of our work, _we did our best to add relevant experiments to address reviewer o5Ud._ More precisely, we implemented the code provided by Arora et al. to allow for a fair comparison. The results we obtained are displayed in the following Table: \n\n|ID| OOD| ||AUROC|| |Accuracy||\n|-|-|-|-|-|-|-|-|-|\n ||| **_APPROVED_**| PPL | MSP| Oracle| OOD| IN-D|\n|SST-2| IMDB| **97.9** | 96.8 |66.1| 100.0 | 91.8| 92.9|\n|| Yelp |99.0 | 99.0| 57.3 |99.8 | 94.3||\n|IMDB |SST-2 |**98.9** | 96.8 |83.1| 100.0 | 90.1| 95.7|\n||Yelp | **88.9** | 76.5 |67.8 |100.0 | 96.2||\n|Yelp |SST-2 |**99.7** | 98.8| 85.8| 99.8 | 88.9 | 98.1|\n || IMDB | **90.8** | 86.7 |62.3| 100.0 | 93.1||\n|SNLI |RTE | **97.9** | 95.1| 78.7 |99.8 | 67.9| 90.2|\n || MNLI | **98.6** |96.4| 75.6| 99.7 | 80.1||\n|RTE |SNLI | **87.4** | 82.3 |47.1 |99.7 | 81.8 | 76.8| |\n | | MNLI |**89.2** | 85.3| 56.7| 97.0 | 77.1| |\n|MNLI |SNLI| **89.6** | 76.6 |58.2| 99.7 | 81.3| 85.4|\n || RTE| **87.9** | 68.2 |77.7 |96.7 | 77.1 ||\n|TOTAL | | **_93.8_** | 88.1 |68.3| 99.3 ||||\n\n\nWe would like to stress that this _Oracle detector is trained with access to OOD data and thus cannot be compared with our TRUSTED detector_ which does not use an OOD example. We nonetheless report the results of Oracle for completeness.\n\n**Clearly, these additional results support the superiority of our TRUSTED detector, even in the case of LM.**\n\n**We have added this experience (see Section B of the appendix) as an extension of our method. 
\n\n\n**_We hope we have addressed the reviewer's questions on comparing with relevant related works, and that they will be keen to consider raising their score accordingly._**\n\n\n\n\n", " For comparison to existing works, please provide comparisons that are\n1) On the same dataset & split the respective papers used,\n2) Directly against the numbers in the tables reported in the respective papers (no reimplementation, or somehow prove that the reimplementation is the same or better),\n3) Using the proposed approach in the respective papers and citing the paper for the name of the approach in the table.\n\nI have come to an understanding that \n1) Such a comparison against [36] is difficult because [36] provided only bar plots and didn't provide numbers for a straight comparison.\n2) Such a comparison against [85] may arguably be apples to oranges because [85] finetunes the transformer whereas the proposed work did not finetune the transformer. (Apologies for the typo in the review: [82] -> [85].)\nNote: I do not buy the argument that MSP, energy or Mahalanobis counts as a proper comparison to [85], since they are baseline approaches used in [85].\n3) Same for Arora et al., since they use a perplexity baseline, whereas perplexity may be inaccessible in the textual classification problems that the authors are working on. \n4) Plus, the BERT models trained for the respective datasets may be non-standard and difficult to reproduce.\n\nAs a result, no such straight head-to-head comparison can be performed, and the comparison defaults to showing that the proposed approach wins over a set of hard baselines.\n\nHowever I think that\n1) It would be very useful to apply the proposed approach *directly to pre-existing benchmarks* and show improvements, e.g. one from Arora et al.'s paper. It's the easiest way to provide a direct reference point to existing literature.\n2) Either a head-to-head comparison is needed, ignoring some assumptions, or the proposed method is SOTA only under restricted conditions and the contribution would be less. \n3) If no such head-to-head comparisons are presented, the inherent assumptions should be made clear, for example \"no finetuning\" or \"no perplexity\", so that comparisons to other contemporary methods may not be applicable. For example, emphasize OOD on pretrained models in the intro/abstract.", " Let us thank Ezx2 for their careful reading of the manuscript. We are glad that Ezx2 acknowledges the strong performance of our detector, the extensive evaluation framework, and our analysis, and highlights the practical importance of designing OOD detectors for textual data.\n\nBelow we address the different limitations:\n1. Ezx2 claims that our work suffers from limited technical novelty:\n * The IRW depth is neither known nor used in the deep learning community, and more specifically in the NLP community. Although we agree that IRW is borrowed from the statistical literature, we argue that the Mahalanobis distance applied to OOD detection in [44] was likewise not developed by its authors. We believe that finding, adapting and releasing new tools can benefit the NLP community. \n * To the best of our knowledge, the idea of taking advantage of all hidden layers of transformers before computing the OOD score is actually new. First, in CV, OOD scores are computed for each layer and then aggregated into a single score (see Sastry and Gomes). 
In our work, we do not compute an anomaly score per layer but rather aggregate the hidden representations of samples across layers. We tested several approaches including [18] and [33] but went for the simplest solution. This is quite different and actually takes advantage of the transformer architecture, where all layers share the same dimension. In a nutshell, CV (i) computes an anomaly score for each layer and (ii) aggregates them, while we (i) aggregate hidden representations and (ii) compute a single anomaly score (a short sketch of this two-step scheme is given below). This is extensively explained with relevant references in Section 3.2 (Layer aggregation choice).\n * Ezx2 claims that there are no guarantees behind the IRW depth (l174). In this paper, we use the IRW depth. It has been shown in [R1] that the approximation of the halfspace depth suffers from the curse of dimensionality, with statistical rates of order $O((\log(n)/n)^{1/(d-1)})$ (see Equation (12) in [R1]), where $n$ is the sample size. In contrast, it has been shown in [R2] that the approximation of the IRW depth does not suffer from the curse of dimensionality (see Corollary B.3 in [R2]). **The previous comments have been added to the manuscript.** Furthermore, as explained in Lines 174-178, the IRW depth inherits advantageous computational properties over the halfspace depth.\n2. We have modified the manuscript to clarify technical implementations.\n3. Let us answer the experimental concerns of Ezx2.\n * **A computation time comparison between the IRW and Mahalanobis depths** has been added to Appendix D. **These experiments show computational advantages of IRW over Mahalanobis.**\n * (i) and (ii). Following the reviewer's suggestions, we include relevant additional comparisons in Table 2; see the results below.\n\n|Method|AUC|FPR|\n|---|---|---|\n|DM FL|93.8 ±9.8|89.2 ±20.1|\n|DM FL+1|71.7 ±13.7|54.7 ±32.0|\n|DM F[L,L+1]|81.7 ±20.7|60.7 ±20.0|\n|DM FL⊕L+1|83.6 ±10.6|61.9 ±39.3|\n|DM Fcat|90.4 ±11.5|84.0 ±22.1|\n|DM FPM|81.2 ±15.3|67.7 ±28.7|\n|DIRW FL|92.6 ±8.0|88.5 ±17.7|\n|DIRW FL+1|82.4 ±14.0|77.2 ±24.0|\n|DIRW F[L,L+1]|95.5 ±10.0|91.2 ±15.0|\n|DIRW FL⊕L+1|95.9 ±10.0|91.0 ±20.0|\n|DIRW Fcat|96.1 ±4.9|91.8 ±14.0|\n|**TRUSTED FPM**|**97.0 ±4.0**|**93.2 ±11.5**|\n\nwhere F[L,L+1] consists of concatenating the last two layers, and FL⊕L+1 consists of averaging the last two layers. _**We have also improved and clarified our claim.**_\n\n\nBelow we answer the different questions raised by Ezx2.\n1. Motivation for using IRW: \n * **IRW has appealing properties (see l 187).** First, contrary to Mahalanobis, it does not rely on a multivariate Gaussian assumption; it requires neither the first two moments to be finite nor the computation and inversion of $\Sigma$ in high dimension, which can be ill-conditioned in low-data regimes. Low-data regimes can be harmful in practical applications since OOD detectors rely on per-class statistics; thus, to ensure that $\Sigma$ is well conditioned, we need enough samples per class. \n * **IRW is better suited for OOD detection**. TRUSTED achieves stronger results than a Mahalanobis-based detector. Additionally, our detector can be easily implemented and IRW possesses a simple geometric explanation (see Appendix). \n\n2. Connection with “deformation technique in CV”. We work on textual data, as stated by the title and the paper positioning. CV is outside of the scope of this paper. 
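\n\nTo make the two-step scheme concrete, here is a minimal NumPy sketch of (i) layer aggregation followed by (ii) a single depth-based anomaly score. Function names, tensor shapes, and the number of random projections are illustrative assumptions, not our exact implementation:

```python
import numpy as np

def aggregate_layers(hidden_states):
    # (i) Average a sample's hidden representations across all transformer
    # layers; valid because every layer shares the same width d.
    return np.mean(np.stack(hidden_states, axis=0), axis=0)  # shape (d,)

def irw_depth(x, train_feats, n_proj=1000, seed=0):
    # (ii) Monte-Carlo approximation of the Integrated Rank-Weighted depth
    # of the aggregated feature x w.r.t. in-distribution features (n, d).
    rng = np.random.default_rng(seed)
    u = rng.normal(size=(n_proj, train_feats.shape[1]))
    u /= np.linalg.norm(u, axis=1, keepdims=True)   # random unit directions
    proj_train = train_feats @ u.T                  # (n, n_proj)
    proj_x = x @ u.T                                # (n_proj,)
    ranks = (proj_train <= proj_x).mean(axis=0)     # empirical CDF per direction
    return float(np.minimum(ranks, 1.0 - ranks).mean())
```

A test sample whose depth with respect to the training features of its predicted class is low would then be flagged as OOD.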
\n\n**To sum up, we think there could be a misunderstanding here: our work is an NLP contribution, and we made no claims about connections with CV. Nonetheless, we did our best to answer each concern of Ezx2, and kindly ask them to reconsider our paper as an NLP contribution. If Ezx2 is satisfied with our detailed response, we kindly ask them to upgrade their score.**\n\n\n**References:**\n[R1] Nagy. Uniform convergence rates for the approximated halfspace and projection depth.\n\n[R2] Staerman. Affine-Invariant Integrated Rank-Weighted Depth. \n", " Let us thank FF5o for their careful reading of the manuscript. We are glad that FF5o acknowledges the strong results of our detector.\n\nBelow we address the concerns raised by FF5o.\n\n* FF5o argues our method is somewhat incremental, in particular by highlighting the fact that our aggregation procedure, used in our NLP context, is already known to the Computer Vision (CV) community. We would like to underline that the aggregation techniques used in CV **are quite different from the one we are proposing here, for several reasons:**\n * First, in CV, anomaly scores are computed at each layer and then aggregated into a single anomaly score (see Sastry and Gomes). In our work, we do not compute an anomaly score per layer but rather aggregate the hidden representations of samples across layers. This is quite different and actually takes advantage of the transformer architecture, where all layers share the same dimension. In a nutshell, CV (i) computes an anomaly score at each layer and (ii) aggregates them, while we (i) aggregate hidden representations and (ii) compute a single anomaly score. _This is extensively explained with relevant references in Section 3.2 (Layer aggregation choice)._\n * Moreover, the way the anomaly scores of each layer are aggregated in CV is somewhat intricate. For instance, Sastry et al. rely on a complex ad-hoc heuristic. Another example is provided by Gomes and [44], who learn the best way to aggregate anomaly scores, which requires access to OOD data during training. _We would like to insist on the fact that our detector is unsupervised, meaning we **do not** need access to OOD data beforehand, making it more realistic for real-world scenarios._\n\n* To the best of our knowledge, the idea of taking advantage of all hidden layers of transformers for OOD detection is actually new. The fact that our method is simple, namely computing an averaged representation, should not overshadow the performance of our detector (which outperforms all previous works). **In fact, we think simplicity is the right road to building robust and understandable OOD techniques.** Large Language Models already suffer from a lack of explainability, and we humbly believe it is our mission as a community to build reliable tools to increase safety, in our case for OOD detection.\n\nWe now turn to the questions of FF5o:\n1. _Our method is not based on the use of Mahalanobis scores but on the IRW data depth,_ a tool we borrow from the statistics community. Our paragraph “Connection to the Mahalanobis-based score” is only there to show that the Mahalanobis-based detector can be grouped into the broader range of data depth methods in statistics. \n2. The point of our paper is actually to say that this notion of data depth has not been considered by the NLP community (nor by the deep learning or CV community, actually). Our work shows that IRW is better suited for our purpose as it achieves better performance. 
Thus, **we believe that our work is not incremental, both from a conceptual and a pragmatic standpoint.**\n3. Our method is indeed generic and could be tested on images or other types of data. We believe it would be an entirely new work since NLP and CV both have their own particularities. _We chose to focus on NLP as it already involves a very large number of experiments._\n4. Our paper relies on the same benchmark as [36,82], which is the standard benchmark for OOD in NLP. It contains hard pairs (e.g., SST vs. IMDB) as well as easier pairs.\n5. We would like to clarify our positioning here. Our goal is to have an impact on models used in the real world. Thus, we have chosen to focus on already trained networks, both small (e.g., DistilBERT) and large (e.g., BERT). We did not train models from scratch because it has been known since [36] that models pre-trained on huge corpora and fine-tuned on specific tasks lend themselves better to OOD detection. This is in contrast with models that are trained from scratch solely on the downstream tasks, as FF5o suggests, specifically when the dataset is small, as is the case for TREC10, in which case training from scratch struggles to learn complex textual patterns. Nonetheless, we think our extensive experiments cover different model sizes using more than 51k configurations (including, for instance, DistilBERT, BERT, and RoBERTa). \n\n\n\nTo sum up, and to answer FF5o's limitations section, we think their review is written from a CV perspective and does not consider our work as an NLP contribution with its own specificities. _**We kindly ask reviewer FF5o to reconsider their opinion on the paper by considering our contribution from this NLP perspective. Considering this positioning and our detailed responses, we kindly invite FF5o to revise their grade.**_\n\n\n\n_References:_ \n\nSastry et al. Detecting out-of-distribution examples with gram matrices. ICML 2021.\n\nGomes et al. An Information Geometry Approach to Out-of-Distribution Detection. ICLR 2022.\n\nFort et al. Exploring the limits of out-of-distribution detection. NeurIPS 2021.\n\n", " Let us thank reviewer gSwg for their careful reading of the manuscript. _We are very glad they acknowledge that our method shows competitive results compared to previous works and that they appreciate our efforts to provide a thorough description of the method and to explain how it compares to prior approaches._\n\nBelow, we provide detailed answers to the questions raised by gSwg.\n\n* **Role of the hyperparameters.** Our choice of parameters is reported in Supplementary A3, along with training and validation curves (see Fig. 7). Preliminary experiments on BERT allowed us to set the learning rate and dropout rate for all the models. \n\n* **Classifier architectures.** The classifiers are built on a pretrained encoder with a classification head (MLP). Both the encoder weights and the classification head were fine-tuned during training. \n\n* TRUSTED does not require any additional hyper-parameters except during the depth computation. The parameters of the latter are set to their default values, following what was done in [60].\n* Our method is based on the computation of the IRW depth of the information aggregated across layers. Except for the fact that we compute this quantity conditioning on the maximum softmax probability of an input sample, we do not rely on the softmax probabilities in the aggregation procedure. 
Thus, we expect calibration to have little impact on the performance of our method, in contrast with softmax-based methods such as MSP.\n\n**_We hope we have addressed the reviewer's questions and that they will be keen to consider raising their score accordingly._**\n", " We thank reviewer o5Ud for their careful reading of the manuscript. We are glad they acknowledge that our method _is well-suited for OOD detection: (i) when considering already trained models where retraining is not affordable, and (ii) when no OOD data is available (our method does not require OOD data)._ In this setting, we are glad that, in their summary, reviewer o5Ud underlines that our proposed method outperforms previously developed detection methods for textual data. \n\nIn the following, we provide detailed answers and address each issue reviewer o5Ud raised.\n* Reviewer o5Ud asks for comparisons against the detection methods of [36], [82], and Arora's paper. **In fact, we have already included the aforementioned methods in our benchmark.** More precisely:\n\n * In [36], the authors study the benefits of using pretrained models on the performance of OOD detectors. They rely on the Maximum Softmax Probability _(MSP) detector, which is one of the baselines to which we compare in our paper_ (see Tab. 2 and Tab. 3).\n\n * The main contribution of [82] is a modification of the fine-tuning objective using contrastive learning (see Algo 2 in [82]). They study the effect of the new training on the performance of three OOD detectors, namely MSP, Energy, and Mahalanobis. _These detectors are used as baselines in our paper._ Indeed, we did not consider the detector based on the cosine distance (which is also used in [82]) as it has a lower performance than the Mahalanobis-based method. \n\n * In their paper, Arora et al. consider MSP and Perplexity. We do not use Perplexity as it corresponds to a different setting than ours. Indeed, we consider trained classifiers and try to detect OOD inputs, while _Perplexity is used for language models_ and computes the likelihood of the next word in a masked sentence.\n\n\n* There are two subpoints here. \n\n * We disagree with reviewer o5Ud and think our method is, in fact, novel and brings a new tool to the NLP and deep learning community. We introduced the Integrated Rank-Weighted (IRW) depth, which falls into the more general concept of data depths. The only data depth which was used for OOD detection in NLP is the Mahalanobis score. In fact, a contribution of our paper is to reveal that this score is actually a data depth. Based on this remark, we suggest using the Integrated Rank-Weighted depth, which turns out to be much more efficient for our purpose. The IRW depth has never been used for textual data nor studied for OOD detection using deep neural networks. _So clearly, this contribution is novel._\n\n * Our paper relies on the same benchmark as [36,82]. However, to the best of our knowledge, our benchmark is the largest experimental study conducted on OOD detection for textual data. Indeed, in our experiments, we benchmark OOD detectors over 51K models. Specifically, we consider 3 different pre-trained encoder models (namely BERT, DistilBERT, RoBERTa), which are trained on 4 different in-distribution datasets (20ng, SST2, TREC, IMDB), and consider 8 out-of-distribution datasets (20ng, SST2, TREC, IMDB, Multi30K, MNLI, RTE, WMT16). For each model, we consider 5/7 different checkpoints and 3 seeds. 
As a comparison, related work such as [48, 68, 39, 53, 59] considers at most 2 models on a single checkpoint. \nBesides, our evaluation in Section 6 investigates the effect of checkpoints and the choice of random seed and reveals interesting and previously unreported factors of variability across performances. We think this variability with respect to checkpoints and seeds is actually an important contribution of our work because it calls for new practices when evaluating OOD detectors.\n\n* As mentioned in the previous paragraph, _the variability of OOD detectors' performance is actually one of the takeaways of our work._ \n\n\nWe humbly believe the concerns raised by o5Ud are somewhat disconnected from their score, especially considering that they recognize the superiority of our method over previous works. **_We very much hope that our above detailed explanations address their questions and that they will be open to considering raising their score accordingly._**\n", " This work proposes TRUSTED: an out-of-distribution detection approach that uses integrated rank weighted (IRW) depth to assess whether an example falls within a set of in-distribution samples in the embedding space of a transformer model. The average embeddings across all transformer layers are used for optimal performance. TRUSTED is compared against 1) integrated rank weighted depth using the final embedding layer, 2) Mahalanobis distance instead of IRW, and 3) maximum softmax probability on a text OOD benchmark, and shows superior performance. Strengths:\n1. Approach and experiment settings are clearly explained.\n2. Code is provided for reproducibility.\n3. The detector can be applied to new transformer models without any training, as opposed to data-driven methods.\n\nWeaknesses:\n1. Experiments didn't show comparisons against (any) existing works. \n- [36]\n- [82]\n- Arora, Udit, William Huang, and He He. \"Types of Out-of-Distribution Texts and How to Detect Them.\" EMNLP 2021.\n\n2. Approach is fairly simplistic. Novelty is limited. More suitable as a \"strong baseline\" type of approach.\n- There are even very limited contributions to the benchmarking of text OOD detection given that the approach is simple. At the current state, a sound & large OOD benchmark with diverse text, different types of OOD, and text from multiple domains may be the easiest way to boost the contributions of this work.\n\n3. Questions:\n- Table 2 uncertainties are large even after averaging over 1440 configurations. Why is that?\n\n=========================\n\nWith comparisons to existing works, the authors have adequately addressed my strong concerns on the experiments. I have updated the score.\n See Weaknesses. Limitations are sufficiently addressed", " The authors propose a novel out-of-distribution (OOD) detector for text classification based on similarity scores from the training distribution. The goal is to identify OOD samples with an unsupervised and fast method by leveraging information from the hidden layers of the representation model.  The main contributions are: i) use of hidden representations to compute a distance score for OOD detection, ii) a comprehensive comparison of models and hyper-parameters, and iii) open source code for replicability. The study shows that the proposed method has competitive results compared to related work. Strengths\n\n\n- Clear description of background knowledge and related work needed to understand the proposed approach. 
\n\n- Clear description of the proposed approach.\n\n- The authors perform a comprehensive comparison with related work.\n\n\n\nWeaknesses\n\n\n- It is not clear how the parameter initialization and the selection of hyper-parameters could affect the method's performance. Questions to the Authors\n\nPlease address the following questions during the rebuttal:\n\n- Could you elaborate on the hyper-parameter selection and define the classifier architecture? \n- During training, are the pre-trained models frozen?\n- As an extra contribution, the authors can compute the calibration and sharpness of the predicted probability outputs. Please speculate on the relation between well-calibrated output probabilities and your method for identifying OOD samples. \n The authors have addressed the limitations of the proposed approach.\n", " This paper proposes an OOD detector for Transformer models. Their motivation is that all hidden layers of transformer-based models contribute to OOD detection. They verify their method with extensive experiments and show the performance jump on AUROC. - Their motivation that going deeper inside the network can improve OOD detection is somewhat limited. Also, using intermediate representations for detection tasks is not new, at least in the vision domain. This paper works on the NLP domain but is still somewhat incremental.\n- Their observation that using all hidden layers helps is straightforward and reasonable, but simply taking an average of those latent representations is simplistic. - Their method is based on the Mahalanobis score, which depends on the Gaussian assumption. However, it is questionable whether such an assumption fits a real-world setting. Instead of the Mahalanobis score, I am wondering if the authors considered other feature-space scores for their method. Also, the computational burden of Mahalanobis needs to be discussed.\n- They use the Integrated Rank-Weighted depth in their method, which has been used in some previous related works. This downgrades the novelty of the contribution in this paper.\n- Working on the textual OOD problem is reasonable; however, since their method relies on the transformer model, the natural question is, could it be applied to other domains with Transformer-based models, such as the vision domain?\n- Their selected datasets have different domain characteristics. For example, IMDB is a movie review dataset, while QA is another domain. I wonder if they considered some harder OOD settings during their experiments?\n- Their observation that training the classifier longer hurts detection is interesting. Do the authors consider training those models from scratch without pretraining, for example, training a 6-layer transformer vs. a 12-layer transformer from scratch, to show the power of their metric without pretrained weights? The detector proposed in this paper is simple and effective. However, this method is still incremental and still has some room to be improved. For example, if this paper uses intermediate hidden layers, then whether the pretrained model affects the use of hidden states needs to be discussed. Also, when it comes to the vision domain, could this method still work?", " This paper develops a new tool for out-of-distribution (OOD) detection in the NLP domain, which aims to complement the deformation techniques used in computer vision. 
The key to detecting OOD lies in the computation of an OOD score realized by an integrated rank-weighted (IRW) depth. Extensive experiments are conducted to show the superiority of the proposed method. Strengths:\n1. This paper addresses an interesting topic.\n2. Extensive experiments are conducted and show competitive performance for the proposed method.\n3. This paper has delivered a thorough analysis of the related work.\n\nWeakness:\n1. The paper suffers from limited technical novelty. For example, \n\n1) The proposed IRW depth is borrowed from existing work. \n\n2) The paper has claimed that one of the contributions is to aggregate all hidden layers of a neural network. However, this kind of trick has been proposed in the literature. The authors should cite the related work and highlight their main differences.\n\n3) The reviewer doubts whether there is any theoretical guarantee on the proposed method. In particular, the reason for choosing \"the halfspace depth\" in line 174 is unknown. \n\n2. Some main technical implementations are not clear. For example, \n\n1) In line 149, the notation \\delta is not defined. It makes it hard for the reviewer to know how to compute the similarity score and justify its correctness. Moreover, \\delta appears twice in the paper: once in lines 132-133 and again in line 149. This leaves the reviewer a little confused about the meaning. \n\n2) In line 147, the authors define F_{PM}(S) as the distribution of the mean-aggregation of the training distribution samples with the same predicted target as x. However, different aggregation strategies are applied in Sec. 4.3, lines 246-255.\n\n3) In line 159, “a pre-score aggregation function”: the definition is not shown.\n\n4) In lines 182-183, u_k and n_{proj} are not defined. \n \n\n3. Critical issues in the experiments:\na) Though the authors claim that the motivation for introducing the IRW depth is to reduce the computational complexity, there are no experiments to verify this. The paper should address it through experimental support.\nb) Table 2 contains some contradictory conclusions and lacks sufficient experimental comparison.\n\ni.\tThe paper claims that the information from aggregating all layers in the neural network is more useful. However, for the D_M score in Table 2, the result of the last layer F_L is much better than that of all layers F_{cat} and F_{PM}, which is a contradictory result. The authors have tried to explain the results in lines 269-270, i.e., the multivariate Gaussian assumption may not hold for the dataset. This leaves the reviewer confused about the experimental comparison and makes the experimental results unconvincing, because the aggregation of F_L is also under the Gaussian assumption and it beats F_{PM}. \n\nii.\tFor D_{IRW}, it only compares F_{cat} and F_{PM} with F_L and F_{L+1} and shows that F_{cat} and F_{PM} beat F_L and F_{L+1}. The comparison is not sufficient. For example, the authors can consider other combinations, such as concatenating F_L and F_{L+1} or averaging F_L and F_{L+1}.\n 1. As there are many different tools to solve the OOD problem, what is the exact motivation for borrowing the IRW depth to solve this problem? \n2. The reviewer expects to see the connection between the proposed IRW depth and the deformation technique in CV. Can the authors provide some theoretical results?\n Yes" ]
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "1wKbi3XvSv6", "J0KVVOI3tId", "0cUHrj3GuT2", "uIcbAnI9-E", "zrYNrqNvnCf", "EqF3V14QmAB", "GCU8yl9nYK5", "nips_2022_ReB7CCByD6U", "nips_2022_ReB7CCByD6U", "nips_2022_ReB7CCByD6U", "nips_2022_ReB7CCByD6U" ]
nips_2022_KWN3I1koJsU
Learning Generalizable Risk-Sensitive Policies to Coordinate in Decentralized Multi-Agent General-Sum Games
While various multi-agent reinforcement learning methods have been proposed in cooperative settings, few works investigate how self-interested learning agents achieve mutual coordination in decentralized general-sum games and generalize pre-trained policies to non-cooperative opponents during execution. In this paper, we present a generalizable and sample-efficient algorithm for multi-agent coordination in decentralized general-sum games without any access to other agents' rewards or observations. Specifically, we first learn the distributions over individual returns and estimate a dynamic risk-seeking bonus to encourage agents to discover risky coordination strategies. Furthermore, to avoid overfitting to opponents' coordination strategies during training, we propose an auxiliary opponent modeling task so that agents can infer their opponents' types and dynamically alter their corresponding strategies during execution. Empirically, we show that agents trained via our method can achieve mutual coordination during training and avoid being exploited by non-cooperative opponents during execution, outperforming other baseline methods and reaching the state of the art.
Reject
The paper presents a novel approach for improving coordination in general-sum games by using risk-sensitive policies based on distributional RL. While the idea is promising, there are significant questions about the paper. For example, there is concern about the lack of theoretical guarantees and intuition about when the approach will work well. There should also be a more extensive discussion of related work. For example, distributional and risk-sensitive RL have been used in multi-agent RL but more should be said about how the proposed method differs and why they can't be included in the experiments (e.g., the decentralized training methods should have no problem running in the general sum case). More extensive experiments would also be helpful. Additional domains and baselines would more clearly show the benefits of the method (and when it could potentially fail).
train
[ "9oV-tfXEqpH", "u8bRigwc4ac", "a11V2OBJQ3r", "TU1cEFlql5", "v0Jz9rqgkH8", "ky_eTfkW7v4", "c0H-uHlGSzM", "39aZAfKvS_x", "RLfacDDqeXR", "HUXyJ8upECy", "kixNcKJvWcR", "6uP1QuKpoWc", "iYs4YxM-znk", "v5eu84Qyxf2", "wd8cCfhu2Ek", "84xlBFWtHX0", "ufJX92hwkqn", "FgEud9uUcYI", "8hlgyAgOZfX", "M6PCWk4DHe6", "D5HCzOrkU0x" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer DkS2,\n\nWe have supplemented the comparison experiments with pre-existing literature, and our responses with reviewer ZHUo may eliminate some of your confusion. As the response system will be closed soon within one day. We thank you again for your comments. We hope our detailed responses could address your questions. If there are no more questions and we will appreciate it if you can kindly raise the score.\n\nSincerely yours,\n\nAuthors of Paper2297", " We thank reviewer ZHUo's valuable feedback and your efforts in the whole rebuttal stage. We answer your questions as follows:\n\n1. The risk in our paper characterized the length of the quantile distribution's lower tail, which means the potential loss of the agent's action choice. Why do we need a risk-seeking bonus? Because the action which has high risk, i.e., high potential loss, also has a higher potential payoff at the same time. So we can adjust WT's hyperparameter $\\lambda$ to construct a risk-seeking bonus to encourage agents to choose high-risk but higher payoff actions.\n\n2. We can also adjust WT's hyperparameter $\\lambda$ to 0 to construct a risk-neutral agent in order to tackle an environment that has no risk, e.g., the matrix game you presented. Now the risk-seeking bonus becomes a risk-neutral bonus, i.e., the action expectation of quantile distribution. Our method can construct an agent that is risk-seeking, risk-neutral, or risk-averse, depending on what kind of environmental dynamics the agent interacts with.\n\n3. We conducted a new experiment using the matrix game you presented. With the same hyperparameters as IPD agents, our agents can converge to 100 rewards (keep cooperation in the whole episode, which has 10 steps) during training after 23 episodes. We also conducted an experiment with risk-neutral agents, they can achieve cooperation-cooperation after 22 episodes, too.\n\n4. \"And if your agent develops some bias of actions during training (e.g., prefer to choose (a1,a1)), it will fail to coordinate with new teammates that prefer to do the other action (a2).\": \n\nYour concern is correct. Both single-agent and multi-agent training methods will prefer to overfit their environment or opponents. How to overcome it is a very essential and core problem in reinforcement learning. In the single-agent community, it is the concern of meta-learning and multi-task learning research[1]. In the multi-agent community, it is the concern of zero-shot coordination[2] and ad-hoc teamwork research[3]. Our opponent modeling method offers a new perspective on these communities. We acknowledge that it is not a strong enough method to cover every situation or even end the research in this field. Our opponent modeling method is suitable for situations where there exists a safe equilibrium that each agent gets a lower payoff but can avoid being exploited. As for the matrix game you presented, we can train ensemble cooperation policies to equip each agent and dynamically select which policy to use when coordinating with new teammates.\n\nWe hope our answer can eliminate your confusion.\n\n[1] Rakelly, Kate, et al. \"Efficient off-policy meta-reinforcement learning via probabilistic context variables.\" International conference on machine learning. PMLR, 2019.\n\n[2] Lupu, Andrei, et al. \"Trajectory diversity for zero-shot coordination.\" International Conference on Machine Learning. PMLR, 2021.\n\n[3] Mirsky, Reuth, et al. 
\"A Survey of Ad Hoc Teamwork: Definitions, Methods, and Open Problems.\" arXiv preprint arXiv:2202.10450 (2022).", " Thanks for your detailed response. I still have some questions about the tasks that the paper can solve. The paper may be able to solve IPDs since there's an optimal solution for both agents to choose \"cooperate\" when the horizon is limited, or there is a discounting factor. The paper distinguishes actions in multi-agent environments as \"risk-seeking\" and \"risk-avoiding\" and combines them with \"cooperate\" and \"defect\" in Prisoner's Dilemma. However, real multi-agent scenarios are more complex and may not have the basic form of IPDS. There exist plenty of environment settings where action space cannot be divided into these two categories, or this correspondence does not exist. For example, how can you define \"cooperate\" and \"defect\" actions in the symmetric matrix games below and apply your risk-seeking method? I don't think evaluating risk in such an environment can be effective since the payoff for each action is equal without knowing the teammate. And if your agent develops some bias of actions during training (e.g., prefer to choose $(a_1,a_1)$), it will fail to coordinate with new teammates that prefer to do the other action ($a_2$).\n\n | | P2: $a_1$ | P2: $a_2$ |\n | :-------: | :----------: | :----------: |\n | P1: $a_1$ | $(10, 10)$ | $(-10, -10)$ |\n | P1: $a_2$ | $(-10, -10)$ | $(10, 10)$ |", " We thank the reviewer's feedback. We answer your question \"under what condition the proposed algorithm converges to the risky cooperative policies\" as follows:\n\nPerhaps you have a misunderstanding of our method and repeated prisoner's dilemma (or iterated prisoner's dilemma). IPD is a multi-stage markov game, and PD is a one-step matrix game. Agents can be better off by choosing to defect in PD. However, this is not true in IPD when all agents can update their strategies. Consider two players play 30 steps IPD, they begin with cooperative strategies, and all agents get 2 rewards at each time step. Then at time step $11$, agent 2 chooses to defect and keeps defect from now on. It gets 3 rewards at time step $11$. However, agent 1 can observe agent 2's choice and alter its strategies. Assume at time step 16, agent 1 alters its strategy from cooperation to defection. Next, all agents get 0 rewards until the episode ends. So the rewards of agent 1 is 10 * 2 -1 * 5=15, and the rewards of agent 2 is 10 * 2 + 3 * 5=35. However, if all agents keep cooperation until the episode ends, each of them will get 50 * 2=100 rewards. So in the long run, a cooperative strategy can make agents achieve both optimal social welfare and optimal individual value at the same time. No one will deviate from cooperation to defection because their partners will defect, too. This equilibrium has been proved in the game theory community [1][2][3][4].\n\nAlthough the risk bonus is decreasing, agents have accumulated ample high return samples, i.e., actions leading to cooperation. These samples will change the return distribution shape, and agents' exploration bonus are decreasing, too. So in the end, the action which has the max Q value is the cooperation action, not the defection action.\n\nThe policy is trying to maximize individual value, so we didn't change the objective of the policy in the general-sum game. Maximizing individual value is the most important thing in the general-sum game. 
Defection can bring an equal or higher payoff in the short term, but not in the long run, in ISH, Monster-Hunt, Escalation, and IPD. Cooperation will make all agents get the maximal individual value and the optimal social welfare. But cooperation has risks. Learning agents with our method can tolerate this risk for better future returns. So if all agents are equipped with our GRSP method, they will achieve cooperation in the end.\n\n**As long as all agents choosing cooperative policies makes everyone better off in the long run, agents trained with our method can converge to the risky cooperative policies.**\n\nAs for the auxiliary opponent modeling (AOM) task, our opponent modeling objective and policy objective are **optimized jointly**. If we have a cooperative opponent, the AOM objective will update the policy's parameters to get a better payoff, so the AOM objective will push the policy to choose cooperation. If we have a defective opponent, the AOM objective will update the policy to change its behavior from cooperation to defection, avoiding being exploited.\n\n***\n[1] Fudenberg, Drew, and Tirole, Jean. (1991) Game Theory. Cambridge, MA. The MIT Press.\n\n[2] Myerson, Roger B. (1991) Game Theory: Analysis of Conflict. Cambridge, MA. Harvard University Press.\n\n[3] Poundstone, William. (1992) Prisoner’s Dilemma. New York, NY. Doubleday.\n\n[4] Axelrod, Robert M. (1984) The Evolution of Cooperation. New York, NY. Basic Books, Inc.", " Thanks for the clarification, but question 4 is still unclear. This is related to the generality of the method, as the authors didn't put any restrictions on the range of problems it can solve, so I will still maintain my current score.\n\nMy question actually echoes the major concern of reviewer DkS2, which is under what conditions the proposed algorithm can converge to the risky cooperative policies. In the tasks ISH, Monster-Hunt, and Escalation, where the optimal social welfare and optimal individual value can be achieved at the same time, it is reasonable for your method to converge to the cooperative strategies. \n\nHowever, IPD is a totally different story, where agents can always be better off by choosing to defect, and this is why it's called a dilemma. Given that the policy optimizes the episodic individual reward and the coefficients of the risk bonus are decreasing, agents will definitely deviate from the cooperative policies if trained long enough. The authors also mentioned the opponent modeling task, but it doesn't make any change to the objective of the policy. Can the opponent modeling task help to achieve coordination during test? Maybe, but the effect of the modeling task is only on the hidden layers of the policy, so it cannot fully explain why it helps.", " Dear Reviewer metE,\n\nAs the response system will close within a few days, we thank you again for your comments. We hope our detailed responses could address your questions. More questions on our paper are always welcome! If there are no more questions, we would appreciate it if you could kindly raise the score.\n\nSincerely yours,\n\nAuthors of Paper2297", " Dear Reviewer DtFh,\n\nWe are glad to hear that you agree the outcome is a very exciting one. Thank you very much for your feedback.\n\nSincerely yours,\n\nAuthors of Paper2297", " Dear Reviewer ZHUo,\n\nAs the response system will close within a few days, we thank you again for your comments. We hope our detailed responses could address your questions. More questions on our paper are always welcome! 
If there are no more questions, we would appreciate it if you could kindly raise the score.\n\nSincerely yours,\n\nAuthors of Paper2297", " 1. I'd also *love* to see a comparison to Melting Pot or advanced social dilemmas. I think the problem is these environments are just not computationally tractable unless the authors sit at a major commercial lab. I think the experiments on multi-step matrix games are the best you can do on an academic computational budget atm. \n\n2. I agree the methodological contributions seem limited, but the outcome (independent RL agents cooperating in general-sum games) is a very exciting one!\n\n3. Kinda agree a comparison with LOLA is a bit unnecessary here (although kudos to the authors for getting that out). LOLA has tonnes of privileged information about an opponent (access to their weights), and its variants which don't have this access perform strictly worse. The only variant of opponent shaping with similarly limited access (no access to the opponent's policy) is MFOS [1]. This method is not currently scalable, so it's unclear if it can be applied in the test environments here.\n\n[1] https://arxiv.org/pdf/2205.01447.pdf", " We thank again all the reviewers for their constructive and valuable comments. We really appreciate the positive comments made by reviewers who recognised our contribution to MARL.\n\nWe hope our responses, including the extra experiments and the revised paper, could address the questions of all the reviewers. More discussions and suggestions on our paper are also always welcome!\n\nSincerely yours,\n\nAuthors of Paper2297", " Dear reviewer DkS2, we have supplemented the comparison experiments with [1] and [2]. The mean evaluation rewards of the last 30k time steps of all methods are shown in the table below, and the complete experimental results and our code can be found at https://anonymous.4open.science/r/GRSP-8DEC/pic/GW.png\n\nWe are looking forward to your further feedback, and we would appreciate it if you could kindly consider raising the score. Thank you.\n\n\n| | ISH | IPD | M-H | Escalation |\n|:-------:|:------:|:------:|:-------:|:----------:|\n| GRSP | **$\mathbf{40.00 \pm 0.0}$** | **$\mathbf{40.00\pm 0.0}$** | **$\mathbf{40.04\pm 0.52}$** | **$\mathbf{30.17\pm 1.39}$** |\n| M-Qubed | $18.67\pm 0.12$ | $10.12\pm 0.29$ | $2.62\pm 0.36$ | $17.42\pm 0.08$ |\n| LOLA | $23.04\pm 1.22$ | $3.92\pm 0.32$ | $41.61\pm 0.49$ | $0.06\pm 0.01$ |\n| MADDPG | $19.91\pm 0.03$ | $0.02\pm 0.01$ | $4.41\pm 0.32$ | $18.62\pm 0.44$ |\n| MAPPO | $20.00\pm0.02$ | $0.05\pm0.01$ | $1.17\pm 0.44$ | $0.00\pm 0.00$ |\n| IAC | $19.41\pm0.13$ | $0.09\pm0.02$ | $1.90\pm 1.57$ | $0.73\pm 0.10$ |\n| LIAM | $19.69\pm 0.10$ | $0.05\pm 0.02$ | $0.77\pm 0.57$ | $1.04\pm 0.09$ |\n\n***\n[1] Crandall, Jacob W., and Michael A. Goodrich. \"Learning to compete, compromise, and cooperate in repeated general-sum games.\" Proceedings of the 22nd international conference on Machine learning. 2005.\n\n[2] Foerster, Jakob N., et al. \"Learning with opponent-learning awareness.\" arXiv preprint arXiv:1709.04326 (2017).", " We thank the reviewer's feedback. We have explained the reasons why we do not compare to the pre-existing literature in A4. We have added a new experiment that compares our method with LOLA [1], and you can find it in the rebuttal revision. According to your suggestion, we will further supplement the comparison experiments with [2] and [3] as soon as possible.\n***\n[1] Foerster, Jakob N., et al. 
\"Learning with opponent-learning awareness.\" arXiv preprint arXiv:1709.04326 (2017).\n\n[2]Learning to compete, compromise, and cooperate in repeated general-sum games. InProceedings of the 22nd international conference on Machine learning.\n\n[3]Littman ML. Friend-or-foe Q-learning in general-sum games. In ICML 2001 Jun 28 (Vol. 1, pp. 322-328).", " The authors did not explain why they do not compare to the pre-existing literature. There are decades of literature on the problem they study and I believe the paper needs to relate to this literature to be acceptable. \n\nI do not understand what the authors mean that by \"few works studying multi-agent coordination in general-sum games since 2016\". How it that relevant? There are many works before that year that the paper IMO needs to relate to.", " We thank the reviewer’s helpful feedback on our submission. We summarize the reviewer’s questions and present our responses below.\n\nQ1: the experiments do not provide any insights into when and why the proposed method performs better.\n\nA1: We have revised the experiment part of our paper and updated them in the rebuttal revision. You can find our insights in sec1, sec. 4.1, sec 5.4, and sec. 5.5. And we are willing to keep revising our paper according to your valuable further feedback.\n***\n\nQ2: It is not clear what exactly are the error areas in the graphs. Confidence intervals? Variance? STD?\n\nA2: The shadowed part represents a 95% confidence interval, i.e., two standard errors of the mean. We \n***\nQ3: Some formulations are hard to follow, such as L196-197, L236,\n\nA3: We have revised the method and experiment part of our paper and updated them in the rebuttal revision.\n***\nQ4: Why did you not compare to the existing algorithms for learning cooperative outcomes in general sum games?\n\nA4: To our knowledge, there are few works studying multi-agent coordination in general-sum games since 2016. Most related works that study general-sum games assume their opponents have fixed policies, e.g., [B],[C],[D]. However, in our work, we consider achieving coordination in multiple learning agents, which is a more general case. We have compared our GRSP with [C] because it is the latest work. The problems studied in [A], [E], [F], and [G] are more related to ours. [A] proposed that agents can exchange their parameters and gradients with their opponents, which is a too strong assumption. We have compared our GRSP with [A] LOLA in the rebuttal revision. The method proposed in [E] is too weak and can not achieve complete cooperation even in iterated stag hunt games. [F] and [G] belong to reward shaping methods, and we have compared GRSP with [F] in the appendix of our paper. The method proposed in [G] can not match their official codes, so we didn't compare GRSP with them.\n\nThe reason why we compare GRSP with IAC, MADDPG, and MAPPPO is that the assumptions of these methods are similar to ours, e.g., multiple learning agents, can not access opponent's rewards or parameters, and so on. However, MADDPG assumes access to the opponent's observations and actions to train a centralized critic, which is stronger than ours. But our GRSP method still has better performance than theirs. We think this is one of the contributions of our method.\n\n[A] Foerster, Jakob N., et al. \"Learning with opponent-learning awareness.\" arXiv preprint arXiv:1709.04326 (2017).\n\n[B] Raileanu, Roberta, et al. 
\"Modeling others using oneself in multi-agent reinforcement learning.\" International conference on machine learning. PMLR, 2018.\n\n[C] Papoudakis, Georgios, Filippos Christianos, and Stefano Albrecht. \"Agent modelling under partial observability for deep reinforcement learning.\" Advances in Neural Information Processing Systems 34 (2021): 19210-19222.\n\n[D] Wang, Weixun, et al. \"Towards cooperation in sequential prisoner's dilemmas: a deep multiagent reinforcement learning approach.\" arXiv preprint arXiv:1803.00162 (2018).\n\n[E] Wang, Woodrow Z., et al. \"Emergent prosociality in multi-agent games through gifting.\" arXiv preprint arXiv:2105.06593 (2021).\n\n[F] Peysakhovich, Alexander, and Adam Lerer. \"Prosocial learning agents solve generalized stag hunts better than selfish ones.\" arXiv preprint arXiv:1709.02865 (2017).\n\n[G] Tang, Zhenggang, et al. \"Discovering diverse multi-agent strategic behavior via reward randomization.\" arXiv preprint arXiv:2103.04564 (2021).\n***\nQ5: What are the main reasons why the proposed algorithm finds the cooperation and the existing ones fain in some of the specific domains?\n\nA5: \nThe action whose value distribution has a long upper tail means that taking this action may receive higher potential payoffs. However, its mean value may be lower than other actions since its distribution also has a longer lower tail, which means higher risk. Agents with the expected RL method (risk-neutral policy) will not select this action, so they can't converge to the cooperation strategies.\n\nIn GRSP, the risk-seeking exploration bonus encourages agents to care more about actions whose distribution has a longer upper tail. So agents with the GRSP method will be less likely to defect to their opponents since defects bring lower future returns, more likely to coordinate with other agents, and more tolerant of the risk. \n", " We would like to thank you for your review. We are delighted that you found the paper well-written and well-executed. We summarize your questions and present our responses below.\n***\nQ1: How does using the distribution RL effect sample efficiency?\n\nA1: As shown in the ablations part Sec. 5.5 in the original version or Sec 5.4 in the rebuttal revision, IQR-DQN performs poorly and does not show sample efficiency compared with GRSP and other baseline methods, which indicates that the sample efficiency of GRSP comes from our risk-seeking exploration bonus and truncated variance, and our ablation experiments further demonstrate it empirically. As for distributional RL itself, [A] summarizes that possible reasons for DRL's superiority include the following:\n\n1. Reduced chattering: modeling a distribution may reduce prediction variance, which may help in policy iteration.\n2. Improved optimization behaviour: distributions may present a more stable learning target, or in some cases(e.g. the softmax distribution used in the C51 algorithm) have a regularizing effect in optimization for neural networks.\n3. Auxiliary tasks: the distribution offers a richer set of predictions for learning, serving as a set of auxiliary tasks which is tightly coupled to the reward.\n\nWe hope we answered your questions. \n\n[A] Lyle, Clare, Marc G. Bellemare, and Pablo Samuel Castro. \"A comparative analysis of expected and distributional reinforcement learning.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 33. No. 01. 
2019.\n***\nQ2: How do risk-sensitive approaches affect exploration?\n\nA2: Let's first introduce two kinds of uncertainty in RL: epistemic uncertainty, which stems from limited data, and aleatoric uncertainty, caused by intrinsic stochasticity in the environment; the epistemic uncertainty vanishes as learning progresses. In MARL, we claim that the aleatoric uncertainty, i.e., the intrinsic stochasticity in the environment, is caused by other learning agents' exploration and policy updating. Distributional RL, which aims to learn the distribution of returns instead of the mean value only, has been suggested as a way of measuring aleatoric uncertainty [A]. So we adopted distributional RL to capture the aleatoric uncertainty in general-sum games.\n\nFurthermore, an action whose value distribution has a longer upper tail may have higher potential payoffs. However, its mean value may be lower than that of other actions, since its distribution also has a longer lower tail, which means higher risk. So agents with the expected RL method will not select this action. We proposed to utilize the risk-seeking exploration bonus to encourage agents to pay more attention to actions whose distribution has a longer upper tail. So agents with our GRSP method will be less likely to defect on others, since defection brings lower future returns, and more likely to coordinate with each other, even though coordination carries some risk. \n\nIn short, the risk-seeking approach encourages agents to explore regions that have higher potential future returns and to be more tolerant of the risk.\n\nWe hope we answered your questions. \n\n[A] Nikolov, Nikolay, et al. \"Information-directed exploration for deep reinforcement learning.\" arXiv preprint arXiv:1812.07544 (2018).", " We thank the reviewer's valuable feedback on our work, and we present our responses below.\n\nWeakness 1 and Minors 1, 2: We have revised our paper according to your valuable suggestions and fixed the typos in Equation 7; the rebuttal revision has been updated.\n\nWeaknesses 2 and 3: Both D4PG and DFAC focus on the fully cooperative setting and do not consider risk-sensitive policies. To our knowledge, our work is the first to propose an algorithm that utilizes risk-seeking policies to achieve risky coordination strategies in general-sum games, which is a very general problem, and the first to study how the lower tail of the quantile distribution affects action selection and the corresponding payoffs. We further proposed a novel opponent modeling algorithm to generalize the agent's policy to different opponents, thus avoiding being exploited by non-cooperative opponents at test time. And we achieve the strongest results in four widely used experimental environments with the weakest assumptions.\n\nWeakness 4: In fact, agents only need several episodes (fewer than ten) to adapt to non-cooperative opponents. You are welcome to download our code and models at https://anonymous.4open.science/r/GRSP-8DEC/README.md to test them.\n***\nA1: Recall that $\phi^{i}$ are the parameters of the feature extractor $\mathcal{E}_{\phi^{i}}$ and $\psi_{a}^{i}$ are the parameters of the decision maker $\mathcal{D}_{\psi_{a}}$.\n\nDuring training, $\phi^{i}$ is updated by gradients from both the RL objective and the auxiliary opponent modeling task, while $\psi_{a}^{i}$ is updated by gradients from the RL objective only. A minimal sketch of this gradient flow is given below. 
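\n\nAs an illustration, here is a minimal PyTorch-style sketch of the joint update. The module names, sizes, and placeholder losses are hypothetical stand-ins, not our exact architecture:

```python
import torch
import torch.nn as nn

encoder = nn.Linear(16, 32)    # stand-in for the feature extractor E_phi
decision = nn.Linear(32, 4)    # stand-in for the decision maker D_{psi_a}
opp_head = nn.Linear(32, 4)    # auxiliary head predicting the opponent's action

opt = torch.optim.Adam([*encoder.parameters(),
                        *decision.parameters(),
                        *opp_head.parameters()], lr=1e-3)

obs = torch.randn(8, 16)                   # dummy batch of observations
opp_actions = torch.randint(0, 4, (8,))    # observed opponent actions

feats = encoder(obs)
q_values = decision(feats)
rl_loss = q_values.pow(2).mean()           # placeholder for the RL (TD) loss
aom_loss = nn.functional.cross_entropy(opp_head(feats), opp_actions)

opt.zero_grad()
(rl_loss + aom_loss).backward()            # phi gets gradients from both losses;
opt.step()                                 # psi_a only sits on the RL path

# At test time, rewards are unavailable: freeze the decision maker psi_a and
# keep fine-tuning phi through the opponent-modeling loss alone.
for p in decision.parameters():
    p.requires_grad_(False)
```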
\n\nDuring testing, agents cannot access environmental rewards, so $\phi^{i}$ can no longer be tuned through the RL objective. Only the feature extractor $\mathcal{E}_{\phi^{i}}$ can be tuned, by gradients from the auxiliary opponent modeling task, which only needs the history of actions made by the opponent.\n***\nA2: In our experiments, GRSP is the only method that can produce cooperative strategies, so we use GRSP to train cooperative agents. All other baseline methods can only produce non-cooperative agents in our experiments, and since MADDPG is a popular and easy-to-implement algorithm, we choose it to produce non-cooperative agents. \n***\nA3: Our prisoner's dilemma environment is naturally a multi-step Markov game and is widely used in [A, B, C, D]. Furthermore, our experimental results are the strongest among these works, offering superior performance without extra assumptions. All learning agents can achieve coordination strategies with our method while only needing the risk-seeking bonus. This is the best result in the sequential social dilemma so far.\n\n[A] Foerster, Jakob N., et al. \"Learning with opponent-learning awareness.\" arXiv preprint arXiv:1709.04326 (2017).\n\n[B] Tang, Zhenggang, et al. \"Discovering diverse multi-agent strategic behavior via reward randomization.\" arXiv preprint arXiv:2103.04564 (2021).\n\n[C] Leibo, Joel Z., et al. \"Multi-agent reinforcement learning in sequential social dilemmas.\" arXiv preprint arXiv:1702.03037 (2017).\n\n[D] Wang, Weixun, et al. \"Towards cooperation in sequential prisoner's dilemmas: a deep multiagent reinforcement learning approach.\" arXiv preprint arXiv:1803.00162 (2018).\n***\nA4: The Iterated Prisoner's Dilemma (IPD) is a kind of repeated game. The equilibria of a repeated game can differ from those of the associated strategic game. In the strategic-game prisoner's dilemma, the only Nash equilibrium is defection-defection. However, in IPD, the strategy that chooses to cooperate after every history is the best response to the strategy that starts off choosing to cooperate and “punishes” any defection by switching to defect. If one agent chooses to defect, then the other agent will alter its strategy from cooperation to defection in the next round, and both of them will get lower payoffs in the end. In the infinitely repeated IPD, if the discount factor $\delta \geq 0.5$, then cooperation is the best response to cooperation [A]. We quote [B] to further explain this:\n\n> in the infinitely repeated game, there are an infinite number of equilibria. So equilibrium selection becomes a problem. But cooperative strategies generally form equilibria with each other: if you are playing grim-trigger, and I am playing tit-for-tat, and we both sufficiently value the possibility of future play, then we will keep cooperating and neither of us can do any better with a different strategy. \n\nIn our method, the risk-seeking exploration bonus encourages agents to choose actions that have much higher potential cumulative rewards, i.e., cooperation. And the auxiliary opponent modeling task can alter the agent's strategy from cooperation to defection if its opponents choose to defect. The two components can constitute equilibrium strategies between agents.\n\n[A] https://economics.mit.edu/files/4754\n\n[B] Levin, N. (Ed.). (2019). Introduction to Ethics: An Open Educational Resource. N.G.E. Far Press.", " We thank the reviewer for their valuable feedback on our work. 
We summarize the reviewer's questions and present our responses below.\n***\nQ1: The Monster-Hunt and Escalation scenarios were modified in [2], which is not cited in this paper.\n\nA1: This is a misunderstanding. We had cited these two scenarios in the last paragraph of Sec. 1, line 63, in the original version.\n***\nQ2: Sec. 4.2 is not clear. A figure showing the architecture is needed.\n\nA2: Thanks for your advice. We have added an architecture figure; you can find it in the rebuttal revision of our paper. \n***\nQ3: Why is the left truncated variance used here? Why does the index start from M/2?\n\nA3: Thanks for your valuable question. (1) A naive approach to exploration would be to use the variance of the estimated distribution as a bonus. As shown in [C], the exploration bonus from the truncated variance outperforms the bonus from the full variance. The right truncated variance captures lower-tail variability, and the left truncated variance captures upper-tail variability. For instantiating optimism in the face of uncertainty, the upper-tail variability is more relevant than the lower tail, especially if the estimated distribution is asymmetric. Intuitively speaking, $\sigma_{+}^{2}$ is more optimistic: $\sigma_{+}^{2}$ is biased towards positive rewards. To increase stability, we use the left truncated measure of variability, $\sigma_{+}^{2}$.\n\n(2) The index starts from the median, i.e., M/2, rather than the mean due to its well-known statistical robustness [A, B, C]. \n\nReferences:\n\n[A] Huber, Peter J. \"Robust statistics.\" International encyclopedia of statistical science. Springer, Berlin, Heidelberg, 2011. 1248-1251.\n\n[B] Rousseeuw, Peter J., et al. Robust statistics: the approach based on influence functions. John Wiley & Sons, 2011.\n\n[C] Mavrin, Borislav, et al. \"Distributional reinforcement learning for efficient exploration.\" International conference on machine learning. PMLR, 2019.\n***\nQ4: Why is $c_t$ used here? Why is combining Eqn. 4 and Eqn. 5 in Eqn. 6 reasonable? It lacks motivation.\n\nA4: Thanks for your valuable question. (1) As shown in [A], the estimated QR distribution is a mixture of parametric and intrinsic uncertainties. As learning progresses, the parametric uncertainty vanishes while the intrinsic uncertainty stays. Therefore, the left truncated variance exploration bonus will tend to be biased towards intrinsic variation, which hurts performance. To suppress intrinsic uncertainty, we need a decaying schedule. From classical QR theory [B], it is known that parametric uncertainty decays at the following rate:\n$$\nc_t=c\sqrt{\frac{\log t}{t}}\n$$\nwhere $c$ is a constant factor. So we use $c_t$ as the decaying schedule.\n\n(2) The left truncated variance defined in Eqn. 4 enhances the agent's exploration ability and makes the agent optimistic in the face of uncertainty, and the risk-seeking exploration bonus defined in Eqn. 6 encourages agents to select actions that have higher potential payoffs. The ablation study in Sec. 5.5 shows that these two objectives are equally important for agents to achieve coordination strategies efficiently.\n\nReferences:\n\n[A] Mavrin, Borislav, et al. \"Distributional reinforcement learning for efficient exploration.\" International conference on machine learning. PMLR, 2019.\n\n[B] Koenker, Roger, and Kevin F. Hallock. 
\"Quantile regression.\" Journal of economic perspectives 15.4 (2001): 143-156.\n***\nQ5: LIAM is actually a single-agent RL method, how did you conduct the experiments in MARL scenarios?\n\nA5: Thanks for your valuable question. (1) In my opinion, LIAM can be viewed as either a multi-agent RL method that focuses on opponent modeling or a single-agent RL method that tackles non-stationary problems. However, in the LIAM paper, paper authors evaluate LIAM in multi-agent scenarios and assume other agents have fixed policies. They use recurrent auto-encoder to model the relationship between the trajectory of the controlled agent and the modeled agents. \n\n(2) In our experiment, each agent is the controlled agent and equipped with the LIAM method to model opponents. In other words, we do not assume opponents have pre-trained fixed policies.\n***\nQ6:Did you compare your method with LOLA [3]?\n\nA6: Thanks for your valuable question. We didn't compare our method with LOLA in the original version because LOLA has too strong assumptions that it can access to opponents' parameters and gradient information. And we focus on studying multiple independent learning agents, so that is an unfair comparison. However, according to your suggestion, we will supplement the comparison experiment in the rebuttal revision.", " The paper introduces GRSP (Generalizable Risk-Sensitive MARL algorithm), a method for training agents in mixed-games to converge to cooperative nash-equilibrium. The method's impact is in its applicability to much more relaxed situations than other MARL algorithms (usually Centralised Training Decentralised Execution). The key insights for this method are two fold: \n\n1) cooperative equilibrium are usually “high-risk” so simple reward maximisation rarely converges to this.\n\n2) instead of requiring full access to co-players during training, simply being provided their action is suffice for opponent modelling. This is closer to independent learning than existing methods.\n \nThe method is then applied in two evaluation protocols: Evaluation Of Returns during training (can they converge to the desired equilibrium) and Generalisation to Novel Co-players at test-time. In both situations compelling arguments are made to show GRSP efficacy. Finally ablations are applied to distinguish how important the “high-risk” vs “opponent-modelling” insights and associated components are.\n \n1. Clean presentation\n2. Good experiment protocol\n3. Very good breath of environments\n4. Solid theoretical justification\n5. Original work.\n6. Clearly explains intuitions for method.\n\n\nWeaknesses;\n1) Very little time is given to explain training models, i’m unsure how sample efficient this method is compared to others. \n2) Generalisation Study Results could be better displayed. The opponent shaping literature has much nicer payoff graphs for showing how much an agent has cooperated with another -> these could help explain your findings.\n\n3) I have little understanding of the distortion methods discussed would be useful. \n \nI’d like more understanding of the training dynamic:\n\n\n1. How does using the distribution RL effect sample efficiency?\n2. How does risk-sensitive approaches effect exploration? \n\n None to mention :) \n", " This paper considers the problem of coordination in the more realistic general-sum game settings, by applying techniques in distributional reinforcement learning which allows the agents to leverage risk measures to guide the exploration toward risky cooperative strategies. 
Besides, an auxiliary opponent modeling task is used to improve generalizability in the face of non-cooperative players.\nThe method is compared with previous works in four tasks, and the empirical results demonstrate that the proposed method can improve the coordination ability of agents even without access to other agents' rewards, and can quickly adapt to non-cooperative agents by fine-tuning the feature extractor. Strengths:\n\n1. This paper examines a valuable problem. Coordination is often studied in the fully cooperative setting, but general-sum games are more realistic. This paper considers an important but less studied problem. Also, generalization is a major but often overlooked issue in current MARL methods.\n2. The proposed method applies the risk-seeking bonus in distributional RL to solve the exploration problem in multi-agent tasks, which is well-motivated and has the potential to improve coordination ability.\n3. The method used in this article combines risk-sensitive learning and agent modeling, both of which are promising techniques to handle the non-stationarity caused by environment- and agent-level changes in multi-agent settings.\n\nWeaknesses:\n\n1. The most important part, the method, is too short compared with the preliminaries and experiments. It is unclear to me how the feature extractor $\\mathcal{E}$ and the decision maker $D$ work; more details are needed to introduce how the framework functions as a whole.\n2. There have been some works considering risk-sensitive or distributional Q-functions in MARL, such as [DFAC](https://arxiv.org/abs/2102.07936) and [D4PG](https://ieeexplore.ieee.org/document/9311945). This part of the method seems to be an implementation of a risk-sensitive Q-function on top of independent Q-learning (IQL), thus offering rather limited innovation.\n3. The risk-seeking bonus results in optimistic behavior during training. This is beneficial when training with cooperative agents, but on the other hand it can also be exploited by non-cooperative agents, leading to a failure to learn a generalizable cooperative policy.\n4. The paper claims generalization ability across different opponents during execution, but the way of achieving this requires long-term fine-tuning during the test phase (even if only a part of the network is tuned). I think this level of generalization should be described as \"policy transferring\" for clarity, so as not to be confused with zero-shot coordination (ZSC) or ad-hoc teamwork (AHT).\n\nMinors:\n1. Equation 7 makes it hard to understand what the output of the function $D$ is. Besides, the parameters to be optimized on the LHS are $\\phi^i$ and $\\psi^i_s$, but $\\phi^i$ disappears on the RHS.\n2. The loss term $\\mathcal{J}$ in equation 8 is not defined. I know that you have introduced it in the preliminaries, but please define it again under your framework according to your parameterization.\n 1. Since long-term fine-tuning is needed at the test phase to achieve generalization, why keep $\\psi^i_a$ fixed and not tune it anymore?\n2. In line 285, you claim that the cooperative agents and non-cooperative agents are trained by GRSP and MADDPG, respectively. Can you explain why these two different algorithms can produce these two desired strategies?\n3. The proposed method seems to work well on the 5 by 5 grid-world environments, which leaves me wondering how well it performs on other popular Markov game environments. 
For example, the social dilemma extends the prisoner's dilemma into multi-step settings, and results on it would be more convincing.\n4. Can you explain why your method can converge to the highest global reward in IPD? This is very unintuitive because it is not an equilibrium, implying that self-interested agents would deviate from it and try to defect in the end. Your method does not take the rewards of other agents into consideration, and the learned policy is self-interested even though you have incorporated a risk-seeking bonus. 1. Experiments in this work consist of \"Comparison of Returns\" only. More comprehensive demonstration experiments could better explain why the method works.\n2. Generalization of the method used in this paper requires long-term fine-tuning, while many works can already tackle ad-hoc teamwork efficiently. The application value of this work is limited.\n3. During test time, the adaptation needs access to the full action history of all other agents, and fine-tunes the feature extractor on it. This can be a very restrictive assumption and limits its use in practice.", " This paper presents a generalizable and sample-efficient algorithm for multi-agent coordination in decentralized general-sum games without any access to other agents’ rewards or observations. It first learns the distributions over the returns of individuals and estimates a dynamic risk-seeking bonus to encourage agents to discover risky coordination strategies. Then it proposes an auxiliary opponent modeling task so that agents can infer their opponents’ type and dynamically alter corresponding strategies during execution. Empirical studies show that the trained agents outperform other baseline methods by achieving mutual coordination during training and avoiding being exploited by non-cooperative opponents during execution. Strengths: Risk-sensitive learning in general-sum games is an important topic in multi-agent systems, and this paper applies risk-sensitive learning in general-sum games. It uses the well-received distributional RL to learn return distributions and uses distribution distortion. Opponent learning is used to avoid overfitting to opponents’ coordination strategies during training.\n\nWeaknesses: \n1. This paper combines many well-studied methods and applies them in a general-sum game scenario. The methodological contributions are limited.\n\n2. The evaluation scenarios are mainly 2-agent cases, which are simple didactic settings. It is hard to see whether the proposed method can also perform well in complex 3+ agent scenarios, for example, the social dilemma and zero-sum game scenarios in Melting Pot [1].\n\n3. The writing is not good. Sec. 4.2 is not clear and is not easy to follow. A figure showing the architecture is needed.\n\n4. The Monster-hunt and Escalation scenarios were modified in [2], which is not cited in this paper.\n\n[1] Scalable Evaluation of Multi-Agent Reinforcement Learning with Melting Pot\n\n[2] DISCOVERING DIVERSE MULTI-AGENT STRATEGIC BEHAVIOR VIA REWARD RANDOMIZATION\n\n After reading the paper, I have some questions:\n\n1. Sec. 4.2 is not clear. A figure showing the architecture is needed.\n\n2. Why is left truncated variance used here? Why did the index start from M/2? \n\n3. Why is $c_t$ used here? Why is combining Eqn. 4 and Eqn. 5 in Eqn. 6 reasonable? It lacks motivation.\n\n4. LIAM is actually a single-agent RL method; how did you conduct the experiments in MARL scenarios?\n\n5. 
Did you compare your method with LOLA [3]?\n\n[3] Learning with Opponent-Learning Awareness\n Please see the above comments.", " This paper suggests learning risk-seeking policies in general-sum games in order to promote cooperation. The authors seek such strategies using distributional RL and risk-seeking weighting. Empirical evaluation on a range of domains shows that this algorithm finds the cooperative outcomes where other multi-agent RL methods do not, even though some of them focus on opponent modeling and coordination. *Positives*\n \nI like the idea of using risk-seeking behavior for creating coordination and I like the use of distributional RL for implementing it. I also like the empirical evaluation, which uses a sufficiently wide range of domains and clearly shows the superiority of the proposed approach over the chosen baselines.\n\n*Major concerns*\n\nMy main concerns are the lack of theoretical grounding and the depth of understanding conveyed by the experimental results.\n\nLearning in games is a relatively old and well-studied field with many solution concepts and algorithms to achieve them. This paper presents an intuitive solution to promote cooperation, but does not analyze at all what the proposed algorithm is supposed to converge to theoretically. Are there any guarantees on the behavior of the algorithm even in simple matrix games? Is there some relation between the parameters for risk seeking and the gap between cooperative and non-cooperative equilibria of the game? Does the algorithm converge to correlated equilibria, as many other learning dynamics do under some conditions, or is even that not guaranteed?\n\nI believe that theoretical understanding of algorithms is important, but not absolutely necessary for a good paper, if the empirical evaluation chooses the right baselines and provides an in-depth analysis of why and when the proposed algorithm is superior. Since there are many papers dealing with trying to learn cooperative outcomes in general-sum games (such as [A,B]), I consider the choice of baselines insufficient. Furthermore, the experiments do not provide any insights into when and why the proposed method performs better. The ablation study is definitely a step in the right direction, but I still do not know when to expect the algorithm to work well and when not.\n\n[A] Littman ML. Friend-or-foe Q-learning in general-sum games. In ICML 2001 Jun 28 (Vol. 1, pp. 322-328).\n[B] Crandall JW, Goodrich MA. Learning to compete, compromise, and cooperate in repeated general-sum games. In Proceedings of the 22nd international conference on Machine learning 2005 Aug 7 (pp. 161-168).\n\nTo summarize, the overall idea of the paper is interesting to me, but the thoroughness of the treatment of related work and the depth of the analysis are slightly below the bar I would expect from a NeurIPS publication.\n\n*Minor suggestions*\n\nLines 31-32 claim that training a population of strategy profiles is infeasible in real-world settings, which does not seem to be true with algorithms such as AlphaStar.\n\nIt is not clear what exactly the error areas in the graphs are. Confidence intervals? Variance? STD?\n\nSome formulations are hard to follow, such as L196-197 and L236.\n\nWhy did you not compare to the existing algorithms for learning cooperative outcomes in general-sum games?\n\nWhat are the main reasons why the proposed algorithm finds cooperation and the existing ones fail in some of the specific domains? There is no obvious potential negative societal impact. 
The limitations are IMO not explored sufficiently well. It is not clear when the proposed algorithm starts failing." ]
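To make the $\delta \geq 0.5$ threshold in the IPD discussion above (A4 of the first response) concrete, here is a standard worked example. The payoff values $T=5$, $R=3$, $P=1$, $S=0$ are the textbook prisoner's dilemma choice and are an assumption for illustration, not taken from the paper. Against a grim-trigger opponent, cooperating forever beats defecting once and being punished forever exactly when

$$
\frac{R}{1-\delta} \;\geq\; T + \frac{\delta P}{1-\delta}
\;\iff\; \delta\,(T-P) \geq T-R
\;\iff\; \delta \geq \frac{T-R}{T-P} = \frac{5-3}{5-1} = 0.5 .
$$

The left-truncated-variance bonus discussed in A3/A4 of the second response can likewise be summarized in a few lines. Below is a minimal numerical sketch assuming M sorted quantile estimates, with truncation at the median and the $c\sqrt{\log t / t}$ decay described above; the exact coefficients of Eqns. 4-6 are in the paper, so the function names and constants here are illustrative only:

```python
import numpy as np

def left_truncated_variance(quantiles):
    """Upper-tail variability sigma_+^2 of an estimated return distribution.

    `quantiles` holds M estimated quantile values, sorted by quantile level.
    Truncating at the median (index M/2) keeps only the upper tail, the
    optimistic part relevant for exploration; the median is used as the
    center for statistical robustness (cf. A3).
    """
    m = len(quantiles)
    median = quantiles[m // 2]
    return np.sum((quantiles[m // 2:] - median) ** 2) / (2 * m)

def risk_seeking_bonus(quantiles, t, c=1.0):
    """Exploration bonus decayed by c * sqrt(log t / t) (cf. A4), which
    suppresses the persistent intrinsic uncertainty as training proceeds."""
    return c * np.sqrt(np.log(t) / t) * left_truncated_variance(quantiles)

# toy usage: for a fixed return distribution, the bonus decays with t
theta = np.sort(np.random.randn(32))  # M = 32 quantile estimates
for t in (10, 100, 10_000):
    print(t, risk_seeking_bonus(theta, t))
```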
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 9, 3, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "iYs4YxM-znk", "a11V2OBJQ3r", "TU1cEFlql5", "v0Jz9rqgkH8", "39aZAfKvS_x", "M6PCWk4DHe6", "RLfacDDqeXR", "8hlgyAgOZfX", "ufJX92hwkqn", "nips_2022_KWN3I1koJsU", "iYs4YxM-znk", "iYs4YxM-znk", "v5eu84Qyxf2", "D5HCzOrkU0x", "FgEud9uUcYI", "8hlgyAgOZfX", "M6PCWk4DHe6", "nips_2022_KWN3I1koJsU", "nips_2022_KWN3I1koJsU", "nips_2022_KWN3I1koJsU", "nips_2022_KWN3I1koJsU" ]
nips_2022_T7114JzrwB
ZeroC: A Neuro-Symbolic Model for Zero-shot Concept Recognition and Acquisition at Inference Time
Humans have the remarkable ability to recognize and acquire novel visual concepts in a zero-shot manner. Given a high-level, symbolic description of a novel concept in terms of previously learned visual concepts and their relations, humans can recognize novel concepts without seeing any examples. Moreover, they can acquire new concepts by parsing and communicating symbolic structures using learned visual concepts and relations. Endowing machines with these capabilities is pivotal in improving their generalization capability at inference time. In this work, we introduce Zero-shot Concept Recognition and Acquisition (ZeroC), a neuro-symbolic architecture that can recognize and acquire novel concepts in a zero-shot way. ZeroC represents concepts as graphs of constituent concept models (as nodes) and their relations (as edges). To allow inference-time composition, we employ energy-based models (EBMs) to model concepts and relations. We design the ZeroC architecture so that it allows a one-to-one mapping between a symbolic graph structure of a concept and its corresponding EBM, which, for the first time, allows acquiring a new concept, communicating its graph structure, and applying it to classification and detection tasks (even across domains) at inference time. We introduce algorithms for learning and inference with ZeroC. We evaluate ZeroC on a challenging grid-world dataset which is designed to probe zero-shot concept recognition and acquisition, and demonstrate its capability.
Accept
The focus of this work is on the introduction of a compositional reasoning model that enables zero-shot generalization. While there are a number of limitations (e.g., the small domain, limited concepts), reviewers were content that the demonstrated results on low-resolution image domains showed the approach can scale to more realistic task complexity. The primary open challenge for richer tasks is the identification and training of elementary concepts -- a classification that may not hold.
train
[ "Vp47R6oICO", "q1DcCYxK9z", "kEZPH8TYM6u", "7La6nOELeiNV", "SfTYA_59mHr", "3_epOAZNHDqY", "rDH4IkDeStZX", "KM8k4S_jgHe", "fI-KZesxY3B", "cOsKipe1VuS", "gVO1MIoibrv", "d2Wj0zftKrc", "-SE8DRdE5kR", "JSFjZTQPI17" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks very much for you review! In our response, we have attempted to addressed your concerns about scalability, few-shot learning datasets and question about learning real-world concepts. We have also added Appendix A.14 in the revised version to discuss about scalability, which our 2D to 3D domain adaptation experiment and 3D CLEVR experiments show.\n\nWe would be grateful if you could reply to our rebuttal if you have any more questions. If our answers were satisfactory for your concerns, we would be grateful if you consider updating your rating. Thank you!", " Thank you for your review! In our response, we have addressed your concerns about scalability (of learning and inference), generality and answered your questions. In the revised version of the paper, we have also added Appendix A.13 for generality, A.14 for scalability, and augment A.9 explain about the entropy term.\n\nWe would be grateful if you could reply to our rebuttal if you have any more questions. If our answers were satisfactory for your concerns, we would be extremely grateful if you consider updating your rating. Thank you!", " We thank the reviewers for detailed and constructive reviews. We are glad that the reviewers generally recognize the novelty, significance, and soundness of our method. We provide responses to each reviewer, and hope that it will resolve the concerns. We have also updated the paper and Appendix in the revision, where the main updates are:\n\n1. Added Appendix A.13 to explain the generality of our ZeroC approach, which is raised by reviewers maWL, AN9q, P8en. We also provide responses to each individual reviewer to this generality question. In summary, ZeroC is general both in terms of learning elementary concepts/relation as well as applicable to different datasets and scenarios (as shown in our experiments that we use the same model architecture for all 2D and 3D image experiments).\n\n2. Added Appendix A.14 to explain the scalability of our ZeroC approach, which is raised by reviewer Yv6p, AN9q. In summary, our experiments in larger-scale datasets, including 2D to 3D domain adaptation in Sec. 3.2 and the CLEVR experiment in Appendix A.12 shows applicability to more realistic use cases.\n\n3. Added Appendix A.15 to explain the computational complexity of our ZeroC approach, which is raised by reviewer P8en. In summary, ZeroC's computational complexity remains fairly constant, measured in terms of single forward or SGLD step.\n\n4. Expanded Appendix A.9 to explain the designing of HD-Concept dataset.\n\n5. Expanded Appendix A.2 to explain the reason why we neglect the entropy term for training the EBMs.\n", " We thank the reviewer for the positive and constructive review, and glad that the reviewer recognizes the significance, soundness of the paper and well-conducted experiments. In the following, we address the limitations raise by the reviewer:\n\n> Generalization to multiple datasets and scenarios is not straightforward.\n\n> The experimental evaluation is conducted on a single artificial dataset. While this is comprehensible for a novel task, it is not clear to what extent the proposed approach is tailored on the domain of images (e.g., see also Algorithm 1) or whether it could be easily genaralized to other domains.\n\nOur architecture is general to learn diverse concepts and relations. For all our experiments, we use the <em>same<em> network architecture (Appendix A.4), for the (1) dataset HD-Letter, (2) dataset HD-Concept (that contains more complex concepts and relations e.g. 
“rectangles”, “Eshape”, “inside”, “enclose”), (3) Section 3.2 “Acquiring Novel Hierarchical Concepts Across Domains”, and (4) the CLEVR experiment in Appendix A.12. This shows that the algorithm is very general, not tuned toward specific concepts or scenarios. The architectures only differ in the number of input channels (since the 2D images have 10 channels and the 3D images have 3 RGB channels). For future work, we can also experiment with using a single <em>model</em> to learn concepts or relations across datasets, which is out of scope for the current work. We have added the above discussion to Appendix A.13.\n\n> Also, a discussion of the computational cost of the proposed approach is missing\n\nThanks for the suggestion! We have added Appendix A.15 in the revised version, which discusses the computational cost of the proposed approach.\n\n\n> Typos\n\nThanks for spotting the typos! We have updated them in the revised version of the draft.\n\nAbove all, we hope we have resolved your concerns, which makes the paper stronger.\n", " We thank the reviewer for the review. Below we address the reviewer’s concerns and questions.\n\n**Under “Weakness”**:\n> The size of the problem makes it too simple to brute force\n\nWe respectfully disagree. Firstly, in our work, we address the <em>relative</em> difficulty between the training and test datasets, where we need to classify and detect hierarchical concepts in the test dataset that are more difficult than in training. Secondly, the detection/classification task itself is not simple: it has to solve an NP-hard subgraph-isomorphism problem. For example, as stated in the paper, to detect an E-shape (a hierarchical concept with 4 nodes (lines) and 6 edges (relations)) in an image with distractors of a rectangle and a “T” (this image has an underlying graph of 10 nodes (lines) and 45 possible edges), we will have $C_{10}^4 \\times 4!=5040$ possible mask assignments. Additionally, a model may not perfectly detect the masks for low-level concepts. Lastly, many of the detection tasks are not trivial, for example detecting the “line” concept when the line is connected to or crossed by other lines, detecting the more complex relations of “inside” and “encompass” in the HD-Concept dataset in Section 3.1, detecting the “perpendicular” concept in 3D scene images in Section 3.2, and detecting more complex concepts and relations in the CLEVR experiment in Appendix A.12.\n\n> Unclear whether inference would scale\n> Unclear how learning would scale\n\nWe have demonstrated that even for larger images like the 3D images (32x32x3) in Section 3.2 and CLEVR in Appendix A.12 (64x64x3), our approach achieves reasonable accuracy, significantly outperforming baselines. This shows the scalability of our method to larger images, and as reviewer maWL puts it, the experiments “suggest possible applicability to realistic use cases: 2D-to-3D domain adaptation, and generalization to real-world images”. In terms of time complexity, the SGLD inference algorithm (Algorithm 3) uses a fixed number of iteration steps K, and we find that K=60 is enough for reasonable detection accuracy on larger images. For parsing a hierarchical concept from an image (Algorithm 1), the detection of all concept instances can be obtained in a single SGLD run, and for the classification of relations, we can concatenate all pairs of concept masks into a single minibatch and feed them into the relation-EBM, which only requires one relation-EBM forward pass and is essentially instant. Thus the inference can scale to more complex images and also to images with more complex relations (a schematic sketch of this composed-energy SGLD procedure is included after the reviews below). 
For more, please see the response to the second question below. We have added the above discussion to Appendix A.14 of the revised paper.\n\n> Unclear how to learn arbitrary concepts from data (in “Limitation”)\n\n> 1. Even though it is claimed that the algorithm can learn new concepts, isn't the structure of the energy model used in this paper already tuned to the discovery of spatial relationships between horizontal and vertical lines? Or if I run the same algorithm on data with arbitrary concepts (for instance, triangle, circle, object A inside object B, object is a closed contour, etc.) will it be able to discover those as well? (in “questions”)\n\nOur model can work with any initial set of concepts and relations. For all our experiments, we use the <em>same</em> network architecture (Appendix A.4), for the (1) dataset HD-Letter, (2) dataset HD-Concept (that contains more complex concepts and relations, e.g. “rectangles”, “Eshape”, “inside”, “enclose”), (3) Section 3.2 “Acquiring Novel Hierarchical Concepts Across Domains”, and (4) the CLEVR experiment in Appendix A.12. This shows that the algorithm is very general, not tuned toward specific concepts. It is definitely able to detect the concepts/relations of triangle, circle, object A inside object B, and object is a closed contour, as the reviewer suggests, should such training data be provided.\n\n> 2. How does inference scale if I increase the resolution to 256 x 256? Would it work? For the size and structure of your dataset, isn't the partition function of your energy computable (albeit expensive)?\n> 3. Same question about learning\n\nWe haven’t tried a resolution of 256x256, but we have demonstrated that our inference and learning can achieve reasonable performance for larger images like the 3D images (32x32x3) in Section 3.2 and CLEVR in Appendix A.12 (64x64x3). Scaling to larger images will be left for future work. In addition, downsampling the images to a lower resolution can be performed to reduce the complexity of the inference and learning. \n\nFor the size and structure of the dataset, the partition function of the energy is not computable, since it needs to sum over all configurations of masks. For a 16x16 image, the total number of mask configurations is $2^{16\\times16}\\approx10^{77}$ (similar to the number of atoms in the Universe), and for a 32x32 image, the total number of mask configurations is $2^{32\\times32}\\approx10^{308}$. They are clearly not computable.\n\n", " > 4. Doesn't the particular way in which you encode spatial relationships (relative) mean that this model cannot distinguish the concept of W and M, or 6 and 9, since they are both rotations of each other, and therefore satisfy the same relative relationships?\n\nWhether we can distinguish two compositional concepts that are rotations of each other depends on the primitive concepts/relations that ZeroC learns. If both the primitive concepts and the primitive relations are rotationally invariant (as in the dataset of the present paper), then the composed EBM is also invariant for the composed hierarchical concept (which can be easily proved from Def. 2.1, the “Hierarchical Composition Rule”). If, instead, we want to distinguish two compositional concepts that are rotations of each other, we can design the primitive concepts and/or relations such that they are not rotationally invariant. 
For example, we can define two “perpendicular” relations: “perp1” denotes a perpendicular relation with the right angle on the upper left, and “perp2” denotes a perpendicular relation with the right angle on the bottom right; in this way, “6” and “9” can be distinguished.\n\n> 5. This model heavily resembles the ones in [1] and [2], which also describe letters as graphs of lateral relationships that entangle nodes containing edges. What are the main differences? Can this model be used to solve CAPTCHAs? Experiments showing this would definitely be much more convincing as to its capabilities.\n\nCompared to references [1][2], our work differs in (1) Goal: we focus on zero-shot recognition of compositional concepts and zero-shot concept acquisition, while [1][2] focus on recognizing CAPTCHAs in complex scenarios. (2) Architecture: we use energy-based models as base models and compose them to recognize novel hierarchical concepts, while [1] uses a Recursive Cortical Network (RCN), and [2] first needs to construct a Generative Shape Model for the fonts and then parse a factor graph by solving an optimization problem. Our ZeroC requires much less engineering effort to adapt to a specific dataset, and can learn more general concepts and relations, as explained in the answer to question 1. (3) Learning: we use contrastive divergence for learning the EBMs, while the RCN in [1] is learned in a bottom-up way, and [2] uses a maximum-margin structured output learning paradigm.\n\nThis model is in principle able to solve CAPTCHAs; it will be exciting future work.\n\n> 6. I wasn't able to understand precisely which information is conveyed from ZeroC1 to ZeroC2. Could you clarify this section in the paper?\n\nThe information conveyed from ZeroC1 to ZeroC2 is the graphical structure of a hierarchical concept. For example, in Figure 3, ZeroC1 learns the graphical structure of an E shape in terms of the initial concepts and relations. The graph structure is then conveyed to ZeroC2, which enables it to classify and detect E shapes in the 3D domain. \n\n> 7. When using the loss from [12], you mention that you neglect the entropy term. What's the problem with keeping it? Would the results from [12] improve had they neglected it?\n\nThe entropy term in [12] serves to increase the diversity of the generated examples, and the computation of the entropy requires many examples. This is fine in [12], since the EBM there has the form E(x) and only needs to generate images <em>unconditionally</em>, so the entropy can be estimated using all previously generated images x. In our work, our EBMs are E(x,m,c) and E(x,m1,m2,c), and we need to generate the mask <em>conditionally</em>, e.g. generate mask m conditioned on the image x and label c. The entropy term would need to be a conditional entropy of m given x and c, where the pool of masks m should be different for each individual image x and label c. This would require, e.g., generating over 100 masks for each x, c to estimate the entropy, which is computationally expensive, while currently we only need to sample 1 mask. Moreover, typically there are only a few correct masks for a concept in an image, and encouraging diversity may not help the model identify the correct mask. In fact, we have empirically tried keeping the entropy term and it results in much worse accuracy, likely due to the above reason. We have added this discussion to Appendix A.2 of the paper.\n\n> Minor: Figure 4 in the appendix, training relation, first column, doesn't look like a perp-edge.\n\nThanks for spotting it. 
It was a typo when we made the figure. We have corrected it in the revised version.\n", " We thank the reviewer for the review. Below we address the reviewer’s concerns and questions.\n\nUnder “Weakness”:\n> (1) There should be more discussion about how the proposed approach can scale to real world images that can include many edges and shapes with different relations. It seems this might become prohibitively costly.\n\nIn fact, in the original submission, we conducted 2 experiments that test our approach on more realistic images. One is the CLEVR dataset with 3D scene images (see Appendix A.12). We show that our method significantly outperforms the CADA-VAE and the statistics baseline. The other is Section 3.2 on 2D-to-3D domain adaptation, where the second domain contains 3D scene images. As Reviewer maWL puts it, the above experiments “suggest possible applicability to realistic use cases: 2D-to-3D domain adaptation, and generalization to real-world images.” We would like to emphasize here that the main aim of the paper is the introduction of a novel framework that, for the first time, demonstrates zero-shot hierarchical concept recognition, and acquisition of such concepts, even across domains. We use a grid-world dataset for systematic evaluation as no current dataset provides suitable configurations, and we also have 2 experiments with more realistic 3D images showing that our framework can handle more difficult scenarios. Scaling to real world images is an exciting next step, which will be our future work.\n\n> (2) The evaluation would be more convincing if it can also include results on some few-shot learning datasets such as Omniglot or, even better, some real world images\n\nOther existing datasets lack annotations for primitive concepts and/or relations that are necessary to discover complex concepts at inference time. For example, Omniglot lacks relation annotations and is thus unsuitable for our tasks. Note that our setting is a zero-shot learning setting, different from the typical few-shot learning setting. In the response to the previous question, we have explained that our submission already contains two experiments that, as Reviewer maWL puts it, “suggest possible applicability to realistic use cases”. Scaling to real world images will be exciting future work.\n\n> How would the basic relations and concepts be defined or discovered for real world images?\n\nFor real world images, we would still use the same pipeline, and the data would need to provide elementary concept and relation annotations for the real world images. For example, the CUB-200-2011 dataset [1] provides annotations for elementary concepts for birds, e.g. beak, belly, tail, etc. This dataset lacks relation annotations, so it is unsuitable for our pipeline. A suitable dataset for our pipeline would be an augmented version of the above CUB-200-2011 dataset that also has relation annotations like “connect-to”, “up”, “down”, “extend”, etc. With such annotations, we can learn both concept EBMs and relation EBMs and compose them to recognize compositional concepts like different species of birds. We have added the above discussion to Appendix A.14 “Scalability of ZeroC” in the revised paper.\n\n[1] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset. 
2011 ", " We thank the reviewer for the positive review, and glad that the reviewer recognizes the strengths of our paper in the context of systematic generalization, and solid experiments with possible applicability to realistic use cases. Below we address the reviewer’s concerns and questions.\n\n**Under “Weaknesses”**:\n> Each grid-world dataset used for evaluation contains only 3 pre-defined concepts. There is still a long way to go in order to more rigorously test the model’s compositional generalizability.\n\nWe agree with the reviewer that compositional generalizability is shown for 3 pre-defined compositional concepts for each dataset. However, even this capability is beyond what the current methods can do. Further extensions of our approach to more concepts are planned as future work.\n\n> Object localization is achieved through a somewhat clueless search with location candidates sampled from a distribution. This might lead to an efficiency issue once the task difficulty, image resolution or object crowdedness is scaled up.\n\nAs stated in “detection” section in Section 2.2, detection with SGLD approximates a posterior sampling of $P_{M|X,C}(m|x,c)$ with a suitably learned concept EBM. When detecting compositional concepts, which are defined as a graph with constituent concepts as its nodes and constituent relations as edges, composing the corresponding EBM together actually limits the search space. Any configuration that violates at least one EBM will result in a high energy, which alleviates the complexity of the task when task difficulty, image resolution or object crowdedness scale up. Empirically, we find that our method performs well. (1) With many concept instances present: in detection task example Fig. 2 upper left, it can correctly select a compositional concept with 4 nodes (line concepts), 6 edges (relations) in the image that contains an underlying graph with 10 nodes (lines) and 45 possible edges. (2) Deal with more complex concepts and relations as shown by the results for HD-Concept dataset. (3) Scale to more realistic 3D images, both in learning concepts and relations of 3D images in Section 3.2, and also CLEVR experiments in Appendix A.12.\n\n> At present, all primitive concepts and relations are hand-crafted, but the paper didn’t explain the universality of those primitives in a general object recognition task. For example, the current task setup only touches upon parallel vs. perpendicular, but it is unclear how to define a template for an “acute angle” since it is defined as a range. In other words, the question is: how much space of the visual territory do you expect that can be approximated by your set of primitives? In order to move towards recognizing real-world objects, what other primitives should be included or what bottlenecks might need to be conquered?\n\nThe EBMs in the ZeroC framework can learn general primitives as long as labeled data is provided to demonstrate the concept or the relation, even if that concept or relation is a range that contains some intrinsic variation. For example, to learn the “acute angle” relation between two lines (with varying angles), ZeroC only needs a dataset that contains many (image, mask1, mask2, “acute-angle”) tuples where the mask1 and mask2 identify the two lines in the image that form an acute angle, with different examples containing different angles. In other words, as long as the dataset contains enough data that identifies a concept/relation in a certain range, the ZeroC can learn such primitives. 
This is also shown in the HD-Concept dataset in Section 3.1, where the “inside”, “outside”, and “non-overlap” relation primitives are learned, and each relation has intrinsic variation. For example, the two masks for the “inside” relation can have different positions, relative positions, and sizes. \n\nIn order to move towards recognizing real-world objects, the ZeroC architecture is able to do that in principle. The main bottleneck is labeled data, as we need detailed labels for the many primitive concepts and relations that constitute real world objects. We will add the above discussion to the revised appendix of the paper.\n\n\n> It would be great to demonstrate how the pool of primitive recognition models could be dynamically expanded and compressed to allow for progressively increasing demands.\n\nThanks for the suggestion! Our ZeroC architecture naturally supports continued expansion or compression of the EBM pool, as newly learned compositional concept EBMs can be dynamically added to the pool. Independently trained EBMs for new concepts/relations can also be added to the pool and composed together with existing EBMs. This is an exciting future direction, but is out of scope for the present paper, since here we focus on proposing the framework and demonstrating the zero-shot recognition and acquisition capability of ZeroC. We have added this discussion to the Appendix A.10 “Limitation of current work” section.", " **Below are answers to the questions:**\n> Line 126: “*We want to determine if the concept c appears in a given image x. We need to marginalize over the mask m.*” How sensitive is your approach to the tightness of a mask? For example, does the concept c have to be fully occupying the entire masked space to be recognized?\n\nAs shown in Eq. (2), we feed the mask $\\tilde{m}_n^K$ at the last step K of SGLD into the EBM $E_{X,M,C}(x,\\tilde{m}_n^K,c)$ and obtain the energy for concept c. We find empirically that the SGLD process can already find quite a good mask $\\tilde{m}_n^K$ that locally minimizes the energy, where the mask overlaps quite well with the concept instance to be discovered. The more the mask differs from the mask for the concept, the higher the energy. We find that our model has reasonable tightness for the mask: not so tight that SGLD has a hard time finding the correct mask, and not so loose that it cannot differentiate a correct mask from a perturbed version of it.\n\n> Line 149: “the mask M is the maximum of all the masks”. I suppose each constituent is being recognized independently, right? Then is it possible that the predicted masks don’t form a contiguous region, which would indicate a detection error? How would you plan to address this case?\n\nThis question is about the Hierarchical Composition Rule (Eq. 3) that detects a compositional concept given a graph specification of constituent concepts and relations. The different masks $m_i$, $m_{j1}$, $m_{j2}$ are independent variables that are optimized jointly by SGLD through a common compositional EBM energy landscape, and only a mask configuration that satisfies all concepts and relations as specified will result in a low energy. It is entirely possible that the predicted masks don’t form a contiguous region, which is completely normal. For example, we can have a compositional concept like “parallel-line” that consists of two disconnected lines and a relation requiring that they are parallel. 
The constituent masks for this compositional concept are not contiguous, which is expected. \n\n> Line 230: “The hierarchical concepts to be classified and detected are three characters which we term Concept1, Concept2, Concept3”. These three concepts look like random patterns. How did you choose them in the first place?\n\nIn this dataset, we have three primitive relations, “inside”, “enclose” and “non-overlap”, and two primitive concepts, “rectangle” and “E-shape”. The main goal of this dataset is to test whether our method can detect different relational graph structures given the same number of concept nodes. Concretely, given 2 “rectangle” concept instances and one “E-shape” instance, there are limited ways to form a compositional concept with different relational graph structures: Concept1 is where the “E-shape” is not inside either “rectangle”. Concept2 is where the “E-shape” and “rectangle”_1 are both inside the other “rectangle”_2, and the “E-shape” is not inside “rectangle”_1. Concept3 is where the “E-shape” is enclosed by one “rectangle”, which is itself enclosed by another “rectangle”. All three compositional concepts have the same number of constituent concepts but different relation graphs. We have added this discussion to Appendix A.9 of the revised paper.\n\n> Does your approach exhibit any flexibility in recognizing concepts that have been slightly stretched or squeezed with the height-width ratio being altered? If such flexibility is allowed, it’d be great to visualize the performance vs. the degree of distortion.\n\nIn the dataset provided for training the concept EBMs, we have found that it is beneficial to have concepts with varying height-width ratios. For all concept datasets we provide for training, e.g. “line”, “rectangle”, “E-shape”, we sample varying height-width ratios, which helps the EBM recognize the intrinsic variation of a concept. This answer is also connected with the above response about the “acute-angle” relation, where it is beneficial to provide training data that contains the intrinsic variation of the concept.\n\n> Line 249: “During inference, its embedding for the graph structure can contain up to 10 hots, while during training, it is only up to 1-hot”. Does the performance exhibit any degradation across inference examples from 1-hot up to 10-hot? If yes, what does the degradation look like versus #hots?\n\nIn the current datasets, we don’t have compositional concepts with many different #hots. We will perform a more thorough study with more compositional concepts in a future revised version.", " > Line 263: “*… viewed in 3D from a certain camera angle …*”. How is a 3D image input represented? Is it still a 2D matrix with 3 channels or is it represented as a 3D scene with depth?\n\nThe 3D image in Section 3.2 is represented as a 2D matrix with RGB channels, similar to how CLEVR is represented as a 2D image of a 3D scene.\n\n> Line 263 again: I suppose the camera angle may distort a perpendicular relation so that it looks like an acute angle. And this distortion varies across camera angles. How did you address this and how did you choose camera angles when you constructed this dataset?\n\nIn constructing the dataset, we fixed the camera angle. Different locations of the angle will make the perpendicular relation look like different acute angles in an image. This is completely fine, as explained above: as long as the dataset contains concept instances with such intrinsic variation, the learned EBM is able to recognize it. 
This is supported by the empirical result that the classification and detection accuracy for 3D images is well above the “statistics” baseline.\n\n> Line 264: “each test task consists of a tuple of three images”. Why three images at a time?\n\nThis is because in this dataset we have 3 compositional concepts, and for each concept we show one example. For a dataset with N compositional concepts, we would then show N images where each image corresponds to one concept.\n\n> This doesn’t have to be answered thoroughly but I’d love to see your thoughts: How do you expect your approach to generalize to non-90-degree angles, like the top angle in “A”, or relations about arcs?\n\nAs explained above, our method can handle quite general relations, like non-90-degree angles or relations about arcs, as long as a dataset demonstrating the intrinsic variation in these concepts is provided.\n\n**Summary**\n\nAbove all, thank you for the detailed review and questions! We hope that we have resolved your concerns and answered your questions.\n", " The main contribution of this paper is a system (ZeroC) for pattern detection and classification that, once trained on elementary concepts, is able to zero-shot adapt to complex concepts as long as the complex concept can be described as a composition of elementary concepts & relations via a graph. Individual energy-based models (EBMs) can be trained to match each elementary concept or relation, each assigning low energy to patterns that fit its ‘signature’ (i.e. template). To recognize a complex pattern, individual EBMs can be composed according to the concept-composition graph, which allows for zero-shot recognition of arbitrary combinations of learned concepts and relations.\n\nTo evaluate zero-shot concept recognition under the scenario that the proposed method is designed for, this paper also introduces two datasets, since no existing dataset is sufficiently relevant. One dataset is easier, containing letters and simple relations, while the other is harder, containing more complex patterns and relations. \n\nExperiments show that ZeroC outperforms existing zero-shot concept recognition approaches on the proposed datasets, in both in-domain and cross-domain (2D images $\\rightarrow$ 3D images) settings. Additional experiments on CLEVR give clues about potentially broader usage. Strengths:\n- This is a good move towards systematic generalization. The paper presents a way to train a recognition model on elementary patterns and perform inference on hierarchically composed patterns, given that the complicated pattern can be symbolically described (i.e. via a graph). \n- The experiments are solid and suggest possible applicability to realistic use cases: 2D-to-3D domain adaptation, and generalization to real-world images.\n\nWeaknesses\n- Each grid-world dataset used for evaluation contains only 3 pre-defined concepts. There is still a long way to go in order to more rigorously test the model’s compositional generalizability.\n- Object localization is achieved through a somewhat clueless search with location candidates sampled from a distribution. This might lead to an efficiency issue once the task difficulty, image resolution or object crowdedness is scaled up. \n- At present, all primitive concepts and relations are hand-crafted, but the paper didn’t explain the universality of those primitives in a general object recognition task. For example, the current task setup only touches upon parallel vs. 
perpendicular, but it is unclear how to define a template for an “acute angle” since it is defined as a range. In other words, the question is: how much of the visual territory do you expect can be approximated by your set of primitives? In order to move towards recognizing real-world objects, what other primitives should be included or what bottlenecks might need to be conquered?\n- It would be great to demonstrate how the pool of primitive recognition models could be dynamically expanded and compressed to allow for progressively increasing demands. \n Here are a handful of questions pertaining to experimentation details that could potentially make the paper more insightful.\n- Line 126: “*We want to determine if the concept c appears in a given image x. We need to marginalize over the mask m.*” How sensitive is your approach to the **tightness** of a mask? For example, does the concept c have to be fully occupying the entire masked space to be recognized?\n- Line 149: “*the mask M is the maximum of all the masks*”. I suppose each constituent is being recognized independently, right? Then is it possible that the predicted masks don’t form a contiguous region, which would indicate a detection error? How would you plan to address this case?\n- Line 230: “*The hierarchical concepts to be classified and detected are three characters which we term Concept1, Concept2, Concept3*”. These three concepts look like random patterns. How did you choose them in the first place?\n- Does your approach exhibit any flexibility in recognizing concepts that have been slightly stretched or squeezed with the height-width ratio being altered? If such flexibility is allowed, it’d be great to visualize the performance vs. the degree of distortion.\n- Line 249: “*During inference, its embedding for the graph structure can contain up to 10 hots, while during training, it is only up to 1-hot*”. Does the performance exhibit any degradation across inference examples from 1-hot up to 10-hot? If yes, what does the degradation look like versus #hots?\n- Line 263: “*… viewed in 3D from a certain camera angle …*”. How is a 3D image input represented? Is it still a 2D matrix with 3 channels or is it represented as a 3D scene with depth?\n- Line 263 again: I suppose the camera angle may distort a perpendicular relation so that it looks like an acute angle. And this distortion varies across camera angles. How did you address this and how did you choose camera angles when you constructed this dataset?\n- Line 264: “*each test task consists of a tuple of three images*”. Why three images at a time? \n- This doesn’t have to be answered thoroughly but I’d love to see your thoughts: How do you expect your approach to generalize to non-90-degree angles, like the top angle in “A”, or relations about arcs? \n Broader social impact and limitations are discussed in the supplementary. \n\nHere I briefly point out a few limitations; addressing any of these would make the paper stronger.\n- The datasets used for evaluation cover a limited number of concepts.\n- The search process for a mask indicating “objectness” is inefficient.\n- The universality of the set of primitive concepts and relations isn’t justified.\n- The current algorithm is unable to evolve with dynamic expansion and compression as the number of concepts to model increases.", " This work proposes a framework for zero-shot concept learning, i.e., learning and recognizing new concepts at inference time. 
The proposed framework extracts concepts and relations and composes them to learn new concepts hierarchically. The proposed method is evaluated on some synthetic datasets. \n Strength\n\n(1) Zero-shot concept learning at inference time is an interesting problem and deserves more attention.\n\n(2) This work proposes a neat framework for learning concepts in a compositional way using energy-based models and hierarchical composition. \n\nWeakness \n\n(1) There should be more discussion about how the proposed approach can scale to real world images that can include many edges and shapes with different relations. It seems this might become prohibitively costly. \n\n(2) The evaluation would be more convincing if it can also include results on some few-shot learning datasets such as Omniglot or, even better, some real world images.\n How would the basic relations and concepts be defined or discovered for real world images? It seems non-trivial to apply the current approach successfully to real world images, which is the main limitation; the authors might want to discuss it further, along with future work to address the remaining challenges. ", " An EBM hierarchically describing concepts and relationships is used for concept discovery in a crowded scene with irrelevant concepts. This graph can be transferred across different domains. Strengths:\n\nTechnically correct\n\n- Addressing the relevant problem of hierarchical concept representation\n- Clarifying examples\n- Introduces a new dataset to test these ideas\n\nWeaknesses:\n\n- The size of the problem makes it too simple to brute force\n- Unclear whether inference would scale\n- Unclear how to learn arbitrary concepts from data\n- Unclear how learning would scale\n 1. Even though it is claimed that the algorithm can learn new concepts, isn't the structure of the energy model used in this paper already tuned to the discovery of spatial relationships between horizontal and vertical lines? Or if I run the same algorithm on data with arbitrary concepts (for instance, triangle, circle, object A inside object B, object is a closed contour, etc.) will it be able to discover those as well?\n\n2. How does inference scale if I increase the resolution to 256 x 256? Would it work? For the size and structure of your dataset, isn't the partition function of your energy computable (albeit expensive)?\n\n3. Same question about learning\n\n4. Doesn't the particular way in which you encode spatial relationships (relative) mean that this model cannot distinguish the concept of W and M, or 6 and 9, since they are both rotations of each other, and therefore satisfy the same relative relationships?\n\n5. This model heavily resembles the ones in [1] and [2], which also describe letters as graphs of lateral relationships that entangle nodes containing edges. What are the main differences? Can this model be used to solve CAPTCHAs? Experiments showing this would definitely be much more convincing as to its capabilities.\n\n6. I wasn't able to understand precisely which information is conveyed from ZeroC1 to ZeroC2. Could you clarify this section in the paper?\n\n7. When using the loss from [12], you mention that you neglect the entropy term. What's the problem with keeping it? Would the results from [12] improve had they neglected it?\n\nMinor: Figure 4 in the appendix, training relation, first column, doesn't look like a perp-edge.\n\n\n[1] A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs. Dileep George et al. 
Science 2017\n\n[2] Generative Shape Models: Joint Text Recognition and Segmentation with Very Little Training Data. Xinghua Lou. NIPS 2016. Described above", " This paper addresses the problem of zero-shot concept recognition. By representing concepts with a graph, and by exploiting an energy-based model, the proposed approach enables detection and classification of novel entities at inference time. The approach can be considered \"neuro-symbolic\" as it combines symbolic graph representation with (neural) energy-based models, with a one-to-one translation of one into the other.\n + The addressed problem is challenging and important.\n+ The proposed solution is sound and elegant.\n+ The experimental evaluation, though on a single dataset, is well conducted.\n\n- Generalization to multiple datasets and scenarios is not straightforward.\n- No discussion on computational complexity.\n * There is no description of the computational cost of the proposed approach. For example, Equation 2, although it is computed via MAP estimation, and although it needs only to find the concept with the highest value in the numerator, appears to be costly. Could the authors comment on this point?\n The experimental evaluation is conducted on a single artificial dataset. While this is comprehensible for a novel task, it is not clear to what extent the proposed approach is tailored to the domain of images (e.g., see also Algorithm 1) or whether it could be easily generalized to other domains.\n\nAlso, a discussion of the computational cost of the proposed approach is missing.\n\nTypos:\n- Line 29, \"which requires\" -> \"which require\"\n- Line 71, \"that can given\" -> \"that given\"\n- Line 126, \"to determine if\" -> \"to determine whether\"\n- Line 134, \"a English\" -> \"an English\"" ]
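Several threads in the exchanges above (the $C_{10}^4 \times 4! = 5040$ counting argument, SGLD detection with a fixed $K=60$ steps, and the Hierarchical Composition Rule with an element-wise max over constituent masks) fit into one schematic sketch. This is a reconstruction from the descriptions in the rebuttals, not the authors' code: the dummy quadratic "EBMs", mask shapes, and step sizes below are placeholders.

```python
import torch
from math import comb, factorial

# Counting check from the rebuttal: assigning the 4 parts of an E-shape to 10
# detected lines gives C(10, 4) * 4! = 210 * 24 = 5040 candidate assignments,
# which is why gradient-based (SGLD) search is preferred over enumeration.
assert comb(10, 4) * factorial(4) == 5040

def composed_energy(x, masks, concept_ebms, relation_ebms, edges):
    """Hierarchical Composition Rule (schematic): total energy is one
    concept-EBM term per node plus one relation-EBM term per edge, so only
    a mask configuration satisfying *all* constituents gets low energy."""
    e = sum(ebm(x, m) for ebm, m in zip(concept_ebms, masks))
    return e + sum(rel(x, masks[i], masks[j])
                   for rel, (i, j) in zip(relation_ebms, edges))

def sgld_detect(x, n_nodes, concept_ebms, relation_ebms, edges,
                K=60, step=1.0, noise=0.01, shape=(1, 16, 16)):
    """Jointly optimize all constituent masks with K SGLD steps on the
    composed energy (K = 60 was reported as sufficient in the rebuttal)."""
    masks = [torch.rand(shape, requires_grad=True) for _ in range(n_nodes)]
    for _ in range(K):
        energy = composed_energy(x, masks, concept_ebms, relation_ebms, edges)
        grads = torch.autograd.grad(energy, masks)
        with torch.no_grad():
            for m, g in zip(masks, grads):
                m -= 0.5 * step * g + noise * torch.randn_like(m)
                m.clamp_(0.0, 1.0)
    # overall detection mask: element-wise max over the constituent masks
    return torch.stack([m.detach() for m in masks]).max(dim=0).values

# toy usage with dummy quadratic "EBMs" (placeholders, not the paper's models)
x = torch.zeros(1, 16, 16)
concept = lambda x, m: (m - 0.5).pow(2).sum()   # prefers mid-valued masks
no_overlap = lambda x, m1, m2: (m1 * m2).sum()  # penalizes overlapping masks
out = sgld_detect(x, n_nodes=2, concept_ebms=[concept, concept],
                  relation_ebms=[no_overlap], edges=[(0, 1)])
print(out.shape)  # torch.Size([1, 16, 16])
```

Because each constituent mask is a separate SGLD variable, non-contiguous overall masks (e.g. the "parallel-line" example in the Line 149 answer) are handled naturally by the final max-pooling step.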
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "rDH4IkDeStZX", "3_epOAZNHDqY", "nips_2022_T7114JzrwB", "JSFjZTQPI17", "-SE8DRdE5kR", "-SE8DRdE5kR", "d2Wj0zftKrc", "gVO1MIoibrv", "gVO1MIoibrv", "gVO1MIoibrv", "nips_2022_T7114JzrwB", "nips_2022_T7114JzrwB", "nips_2022_T7114JzrwB", "nips_2022_T7114JzrwB" ]
nips_2022_xvZtgp5wyYT
Learning to Accelerate Partial Differential Equations via Latent Global Evolution
Simulating the time evolution of Partial Differential Equations (PDEs) of large-scale systems is crucial in many scientific and engineering domains such as fluid dynamics, weather forecasting and their inverse optimization problems. However, both classical solvers and recent deep learning-based surrogate models are typically extremely computationally intensive, because of their local evolution: they need to update the state of each discretized cell at each time step during inference. Here we develop Latent Evolution of PDEs (LE-PDE), a simple, fast and scalable method to accelerate the simulation and inverse optimization of PDEs. LE-PDE learns a compact, global representation of the system and efficiently evolves it fully in the latent space with learned latent evolution models. LE-PDE achieves speedup by having a much smaller latent dimension to update during long rollout as compared to updating in the input space. We introduce new learning objectives to effectively learn such latent dynamics to ensure long-term stability. We further introduce techniques for speeding-up inverse optimization of boundary conditions for PDEs via backpropagation through time in latent space, and an annealing technique to address the non-differentiability and sparse interaction of boundary conditions. We test our method in a 1D benchmark of nonlinear PDEs, 2D Navier-Stokes flows into turbulent phase and an inverse optimization of boundary conditions in 2D Navier-Stokes flow. Compared to state-of-the-art deep learning-based surrogate models and other strong baselines, we demonstrate up to 128x reduction in the dimensions to update, and up to 15x improvement in speed, while achieving competitive accuracy.
Accept
The paper presents a new method for accelerating the simulation and inverse optimization of partial differential equations (PDEs) of large-scale systems. The proposed approach learns the evolution of dynamics in a “global” latent space (i.e., with fixed dimensionality). The reviewers agree the proposed approach is novel and empirically competitive. Issues regarding experiments have largely been addressed by the authors in their rebuttal. The authors are expected to add some extended discussion (if possible) on (theoretical) properties of PDEs where their approach is expected to succeed. Some of the reviewers increased their scores after the rebuttal period.
train
[ "avDG43zgnev", "xPwUQ94rGg-", "wbEcYR-aolD", "eH3kTFgV2LH", "MCGgTho68yQ", "qTGpjLv61dN", "1z7NbJMK-x4", "rJVs6e2b8f5", "OaBuxhNcF1n", "tYVtii66alA", "rbncqNtP_Em", "ubV8IilYRYb", "2IsWw94qSKx", "46LZzNNWGtX", "fS8fUeS5F_8", "lvMVLrupf7", "PkpJ78nSN0", "XDFxOpXsw-", "VqGOaHv9wJp", "Yvsk7vvSCsb", "KIsnb7cTFc", "o_mqVYBZFUG" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks the reviewers for the response! To give an intuitive answer to your question of \"which family of PDEs can be applied the proposed method\", we can think of a PDE as a ground-truth model that evolves the ***state*** of a physical system. Typically, the states show more global, dominant features, and can be described by a latent vector with much smaller dimension than the original discretization. Our LE-PDE exploit this ***compressivity of state*** to evolve the system in latent space and achieve speedup, and as long as the PDE does not significantly increase the spatial complexity of the state as it evolves (e.g. developing finer and finer spatial details as in 2-stream instability in of plasma [1]), our method can apply. Most of the PDEs satisfy the above requirements that the state are compressible and does not significantly increase its complexity, so our LE-PDE can be apply to most PDEs. Since any compression of state can incur a possible increase of error (possibly large or small, as the Pareto frontier of \"error vs. runtime\" and \"error vs. #parameter\" in Fig. S8 and S9 show for our LE-PDE and FNO), the more important/relevant question is then \"what is the tradeoff of error vs. runtime we want for the given PDE\", since we can design the encoder of LE-PDE with varying amount of compression. For example, we can design an encoder with minimal compression, so runtime reduction is low but can guarantee to retain low error, or with a much more aggressive compression (like in our 2D and 3D experiments), but can still achieve minimal increase of error. The amount of compression is a hyperparameter which can be optimized and chosen via a validation set. Theoretically studying the best amount of compression that achieves a good tradeoff will be left for an exciting future work. We have also added the above discussion to the end of Appendix J in the revised version.\n\n\n[1] Roberts, K. V., and Herbert L. Berk. \"Nonlinear evolution of a two-stream instability.\" Physical Review Letters 19.6 (1967): 297.\n", " Thank you for your answers.\n\nIt is still a bit unclear about the extent to which the parameters for the encoder and decoder actually contribute to learning the PDE since at this point I would assume that they do much of the work in compressing its representation - they would evolve the dynamics only, while they don't really compress all of the PDE itself. In other words, while a small MLP may not be able to fully compress dynamics (as pointed out by other reviewers), it may be able to evolve them in latent space (hence the title of the paper), while encoders and decoders would have most of the parameters tasked with compression and reconstruction.\n\nIn summary, although the number of parameters of LE-PDE is noticeably larger, given your explanation and results on training times and memory usage along with the new Pareto plots, my major concerns are resolved. \n\nThus, given the contributions of the paper as well as the willingness of the authors' to provide explanations and a considerable amount of additional experimental results, I would recommend the paper for acceptance and am raising my score to 7.", " > Regarding the Pareto front comparison, given that the results are in a table format (separated for both FNO and LE-PDE), it may be a bit hard to directly compare the results, and I would advise making, perhaps, a plot (e.g. error vs # parameters, error vs runtime) to better understand the differences. 
For example, looking at an extract of the table for the 2D table (best LE-PDE model vs default FNO)\n\n**Answer:** Thank you for the suggestion! We created plots that present error v.s. # parameters and error v.s. runtime trade-offs from the tables G1, G2, G3 and G4. Please see Figure S8 and S9 in Appendix J (pp. 26-27) in the newly revised version of the paper. We hope that the plots could provide a clear understanding of the differences between the models. In summary, for the error vs. runtime plot, LE-PDE Pareto-dominates FNO in 1D (Fig. S8(a)), and have much smaller runtime and comparable error w.r.t. FNO in 2D (Fig. S8(b)). For the error vs. #parameter plot, LE-PDE with evolution model typically has less parameters than FNO, which in turn also have less parameters than LE-PDE with full model (Fig. S9(a)(b)).\n\n> My question then is, how long is the training time for FNO vs LE-PDE? Similar to the question before, the 3D case also shows a substantial increase in the number of total parameters for the LE-PDE (~ 20 as many). To summarize, my main concern is about the impact on memory requirements of the proposed model as well as training times.\n\n\n**Answer:** Thank you for the additional comments. This is a very good point. Here, we present an augmented table G12 as an updated table of G7 in the following (we also updated the Table 5 in Appendix F, page 20, in the newly revised version); we add two metrics, training time per epoch and GPU memory usage during training, which are impacted by the parameters of the evolution models and other factors (e.g. architecture):\n\nTable G12: Comparison of LE-PDE with baseline on runtime and representation dimension, in the 3D Navier-Stokes flow. The runtime is to predict the state at t = 40.\n\nLE-PDE setting | error at t=40 | runtime for rollout | # parameters | # parameters for LE-PDE | training time (min) per epoch | memory usage (MiB)\n:--: | :--: | :--: | :--: | :--: | :--: | :--:\nFNO with 2-step loss | 0.1695 | 7.00 | 3281864 | 3281864 | 102 | 25147\nFNO with 1-step loss | 0.3215 | 7.00 | 3281864 | 3281864 | 58 | 24891\nLE-PDE (ours) | 0.1947 | 0.084 | 65003120 | 83072 | 65 | 25595\n\nThe “FNO with 2-step loss” is trained with 2-step rollout. We also added results for FNO trained with single-step loss in the Table. Comparing “LE-PDE (ours)” with “FNO with 2-step loss”, we see that although ours has much more total parameters, the training time is smaller, and memory usage is comparable. Concretely,\n\n**memory usage:** The reason why FNO has similar memory usage as ours albeit less parameter is the following: the default FNO model has 4 SpectralConv3d layers, 4 Conv3d layers and 2 dense layers, without spatial compression, and during the training, each layer needs to retain the intermediate layer activations (during forward) and gradients during backpropagation. Therefore, the intermediate layer activations and gradients of 4.2 million cells are stored for 4+4+2 layers. In contrast, our LE-PDE’s encoder has 5 layers of Conv3d, each with a compression rate of 4, resulting in much smaller requirement for storing intermediate layer activations and gradients.\n\n**Training time:** The smaller training time of our LE-PDE compared with “FNO with 2-step loss” is mainly due to LE-PDE’s smaller runtime for rollout (third column), which is an inner loop of training. 
We also see that “FNO with 1-step loss” halves the training time compared to “FNO with 2-step loss”, since the latter does not need to rollout for 2 steps during training as an inner loop. However, “FNO with 1-step loss” has a much larger error.\n\nIn summary, we see that in 3D large-scale experiment, although the full parameters of our LE-PDE can be larger than FNO, the memory usage and training time is roughly comparable. In 1D and 2D, we observe similar memory usage between FNO and our LE-PDE, and FNO’s training time is slightly less.\n", " Thanks for the response which has resolved all my queries about the submission. As these were only minor points, I have kept my score at 7 (Accept). Congratulations on an excellent paper, which I enjoyed reading and reviewing!", " I would like to thank the authors for providing additional experiments and updating the manuscript, especially for providing more results for the Pareto comparison, the 3D experiment details, and additional results compared to LFM.\n\nHowever, I have some questions regarding the updated experimental results, in particular regarding the number of parameters and training times. \n\n1. Regarding the Pareto front comparison, given that the results are in a table format (separated for both FNO and LE-PDE), it may be a bit hard to directly compare the results, and I would advise making, perhaps, a plot (e.g. error vs # parameters, error vs runtime) to better understand the differences. For example, looking at an extract of the table for the 2D table (best LE-PDE model vs default FNO):\n\n| Model | Error | Runtime | Parameter count |\n|-------------|------------------|-------------------|-------------------|\n| FNO (modes=12, width=20 (default setting)) | \t$0.1745$ |\t$42.7 \\pm10.9$\t| $465,717$ |\n| LE-PDE (d_z=256)|\t$0.1861$ | $14.8 \\pm 1.1$ | $3,384,944$ |\n\nWhile LE-PDE is around 3 times faster, the number of parameters is several times the ones of the FNO. In many cases, the number of parameters can be more than 10 times the ones required by the FNO. While I understand the latent parameters are less, it seems this number is due to the learned encoder and decoders. My question then is, how long is the training time for FNO vs LE-PDE? Given the high number of parameters, it looks like it should be several times as much, and I don't recall seeing such a metric in the main text and rebuttals.\n\n2. Similar to the question before, the 3D case also shows a substantial increase in the number of total parameters for the LE-PDE (~ 20$\\times$ as many).\n\nTo summarize, my main concern is about the impact on memory requirements of the proposed model as well as training times.", " Thanks for addressing all my questions, common dimension reduction methods such as VAE indeed perform worse on the PDE evolution task. The newly added ablation study on the number of parameters provides impressive results.\n\nI agree that the authors have performed empirical studies on when high-dimensional PDEs can be reduced to lower dimensions. However, it remains unclear which family of PDEs can be applied the proposed method. \n\nDespite still lacking theoretical soundness, the empirical results are significant and I will raise my score to 5.", " We thank the reviewer for the comments. Below we address the reviewer’s concerns.\n\n> Re1: There has been numerous dimension reduction techniques, PCA, VAEs, just to name a few. Why did the author pursue the proposed method rather than making use of the existing ones? 
Are there any theoretical relationships between the proposed method and the existing ones?\n\nAnswer: there is an important difference between the PDE simulation setting and the setting used by standard VAE. To learn a model that can simulate the evolution of PDE, whose state can change dramatically during the evolution, we need the learned model to generalize over new states encountered, new boundary and initial conditions, etc. Standard setting for VAE, PCA, in contrast, takes a <em>static<em> setting, which only needs to reduce the dimension but does not need to consider the evolution of the state. Thus, standard dimension reduction techniques cannot apply. Take PCA for example. Looking at the Figure 2 of the paper, we can see that the state at t=0 differs dramatically from state at t=20. The basis of PCA obtained at t=0 will clearly result in a poor performance at t=20.\n\nIt is possible to combine existing dimension reduction techniques with our latent evolution model. We have already stated the difference and novelty of our work with prior such works in the “reduced-order modeling” section of Section 2 “Related Work”. In addition, we perform two ablation experiments that explore performing data reduction first and then learn the evolution in latent space: (a) pretrain an autoencoder with states from all time steps, then freeze the autoencoder and train the latent evolution model. This mimics the method in [1]. (b) the encoder and decoder of LE-PDE is replaced with a VAE, first pre-trained with ELBO on all time steps, then freeze the encoder and decoder and train the latent evolution model. All other aspects of the model architecture and training remains the same. The result is shown in the following Table G10 and G11 for the 1D and 2D datasets in Section 4.4 of “Ablation study”.\n\n\nTable G10: Ablation of LE-PDE using pretrained autoencoder or VAE, for 1D dataset E2-50 scenario:\n\nLE-PDE setting | Cumulative error\n:--: | :--:\nLE-PDE (ours) | 1.127\n(a) pretrain autoencoder | 1.952\n(b) pretrained VAE | 1.980\n\n\n\nTable G11: Ablation of LE-PDE using pretrained autoencoder or VAE, for 2D dataset $\\nu$=1e-5 scenario\n\nLE-PDE setting | Cumulative error\n:--: | :--:\nLE-PDE (ours) | 0.1861\n(a) pretrain autoencoder | 0.2105\n(b) pretrained VAE | 0.2329\n\n\n\n\nFrom Table G10 and G11, we see that performing pre-training results in a much worse performance, since the data reduction only focuses on reconstruction, without consideration for <em>which<em> latent state is best used for evolving long-term into the future. On the other hand, our LE-PDE trains the components jointly with a novel objective that not only encourages better reconstruction, but also long-term evolution accuracy both in latent and input space. We also see that VAE as data-reduction performs worse than autoencoder, since the dynamics of the system is deterministic, and having a stochasticity from the VAE does not help.\n\n[1] K. Lee and K. T. Carlberg, “Model reduction of dynamical systems on nonlinear manifolds using deep convolutional autoencoders,” Journal of Computational Physics, vol. 404, p. 108973, 2020\n", " We thank the reviewer for positive review, and glad that the reviewer recognizes the originality, quality, clarity and significance of the work. Below we answer the questions raised by the reviewer.\n\n> Re: Line 75 \"flow probe\" -> \"flow to probe\"? \nLine 89 \"location... that satisfies given\" Remove \"that satisfies\"? 
\nRe: Line 122 \"LE-PDE relieves of local evolution\" I don't understand \"relieves\" in this context - use another word?\n\nAnswer: Fixed in the revised script.\n\n> Re: Line 205 \"The prevents the gradient to pass through to the boundary parameter p such as continuous location.\" I'm struggling to parse this sentence, though I think the point is that it's not possible to backprop through discrete variables.\n\nAnswer: yes, your understanding is correct. Typically the boundary is passed in as a binary mask as input, so a discrete value where gradient cannot pass through. We have improved the sentence in the revised script.\n\n> Re: \"η∈[0,0,2]\" Should this say \"[0,2]\"?\n\nAnswer: actually it is [0,0.2]. Thanks for spotting the typo, and we have fixed it in the revised script.\n\n> Re: Line 324 \"average amount of the advected smoke simulated by the solver\" I don't follow what this means and I can't find the exact details in the supplement.\n\nAnswer: we have improved this sentence to “for the optimized parameter, we measure the total amount of smoke simulated by the solver passing through two respective outlets and take their ratio.”\n\n> Re: In the objective defined in line 127, is the right hand side missing integration over x? If not, then there is a different objective for every x location. How are these combined? (Also, less importantly, what does the subscript d refer to?\n\nAnswer: The right hand side is not missing integration over x. In fact, the integral should be $L_d[a,X]=\\int_{t=t_s}^{t_e}\\ell_d[\\mathbf{u}(t,\\mathbf{x})]dt$ where $\\ell_d$ is a functional that maps a continuous function $\\mathbf{u}(t,\\mathbf{x})$ to a single scalar. The subscript d refers to “design”, as it is close in meaning to “inverse optimization”.\n\n> Re: In Table 1, why is the WEBO5 accumulated error so large if this is the ground truth method?\n\nAnswer: the WEBO5 result in the paper is using the ground-truth solver but evolve on the downsampled spatial resolution. Note that the ground-truth data is simulated with spatial resolution of $n_x=200$, but then is downsampled to $n_x=100$, 50 and 40. The larger spatial interval ignores certain information and results in large accumulated error \n\n\n> Re: As someone not familiar with this field, I found the subsection on \"Deep learning-based surrogate modeling\" reviewed autoregressive methods very clearly. However the 1-sentence summary of \"neural operators\" was too brief for me to understand. Can you expand on the details of \"neural networks that approximate a mapping between infinite-dimensional functions\", as this seems non-trivial to do\n\nAnswer: due to the space limit we had to make it concise. A typical neural operator M maps $[0,T] \\times \\mathcal{F} \\to \\mathcal{F}$, where $\\mathcal{F}$ is a (possibly infinite-dimensional) function space. Let’s take Fourier Neural Operator (FNO) for example, regards the data as samples on spatial locations x and time t of the infinite dimensional solution u(t,x), and design the operator as a kernel integral operator that can give value to the output function on (t,x). Certain discretization (in frequency space) is performed to approximate the kernel integral operator. For more information can see Section 2.3 of [1] or FNO paper [1] .\n\n[1] Brandstetter, Johannes, Daniel Worrall, and Max Welling. \"Message passing neural PDE solvers.\" ICLR 2022, arXiv preprint arXiv:2202.03376 (2022).\n[2] Li, Zongyi, et al. 
\"Fourier neural operator for parametric partial differential equations.\" ICLR 2021, arXiv preprint arXiv:2010.08895 (2020).\n\n\n> Re: Line 107. In what sense is $f$ a partial function?\n\nAnswer: Here the function f takes two inputs, first argument is the state $\\hat{U}_t$ at each time step, the second argument is the static parameter $p$. When we always give $p$ as a constant in the second argument, then $f(\\cdot,p)$ becomes a “partial function” that only takes one argument.", " > Re: Table 1. Why is runtime for FNO not included?\n\nAnswer: In Table 1, the results besides LE-PDE are provided by reference [1], which does not include the runtime for FNO. Since in 1D dataset, MP-PDE generally achieves the best performance, we compare mainly with MP-PDE.\n\n[1] Brandstetter, Johannes, Daniel Worrall, and Max Welling. \"Message passing neural PDE solvers.\" ICLR 2022, arXiv preprint arXiv:2202.03376 (2022).\n\n\n> Re: Line 233-234. The text says you start with nt=200 then downsample. But downsampling is to nt=250, which is a larger value! Is there a typo here? If not, why not keep the original value of nt?\n\nAnswer: thanks for catching the typo. $n_t$ should start with 250. We have updated it in the revised version.\n\n\n> Re: Line 314: \"our LE-PDE's ablated version without latent evolution\". Can you describe this in more detail? I couldn't find a description in the main paper. (Maybe it's somewhere in the supplement - in which case adding a reference would be helpful.)\n\nAnswer: Thanks for asking. In the “LE-PDE's ablated version without latent evolution”, we maintain the encoder and decoder of LE-PDE and removing the latent evolution operator. To predict state $U^{t+1}$, we do $\\hat{U}^{t+1}=decoder(encoder(U^t))$.\n\n> Re: Line 354 \"Increasing M... will be countered by less number of examples (since having to leave room for more steps in the future)\" I don't understand this, can you explain in more detail?\n\nAnswer: Take the 1D dataset for example. Since the temporal bundling steps is 25 and total number of time step is 250, with multi-step M=1, the dataset consists of data with time steps of \n\n(input: [1,2,...25], target [26,27,...50]), \n\n(input: [2,3,...26], target [27,28,...51]), \n\n…\n\n(input: [201,202,...225], target [226,227,...250]), \n\nin total can have 201 number of data.\n\nIf M=4, which means that the model need to rollout 4 steps and compare with target, the dataset would consist of data with time steps of \n\n(input: [1,2,...25], target [26,27,...50,51,...75,76,...100,101,....125]), \n\n(input: [2,3,...26], target [27,28,...51,52,...76,77,...101,102,....126]), \n\n…\n\n(input: [125,126,...150], target [151,...250])\n\nIn total can have only 125 number of data.\n\nWe see that since we need to have a larger length of target with larger M, it results in smaller number of data for each simulation. \n\n**Summary**\n\nThanks again for the detailed comments and questions! We hope that we have answered your questions and resolved your concerns.\n", " > Re2: About the novelty of our method\n\nAnswer: Here we would like to emphasize the novelty of our method (which is also stated in the Introduction and Related Work sections). 
The novelty lies in the following aspects: Compared to existing reduced-order modeling methods, (a) we focus on speeding up both simulation and inverse optimization of more general PDEs using expressive NNs, while most existing works focus only on dimension reduction for forward simulation, typically with limitations of the expressivity of the model (e.g. linear projection) and generality (design for narrower applications with domain-specific architecture); (b) we introduce a novel objective that results in better long-term autoregressive rollout. This is also shown in the above additional experiments Table G10, G11, and Table G5, G6 to reviewer jbxg; (c) We demonstrate competitive performance compared to state-of-the-art deep learning-based models for PDEs that evolve on the input space, and demonstrate the scalability of our method to states with millions of cells per time step. Compared to state-of-the-art models that evolve in input space, we clearly demonstrate the effectiveness of our model in speeding up while maintaining competitive accuracy. Compared to [2, 3] that perform inverse optimization on the input space, we instead perform inverse optimization in latent space, which results in speedup and improved accuracy (Section 4.3 of original submission).\n\n[2] K. R. Allen, T. Lopez-Guevara, K. Stachenfeld, A. Sanchez-Gonzalez, P. Battaglia, J. Ham- rick, and T. Pfaff, “Physical design using differentiable learned simulators,” arXiv preprint arXiv:2202.00728, 2022. [3] Q. Zhao, D. B. Lindell, and G. Wetzstein, “Learning to solve pde-constrained inverse problems with graph networks,” International Conference on Machine Learning, 2022.\n\n> Re3: Did not study when can PDE states be encoded into a lower dimensional state, and.or how many dimension can be get rid of.\nHowever, a theoretical study on what PDEs this reduction can be efficiently applied to is not conducted. \n\nAnswer: In fact, we have studied it in Section 4.4 “Ablation study” in the original submission, which we indicated in the main text which points to Table 6, Table 7 and Fig. 6 in Appendix H. As stated in the text in Appendix H of the original submission, we observe that for 1D dataset, “when latent dimension dz is between 16 and 128, the accumulated MSE is near the optimal of 1 ~ 1.1. It reaches minimum at d_z = 64. With larger latent dimensions, e.g. 256 or 512, the error slightly increases, likely due to the overfitting. With a smaller latent dimension (< 8), the accumulated error grows significantly. This shows that the intrinsic dimension of this 1D problem with temporal bundling of S = 25 steps, is somewhere between 4 and 8. Below this intrinsic dimension, the model severely underfits, resulting in huge rollout error.” Please see Appendix H for more details our empirical study of latent dimension for both 1D and 2D datasets. In general, determining the suitable latent dimension of a physical system is an empirical question, which depends on the system’s intrinsic dimension and the model architecture. Theoretically studying it is out of scope of this paper, and may be an interesting future work.\n\n> Re4: Did not study the number of parameters used against other methods, using an extra encoder and decoder might take extra number of parameters, resulting in unfair advantages over other methods\n\nAnswer: Thanks for the suggestion! 
We have performed additional experiments that do extensive hyperparameter search for state-of-the-art FNO model, where the results are provided in Table G1 and G2 in the response to reviewer 1 (jbxg). In Table G3 and Table G4 in the response to reviewer 1, we also provide the number of parameters for our LE-PDE. Please refer to that response for detailed analysis. As shown in the table, our model typically uses much less number of parameters in the latent evolution model (the deciding component in autoregressive rollout), and sometimes more total parameters due to the encoder and decoder. However, as is shown in Table G1 and G2, with a wide hyperparameter search range of FNO, the reported performance of FNO is already near the optimal. In addition, more parameters do not necessarily lead to better performance, since it increases the chance of overfitting. This is shown clearly in Table G1 where FNO with (modes=16, width=128) underperforms FNO with (modes=16, width=64), as well as in Table 6 and 7 in the original submission that increasing the latent dimension (thus total # parameters) too much results in worse performance.\n", " \n\n> Re5: Figure 1 is unclear. Specifically, is the box on the left the initial condition and boundary condition of the PDE? The schematic in the middle is also unclear.\n\nAnswer: Thanks for pointing it out! We have updated Fig. 1 in the revised version, with better text notations. Hope that this makes the schematic clearer.\n\n> Re6: It would be interesting to see a study of when can the proposed LE-PDE method have a reasonable result. \n\nAnswer: we don’t fully understand the question. Can you clarify it? If we understand correctly, the “reasonable result” means the result measured in the metrics of error and runtime. The main problem we aim to tackle is to speed up the forward simulation and inverse optimization of PDEs. We have shown in extensive experiments (Section 4.1, 4.2, 4.3, and Appendix F) that our LE-PDE achieves significant speedup (which is the main claim of our paper), and as Reviewer bK7j puts it, “the proposed method is always at least competitive with existing methods, and outperforms them in some important examples”. \n\n> Re7: Does the reconstruction take extra data to train? \n\nAnswer: No. As is stated in the Section 3.2 of the paper, the model is trained jointly with the three loss terms.\n", " We thank the reviewer for the feedback. We are glad that the reviewer recognizes the clear introduction, motivation and reproducibility of our work. Below, we address the reviewer’s suggestions/concerns about citation for low-dimensional representation, larger-scale experiment and experiment on influence of noise.\n\n> Re1: The authors state that low dimensional representation exists for high dimensional data (line 49-64). Please have a look at Johnson-Lindenstrauss lemma and cite appropriate literature applicable here.\n\nAnswer: Thank you for the suggestion. Indeed, the Johnson-Lindenstrauss lemma proves the existence of a function that can embed given points in a possibly high-dimensional space into a low-dimensional space without distorting more than a factor of $(1 + \\epsilon)$. On the one hand, we suspect that the lemma is applicable to PDE systems because PDEs are generally defined on infinite dimensional continuous space and require models to have strong generalizing ability. This is contrary to the lemma that only provides a transductive embedding model. 
For continuous space, there are also some classical theorems that assure the embedding of smooth manifolds into Euclidean spaces [1]. But this is also transductive. The theoretical study on inductive embedding for PDE systems is unknown in most cases. We believe that the experiments conducted here show some evidence of the model’s ability to inductively embed states and reveal the existence of a dominant global structure that offers the generalization ability as well as the significant speed-up for the model. We will leave theoretical study on the inductive embedding as future research\n\n[1] Whitney H., The self-intersections of a smooth n-manifold in 2n-space. Ann. of Math. (2) 45, (1944). 220-246.\n\n\n> Re2: The authors may want to try their experiments on a larger scale. Grid size of 64 is too small, as the authors mention dimensions in the millions/billions in the introduction, at least one experiment should demonstrate the efficacy of the proposed method in such a larger scale where other baseline methods are computationally very very expensive and resource consuming.\n\nAnswer: In fact, we have included a 3D experiment which has 4.19 million cells per time step in the original submission, as stated in the end of Section 4.2 which points to Appendix F. In the original submission, we demonstrated (in Table 5 in page 20 of Appendix) that in a challenging 3D chaotic N-S flow setting, LE-PDE only uses 0.084s to evolve to t=40, where an ablation without latent evolution uses 1.03s, ground-truth solver PhiFlow uses 70.80s on GPU and 1802s on CPU. Thus, LE-PDE achieves a 12.3× speed-up compared to the ablation without latent evolution, and 840× speed-up compared to the ground-truth solver on the same GPU, which is significant.\n\nIn addition, per Review 1 (jbxg)’s suggestion, we have also added comparison with current state-of-the-art Fourier Neural Operator (FNO) model in this dataset. The result is in the Table G7 under the Re3 in the response to Reviewer 1 (jbxg). We see that FNO slightly outperforms LE-PDE in terms of long-term rollout error. This is to be expected since LE-PDE uses much less representation dimension (128) than FNO (16.76M). Thus, LE-PDE achieves a much smaller runtime than FNO. This comparison shows that LE-PDE can scale to much larger datasets with millions of dimensions per time step, and achieve significant speedup with minor reduction in performance. We will add this in the revised version of the paper.\n\n\n", " > Re3: Another concern about the proposed method is the applicability of the proposed system in real-world scenarios. Experimental data are often noisy. The authors may want to look at how noise affects the latent space evolution and encoder-decoder performance.\n\nAnswer: Thanks for the suggestion! We have performed additional experiments on how the noise affects the performance, on the representative 1D and 2D dataset used in Section 4.4 “Ablation Study”. Specifically, we add random fixed Gaussian noise to the training, validation and test sets of the dataset, with varying amplitude. The noise is independently added to each feature of the dataset. It is also “fixed” in the sense that once added to the dataset, the noise is freezed and not re-sampled. This mimics the real world setting where random observation noise can corrupt the observation and we never have the ground-truth data to train and evaluate from. 
Below is the result table:\n\nTable G8: 1D dataset (E2-50 scenario) with varying noise amplitude (the amplitude is the standard deviation of the diagonal Gaussian). The value range of the state u(t,x) is within [-2,2].\n\n\nNoise amplitude | cumulative error\n:--: | :--:\n0 (default) | 1.127\n1e-5 | 1.253\n1e-4 | 1.268\n1e-3 | 1.456\n1e-2 | 2.612\n2e-2 | 4.102\n5e-2 | 9.228\n\n\nTable G9: 2D dataset ($\\nu$=1e-5 scenario) with varying noise amplitude. The value range of the state u(t,x) is within [-2,2]\n\n\n\n\nNoise amplitude | cumulative error\n:--: | :--:\n0 (default) | 0.1861\n1e-5 | 0.1880\n1e-4 | 0.1862\n1e-3 | 0.1866\n1e-2 | 0.1897\n2e-2 | 0.1875\n5e-2 | 0.1910\n1e-1 | 0.2012\n\n\nNote that the value range of both datasets are within [-2,2]. From Table G8, we see that LE-PDE’s cumulative error stays excellent (<=1.456) with noise amplitude <= 1e-3, much smaller than state-of-the-art MP-PDE’s error of 1.63 and FNO-PF’s 2.27. Even with noise amplitude of 1e-2, the LE-PDE’s error of 2.612 still remains reasonable. \n\nFrom Table G9, we see that LE-PDE is quite resilient to noise, with error barely increases for noise amplitude up to 2e-2, and only shows minimal increase at noise level of 1e-1. As a context, U-Net’s error is 0.1982 and TF-Net’s error is 0.2268 (Table 2 in main text).\n\nIn summary, in the 1D and 2D datasets, we see that LE-PDE shows good robustness to Gaussian noise, where the performance is reasonable where the ratio of noise amplitude to the value range can go up to 0.25% in 1D and 2.5% in 2D. The smaller robustness in the 1D Burgers’ dataset may be due to that it is a 200-step rollout and the noise may make the model uncertain about the onset of shock formation.\n\n\nRe4: Overall the idea seems interesting, the authors need to substantiate their claims in light of existing literature and possibly a few more experiments.\n\nWith the above response, we hope that we have addressed the reviewer’s concerns. The reviewer is also encouraged to look at our response to Reviewer 1 (jbxg) (Table G1 to G6) for additional experiments that further substantiate our claims.\n", " We thank the reviewer for the positive and detailed feedback. We are glad that the reviewer recognizes the significance, clarity and challenging experimental setting of our work. Below, we address the reviewer’s points on Pareto efficiency, comparison with one existing work, and 3D experiment.\n\n> Re1: It is unclear how the proposed approach would compare against other models with a similar number of parameters. In other words, it is unclear whether the method is Pareto efficient or not since there is no Pareto plot considering an extensive hyperparameter search for both LE-PDE and close competitors such as the FNO.\n\nAnswer: This is a good point. Following the reviewer’s suggestion, we have performed additional experiments that do extensive hyperparameter search for FNO, on the two representative settings of the 1D and 2D datasets used in Section 4.4 “Ablation Study”, which we provide the result table below. We would also like to point the reviewer to the existing ablation study of our LE-PDE in Appendix H, where we have performed extensive hyperparameter search that varies the latent dimension of our LE-PDE, which is the most important hyperparameter in the LE-PDE architecture, and determines the number of parameters in the latent evolution model and the runtime. As a summarization of results, For 1D dataset, LE-PDE Pareto-dominates FNO in error vs. runtime plot. 
For 2D dataset, FNO’s cumulative error is slightly better than LE-PDE, but its runtime is significantly larger. \n\nIn the following, we present two tables (Table G1, Table G2) for FNO hyperparameter search on 1D and 2D datasets in Section 4.4, respectively. For FNO, the most important hyperparameters are the “modes”, which denotes the number of Fourier frequency modes, and “width”, which denotes the channel size for the convolution layer in the FNO. We vary both values starting from the default setting:\n\nTable G1: 1D dataset (E2-50 scenario) with FNO:\n\n| FNO setting* | cumulative error | runtime (full) (ms) | # parameters |\n| :--: | :--: | :--: | :--: |\n| modes=16, width=64 (default setting) | 2.379 | 21.2 $\\pm$ 6.9 | 292249 |\n| modes=16, width=128 | 3.107 | 21.7 $\\pm$ 4.3 | 1138201 |\n| modes=16, width=32 | 2.695 | 22.1 $\\pm$ 7.4 | 78169 |\n| modes=16, width=16 | 2.755 | 21.0 $\\pm$ 5.7 | 23353 |\n| modes=16, width=8 | 4.992 | 17.9 $\\pm$ 1.2 | 9001 |\n| modes=20, width=128 | 2.804 | 20.9 $\\pm$ 1.1 | 1400345 |\n| modes=20, width=64 | 2.626 | 19.3 $\\pm$ 0.9 | 357785 |\n| modes=12, width=64 | 2.899 | 19.6 $\\pm$ 2.2 | 226713 |\n| modes=8, width=64 | 2.240 | 19.7 $\\pm$ 1.3 | 161177 |\n| modes=4, width=64 | 2.326 | 19.2 $\\pm$ 0.9 | 95641 |\n| modes=8, width=32 | 2.366 | 18.2 $\\pm$ 1.0 | 45401 |\n| modes=8, width=16 | 2.505 | 18.1 $\\pm$ 1.2 | 15161 |\n| modes=8, width=8 | 5.817 | 18.4 $\\pm$ 1.2 | 6953 |\n\n*Here the FNO follows the FNO-PF setting implemented in the MP-PDE paper (Brandstetter et al. 2022). The runtime is the average of 100 runs. \n\n\nTable G2:2D dataset ($\\nu$=1e-5 scenario) for FNO:\n\n\nFNO setting | cumulative L2 error | runtime (full) (ms) | # parameters\n:--: | :--: | :--: | :--:\nmodes=12, width=20 (default setting) | 0.1745 | 42.7 $\\pm$ 10.9 | 465717\nmodes=12, width=40 | 0.1454 | 42.7 $\\pm$ 4.2 | 1855977\nmodes=12, width=10 | 0.2016 | 40.3 $\\pm$ 5.4 | 117387\nmodes=12, width=5 | 0.2398 | 45.5 $\\pm$ 7.4 | 29922\nmodes=16, width=20 | 0.1710 | 43.7 $\\pm$ 4.2 | 824117\nmodes=8, width=20 | 0.1770 | 43.1 $\\pm$ 3.1 | 209717\nmodes=4, width=20 | 0.1997 | 43.2 $\\pm$ 4.8 | 56117\nmodes=8, width=10 | 0.2109 | 42.2 $\\pm$ 4.8 | 53387\nmodes=8, width=5 | 0.2415 | 43.3 $\\pm$ 4.3 | 13922\n\n\n", " General response:\nWe thank the reviewers for their thorough and constructive comments. Reviewers agree that our method is simple, flexible, and shows significant speed-up. Based on reviewers’ valuable feedback, we have conducted a number of additional experiments, which resolve the reviewers’ concerns. We have also updated the paper and Appendix in the revised version. The major additional experiments and improvements are as follows:\n\n1. To address the concern about Pareto efficiency, we do additional experiments that perform extensive hyperparameter search for FNO on 1D (Table G1) and 2D (Table G2) datasets, and also provide the number of parameters of our LE-PDE with varying latent dimension, for 1D (Table G3) and 2D dataset (Table G4). Experiments show that LE-PDE pareto-dominates FNO in error vs. runtime plot in most of the cases. They are under the Re1 for Reviewer jbxg. Also added appendix J in the revised version.\n\n2. To complement LE-PDE’s result on 3D dataset with 4.19 million cells per time step, we perform an additional experiment that runs FNO on this 3D dataset. The result table is provided in Table G7 under the Re3 for Reviewer jbxg, which also addresses one of Reviewer MdSE’s concerns. 
The experiment shows that our model achieves significant speed-up while keeping competitive long-term rollout error for the baseline. Detailed experimental results are below. We also updated Appendix F.\n\n3. We additionally compare with Latent Field Model (LFM), a model that also uses latent evolution, but differs from our work in the architecture, objective and experimental evaluation. The result table is provided in Table G5 and G6 under the Re2 for Reviewer jbxg. We see that with LFM objective, the error is larger. Detailed are added at Appendix K. \n\n4. We run an additional experiment that explores how noise influences the performance of our model. It is provided in Table G8 and G9 under the Re3 for reviewer MdSE. We see that LE-PDE shows good robustness to Gaussian noise, where the performance is reasonable where the ratio of noise amplitude to the value range can go up to 0.25% in 1D and 2.5% in 2D. Details are added at Appendix L.\n\n5. We run additional experiments that compare our LE-PDE with the model that pretrains an autoencoder or VAE for dimension reduction, then freeze the autoencoder and train the latent evolution model. Results are shown in Table G10 and G11 for reviewer Mytm. We see that performing pre-training results in a much worse performance. Details are added at Appendix M.\n\nOther concerns are individually addressed in the response to each reviewer. We also emphasize the novelty of our method in the response to reviewer Mytm. \n\nIn summary, through extensive experiments in original submission (in the main text and Appendix) and additional experiments in the following response, we show general applicability and scalability of LE-PDE to different scenarios, and its relative strengths compared with current state-of-the-art models and baselines. We hope that our work makes a useful step to help speed up the forward simulation and inverse optimization of PDEs, pivotal in science and engineering.\n", " For comparison, here we also provide augmented table of our LE-PDE by varying the latent dimension (d_z), for 1D dataset (Table G3) and 2D dataset (Table G4). This includes results in Table 6, 7 in Appendix H but also provides additional information about the number of parameters. Note that we provide both total number of parameters (second last column) and # parameters for latent evolution model (last column). The latter is also a good indicator since during long-term evolution, the latent evolution model is autoregressively applies while encoder and decoder is only applied once. 
So the latent evolution model is the deciding component of the long-term evolution accuracy and runtime.\n\nTable G3: 1D dataset (E2-50 scenario) with LE-PDE:\n\n\nLE-PDE setting | cumulative error | runtime** (full) (ms) | runtime (evolution) (ms) | # parameters | # parameters for latent evolution model\n:--: | :--: | :--: | :--: | :--: | :--:\nd_z=512 | 2.778 | 16.3 $\\pm$ 2.6 | 6.7 $\\pm$ 1.0 | 4043648 | 1314816\nd_z=256 | 2.186 | 15.0 $\\pm$ 0.8 | 6.1 $\\pm$ 0.3 | 2271360 | 329728\nd_z=128 | 1.127 | 14.9 $\\pm$ 1.1 | 6.0 $\\pm$ 0.4 | 1630976 | 82944\nd_z=64 | 0.994 | 14.4 $\\pm$ 1.0 | 5.7 $\\pm$ 0.3 | 1372224 | 20992\nd_z=32 | 1.048 | 14.5 $\\pm$ 0.8 | 5.8 $\\pm$ 0.4 | 1258208 | 5376\nd_z=16 | 1.041 | 14.1 $\\pm$ 0.9 | 5.8 $\\pm$ 0.4 | 1205040 | 1408\nd_z=8 | 21.03 | 14.0 $\\pm$ 0.7 | 5.6 $\\pm$ 0.2 | 1179416 | 384\nd_z=4 | 205.09 | 13.9 $\\pm$ 0.5 | 5.7 $\\pm$ 0.3 | 1166844 | 112\n\n** The runtime value here slightly differs from that in Table 6 in paper due to that the GPU machine was busy at the time of running. Here we make sure the 4 current tables (Table G1 to G4) are run on the same machine and same environment, so the comparison is fair.\n\nTable G4: 2D dataset ($\\nu$=1e-5 scenario) for LE-PDE:\n\nLE-PDE setting | cumulative error | runtime (full) (ms) | runtime (evolution) (ms) | # parameters | # parameters for latent evolution model\n:--: | :--: | :--: | :--: | :--: | :--:\nd_z=512 | 0.1930 | 16.2 $\\pm$ 1.1 | 6.8 $\\pm$ 0.7 | 6467184 | 1313280\nd_z=256 | 0.1861 | 14.8 $\\pm$ 1.1 | 5.8 $\\pm$ 0.4 | 3384944 | 328960\nd_z=128 | 0.2064 | 14.8 $\\pm$ 0.5 | 5.9 $\\pm$ 0.4 | 2089584 | 82560\nd_z=64 | 0.2252 | 14.7 $\\pm$ 0.7 | 6.0 $\\pm$ 0.7 | 1503344 | 20800\nd_z=32 | 0.2315 | 15.0 $\\pm$ 2.1 | 5.9 $\\pm$ 0.5 | 1225584 | 5280\nd_z=16 | 0.2236 | 14.2 $\\pm$ 1.3 | 5.8 $\\pm$ 0.6 | 1090544 | 1360\nd_z=8 | 0.3539 | 14.3 $\\pm$ 0.6 | 5.7 $\\pm$ 0.3 | 1023984 | 360\nd_z=4 | 0.6353 | 14.2 $\\pm$ 0.5 | 5.7 $\\pm$ 0.2 | 990944 | 100\n\n\n\nFrom the comparison, we see that:\n\nFor 1D dataset, LE-PDE Pareto-dominates FNO in error vs. runtime plot. FNO’s best cumulative error is 2.240, and runtime is above 17.9ms, over the full hyperparameters combinations (# parameter varying from 6953 to 1.4M). In comparison, our LE-PDE achieves much better error and runtime over a wide parameter range: for d_z from 16 to 64, LE-PDE’s cumulative error <= 1.05, runtime <=14.5ms, latent runtime <=5.8ms, (which uses 1408 to 82944 number of parameters for latent evolution model, and ~1.2-1.4M total parameters).\n\nFor 2D dataset, FNO’s cumulative error is slightly better than LE-PDE, but its runtime is significantly larger. Concretely, the best FNO achieves an error of 0.1454 while the best LE-PDE’s error is 0.1861. FNO’s runtime is above 40ms, while LE-PDE’s runtime is generally below 15ms and latent evolution runtime is below 6ms. LE-PDE uses larger total number of parameters but much less # parameters for latent evolution model.\n\nWe will add the above tables to the Appendix of the paper and also add a Pareto plot of error vs. runtime for both models.\n\n", " > Re2: About the novelty: there is a possibly missing reference [1] (although not published in conference proceedings), in which the authors evolve a model in the latent space (see Figure 1 and Section 3.1 of the reference), and the main idea of the paper is very similar to the presented one. 
How does this work compare against it?\n\nAnswer: Thanks for pointing us to the Latent Field Model (LFM, reference [1]), which we have added the citation in the revised version. Our LE-PDE differs from LFM in three major aspects: (1) local vs. global representation: to improve speed, LE-PDE requires an MLP in the encoder and decoder, which makes the latent representation global. In comparison, LFM requires the full architecture to be local, so has no MLP in the encoder and decoder. (2) Objective: we introduce novel learning objective, which encourage the matching of values in both the input and latent space after long-term rollout. In comparison, LFM’s objective encourages the matching of time-derivative in both the latent space and in input space, connected by the Jacobian of encoder or decoder. The LFM objective is good with very small time intervals Δt, but will become imprecise with larger time intervals Δt: with larger time intervals where the states change dramatically, the Jacobian of the encoder or decoder w.r.t. input may also change and we may not be able to use the Jacobian at time t to approximate the Jacobian across [t, t+Δt]. On the other hand, since LE-PDE’s objective encourages matching of values, it is valid for even large intervals. (3) Experiment evaluation: LFM is only evaluated in a 1D PDE, while we evaluate on a 1D PDE dataset, a challenging 2D dataset, and a more challenging 3D datasets, as well as inverse optimization problem, demonstrating the wide applicability and scalability of our method.\n\nWe also perform additional experiments to compare our LE-PDE with LFM, in the representative 1D and 2D datasets in Section 4.4. As noted above, LFM differs with LE-PDE in (1) architecture and (2), therefore, we perform the ablation study where we (a) remove MLP in our model (b) use LFM objective, but maintain MLP (c) full LFM: remove MLP, use LFM objective, while all other aspects of training is kept the same. We use PyTorch’s jvp function in autograd to compute the Jacobian-vector product and carefully make sure that our implementation is correct. Below is the comparison table. From the tables, we see that without MLP, it actually results in worse performance (ablation (a)), and with LFM objective, the error is larger, likely due to that the dataset are quite chaotic and LFM may not adapt to the large time range in these datasets. 
From the below tables, we see that removing the MLP actually results in worse performance (ablation (a)), and that with the LFM objective the error is larger, likely because the datasets are quite chaotic and LFM may not adapt to the large time intervals in these datasets.\n\nTable G5: Comparison of LE-PDE with LFM, for the 1D dataset, E2-50 scenario:\n\nLE-PDE setting | cumulative error | runtime (full) (ms) | runtime (evolution) (ms) | # parameters | # parameters for latent evolution model\n:--: | :--: | :--: | :--: | :--: | :--:\nLE-PDE (ours) | 1.127 | 14.9 $\pm$ 1.1 | 6.0 $\pm$ 0.4 | 1630976 | 82944\n(a) without MLP | 7.930 | 17.2 $\pm$ 6.0 | 8.3 $\pm$ 0.4 | 2730368 | 1580544\n(b) with LFM objective | 58.85 | 15.7 $\pm$ 1.5 | 6.5 $\pm$ 0.6 | 1630976 | 82944\n(c) full LFM: without MLP, with LFM objective | 26.12 | 15.7 $\pm$ 1.3 | 8.4 $\pm$ 0.7 | 2730368 | 1580544\n\nTable G6: Comparison of LE-PDE with LFM, for the 2D dataset, $\nu$=1e-5 scenario\n\nLE-PDE setting | cumulative error | runtime (full) (ms) | runtime (evolution) (ms) | # parameters | # parameters for latent evolution model\n:--: | :--: | :--: | :--: | :--: | :--:\nLE-PDE (ours) | 0.1861 | 14.8 $\pm$ 1.1 | 5.8 $\pm$ 0.4 | 3384944 | 328960\n(a) without MLP | 0.2120 | 16.6 $\pm$ 2.2 | 9.2 $\pm$ 0.8 | 2126960 | 1181184\n(b) with LFM objective | 0.4530 | 15.8 $\pm$ 2.3 | 6.2 $\pm$ 0.6 | 3384944 | 328960\n(c) full LFM: without MLP, with LFM objective | 0.6315 | 16.2 $\pm$ 1.9 | 9.1 $\pm$ 0.4 | 2126960 | 1181184\n\nWe have added the comparison to Appendix K of the revised version.
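For clarity on ablation (b), the sketch below shows a derivative-matching loss as in our reading of the LFM objective, computed with torch.autograd.functional.jvp; the module names and sizes are illustrative, not the exact implementation.

```python
import torch
import torch.nn as nn
from torch.autograd.functional import jvp

encoder = nn.Linear(128, 16)        # illustrative stand-in for the encoder
latent_deriv = nn.Linear(16, 16)    # predicts dz/dt from z

def lfm_style_loss(u, du_dt):
    """Match latent time-derivatives: J_encoder(u) @ du/dt vs. the predicted dz/dt."""
    z, dz_dt_true = jvp(encoder, (u,), (du_dt,), create_graph=True)
    return ((latent_deriv(z) - dz_dt_true) ** 2).mean()

lfm_style_loss(torch.randn(8, 128), torch.randn(8, 128)).backward()
```

By contrast, our objective compares decoded and latent *values* after rolling the latent state forward, which stays well-defined even for large time intervals.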
This architecture would generalize the CNN + MLP architecture. If the local graph pooling and global graph pooling is invariant to the permutation of nodes, then this architecture can adapt to added particles or mesh points at test time. This is an interesting direction for future research.\n\n> Re5: Question 2: About the boundary annealing, what is meant by β being a “temperature hyperparameter”? This notation is also not introduced in the main text.\n\nAnswer: First, we apologize for causing confusion by not clarifying the notation of the temperature parameter in the main text. Typically, boundary information in PDE systems is provided as a binary mask which indicates which cells are outside of the simulation domain. This discreteness actually makes the inverse boundary optimization difficult because it is not possible to perform backpropagation through discrete variables. We therefore introduced a continuous boundary mask that interpolates the discrete boundary mask and continuous variables. The parameter β is set in this continuous mask and plays a role in controlling the degree of continuity; if β is small enough, then the mask accurately approximates the discrete boundary mask. Especially, when we perform the inverse boundary optimization, we simultaneously run an annealing technique where large β of the early stage of iteration gradually becomes small. We call β the “temperature” hyperparameter because the parameter gets hard to be updated as β gets “cooler” (i.e., smaller). We include the details on the formulation in Appendix B. We hope that the explanation above and the appendix resolve your question.\n\n> Re6: Minor comments on typos:\n\nAnswer: Thanks for spotting the typos. We have fixed the typos in the revised version.\n\n\nWith the above additional results, we hope that we have resolved the reviewer’s concerns and have strengthened the paper.", " This paper builds on recent advances in deep learning for physical simulation to reduce the computational complexity of learned surrogate models by avoiding local evolution. Instead of updating the values at each discretized point in space, the authors propose to evolve the model in a global latent space. By encoding both states and boundaries into lower-dimensional latent vectors, the model can evolve dynamics in the latent space and recover the inferred states only when needed. The paper also introduces techniques for training, such as a new consistency loss directly in latent space, a fast way of dealing with inverse optimization problems by backpropagating through time in the latent space, and an annealing technique for boundary optimization. Experiments in various PDE settings demonstrate that the proposed model, LE-PDE (Latent-Evolution of PDEs) compares competitively against state-of-the-art methods while requiring a fraction of the other methods’ computational resources. ### Strenghts\n\nThe paper is appropriately placed in the current growing literature on scientific machine learning to create a novel fast model for tackling complex problems in the realm of PDEs. The exposition for the motivation is written crisply and is generally easy to follow. The experiments are performed in challenging settings, such as boundary control with the 2D Navier Stokes equations. 
The results show that very large neural networks are not always necessary to represent complex physics and that “simple” latent MLPs with few parameters can learn such physics evolutions thus considerably lowering computational requirements, which is significant in deep learning for simulation.\n\n### Weaknesses\n\nAlthough the speedups of LE-PDE are considerable, it is unclear how the proposed approach would compare against other models with a similar number of parameters. What if the FNO had fewer parameters, comparable to LE-PDE? Would it still be better than the proposed approach? In other words, it is unclear whether the method is *Pareto efficient* or not since there is no Pareto plot considering an extensive hyperparameter search for both LE-PDE and close competitors such as the FNO.\n\nAbout the novelty: there is a possibly missing reference [1] (although not published in conference proceedings), in which the authors evolve a model in the latent space (see Figure 1 and Section 3.1 of the reference), and the main idea of the paper is very similar to the presented one. How does this work compare against it?\n\nThe experimental section in the Appendix about the 3D extension of the method (3D Navier Stokes) lacks some experimental details and evaluation. The learned model is not compared with other deep learning approaches, so it is difficult to assess how the proposed method would perform against other models (except for the same version of the model without the global latent space evolution). While the comparison in the main text is against other deep learning approaches, here it is against the ground truth solver which is known to be generally much slower than deep learning approaches. Moreover, there is no report on statistical errors but only a qualitative plot (that also shows noticeable artifacts).\n\n[1] Sanchez A, Kochkov D, Smith JA, Brenner M, Battaglia P, Pfaff TJ. Learning latent field dynamics of PDEs, Third Workshop on Machine Learning and the Physical Sciences (NeurIPS 2020)\n\n### Minor comments\n\nThese are primarily typos and do not influence my score:\n\n- Line 235-236: “Burgur’s Equation”\n- Line 355: “sweep spot”\n- Line 707 (Appendix): \"The details of the dataset has already given in […]”\n- Line 748: “To explore how LE-PDE to larger scale turbulent dynamics a […]” 1. GNN+MLPs as a future direction, but it is unclear to me whether this could be possible. How would message passing be realized in a global latent space, though? Could we still use the same latent space even if we added particles or mesh points?\n\n2. About the boundary annealing, what is meant by $\\beta$ being a “temperature hyperparameter”? This notation is also not introduced in the main text. See above. As stated in the main text, no major negative societal impact is to be expected from this work.", " The authors propose a method LE-PDE to efficiently perform forward simulation and inverse optimization of Partial differential equation based models by performing the evolution in latent space. A loss term penalizing 3 terms including the consistency between latent space and the original space is defined, and optimized via backpropagation for the inverse task. Experimental evaluations including an ablation study is provided.\n Strength:\nThe authors put together a very clear introduction and motivation for their work, the method is described clearly in an easily reproducible manner.\n\nWeakness:\nThe authors state that low dimensional representation exists for high dimensional data (line 49-64). 
Please have a look at Johnson-Lindenstrauss lemma and cite appropriate literature applicable here.\n\nThe authors may want to try their experiments on a larger scale. Grid size of 64 is too small, as the authors mention dimensions in the millions/billions in the introduction, at least one experiment should demonstrate the efficacy of the proposed method in such a larger scale where other baseline methods are computationally very very expensive and resource consuming.\n\nAnother concern about the proposed method is the applicability of the proposed system in real-world scenarios. Experimental data are often noisy. The authors may want to look at how noise affects the latent space evolution and encoder-decoder performance.\n Please have a look at the weakness mentioned in the strength and weakness section and address these. Overall the idea seems interesting, the authors need to substantiate their claims in light of existing literature and possibly a few more experiments. Yes, the limitations are discussed in the appendix.", " This paper looks at the problem of expensive time cost when simulating the time evolution of a PDE. Existing methods employ values at different spatial positions at each time step, causing a long simulating time. LE-PDE accelerates by learning a low dimensional latent representation, and evolve the low-dimensional state, rather than the high-dimensional original variable. Optimization is performed upon reconstruction loss of the latent state and the evolution accuracy. Empirical advantages on time cost achieved over previous methods on different PDEs. Strengths: \nPaper demonstrated empirical improvements on both 1D nonlinear PDEs and 2D Navier-Stokes PDE over multiple previous methods. \nModel has a relatively flexible structure.\n\nWeaknesses: \nDid not study when can PDE states be encoded into a lower dimensional state, and.or how many dimension can be get rid of. \nDid not study the number of parameters used against other methods, using an extra encoder and decoder might take extra number of parameters, resulting in unfair advantages over other methods.\nFigure 1 is unclear. Specifically, is the box on the left the initial condition and boundary condition of the PDE? The schematic in the middle is also unclear. It would be interesting to see a study of when can the proposed LE-PDE method have a reasonable result. Does the reconstruction take extra data to train? \nAlso, there has been numerous dimension reduction techniques, PCA, VAEs, just to name a few. Why did the author pursue the proposed method rather than making use of the existing ones? Are there any theoretical relationships between the proposed method and the existing ones? The author proposed to accelerate evolution of PDEs by first reducing its dimension. \nThe idea is intuitive and straight forward, and is proven empirically with different PDEs. \nHowever, a theoretical study on what PDEs this reduction can be efficiently applied to is not conducted, and it is not argued why the proposed reduction method is better than existing ones like VAEs. \nThe method is flexible, and should be able to be accompanied with existing surrogate models and other method.", " The paper proposes a neural PDE solver based on learning how to represent the system with an evolving latent global state.\nThe advantage is reducing the cost of approximating PDE solutions without significantly reducing accuracy.\nThe method can also be used to solve some inverse problems.\n ## Summary (and strengths)\n\n* Originality. 
The paper proposes an approach which is simple to understand and very general (allowing any architecture to be used for the various components e.g. encoders, decoders). Furthermore, it appears novel to me, although I'm not familiar with the neural PDEs literature.\n* Quality. The proposed method is always at least competitive with existing methods, and outperforms them in some important examples. The experiments are thorough, covering all the important ablations I could think of while reading the paper.\n* Clarity. Overall the paper is clearly written and easy to understand, but there are some minor problems discussed below.\n* Significance. The authors make the case for the importance of neural PDE solver well. \n\nIn my opinion this is a good paper and clearly above the acceptance threshold.\n\n## Weaknesses\n\nI found few important weaknesses in the paper. Some minor points are listed below and in the \"Questions\" section.\n\nWhile the paper is generally well written, there are a few typos and confusingly worded sentences.\nMost of these are unimportant, but below I list a few which made the paper harder to understand:\n\n* Line 75 \"flow probe\" -> \"flow to probe\"?\n* Line 89 \"location... that satisfies given\" Remove \"that satisfies\"?\n* Line 122 \"LE-PDE relieves of local evolution\" I don't understand \"relieves\" in this context - use another word?\n* Line 205 \"The prevents the gradient to pass through to the boundary parameter $p$ such as continuous location.\" I'm struggling to parse this sentence, though I think the point is that it's not possible to backprop through discrete variables.\n* Line 236 \"$\\eta \\in [0,0,2]$\" Should this say \"$[0,2]$\"?\n* Line 324 \"average amount of the advected smoke simulated by the solver\" I don't follow what this means and I can't find the exact details in the supplement. ## Major\n\n* In the objective defined in line 127, is the right hand side missing integration over $x$? If not, then there is a different objective for every $x$ location. How are these combined? (Also, less importantly, what does the subscript $d$ refer to?)\n\n* In Table 1, why is the WEBO5 accumulated error so large if this is the ground truth method?\n\n## Minor\n\n* As someone not familiar with this field,\nI found the subsection on \"Deep learning-based surrogate modeling\" reviewed autoregressive methods very clearly.\nHowever the 1-sentence summary of \"neural operators\" was too brief for me to understand.\nCan you expand on the details of \"neural networks that approximate a mapping between infinite-dimensional functions\",\nas this seems non-trivial to do.\n* Line 107. In what sense is $f$ a partial function?\n* Table 1. Why is runtime for FNO not included?\n* Line 233-234. The text says you start with $n_t=200$ then downsample. But downsampling is to $n_t=250$, which is a larger value!\nIs there a typo here? If not, why not keep the original value of $n_t$?\n* Line 314: \"our LE-PDE's ablated version without latent evolution\". Can you describe this in more detail?\nI couldn't find a description in the main paper.\n(Maybe it's somewhere in the supplement - in which case adding a reference would be helpful.)\n* Line 354 \"Increasing M... will be countered by less number of examples (since having to leave room for more steps in the future)\"\nI don't understand this, can you explain in more detail? 
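On the neural-operator question raised above: one concrete instance is a Fourier-type operator layer, in which learned weights act on a truncated set of Fourier modes rather than on grid points, so the same parameters define a map that can be applied to functions sampled at any resolution. A minimal sketch in the style of FNO (the channel count and mode truncation are illustrative; this is not the reviewed paper's model):

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """One Fourier-operator layer: FFT -> keep `modes` low frequencies ->
    learned complex channel mixing -> inverse FFT. The weights act on Fourier
    modes, not grid points, so the layer is resolution-independent, which is
    one concrete sense in which it maps between (discretized) functions."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        self.weight = nn.Parameter(
            torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels)

    def forward(self, u):                      # u: (batch, channels, n_grid)
        u_hat = torch.fft.rfft(u)              # (batch, channels, n_grid // 2 + 1)
        out = torch.zeros_like(u_hat)
        out[..., :self.modes] = torch.einsum(
            "bim,iom->bom", u_hat[..., :self.modes], self.weight)
        return torch.fft.irfft(out, n=u.size(-1))

layer = SpectralConv1d(channels=16, modes=8)
u = torch.randn(4, 16, 64)                     # any resolution with enough modes
v = layer(u)                                   # (4, 16, 64)
```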
The paper covers technical and ethical limitations clearly in the supplementary material.\nI agree with the authors' assessment that there are no obvious negative social impacts.\nI also appreciate that the authors included experimental results where existing methods (slightly) outperform the proposed approach.\nThis helps readers get a clear picture of the relative strengths of the methods in different scenarios." ]
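Several of the reviews above single out the consistency loss defined directly in latent space. As a generic illustration of how such a term combines with reconstruction and prediction objectives, here is a minimal sketch (the equal weighting, the residual update, and all module names are assumptions for illustration; the paper's exact multi-step objective differs):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def surrogate_losses(encoder, latent_step, decoder, u_t, u_next):
    # Three generic terms for a latent surrogate: reconstruction, decoded
    # one-step prediction, and consistency enforced purely in latent space.
    z_t = encoder(u_t)
    z_next = z_t + latent_step(z_t)
    loss_recon = F.mse_loss(decoder(z_t), u_t)
    loss_pred = F.mse_loss(decoder(z_next), u_next)
    loss_consist = F.mse_loss(z_next, encoder(u_next))  # no decoding involved
    return loss_recon + loss_pred + loss_consist

enc, step, dec = nn.Linear(100, 16), nn.Linear(16, 16), nn.Linear(16, 100)
u_t, u_next = torch.randn(8, 100), torch.randn(8, 100)
loss = surrogate_losses(enc, step, dec, u_t, u_next)
loss.backward()  # gradients reach all three components jointly
```

The consistency term is the one computed without any decoding, which is what keeps long latent rollouts cheap to supervise.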
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 3 ]
[ "qTGpjLv61dN", "wbEcYR-aolD", "MCGgTho68yQ", "o_mqVYBZFUG", "XDFxOpXsw-", "tYVtii66alA", "KIsnb7cTFc", "o_mqVYBZFUG", "o_mqVYBZFUG", "KIsnb7cTFc", "KIsnb7cTFc", "Yvsk7vvSCsb", "Yvsk7vvSCsb", "VqGOaHv9wJp", "nips_2022_xvZtgp5wyYT", "VqGOaHv9wJp", "VqGOaHv9wJp", "VqGOaHv9wJp", "nips_2022_xvZtgp5wyYT", "nips_2022_xvZtgp5wyYT", "nips_2022_xvZtgp5wyYT", "nips_2022_xvZtgp5wyYT" ]
nips_2022_dcmp81De77k
Localized Curvature-based Combinatorial Subgraph Sampling for Large-scale Graphs
This paper introduces a subgraph sampling method based on curvature to train large-scale graphs via mini-batch training. Owing to the difficulty in sampling globally optimal subgraphs from large graphs, we sample the subgraphs to minimize the distributional metric with combinatorial sampling. In particular, we define a combinatorial metric that distributionally measures the similarity between an original graph and all possible node and edge combinations of the subgraphs. Further, we prove that the subgraphs sampled using the probability model proportional to the discrete Ricci curvature (i.e., Ollivier-Ricci curvatures) of the edges can minimize the proposed metric. Moreover, as accurate calculation of the curvature on a large graph is challenging, we propose to use a localized curvature considering only 3-cycles on the graph, suggesting that this is a sufficiently approximated curvature on a sparse graph. In addition, we show that the probability models of conventional sampling methods are related to coarsely approximated curvatures with no cycles, implying that the curvature is closely related to subgraph sampling. The experimental results confirm the feasibility of integrating the proposed curvature-based sampling method into existing graph neural networks to improve performance.
Reject
The majority of reviewers consider that this paper should be rejected. Their concerns include the clarity of the presentation, a missing comparison to previous work, and a number of individual points that were not addressed during the rebuttal period.
train
[ "_uqh6Dev8VS", "wlXw2BZy1Z", "W4Sv97WtfEs", "opDziE1qWu5" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The paper proposes a curvature-based graph subsampling method that aims at sampling structurally representative subgraphs via Olliver's Ricci curvature. *Strength*\nThe paper addresses an important topic with geometric tools that are not very well explored in this context. \n\n*Weaknesses*\n- The writing should be improved. In terms of clarity and presentation, I don't think that the paper in its current form meets the standards of NeurIPS. Examples: theoretical results are stated as propositions in the main text with no indication of whether and where proof can be found. The actual downstream tasks on which you benchmark your algorithm are not described in the main text and there are no clear references to the appendix. \n- It is not clear, whether your approach is computationally efficient and how its efficiency compares to the other methods you compare against. This should be reported in addition to a comparison of the achieved accuracy. \n- The literature review is not very extensive. Related work on applications of curvature in network analysis is only briefly mentioned in section 2; importantly, related methods on *curvature-based sampling* are only mentioned in the appendix. This should be moved to the main text.\n- The proposed theoretical guarantees for the curvature approximation are given for the Erdos-Renyi (ER) model only. This is not necessary a good model for real network data. In addition, the structure of ER graphs varies very significantly with the edge threshold, which is not even commented on. Do your results hold for all regimes? \n- What are the values in Tab. 1? How are they computed?\n see above see above", " This paper introduces a subgraph sampling method based on curvature to train large-scale graphs via mini-batch training. They define a combinatorial metric that distributionally measures the similarity between an original graph and all possible node and edge combinations of the subgraphs. This metric is used as an evaluation metric for how good the sampling is. They found that sampling the edges with large curvatures is equivalent to reducing the distributional difference. \npros:\n1. The motivation is well explained. It is true that the large graph is computationally infeasible, especially in GCN. Network sampling is in desperate need. \n 2. The proposed method for sampling is novel. \n 3. The writing and presentation are nice. \n\ncons: \nThis paper emphasizes too many selling points, such as proposing a distribution metric, proposing a curvature-based sampling, proposing localized curvature calculation, and improving the accuracy of GCN. But I have two questions. (1) Regarding the distribution metric, I am confused about the innovation of this metric. As for me, this metric is the same as the definition of Gromov-Wasserstein distance. Please clarify the difference. (2) Regarding the localized curvature calculation, which basically uses the lower bound which is derived in [19]. What's the innovation in this paper? \n\n\n\n\n\n\n\n\n\n \n1. Since the sampling probability p_xy is proportional to k_xy, and k_xy can be negative, how can you get a negative probability? \n\n2. Proposition 3 shows the lower error bound of localized curvature. I am not sure whether this lower bound is small. Since the range of Ollivier Ricci curvature is usually between -1 and 1, the lower error bound between \\hat{k_xy} and k_xy should not be greater than 2. 
Please provide more statements about when the error bound is small, and when it is large, either in a theoretical way or an empirical experiment way. \n It is not clear what are the drawbacks or challenges of existing network sampling methods and how the authors address them.", " The paper presents a new method for picking samples consisting of relatively small subgraphs of a very large graph so that the selected subgraphs are representative of the large graph for learning and classification. It is stated that this process improves tasks involving classification of the graph using the small samples, eg as in GNNs. The sampling method is based on constructing distributions on nodes and edges of the graph which reflect their discrete Ricci curvatures. Subsequently, several key observations about the localized curvature-based node/edge distributions are stated and proved. Numerical experiments are provided at the end which support the claim of representativeness of the samples for graph learning and classification. \nThe methodology presented is theory-based and is computationally manageable. The theoretical results are discussed at some length in the supplementary material provided. \n\nThree key weaknesses of this paper are 1) Why do sampled subgraphs (segments of the very large graph one wishes to learn) used in feature learning need to be similar in any way to the larger graph, the enormous discrepancy between their node/edge sizes notwithstanding, 2) what actual graph classification tasks did the computational experiments solve? and 3) How does the proposed method compare with prior art? \n \nIt would be helpful if the authors could address each of the questions raised above:\n\n1) Why do sampled subgraphs used in feature learning need to be similar in any way to the larger graph? Would it not be better if these samples cover the spectrum of variations that subgraphs of a fixed size actually exhibit? \n\n2) what actual graph classification tasks did the computational experiments solve? More specifically, what are the classification/learning problems in ogbn-arxiv and ogbn-mag tasks which represent the problems set out to handle via small subgraph sampling?\n\n3) How does the proposed method compare with prior art? The authors cite much prior work. Are the rows labeled random, neighbor, node, edge, rw, cluster and ppr the best known prior sampling methods and if so could the authors remind the reader what classification learning tasks obgn consisted of? \nNA", " The paper introduces an approach to (large) graph subsampling based on positive Ollivier Ricci curvature preservation. This is motivated by theoretical analysis showing that local structure can be better preserved by sampling positively curved edges. Experiments are conducted to support the findings on tasks like node-classification. Originality and impact: \n\nAs far as I can tell, the core idea of leveraging directly the Ollivier curvature by relying on positively curved edges for sampling procedures is new. Some of the theoretical findings are in my opinion not that significant though.\n\n- Proposition 1 feels a lot like simply `shifting' the problem with the distance d_{m} that should be defined in terms of random walks Markov chains exactly as for the Ollivier curvature. 
Similarly, Proposition 3 states something relatively obvious, namely that for sparse graphs (where higher-order structures are in fact less frequent on average) neglecting higher-order structures in the curvature computation would indeed not be that costly.\n- Partly connected to the first point, results like Proposition 2 and Proposition 3 feel more like `gap-filling' and less coherent with the narrative.\n\n\nQuality and soundness: \n\nThe presentation is poor and the general soundness is at points questionable. Some instances are listed below (but the actual list is longer):\n- Lines 102--104: the message should be better fleshed out here.\n- Is Definition 1 novel? This seems like the OT distance definition, yet there is no reference. Also line 110 `We define... around the nodes' is not clear.\n- Where is equation (2) derived? \n- The `local structural graph' used throughout the paper $\\mathcal{G}_{x}$ is never introduced -- I can only guess what that is.\n- I don't quite follow the first inequality in (23), could you elaborate?\n- What is $\\nabla_{x_{i}y_{i}}$ in (7)?\n- The sentence `The gradient is proportional.. in the path' at line 194 is unjustified.\n- In the proof of Proposition 2, how does (30) derive from (29)? If $m$ is not supported in $y$, then we should have $(D^{-1}Af)(y)$ and not $f(y) + \\Delta f (y)$.\n- In (31) what is $\\nabla_{yx}$? \n- In (36) there should not be an equality, since we are bounding (32) and (33) from above? \n- From the previous points, I am doubtful about the correctness of Proposition 2.\n- I don't follow paragraph 272-277, and similarly the scenario for the node-classification task is not clear (see Questions section as well).\n- Appendices are not useful, mostly containing parts copied/reported from existing references, and fail to further clarify and explain the paper.\n\n\n The more technical/theoretical/presentation questions are already listed in the section above.\n\nFurther questions:\n\n- How do you compare to the baseline in terms of complexity and time?\n Limitations have not been addressed." ]
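A note for readers following the curvature discussion in the reviews above: the exact Ollivier-Ricci curvature of an edge is defined via a small optimal-transport problem between the neighbor measures of the edge's endpoints, which is what the localized, 3-cycle-only version approximates. A self-contained sketch, assuming non-lazy uniform neighbor measures (many papers instead use a lazy walk with an idleness parameter, which changes the values):

```python
import numpy as np
from collections import deque
from scipy.optimize import linprog

def bfs_dist(adj, src):
    # Hop distances from src in an unweighted graph.
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def ollivier_curvature(adj, x, y):
    # kappa(x, y) = 1 - W1(mu_x, mu_y) for an edge (x, y), where mu_x is the
    # uniform measure on the neighbors of x (non-lazy random walk).
    Nx, Ny = sorted(adj[x]), sorted(adj[y])
    mu = np.full(len(Nx), 1.0 / len(Nx))
    nu = np.full(len(Ny), 1.0 / len(Ny))
    D = {u: bfs_dist(adj, u) for u in Nx}
    cost = np.array([[D[u][v] for v in Ny] for u in Nx], dtype=float)
    n, m = cost.shape
    A_eq, b_eq = [], []
    for i in range(n):                       # row marginals: sum_j T[i, j] = mu[i]
        row = np.zeros(n * m)
        row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row)
        b_eq.append(mu[i])
    for j in range(m):                       # column marginals: sum_i T[i, j] = nu[j]
        col = np.zeros(n * m)
        col[j::m] = 1.0
        A_eq.append(col)
        b_eq.append(nu[j])
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None))
    return 1.0 - res.fun

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}      # a triangle
print(ollivier_curvature(adj, 0, 1))         # 0.5: a positively curved edge
```

The linear program grows with the neighborhood sizes, which is exactly why a large-graph method would prefer a triangle-count approximation over solving this per edge.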
[ 3, 4, 5, 3 ]
[ 4, 4, 3, 4 ]
[ "nips_2022_dcmp81De77k", "nips_2022_dcmp81De77k", "nips_2022_dcmp81De77k", "nips_2022_dcmp81De77k" ]
nips_2022_JY6fLgR8Yq
Graph Self-supervised Learning with Accurate Discrepancy Learning
Self-supervised learning of graph neural networks (GNNs) aims to learn an accurate representation of graphs in an unsupervised manner, to obtain transferable representations for diverse downstream tasks. Predictive learning and contrastive learning are the two most prevalent approaches for graph self-supervised learning. However, each has its own drawbacks. While predictive learning methods can learn the contextual relationships between neighboring nodes and edges, they cannot learn global graph-level similarities. While contrastive learning can learn global graph-level similarities, its objective of maximizing the similarity between two differently perturbed graphs may result in representations that cannot discriminate between two similar graphs with different properties. To tackle such limitations, we propose a framework that aims to learn the exact discrepancy between the original and the perturbed graphs, coined Discrepancy-based Self-supervised LeArning (D-SLA). Specifically, we create multiple perturbations of the given graph with varying degrees of similarity and train the model to predict whether each graph is the original graph or a perturbed one. Moreover, we further aim to accurately capture the amount of discrepancy for each perturbed graph using the graph edit distance. We validate our D-SLA on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction, on which our method largely outperforms relevant baselines.
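One practical point implied by this abstract is worth illustrating: because D-SLA generates the perturbations itself, the graph edit distance used as a learning signal can be obtained by counting edits as they are applied, with no GED solver. In the sketch below, the edit mix and the use of the raw edit count are simplifying assumptions (independently sampled edits could in principle cancel, making the count an upper bound on the true GED):

```python
import random

def perturb_with_known_ged(edges, num_nodes, n_edits, rng=random):
    # Apply n_edits random edge removals/additions, counting each one, so the
    # edit distance to the original graph is known by construction.
    edge_set, ged = set(edges), 0
    while ged < n_edits:
        if edge_set and rng.random() < 0.5:              # remove an existing edge
            edge_set.remove(rng.choice(sorted(edge_set)))
            ged += 1
        else:                                            # add a new edge
            u, v = rng.sample(range(num_nodes), 2)
            if (u, v) not in edge_set and (v, u) not in edge_set:
                edge_set.add((u, v))
                ged += 1
    return sorted(edge_set), ged

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
# Perturbations with increasing, exactly known edit distances 1, 2, 3:
views = [perturb_with_known_ged(edges, num_nodes=4, n_edits=k) for k in (1, 2, 3)]
```

This bookkeeping is what the rebuttals below refer to as obtaining the discrepancy supervision at near-zero cost.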
Accept
This paper proposes a novel self-supervised learning strategy that considers the quantitative discrepancy between two perturbed graphs, measured by the graph edit distance. The major concerns relate to the motivation of the proposed approach. This has been well addressed in the authors’ rebuttal, with additional new experiments. The authors have done a great job of addressing this main concern and other questions raised by the reviewers, such as ablation studies on the major hyperparameters. The contribution of incorporating a graph-level quantitative metric as an additional self-supervision signal is clear. Although the final ratings are divided, I still recommend acceptance of this paper.
train
[ "bRvq5uXKUdC", "cKWIfYnr41N", "mdqOPwdQbjJ", "-7wlig-d9EH", "pOpSgOPGcpc", "RU-M4cpu85T", "XAEpr-WZmTs", "OfRPwcAJko_", "n6wK--vGMy", "LI4XPn5dt2m", "pvIgLBmrWb", "lGXLuaqFND7", "Ek4znY_Lnrf", "q498xxH2nua", "VuVio-U37T", "swKw59daFPz", "hmPQnk8ugzp", "bEIpzfZ-84h", "iXDH0ijbxeX", "ULVLpe7cYqR", "qpdmGV6iegN", "c83OwEyH1bn", "0KdXGuPIKVP" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Q1. This work is not well motivated, since there exist works [1, 2, 3] that perform SSL on graphs without perturbing graphs. \n\nA1. This is a critical misunderstanding of our motivations. Please note that one of our main motivations is to **learn the exact discrepancies between different graphs**, and we use graph perturbation to create **slightly different graphs**. None of the existing works, including [1,2,3], aims to learn the exact discrepancy between two graphs, and **whether the baselines perturb the graphs or not does not affect our motivation, since none of them can learn how similar/dissimilar different graphs are, which ours aim to learn**. \n\nTo go one step further regarding your misunderstanding, which we hope to be clarified in this response comment, “they [1, 2, 3] have comprehensively justified that graph perturbations are not necessary for graph contrastive learning”, yes we did point out the limitation of graph contrastive learning, but we also pointed out the limitation of these **baselines without perturbation**, in their inability to learn the exact difference between two graphs, in our paper as well as previous responses. \n\nWe are sorry if our previous response leads you to a critical misunderstanding, but please see our Abstract in Lines 10-12 (and Introduction in Lines 67-69) that we aim to learn the exact discrepancy between graphs to discriminate two similar graphs with different properties. Once again, the suggested works [1, 2, 3] cannot discriminate between two similar graphs with the amounts of their exact distances, since they do not learn how similar the two graphs are, regardless of using the perturbations or not. \n\nDuring the rebuttal period, while we experimentally showed that our D-SLA significantly outperforms the suggested baselines in the graph classification task, you argued that graph classification tasks are not convincing enough to show the limitation – failure in exact discrepancy learning between graphs. However, the superior performances of our D-SLA come from its exact discrepancy learning, and that is, since baselines cannot learn the exact differences among graphs, they are suboptimal on downstream tasks. Furthermore, as shown in **our additional experiments on a link prediction task (See Table R5 in this response comment or Table S8 in the supplementary file)** our D-SLA also significantly outperforms baselines on this task as well due to its effectiveness in discrepancy learning. \n\nHowever, you might still be less convinced despite our two quantitative experiments above, and you might still have questions about 1) why exact discrepancy learning is helpful in downstream tasks, and 2) does suggested baselines really fail to learn such discrepancies between graphs. Regarding question 1), we already provided the evidence in our main paper in Table 3 and Table 4 that exact discrepancy learning is necessary for identifying similar graphs with different properties. \n\nThen, for the next question, **we visualize the learned embedding spaces with two different distance metrics, such as edit distance and Tanimoto similarity, in Figure S5 and Figure S6 of the supplementary file**, respectively. And, the embedding space shows that our D-SLA can capture the exact amount of discrepancy between different graphs, whereas **the suggested augmentation-free approaches cannot capture the topological differences of graphs**. 
Thus, to summarize, the discussion and the corresponding results above suggest that the baselines are limited in learning the exact discrepancies, whereas our D-SLA can capture them; this is one of our main motivations and is what makes our method effective.\n\n\n| | **COLLAB** | | **IMDB-B** | | **IMDB-M** | |\n|--------------|----------------|----------------|---|---|---|---|\n| | ROC-AUC | AP | ROC-AUC | AP | ROC-AUC | AP |\n| SimGCL | 81.56 ± 1.10 | 77.46 ± 0.86 | 76.29 ± 2.37 | 64.91 ± 2.60 | 74.60 ± 2.21 | 63.78 ± 2.28 |\n| SimGRACE | 78.79 ± 1.07 | 74.51 ± 1.54 | 75.64 ± 2.40 | 64.49 ± 2.79 | 73.44 ± 2.15 | 62.81 ± 2.32 |\n| **D-SLA (Ours)** | **88.14** ± 0.32 | **86.21** ± 0.38 | **86.64** ± 1.41 | **78.54** ± 2.79 | **78.53** ± 1.51 | **69.45** ± 2.29 |\n\nTable R5. Fine-tuning results on link prediction.\n\n---\n\nQ2. Reviewer NxMJ also raised similar concerns.\n\nA2. Last but not least, you mentioned that Reviewer NxMJ raised a similar concern about our motivation. However, Reviewer NxMJ's concern is different from yours, since his/her main concern is that **the objective of our discrepancy learning might not work in social networks**. Regarding the answer to this question, please see our response to Reviewer NxMJ in Questions 3 and 4, in which we believe we clearly address that concern.", " Thank the authors for answering my questions. Regrettably, the authors did not resolve my major concern about the motivation of this work. Specifically, the motivation of this work is based on the authors' argued limitation of contrastive learning, i.e., \"*two differently perturbed graphs may result in representations that cannot discriminate two similar graphs with different properties*\". However, as I have pointed out in Question 1, there have been some recent advances in self-supervised learning (SSL) of GNNs, i.e., some recent works are able to effectively perform SSL on graphs without perturbing graphs. The given references [1,2,3] are three such example works. They have comprehensively justified that graph perturbations are not necessary for graph contrastive learning. They have also verified this for various tasks and on various widely-used datasets. Although the authors have experimentally compared studies [2,3] through the graph classification task, this is less convincing, because, in addition to the given references [1,2,3], we can reasonably believe that there are more recent works that do not have the limitation argued by the authors.\n\nFinally, I have also read the questions raised by the other reviewers. Reviewer NxMJ also raised concerns about the motivation of this work. \n\nBased on the above reasons, it seems that this work may require significant improvements, which cannot be completed during the camera-ready phase. Therefore, I would like to lower my rating from 5 to 4.", " Dear Reviewer NxMJ\n\nWe sincerely appreciate your positive comments on the importance of our tackled problems as well as our comprehensive experiments. During the response period, we have made every effort to faithfully address all your concerns/comments in the initial responses below. Here we briefly summarize the main points as follows:\n* We clarified that the objective of our discrepancy learning is **significantly different from conventional contrastive learning** and further demonstrated the differences between them by recapitulating the experimental results.
For one particular example, **our objective is to learn the subtle difference** between two similar graphs on the graph-level representations, unlike contrastive learning which fails to do so.\n* We have clarified that optimizing our objective functions **allows the model to learn the local information** at the component level, and, to show the effectiveness of learning local information, we conducted experiments on the link prediction tasks. Also, we have further analyzed learned local information by our D-SLA **by measuring the discrepancy among local representations**.\n* We have further demonstrated the effectiveness of our hyperparameters regarding $\\lambda_1$ and $\\lambda_2$, by varying them.\n\nSince the end of the discussion phase is approaching, could you please go over our responses? Please let us know if you have anything else that we should address. We believe that including all the clarification and additional results in the response comments to our paper will significantly improve ours. And we thank you again for your time and efforts in reviewing our paper as well as your insightful and constructive comments.\n\nBest regards, Authors\n", " Dear Reviewer 3WcA\n\nWe sincerely appreciate your positive comments that we clearly motivate the drawbacks of existing graph SSL methods, we effectively visualize the strengths of our methods, and we provide our source code. During the response period, we have made every effort to faithfully address all your comments in the responses below. Here we briefly summarize the main points of our response as follows:\n* We have clarified that the suggested AF-GCL [1] still suffers from the drawback of contrastive learning. Then, we have validated that **the data augmentation is a key factor for obtaining transferable representations**, by showing that **our discrepancy learning outperforms suggested SimGCL [2] and SimGRACE [3]**.\n* We have clarified the effect of our graph discrimination task via the visualization of the embedding space, on which we have explained **the intuitive working principle of our graph discriminator**.\n* We have demonstrated the sensitivity of our hyperparameters regarding $\\lambda_1$, $\\alpha$, and $\\lambda_2$, by varying them.\n* We have demonstrated that our D-SLA **outperforms** baselines **on ROC-AUC and AP scores** measures as well.\n\nSince the end of the discussion phase is approaching, could you please go over our responses? Please let us know if you have anything else that we should address. We thank you again for your time and efforts in reviewing our paper, and sincerely appreciate your insightful and constructive comments.\n\nBest regards, Authors\n\n---\n\n*References*\n\n[1] [arXiv 2022] Augmentation-Free Graph Contrastive Learning\n\n[2] [SIGIR 2022] Are Graph Augmentations Necessary Simple Graph Contrastive Learning for Recommendation\n\n[3] [WWW 2022] A Simple Framework for Graph Contrastive Learning without Data Augmentation", " Dear Reviewer yyyE\n\nWe sincerely appreciate your positive comments on the soundness of our proposed method, the clarity of our paper, and the significance of the experimental results. 
We have made every effort to faithfully address all your comments in the responses, and here we briefly summarize the main points of our response as follows:\n* We have demonstrated the sensitivity of our hyperparameters by varying them and then provided the guidelines for the choice of hyperparameters.\n* We have clarified **the necessity of joint training** of the graph discrimination and exact discrepancy learning tasks. \n\nSince the end of the discussion phase is approaching, could you please go over our responses? Please let us know if you have anything else that we should address. We thank you again for your time and efforts in reviewing our paper as well as your constructive comments.\n\nBest regards, Authors\n", " Dear Reviewer BmqR\n\nWe sincerely appreciate your positive comments in regard to the quality and clarity of our paper. During the response period, we have made every effort to faithfully address all your concerns/comments, given in detail below. In short,\n* We have clarified the significance of our work in two folds. At first, **we tackle a fundamental problem of conventional graph contrastive learning** which has been overlooked in the graph domain. Also, we have clarified **the difference between conventional contrastive learning and our discrepancy learning** by comparing each component of our discrepancy learning against conventional contrastive learning.\n* We have explained the objective of our perturbation strategy and then clarified **the difference in perturbation strategies** between ours and conventional contrastive learning.\n* We have clarified that our graph discriminator is not affected by the class imbalance problem.\n\nSince the end of the discussion phase is approaching, could you please go over our responses? Please let us know if you have anything else that we should address. We thank you again for your time and efforts in reviewing our paper, and sincerely appreciate your insightful comments. \n\nBest regards, Authors\n", " We sincerely appreciate your time and effort in reviewing our papers, as well as the constructive comments and valuable suggestions. Here, we clarify our novelty and contribution in the following two aspects: **framework-level** and **component-level**. For the other points you asked or raised beside the main contributions, please refer to our responses to each question in the comments for each reviewer.\n\n---\n\n## Framework-level novelty\n* We first want to emphasize that our main contribution is the **learning of subtle differences between two similar graphs**, which has been overlooked in conventional graph contrastive learning. In other words, we aim to learn a discriminative representation space, since graphs are discrete data structures and two slightly different graphs may have drastically different properties. We find that learning the subtle difference between two similar graphs is significant to obtain transferable representations as shown in Tables 2 and 4. We strongly believe that our framework cannot be treated as similar to conventional contrastive learning, but rather be significantly different.\n* Our discrepancy learning framework aims to learn the local difference **on graph-level representations**, by tackling that predictive learning cannot learn the global representations. 
We find that our proposed framework can **capture not only the local semantics but also the global semantics** as we demonstrated in Tables 2 and 4, and Figure 5.\n* Our discrepancy learning framework is a **general** graph self-supervised learning method that is applicable to diverse domains not limited to molecular graphs, but also to biological and social networks since our proposed framework can capture both local and global information.\n\n---\n\n## Component-level novelty\nWe now highlight the difference between our components and contrastive learning.\n* Our graph discrimination task in Section 3.2 forces the model to **embed two similar graphs into distinct representations** (Figure 3 (b)), which is completely opposite from the objective of contrastive learning.\n* Our discrepancy learning with graph edit distance in Section 3.3 allows the model to **learn the discrete embedding space** according to discrete graph structure even if two graphs are highly similar (Figure 3 (c)), whereas contrastive learning continuously attracts the embeddings of similar graphs and cannot learn the discrete embedding space.\n* We facilitate learning the exact amount of discrepancy by leveraging graph edit distance which is computed with **near-zero cost** when performing graph perturbations.\n", " We sincerely thank you for the constructive and helpful comments. We appreciate your comments that the methodologies and implementations of our proposed method are clearly written and experimental results including analysis well support our argument. We address all your concerns below:\n\n--- \n\n**Question 1.** I would like to understand the novelty of the framework as how the discrepancy framework is different from redefining the similarity functions, and then use existing methods. The significance is not quite obvious, and would appreciate authors' further clarifications.\n\n**Answer 1.** The significance of our work is **tackling a fundamental assumption** of conventional graph contrastive learning approaches which overlooks the fact that graphs are discrete structures, and we proposed a novel framework with a **completely opposite objective** from conventional contrastive learning to tackle the problem. \n\nGraph contrastive learning approaches assume that the perturbed graphs are similar to the original one. However, we argue that such a fundamental assumption does not hold in the graph domain, observing that similar graphs could have largley different properties.\n\nTo this end, we do not redefine the similarity, but rather propose completely opposite objectives from the conventional contrastive learning schemes as described in line 67-69.\n\nSpecifically, contrasting to the conventional contrastive learning methods, our graph discriminator treats the **perturbed graphs as dissimilar** and enforces the model to **embed them apart** in the latent space (Figure 3 (b)). 
Also, we propose a method to learn the exact amount of discrepancy between the perturbed graphs and the original graph, whereas conventional contrastive learning try to maximize the similarity between them.\n\nBy tackling that the fundamental assumption in conventional contrastive learning does not hold in the graph domain, we believe that our discrepancy learning framework could give new insight into the graph self-supervised field.\n\nAdditionally, we proposed an **efficient way to measure the discrepancy** by leveraging the **graph edit distance** which is computed at **near-zero cost** while perturbing graphs.\n", " **Question 2.** I wonder is this a common approach for obtaining perturbed graph, or a novel part of this paper?\n\n**Answer 2.** Inspired by our graph discriminator, we propose a perturbation method that is challenging for the model to discriminate by learning the subtle difference between the perturbed graphs and the original graph. Please note that the perturbation strategy follows along with the self-supervised learning strategy. \n\nConventional contrastive learning [1,2] considers that graph perturbation does not change the local and global properties. Consequently, over 20% of nodes or edges are perturbed in contrastive learning methods by attribute masking, edge perturbation, node dropping, or subgraph sampling.\n\nContrarily, we argue that perturbing even a subtle region can change the local and global properties and the model should distinguish the subtle differences between the perturbed graphs and the original graph. Therefore, our perturbation strategy is different from the conventional perturbation strategy since **we aim to perturb only a subtle region** of a graph and make the graph discriminator confuse to discriminate the original graph from the perturbed graphs. Specifically, we clarify which component is different from conventional perturbation strategies:\n1) We choose to perturb **edges**, since perturbing nodes (i.e., node dropping and subgraph sampling) significantly changes the graph properties.\n2) We perturb only a **small amount of edges** by aiming to discriminate the subtle differences between the perturbed graphs and the original graph.\n3) In our framework, attribute masking is viewed as an auxiliary device to **make it harder to distinguish the subtle difference**. In our point of view, the attribute masking allows the model to more focus on learning the subtle differences in connectivity.\n\nIn our D-SLA framework, attribute masking plays an important role to learn the subtle difference, since it is easy to distinguish two similar graphs when giving the all information about nodes. To validate this fact, We further demonstrate the effect of attribute masking for capturing the local semantics on the link prediction task.\n| COLLAB | Accuracy |\n|-------------|----------------|\n| w/ Masking | 76.19 +/- 0.50 |\n| w/o Masking | 70.42 +/- 0.95 |\n\nTable R1. Ablation study on attribute masking.\n\nAs shown in table R1, the performance without attribute masking is significantly lower than the performance with attribute masking. We suggest that, in the pre-training stage, attribute masking actually limits the information given to the model and forces the model to learn more transferable and fruitful representations, demonstrating that attribute masking in our perturbation is a key factor to learn the local semantics.\n\n*Reference*\n\n[1] You, Yuning, et al. 
\"Graph contrastive learning with augmentations.\" Advances in Neural Information Processing Systems 33 (2020): 5812-5823.\n\n[2] You, Yuning, et al. \"Graph contrastive learning automated.\" International Conference on Machine Learning. PMLR, 2021.\n\n---\n\n**Question 3.** For those contrastive learning method papers which also consider original graph and perturbed graphs, are the perturbed graphs being generated, or being found by calculating graph edit distance?\n\n**Answer 3.** The conventional contrastive learning methods generate the perturbed graphs by selecting a specific perturbation strategy without leveraging graph edit distance. ", " **Question 4.** I wonder could it be the case that in real data the graph samples already contains graphs similar to the original graph.\n\n**Answer 4.** There could be more similar negative graphs than perturbed ones, however, such a situation rarely happens, since the pre-training dataset contains general graphs collected from the real world and our perturbation method makes only a subtle difference from the original one.\n\n---\n\n**Question 5.** How to choose the number of perturbed graphs n, and if the perturbed graphs samples are large, would the imbalance of two classes (1 vs n) affect the performance of classifier/discriminator?\n\n**Answer 5.** Our discriminator does not suffer from the class imbalance when the number of perturbed graphs is large since we design our graph discrimination to increase the score of the original graphs *compared* to the scores of the perturbed graphs. Specifically, our D-SLA does not compute a loss value per score or per graph, as the independent computation of loss for each graph could cause the class imbalance problem. Instead, our D-SLA computes one loss value by combining the scores of the original and perturbed graphs and aims to maximize the score of the original graph compared to the scores of the perturbed graphs.\n\nOn the other hand, if the number of perturbed graphs is large, the model can learn the more diverse graph structures and may obtain more transferable representations for various downstream tasks. However, we argue that the number of perturbed graphs has a trade-off relationship with time and memory space for perturbing more graphs.\n", " We sincerely thank you for your constructive and helpful comments. We deeply appreciate your comments that our proposed method is sound and the performance of our approach is meaningful compared to predictive and contrastive learning. We address all your concerns below:\n\n---\n\n**Question 1.** The paper does not include any guidelines on tuning these hyperparameters (e.g. the margin parameter $\\alpha$, parameters $\\lambda_1$ and $\\lambda_2$ to determine the weight of each pretext task in the final objective function, max graph-edit distance considered to generate perturbed graphs) and how the choice of hyperparameters affect the results.\n\n**Answer 1.** Thank you for your helpful suggestion. We already provided the effect of $\\lambda_1$ in Appendix B.4, with Figure S3 and Figure S4, where we observed that our D-SLA is not sensitive across different $\\lambda_1$ values and the effect of perturbation magnitude in Appendix B.3, with Table 8, where we observed that weaker perturbation magnitude is helpful to capture the local semantics. 
Please note that the graph edit distance depends on the perturbation magnitude: a stronger perturbation magnitude causes a larger graph edit distance.\n\nOn the other hand, based on your suggestion, we further provide the effects of $\\alpha$ and $\\lambda_2$ by first varying their values during pre-training and then measuring the downstream performances on the BACE dataset. Note that, to measure the sensitivities of $\\alpha$ and $\\lambda_2$, we particularly chose the BACE dataset, since two structurally similar graphs are often different in their activities as shown in Table 3, thus learning subtle discrepancies is important for this dataset. \n\n| $\\alpha$ | ROC-AUC |\n|-------|----------------|\n| 1.0 | 83.75 ± 0.96 |\n| 5.0 | 83.81 ± 1.01 |\n| 10.0 | 78.34 ± 1.07 |\n\nTable R1. Effect of varying $\\alpha$ on BACE dataset finetuning\n\nAs shown in Table R1, when the margin $\\alpha$ is too large, learning subtle discrepancies degenerates. The margin $\\alpha$ in Equation (7) allows the model to preserve the discrepancy learned by Equations (4) and (6), since the model does not attract the embeddings of similar graphs if the embeddings of dissimilar graphs are sufficiently far apart than the margin $\\alpha$ plus the distance between embeddings between similar graphs. We suggest that if the margin $\\alpha$ is too large, the model tends to strongly attract the embeddings of similar graphs, resulting in the degeneration of learning discrepancies between the similar graphs.\n\n| $\\lambda_2$ | ROC-AUC |\n|-----|----------------|\n| 0.1 | 83.68 ± 0.78 |\n| 0.5 | 83.81 ± 1.01 |\n| 0.9 | 80.72 ± 0.71 |\n\nTable R2. Effect of varying $\\lambda_2$ on BACE dataset finetuning\n\nAs shown in Table R2, a larger $\\lambda_2$ value degenerates the performance of our D-SLA, since too large $\\lambda_2$ value forces the model to too focus on learning the discrepancy between two completely different graphs (i.e., perturbed and negative graphs), rather than learning the subtle differences among original and its slightly perturbed graphs.\n\nWe further add the effect of varying our hyperparameters in the next revision.\n\n---\n\n**Question 2.** I am not quite convinced if graph-discriminator pretext task is needed over $L_{edit}$ and $L_{margin}$. For ablation study in Table 5, there needs to be another variant where graph-discriminator is turned off, but the discrepancy modeling is on.\n\n**Answer 2.** We already provided the ablation study that we only use the edit distance loss (i.e., $L_{edit}$) without the graph discriminator loss (i.e., $L_{GD}$) in Table 5. As shown in the table, if we only use the edit distance loss, the performance of our D-SLA becomes degenerate, since the model would trivially set all the distances between the original and its perturbed graphs as zero which is discussed in Lines 221-226 and Lines 347-350. Thus, joint training of our graph discriminator loss (i.e., $L_{GD}$) and edit distance loss (i.e., $L_{edit}$) is a necessity. ", " **Question 3.** To my understanding, discrepancy modeling with margin factor for real graphs is indirectly helping the model identify key features (edges or nodes). Then, if real graphs are perturbed, key features could lead to a valid graph with distinctive properties. Consequently, the identification of such distinctive nodes/edges could largely aid the downstream task, for example, graph classification. Is that correct? 
If so, perhaps the authors should clearly articulate this clearly in the paper as it might not be obvious to readers.\n\n**Answer 3.** Thank you for your suggestion. We answer your statements one by one as follows:\n\n1) Is discrepancy modeling with margin factor for real graphs indirectly helping the model identify key features (edges or nodes)? \n\n Yes, learning the discrepancy between real graphs with a certain amount of margin can help the model accurately capture important nodes and edges, since the model should differentiate the different combinations of nodes and edges during discrepancy learning. \n\n2) Then, if real graphs are perturbed, key features could lead to a valid graph with distinctive properties.\n\n As you said, perturbed graphs might have distinctive representations which differ from their original graph representation throughout discrepancy learning. However, we cannot guarantee whether the perturbed graphs have valid graph structures, since, regarding molecular graphs, removing particular edges makes an invalid molecule structure. However, as described in Section C (i.e., Limitation and Potential Societal Impacts) of the supplementary file, it is infeasible to identify every valid structure and key features in a unified way for handling diverse domains, and we leave studying them as future work.\n\n3) Consequently, does the identification of such distinctive nodes/edges could largely aid the downstream task, for example, graph classification?\n\n Yes, even though the perturbed graphs might not be valid, we believe discriminative nodes/edges features from our discrepancy learning help achieve outstanding performances in downstream tasks.\n\nWe will further clarify the above points in the revision.\n", " We sincerely thank you for your constructive and helpful comments. We appreciate your comments that our tackling drawbacks are well analyzed and demonstrated and our proposed method is effective in visualization experiments. We address all your concerns below:\n\n---\n\n**Question 1.** There are also some existing works that do not rely on contrasting perturbed graphs, e.g., [1,2,3]. In this case, the argued drawbacks of existing methods claimed by the authors do not seem to hold.\n\n**Answer 1.** Thank you for suggesting related works. The suggested AF-GCL[1] still has drawbacks, since different graphs are considered similar. To be more specific, for a graph in a batch, AF-GCL[1] chooses the most similar but structurally not the same graph in a batch and defines a graph and the most similar graph as a positive pair. Therefore, two graphs in a positive pair have different structures, although may have drastically different properties. As the objective function of AF-GCL is maximizing the similarity among structurally different graphs, AF-GCL could still suffer from the drawbacks of graph constrastive leanring approaches. \n\nSimGCL[2] and SimGRACE[3] augment the views of graphs by adding noise to model parameters or graph embeddings while preserving the graph structure. Therefore, their methods learn the similarity between structurally the same graphs. However, data augmentation is a key factor for self-supervised learning since, by data augmentation, the model can learn the representations of graphs not in the pretraining dataset and obtain more transferable representations. We further validate the significance of data augmentation by comparing graph classification performances.\n| Method | BBBP | ClinTox | MUV | HIV | BACE | SIDER | Tox21 | ToxCast | Avg. 
|\n|--------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|----------------|-------|\n| SimGCL | 67.37 ± 1.23 | 55.66 ± 4.72 | 71.24 ± 1.79 | 75.04 ± 0.86 | 74.11 ± 2.74 | 57.44 ± 1.74 | 74.39 ± 0.45 | 62.27 ± 0.38 | 67.19 |\n| SimGRACE | 71.25 ± 0.86 | 64.16 ± 4.50 | 71.18 ± 3.40 | 74.52 ± 1.12 | 73.81 ± 1.37 | **60.59** ± 0.96 | 74.20 ± 0.64 | 63.36 ± 0.52 | 69.13 |\n| GraphCL | 69.68 ± 0.67 | 75.99 ± 2.65 | 69.80 ± 2.66 | 78.47 ± 1.22 | 75.38 ± 1.44 | 60.53 ± 0.88 | 73.87 ± 0.66 | 62.40 ± 0.57 | 70.77 |\n| **D-SLA (Ours)** | **72.60** ± 0.79 | **80.17** ± 1.50 | **76.64** ± 0.91 | **78.59** ± 0.44 | **83.81** ± 1.01 | 60.22 ± 1.13 | **76.81** ± 0.52 | **67.24** ± 0.50 | **74.51** |\n\nTable R1. Finetuning results on graph classification with SimGCL and SimGRACE.\n\nAs shown in Table R1, GraphCL which perturbs graph structure to augment views outperforms SimGCL and SimGRACE which preserve graph structures.This suggests that data augmentation allows the model to obtain more general and transferable representations. \nAdditionally, **our D-SLA still outperforms the suggested works**, thanks to both data augmentation and discrepancy learning.\nWe will include these results in our revised version of the paper.\n\n*References*\n\n[1] [arXiv 2022] Augmentation-Free Graph Contrastive Learning\n\n[2] [SIGIR 2022] Are Graph Augmentations Necessary Simple Graph Contrastive Learning for Recommendation\n\n[3] [WWW 2022] A Simple Framework for Graph Contrastive Learning without Data Augmentation\n\n---\n\n**Question 2.** Why use Equation (4) to discriminate original graphs from perturbed graphs? This could be explained more intuitively.\n\n**Answer 2.** As described in Lines 166-167 with Figure 3 (b), the objective in Equation (4) embeds the perturbed graphs apart from the original graph. Then, by doing so, the model attempts to uniquely embed the original graph in the representation space, which leads the representation space to distinguish two similar graphs (e.g., the original graph and its slightly perturbed ones) having different properties, while learning their graph structures (i.e., node and edge features) as well.\n", " **Question 3.** The proposed method D-SLA consists of many components and many hyperparameters. The sensitivity of the three key hyperparameters $\\alpha$, $\\lambda_1$, and $\\lambda_2$ should be studied in the main text.\n\n**Answer 3.** Thank you for the suggestion. Due to the page limit, we provide the sensitivity of $\\lambda_1$ in Appendix B.4, with Figure S3 and Figure S4, where we observed that our D-SLA is not sensitive across different $\\lambda_1$ values. \n\nOn the other hand, based on your suggestion, we further provide the sensitivity of $\\alpha$ and $\\lambda_2$ by first varying their values during pre-training and then measuring the downstream performances on the BACE dataset. Note that, to measure the sensitivities of $\\alpha$ and $\\lambda_2$, we particularly chose the BACE dataset, since two structurally similar graphs are often different in their activities as shown in Table 3, thus learning subtle discrepancies is important for this dataset. \n| $\\alpha$ | ROC-AUC |\n|-------|----------------|\n| 1.0 | 83.75 ± 0.96 |\n| 5.0 | 83.81 ± 1.01 |\n| 10.0 | 78.34 ± 1.07 |\n\nTable R2. Effect of varying $\\alpha$ on BACE dataset finetuning\n\nAs shown in Table R2, when the margin $\\alpha$ is too large, learning subtle discrepancies degenerates. 
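As a concrete complement to Answer 2 above: one standard way to implement "score the original graph above its perturbations" as a single objective is a softmax cross-entropy over the discriminator scores. The form below is illustrative rather than the paper's exact Equation (4); note that it yields one combined loss per original graph, so it stays balanced however many perturbations are used, which also relates to the class-imbalance question answered earlier in the discussion:

```python
import torch
import torch.nn.functional as F

def graph_discrimination_loss(score_orig, scores_pert):
    """score_orig: (batch,) discriminator scores of the original graphs;
    scores_pert: (batch, n) scores of their n perturbed variants.
    A single softmax cross-entropy over [original | perturbations] pushes the
    original's score above all perturbed scores jointly, so there is no
    per-graph 1-vs-n class imbalance. Illustrative form only; the paper's
    Equation (4) may differ in its details."""
    logits = torch.cat([score_orig.unsqueeze(1), scores_pert], dim=1)  # (batch, 1 + n)
    target = torch.zeros(logits.size(0), dtype=torch.long)             # class 0 = original
    return F.cross_entropy(logits, target)

loss = graph_discrimination_loss(torch.randn(8), torch.randn(8, 4))
```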
The margin $\\alpha$ in Equation (7) allows the model to preserve the discrepancy learned by Equations (4) and (6), since the model does not attract the embeddings of similar graphs if the embeddings of dissimilar graphs are sufficiently far apart than the margin $\\alpha$ plus the distance between embeddings between similar graphs. We suggest that if the margin $\\alpha$ is too large, the model tends to strongly attract the embeddings of similar graphs, resulting in the forgetting of the learned discrepancies between the similar graphs.\n\n| $\\lambda_2$ | ROC-AUC |\n|-----|----------------|\n| 0.1 | 83.68 ± 0.78 |\n| 0.5 | 83.81 ± 1.01 |\n| 0.9 | 80.72 ± 0.71 |\n\nTable R3. Effect of varying $\\lambda_2$ on BACE dataset finetuning.\n\nAs shown in Table R3, a larger $\\lambda_2$ value degenerates the performance of our D-SLA, since too large $\\lambda_2$ value forces the model to too focus on learning the discrepancy between two completely different graphs (i.e., perturbed and negative graphs), rather than learning the subtle differences among original and its slightly perturbed graphs.\n\nWe will include these additional experimental results with varying hyperparameters in the revised version of the paper. \n\n---\n\n**Question 4.** Most existing works report AUC and AP scores for the link prediction experiment. This work only reports accuracy.\n\n**Answer 4.** We sincerely appreciate your suggestion of using AUC and AP scores for link prediction. We have further evaluated baselines and our model with ROC-AUC and AP scores, and then reported the performances in Table R4. As shown in Table R4, **our D-SLA still clearly outperforms** all the other baselines on the link prediction task by large margins, even in AUC and AP. \n| | ROC-AUC | AP | ROC-AUC | AP | ROC-AUC | AP\n|--------------|----------------|----------------|---|---|---|---|\n| | **COLLAB** | | **IMDB-B** | | **IMDB-M** | | |\n| No Pretrain | 84.53 ± 0.55 | 80.01 ± 1.14 | 80.28 ± 2.23 | 68.72 ± 2.58 | 75.64 ± 1.42 | 64.93 ± 1.92 |\n| AttrMasking | 85.07 ± 0.49 | 81.43 ± 0.80 | 81.78 ± 3.15 | 70.62 ± 3.68 | 74.26 ± 2.11 | 63.37 ± 2.15 |\n| ContextPred | 86.49 ± 0.35 | 83.96 ± 0.75 | 80.49 ± 1.57 | 70.47 ± 2.24 | 74.20 ± 2.71 | 66.09 ± 2.74 |\n| Infomax | 83.13 ± 0.35 | 80.83 ± 0.62 | 77.68 ± 1.70 | 67.25 ± 1.87 | 74.19 ± 1.85 | 64.98 ± 2.47 |\n| GraphCL | 80.62 ± 0.88 | 76.04 ± 1.04 | 75.31 ± 3.07 | 63.71 ± 2.98 | 73.23 ± 3.16 | 62.40 ± 3.04 |\n| JOAO | 81.58 ± 1.39 | 76.57 ± 1.54 | 76.80 ± 2.94 | 65.37 ± 3.23 | 73.72 ± 1.46 | 62.76 ± 1.52 |\n| GraphLoG | 86.73 ± 0.65 | 82.95 ± 0.98 | 80.62 ± 2.29 | 69.71 ± 3.18 | 75.52 ± 1.82 | 64.88 ± 1.87 |\n| BGRL | 81.56 ± 0.32 | 76.79 ± 1.13 | 79.18 ± 3.75 | 67.97 ± 4.14 | 74.74 ± 1.85 | 63.71 ± 2.09 |\n| **D-SLA (Ours)** | **88.14** ± 0.32 | **86.21** ± 0.38 | **86.64** ± 1.41 | **78.54** ± 2.79 | **78.53** ± 1.51 | **69.45** ± 2.29|\n\nTable R4. Link prediction results with ROC-AUC and AP scores.\n\nWe will include these new results in the revised version of the paper.", " We sincerely thank you for your constructive and helpful comments. 
", " We sincerely thank you for your constructive and helpful comments. We address all of your concerns below:\n\n---\n\n**Question 1.** The notion of distance here is a weighted version of the original contrastive learning: two similar graphs will still have small distances and be close in the latent space even if some properties of the perturbed graph drastically change from the original graph.\n\n**Answer 1.** This is a critical misunderstanding of our discrepancy learning framework, and our discrepancy learning clearly differs from contrastive learning. In particular, our D-SLA aims to learn the discrete embedding space by learning the discrepancy even between slightly perturbed graphs, thus obtaining a discriminative space for them that can further be utilized to distinguish between them for an unknown downstream task. Please note that graph self-supervised learning performs **unsupervised learning** of the graph representations with no knowledge of the downstream task, and the **properties** of graphs can drastically change from one downstream task to another.\n\nWe first clarify what graph self-supervised learning aims to learn, and then describe the objectives of conventional graph contrastive learning. Finally, we clarify the difference between our D-SLA and conventional contrastive learning.\n\n* Self-supervised learning (SSL) aims to learn general information from the graph structure without utilizing any labels for downstream tasks. A self-supervised learner thus **cannot determine whether two similar graphs have similar or different properties**, since specific tasks and desired properties are not given in the pre-training stage with SSL. Learning a space that captures the **target properties** is something that is done at a **fine-tuning stage**, when a **specific downstream task** is given. As we demonstrated in Table 3, depending on the downstream tasks and finetuning datasets, some desired properties are related to structural similarity (e.g., ClinTox and BBBP), but some are not (e.g., BACE and MUV). Thus a graph SSL method should learn a space that can capture both **the similarity and difference between two graphs (i.e., accurate discrepancies among graphs)** without considering the labels. \n\n* Conventional graph contrastive learning methods aim to maximize the similarity between the representations of two similar graphs, overlooking the reality that even though they are similar, they could still have different properties. This pitfall results in the embedding collapse of similar graphs which could have different properties as shown in Figure 1 (b-2). Our method, on the other hand, learns a discriminative space even for two graphs that differ only by a single node or an edge. \n\n* We devise a framework that can discretize graph embeddings according to their structures, allowing the model to distinguish subtle differences between similar graphs, whereas contrastive learning continuously maximizes the similarity between similar graphs. To address the drawbacks of conventional contrastive learning, the model should capture not only the similarity between two similar graphs but also the subtle differences between them. To this end, we propose a discrepancy framework whose objective functions are completely opposite to those of contrastive learning, as we described in Lines 67-69. Therefore, our objective functions are not similar to contrastive learning, but rather *completely opposite* to contrastive learning, since our objective functions aim to discriminate the similar graphs and learn the exact amount of discrepancy. 
That is, the small distance between two similar graphs as you mentioned is a key point of our D-SLA which allows learning the subtle difference between two similar graphs, which conventional contrastive learning cannot learn as shown in Figure 1 (b,c-2).\n\nWe here clarify how learning the subtle difference affects the downstream tasks by recapitulating the experimental results.\n\n* As we demonstrated in Table 3 and Figure 4, the BACE dataset contains a large number of **similar graphs with different properties**. Our D-SLA outperforms all baselines on this BACE dataset, whereas conventional contrastive learning methods do not outperform even predictive methods. Thus, as long as the graphs are well distinguished, even by small distances, we should allow the fine-tuning model to capture their differences. None of the existing graph SSL methods, either contrastive or predictive, can learn a discriminative space for slightly different graphs, and they thus obtain significantly lower performance than our method.\n\n* In link prediction experiments in Section 4.2 where capturing local information is important, our D-SLA outperforms all baselines including conventional contrastive learning, which suggests that our D-SLA can capture the small difference in a local region by learning the subtle difference, whereas conventional contrastive learning methods fail to do so. \n", " **Question 2.** How can we make sure that graph edit distance is a good proxy for measuring if two graphs should be close or far in the latent space?\n\n**Answer 2.** Graph edit distance is a good proxy since it is simple yet effective: \n1) We can compute distances between any original and perturbed graphs in our perturbation method with **near-zero costs** (i.e., O(1)), as described in Lines 195-209 along with Table 1.\n2) It is **generalizable to any graph domain**: we don’t have to adjust our perturbation and distance measures, even if we deal with graphs in unknown domains. In other words, if we used graph properties for measuring the distances among graphs, since properties differ across graph domains, we might have to change our distance measures every time we deal with a new graph domain. However, we don’t have to take this into consideration, thanks to graph edit distance.\n3) It is **applicable to unlabeled datasets**: in graph SSL experiments, labels are generally not available during pre-training, and, with graph edit distance, we can compute distances between graphs without accessing any label.\n\nTo summarize, we aim to make our D-SLA work in general regardless of domains (properties and labels of graphs); however, as described in Appendix C, it is also possible to pre-define a semantic discrepancy to use in discrepancy learning, which we leave as future work.\n\n**Question 3.** The paper tested the proposed model for link prediction on social networks. However, I find the motivation of the paper to be weak in social networks (and stronger for small graphs like molecules).\n\n**Answer 3.** This is a misunderstanding of our objectives in D-SLA. Our D-SLA aims to learn the **local difference**, which is the reason why we validate our D-SLA on social networks. Here, we clarify the motivation and the objective of our D-SLA.\n* The motivation for our D-SLA is that since graphs are discrete data structures, the property may largely vary even between slightly perturbed graphs, as we described in Lines 40-42. 
Therefore, our D-SLA aims to learn the difference between slightly perturbed graphs and the original graph. As graph perturbation targets a local region, the property changes first occur in the subgraph structures where the perturbation is applied. Then, the global properties are affected by the change in local semantics. \n\n* To this end, the objective of our D-SLA is to learn the local difference. However, if the model learns only the local-level representations, the model cannot learn the global graph-level representations, as we pointed out for the drawbacks of predictive learning in Line 31. Therefore, the final objective of our D-SLA is to learn the local difference of the global graph-level representations.\n", " **Question 4.** The proposed self-supervised loss in Equation 8 mainly focuses on the global information in a graph as Equations 4, 6, and 7 are all defined on the graph embedding level. This might cause the model not to pay attention to local information in the graph.\n\n**Answer 4.** This is a misunderstanding of our work since our D-SLA loss in Equation 8 is also **able to learn local information**. The objective of our D-SLA is to learn the local difference on the global graph-level representations, and we achieved that objective by proposing a framework of learning the subtle (i.e., local) difference. We further clarify why our D-SLA can capture the local information.\n\n* First, as represented in Equation (4), our D-SLA is trained by discriminating the original graph from its perturbed graphs, and there are only subtle differences between them when we perturb only a tiny number of edges in the graph. Therefore, for optimizing the objective in Equation (4), the model should be aware of the local differences between graphs.\n* Also, we propose to learn the exact amount of differences between original and perturbed graphs with the graph edit distance, formalized in Equation (6), and, to optimize the objective in Equation (6), our D-SLA should recognize how many edges are different between two graphs. That is, our D-SLA should focus on edge-level (local) differences, allowing it to capture the local information.\n\nExperimentally, we already showed our D-SLA does pay attention to the local information by evaluating it on the local-level task, namely link prediction, in Table 4 of Section 4.2, where ours significantly outperforms all the other graph SSL methods. Therefore, to summarize, as described in Lines 62-64, our D-SLA can not only capture local information (Section 4.2) but also discriminate global-level differences (Section 4.1) within/between graphs. \n\nHere, we provide explicit evidence that learning the local difference on graph-level representations can affect the local-level representations regarding Equations (4) and (6) respectively. Both experiments are conducted on a synthetic community graph with 4000 nodes and about 800K edges. We perturb only one or two edges of the original graph, yielding two perturbed graphs denoted $G_1$ and $G_2$. We compare the changes in graph-level representations with the changes in local-level representations. 
The metrics are as follows:\n\n1) The distances between graph representations of the perturbed graphs and the original graph: $||h_{G_0} - h_{G_i}||$ where $h_{G_0}$ denotes the graph representation of the original graph and $h_{G_i}$ denotes the graph representation of the $i$-th perturbed graph.\n2) The distances between node representations of a node in the original graph and the corresponding node in the perturbed graph: $||h_{v_j,G_0} - h_{v_j,G_i}||$ where $h_{v_j,G_0}$ denotes the $j$-th node representation of the original graph and $h_{v_j,G_i}$ denotes the $j$-th node representation of the $i$-th perturbed graph.\n\nDue to the character limit, we provide the results in the comment box below.\n\n", " (Continued from Answer 4 in the comment box above)\n* **Optimizing Equation (4)**: We train the model by optimizing Equation (4) to learn the local differences between the original graph ($G_0$) and a graph perturbed by one edge ($G_1$). Here we denote 'Target Node' as a node where an edge perturbation is applied. As shown in Table R1, learning local differences on the graph-level representations **affects local representations only in the subgraph region** where the perturbation is applied (target nodes and 1-hop neighbor nodes of the target node) and does not affect the local representations of nodes distant from the target node (Most Distant Nodes Distance). \n| Epoch | Graph Distance | Target Node Distance | 1-hop Nodes Distance | Most Distant Nodes Distance |\n|-------|--------|-------------|-------------|--------------------|\n| 0 | 0.0000 | 0.0007 | 0.0003 | 0.0000 |\n| 20 | 0.0122 | 0.0583 | 0.0569 | 0.0000 |\n| 40 | 0.2032 | 1.0248 | 0.9954 | 0.0000 |\n| 60 | 1.0055 | 4.7191 | 4.5890 | 0.0002 |\n| 80 | 2.0163 | 9.2662 | 9.0415 | 0.0003 |\n| 100 | 2.4094 | 11.0090 | 10.7649 | 0.0004 |\n\n Table R1. Discrepancy learning by Equation (4)\n\n* **Optimizing Equation (6)**: We train the model by optimizing Equations (4) and (6) to learn the exact amount of discrepancy. Please note that the graph edit distance for the graph perturbed by two edges ($G_2$) is double the graph edit distance for the graph perturbed by only one edge ($G_1$). Here, we measure the representation distances of target nodes where the perturbation is applied. As shown in Table R2, learning the exact amount of discrepancy on global-level representations also **affects the local-level representations**: the node representations of the more heavily perturbed graph (Node distance of ($G_0$, $G_2$)) move farther from those of the original graph than the node representations of the less perturbed graph (Node distance of ($G_0$, $G_1$)).\n| Epoch | Graph Distance of ($G_0$, $G_1$) | Graph Distance of ($G_0$, $G_2$) | Node distance of ($G_0$, $G_1$) | Node distance of ($G_0$, $G_2$) |\n|-------|------------|------------|-----------|-----------|\n| 0 | 0.0000 | 0.0001 | 0.0012 | 0.0008 |\n| 20 | 0.0002 | 0.0004 | 0.0013 | 0.0014 |\n| 40 | 0.0157 | 0.3019 | 0.0449 | 0.1358 |\n| 60 | 0.2199 | 0.4418 | 0.6241 | 1.8051 |\n| 80 | 0.9174 | 1.8228 | 2.6067 | 7.4082 |\n| 100 | 1.4597 | 2.9522 | 4.3504 | 12.1939 |\n\n Table R2. 
Discrepancy learning by Equations (4) and (6)\n\n---\n\n**Question 5.** Why might some perturbations in a large social network drastically change some of its properties?\n\n**Answer 5.** The task that we deal with on social networks is link prediction, and, in this link prediction task, perturbing a tiny amount of local information can drastically change a node's representation, while the other nodes far away from this node might not be affected by this perturbation. In other words, regarding the local task, some perturbations on certain regions of nodes and edges can significantly affect the properties of those subregions more than the other regions.\n\n", " **Question 6.** How does the value of $\\lambda_1$ and $\\lambda_2$ in Equation 8 affect the performance of the model? Some charts that show the performance of the model for different values of $\\lambda_1$ and $\\lambda_2$ would be helpful.\n\n**Answer 6.** Thank you for the suggestion. Please note that we already provided the effect of $\\lambda_1$ in Appendix B.4, with Figure S3 and Figure S4, where we observed that our D-SLA is not sensitive across different $\\lambda_1$ values. \n\nOn the other hand, based on your suggestion, we further provide the effect of $\\lambda_2$ by first varying its values during pre-training and then measuring the downstream performances on the BACE dataset. Note that, to measure the effectiveness of $\\lambda_2$, we particularly chose the BACE dataset, since two structurally similar graphs are often different in their activities as shown in Table 3; thus, learning subtle discrepancies is important for this dataset. \n| $\\lambda_2$ | ROC-AUC |\n|-----|----------------|\n| 0.1 | 83.68 ± 0.78 |\n| 0.5 | 83.81 ± 1.01 |\n| 0.9 | 80.72 ± 0.71 |\n\nTable R3. The sensitivity of $\\lambda_2$ on BACE dataset finetuning.\n\nAs shown in Table R3, we observe that a larger $\\lambda_2$ value degrades the performance of our D-SLA, since an overly large $\\lambda_2$ value forces the model to focus too heavily on learning the discrepancy between two completely different graphs (i.e., perturbed and negative graphs), rather than learning the subtle differences between each original graph and its slightly perturbed variants. \n\nWe will further add the effect of varying $\\lambda_2$ in the next revision.\n\n---\n\n**Question 7.** Tables 2 and 4: how many times did the authors run the model to get the results in these two tables? \n\n**Answer 7.** As described in Line 276 and Lines 319-320, we run our experiments five times. \n\n---\n\n**Question 8.** Which results are statistically significant compared to the best baseline?\n\n**Answer 8.** Thank you for your helpful suggestion in regard to the statistical analysis, from which the significance of our D-SLA becomes clearer. Specifically, we have conducted t-tests at the 0.05 significance level and observed that, compared to all models, our D-SLA achieves statistically significant results on all datasets in the link prediction task. Also, in the graph classification task, our D-SLA achieves statistically significant results on five of the nine datasets when compared against the best baseline (i.e., GraphLoG), which itself is not significant on all datasets. \n", " This paper proposes a discrepancy-based framework to learn graph representations. The framework leverages the discrepancy between 1) the original graph and the perturbed graphs and 2) the original graph and other negative graphs. 
The discrepancy amount for a perturbed graph is defined via graph edit distance, which is easy to obtain from the perturbed-graph generating process. The discrepancy amount for other negative graphs is defined to be above a margin. Under such a discrepancy hierarchy, the learned graph representations could preserve dissimilarity among graphs accurately. The learned representations could be used for downstream tasks such as graph classification and link prediction in chemical and biological domains or social networks. Originality:\nThis paper proposes a new discrepancy-preserving framework for learning graph representations, addressing limitations in the existing literature, where similar representations would be learned for two perturbed graphs with different properties. However, I would like to understand the novelty of the framework, namely, how the discrepancy framework differs from redefining the similarity functions, e.g., sim(two perturbed graphs) = graph edit distance, and sim(graph, negative graph) = $\\alpha$ + additional terms, and then using existing methods. I would appreciate it if the authors could further clarify this.\n\nQuality:\nThe paper is well written. The experiments are overall comprehensive, and the additional analyses support the main points well.\n\nClarity:\nThe methodologies and implementation details are clearly explained in the main article and supplementary material.\n\nSignificance:\nAs mentioned in the originality section, this paper uses a discrepancy-preserving framework to learn graph representations. However, my understanding is that the authors define a new similarity function, which leverages the discrepancy among graphs. The significance is not quite obvious, and I would appreciate the authors' further clarifications.\n 1. This method augments data by generating perturbed graphs via eq(5). I wonder: is this a common approach for obtaining perturbed graphs, or a novel part of this paper? For those contrastive learning method papers which also consider the original graph and perturbed graphs, are the perturbed graphs being generated, or being found by calculating graph edit distance? \n\n2. This approach first generates perturbed graphs and defines discrepancy as graph edit distance. Then all the other samples are viewed as negative samples, and eq(7) assumes the distance between the original graph and a negative graph is greater than a margin. This assumes that all the other graphs are very different from the original graph, at least not some perturbation of the original graph. I wonder whether it could be the case that in real data the graph samples already contain graphs similar to the original graph. For example, in Figure 1, (d) and (e) are two similar graphs in the samples, and the edit distance between (d) and (e) appears to be 1. Based on the proposed method, it will first generate some perturbed graphs of (d), and then define the distance of (d) and (e) to be greater than $\\alpha$. In this example, distance(d,e) > distance(d, perturbed graph of d); is this still a reasonable assumption? How is the graph sample (e) different from those generated perturbations of (d)?\n\nIn summary, my question is: how does the method differentiate between the generated perturbed graphs and the graphs that are similar to the original graphs in the sample? Will the assumption that all the other graphs are negative samples and have distance at least $\\alpha$ actually disturb learning and make the learning results worse, when they are actually similar to the original graph? 
Consider the extreme case in which the graph samples are all perturbations of one graph: what graph representations would the proposed method give?\n\n3. How to choose the number of perturbed graphs n, and if the perturbed graph samples are large, would the imbalance of the two classes (1 vs n) affect the performance of the classifier/discriminator?\n The authors have discussed limitations thoroughly in the supplementary materials. I do not have anything else to add except the points listed in the Questions and Strengths And Weaknesses sections.", " The paper presents a novel self-supervised learning approach on graphs to boost the performance on downstream graph classification tasks. In many scenarios, two graphs with highly similar structure have remarkably different properties (i.e., belong to different classes). By design, the classical contrastive learning-based approaches fail to separate out the representations of such similar graphs because they aim to maximize similarity between graphs and their perturbations. To address these limitations, the authors propose a novel approach referred to as Discrepancy-Based Self-Supervised Learning (D-SLA) that comprises multiple pretext tasks including: i) training a discriminator to distinguish between a real graph and a perturbed variant; ii) learning the exact discrepancy between the original and the perturbed graphs, where the discrepancy is measured as the graph-edit distance. Additionally, it increases the discrepancy measure between any two real graphs by introducing a margin factor, thereby enforcing that the original graph is embedded closer to its perturbed variants than any other similar-looking real graph. The proposed approach outperforms the SOTA approaches from the contrastive learning and \npredictive learning literature on several graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction. Strengths: \n1. The proposed approach is quite plausible and likely to be relevant to biological networks and molecular graphs where two similar graphs could have distinct properties.\n2. The approach outperforms or matches the performance of several baselines from the fields of predictive and contrastive learning approaches on both graph classification and link prediction downstream tasks.\n3. The paper is overall well-written and augmented with informative figures.\n\n\nWeaknesses: \n1. The approach involves a number of hyperparameters (e.g. the margin parameter \\alpha, parameters \\lambda_1 and \\lambda_2 to determine the weight of each pretext task in the final objective function, and the max graph-edit distance considered to generate perturbed graphs). The paper does not include any guidelines on tuning these hyperparameters or on how the choice of hyperparameters affects the results.\n\n2. I am not quite convinced if the graph-discriminator pretext task is needed over L_{edit} and L_{margin}. (See my questions on table 5 below). It would be helpful if the authors could offer insights on why all three pretext tasks might be needed.\n\n 1. For the ablation study in Table 5, there needs to be another variant where the graph-discriminator is turned off, but the discrepancy modeling is on. This will justify the additional utility of the graph discriminator over the discrepancy-modeling pretext tasks. I will encourage the authors to discuss more of the possible intuition behind the efficacy of the proposed approach. 
To my understanding, discrepancy modeling with a margin factor for real graphs is indirectly helping the model to identify key features (edges or nodes) that, if perturbed, could lead to a valid graph with distinctive properties. Identification of such distinctive nodes/edges could largely aid the downstream task of graph classification. Is that correct? If so, perhaps the authors should articulate this clearly in the paper as it might not be obvious to readers. ", " This paper argues that existing graph self-supervised learning methods have some drawbacks. Specifically, predictive learning methods may not capture the global properties of graphs, and contrastive learning methods may not discriminate two similar graphs with different properties. To this end, a framework D-SLA is proposed to learn the exact discrepancy between the original and the perturbed graphs. - Strengths\n> 1. The drawbacks of existing graph self-supervised methods are well analyzed and demonstrated.\n> 2. The visualization experiment shows that the proposed method is quite effective.\n> 3. The provided source code facilitates good reproducibility of this work.\n\n- Weaknesses\n> 1. There are also some existing works that do not rely on contrasting perturbed graphs, e.g., [1,2,3]. In this case, the argued drawbacks of existing methods claimed by the authors do not seem to hold.\n> 2. Why use Equation (4) to discriminate original graphs from perturbed graphs? This could be explained more intuitively.\n> 3. The proposed method D-SLA consists of many components and many hyperparameters. The sensitivity of the three key hyperparameters $\\alpha$, $\\lambda_1$, and $\\lambda_2$ should be studied in the main text.\n> 4. Most existing works report AUC and AP scores for the link prediction experiment. This work only reports accuracy.\n\n\n*References*\n\n[1] [arXiv 2022] Augmentation-Free Graph Contrastive Learning\n\n[2] [SIGIR 2022] Are Graph Augmentations Necessary? Simple Graph Contrastive Learning for Recommendation\n\n[3] [WWW 2022] A Simple Framework for Graph Contrastive Learning without Data Augmentation\n\n Please respond to the weaknesses listed above. Yes, the authors have discussed the limitations and potential societal impacts in detail in the appendix.", " This paper studies the problem of self-supervised learning of graph neural networks. The goal is to learn representations for nodes and graphs in an unsupervised manner. The paper studies the limitations of the predictive and contrastive learning methods and proposes a new framework (D-SLA) to incorporate the discrepancy between the original and perturbed graphs in the self-supervised loss. D-SLA is tested on various tasks. Strengths: \n\n1. Self-supervised learning is an important problem that has recently gained a lot of attention. \n\n2. The proposed method is tested on various tasks on several graph benchmarks.\n\n\nWeaknesses: \n\n\n1. The proposed self-supervised loss in Equation 8 mainly focuses on the global information in a graph as Equations 4, 6, and 7 are all defined on the graph embedding level. This might cause the model not to pay attention to local information in the graph.\n\n2. The motivation of the work is based on a pitfall of contrastive learning methods, which assume two similar graphs (one original and one perturbed version of the original) are the same. 
The paper proposes using distance-based self-supervised learning which incorporates the distance of two graphs when pushing their embeddings closer or pulling their embeddings apart in the latent space. However, the notion of distance here is a weighted version of the original contrastive learning version: two similar graphs will still have small distances and be close in the latent space even if some properties of the perturbed graph drastically change from the original graph. Also, it is not clear why a perturbed graph with a distance of two to the original graph should be farther apart compared to a graph with a distance of one to the original graph. How can we make sure that graph edit distance is a good proxy for measuring if two graphs should be close or far in the latent space?\n 1. Tables 2 and 4: how many times did the authors run the model to get the results in these two tables? Which results are statistically significant compared to the best baseline?\n\n2. How does the value of $\\lambda_1$ and $\\lambda_2$ in Equation 8 affect the performance of the model? Some charts that show the performance of the model for different values of $\\lambda_1$ and $\\lambda_2$ would be helpful.\n\n3. The paper tested the proposed model for link prediction on social networks. However, I find the motivation of the paper to be weak in social networks (and stronger for small graphs like molecules). Why might some perturbations in a large social network drastically change some of its properties? See weaknesses 1 and 2." ]
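As a compact summary of the objective discussed throughout the exchanges above, the following sketch assumes Equation 8 is a weighted combination of the three pretext losses with weights lambda_1 and lambda_2 — a reading consistent with the rebuttal, but hypothetical in its exact form:

```python
def total_loss(l_disc, l_edit, l_margin, lam1, lam2):
    # l_disc:   loss discriminating original vs. perturbed graphs
    # l_edit:   loss matching embedding distances to graph edit distances
    # l_margin: margin loss keeping negative graphs farther away
    # lam1 and lam2 trade off the pretext tasks; see the lambda_1 and
    # lambda_2 sensitivity studies reported in the rebuttal above.
    return l_disc + lam1 * l_edit + lam2 * l_margin
```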
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 4, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "cKWIfYnr41N", "Ek4znY_Lnrf", "0KdXGuPIKVP", "c83OwEyH1bn", "qpdmGV6iegN", "ULVLpe7cYqR", "nips_2022_JY6fLgR8Yq", "ULVLpe7cYqR", "ULVLpe7cYqR", "ULVLpe7cYqR", "qpdmGV6iegN", "qpdmGV6iegN", "c83OwEyH1bn", "c83OwEyH1bn", "0KdXGuPIKVP", "0KdXGuPIKVP", "0KdXGuPIKVP", "0KdXGuPIKVP", "0KdXGuPIKVP", "nips_2022_JY6fLgR8Yq", "nips_2022_JY6fLgR8Yq", "nips_2022_JY6fLgR8Yq", "nips_2022_JY6fLgR8Yq" ]
nips_2022_EQgPNPwREa
Tikhonov Regularization is Optimal Transport Robust under Martingale Constraints
Distributionally robust optimization (DRO) has been shown to offer a principled way to regularize learning models. In this paper, we find that Tikhonov regularization is distributionally robust in an optimal transport sense (i.e. if an adversary chooses distributions in a suitable optimal transport neighborhood of the empirical measure), provided that suitable martingale constraints are also imposed. Further, we introduce a relaxation of the martingale constraints which not only provides a unified viewpoint on a class of existing robust methods but also leads to new regularization tools. To realize these novel tools, provably efficient computational algorithms are proposed. As a byproduct, the strong duality theorem proved in this paper can be potentially applied to other problems of independent interest.
Accept
This work focuses on robust stochastic optimization (under a Wasserstein constraint), and shows the efficiency of Tikhonov regularization for this problem. There has been a lively and constructive discussion between authors and reviewers, and ultimately all agree that this work should be accepted, and so do I.
train
[ "wd3Gj6UGLhT", "O_Sqlh669hC", "R7fZKTh07Tj", "_IMNYsEX7n", "JwvZ4Jbbmty", "vzagP25VMLy", "EQzST1Pw6eb", "wsbWNGZYin", "X-a29-DTyc4T", "3bbPknWuOWaO", "KkC59h5nRif", "thxpobdwEgf", "KTvkpdojJ0N", "7D9AJBIMTJI", "ICrAzen_8AmG", "VisaTnf5oj", "G6rjjqEpOA3", "ZU-ioDR86v5", "0OE_OLvTHwa" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer Wc7X, \n\nThanks for keeping an open mind and for agreeing to change the score.\n \nFollowing your suggestions regarding the experiments, we have done a new set of experiments that reveals an intriguing qualitative difference in the structure of the adversarial optimal coupling. Please see Figure 2(b) for further details. \n \nOn the theoretical side, we believe that the equivalence between the Tikhonov regularization and the exact martingale constraint in OT-based DRO is well-motivated. Upon this interesting hidden connection, it is natural for us to develop the perturbed martingale DRO model to get rid of the disadvantages of standard regularization techniques and the over-conservative vanilla DRO model.\n \nOverall, our paper has significant theoretical and modeling contributions. As the discussion will be closed soon, we would like to take this last opportunity to address your follow-up concerns or questions.\n\nThank you for your consideration!\n\n\nAll the best, \nAuthors \n", " Dear authors,\nI thank you for your answer which clarified all my points of uncertainty. I think I will keep my score as is.", " Dear reviewer,\n\nThank you very much for getting involved in the discussion and interacting with the reviewers.\n\nAre you still considering to update your score?\n\nBest,\nAC", " I would change my rating to borderline accept.\nI am just not convinced that the proposed approach is that insightful about OT, as they are missing the viability or new insight to get there, which anyone would expect from a new approach.", " Thanks for your suggestions. We provide one way to construct $\\Delta$. That is , we take a normal random variable $C \\sim N(0, \\rho)$, which is independent of $X$, and let $\\Delta = C M^{-1} \\beta$. We have incorporated this detailed explanation into our latest version. ", " Thanks for your comments. We may require further clarification from the reviewer in order to answer this question appropriately. What does the reviewer mean about source and target? We just have source data here. We would like to highlight our focus --- OT-based distributionally robust optimization instead of OT itself. It is well known that often OT acts as a function of the gradient norm (i.e., gradient with respect to the data), which is not Tikhonov. Our paper shows that in fact both Tikhonov and Jocabian regularization are optimal in a non-parametric local min-max sense (i.e., distributionally robust). However, we guess you may want to ask about the structure of the adversary and know the difference between the vanilla DRO and the proposed perturbed martingale DRO models. Based on equation (4.1), it is easy to observe that the new adversarial learning paradigm is to further constrain the magnitude of each perturbation no more than eps. We also provide the visualization result to help the reader to get a better understanding of this structure, see Figure 2 (additional one more illustration of the structure of the adversary). As expected when adding our eps budget on perturbation, the Martingale perturbation constrains the magnitude of perturbation to be smaller than eps for each data point, while the original DRO method tends to perturb more wildly. The difference in the structure of the worst-case adversary is more prominent right around the boundaries of the ring regions. \n\nWe sincerely hope you can acknowledge the theoretical merits of this paper and re-evaluate our contributions. We are also happy to further clarify your concerns during the discussion period. 
", " Thank you very much for your feedback. Respectfully, we would like to highlight that our paper indeed provides “substantiations” of the claims both from theoretical and empirical perspectives. The effectiveness of the new adversarial training scheme has been validated on real-world datasets **MNIST** and **CIFAR 10** (add it in our first-round rebuttal). Notably, the MNIST dataset is still the field’s standard benchmark dataset to evaluate and compare performance among models to condense and deliver insights. To study how robust a deep learning model is subject to (possibly adversarial) distributional shift, the MNIST dataset is also one of the leading benchmarks [5]. We believe the experiment set up in our paper is aligned with conventions in ML top conferences,e.g., NeurIPS, ICML, and ICLR. Here, we just list several recent theoretical/method-oriented papers published in the last five years to support our claim, see [1-11] for details. Most of them conduct their experiments just on MNIST and few of them provide additional results on a larger dataset --- CIFAR 10. Upon the Reviewer’s suggestions, we have added the experimental results on CIFAR 10 in our first-round response. We observe that the performance is consistent with the result of MNIST. Besides that, we also provide a toy 2D example, see Figure 2 and Figure 5, to illustrate the geometric interpretation of the interpolation between ERM and the vanilla DRO and the structure of adversarial examples. \n\nReference:\n\n[1] Robey, Alexander, et al. \"Adversarial robustness with semi-infinite constrained learning.\" Advances in Neural Information Processing Systems 34 (2021): 6198-6215.\n\n[2] Jafarpour, Saber, et al. \"Robust implicit networks via non-Euclidean contractions.\" Advances in Neural Information Processing Systems 34 (2021): 9857-9868.\n\n[3] Lee, Sungyoon, et al. \"Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples.\" Advances in Neural Information Processing Systems 34 (2021): 953-964.\n\n[4] Nguyen V A, Zhang F, Blanchet J, et al. Distributionally robust local non-parametric conditional estimation[J]. Advances in Neural Information Processing Systems, 2020, 33: 15232-15242.\n\n[5] Ovadia Y, Fertig E, Ren J, et al. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift[J]. Advances in neural information processing systems, 2019, 32.\n\n[6] Wang Y, Jha S, Chaudhuri K. Analyzing the robustness of nearest neighbors to adversarial examples[C]//International Conference on Machine Learning. PMLR, 2018: 5133-5142.\n\n[7] Bhattacharjee, Robi, and Kamalika Chaudhuri. \"When are non-parametric methods robust?.\" International Conference on Machine Learning. PMLR, 2020.\n\n[8] Yang, Yao-Yuan, et al. \"Robustness for non-parametric classification: A generic attack and defense.\" International Conference on Artificial Intelligence and Statistics. PMLR, 2020.\n\n[9] Wong, Eric, and Zico Kolter. \"Provable defenses against adversarial examples via the convex outer adversarial polytope.\" International Conference on Machine Learning. PMLR, 2018.\n\n[10] Awasthi, Pranjal, Abhratanu Dutta, and Aravindan Vijayaraghavan. \"On robustness to adversarial examples and polynomial optimization.\" Advances in Neural Information Processing Systems 32 (2019).\n\n[11] Raghunathan, Aditi, Jacob Steinhardt, and Percy S. Liang. 
\"Semidefinite relaxations for certifying robustness to adversarial examples.\" Advances in Neural Information Processing Systems 31 (2018).\n", " I guess we agree to disagree. \nI am familiar with both OT and Tikhonov regularization... regularization in OT is meant to reduce the search space by establishing some dependence between the source space and target space.... I don't see that reflected in the provided example example....", " Why would it be inconventional to expect a theoretical paper to have some substantiation of the claims?\nThat is very common in numerous theoretical papers, and given that significant context may not be present, an illustration can inject additional insight.\n", " Dear ReviewerWc7X,\n\nSince only a few days remain in the discussion period, we would appreciate it if you check and reply to our response to your comments soon. This will give us time to address further questions and comments that you may have before the end of the discussion period. If our response adequately addresses your concerns, please consider raising the score of our submission. Thank you very much for your time.\n\nBest, \nThe Authors", " Hi, thank you for the answer to my question. \n\nHow to construct $\\Delta$ is exactly what I want to ask. Because $\\Delta = c(\\Delta) M^{-1} \\beta$ here needs to satisfy a lot of constraints. It is the difference of $\\overline{X}$ and $X$, where the joint distribution of them is $\\pi$. Then it needs to satisfy $E_{\\pi}[c(\\Delta)|X] = 0$ and $E_{\\pi}[c^2(\\Delta)] \\|\\beta\\|_{M^{-1}}^2 \\leq \\rho$. I think it is non-trivial to construct such $c(\\Delta)$ and we need more discussion on the proof here.\n\nIf such a task is non-trivial and you need more space. I think you may change the \"Proof of Proposition 3.2.\" in the main text to a sketch proof. Then refer detailed steps to the Appendix. It would help to clarify the presentation here.", " Thanks to all reviewers and authors for their work on this submission.\n\nAs the discussion period starts, I want to make sure that reviewers have read the author's response, and if needed react to it.\n\nThis can be done either by communicating with authors or in private conversation within the reviewing team.\n\nReviewer Wc7X: Has the author's response appropriately adressed your concerns?", " **Q3:** The experiments are rather thin. It would be more useful to show some comparisons with existing regularization approaches. Complex dataset? \n\n**Response:** First of all, in Section 5.2, we compared our method with the most relevant regularization technique --- *the Jacobian regularization* on the MNIST dataset, please refer to Figure 3 and Figure 4 for a detailed comparison of the performance. Furthermore, we would like to emphasize that our paper is a methodology/theory oriented paper: we aim to provide a unified viewpoint of various useful regularizations from the distributionally robust optimization perspective. And use these perspective to obtain a larger class of distributionally robust regularization methods. The experiment results conducted in this paper are to corroborate our theoretical results and lie within the norm of typical theory-focused paper within this conference. We believe that it is unconventional to expect a theoretically-focused paper to test the model performance on extensive state-of-the-art deep learning models and datasets. 
Nevertheless, to address your concern, we are happy to further demonstrate the effectiveness of the proposed method on a large dataset --- **CIFAR 10**.\n\n|  PGD Attack | ERM |  DRO | Martingale DRO |  Jacobian Regularization |\n| ----------- | ------ | ------ | -------------- | ------------------------ |\n| ϵ = 0 | 84.16% | 84.02% | 85.48% | 81.73% |\n| ϵ = 0.04 | 77.50% | 82.87% | 83.25% | 78.78% |\n| ϵ = 0.08 | 70.20% | 80.68% | 80.86% | 73.85% |\n\n*Experimental Set Up for CIFAR 10* --- For the classifier, we train a ResNet with the architecture in [1]. We optimize using Adam with a batch size of 128 for all methods. The learning rate starts from 0.01 and shrinks by $0.1^{\\frac{\\textrm{epoch}}{\\textrm{total epochs}}}$, \nand each model is trained for 100 epochs. The simulations are\nimplemented using Python 3.8 on Google Colab with TPU v2 and 16 GB RAM. \nSimilarly, we test the performance of four methods (ERM, DRO, Jacobian regularization and martingale DRO) under the PGD attack with different levels of perturbation; the performance is consistent with the results on MNIST. The Top-1 accuracy results are shown in the table above. \n\n[1] He, Kaiming, et al. \"Deep residual learning for image recognition.\" Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.\n\nPlease let us know if our response addresses your concerns. We are happy to address any remaining points during the discussion phase. If our response has adequately addressed your concerns, we kindly ask you to consider raising the score.", " Thank you for your comments; hopefully the following discussion can clear up your concerns. For the notation issue, we have already tried our best to make all arguments clear and consistent. As the proposed model is indeed the interpolation between empirical risk minimization (with regularization) and the vanilla DRO model, it is necessary to invoke different symbols to distinguish them. We don't believe we introduce superfluous definitions. If you have any other concrete suggestions that can help us further improve it, we are happy to incorporate them in our manuscript. We now provide responses to all questions you have raised. \n\n**Q1:** Theorem 3.7 requires a convex quadratic function and linear mapping. I think these two constraints are quite strict in the adversarial learning problem; how can you guarantee these? Can one relax these two conditions, and if so, how?\n\n**Response:** Thanks for your question. For the adversarial learning problem, we indeed rely on Proposition 3.6 instead of Theorem 3.7, which is general enough to be applied to deep neural networks. The reformulation is tight without any relaxation here. In Sections 4.2 and 5.2, we also give the computational scheme and supportive experimental results to corroborate our theoretical findings. To better highlight our theoretical contributions, it would be better to change **Proposition 3.6** to **Theorem 3.6** to let the reader notice that this reformulation result is general enough. \n\n With this discussion, we would like to re-emphasize our contributions here. First, our paper reveals the equivalence between the Tikhonov regularization and the optimal transport robustification with exact martingale constraints (thus showing that Tikhonov regularization is distributionally robust optimal in a precise sense). 
Second, the martingale DRO model motivated us to develop a new *perturbed* martingale DRO model which leads to a new class of robustification (or equivalently, a new class of regularization) for the adversarial learning problem; see lines 73-82. We hope that the clarification that we emphasize may provide the reviewer with a better understanding of our contributions. \n\n**Q2:** In the first experiment of section 5.2, the intuitive explanation of the results is not very enlightening if not meaningless to the reader. Is there a statistical measure to estimate the ensuing results?\n\n**Response:** We respectfully disagree with the reviewer about the purpose of this experiment. To clarify: our perturbed martingale DRO model is the interpolation between ERM with Tikhonov regularization (with $\\epsilon = 0$) and the vanilla optimal transport DRO model (with $\\epsilon = \\infty$). Moreover, it is well-known that ERM suffers from overfitting while the vanilla optimal transport DRO model may become too over-conservative. We believe that the toy 2-dimensional example in Figure 2 explicitly illustrates that our model can alleviate the disadvantages of both extremes and achieve better model performance. In Figure 2, it is clear that the classification boundary generated by the vanilla optimal transport DRO (green) misclassifies some outer points (gray) due to its over-conservativeness. Also, the classification boundary generated by ERM with Tikhonov regularization (blue) misclassifies some inner points (dark gray) due to overfitting. We can observe that our proposed perturbed martingale DRO strikes a balance between these two models and achieves 100\\% test accuracy. We think this example is quite intuitive to help readers better understand the ``interpolation\" effects of the parameter $\\epsilon$ (e.g., ERM with Tikhonov regularization at $\\epsilon =0$ and vanilla optimal transport DRO at $\\epsilon = + \\infty$). \n\n Regarding the statistical measure, as we discussed, our model obtains the highest test accuracy. Besides that, it is clear that the proposed martingale model will have a nearly diagonal confusion matrix. To further demonstrate the effectiveness of our model, we provide the confusion matrices of the three models below: \n\n| Confusion Matrix | | ERM | | Martingale DRO | | DRO | |\n| ---------------- | -- | ---- | ---- | -------------- | ---- | ---- | ---- |\n| TP | FN | 1093 | 0 | 1093 | 0 | 1837 | 66 |\n| FP | TN | 5 | 2902 | 0 | 2907 | 0 | 2907 |\n", " Thanks for your positive comments. Please find detailed clarifications of your questions below. \n\n**Q1:** How do you get the last equality of the proof of Proposition 3.2 in the main text? Can you explain this step clearly?\n\n**Response:** For the occurrence of equality, we need to construct $\\Delta = c(\\Delta) M^{-1} \\beta$, $\\pi$-a.s., where $c(\\Delta) \\in \\mathbb{R}$ satisfies $E_{\\pi}[c(\\Delta) | X] = 0$; a proper scaling will then lead to $E_{\\pi} [\\\\|\\Delta\\\\|_{M}^2] = \\rho$.\n\n**Q2:** The fact that we can choose the cost function in OT to be adjusted by a positive definite matrix is nice. But in practice, how do we choose $M$ rather than the identity matrix? It would be helpful to have some sentences to explain it around lines 90-93.\n\n**Response:** Thanks for your suggestion. Usually, the positive definite matrix $M$ is supposed to incorporate some prior information. 
For example, in [3, Section 4.2, equation (24)], the authors tune $M$ using implied volatility to include additional market information in the ambiguity set. Besides, methods developed in metric learning can also be applied to the selection of $M$. Based on your suggestion, we will definitely include a detailed explanation in the modified version. \n", " We appreciate and thank the reviewer for their positive comments. We now provide responses to all questions you have raised.\n\n**Q1:** The derivation of the duality is a bit fast in the paper. In particular I am not sure how the martingale constraint is handled. It seems that in the proof you assume the adversary distribution is discrete (similar to the empirical distribution), so that the martingale constraint can be handled, and to obtain a duality result as for Theorem 2.3. Is it indeed the case? If so, isn't it a restriction to assume it is discrete? The adversary distribution could have a continuous density for instance.\n\n**Response:** We would like to clarify that the only discrete assumption we make is about the nominal distribution $\\hat{\\mathbb{P}}$: we assume that $\\hat{\\mathbb{P}}$ is the empirical measure supported on the training data and $\\hat{\\mathbb{P}}$ is used as the center of the ambiguity set. The adversary can choose any distribution $\\mathbb{Q}$ in the ambiguity set containing all Borel probability measures supported on $\\mathcal{X}$ that are of a Wasserstein distance less than or equal to $\\rho$ from the center distribution $\\hat{\\mathbb{P}}$. This ambiguity set is non-parametric: it contains continuous, discrete and mixture distributions. We refer the reviewer to lines 456--459 in Appendix B for further detailed clarification. Due to the page limit, we cannot include all proof details of the strong duality results in the main text; however, all the proof details can be found in Appendix B.\n\n**Q2:** There seems to be a typo at line 147. \n\n**Response:** There is no typo here. We have discussed the inequality at lines 147-148. The inequality holds because the Wasserstein ambiguity set with martingale constraints is a subset of the vanilla Wasserstein ambiguity set (with**out** the martingale constraints). The inequality now follows by basic rules of optimization. \n\n**Q3:** Concerning Proposition 3.6, I wonder how this result behaves when taking asymptotics. For instance if epsilon goes to infinity, we should retrieve Proposition 2.2, but it does not seem obvious to retrieve the square root of the MSE. Could you discuss why we should retrieve such a result?\n\n**Response:** \n
\n\n**Q4:** Could you provide the definition of adversarial RMSE in the main body, so that we are sure what is plotted in figure 1?\n\n**Response:** The adversarial RMSE is defined as\n $$\n \\textrm{RMSE} = \\sqrt{\\frac{1}{N}\\sum_i (\\hat{\\beta}^Tx_{adv}^{(i)}-y_{adv}^{(i)})^2},\n $$\n where $x_{adv}^{(i)}$ are the generated adversarial samples based on test samples $x_{test}^{(i)}$ via Fast Gradient Method (FGM) and Projected Gradient Descent (PGD) and $\\hat{\\beta}$ is the esimator returned by the proposed method.\n\n**Q5:** At line 242 you say that \"without loss of generality\" we can assume that $M=I$. However I see no argument to justify it. Is it a simplification assumption, or can one prove the loss and gradients is the same for any covariance M?\n\n**Response:** When $M$ is not $I$, then $\\\\|\\beta\\\\|_{M^{-1}} = \\\\|M^{-\\frac{1}{2}} \\beta\\\\|_2$, which means we just need to insert $M^{-\\frac{1}{2}}$ in front of the corresponding $\\beta$. We let $M=I$ to make the formula at line 243 to look more clear.\n\n**Q6:** It would be interesting to track time in the numerical experiments. The penalty seems much more complicated, thus do the benefits of the performance outweigh the extra computation complexity of the model?\n\n**Response:** Thanks for your suggestion. We provide the per-iteration wall-clock time comparison with the vanilla DRO model as below, which shows that the algorithm we propose is fairly efficient and does not cause additional computational burden. \n\n| Training time per epoch (s) | DRO | Martingale DRO |\n| --------------------------- | -------- | -------------- |\n| Average | 1.66 | 1.73 |\n| Variance | 1.90E-03 | 2.10E-03 |", " The paper show the equivalence of Optimal Transport Distributional Robust Optimization and Tikhonov regularization under martingale constraints. Inspired by this observation, the paper further introduce some new regularization techniques, both for linear regressions and black-box models such as neural networks. **Strength**: \n\n1. The presentation in this paper is good, all the proofs in Appendix are carefully written. \n\n2. The proposed regularization method works better than the old OT-DRO method, and the theoretical support is solid.\n\n**Weakness**: I think this is a strong paper with no major weakness. \n 1. How do you get the last equality of the proof of Proposition 3.2 in the main text? Can you explain this step clearly? The Holder's inequality provides that \n$$ (\\beta^{T} \\Delta)^2 \\leq |\\Delta|_{M}^{2} |\\beta|_{M^{-1}}^{2}, $$ \nwith the equality occurs if $\\Delta = M^{-1} \\beta$ (up to a constants). After that, we need to construct a $\\pi$ (or a sequence of $\\pi$) to attain this equality and satisfy all the imposed constraints. Although the other longer proof in the Appendix is correct, I am still confused about this short proof in the main text. \n\n2. The fact that we can choose the cost function in OT to be adjusted by a positive definite matrix $M$ is nice. But in practice, how do we choose $M$ rather than identity matrix $I$? It would be helpful to have some sentences to explain it around line 90-93. The paper is well written and has no major limitations. ", " This paper proposes a connection between Tikhonov regularization and optimal transport map with exact martingale constraints. This finding provides an explanation and guidance to the existing regularization in DRO problem. The authors also introduced a meaningful implementation to solve the problem at hand. 
Strengths: The paper appears to be theoretically sound, firstly establishing a connection between regularization and martingale constraints, thereby building a potential basis for adversarial learning.\nWeaknesses: The notation in the paper is rather hard to track, e.g., L_\\beta, l(f_\\beta), and f_\\beta are three different things. The experiments are rather thin. It would be more useful to show some comparisons with existing regularization approaches. Please also see the questions below:\n a)\tTheorem 3.7 requires a convex quadratic function and linear mapping. I think these two constraints are quite strict in the adversarial learning problem; how can you guarantee these? Can one relax these two conditions, and if so, how? \nb)\tIn the first experiment of section 5.2, the intuitive explanation of the results is not very enlightening if not meaningless to the reader. Is there a statistical measure to estimate the ensuing results?\n\n The model should also be tested on a more complex dataset, thereby providing a deeper understanding of the effectiveness and improvement of the model.", " The authors focus on the problem of distributionally robust optimization, which consists in training a model, assuming the learned distribution lies in a ball around the empirical dataset (w.r.t. some metric). They consider the Wasserstein distance as this metric, and propose to add another martingale constraint on the true distribution. It is motivated by imposing a provably higher dispersion on the learned distribution. Inspired by a previous work proving the equivalence between the distributionally robust problem in some setting and a regularized MSE minimization problem, they prove an analog result on their formulation. The martingale constraint is penalized using a Mahalanobis norm. They propose a subgradient algorithm to estimate the model. They provide experiments on synthetic and MNIST data. The contributions of this work are interesting. The equivalence with a regularized MSE problem is interesting. However, I am not an expert of distributionally robust optimization, so I cannot precisely assess the novelty of the authors' contributions. I provide below some remarks.\n\n1) The derivation of the duality is a bit fast in the paper. In particular I am not sure how the martingale constraint is handled. It seems that in the proof you assume the adversary distribution is discrete (similar to the empirical distribution), so that the martingale constraint can be handled, and to obtain a duality result as for Theorem 2.3. Is it indeed the case? If so, isn't it a restriction to assume it is discrete? The adversary distribution could have a continuous density for instance.\n\n2) There seems to be a typo at line 147.\n\n3) Concerning Proposition 3.6, I wonder how this result behaves when taking asymptotics. For instance if \\epsilon goes to infinity, we should retrieve Proposition 2.2, but it does not seem obvious to retrieve the square root of the MSE. Could you discuss why we should retrieve such a result?\n\n4) Could you provide the definition of adversarial RMSE in the main body, so that we are sure what is plotted in figure 1?\n\n5) At line 242 you say that \"without loss of generality\" we can assume that M=I. However I see no argument to justify it. Is it a simplification assumption, or can one prove the loss and gradients are the same for any covariance M?\n\n6) It would be interesting to track time in the numerical experiments. 
The penalty seems much more complicated; thus, do the benefits in performance outweigh the extra computational complexity of the model? Could you please provide an answer to the above remarks (1), (3), (4), (5) and (6)? The authors did not address societal impact, but I think it is limited for this work." ]
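For reference, here is a minimal NumPy sketch of the adversarial-RMSE computation defined in the rebuttal above (Q4), using an FGM-style perturbation of a linear model. The attack budget `eps`, the toy data, and the helper name are illustrative assumptions rather than the authors' code; PGD would iterate a projected version of the same step.

```python
import numpy as np

def fgm_adversarial_rmse(beta_hat, X_test, y_test, eps=0.1):
    """RMSE of a linear model on FGM-perturbed test points (a sketch).

    For the squared loss (beta^T x - y)^2, the input gradient is
    2 * (beta^T x - y) * beta, and FGM moves each point by eps * sign(grad).
    Labels are kept fixed, i.e. y_adv = y_test is assumed here.
    """
    residual = X_test @ beta_hat - y_test                 # shape (N,)
    grad = 2.0 * residual[:, None] * beta_hat[None, :]    # shape (N, d)
    X_adv = X_test + eps * np.sign(grad)                  # single FGM step
    adv_residual = X_adv @ beta_hat - y_test
    return np.sqrt(np.mean(adv_residual ** 2))

# Toy usage with synthetic data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
beta = rng.normal(size=5)
y = X @ beta + 0.1 * rng.normal(size=100)
print(fgm_adversarial_rmse(beta, X, y, eps=0.1))
```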
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "_IMNYsEX7n", "VisaTnf5oj", "_IMNYsEX7n", "thxpobdwEgf", "KkC59h5nRif", "wsbWNGZYin", "X-a29-DTyc4T", "7D9AJBIMTJI", "KTvkpdojJ0N", "ZU-ioDR86v5", "ICrAzen_8AmG", "nips_2022_EQgPNPwREa", "ZU-ioDR86v5", "ZU-ioDR86v5", "G6rjjqEpOA3", "0OE_OLvTHwa", "nips_2022_EQgPNPwREa", "nips_2022_EQgPNPwREa", "nips_2022_EQgPNPwREa" ]
nips_2022_4maAiUt0A4
Boosting Out-of-distribution Detection with Typical Features
Out-of-distribution (OOD) detection is a critical task for ensuring the reliability and safety of deep neural networks in real-world scenarios. Different from most previous OOD detection methods that focus on designing OOD scores or introducing diverse outlier examples to retrain the model, we delve into the obstacle factors in OOD detection from the perspective of typicality and regard the feature's high-probability region of the deep model as the feature's typical set. We propose to rectify the feature into its typical set and calculate the OOD score with the typical features to achieve reliable uncertainty estimation. The feature rectification can be conducted as a plug-and-play module with various OOD scores. We evaluate the superiority of our method on both the commonly used benchmark (CIFAR) and the more challenging high-resolution benchmark with large label space (ImageNet). Notably, our approach outperforms state-of-the-art methods by up to 5.11% in the average FPR95 on the ImageNet benchmark.
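To make the rectification idea in the abstract concrete, the following is a minimal PyTorch-style sketch of truncating penultimate features into a typical set and scoring with the energy score. The statistics `mu`/`sigma` (which the paper takes from a BN layer), the width `lam`, and the classifier head are illustrative assumptions, not the exact implementation.

```python
import torch

def rectify_features(feat, mu, sigma, lam=1.0):
    """Clamp features into the typical set [mu - lam*sigma, mu + lam*sigma]."""
    lower, upper = mu - lam * sigma, mu + lam * sigma
    return torch.maximum(torch.minimum(feat, upper), lower)

def negative_energy_score(logits, temperature=1.0):
    """Negative energy T * logsumexp(logits / T); higher means more ID-like."""
    return temperature * torch.logsumexp(logits / temperature, dim=-1)

# Illustrative usage: rectified features -> classifier head -> OOD score.
feat = torch.randn(8, 512)                 # assumed penultimate features
mu, sigma = feat.mean(0), feat.std(0)      # stand-ins for BN running statistics
head = torch.nn.Linear(512, 1000)          # assumed classification head
scores = negative_energy_score(head(rectify_features(feat, mu, sigma)))
```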
Accept
This paper received unanimous recommendations of acceptance. Concerns were expressed regarding the similarity between the proposed method and ReAct, but the concerns were addressed by the authors. The AC agrees with the reviewer regarding the contribution of this paper and recommends acceptance.
train
[ "I2EShbkd1zq", "QfXwhWynOJ", "UHcqo1pgi6", "a__knB5nRhx", "8WbO_cs3487", "U7aO4LU03uq", "lVluesMW8_", "X1viltWeRcF", "tJuTeBTVJMj", "55jL2nHVpRB", "gekuhA1C0s", "V0t8Vts3Lin", "I63AIRc1IJE", "KNxNbCXW8Z9", "b1mqeKlgKQ6", "dolKJVYH_g", "SaLBXfH9Ko_", "duB0j2MLET", "xoWdLbKcVo", "0qaPlL-Gej", "QmWTM9Mmsux", "lPvDOez8Vt5", "2l43FfUKsw", "IntM5JJIdZ9" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for your appreciation of our paper and the valuable comments. Best regards.", " Thanks to the authors for their thorough answers to my comments and to all other reviewers. I think the responses and the modifications to the manuscript cover my questions appropriately and I found some of the answers to other reviewers equally interesting. I therefore consider this manuscript should be accepted.", " Thanks again for your precious review time and valuable comments. Best regards!", " I appreciate that the authors took the time to carry out the analysis suggested by all reviewers. After careful reading, I have decided to increase my score as a result. This is a strong paper with valuable contributions and insights.", " We really appreciate your valuable comments. Thank you again for helping us improve the manuscript. We have updated the revision.\nBest wishes!", " A potential minor edit to the sentence: \"Although truncating features into a typical set can improve OOD detection, a potential negative impact of the proposed process is that it inherently introduces a bias and causes some information loss which may be important to the model in real-world scenarios.\".", " Thank you very much for your valuable review. We are grateful that you appreciate our paper.", " The authors have addressed my concerns and I appreciate the efforts the authors made to refine the paper. Though I would like this paper to be accepted, I am willing to hear about the other reviewers' further opinions, especially reviewer juXE, whose score is divergent.", " We thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we hope to address your concerns. We hope to further discuss with you whether or not your concerns have been addressed appropriately. Please let us know if you have additional questions or ideas for improvement.", " Thank you for your helpful suggestion. In the revision, we have replaced the sentence \"We anticipate no negative consequences of our work.\" with \"Regarding potential negative impact, truncating the features into the typical set can improve the OOD detection but will introduce a bias and inherently cause some information loss which may be important for the model in the real-world scenario.\" \n", " Thank you for the thorough responses to my questions/concerns and the additional experiments, particularly the discussion on the limitations in Table 2/Appendix M and that on the calibration of the softmax uncertainties! My only remaining concern is in the limitations section. The sentence \"We anticipate no negative consequences of our work.\" should be replaced in my opinion. In my original review, I recommended some potential additions to this section:\n\n> The authors have done a good job in listing limitations of the BATS method. However, the addition of some potential negative societal implications would be helpful. For example, by truncating the features, certain biases in the data learned by the pre-trained model may be amplified. Furthermore, the process of truncation will inherently cause some information loss which may be crucial to model performance during deployment (building on the hyperparameter tuning discussion in the paper).\n\n", " We thank all of the reviewers for helping us improve the paper. We uploaded a revised version of our manuscript and marked the major changes in blue. In short, \n\n1. We have carefully checked the typos and improved the writing in the revised manuscript.\n\n2. 
We have added some discussion for the results in Tab. 2 in Appendix M.\n\n3. We have added some discussion on the differences between BATS and ReAct in Appendix N.\n\n4. To select the features' typical set without the assistance of the BN layers, we provide another simple way to extend our method to models without BN in Appendix O. \n\n5. We have added an experiment to show that BATS can surpass two other recent OOD detection methods (KNN Score (ICML 2022) and ViM (CVPR 2022)) in Appendix P.\n\n6. We find that our BATS can also improve the calibration of the model and show the results in Appendix Q.\n\nThank you all again for your valuable and insightful suggestions. Please let us know if you have additional questions or ideas for improvement.\n\nKind regards, Authors", " >**Q6:** Besides the concerns mentioned above, I don't understand why BATS can boost OOD detection theoretically. Why variance reduction boosts OOD detection isn't clear. Does it generally improve classifiers or specifically solve the OOD detection problem?\n\n**A6:** Thanks for your comments. \nIntuitively, as shown in Fig. 1, there exists an overlap between the distributions of the scores for ID and OOD examples. A smaller overlap indicates better OOD detection performance. \nBATS constrains the variance of the distribution of the OOD score, which reduces the overlap between the ID and OOD examples and improves their separability. Moreover, we added an illustration (Fig. 10) in Appendix I to show the influence of our BATS on different OOD detection methods. These detection methods aim to assign higher scores to the ID examples and lower scores to the OOD examples. BATS can reduce the variance of the scores and the overlap between the distributions of ID and OOD examples, which benefits OOD detection.\n\nMathematically, as we analyze in the preliminaries and Appendix A of our paper, OOD detection is a single-sample hypothesis testing problem.\nLet $X$ be the input space. Suppose that the in-distribution data $D_{in}$ is drawn from a distribution $P_{0}$ defined over $X.$\nGiven a test input $x \in X$, the problem of out-of-distribution detection can be formulated as a single-sample hypothesis testing task:\n\n$H_0: x \sim P_0, \quad \text{vs.} \quad H_1: x \nsim P_0. $\n\nHere the null hypothesis $H_0$ implies that the test input $x$ is an in-distribution sample.\nThe goal of OOD detection here is to design criteria based on $D_{in}$ to determine whether $H_0$ should be rejected. OOD detection tasks need to determine a reject region $R$ such that for any test input $x \in X$, the null hypothesis is rejected if $x \in R.$\nGenerally, the reject region $R$ is formulated by a test statistic and a threshold.\nLet $f$ be a model pre-trained on $D_{in}$, which is used to predict the class label of an input sample. One can use the model $f$ or a part of $f$ (e.g., the feature extractor) to construct a test statistic (also known as the OOD score in the OOD detection literature) $T(x; f)$, where $x$ is the test input. Then the reject region can be written as $R = \{x: T(x;f) \leq \gamma\}$, where $\gamma$ is the threshold. \nBecause the in-distribution $P_0$ is unknown, the reject region is determined by the empirical distribution of the test statistic $T(x; f)$ over the ID data. If the test statistic $T(x; f)$ over the ID data has a large variance and contains many unusual values, the reject region may be underestimated. 
By reducing the variance, BATS constrains the uncertainty of the test statistic and can improve the estimation accuracy of the reject region. \n\n***\n**References**\n\n[1] Nalisnick, et al. \"Detecting Out-of-Distribution Inputs to Deep Generative Models Using a Test for Typicality.\" (2019).\n\n[2] Choi, H., et al. \"WAIC, but Why? Generative Ensembles for Robust Anomaly Detection.\" (2018).\n\n[3] Shannon, C. E. \"A Mathematical Theory of Communication.\" (1948).\n\n[4] Yang, Jingkang, et al. \"Generalized Out-of-Distribution Detection: A Survey.\" (2021).\n\n[5] Sun, Yiyou, et al. \"Out-of-Distribution Detection with Deep Nearest Neighbors.\" (2022).\n\n[6] Wang, Haoqi, et al. \"ViM: Out-of-Distribution with Virtual-Logit Matching.\" (2022).\n\n[7] Haroush, Matan, et al. \"A Statistical Framework for Efficient Out of Distribution Detection in Deep Neural Networks.\" (2021).\n", " >**Q4:** As previous research showed that BN increases adversarial vulnerability, it is not clear whether the proposed approach solves the problem of BN or has the ability to detect OOD samples.\n\n**A4:** Empirically, we perform extensive evaluations and establish superior performance on both the large-scale ImageNet OOD detection benchmark and the commonly used CIFAR benchmarks. We focus on the OOD detection task rather than adversarial vulnerability tasks. Actually, our approach cannot improve the adversarial robustness, but it can slightly improve the test accuracy and the robustness of the pre-trained models (as shown in Appendix H). Regarding the influence of adversarial vulnerability on OOD detection, we found that the normally pre-trained model and the adversarially trained robust models perform similarly in detecting the OOD examples, and our BATS surpasses the existing methods (we illustrate the performance of different models in Fig. 8 in our paper).\n\n****\n>**Q5:** The authors didn't compare with the SOTA of OOD detection [5,6] on large-scale ImageNet.\n\n**A5:** Thanks for your suggestion. The recently published methods KNN [5] and ViM [6] are very interesting works. According to your suggestion, we have added the comparison in Appendix P in the revision. \nKNN is a nearest-neighbor-based OOD detection method, which computes the k-th nearest neighbor (KNN) distance between the embedding of a test input and the embeddings of the training set to determine whether the input is OOD or not. ViM combines the class-agnostic score from the feature space and the in-distribution class-dependent logits to calculate the OOD score. The following table shows the OOD detection performance of different methods with ResNet-50 on the ImageNet benchmark. Our BATS outperforms the existing methods by a large margin. KNN explores and demonstrates the efficacy of the non-parametric nearest-neighbor distance for OOD detection, but its performance is worse than that of GradNorm and ReAct. 
ViM performs well on the OOD dataset Textures, but when using SUN as the OOD dataset, its performance is even worse than that of the simple baseline MSP.\n\n| Method | iNaturalist | | SUN | | Places | | Textures | | Average | |\n|:--------:|:-----------:|:------:|:------:|:------:|:------:|:------:|:--------:|:------:|:-------:|:------:|\n| | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC |\n| MSP | 51.44 | 88.17 | 72.04 | 79.95 | 74.34 | 78.84 | 54.90 | 78.69 | 63.18 | 81.41 |\n| ODIN | 41.07 | 91.32 | 64.63 | 84.71 | 68.36 | 81.95 | 50.55 | 85.77 | 56.15 | 85.94 |\n| Energy | 46.65 | 91.32 | 61.96 | 84.88 | 67.97 | 82.21 | 56.06 | 84.88 | 58.16 | 85.82 |\n| GradNorm | 23.73 | 93.97 | 42.81 | 87.26 | 55.62 | 81.85 | 38.15 | 87.73 | 40.08 | 87.70 |\n| ReAct | 17.77 | 96.70 | 25.15 | 94.34 | 34.64 | 91.92 | 51.31 | 88.83 | 32.22 | 92.95 |\n| KNN | 59.00 | 86.47 | 68.82 | 80.72 | 76.28 | 75.76 | 11.77 | 97.07 | 53.97 | 85.01 |\n| VIM | 77.34 | 86.46 | 90.71 | 73.80 | 89.64 | 72.15 | 16.63 | 96.37 | 68.58 | 82.20 |\n| Ours | 12.57 | 97.67 | 22.62 | 95.33 | 34.34 | 91.83 | 38.90 | 92.27 | 27.11 | 94.28 |\n", " >**Q3:** The major limitation of the proposed approach is that it can be applied only on BN (please correct me if I misunderstand). Modern architectures utilize better normalization methods other than BN, such as LN and GN, and thus the interest and impact of the proposed method are greatly limited.\n\n**A3:** In this paper, we provide new insights into classification-based OOD detection from the perspective of typicality and propose to rectify the features into the features' typical set. Regarding how to select the features' typical set, we design a concise and effective approach that does so with the assistance of BN layers in our paper. To select the features' typical set without the assistance of the BN layers, we provide another simple way to extend our method to models without BN.\n\nTo be specific, we directly use a set of training images to estimate the mean $\mu$ and the standard deviation $\sigma$ of the features (extracted by the penultimate layer of the model) at each dimension. In this experiment, we randomly choose 1500 images from the training dataset of ImageNet. Then we rectify the features into the interval $[\mu-\lambda\sigma, \mu+\lambda\sigma]$ and use these typical features to calculate the OOD scores. \nWe name this method the Typical Feature Estimation Method (TFEM) and show the results in the following table. The experiment is performed on the ImageNet benchmark. The pre-trained models are ResNet-50 and ViT, and $\lambda$ is set to 1. Rectifying the features into the typical set with TFEM can greatly improve the performance of the existing OOD detection methods both on the model with BN layers (ResNet-50) and on the model without BN layers (ViT).\n\nThis experiment demonstrates the effectiveness of the typical features in OOD detection, which is consistent with the analysis in our paper. We believe there exists a method that can estimate the features' typical set better. In this paper, BATS has already established state-of-the-art performance on both the large-scale and small-scale OOD detection benchmarks. 
We have added this experiment in Appendix O.\n\n| Model | Method | iNaturalist | | SUN | | Places | | Textures | | Average | |\n|:--------:|:------------:|:-----------:|:------:|:------:|:------:|:------:|:------:|:--------:|:------:|:-------:|:------:|\n| | | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC |\n| ViT | MSP | 18.72 | 96.09 | 56.02 | 85.92 | 59.30 | 84.85 | 51.08 | 84.90 | 46.28 | 87.94 |\n| | MSP+TFEM | 7.12 | 98.34 | 42.62 | 90.36 | 48.71 | 88.64 | 41.29 | 88.48 | 34.94 | 91.46 |\n| | ODIN | 12.72 | 97.15 | 40.04 | 90.40 | 50.46 | 87.10 | 41.08 | 89.30 | 36.08 | 90.99 |\n| | ODIN+TFEM | 8.43 | 98.28 | 33.37 | 93.00 | 44.80 | 89.64 | 39.98 | 89.87 | 31.65 | 92.70 |\n| | Energy | 6.11 | 98.67 | 36.83 | 91.82 | 45.26 | 89.37 | 31.86 | 91.78 | 30.02 | 92.91 |\n| | Energy+TFEM | 3.59 | 99.08 | 30.27 | 93.70 | 41.53 | 90.51 | 31.13 | 92.13 | 26.63 | 93.86 |\n| | GradNorm | 7.66 | 98.43 | 48.36 | 85.28 | 66.38 | 72.85 | 48.79 | 80.76 | 42.80 | 84.33 |\n| | GradNorm+TFEM | 4.05 | 98.95 | 30.23 | 93.37 | 41.07 | 91.20 | 31.67 | 91.82 | 26.76 | 93.84 |\n| ResNet50 | MSP | 51.44 | 88.17 | 72.04 | 79.95 | 74.34 | 78.84 | 54.90 | 78.69 | 63.18 | 81.41 |\n| | MSP+TFEM | 38.50 | 92.77 | 66.53 | 84.47 | 70.59 | 82.13 | 58.40 | 86.71 | 58.51 | 86.52 |\n| | ODIN | 41.07 | 91.32 | 64.63 | 84.71 | 68.36 | 81.95 | 50.55 | 85.77 | 56.15 | 85.94 |\n| | ODIN+TFEM | 28.40 | 94.67 | 52.34 | 89.47 | 62.13 | 85.14 | 37.27 | 92.35 | 45.04 | 90.41 |\n| | Energy | 46.65 | 91.32 | 61.96 | 84.88 | 67.97 | 82.21 | 56.06 | 84.88 | 58.16 | 85.82 |\n| | Energy+TFEM | 20.29 | 96.24 | 53.98 | 86.85 | 43.37 | 90.90 | 38.24 | 92.22 | 38.97 | 91.55 |\n| | GradNorm | 23.73 | 93.97 | 42.81 | 87.26 | 55.62 | 81.85 | 38.15 | 87.73 | 40.08 | 87.70 |\n| | GradNorm+TFEM | 11.88 | 97.83 | 26.24 | 95.00 | 40.46 | 90.77 | 25.05 | 94.85 | 25.91 | 94.61 |", " Thank you for your thoughtful comments. Below we address the feedback and comments in detail. Please feel free to let us know if you have any further questions about the paper. We will try our best to address your concerns.\n\n***\n>**Q1:** My first impression of this paper is that the proposed approach looks like a slight modification of ReAct. Although BATS outperforms ReAct in experiments, the operation and the mathematical analysis look very similar. In my understanding, BATS adaptively estimates the parameter c in ReAct.\n\n**A1:** The similarity between our BATS and ReAct is that both methods are used to improve the performance of the existing OOD scores. In the following, we discuss the differences between BATS and ReAct from three aspects.\n\nFirst, the motivations of our BATS and ReAct are different. ReAct hypothesizes that the mean activation of OOD data has significantly larger variations across units and is biased towards having sharp positive values, while the activation of the ID data is well-behaved with a near-constant mean and standard deviation. Thus, ReAct assumes that the truncation can rectify the activations of the OOD examples and preserve the activations of the in-distribution data. However, this hypothesis does not always hold, as shown in Fig. 15 in Appendix N. \nThe distribution of the deep features after batch normalization is consistent with the Gaussian distribution. Our BATS hypothesizes that it may be hard for deep models to model the extreme features, but they can provide reliable estimates for the typical features. This is because extreme features are exposed to the training process with a low probability. 
We propose to rectify the features into the typical set and calculate the OOD scores with the typical features.\n\nSecond, the mathematical analyses of our BATS and ReAct are different. ReAct theoretically shows that if the OOD activations are more positively skewed, their operation reduces the mean OOD activations more than the ID activations.\nWe analyze the benefit of BATS from the perspective of the bias-variance trade-off. BATS can reduce the variance of the deep features, which contributes to constraining the uncertainty of the test statistic $T(x; f)$ and improving the estimation accuracy of the reject region. Our method aims to estimate the reject region better, and we do not assume whether the OOD data is positively skewed.\n\nThird, our method surpasses ReAct on both the large-scale benchmark (ImageNet) and the small-scale benchmark (CIFAR).\nWe have added some discussion in Appendix N in the revised version to make our idea easier to read.\n\n***\n>**Q2:** The authors explain the motivation from the perspective of typical features, but typicality is not novel in OOD detection [1,2].\n\n**A2:** Thanks for your comments. It's true that the typical set was proposed by Shannon in 1948 [3], which indicates the set whose elements have an information content sufficiently close to the expected information. However, the typicality introduced in these density-based methods [1,2] and in our proposed BATS is used for different purposes. The typical sets in these density-based OOD detection methods indicate sets of samples whose expected log-likelihood approximates the model's entropy. These density-based OOD detection methods distinguish the OOD examples by estimating whether the examples lie in the typical set of the model. \nIn contrast, the typical features in our paper indicate the features that fall into the high-probability regions. We rectify the features into the typical set in order to reduce the variance of the test statistic and improve the estimation accuracy of the reject region. Our method aims to improve the performance of the classification-based OOD detection methods from the perspective of typicality.\n\nMoreover, the density-based methods [1,2] need to train generative models, which is time-consuming and can hardly be adopted in large-scale settings. The performance of the density-based methods can often lag behind the classification-based approaches [4]. \n", " Thank you for your positive assessment and helpful feedback. We appreciate the time and attention you spent on reviewing our paper. \n\n***\n>**Q1:** Uncertainty measurement is important in OOD cases. It would be interesting to analyze the results of the pretrained models from the perspective of uncertainty quantification, such as comparing the Brier score or expected calibration error (ECE).\n\n**A1:** Thanks for your suggestion. We found that BATS can improve the calibration of the pre-trained model and reduce the expected calibration error (ECE) of the pre-trained ResNet-50 from 3.56% to 2.12%. We have added the reliability diagram and the confidence histogram of the pre-trained ResNet-50 and the ResNet-50 with our BATS on ImageNet in Appendix Q in the revision.\n\n***\n>**Q2:** What are the choices for the reject region threshold $\gamma$; is it always the best value or a fixed value?\n\n**A2:** Thanks for your comments. 
Following the standard settings in existing works, the reject region threshold is a fixed value, which can correctly identify 95% of the in-distribution examples as in-distribution (i.e., the true positive rate on in-distribution (positive) examples is 95%).", " We sincerely appreciate your positive assessment of our paper and the encouraging comments. We address specific questions below.\n\n***\n\n>**Q1:** The comparison with the related work is brief and there is little discussion of the similarities and differences. Methods such as ReAct seem to be close to the proposed approach since they target \"anomalous features\".\n\n**A1:** Thanks for your suggestion. Due to page limitations, we place some related literature in Appendix G. According to your suggestion, we have added some discussion on the differences between BATS and ReAct in Appendix N.\n\nFirst, the motivations of our BATS and ReAct are different. ReAct hypothesizes that the mean activation of OOD data has significantly larger variations across units and is biased towards having sharp positive values, while the activation of the ID data is well-behaved with a near-constant mean and standard deviation. Thus, ReAct assumes that the truncation can rectify the activations of the OOD examples and preserve the activations of the in-distribution data.\nHowever, this hypothesis does not always hold, as shown in Fig. 15 in Appendix N. The distribution of the deep features after batch normalization is consistent with the Gaussian distribution. Our BATS hypothesizes that it may be hard for deep models to model the extreme features, but they can provide reliable estimates for the typical features. This is because extreme features are exposed to the training process with a low probability. We propose to rectify the features into the typical set and calculate the OOD scores with the typical features.\n\nSecond, the mathematical analyses of our BATS and ReAct are different. ReAct theoretically shows that if the OOD activations are more positively skewed, their operation reduces the mean OOD activations more than the ID activations. \nWe analyze the benefit of BATS from the perspective of the bias-variance trade-off. BATS can reduce the variance of the deep features, which contributes to constraining the uncertainty of the test statistic $T(x; f)$ and improving the estimation accuracy of the reject region. Our method aims to estimate the reject region better, and we do not assume whether the OOD data is positively skewed.\n\nThird, our method surpasses ReAct on both the large-scale benchmark (ImageNet) and the small-scale benchmark (CIFAR).\n\n****\n>**Q2:** The presentation could be improved. It requires a revision to correct some typos and grammatical errors. Some images and their texts could be more readable.\n\n**A2:** Thanks for your suggestion. We have carefully checked the typos and improved the writing in the revised manuscript.\n\n****\n>**Q3:** The best results of the proposed method are in the supplementary, which is a bit odd. I would expect those to be discussed in the main paper, and also provide some insight on why the proposed method might work better in some of those methods.\n\n**A3:** In this paper, we hope to provide new insights into OOD detection from the perspective of typicality and show that rectifying the features into the typical set can greatly improve the performance of the OOD scores. Our proposed method is compatible with many test statistics (OOD scores). 
\nExperimentally, we mainly show that applying our method to the commonly used Energy score can achieve state-of-the-art performance. Moreover, Tab. 5 in our paper shows that our method can further improve the performance of other OOD scores; in particular, higher performance can be obtained with more advanced OOD scores (e.g., GradNorm). \n\nWe have added an illustration (Fig. 10) in Appendix I in the revision. \nGradNorm itself performs better than the simple MSP baseline, assigning higher scores to the ID examples and lower scores to the OOD examples. \nApplying BATS to the existing OOD detection methods can reduce the variance of the scores and reduce the overlap between the ID examples and the OOD examples. We think combining our method with a better OOD score can achieve better performance.\n", " >**Q9:** How much do poorly calibrated softmax uncertainties hinder post hoc method effectiveness?\n\n**A9:** We provide an experiment on the influence of the calibration of the softmax uncertainty on OOD detection. Using the temperature scaling method [3], we obtain one overconfident ResNet-50 (Expected Calibration Error (ECE): 11.59%) and one underconfident ResNet-50 (ECE: 14.52%). The ECE of the original ResNet-50 is 3.56%. In the following table, we compare the performance of different OOD detection methods with different models. \"Original\" means using the original ResNet-50, \"Underconfident\" means using the underconfident ResNet-50, and \"Overconfident\" means using the overconfident ResNet-50. The underconfident ResNet-50 performs better than the original ResNet-50 and the overconfident ResNet-50 when using the MSP score, while the underconfident ResNet-50 performs much worse when using the Energy score. The influence of the calibration of the softmax uncertainty on a post hoc method may not be monotonic.\n\n| Method | Model | iNaturalist | | SUN | | Places | | Textures | | Average | |\n|:--------:|:-------------:|:-----------:|:------:|:------:|:------:|:------:|:------:|:--------:|:------:|:-------:|:------:|\n| | | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC | FPR95 | AUROC |\n| MSP | Original | 51.44 | 88.17 | 72.04 | 79.95 | 74.34 | 78.84 | 54.90 | 78.69 | 63.18 | 81.41 |\n| | Underconfident | 43.14 | 91.88 | 66.01 | 83.70 | 69.47 | 81.84 | 61.12 | 83.39 | 59.94 | 85.20 |\n| | Overconfident | 67.23 | 82.56 | 79.28 | 76.50 | 80.73 | 75.67 | 78.87 | 74.28 | 76.53 | 77.25 |\n| Energy | Original | 46.65 | 91.32 | 61.96 | 84.88 | 67.97 | 82.21 | 56.06 | 84.88 | 58.16 | 85.82 |\n| | Underconfident | 60.88 | 88.03 | 63.71 | 84.59 | 71.49 | 81.28 | 57.04 | 84.50 | 63.28 | 84.60 |\n| | Overconfident | 44.98 | 91.65 | 62.31 | 84.77 | 67.38 | 82.29 | 56.54 | 84.66 | 57.80 | 85.84 |\n| GradNorm | Original | 23.73 | 93.97 | 42.81 | 87.26 | 55.62 | 81.85 | 38.15 | 87.73 | 40.08 | 87.70 |\n| | Underconfident | 30.00 | 93.78 | 45.22 | 89.02 | 58.40 | 84.62 | 40.85 | 88.56 | 43.62 | 89.00 |\n| | Overconfident | 33.51 | 91.33 | 50.21 | 84.28 | 63.38 | 77.81 | 45.11 | 85.16 | 48.05 | 84.65 |\n| BATS | Original | 12.57 | 97.67 | 22.62 | 95.33 | 34.34 | 91.83 | 38.90 | 92.27 | 27.11 | 94.28 |\n| | Underconfident | 13.32 | 97.24 | 20.90 | 95.71 | 33.62 | 92.02 | 36.52 | 92.00 | 26.09 | 94.24 |\n| | Overconfident | 16.57 | 97.07 | 36.68 | 93.01 | 46.81 | 89.81 | 38.76 | 92.15 | 34.71 | 93.01 |\n\n****\n>**Q10:** Do you have a hypothesis for why BATS performance is lower than baseline methods on the Tiny-Imagenet OOD dataset in Table 2?\n\n**A10:** Thanks for your comments. 
We answer this question in A6 above. We have added some discussion in Appendix M in the revised version.\n\n****\n>**Q11:** In the limitations, it is mentioned that 'some other information in the model' may be conducive to selecting the feature's typical set. Do you have a hypothesis for what this information might be?\n\n**A11:** First, we think the gradient of the model may be conducive to selecting the feature's typical set because the gradients of the model contain some information about the training data [1]. Second, we think the other parameters of the deep models can also be helpful. For example, the centers of different classes are encoded in the fully connected layer [2], which may be helpful in selecting the typical features. Third, a set of training images can be helpful in calculating the mean and the standard deviation of the features. We have added experiments in Appendix O to show that selecting the features' typical set without the assistance of BN can also greatly improve the performance of the existing OOD detection methods.\n\n***\n\n**References**\n\n[1] Zhu, L., Liu, Z., and Han, S. \"Deep Leakage from Gradients.\" Advances in Neural Information Processing Systems 32, 2019.\n\n[2] Qian, Qi, et al. \"SoftTriple Loss: Deep Metric Learning Without Triplet Sampling.\" Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019.\n\n[3] Guo, Chuan, et al. \"On Calibration of Modern Neural Networks.\" International Conference on Machine Learning, 2017.\n", " Thank you for the positive feedback and helpful suggestions. We answer your questions point-by-point as follows.\n\n****\n>**Q1:** In Lines 3-4, the statement is not necessarily true as there are also methods that focus on better distribution calibration during training (some of these ideas are mentioned later in the paper). I would recommend softening the message of this sentence to be more precise.\n\n**A1:** Thank you for your suggestion. In the revision, we have rewritten the sentence \"Existing OOD detection methods primarily work for designing OOD scores or introducing diverse outlier examples to retrain the model. We delve into the obstacle factors in OOD detection from the perspective of typicality and regard the feature's high-probability region of the deep model as the feature's typical set.\" as \"Different from most previous OOD detection methods that focus on designing OOD scores or introducing diverse outlier examples to retrain the model, we delve into the obstacle factors in OOD detection from the perspective of typicality and regard the feature's high-probability region of the deep model as the feature's typical set.\"\n\n****\n>**Q2:** Language like 'Obviously' (line 47) and 'It is easy to see' (line 168) should generally be avoided in academic writing.\n\n**A2:** Thank you for your suggestion. We have removed these words in the revised version of this paper.\n\n****\n>**Q3:** At the end of Sec. 2, it would be helpful to have a one or two sentence discussion to contextualize the proposed BATS method in the described related work.\n\n**A3:** We have added the sentence \"Different from these methods, our BATS proposes to calculate the OOD scores with the typical features, which benefits the estimation of the reject region and can improve the detection performance.\" following the related work in Sec. 2. \n\n****\n>**Q4:** Variations on the phrase 'We propose to rectify the features into the feature's typical set and then use these typical features to calculate the OOD score.' 
are repeated frequently throughout the paper.\n\n**A4:** Thanks for your suggestion. We have removed some unessential phrases in our revised version.\n\n****\n>**Q5:** The hyperparameter $\lambda$ was not introduced in line 142 when it was first used.\n\n**A5:** Thanks for the helpful comment. The hyperparameter $\lambda$ controls the range of the interval. A larger $\lambda$ indicates a higher probability that features fall in the interval $[\mu-\lambda\sigma, \mu+\lambda\sigma]$. We have added this explanation in the revised version. \n\n****\n>**Q6:** The decreased BATS performance on Tiny-Imagenet OOD detection in Table 2 is not mentioned or discussed.\n\n**A6:** Thanks for pointing this out. We agree that the decreased performance of our BATS on Tiny-Imagenet OOD detection is interesting and needs some discussion. \n We hypothesize that this performance degradation is due to the bias introduced by BATS. By truncating the features, BATS can reduce the variance of the in-distribution examples, which benefits the estimation of the reject region but inherently causes some information loss, which may reduce the performance of the pre-trained models. \n \nTo validate our hypothesis, we tune the bias-variance trade-off via the hyperparameter $\lambda$. As shown in Fig. 14, BATS can indeed reduce the variance of the OOD scores. Choosing a proper $\lambda$, BATS can reduce the overlap between the ID and OOD examples and reduce the FPR95, while a small $\lambda$ hinders the performance of OOD detection. For example, using a larger $\lambda=8$, BATS can achieve a better FPR95 of 15.10% on detecting Tiny-Imagenet using ResNet-18, which is 2.65% better than $\lambda=3$ in our Tab. 2. For the practicability of our method, we set the same hyperparameter when testing different OOD datasets, without adjusting for specific OOD datasets. We have added this discussion in Appendix M in the revised version.\n\n****\n>**Q7:** The paper needs to be proofread for typos. The following is a non-exhaustive list of the typos I found.\n\n**A7:** Thanks very much for pointing out the typos in our paper. We really appreciate your carefulness and conscientiousness. We have carefully checked the typos and improved the writing in the revised manuscript.\n\n****\n>**Q8:** How well do you believe that OOD detection can work post hoc without retraining the model?\n\n**A8:** Post-hoc detection methods can be easily adopted in real-world scenarios and large-scale settings because these methods do not require retraining the model. Fig. 4 and Fig. 7 illustrate the t-SNE plots for the features of ID examples and OOD examples, which show that deep models can extract separable features for the ID and OOD examples. \nBased on this phenomenon, we believe that the deep models know which examples are OOD samples. With the development of interpretability for deep models, a good post-hoc detection method may be designed that achieves excellent detection performance without retraining the model.", " The paper presents a post hoc method for OOD detection in classification models. The key to the proposed approach is to determine the typical feature set from the data and the trained neural network, and use it to compute an existing OOD detection score, such as the energy score. The typical set is computed by truncating the output from a batch normalization layer. The truncation is controlled by a hyperparameter that determines the bias-variance trade-off for the method. 
The proposed BATS method outperforms baseline methods consistently across a variety of experimental settings. Strengths:\n* I really enjoyed reading this paper. It is well written and the ideas are easy to follow. \n* The problem is well-motivated and the literature review does a good job at contextualizing the paper in prior work.\n* The proposed approach is simple, yet appears to be highly effective. The theoretical analysis further provides intuition for the BATS method.\n* Overall, the empirical evaluation of the method is extensive and convincing. The analysis and discussion is thoughtfully constructed, and the effects of the hyperparameter thoroughly ablated.\n* The figures are informative and effectively illustrate the benefits of the proposed approach.\n* The analysis in the appendix was extensive and interesting, providing further support for the claims made in the main paper.\n\nWeaknesses:\n* In lines 3-4, the statement is not necessarily true as there are also methods that focus on better distribution calibration during training (some of these ideas are mentioned later in the paper as well). I would recommend softening the message of this sentence to be more precise.\n* Language like 'Obviously' (line 47) and 'It is easy to see' (line 168) should generally be avoided in academic writing.\n* At the end of Sec. 2, it would be helpful to have a one or two sentence discussion to contextualize the proposed BATS method in the described related work.\n* Variations on the phrase 'We propose to rectify the features into the feature's typical set and then use these typical features to calculate the OOD score.' are repeated frequently throughout the paper.\n* The $\\lambda$ hyperparameter was not introduced in line 142 when it was first used.\n* The decreased BATS performance on Tiny-Imagenet OOD detection in Table 2 is not mentioned or discussed.\n\nThe paper needs to be proofread for typos. The following is a non-exhaustive list of the typos I found:\n\n1. Line 2: 'which raises the attention on out-of-distribution (OOD) detection' is awkward phrasing.\n2. Lines 74-75: 'large sufficiently' should read 'sufficiently large'.\n3. Line 98: 'energe score' should read 'energy score'.\n4. Line 109: 'is provable aligned' should read 'is provably aligned'.\n5. Line 130: 'common-used layer' should read 'commonly used layer'.\n6. Footnote 1: I did not grammatically understand the phrase: 'the pre-training outputs moving average estimators during iterations'. Maybe something like: 'The pre-trained model outputs moving average estimators at each iteration.'?\n7. Line 184: 'a two-side rectified normal distribution' should read 'a two-sided rectified normal distribution'.\n8. There should be a space between the abbreviation and the number in references (i.e., Fig. X, Table Y, Sec. Z).\n9. Line 197: 'Fig.2 illustrate' should read 'Fig. 2 illustrates'.\n10. Line 220: 'verse vice' should read 'vice versa'.\n10. Line 225: 'Recent researches propose' should read 'Recent literature/work proposes'.\n11. Line 235: 'models are standard pre-trained' is grammatically awkward. Maybe something like: 'models are pre-trained in a standard manner'?\n12. Line 238: 'In specific,' should read 'Specifically,'.\n13. Line 246: 'which cost more' should read 'which costs more'.\n14. Line 252: 'The start learning rate' should read 'The starting learning rate'.\n15. Line 284: 'Our BATS can reduce the variance that benefit the OOD detection but also introduce a bias.' 
should read 'Our proposed BATS method can reduce variance, which benefits OOD detection, but can also introduce a bias.'.\n16. Line 285: 'Energy Score (The horizontal lines).' should read 'Energy Score (the horizontal lines)'.\n17. Fig. 5: x-axis labels are missing. This is also the case for some figures in the appendix.\n18. Sec. 6: 'Limitation and societal impact' should read 'Limitations and societal impact'. \n19. Sec. 6: batch normalization is referred to in three different ways in the last paragraph (Batch Normalization, Batch-Norm, BN).\n20. Line 693: 'our method surpass' should read 'our method surpasses'.\n21. The references should be proofread (e.g., to ensure the year is not entered twice in a citation, the conference venue is listed instead of ArXiv when available, etc.). In addition to addressing the above-listed weaknesses, I have the following questions for the authors.\n\n1. How well do you believe that OOD detection can work post hoc without retraining the model? How much do poorly calibrated softmax uncertainties hinder post hoc method effectiveness?\n2. Do you have a hypothesis for why BATS performance is lower than baseline methods on the Tiny-Imagenet OOD dataset in Table 2?\n3. In the limitations, it is mentioned that 'some other information in the model' may be conducive to selecting the feature's typical set. Do you have a hypothesis for what this information might be? The authors have done a good job in listing limitations of the BATS method. However, the addition of some potential negative societal implications would be helpful. For example, by truncating the features, certain biases in the data learned by the pre-trained model may be amplified. Furthermore, the process of truncation will inherently cause some information loss which may be crucial to model performance during deployment (building on the hyperparameter tuning discussion in the paper).", " The authors propose an OoD method that does not require the retraining of the model, and which relies on replacing the last BN layer by their proposed TrBN at inference time. The main concept of the new layer is that it clamps the most extreme features to be within their variance. By correcting those features (compressing the extreme features), the outputs of the model become less susceptible to OoD misclassifications. The authors provide results comparable to or better than existing state-of-the-art methods under different OoD scenarios. *Strengths*\n\nThe idea is of interest to the community and the method is easy to implement.\n\nThe theoretical analysis seems sound and well founded.\n\nThe analysis of the trade-off controlled by lambda and of the bias introduced by the proposed TrBN looks correct.\n\nThe ablation study provides nice insight and answers a question I was already thinking of while reading the method section.\n\n*Weaknesses*\n\nThe comparison with the related work is brief and there is little discussion of the similarities and differences. Methods such as ReAct seem to be close to the proposed approach, since they target \"anomalous features\".\n\nThe presentation could be improved. It requires a revision to correct some typos and grammatical errors. Some images and their texts could be more readable (e.g. Figs. 1-2).\n\nThe best results of the proposed method are in the supplementary, which is a bit odd. I would expect those to be discussed in the main paper, and also provide some insight on why the proposed method might work better in some of those methods. 
With the results shown in the main paper, the improvements in some of the scenarios are quite marginal in relation to other methods. Could you provide a bit more explanation of the differences with ReAct, which truncates the values of the activations? I assume from the results that the activations targeted by ReAct and BATS are not the same, but is there any insight on that? Are the abnormal activations of ReAct closely correlated with the extreme features of BATS?\n\nMost reported results are basically an upgrade on top of Energy [5], which can be applied to other methods (as shown in Fig. 3 and Appendix I). Therefore, wouldn't it make more sense to report it in most tables/figures as Energy+BATS? Considering the huge gap between the Energy and the Energy+BATS results, why use that combination and not another one? Even with the added cost of GradNorm, the results are much better. Also, that would raise the question of why BATS would work better with some methods than others.\n\nMinor comments:\n- check for typos and revise the manuscript (e.g., line 9 play-and-plug --> plug-and-play, line 98 energe --> energy)\n- Table 2, CIFAR100 WRN, ODIN: that should be in bold (check for other cases)\n- Figure 5 is not mentioned in the text. It is correctly stated in the limitations that this method is proposed as a post-hoc method; however, it has the limitation of requiring the model to have a BN layer before the last FC layer.\n\nThere does not seem to be any potential negative societal impact generated from this work.", " The paper proposes a replacement of the batch normalization layer (BATS) to rectify deep model features into their typical set to improve OOD detection performance. The authors provide theoretical analysis and ablation studies to look deeper into BATS, and achieve state-of-the-art performance using it as a plug-and-play module. **Strengths**\n\nOriginality: The idea is new and intriguing. The authors propose a novel insight for OOD detection from the perspective of feature typicality, and divide deep features into typical features and extreme features. They also propose a novel replacement for Batch Normalization, which can be integrated into existing model structures and OOD scores.\n\nQuality: From the perspective of typicality, and under the assumption that extreme features are harmful to model training, the authors provide thorough ablation studies on the proposed BATS approach, and theoretically prove its bias-variance trade-off. The proof is detailed and the experiments are discussed well.\n\nClarity: The paper is well-written and easy to follow.\n\nSignificance: The paper provides a novel perspective on OOD detection, and introduces a simple yet effective method for post-hoc detection. The method is evaluated on both small and large real-world datasets.\n\n**Weaknesses**\n\nI have not identified major weaknesses in this paper, though I do have some minor concerns that are listed in the “Questions” part. 1. Uncertainty measurement is important in OOD cases. Although the authors used FPR95, AUROC and test accuracy (in the appendix) to show the effectiveness of BATS, it would be interesting to analyze the results of the pretrained models from the perspective of uncertainty quantification, such as comparing the Brier score or expected calibration error (ECE) [1].\n\n2. What are the choices for the reject region threshold $\gamma$; is it always the best value or a fixed value? \n \n[1] Guo, C., Pleiss, G., Sun, Y., et al. \"On Calibration of Modern Neural Networks.\" ICML, 2017. 
The authors have properly addressed the limitations and potential negative societal impacts.", " This paper proposes Batch Normalization Assisted Typical Set Estimation (BATS) for enhancing out-of-distribution detection methods. BATS is a truncated activation scheme that bounds the output features of the BN unit. The paper demonstrated that BATS could improve several OOD detection methods, such as Energy, GradNorm, and ODIN, when applying it to rectify the features of the penultimate layer. The proposed approach looks simple and effective. Strengths:\n1) The proposed approach is simple to implement\n2) The proposed approach is effective on several OOD detection methods with BN in the architecture, as demonstrated by experiments\n3) The paper is well-written and easy to understand.\n\nWeaknesses:\n1) My first impression of this paper is that the proposed approach looks like a slight modification of ReAct.\nAlthough BATS outperforms ReAct in experiments, the operation and the mathematical analysis look very similar.\nIn my understanding, BATS adaptively estimates the parameter c in ReAct.\n\n- The authors explain the motivation from the perspective of typical features, but typicality is not novel in OOD detection, e.g.,\n\n[r1] Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality.\n\n[r2] WAIC, but Why? Generative Ensembles for Robust Anomaly Detection.\n\n2) The major limitation of the proposed approach is that it can be applied only on BN (please correct me if I misunderstand).\nThis can bring two major concerns:\n- As previous research showed that BN increases adversarial vulnerability [r3, r4], it is not clear whether the proposed approach solves the problem of BN or has the ability to detect OOD samples.\n- Modern architectures utilize better normalization methods other than BN, such as LN and GN, and thus the interest and impact of the proposed method are greatly limited.\n\n[r3] Batch Normalization Increases Adversarial Vulnerability and Decreases Adversarial Transferability: A Non-Robust Feature Perspective. ICCV 2021.\n\n[r4] Batch Normalization is a Cause of Adversarial Vulnerability.\n\n3) The authors didn't compare with the SOTA of OOD detection on large-scale ImageNet, e.g.,\n\n[r5] Out-of-Distribution Detection with Deep Nearest Neighbors. ICML 2022.\n\n[r6] ViM: Out-of-Distribution with Virtual-Logit Matching. CVPR 2022.\n Besides the concerns mentioned above, I don't understand why BATS can boost OOD detection theoretically.\n\nSection 4.3 shows that BATS reduces the variance of output features and also introduces a bias term. In lines 175 - 177, the authors stated that \"Our BATS aids this problem by reducing the variance of the deep features, which contributes to constraining the uncertainty of f and T(x; f) and improving the estimation accuracy of the reject region.\"\n\nHowever, why variance reduction boosts OOD detection isn't clear. Does it generally improve classifiers or specifically solve the OOD detection problem? The discussion of limitations by the authors is adequate. As said in Lines 299 - 300, \"The limitation of our method can be that the Batch-Norm layers are required in the model architecture in our approach.\" I would encourage the authors to continue to improve the method and find a more general formulation, as normalization techniques (especially unified formulations) have been extensively studied." ]
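Several rebuttals above fix the reject-region threshold so that 95% of in-distribution samples are accepted. As a complement, here is a small NumPy sketch of computing that threshold and the resulting FPR95 metric; the score arrays are placeholders, not the paper's data.

```python
import numpy as np

def fpr_at_95_tpr(scores_id, scores_ood):
    """FPR95: fraction of OOD samples accepted at the 95%-TPR ID threshold.

    Convention assumed: larger score = more in-distribution, so the
    threshold gamma is the 5th percentile of the ID scores.
    """
    gamma = np.percentile(scores_id, 5)           # accept region: score >= gamma
    return float(np.mean(scores_ood >= gamma))    # OOD samples wrongly accepted

# Placeholder scores for illustration.
rng = np.random.default_rng(0)
print(fpr_at_95_tpr(rng.normal(1.0, 0.5, 10000), rng.normal(0.0, 0.5, 10000)))
```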
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 5 ]
[ "QfXwhWynOJ", "duB0j2MLET", "a__knB5nRhx", "tJuTeBTVJMj", "U7aO4LU03uq", "55jL2nHVpRB", "X1viltWeRcF", "SaLBXfH9Ko_", "IntM5JJIdZ9", "gekuhA1C0s", "xoWdLbKcVo", "nips_2022_4maAiUt0A4", "IntM5JJIdZ9", "IntM5JJIdZ9", "IntM5JJIdZ9", "IntM5JJIdZ9", "2l43FfUKsw", "lPvDOez8Vt5", "QmWTM9Mmsux", "QmWTM9Mmsux", "nips_2022_4maAiUt0A4", "nips_2022_4maAiUt0A4", "nips_2022_4maAiUt0A4", "nips_2022_4maAiUt0A4" ]
nips_2022_W4ZlZZwsQmt
Symplectic Spectrum Gaussian Processes: Learning Hamiltonians from Noisy and Sparse Data
Hamiltonian mechanics is a well-established theory for modeling the time evolution of systems with conserved quantities (called Hamiltonian), such as the total energy of the system. Recent works have parameterized the Hamiltonian by machine learning models (e.g., neural networks), allowing Hamiltonian dynamics to be obtained from state trajectories without explicit mathematical modeling. However, the performance of existing models is limited as we can observe only noisy and sparse trajectories in practice. This paper proposes a probabilistic model that can learn the dynamics of conservative or dissipative systems from noisy and sparse data. We introduce a Gaussian process that incorporates the symplectic geometric structure of Hamiltonian systems, which is used as a prior distribution for estimating Hamiltonian systems with additive dissipation. We then present its spectral representation, Symplectic Spectrum Gaussian Processes (SSGPs), for which we newly derive random Fourier features with symplectic structures. This allows us to construct an efficient variational inference algorithm for training the models while simulating the dynamics via ordinary differential equation solvers. Experiments on several physical systems show that SSGP offers excellent performance in predicting dynamics that follow the energy conservation or dissipation law from noisy and sparse data.
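The abstract's spectral representation builds on random Fourier features. As background, here is a minimal NumPy sketch of standard RFFs for an RBF kernel, with paired cosine and sine weights as one reviewer alludes to below; the symplectic variant derived in the paper is not reproduced here, and the toy inputs are assumptions.

```python
import numpy as np

def rff_features(X, n_features=1000, lengthscale=1.0, seed=0):
    """Random Fourier features approximating an RBF kernel.

    phi(x) @ phi(x') ~= exp(-||x - x'||^2 / (2 * lengthscale^2)); a GP draw is
    then f(x) = phi(x) @ w, with one Gaussian weight vector for the cosine
    block and one for the sine block.
    """
    rng = np.random.default_rng(seed)
    omega = rng.normal(scale=1.0 / lengthscale, size=(X.shape[1], n_features))
    proj = X @ omega
    return np.concatenate([np.cos(proj), np.sin(proj)], 1) / np.sqrt(n_features)

# Sanity check of the kernel approximation on toy inputs.
X = np.random.default_rng(1).normal(size=(5, 2))
Phi = rff_features(X, n_features=5000)
exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.max(np.abs(Phi @ Phi.T - exact)))        # should be small
```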
Accept
Learning from continuous-time physical systems when the input data is noisy & sparse, and without access to time derivatives, is a hard problem. The authors propose a novel algorithm using Gaussian Processes, guided by physical knowledge. Reviewers agreed that the work was original. One reviewer raised concerns about the readability of the paper. The authors' responses will likely address most of those concerns. Other reviewers also suggested a number of improvements, which the authors took on board and will easily implement. Despite relatively simple experimental scenarios, this new algorithm demonstrated some advantages in the low-data regime, where it improves on previously known algorithms.
train
[ "sn2D8i2xwd0", "dRac-XFXM_V", "8TQyc_K1Gbi", "8oRhqkZEoA2", "lFDT0GHnqB", "S6zy_NmMrp", "fJ2xJRzCGU", "VefeF463zlI", "6aXYvNflSH8", "YWoqnYSMGYCZ", "A0lOAi7fO9U", "IX1wMp1mZuV", "J52f-JfDEFk", "Ss5EeGWdAP8", "2B_lqqSf5oH", "PMixBamr35", "lNRmTzSObe7", "EZEIt_lCCG5", "Arzzd5TqHWK", "GPu7yR7LP7W", "3hyDxvzW2bv" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I am glad that your concerns have been addressed. Your comments will help us to revise our manuscript even better.", " You are correct. We will clarify it as you commented.", " I appreciate your reply. I am glad that your concerns have been addressed. Your comments will help us to revise our manuscript even better. I will clarify the past work SympGPR and polish the presentation of the technical part so that the motivation for introducing the approximations is clear. ", " Thank you! I have no more questions.", " thanks for the clarification.\nA simple way to say could be that q factorizes as two factors separating the weights for the sines and cosines.", " Thanks for your comments.\nI understand past work and motivation and they are quite clear in your response.\n**Final suggestion**: Your manuscript would gain in clarity if you followed this presentation for the past work symgpr (maybe add the name when you cite, like (symgpr [10] )) Before you introduce VI, you could explain what would be the ideal thing to do: marginal likelihood, and why it is not tractable.\nThen introduce VI, RFF explaining what are the gains (scalability) and what it allows: reparameterization trick + ODE solver conditioned on samples for f.\n\nI have no further questions", " As you indicated, SympNet incorporates the symplectic structure into the neural network architecture, which allows one to estimate Hamiltonian systems effectively from data. **One most significant difference between our model and SympNet is the modeling of observation noises: SympNet does not model the observation noises.** Since our model is based on probabilistic generative modeling (i.e., GPs), we can estimate the unknown dynamics from **noisy** observations via Bayesian inference procedures. As one can also see from Figure 2 of [R1], they did not consider the noisy trajectories as training data in their experiments. Accordingly, SympNet might degrade the performance in such **noisy and sparse settings**. We will add the above discussion in the Related Work section of the final version.", " I am glad that your concerns have been addressed. ", " Thank you for your very detailed reply. But I have one more concern. I don't think SympNet needs a large dataset. They already maximize the symmetry of symplectic geometry. I hope you show a complete comparison with SympNet in the next round of submissions, at least methodologically.", " The additional discussions are also reasonable. The new information resolved all my concerns.\n\nI reread my review again and realized that I had submitted an older version of some parts of it. At the beginning of the Strengths section, I mentioned the method as the first study of GP for Hamiltonian system. This is not true; the aim is an extension to sampled data and dissipativity. I mentioned that later in the review. Sorry for the confusion. I clarify that my score has nothing to do with this (initial) misunderstanding.\n\nI am willing to raise the score during the discussion with AC.", " We would thank you for the positive evaluation of our work and your constructive feedback. Please find our response to your concerns in the following:\n\n**Question 1: At line 67, the authors stated that \"whose covariance function incorporates the geometric structure (also called symplectic structure) for the energy conservation or dissipation laws\". The geometric structures appearing in physics are not limited to the symplectic structure, but include contact structure, Poisson structure, Dirac structure, and so on. 
Moreover, the symplectic structure is related to the energy conservation law, but not to the dissipation law. Please introduce and discuss the geometric structure exactly.**\n\n**Response:** As you indicated, the description of the geometric structure was misleading. The only geometric structure considered in this study is the symplectic structure, which appears in dynamics with conserved quantities. As you know, one can handle Hamiltonian dynamics with friction by introducing the dissipation matrix $R$, as in Eq. (1). To clarify it, we will modify Lines 66-68 as \"By employing the theory of Hamiltonian mechanics, the vector fields are derived as a multi-output GP whose covariance function incorporates the symplectic structure for the energy conservation law. Moreover, one can handle Hamiltonian dynamics with friction by introducing the dissipation matrix.\". Also, we will carefully revise the entire manuscript to avoid similar misunderstandings.\n\n**Question 2: To deal with the case without the true derivatives, this study proposes the symplectic random Fourier features. The experiments demonstrated that the features worked well for pendulum and Duffing oscillator, which exhibit periodic behaviors and are easily captured in the frequency domain. The generality for non-periodic cases such as double pendulums is unclear.**\n\n**Response:** In general, it is known that models based on RFFs can be used for modeling non-periodic functions. The intuition behind this mechanism is that the period of the function approximated by RFFs will typically be very long if the individual frequencies are not all multiples of a common base frequency. This fact has been discussed in Section 3.1 of [22]. Accordingly, the proposed model is applicable to non-periodic behaviors as well as periodic ones.\n\nWe conducted additional experiments using the double pendulum system (without friction). The MSE comparisons between the proposed model (SSGP) and the baselines (i.e., SymODEN and HNN) are shown in the following table. We prepared {10, 15, 20, 30, 50} trajectories sampled at a frequency of 5 Hz for 10 seconds, and added Gaussian noise with variance $\sigma^2=0.1$ to each sample. We took the other settings for this system from the Supplementary Material of [5]. These results show that SSGP is also effective for the non-periodic double pendulum. \n\n| | #traj.=10 | 15 | 20 | 30 | 50 |\n| :--- | ---: | ---: | ---: | ---: | ---: |\n| SSGP | 0.635 | 0.497 | 0.484 | 0.503 | 0.457 |\n| SymODEN | 1.151 | 1.092 | 1.047 | 0.996 | 0.915 |\n| HNN | 1.131 | 1.152 | 1.097 | 1.095 | 1.163 |\n\n**Question 3: I am curious about the impacts of noise level on the accuracy. The comparison methods, namely HNN and SymODEN, ignore noise. In the noiseless case, is SymODEN enough?**\n\n**Response:** We conducted additional experiments using the pendulum (without friction) in the noiseless setting, where we set the sampling frequency to 5 Hz. We show the MSE comparisons between SSGP and the baselines (i.e., SymODEN and HNN) in the following table. As expected, the MSEs of all the models were smaller than those in Figure 3(a). Nevertheless, SSGP still achieved better predictive performance than the baselines, especially when the number of trajectories was small. This is presumably because the GP-based model makes the assumption that the vector field is smooth, which might lead to better estimation results in the noiseless but sparse data setting. 
\n\n| | #traj.=10 | 15 | 20 | 30 | 50 |\n| :--- | ---: | ---: | ---: | ---: | ---: |\n| SSGP | 0.135 | 0.080 | 0.073 | 0.117 | 0.060 |\n| SymODEN | 0.509 | 0.324 | 0.097 | 0.185 | 0.199 |\n| HNN | 1.862 | 1.240 | 1.125 | 0.997 | 0.993 |", " **Question 4: The experimental setting is a bit unclear. (a) The difference between HNN and SymODEN is unclear. HNN learns the Hamiltonian using a single neural network, and SymODEN learns the weight matrix and the potential energy using two networks, obtaining the Hamiltonian. Is this OK? (b) At line 298, the authors stated \"Since HNN, D-HNN and SympGPR require derivative observations for training, we used the finite difference instead.\" Was the finite difference used as an approximation to the derivative?**\n\n**Response:** (a) You are right. But the most important difference between HNN and SymODEN is whether the ODE solver can be utilized in the training process or not. HNN is trained from the finite differences without using the ODE solver; meanwhile, SymODEN can be trained using the ODE solver from state trajectories. Please see Table 2 in the Supplementary Material, which summarizes the differences between our model and the baselines.\n\n(b) You are right. These baselines assume that the time-derivative observations are available, so we used the finite differences as the approximation of the time-derivative. Please see Response to Comment 1 by Reviewer AdVL for the details of this procedure.\n\n**Question 5: The images are shown in the reverse order of the captions in the leftmost and second leftmost columns in Figure 4.**\n\n**Response:** Thank you for pointing this out. We will modify it.", " We thank you for the valuable questions and comments. Please find our responses to your concerns in the following:\n\n**Comment 1: The demonstrated examples seem to be cherry-picked, and recently proposed methods, such as SympNet, can also accurately predict simple Hamiltonian systems.**\n\n**Response:** We believe that our experiments convincingly show that the proposed model can improve predictive performance, especially when the number of trajectories is small, because we conducted the evaluations in various experimental settings, varying the systems, the number of trajectories, and the sampling frequency (see Figures 3 and 7). Meanwhile, we agree that it is important to discuss the cases where the proposed model does not work well. We will discuss it in Response to Question 7.\n\nAs you indicated, the experiments in [R1, R2] have shown that SympNet can predict simple Hamiltonian systems. However, as described in the first paragraph of the Introduction Section, neural network-based models implicitly assume that a large amount of training data with a high temporal resolution is available. Thus, the prediction performance may degrade in the sparse data settings that this work focuses on, as shown by the results of SymODEN (Figures 3 and 7). \n\n**Comment 2: Adding some citations of recent papers related to symplectic neural networks would be helpful.**\n\n**Response:** We will add the works on SympNet [R1, R2] to our Related Work Section. The drawback of SympNet is described in Response to Comment 1.\n\n**Comment 3: The symplectic neural networks are divided into separable and non-separable. The current paper seems to discuss only separable Hamiltonian systems, which needs to be clarified.**\n\n**Response:** Indeed, Hamiltonians can be distinguished between the separable and the non-separable cases. 
This means whether the system's total energy (i.e., Hamiltonian $H(x)$) can be explicitly separated into kinetic and potential energy terms. Since our formulation is not based on this assumption, it can be used for learning dynamics governed by non-separable Hamiltonians as well as separable ones.\n\n**Comment 4: Limitations of the method need to be discussed.**\n\n**Response:** Please see Response to Question 7.\n\n**Question 1: The equations of the predicted systems should be given in the main text.**\n\n**Response:** We will move Eqs. (34) and (35) in Appendix F to Section 6 of the main text.\n\n**Question 2: Is the order of the subgraphs in Figure 4 reversed?**\n\n**Response:** Thank you for pointing this out. We will modify it.\n\n**Question 3: The introduction of the theoretical model could be more concise.**\n\n**Response:** We will carefully check Sections 3, 4, and 5 and revise them to be as concise as possible. We will move Lines 194-200 to Appendix B of the Supplementary Material.\n\n**Question 4: The idea in Figure 2 is not clear. Can you show a clear schematic?**\n\n**Response:** We agree that a schematic diagram of the proposed model helps readers understand our formulation. So, **we have added the schematic diagram of our proposed model in Figure 11 of the revised Supplementary Material: our idea is a novel generative model of noisy trajectories.** In the final version, we will add this diagram instead of Figure 2 in the main text and revise the manuscript as clearly as possible. \n\n**Question 5: Why do the other methods in Figure 3 perform well in their papers?**\n\n**Response:** This is because they used a sufficient number of trajectories and/or a high sampling frequency for training. For example, SymODEN was trained using trajectories with a sampling frequency of 20 Hz in their paper [47]. In our experiments, we used trajectories sampled at a frequency of {3,5,10} Hz to evaluate the robustness of our proposed model against the degree of sparsity. This experimental setting is reasonable because our contribution is to present a model that can accurately predict Hamiltonian systems from noisy and sparse trajectories.\n\n**Question 6: The selection of some parameters should be discussed in detail. For example, time step, parameter size, data set size, etc.**\n\n**Response:** As described in Response to Question 5, the experimental setting was carefully designed to evaluate the predictive performance in the low data regime. We generated the training data by varying the number of trajectories (i.e., data set size) and the sampling frequency (i.e., time step size). We have discussed the results in detail in the Result paragraph of Section 6.\n\nThe hyperparameter (i.e., parameter size) that needs to be carefully determined in the proposed model to obtain high predictive performance is the number of spectral points $M$ (i.e., the number of basis functions). We determined it automatically on the basis of the validation error (see Lines 291-292). Also, the hyperparameters of the baselines (e.g., network size) have been specified in Appendix G of the Supplementary Material.", " **Question 7: I would love to hear some limitations of the proposed paper about failure modes that the authors might have encountered.**\n\n**Response:** We have mentioned the limitations of the proposed models in Section I of the Supplementary Material. In addition, as pointed out by Reviewer zQ6w in **Limitations**, the extrapolation performance of our model might be limited. 
As you can see in Figure 5, the cumulative errors of the proposed model and the baselines increased significantly in the period [10,15]. Here, the observation period was 10 seconds. Improving the accuracy of extrapolation is an interesting and challenging task, which we leave as future work. We will add this limitation in the final version.\n\n**References:**\n\n[R1] Pengzhan Jin et al., SympNets: Intrinsic structure-preserving symplectic networks for identifying Hamiltonian systems, Neural Networks, 2020.\n\n[R2] Shiying Xiong et al., Nonseparable symplectic neural networks, ICLR, 2021.", " We thank you for the valuable feedback on our manuscript. Based on the comments, we will revise the manuscript. **We will add a schematic diagram of the proposed model to give a good intuition. Please see Figure 11 in the revised Supplementary Material.** We will answer the questions below.\n\n**Comment 1: Eq 5. $w_m$ is drawn from a normal distribution. In line 224, it is said that $w_m$ is a learned parameter. Perhaps something is not explained clearly.**\n\n**Response:** Eq. (5) states that $w_m$ is assumed to follow a normal distribution; that is, the prior distribution of $w_m$ is a normal distribution. It should be noted that $w_m\sim \mathcal{N}(0,\frac{\sigma_0^2}{M}I)$ does not represent realizations of $w_m$ from the prior distribution. In Section 5, given the training data, we estimate $w_m$ and the other parameters on the basis of variational Bayesian inference procedures. As you can see from Eq. (12), the prior distribution of $w_m$ is used as a regularizer (i.e., KL divergence). \n\n**Comment 2: Line 195: why $p(f|w)$ is a Dirac's delta function? How do we get Eq 9?**\n\n**Response:** The use of RFF allows us to obtain the approximation of the GP represented by $f(x)=\Psi(x)w^\top$, as in Eq. (7). The important point is that the randomness of the function $f(x)$ is entirely controlled by the prior distribution $p(w)$. Then, given $w$, the function $f(x)$ is deterministic. In such a situation, Dirac's delta function is generally used for representing the conditional distribution of $f$, as follows: $p(f\mid w)=\delta(f-\Psi w^\top)$. \n\nThe marginalization of $w_m$ in Eq. (9) is well-known to be Gaussian. One can obtain the mean and covariance of this marginal distribution by taking moments (i.e., $\mathbb{E}[f]$ and $\rm{Cov}[f]$). This result is described in Eqs. (4.148)-(4.150) of the text [2, Chapter 4.5.2], which has been mentioned in the footnote (Page 5) of our manuscript.\n\n**Comment 3: Line 208: justification needs to be given for distribution of $x$ being a delta function Eq. 11.**\n\n**Response:** The reason for using the delta function is the same as in Response to Comment 2. The sentence \"the distribution of $x_{ij} (j\geq 2)$ is assumed to be Dirac's delta function\" in Line 208 might be confusing. So, we will modify it as \"Given $f$ and $x_{i,j-1}$, the state $x_{ij}$ is deterministically given by solving the ODE; thus, we can write the conditional distribution $p(x_{ij}\mid f, x_{i,j-1})$ using Dirac's delta function, as follows:\".\n\n**Comment 4: There are undefined math symbols, e.g. Eq 17, $b$ and $C$.**\n\n**Response:** $\Psi(x)$ in Eq. (17) has been defined in Eq. (7). $w^{(k,l)}$ is a sample of the weight $w$ from the variational distribution $q(w)=\mathcal{N}(b,C)$, where $b$ and $C$ have been defined in Line 229. 
As the reviewer indicated, the numbers $K$ and $L$ of Monte Carlo samples were undefined on Page 6; we will specify them.\n\n**Comment 5: Does Eq 17 contradict Eq 5?**\n\n**Response:** No. Eq. (5) represents the approximation of the **GP prior** using RFFs. On the other hand, Eq. (17) represents the **GP posterior** derived by the variational inference.\n\n**Comment 6: The physical meanings of equations should be explained better.**\n\n**Response:** Our aim is to infer the unknown Hamiltonian (i.e., energy function) $H(x)$ from noisy and sparse trajectories and predict the dynamics from an arbitrary initial condition. To achieve this, we propose a novel probabilistic generative model based on GPs and its Bayesian inference procedure. **To give a good intuition of our model, we have added the schematic diagram of the proposed model in Figure 11 of the revised Supplementary Material.** This diagram shows the generative process of noisy trajectories (not the inference procedure). Our model assumes that the observed data are generated from this generative process; then, the model parameters are estimated by the variational Bayesian method, as described in Section 5. In the final version, we will add this diagram instead of Figure 2 in the main text and revise the manuscript as clearly as possible. ", " We thank you for the positive evaluation of our work and your constructive feedback. Following the reviewer's comments, **we will clarify this work's novelty and motivation.** Please find our responses to your concerns in the following:\n\n**Comment 1: Previous work, especially the SympGPR, is not introduced, which makes it difficult to understand the novelty here. What did they do exactly? Did they also use RFF to approximate the covariance or do the calculation in closed form?**\n\n**Response:** The SympGPR assumes that derivative observations are available, where each derivative observation is a pair $(x,\dot{x})$ containing a state $x$ and its time-derivative $\dot{x}=\frac{dx}{dt}$. Then, they model the conditional probability $p(\dot{x}\mid x)$ using GPR with a covariance function inspired by Hamiltonian mechanics. The training is based on the exact marginal likelihood of $\dot{x}$; one can predict the time-derivative at any state by calculating the predictive distribution and can simulate the dynamics. Notice that they did not use ODE solvers for the training phase (ODE solvers are used **only** for simulation in the test phase). Also, they did not introduce the RFF approximation and did not consider energy dissipation. \n\nIn practice, it is difficult to observe the time-derivatives directly; we often obtain state trajectories $\{(t, x)\}$ instead. Although SympGPR is applicable by approximating the time-derivatives $\dot{x}$ with finite differences $\frac{\Delta x}{\Delta t}$, this is problematic, especially when the temporal resolution is low (see Lines 116-119).\n\nWe aim to present an algorithm for training GP models for Hamiltonian systems (with dissipation) from state trajectories by employing ODE solvers. Our novelties compared with SympGPR are: \n- A GP prior for modeling systems with energy dissipation as well as energy conservation.\n- Its spectral representation, obtained by deriving RFFs that incorporate the symplectic structure.\n- A variational inference (VI) procedure with numerical integration by ODE solvers as a subroutine.\n\n**Comment 2: the many approximations introduced are not necessarily well motivated. Why do you do the RFF? Is it necessary? 
Convenient for VI? For scalability? I have my guess but this needs to be more explicitly stated in the paper.**\n\n**Response:** The approximations (RFF and VI) are necessary to utilize ODE solvers in the training procedures of the GP models. One of the most important reasons to adopt the RFF is scalability. In our training process, we must perform numerical integration via ODE solvers, as in Eq. (16). If we do not use the RFF approximation but the exact GP, the computational costs become prohibitive (the fourth power of the number of points evaluated by the ODE solver). We have elaborated on this in Appendix E of the Supplementary Material.\n\nAlso, we adopted VI because we cannot calculate the exact marginal likelihood (Eq. (10)) analytically, as it includes the process of solving the ODEs. We have mentioned this in Lines 225-226. We will clarify it more in the final version. \n\n**Comment 3: The consequences of the approximations introduced are not discussed. Does the RFF preserve the symplectic structure? What is lost by approximating the prior, and does it bias the inference? If so, how? The same applies to using VI.**\n\n**Response:** The vector field $f(x)$ approximated by RFFs always preserves the symplectic structure even if the number of basis functions is finite. This is because $f(x)$ is defined by Hamilton's equation $f(x)=\mathcal{L}H(x)$ as in Eq. (7). Meanwhile, in the case where the number of RFFs is small, the expressive power of $H(x)$ (Eq. (5)) approximating the Hamiltonian might decrease compared with the exact GP in Eq. (2). \n\nAlso, VI does not prevent the symplectic structure from being satisfied. In VI, the evidence (Eq. (10)) is approximated by introducing the variational distributions $q(w)$ and $q(x_{ij})$, as in Eq. (12). It may be necessary to discuss how tight the evidence lower bound (Eq. (12)) is. For example, the choice of the variational distributions $q(w)$ and $q(x_{i1})$ is a design choice. This work's contribution is to present the VI framework for learning the GP model inspired by Hamiltonian mechanics. A more practical design of VI is left for future work.", " **Q1: I don't understand the following sentence: \"We used a block diagonal approximation of $C$ so that each pair of basis functions shared the same covariance\". Is this mean field $q(C)=\prod_i q(C_i)$, or $q(C)=\hat{q}(C)^K$? In both cases, this has strong consequences for the inference. These choices should be discussed.**\n\n**Response:** Assume that the set of basis functions is represented as follows: $[\cos(2\pi s_1^\top x),\ldots,\cos(2\pi s_M^\top x),\sin(2\pi s_1^\top x),\ldots,\sin(2\pi s_M^\top x)]$. Then, the variational distribution of the weights $w\in \mathbb{R}^{2M}$ is given by $q(w)=\mathcal{N}(b,C)$. To save computational time, we used a block-diagonal approximation of $C$ as follows: $C=[[C_1,0][0,C_2]]$, where $C_1, C_2 \in \mathbb{R}^{M\times M}$. In some experiments, the prediction accuracy was almost the same as with the full-matrix $C$. We will clarify this in the final version. 
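To make the above points concrete (the RFF representation of $H(x)$, the symplectic vector field it induces, and the block-diagonal $q(w)$), we provide a minimal NumPy sketch below. The dimensions, base kernel, and dissipation coefficients are illustrative assumptions for exposition, not the exact configuration used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 32, 2                             # number of spectral points; phase space (q, p)
spectral = rng.normal(size=(M, d))       # spectral points of an RBF-like base kernel

# Block-diagonal q(w) = N(b, blockdiag(C1, C2)): one block for the cosine
# weights and one for the sine weights; C_i = L_i L_i^T stays PSD by construction.
b = np.zeros(2 * M)
L1 = 0.1 * np.eye(M)                     # Cholesky factor of C1
L2 = 0.1 * np.eye(M)                     # Cholesky factor of C2

def sample_w():
    """Reparameterized sample from q(w)."""
    eps = rng.normal(size=2 * M)
    return b + np.concatenate([L1 @ eps[:M], L2 @ eps[M:]])

def h_features(x):
    """RFF features of H(x) and their Jacobian with respect to x."""
    phase = 2.0 * np.pi * spectral @ x
    feats = np.concatenate([np.cos(phase), np.sin(phase)])      # (2M,)
    dcos = -np.sin(phase)[:, None] * 2.0 * np.pi * spectral     # (M, d)
    dsin = np.cos(phase)[:, None] * 2.0 * np.pi * spectral      # (M, d)
    return feats, np.concatenate([dcos, dsin], axis=0)          # (2M, d)

S = np.array([[0.0, 1.0], [-1.0, 0.0]])  # canonical symplectic matrix
R = np.diag([0.0, 0.1])                  # dissipation acting on the momentum

def vector_field(x, w):
    """f(x) = (S - R) grad H(x); exactly Hamiltonian for any sampled w when R = 0."""
    _, jac = h_features(x)
    return (S - R) @ (jac.T @ w)

w = sample_w()
print(vector_field(np.array([1.0, 0.0]), w))
```

A sample of $w$ fixes one Hamiltonian; conditioned on it, trajectories follow by pushing this field through an ODE solver, which is exactly where the reparameterization trick and the solver interact in the ELBO.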
", " The manuscript proposes a way to learn Hamiltonian systems with additive dissipation from (scattered) observations.\nThe learning is framed as a (Bayesian) inference of a Hamiltonian endowed with Gaussian Process prior with the rest of the generative model relating trajectories to the hamiltonian via dissipative dynamical equations, and noisy observations on top of these trajectories.\n\nThe prior covariance matrix on trajectories is approximated using RFF for an ARD base kernel and propagating the approximation via the linear operator specifying the dynamics.\n\nVariational inference under this (approximate) prior model is used, which conveniently allows to use the reparametrization trick to evaluate the ELBO.\n\nThe resulting method is evaluated on a number of experiments and compared to pre-existing methods.\n -- Strengths\n\nThe method is a useful extension of the SymGPR method to dissipative systems.\n\nThe combination of the prior on hamiltonian, dissipative dynamics and RFF appears original to me.\n\nThe combination of many different approximations (RFF, VI, stochastic evaluation) leads to a practical algorithm for the problem at hand.\n\nThe possibility to make predictions without or with different dissipation is expected but neat.\n\nThe results are impressive especially in the low data regime, where the method clearly outperforms alternatives in its prediction accuracy.\n\n-- Weaknesses\n\nI report here a few points that made the manuscript a bit difficult to read.\nIt is mostly about the motivation rather than the technical content.\n\n1) Previous work, especially the symGPR is not introduced which makes it difficult to understand the novelty here.\nWhat did they do exactly? did they also use RFF to approximate the covariance or do the calculation closed form?\n\n2) the many approximations introduced are not necessarily well motivated. Why do you do the RFF? Is it necessary?\n convenient for VI? for scalability? I have my guess but this needs to be more explicitely stated in the paper.\n \n3) The consequence of the approximations introduced are not discussed.\nDoes the RFF preserves the symplectic structure? What is lost by approximating the prior does it bias the inference? if so how?\nThe same applies to using VI.\n\n\nBecause of these perceived weaknesses,\nI set my score to weak accept. I m willing to change my evaluation if my concerns are adequately addressed\n I don't understand the following sentence:\n\"We used a block diagonal approximation of C so that each pair of basis functions shared the same covariance\".\nIs this mean field $q(C)=\\prod_i^M q(C_i)$, or $q(C) = \\tilde{q}(C)^K$. In both case, this has strong consequence on the inference.\nThese choices should be discussed.\n None", " Paper propose a spectral representation of Hamiltonian. Demonstrated the procedure to solve the equations of motion given a set of noisy measured trajectories (line 202).\n\nThe learning process is on the parameters used in the formalism of the special Hamiltonian (line 224). Strengths:\nThe formalism and learning process is very different from a traditional Gaussian process approach. Hence there is some novelty in this paper.\n\nBack information is well given and pitch at the correct level of sophistication. e.g. Hamiltonian systems, Gaussian process and literature review.\n\n\nWeakness:\nThe main mathematical framework is given in page 5 and 6. I find that these two pages of the paper is extremely difficult to follow. This is the main weakness of this paper. 
I explain the reasons why these two pages are difficult to follow:\n1. Eq 5. w_m is drawn from a normal distribution. In line 224, it is said that w_m is a learned parameter. Perhaps something is not explained clearly.\n2. Line 195: why p(f|w) is a Dirac's delta function? How do we get Eq 9?\n3. Line 208: justification needs to be given for the distribution of x being a delta function, Eq. 11.\n4. There are undefined math symbols, e.g. Eq 17, b and C.\n5. Does Eq 17 contradict Eq 5?\n6. The physical meanings of equations should be explained better.\n\n\nIt will be much better if the authors rewrite the manuscript to explain more clearly, and at a higher level, what their method is trying to do. After a good intuition is given to the readers, the authors may dive into the details of the mathematics.\n\n Please read the questions given in the section \"Strengths and Weakness\". Discussion on limitations is given in a short paragraph towards the end of the paper. The authors propose to extend this formalism into high dimensions.", " This paper proposed a symplectic spectrum Gaussian process method. The method can predict systems whose dynamics follow energy conservation and dissipation laws from noisy and sparse data. The proposed method is a general tool and has the potential to guide the design of kernel machines with prior knowledge from physics. Although the proposed method is solid in theory, the scenarios applied are relatively simple. In addition, some classic examples such as vortex-particle and gravitational systems should be compared with the recently proposed methods. Strengths\n\nThis paper proposed a symplectic spectrum Gaussian process method for modeling Hamiltonian systems with additive dissipation. They derived a new spectral representation by incorporating the symplectic structure to handle energy dissipation and energy conservation systems. They also proposed a variational inference procedure that offers numerical integration of the ODE solver as a subroutine. Finally, they demonstrated the method in experiments on several physical systems to verify the accuracy of the proposed symplectic spectrum Gaussian process method. \n\nWeaknesses\n\nThe demonstrated examples seem to be cherry-picked, and recently proposed methods, such as SympNet, can also accurately predict simple Hamiltonian systems.\n- Adding some citations of recent papers related to symplectic neural networks would be helpful. \n- The symplectic neural networks are divided into separable and non-separable. The current paper seems to discuss only separable Hamiltonian systems, which needs to be clarified.\n- Limitations of the method need to be discussed.\n Overall, the exposition of the paper is pretty straightforward. However, I assume it to be difficult for readers who are not familiar with Gaussian process models. More specific comments:\n- The equations of the predicted systems should be given in the main text.\n- Is the order of the subgraphs in Figure 4 reversed?\n- The introduction of the theoretical model could be more concise.\n- The idea in Figure 2 is not clear. Can you show a clear schematic?\n- Why do the other methods in Figure 3 perform well in their papers?\n- The selection of some parameters should be discussed in detail. For example, time step, parameter size, data set size, etc.\n I would love to hear some limitations of the proposed paper about failure modes that the authors might have encountered.", " Many neural network-based methods for continuous-time physical systems have been proposed recently. 
These methods are based on ordinary differential equations and are consistent with geometric structures. In practice, the available data is noisy and sparse. To deal with such a situation, this study proposes to use a Gaussian process (GP) consistent with geometric structures. Because the data is composed only of observations (without derivatives), the GP uses random Fourier features.\n *Strengths*\n\nTo the best of my knowledge, this is the first study to tackle the modeling of the Hamiltonian system (with dissipation) by using GP. Like the great success of Hamiltonian neural networks (Greydanus et al., NeurIPS, 2019), this study potentially opens up a new research field.\n\nThis study tackled the observation noise, which is a crucial issue in this field, but has been ignored by most existing studies.\n\nThe impact of the sampling rate on the performance, shown in Figure 7, is interesting. The proposed method is superior especially in the case of sparse sampling.\n\n*Weakness*\n\nThe explanation about geometric structures is inexact. See (1) below.\n\nThe generality of the spectral representation is unclear. See (2) below.\n (1) At line 67, the authors stated that \"whose covariance function incorporates the geometric structure (also called symplectic structure) for the energy conservation or dissipation laws\". The geometric structures appearing in physics are not limited to the symplectic structure, but include contact structure, Poisson structure, Dirac structure, and so on. Moreover, the symplectic structure is related to the energy conservation law, but not to the dissipation law. Please introduce and discuss the geometric structure exactly.\n\n(2) To deal with the case without the true derivatives, this study proposes the symplectic random Fourier features. The experiments demonstrated that the features worked well for pendulum and Duffing oscillator, which exhibit periodic behaviors and are easily captured in the frequency domain. The generality for non-periodic cases such as double pendulums is unclear.\n\nI am curious about the impacts of noise level on the accuracy. The comparison methods, namely HNN and SymODEN, ignore noise. In the noiseless case, is SymODEN enough?\n\n*Minor comments:*\n\nThe experimental setting is a bit unclear.\n(a) The difference between HNN and SymODEN is unclear. HNN learns the Hamiltonian using a single neural network, and SymODEN learns the weight matrix and the potential energy using two networks, obtaining the Hamiltonian. Is this OK?\n(b) At line 298, the authors stated \"Since HNN, D-HNN and SympGPR require derivative observations for training, we used the finite difference instead.\" Was the finite difference used as an approximation to the derivative?\n\nThe images are shown in the reverse order of the captions in the leftmost and second leftmost columns in Figure 4.\n\nDue to the inexact discussion about the geometric structures, I cannot accept the present paper as it is. In this regard, I believe that a revision of the text would be sufficient. Then, I will be happy to raise the score. As discussed in Section I (and potentially shown in Figure 10), the extrapolation performance might be limited. The authors sincerely expressed this limitation. I believe that this limitation will not diminish the value of this study.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 4, 4 ]
[ "8oRhqkZEoA2", "lFDT0GHnqB", "S6zy_NmMrp", "fJ2xJRzCGU", "lNRmTzSObe7", "PMixBamr35", "6aXYvNflSH8", "YWoqnYSMGYCZ", "Ss5EeGWdAP8", "A0lOAi7fO9U", "3hyDxvzW2bv", "3hyDxvzW2bv", "GPu7yR7LP7W", "GPu7yR7LP7W", "Arzzd5TqHWK", "EZEIt_lCCG5", "EZEIt_lCCG5", "nips_2022_W4ZlZZwsQmt", "nips_2022_W4ZlZZwsQmt", "nips_2022_W4ZlZZwsQmt", "nips_2022_W4ZlZZwsQmt" ]
nips_2022_L9YayWPcHA_
Plan To Predict: Learning an Uncertainty-Foreseeing Model For Model-Based Reinforcement Learning
In Model-based Reinforcement Learning (MBRL), model learning is critical since an inaccurate model can bias policy learning by generating misleading samples. However, learning an accurate model can be difficult since the policy is continually updated and the induced distribution over visited states used for model learning shifts accordingly. Prior methods alleviate this issue by quantifying the uncertainty of model-generated samples. However, these methods only quantify the uncertainty passively after the samples are generated, rather than foreseeing the uncertainty before model trajectories fall into those highly uncertain regions. The resulting low-quality samples can induce unstable learning targets and hinder the optimization of the policy. Moreover, while being learned to minimize one-step prediction errors, the model is generally used to predict for multiple steps, leading to a mismatch between the objectives of model learning and model usage. To this end, we propose Plan To Predict (P2P), an MBRL framework that treats the model rollout process as a sequential decision-making problem by reversely considering the model as a decision maker and the current policy as the dynamics. In this way, the model can quickly adapt to the current policy and foresee the multi-step future uncertainty when generating trajectories. Theoretically, we show that the performance of P2P can be guaranteed by approximately optimizing a lower bound of the true environment return. Empirical results demonstrate that P2P achieves state-of-the-art performance on several challenging benchmark tasks.
Accept
All the reviewers agree that this is a good paper. The idea is original and the paper has good empirical results. There were some confusions, which were resolved during the discussions and the revised paper. I recommend this paper to be accepted, possibly as a spotlight presentation. I list a few concerns below so that the authors can improve their paper. Some of them are by the reviewers, and some of them are by myself. - Be more clear about how R^m is estimated. This is discussed in Appendix B.3 of the revised version, but given its importance to the algorithm, the authors may want to consider discussing it in the main body. - Reviewer q2Vv mentioned their concerns about preventing the agent from going to uncertain regions, which may prevent exploration. The authors answered that "the exploration-exploitation tradeoff in RL mainly works on real environments instead of the approximate models". This is not entirely accurate. Methods based on optimism in the face of uncertainty, such as UCRL, actually try to exploit the uncertainty of the model. If the model's promise of large return turns out to be false, due to its large uncertainty, we have gathered useful information and decreased our uncertainty. - I feel that there is a gap between the theoretical results and the algorithm. It does not seem that the optimizer of the model MDP (Definition 1), which is optimized on L5 of Algorithm 1, is the same as the optimizer of the upper bound in Theorem 1, used to justify the algorithm. For example, the $e^m_t$ term is based on the maximum over actions of the TV error between the model and the true environment, weighted according to the state distribution induced by the model. On the other hand, $R^m$ in the model MDP (after taking the expectation over $s_{t+1}$ coming from distribution $P^m$) seems to be the chi-squared divergence, which is also weighted by the policy $\pi$. They are not the same. It is OK if they are slightly different for practical purposes, as long as one can show their relation and be clear about it. - The inequality at the beginning of Section 3.2 (Theoretical Results) requires $|J^\hat{P}(\pi) - J(\pi)|$ to be smaller than C for both the new policy $\pi$ and the old policy $\pi_D$. Disregarding the previous issue (or assuming that it can be resolved), solving the optimization problem defined on L5 of Algorithm 1 only guarantees that $|J^\hat{P}(\pi) - J(\pi)|$ is small for the current policy $\pi_D$, and not the optimized one. As such, the inequality is not satisfied, even if the algorithm works well. Am I missing something? Please clarify it in the revised paper. - It is claimed on L154 that monotonic policy improvement can be achieved by solving the update rule (2). I don't think it is correct. We need the value of the maximizer to be larger than $J(\pi_D)$, which may not always be the case.
train
[ "YNxOIEXg4gf", "c44lGI6mfnN", "4WM3j6k1ARb", "7kOH2dODMQ5", "C35nZlnc9KM", "kpUkUXMeJA", "BI3hvTzIkMi", "eHjIv4ew63T", "uhJof5nBp4k", "hOgrJ_wmFYw", "dfqfZr8qvHa", "OPWc0Cz20ux", "-jyFoE-CbgV", "hMFxbZSxiHt", "T-l9Juofeb", "Rtzaz04-dOz", "A_YB_J1pfmD", "QE7DLdPRSaz", "NlaBpnCwz5g" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for their detailed responses, which corroborate my positive assessment of this paper.", " We thank the reviewer for the valuable suggestions and for updating the score. We will further expand the description of Figure 1 in our future revision to improve the readability.", " Dear authors, thanks for humoring my problems and expanding the paper! In its current forms, the details and benefits of the formulation are much more clear to me! I will be raising my score to \"Accept\".\nI am still confused about Figure 1 and I would encourage you to expand the description in the final paper, similar to the text you wrote in the rebuttal comment. I would potentially suggest to have only the left version of the figure but separated into old and new policy side, with highlighted differences? This is just a suggestion for improvement in the camera ready though.", " Yes, the reviewer is right, in the offline setting we pre-train $\\hat{P}$ and $\\hat{R}$ and fix them for policy learning. But please note that the pseudo codes we provide are for the online setting, where incremental environment data is collected and added to $\\mathcal{D}_e$ in each iteration (Line 6). We will make this clearer in the Algorithm caption.", " I am sure that I am using the latest version. In Line 3-4 in Alg. 1 (P2P-MPC), we train $\\hat P$ and $\\hat R$ just based on the static offline dataset $\\mathcal{D}_e$. So I think that obviously we can just pre-train $\\hat P$ and $\\hat R$ and fix them for $\\pi$ learning (Line 5-18), instead of learning $\\pi$ with imperfect $\\hat P$ and $\\hat R$. Are these some special designs or did I miss something?", " We think that the reviewer may refer to the previous version (uploaded before the author response deadline) where the pseudo codes have some mistakes in terms of the loops. We are very sorry for this confusion. \n\nPlease refer to the latest revision of our paper (uploaded yesterday) where Lines 6-20 are all in the loop of ``N epochs''.", " Thanks for the detailed response. The response solves my several concerns. I have one further question on P2P-MPC. I found that the model learning process (Line 3-4 in Alg. 1) is in the loop of ``N epochs'' but the current policy \\pi does not affect the training process of model learning. If I understand correctly, we can move it out as a pre-trained process (out of the loop of Line 2-20), right?", " Regarding the undefined notations, we have added clarifications in our revised version. (Page 2, Line 48 for the term \"value\", Page 5, Lines 149-150 for $R_{max}$ and $D_{TV}$)\n\nAs for the use of the terms \"policy-dynamics\" and \"model-policy\", we have rewrote the relevant part to make it clearer to the readers in our revised version. (Page 6, 186-211)\n\nThanks for your valuable suggestions, and please let us know if there are still confusions.", " Thank you for the suggestion of using a more \"tutorial style\" of writing. We have rewrote Section 3.1 to give a more specific explanation of the algorithm. Please refer to Lines 124-136 in our revision. \n\nPlease let us know if there are still confusions.", " We thank the reviewer for all these valuable comments. We provide point-by-point responses below. \n\n**Q1: \"Can you compare to more other MBRL approaches?\"**\n\n**A1:** We have added one more baseline, i.e., SLBO [1], in the evaluation of our revised version (Section 4.1, Page 7, Lines 226-227). 
\n\n**Q2: \"Can you run on larger problems?\"**\n\n**A2:** In this work, we conducted experiments on MuJoCo and D4RL, which are the widely used benchmarks in existing MBRL research. Extending our work to larger problems is left as important future work.\n\n[1] Luo et al. Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees. 2019.", " **Q6: \"Line 190: The $R^m$ is trained as a neural network? How exactly?\"**\n\n**A6:** In practice, during each training iteration, P2P-MPC first trains the model via the traditional one-step prediction loss, and then trains the $\hat{R}^m$ network by taking transitions sampled from the environment dataset as input, and the prediction errors on these transitions as labels. The prediction error of an environment transition $(s, a, r, s')$ is computed via $\\|\hat{s}'-s'\\|+\\|\hat{r}-r\\|$, where $\hat{s}', \hat{r}$ are sampled from $\hat{P}(\cdot, \cdot|s, a)$. The above details have been added to Appendix B.3 in our revised version.\n\nAs discussed in **A4**, to compute the expected return w.r.t. the current policy, $\hat{R}^m$ may need to predict the model accuracy on unseen transitions, and this requirement for generalization is why we choose to use a neural network. Intuitively, $\hat{R}^m$ can be seen as an indicator that tells the model where its \"weakness\" lies.\n\nThe above clarification has been added to Appendix B.3 (Page 3, Lines 38-48) in the revised version.", " We thank the reviewer for all these valuable comments. Point-by-point responses are provided below. \n\n**Q1: Clarification of details and setup in Figure 1.**\n\n**A1:** Figure 1(a) is a conceptual model that illustrates our motivation, and the setup can be described as follows: Given an arbitrary state-action pair $x_0$, the model has two options of prediction, namely $s_1$ and $s_1'$. Under the old policy $\pi_{\text{old}}$, both options will lead the trajectory to enter regions with low value, hence $\pi_{\text{old}}$ is updated to $\pi_{\text{new}}$ to explore regions with potential high value. Under the current policy $\pi_{\text{new}}$, predicting $s_1$ will result in a subsequent trajectory with significantly higher accumulative error than that of predicting $s_1'$. \n\nFigure 1(b) is an informal instance of Figure 1(a), where $x_0$ corresponds to the ant falling from the sky (executing actions like adjusting the belt of the parachute), $s_1$ and $s_1'$ respectively correspond to landing on the left/right side of the wall, and the arrows as well as the colored regions have the same meaning as the ones in Figure 1(a). \n\nWe have simplified Figure 1 in the introduction of our revised version (Page 2). \n\n**Q2: Cite additional previous work for using the term \"objective mismatch\".**\n\n**A2:** Thanks for reminding us of this. We have cited the mentioned work in our revision (Page 2, Line 42).\n\n**Q3: Justification of \"why exactly the model should prevent the policy from going into high uncertainty regions\".**\n\n**A3:** First, from the theoretical perspective, preventing the policy from going into highly uncertain regions can reduce the accumulative model error, and thus guarantee a tighter performance lower bound and better policy improvement according to Theorem 1.\n\nSecond, as mentioned by the reviewer, the overestimation of unfamiliar states can to some extent be counteracted by utilizing a large batch of parallelized trajectories during model rollout. 
However, it is often not clear to what extent, and with how large a batch size, this overestimation can be counteracted, and there is no guarantee that the expectation of these parallelized samples can well approximate the true value. From this perspective, the potential risk of overestimation may still be high if the policy visits the uncertain regions in the model frequently. That is why the performance becomes better if the model can foresee this risk and avoid it in advance. This is also the main motivation of our work. \n\nFinally, as mentioned by the reviewer, the erroneous part of the model can hopefully be corrected by counterfactual data in the online setting. However, this correcting process itself requires additional samples, which may run counter to the principle of developing model-based methods. Furthermore, exploring the erroneous part of the model may not only provide the agent with false information about the real environment, but also be prone to the *model exploitation* issue which severely hurts the asymptotic performance of the policy. \n\n**Q4: \"How does predicting** $R^m$ **actually differ between this approach and MBPO? How is the expectation with regard to the current policy realized?\"**\n\n**A4:** Roughly speaking, $R^m$ takes a transition tuple as input and returns the model error on this transition. In this regard, the key difference between P2P and MBPO can be described as follows: MBPO optimizes $E_{s_0,a_0,s_1}[R^m]$ where $s_0\sim p_{\pi_{\text{old}}}, a_0\sim\pi_{\text{old}}(\cdot|s_0),s_1\sim \hat{P}(\cdot|s_0,a_0)$, while P2P optimizes $E_{s_{0:T+1},a_{0:T}}[\sum\gamma^tR^m_t]$ where $s_0\sim p_{\pi_{\text{old}}}$ and $a_t\sim\pi_{\text{new}}(\cdot|s_t),s_{t+1}\sim \hat{P}(\cdot|s_t,a_t)$ for $t\in\{0, \ldots,T\}$. Since $s_1'\sim P(\cdot|s_0, a_0)$ can be approximated by sampling from the environment dataset, MBPO updates $\hat{P}$ by directly minimizing $\\|s_1-s_1'\\|$. In contrast, $(s_t,a_t)$ may not be available in the environment dataset, since $s_t$ is predicted by the model and $a_t$ is obtained from the new policy, which has not interacted much with the true environment. In other words, in practice P2P needs to approximate $R^m$ to predict the model accuracy on unseen inputs. We will clarify the specific approach for this approximation in **A6**.\n\nThe expected return with regard to the current policy is approximated through finite trajectory samples generated by active interactions between the fixed current policy and the model.\n\n**Q5: The meaning of Line 162.** \n\n**A5:** As discussed in **A4**, in theory P2P optimizes $E_{s_{0:T+1},a_{0:T}}[\sum\gamma^tR^m_t]$ where $s_0\sim p_{\pi_{\text{old}}}$ and $a_t\sim\pi_{\text{new}}(\cdot|s_t),s_{t+1}\sim \hat{P}(\cdot|s_t,a_t)$ for $t\in\{0, \ldots,T\}$. To approximate the above expectation, the model needs to interact with the current policy $\pi_{\text{new}}$ and optimize over the induced data.", " **Q5: \"Does generating trajectories heading to regions with low uncertainty run counter to the exploration-exploitation principle in reinforcement learning?\"**\n\n**A5:** Generally speaking, the exploration-exploitation trade-off in RL mainly works on the real environment instead of the approximate model. 
Since it is hard for the uncertain regions to reflect the real dynamics accurately, exploring these regions can not only provide the agent with false information about the real environment, but also be prone to the *model exploitation* issue which severely hurts the asymptotic performance of the policy. From the theoretical perspective, preventing the policy from going into highly uncertain regions can reduce the accumulative model error, and thus guarantee a tighter performance lower bound and better policy improvement according to Theorem 1. Furthermore, note that P2P does not directly intervene in the learning of the value function or policy, hence the value function can still predict high value for uncertain regions and encourage the policy to explore them in the real environment. Overall, the focus of P2P is to learn a model which can quickly adapt to the current policy, so as to provide multi-step samples that are as accurate as possible for policy learning.\n", " We thank the reviewer for all the valuable comments. Point-by-point responses are provided below. \n\n**Q1: \"Is the reward function** $R(s,a)$ **assumed to be known? If not, where and how do you learn** $R(s,a)$ **in Algorithm 1?\"**\n\n**A1:** $R(s, a)$ is assumed to be known in our analysis. We are sorry for the confusion and have fixed this problem in the revised version (Section 2, Page 3, Line 103). Note that this is a commonly used assumption, since the sample complexity of learning the reward function with supervised learning is a lower-order term compared to that of learning the transition model [1].\n\n\n\n**Q2: Clarification of** $R^m$.\n\n**A2:** **1) P2P-MPC:** During each training iteration, we first train the model via the traditional one-step prediction loss, and then train the $\hat{R}^m$ network by taking transitions sampled from the environment dataset as input, and the prediction errors on these transitions as labels. The prediction error of an environment transition $(s, a, r, s')$ is computed via $\\|\hat{s}'-s'\\|+\\|\hat{r}-r\\|$, where $\hat{s}', \hat{r}$ are sampled from $\hat{P}(\cdot, \cdot|s, a)$. 
\n\n**Q4: \"In the online setting, could P2P underperform or fail in scenarios where the goal is in a region of high model uncertainty?\"**\n\n**A4:** According to the reviewer's suggestion, we conducted a new experiment to investigate the case when the goal is in an uncertain region in the online setting. For the convenience of implementation, here the term \"uncertainty\" is equated with the epistemic uncertainty [2], which can be quantified as the amount of relevant real-world data. Therefore, a region with more data is considered to have lower uncertainty. Since in pure online settings the uncertainty of regions is hard to control during the training iterations, we first pretrain the model with an offline dataset and then switch to online training. The goal is allocated to the grey region where the relevant offline samples are partially discarded. The percent of discarded samples is set to 25%, 50%, 75% and 100% respectively and the results are given as follows:\n\n\n| | 25% | 50% | 75% | 100% |\n| :-----: | :-------------: | :------------: | :------------: | :------------: |\n| P2P-MPC | $148.9\\pm 35.9$ | $75.4\\pm 31.6$ | $51.7\\pm 29.8$ | $43.2\\pm 25.1$ |\n| MBPO | $116.2\\pm 35.6$ | $61.1\\pm 34.8$ | $47.5\\pm 35.1$ | $44.7\\pm 30.2$ |\n\nAs the degree of uncertainty increases, the performances of both methods degrade rapidly, but P2P-MPC still outperforms MBPO in all these cases except for the 100% case, where P2P-MPC achieves slightly worse performance in average but better stability with lower standard deviation. To give a possible explanation of these results, it is worth noting that **1)** P2P does not directly intervene the learning of policy or value function, but only improves the accuracy of the generated samples. As a result, the value function can still predict high value for uncertain regions and thus encourage the policy to explore them in the real environment; and **2)** in contrast, even if the goal is in a region with high uncertainty and the model does not prevent the policy from exploring this region in the model, the value function can still predict low value of this region due to the lack of relevant data and thus mislead the learning of policy. \n\nWe have added these new results and explanations in Appendix D (Appendix Page 4, Lines 53-74).\n\n\n[1] Azar et al. Minimax pac bounds on the sample complexity of reinforcement learning with a generative model. 2013.\n\n[2] Chua et al. Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models. 2018.\n", " We thank the reviewer for all these valuable comments. We provide point-by-point responses below. \n\n**Q1: About the implementation details.** \n\n**1) In P2P-MPC, \"How do we estimate $R^m$ using a neural network, do you mean directly predicting the model error?\"** \n\n**A1(1):** Yes, the reviewer has the right understanding, that is, the $\\hat{R}^m$ network is trained to predict the model error. During each training iteration, P2P-MPC first trains the model via traditional one-step prediction loss, and then trains the $\\hat{R}^m$ network by taking transitions sampled from the environment dataset as input, and the prediction errors on these transitions as label. The prediction error of an environment transition $(s, a, r, s')$ are computed via $\\\\|\\hat{s}'-s'\\\\|+\\\\|\\hat{r}-r\\\\|$, where $\\hat{s}', \\hat{r}$ are sampled from $\\hat{P}(\\cdot, \\cdot|s, a)$. 
\n\n**2) In P2P-RL, \"If we use the model for multi-step rollout, in the successor time steps, we cannot get the ground-truth** $P^*(s’|\\hat s,a)$**, where $\\hat{s}$ is predicted by $\\hat{P}$, how to calculate** $R^m$**?\"**\n\n**A1(2):** Unlike P2P-MPC, P2P-RL does not actually use the trajectories generated by the interaction of $\\hat{P}$ and $\\pi$. Instead, P2P-RL trains the model on the environment dataset and treats the model learning process as an offline RL problem, where the true dynamics becomes the \"decision maker\" of the environment dataset in our problem formulation. Thus, regarding a transition $(s, a, r, s')$, $R^m$ can be directly approximated by computing $-\\\\|\\hat{s}'-s'\\\\|-\\\\|\\hat{r}-r\\\\|$ , where $\\hat{s}', \\hat{r}$ are sampled from $\\hat{P}(\\cdot, \\cdot|s, a)$. \n\n\n\nRegarding the mentioned details above and the rest of the concerns about P2P-RL, we have taken the reviewer's suggestion and added a more complete description as well as pseudocodes in Appendix B.3 (Appendix Page 3, Lines 38-48) and Appendix E (Appendix Page 4, Lines 75-77) to further improve the readability.\n\n**Q2: Explanation of the contradiction between the cumulative error and the performance of P2P-RL and P2P-MPC in Hopper.**\n\n**A2:** In our problem formulation, the final policy performance is not only determined by the accumulative model error but also affected by other factors (e.g., the value function approximation in the model learning phase of P2P-RL), which may result in the contradiction mentioned by the reviewer. \n\n\n\n**Q3: About the related work.** \n\n**A3:** Thanks for recommending these related articles. \n\n- Shang et al. [1] propose an environment reconstruction method which models the influence of the hidden confounder on the environment by treating the platform, the user and the confounder as three agents interacting with each other. They focus on the offline setting (i.e., RL-based recommendation) and simultaneously train the model and the policy using a multi-agent imitation learning method. \n- Xu et al. [2] treat the model as a dual agent and analyze the error bounds of the model. They propose to train the model using imitation learning methods. Chen et al. [3] also consider multi-step model error, but mainly focus on handling counterfactual data queried by adversarial policies. Unlike the above two work that both focus solely on model learning, P2P aims at proposing a general MBRL framework and attempts to adapt the model to the continuously updating policy quickly.\n\nWhile the above work shares some similarity with our work, the targeted problems and the proposed solutions are not exactly the same as P2P. We have added these references into the Related Work section in the revised version (Appendix F, Page 5, Lines 114-125).\n\n[1] Wenjie Shang et al. Environment Reconstruction with Hidden Confounders for Reinforcement Learning based Recommendation. 2019.\n\n[2] Tian Xu et al. Error Bounds of Imitating Policies and Environments. 2020.\n\n[3] Xiong-Hui Chen et al. Adversarial Counterfactual Environment Model Learning. 2022.\n", " The authors propose an MBRL framework named Plan to Predict (P2P), which treats the model rollout process as a sequential decision-making problem. The model in P2P can minimize the multi-step accumulative errors on the induced trajectories and thus quickly adapt to the state distribution of the current policy, the authors give a theoretical guarantee for P2P. 
Empirical results on several MuJoCo benchmark tasks verify the effectiveness of P2P. Strengths\n1. The article is overall easy to follow except Section 3.3 (see Questions below);\n2. The motivation to fix the mismatch between model learning and model usage is reasonable and valuable. \n3. The experiment is well-designed to demonstrate the mechanism of P2P.\n\nWeaknesses\n\n\nThe article is sound and has no major drawbacks to me. But there are several questions for the authors to further clarify (see below). \n1. The description in Section 3.3 is a bit confusing. For P2P-MPC, Line 191 mentions \"Besides, the reward R^m, which cannot be directly computed by definition, is approximated here using a neural network and trained on the environment dataset\". How do we estimate R^m using a neural network, do you mean directly predicting the model error (sounds like a difficult job)? In P2P-RL, the authors mention (Line 205), \"simply approximated by re-predicting the dynamics in the sampled transitions and computing the prediction errors for the current model-policy\". It seems that R^m is obtained based on the data P^*, calculated by the \\hat P prediction error. But if we use the model for multi-step rollout, in the successor time steps, we cannot get the ground-truthP^*(s’|\\hat s,a), where \\hat s is predicted by \\hat P, how to calculate R^m? Also, for P2P-RL, a lot of technical details are mentioned here, such as: “updating the next state in each sampled transition by applying the current policy-dynamics“; ”adopting SAC with behavior cloning as the underlying learning algorithm“; ”DualDICE is also applied to correct the estimation of the state distribution“. I suggest that the authors can add pseudocode or a more complete description in the appendix to improve the readability of this part.\n\n\n2. The author's explanation for the poor performance of the P2P-RL method (in Line 230) is that \"we find that P2P-RL sometimes struggles to balance the loss of behavior cloning and RL, leading to the difficulty in hyperparameter tuning and the instability of learning network parameters”. But we can find that in the Hopper environment, the cumulative error of P2P-RL and P2P-MPC algorithms is similar (Figure 3), while the policy performance of P2P-RL is twice as bad. This seems to be contrary to the explanation proposed by the authors.\n\n3. I think some related work is missing in this article. In recent years, a series of practical work and theoretical analyses have attempted to learn the model using the \"treats the model rollout process as a sequential decision-making problem\" framework. Its solution is similar but not exactly the same as P2P-RL. I recommend the author to read the following articles and add to the related work, which may inspire the author's follow-up research and improvement work [1,2,3].\n\n\n[1] Wenjie Shang et al. Environment Reconstruction with Hidden Confounders for Reinforcement Learning based Recommendation. 2019.\n\n[2] Tian Xu et al. Error Bounds of Imitating Policies and Environments. 2020.\n\n[3] Xiong-Hui Chen et al. Adversarial Counterfactual Environment Model Learning. 2022.\n\n\n\n\n \nOverall, the article is sound and has no major drawbacks to me. If the authors clarify the proposed questions and improve the writing of Section 3.3, I will consider increasing the score of the article.", " The paper proposes a model-based reinforcement learning algorithm that alternates between model-learning with the policy fixed, and policy improvement with the model fixed. 
The key insight is that the model-learning phase is treated as a Markov decision process (MDP) with the estimated model as the \"policy\" to be learned (with the actual policy fixed), where the reward is the negative mismatch between the estimated model and the actual system transition dynamics. The paper claims that encapsulating model-learning with a fixed policy as a \"reversed\" MDP encourages model-learning to more quickly adapt to policy updates and produce a model that is more accurate over multiple transition steps. *** ORIGINALITY ***\n\nThe key contribution of the paper is the treatment of the unknown transition model as the \"decision-maker\" when paired with a fixed policy during model learning. The novelty of this idea is clearly established by the authors in their comparisons to prior work. The paper proposes the \"meta-algorithm\" Algorithm 1, which requires the user to choose from existing algorithms to specify the model and policy learning phases. Overall, the proposed novelty can be viewed as a specification on the objective for the model learning phase. However, the empirical improvements offered by this key insight are clearly demonstrated in Section 4, so there is strong value in it as a contribution.\n\n*** QUALITY ***\n\nThe paper is technically sound for the most part, and the methods used seem appropriate. However, it is unclear if the authors learn the reward function in Algorithm 1 at all -- there does not seem to be a description anywhere in the main paper if R(s,a) is known or must be learned. In addition, in Figure 4 of Section 4, the results cannot be interpreted because the authors have not provided a clear description of what these plots show. Indeed, the caption only vaguely states that these results compare \"performances induced by two ways of computing the multi-step prediction loss\". Since the authors repeatedly tout the benefit of their approach over prior approaches using multi-step objectives, this part of Section 4.3 should be expanded and improved.\n\n*** CLARITY ***\n\nThe paper is clearly written for the most part. Some notation in Theorem 1 is undefined in the main body of the paper and in the appendix (e.g., R_max, D_TV). The use of the term \"value\" throughout the paper should be clarified (e.g., does this refer to a value function?). The use of the terms \"policy-dynamics\" for the policy pi and \"model-policy\" for the dynamics model P_hat beginning on line 180 is confusing. It would probably be clearer to just refer to them as the policy and model, respectively, and ensure it is clear to the reader when either the policy-learning or the model-learning phase is being discussed.\n\n*** SIGNIFICANCE ***\n\nThe key idea in treating dynamics learning with a fixed policy as an MDP with the transition model as the \"policy\" is interesting and could be useful to other researchers. The empirical results, particularly in Figures 3 and 7, seem to show that P2P can potentially reduce model error over multi-step rollouts.\n\nThe primary concern I have with this paper is its treatment of regions in state-action space with high model uncertainty. The authors seem to consider such regions as ones that should be avoided during reinforcement learning. On line 272, the authors state that their method instead \"tends to generate trajectories heading to the regions with low uncertainty\", while the baseline method MOPO \"fails to prevent the trajectories from being absorbed into the highly uncertain region\". 
This line of reasoning runs counter to the exploration-exploitation principle in reinforcement learning. Indeed, while the scenario in Figures 1 and 6 seems to have the goal in a region of low model uncertainty, it is certainly possible to encounter scenarios where the goal is instead in a region of high model uncertainty. Thus, exploration into regions with high model uncertainty in the online setting perhaps should be encouraged rather than avoided (subject to, e.g., safety constraints). This is a potentially critical limitation of the proposed work that is not discussed, and perhaps should be investigated with an additional experiment where the goal is located in a region of high model uncertainty. Based on the discussion above in \"Strengths and Weaknesses\", I have the following questions and requests for clarification:\n\n- Is the reward function R(s,a) assumed to be known? If not, where and how do you learn R(s,a) in Algorithm 1?\n\n- For P2P-MPC, on lines 190-191 the authors state \"the reward Rm, which cannot be directly computed by definition, is approximated here using a neural network and trained on the environment dataset.\" For P2P-RL, on the lines 205-207 the authors state \"since the samples come from the true environment, the reward Rm can be simply approximated by re-predicting the dynamics in the sampled transitions and computing the prediction errors for the current model-policy\". Both statements lack significant detail and yet seem to be critical to model learning since they quantify the difference between the model P_hat and the true transition dynamics P. Please expand on these points.\n\n- Please expand on your evaluations of the model error induced by P2P and multi-step loss function baselines, particularly in Figure 4. As it stands, the lack of details severely hampers reproducibility.\n\n- The authors promote that their proposed method P2P learns by \"actively interacting with the current policy\" (line 66) and has the \"capability to avoid model error\" (line 84). Distinctly, the authors' ant-in-a-maze example scenario has the goal in a region of low model uncertainty. However, it is certainly possibly to encounter a scenario where the goal is in a region of high model uncertainty, thereby requiring further exploration in the online setting. Could P2P underperform or fail in such scenarios? Further experiments would be welcome here.\n There is no substantial discussion of limitations in this work. There is some superficial comparison of P2P-MPC and P2P-RL at the end of Section 4.1 and in Appendix C, but little insight or discussion is provided. I have raised an issue regarding exploration-exploitation in the review sections above that may be useful for the authors to consider.\n\nThe authors have adequately addressed the societal impact of their work.\n", " The paper proposes an improved method for learning models for model-based reinforcement learning, by recasting the supervised regression approach commonly used as a control problem. The following will be very brief and most of my comments will be under \"Questions\", because I have serious doubts that I actually understood the method the authors are proposing. This is not due to lack of familiarity with the field, I have done research and published in model-based reinforcement learning. I will point out my problems in the next section.\n\nGiven the very nice results and the overall introduction, I am very favorable to recommend acceptance. 
This is not currently reflected in the score, however, due to the questions I have about the paper. Therefore I would ask the authors to go through them and hopefully improve the presentation (or convince me that I overlooked crucial details, of course :) ), and to not be discouraged by the low score! I think the ideas warrant a presentation at NeurIPS, but I think that the paper can be heavily improved to make it accessible and give it a chance to shine in the community. - Figure 1: while nicely designed, I do not think I fully understand what the figure represents. It has a lot of details and an unfamiliar setup. Please simplify and clarify.\n- Line 42: Since the term \"objective mismatch\" is used, I would encourage the authors to cite previous work by Lambert et al. and others.\n- Justification: It is unclear to me why exactly the model should prevent the policy from going into high-uncertainty regions. First, in most model-based methods, several trajectories would be computed (explicitly with random shooting, or implicitly, by repeatedly querying a probabilistic model). These should to some extent counteract the overestimation, as the model would be less controllable in this region. Furthermore, executing the next policy on the real environment will hopefully provide counterfactual data to the previously erroneous model.\n- How does predicting $R^m$ actually differ between this approach and MBPO? How is the expectation with regard to the current policy realized?\n- Line 162: I do not fully understand what is meant by this sentence. Is the difference in the data used for the update?\n- Line 190: The $R^m$ is trained as a neural network? How exactly? This seems crucial for making the method work, but is treated as a side comment.\n\nI think a lot of my confusion stems from the last question: how is the reward/loss for the model actually computed? I think a minimal change that would alleviate a lot of my concerns would be to add a clearer, step-by-step explanation of the algorithm to the top of Section 3, instead of relegating implementation details to later. Since the setup is fairly different from other model-learning approaches, a more \"tutorial style\" of writing would immensely benefit my (and hopefully others') understanding. The paper discusses an optimization method and therefore does not clearly require a discussion of societal impact.", " Model-based reinforcement learning promises to reduce sample complexity in reinforcement learning. MBRL is responsible for some of the major breakthroughs in AI, such as AlphaGo Zero and AlphaFold. MBRL may suffer from model inaccuracy, especially when the learning of the model, which is typically done with 1-step lookahead, is separate from model usage, which is typically done with many-step lookahead. The paper suggests a solution to this problem, using a meta-approach that treats the model learning as an MDP problem itself. \nThe paper offers both a theoretical analysis and convincing experimental evidence of strongly better performance than some of the best model-based and some of the best model-free approaches. This is a convincing contribution. Strengths\n- clear writing, clear problem statement\n- clear contribution\n- good theoretical analysis\n- convincing experimental evidence\n- important new idea that performs substantially better; an important contribution\n\nWeaknesses (few)\n- can you compare to more MBRL approaches?\n- can you run on larger problems? Few limitations are mentioned in the paper. 
The authors are invited to provide more.\nThere are no ethical issues to address." ]
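As an aside on the Q4/A4 ablation discussed above, the following is a hypothetical sketch (not the authors' code; all names are illustrative) of how discarding a fraction of offline samples in the goal region could be implemented:

```python
# Illustrative sketch: drop a fraction `p` of the offline samples whose states
# fall inside the goal region, so that region's epistemic uncertainty increases.
import numpy as np

def discard_near_goal(dataset, goal_center, radius, p, rng=None):
    """dataset: dict of numpy arrays with a shared first dimension, incl. 'obs'."""
    rng = rng or np.random.default_rng(0)
    dist = np.linalg.norm(dataset["obs"] - goal_center, axis=-1)
    in_region = dist < radius
    drop = in_region & (rng.random(len(dist)) < p)  # keep (1 - p) of region samples
    keep = ~drop
    return {k: v[keep] for k, v in dataset.items()}
```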
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 5 ]
[ "-jyFoE-CbgV", "4WM3j6k1ARb", "dfqfZr8qvHa", "C35nZlnc9KM", "kpUkUXMeJA", "BI3hvTzIkMi", "T-l9Juofeb", "A_YB_J1pfmD", "QE7DLdPRSaz", "NlaBpnCwz5g", "OPWc0Cz20ux", "QE7DLdPRSaz", "hMFxbZSxiHt", "A_YB_J1pfmD", "Rtzaz04-dOz", "nips_2022_L9YayWPcHA_", "nips_2022_L9YayWPcHA_", "nips_2022_L9YayWPcHA_", "nips_2022_L9YayWPcHA_" ]
nips_2022_zTQdHSQUQWc
FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting
Recent studies have shown that deep learning models such as RNNs and Transformers have brought significant performance gains for long-term forecasting of time series because they effectively utilize historical information. We found, however, that there is still great room for improvement in how to preserve historical information in neural networks while avoiding overfitting to noise present in the history. Addressing this allows better utilization of the capabilities of deep learning models. To this end, we design a \textbf{F}requency \textbf{i}mproved \textbf{L}egendre \textbf{M}emory model, or {\bf FiLM}: it applies Legendre polynomial projections to approximate historical information, uses Fourier projection to remove noise, and adds a low-rank approximation to speed up computation. Our empirical studies show that the proposed FiLM significantly improves the accuracy of state-of-the-art models in multivariate and univariate long-term forecasting by (\textbf{19.2\%}, \textbf{22.6\%}), respectively. We also demonstrate that the representation module developed in this work can be used as a general plugin to improve the long-term prediction performance of other deep learning modules. Code is available at https://github.com/tianzhou2011/FiLM/.
Accept
The paper provides a time series modeling technique combining the use of Legendre polynomials for projections and frequency-based low-rank approximation / selection. The reviewers found the paper to be interesting, and the results convincing and possibly usable in other sequence modeling tasks. Some questions were raised by nNQa about the baselines / comparisons, which I felt were addressed appropriately by the authors. Other questions that were raised about the details of the experiments, including the datasets, the ablations performed, and comparisons to alternatives (such as lagged inputs in LSTMs, and comparisons to N-HiTS) seem to have been well addressed by the authors.
train
[ "iYHCxZE4GUY", "2PBHualu9cZ", "NFm5GM8F3U", "M6wx3aVzwRJ", "ss23jqooBgL", "96kGH0vwoK7", "IQ9kWTxy7qo", "7pHU1Rq4Pt3", "CiHej5tkjtL", "bdXdn2TCeL", "ZSLt0-1jUCs" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer Pwiu\n\nWe want to thank your valuable comments sincerely. Indeed as you point out, the low-rank approximation is not a stable improvement design for all datasets. On the contrary, it will hurt our performance in the heaviest compression version. But, it might be used as a building block for a future efficient or mobile model as it dramatically decreases the learnable parameters. And thanks for pointing out the typos; we will check the appendix and fix them. \n\nKind regards, The Authors", " > > Q1: Different datasets chosen for different ablation studies, and suggest using all datasets for ablation study\n\n> Some datasets, such as Traffic and Electricity, are significantly larger than others, making it time consuming to complete its ablation study. Due to the lack of computational resources, we limit the ablation studies to the datasets of medium size. We have completed some ablation studies with all the datasets, and the rest would be finished soon. We found the same observations from the complete ablation study as those from the datasets of medium size. We will include the full results of ablation studies in the final version. Two full ablation tables are already in Appendix K.\n\nThank you for providing the additional ablation results. I'm still wondering why ETTm1 is used for some ablations and ETTm2 for other ablations? \n\nFor the shown ablations, I agree that the ablations demonstrate the gain of the LPU layer and the mode selection policy. However, the results for the low-rank approximation are mixed (error for Electricity, Traffic, and ILI increases with K=4 and K=1). Thus, I would suggest the authors to provide a balanced discussion of this result in the camera-ready version of the paper because it seems that one needs to be careful (depending on the dataset) when using LRA in this method. \n\n> We follow the reviewer's suggesion and completed the LPU boosting experiments for LSTM with lagged values. The results are summarized in Appendix K Table 19. Lag features improved LSTM model, although not as much as LPU features. We believe this is due to the fact that LPU introduces many more global info than lag features. We also note that adding LPU to the lagged LSTM yields worse performance in some datasets, which could be caused by the conflicting effect between lag features and LPU features.\n\nThank you for this clarification. \n\n> We follow the reviewer's suggesion and completed the LPU boosting experiments for LSTM with lagged values. The results are summarized in Appendix K Table 19. Lag features improved LSTM model, although not as much as LPU features. We believe this is due to the fact that LPU introduces many more global info than lag features. We also note that adding LPU to the lagged LSTM yields worse performance in some datasets, which could be caused by the conflicting effect between lag features and LPU features.\n\nThank you for adding this experiment. Table 19 suggests that the used lag features do NOT improve the LSTM model (values in the \"LPU\" column are higher for lagged-LSTM than for LSTM). What lags are exactly used? My expectation would have been that the lagged LSTM should be better than the vanilla LSTM. I suggest to add these details in the final version of the paper. \n\n> Both Table 2 and 10 include experimental results for univariate forecasting. The results in Table 2 are intended to examine the pure effect of LPU without introducing other tricks such as multiscale mixture of experts mechanism and RevIn data normalization. 
This is why the results in Table 2 are worse than those in Table 10, which are obtained by the combination of all tricks. We apologize for the misleading name used for the first baseline method in Table 2, which is not our full FiLM model but an LPU+FEL combination. We have made it clear in the revised version.\n\n> Similar to Q3, the baseline method in Table 6 is LPU+FEL, not FiLM. Inspired by the review comments, we realize that we should conduct an ablation study for a full FiLM model, instead of one for LPU+FEL. We have completed the empirical studies and included the new results in Table 6 of the revised draft. In Table 6, ``relative'' means the relative increase of MSE over the FiLM model. LPU+MLP combining all boosting tricks has even slightly better performance compared to FiLM for the ETTm1 dataset, but FiLM remains the most stable and effective model among all variants. \n\nThank you for this clarification. I find this still somewhat confusing in the paper (it is not clear to me at which places this is corrected in the current version). I also noted that several of the added Appendix tables/captions contain typos. I think the paper still needs significant edits for the camera-ready version. \n\nOverall, I think the proposed method is an interesting addition to the time series forecasting landscape, but I remain skeptical of the effectiveness of the low-rank approximation. The authors addressed most of my concerns and I raised my score. \n", " Dear reviewer nNQa,\n\nThanks again for your review. Since the discussion period is approaching its end, we would be glad to hear from you if we have addressed your questions/concerns.\n\nKind regards, The Authors", " Dear reviewer Pwiu,\n\nThanks again for your review. We have just finished the experiments and updated the remaining two full-dataset ablation tables in Appendix K, as you suggested. Since the discussion period is approaching its end, we would be glad to hear from you if we have addressed your questions/concerns.\n\nKind regards, The Authors", " >Q8: How did you choose the hyperparameters (M and N) of the model for the main table? (Appendix F)\n\nWe have performed experiments to study how M and N influence the performance on different datasets with different input lengths, and summarize the results in Figure 8 of Appendix F. Note that in the results reported in the main table, we do not use the optimal M and N, as finding the optimal hyperparameters for each dataset and forecasting setting is computationally expensive. Instead, we use the default setting by fixing M=32 and N=128 for all experiments in the main table. As a general guideline from our empirical studies on different settings, we recommend either the default hyperparameter setting, which achieves decent (actually quite competitive) performance, or tuning the hyperparameters via cross-validation if optimal results are preferred and enough computing resources are available. ", " We would like to sincerely thank the Reviewer nNQa for providing a detailed review with insightful questions.\n>Q1: Authors should include non-Transformer methods\n\nWe have run experiments with two non-Transformer methods, i.e., N-HiTS and Seasonal-naive. N-HiTS is the latest development from the research group that also published N-BEATS, and Seasonal-naive is a strong baseline that outperforms N-HiTS, FEDformer, and Autoformer on the exchange datasets. Furthermore, we directly take the results of N-HiTS from the original paper to avoid inappropriate parameter tuning. 
We found that (a) FiLM outperforms N-HiTS in 33/48 cases, and (b) FiLM outperforms Seasonal-naive on all datasets, including the exchange datasets. All the updated results can be found in Appendix K. \n\n>Q2: how (or if) FiLM models relations between the different time series.\n\nSince FiLM models different channels independently, each series is indeed treated independently in this work. It is possible to mix FiLM features by introducing additional components such as MLP embedding layers. We emphasize that the main contribution of this work is to introduce the LPU and FEL modules for forecasting neural networks.\n\n>Q3: The results on training speed presented in appendix J are not convincing.\n\nThank you for reviewing our paper, including the appendix, carefully and comprehensively. We agree that the overall training time is important. In our experiments, we fix the number of training epochs, and thus comparing the training time per step is equivalent to comparing the overall training time. We are aware that the overall training time can be different if one follows a different training setting. It is true that model efficiency comparison may require different metrics for different algorithms in different scenarios. \nWe acknowledge that N-HiTS is a very efficient model with a shorter overall training time compared to FiLM. We are also aware that models such as shallow MLP-based models and the seasonal naive model can be extremely efficient to train (or even require no training at all). In the revision, we make our
We will make it clear in the final version.\n\nThe same argument applies to Table 5 as well, where we aim to study how different variants of LPU affect the performance and thus LPU+FEL is used as the baseline without other tricks. \n\n>Q7: Can you provide an explanation on how the model learns relations between time series or is the model univariate?\n\nAs we respond to Q2, FiLM models different channels independently. We use the original features as channels. Note that an extra embedding layer could be used to merge the information between multiple time series. We are trying to apply FiLM in the recommendation domains with multiple feature embeddings. It is a topic that is worth further investigation. ", " We would like to sincerely thank the Reviewer vH4a for providing valuable comments and recognizing the value of our work.\n>Q1: This is not really a weakness. Both using orthogonal functions as basis to store features for time series (LPU) and using choosing Fourier frequencies / dimensional reduction to remove noise are well-explored idea in time series modeling. One may argue that the paper lacks novelty. Personally, I think it is nice to see them combined to achieve good performances and the ablation studies show the necessity of both to achieve the improvement.\n\nThanks for appreciating our work by acknowledging the novelty. It is worth mentioning that our designed model does not contain any non-linear activation function. All the non-linearity transformations are done using projections (Legendre function projection by LPU and Fourier base function projection by FFT). At the beginning, we even considered the title ``Projection is all you need\" to emphasize the generality of our method, but later changed to the current title due to the lack of favorable results in multiple fields. It would be great if this work could inspire researchers in other fields.\n\n>Q2: Introduce what subscripts and superscripts mean before line 117.\n\nWe have a smooth function $f(x)$ to approximate. f(x)_${[t-\\theta,t]}$\nmeans $x \\in [t-\\theta,t]$, where $t$ is time and $\\theta$ is the window\nsize. $g^{(t)}(x)$ is the approximation of f(x)_${[t-\\theta,t]}$. So\n$g^{(t)}(x)$ also has $x \\in [t-\\theta,t]$ with a window size of\n$\\theta$. The $x$ in $g^{(t)}(x)$ starts from $t-\\theta$ and ends at $t$\nwith a window size of $\\theta$. So the measure ($\\mu^{(t)}$) of\n$g^{(t)}(x)$ is $\\frac{1}{\\theta}I_{[t-\\theta,t]}(x)$. We\nwill add more detailed explanations in Subsection 2.1 to clarify the\nnotations.\n", " We would like to sincerely thank the Reviewer Pwiu for providing thorough and insightful comments.\n\n> Q1: Different datasets chosen for different ablation studies, and suggest using all datasets for ablation study\n\nSome datasets, such as Traffic and Electricity, are significantly larger than others, making it time consuming to complete its ablation study. Due to the lack of computational resources, we limit the ablation studies to the datasets of medium size. We have completed some ablation studies with all the datasets, and the rest would be finished soon. We found the same observations from the complete ablation study as those from the datasets of medium size. We will include the full results of ablation studies in the final version. Two full ablation tables are already in Appendix K.\n\n>Q2a: What does \"comparable-sized linear layer\" mean exactly here?\n\nThe \"comparable-sized linear layer\" is mentioned in the ablation study of LPU. 
For a fair comparison, the LPU layer and its variants (MLP) are set to a similar size. When putting a tensor with shape [L, D] into an LPU with N Legendre functions, the output's shape becomes [N, L, D], which is N times the size of the input. When replacing LPU with a linear layer, the linear layer should achieve the same effect, i.e., input size [1, L, D] and output size [N, L, D] (a toy sketch of this projection is included after the review excerpts below). We will make it clear in the revised version.\n\n>Q2b: Whether the LPU improves the performance over a standard architectural choice (for example, LSTM with lagged values) in forecasting?\n\nWe follow the reviewer's suggestion and completed the LPU boosting experiments for LSTM with lagged values. The results are summarized in Appendix K Table 19. Lag features improved the LSTM model, although not as much as LPU features. We believe this is due to the fact that LPU introduces much more global information than lag features. We also note that adding LPU to the lagged LSTM yields worse performance in some datasets, which could be caused by the conflicting effect between lag features and LPU features.\n\n>Q3: Unlinkable results for Table 2 and Table 10\n\nBoth Table 2 and 10 include experimental results for univariate forecasting. The results in Table 2 are intended to examine the pure effect of LPU without introducing other tricks such as the multiscale mixture-of-experts mechanism and RevIN data normalization. This is why the results in Table 2 are worse than those in Table 10, which are obtained by the combination of all tricks. We apologize for the misleading name used for the first baseline method in Table 2, which is not our full FiLM model but an LPU+FEL combination. We have made it clear in the revised version.\n\n>Q4: Relative improvement in Table 6.\n\nSimilar to Q3, the baseline method in Table 6 is LPU+FEL, not FiLM. Inspired by the review comments, we realize that we should conduct an ablation study for a full FiLM model, instead of one for LPU+FEL. We have completed the empirical studies and included the new results in Table 6 of the revised draft. In Table 6, ``relative'' means the relative increase of MSE over the FiLM model. LPU+MLP combining all boosting tricks has even slightly better performance compared to FiLM for the ETTm1 dataset, but FiLM remains the most stable and effective model among all variants. ", " The paper introduces FiLM, which stands for Frequency improved Legendre Memory Model, for long-term time series forecasting. The authors leverage Legendre polynomials to obtain a fixed-sized representation of the cumulative history of the time series and combine it with Fourier analysis and low-rank approximation. The authors show that the method is competitive on long-term time series forecasting tasks and analyze the effect of several model choices and parameter sensitivities. The authors combine several known components (Legendre polynomials, Fourier analysis, low-rank approximation, mixture of experts) into a new time series forecasting method. While the individual components are not novel, the non-trivial combination of the components presented in this paper is novel. Specifically, studying Legendre polynomials for time series forecasting could be an alternative to learning long-range dependencies in time series data. The authors provide proof for function approximation and error accumulation bounds. \n\nThe proposed model is evaluated on a set of six real-world datasets in a long-term forecasting setting and the authors carefully analyze the impact of their components in ablation experiments. 
However, the plethora of results presented makes it in some cases hard to see what actually has been presented and why. I will detail this in the Questions section. I would increase my score if these questions are addressed during the rebuttal. 1. The results in Table 2, Table 3, and Table 4 (LPU layer effect, low-rank approximation, and mode selection policy) are reported on different datasets. Why is this the case? How have the datasets been selected for each experiment? I suggest that the authors provide the full table (with all datasets) of each experiment in the Appendix and clarify why these specific datasets have been selected in each experiment. \n\n2. In Table 2 (LPU boosting results) the authors show the impact of LPUs (compared to linear layers). What does \"comparable-sized linear layer\" mean exactly here? Probably the biggest advantage of the LPU is to provide a meaningful representation of long time series history. However, using lagged values is a standard trick to capture long-range dependencies with standard NN architectures (like MLP layers or LSTM). I would suggest the authors include lags in this experiment to evaluate whether the LPU improves the performance over a standard architectural choice (for example, LSTM with lagged values) in forecasting. \n\n3. In Table 2, I'm unable to link the results in the \"FiLM\" column to either Table 1 (multivariate results) or Table 10 (univariate results). Shouldn't the results in the FiLM column correspond to either of those (at least for Electricity)? The error presented in this column is also much larger than the results in Table 1/10 and the error bars in Table 9. For Electricity these results are also used in Tables 5 and 6, but the results in Table 1 are different. I would kindly ask the authors to clarify this. It seems to be as expected in Tables 3 and 4. \n\n4. Table 6: What is the relative improvement here? Relative to what? I again fail to link the results for FiLM to either Table 1 or Table 10. I would kindly ask the authors to clarify the table. The limitations of this model or the limitations of the evaluation are not discussed in the main text. I would kindly ask the authors to add a short limitations section. ", " The paper introduces two techniques to improve modeling of long time series - the Legendre Projection Unit (LPU), which compresses a time series with Legendre polynomials as a basis; and the Frequency Enhanced Layer (FEL), which performs a low-rank approximation and selects a subset of Fourier transformation frequency modes. These layers are not domain- or task-specific and can be used in many time series modeling tasks. The authors provide both theoretical and empirical support for the effectiveness of the model. Strengths:\n- Clear presentation. Figures 4 and 5 (and the code in the appendix) are very informative in explaining how LPU and FEL are implemented. The extended list of ablation studies answered all of my questions about performance. \n- Strong performance compared to popular deep-learning methods for long-term time series forecasting.\n- Theoretically sound methods for reducing noise and capturing structure at different timescales. \n- Merit in model design. One can change the basis function in LPU to other classes (Fourier / wavelets) according to the data. 
Also love how they are modular - LPU and FEL are clear and easy to incorporate into many existing time series models to mitigate the deterioration of long-horizon prediction performance. I can see wide use of the methods introduced in this paper. \n\nWeakness: \n- This is not really a weakness. Both using orthogonal functions as a basis to store features for time series (LPU) and choosing Fourier frequencies / dimensionality reduction to remove noise are well-explored ideas in time series modeling. One may argue that the paper lacks novelty. Personally, I think it is nice to see them combined to achieve good performances and the ablation studies show the necessity of both to achieve the improvement. \n\n Not being from the time series community, I was a little confused by the notations at first. It might help clarity to, for example, introduce what subscripts and superscripts mean for $f(x)_{[t-\theta, t]}$, $g^{(t)}(x)$, and $\mathbb{I}_{[t-\theta, t]}$ before line 117.\n\n N/A", " The paper proposes FiLM, a novel model based on Legendre projections, for long-horizon forecasting. The paper proposes two novel components: the Legendre projection unit (LPU) and the Fourier Enhanced Layer (FEL), which are combined in the FiLM architecture. Both LPU and FEL components can be used in multiple architectures. The authors tested the proposed approach on several benchmark datasets, compare it against recent Transformer-based models, and provide comprehensive ablation studies of the proposed components. Strengths:\n- The paper proposes two original components: LPU and FEL, which can be incorporated into many different architectures.\n- FiLM achieves superior performance to baselines, including the recent FEDformer model, and the proposed components improve the performance of other architectures.\n- Authors provide some theoretical results which support the design of the components.\n- FiLM is a simpler and faster model than Transformer-based models.\n- The paper is well written and clear.\n- Long-horizon forecasting is a very relevant topic and an active area of research.\n\nWeaknesses:\n- Recent studies have shown that many transformer-based models do not achieve SoTA performance in this setting [1, 2]. In many cases, improvements over previous models are caused by flawed comparisons (such as not tuning baselines properly; I confirmed with the authors of the Autoformer that, for example, they did not tune baselines such as the N-BEATS) or by omitting stronger baselines. Authors should include non-Transformer methods as well, including simple models. For example, a simple seasonal naive outperforms the Autoformer and FEDformer on Exchange and ILI (based on my own experiments).\n- While the main experiments are performed on multivariate datasets, the paper does not discuss how (or if) FiLM models relations between the different time series. Based on the architecture description it seems forecasts are produced independently for each channel. Authors should make this clear.\n- The results on training speed presented in appendix J are not convincing. First, the total training time is more relevant than per-iteration time but is not included nor discussed in the paper. Second, authors should again include simpler (non-Transformer) baselines, to better assess the trade-off between computational complexity and performance.\n- Authors do not discuss the complexity or training times against the number of time series (D). 
As seen in Figure 5, the components operate separately for each channel, which suggests poor scaling on D. Authors should provide comparisons in datasets with more time series, such as Traffic.\n- Some missing references/baselines. The N-BEATS [3] is a popular model closely related to the proposed technique, and N-HiTS [1] is an extension of the N-BEATS tailored for long-horizon forecasting.\n\n[1] N-HiTS: Neural Hierarchical Interpolation for Time Series Forecasting, https://arxiv.org/pdf/2201.12886.pdf\n[2] FreDo: Frequency Domain-based Long-Term Time Series Forecasting, https://arxiv.org/pdf/2205.12301.pdf\n[3] N-BEATS: Neural basis expansion analysis for interpretable time series forecasting, https://arxiv.org/pdf/1905.10437.pdf - Why is the performance of FiLM different in ablation tables (table 2, 5) than from the main result table?\n- Can you provide an explanation on how the model learns relations between time series or is the model univariate?\n- How did you choose the hyperparameters of the model for the main table, in particular, M and N? The sensitivity analysis in appendix F shows some approximate optimal rules for N. However, these results are performed on the test set. Ideally, hyperparameters should be chosen based on the validation set (for example on a small grid over different values of M, N, etc).\n Limitations are discussed, and I do not identify potential negative social impacts." ]
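For readers following the shape discussion in the Q2a rebuttal above (an input of shape [L, D] mapped to memory coefficients of shape [N, L, D]), here is a naive, self-contained sketch of such a Legendre projection. It recomputes each causal-window projection directly and is meant only as an illustration under assumptions; it is not the paper's optimized implementation, and all names are hypothetical.

```python
# Naive "LPU"-style projection: for every step t, project the length-W history
# window onto the first N Legendre polynomials, so an input [L, D] becomes
# coefficients [N, L, D].
import numpy as np
from numpy.polynomial import legendre as leg

def lpu_project(x, N, W):
    """x: array [L, D]; returns coefficients [N, L, D]."""
    L, D = x.shape
    out = np.zeros((N, L, D))
    # Sample points of the window mapped to [-1, 1], the Legendre domain.
    s = np.linspace(-1.0, 1.0, W)
    # Vandermonde-style basis matrix [W, N], one column per polynomial degree.
    basis = leg.legvander(s, N - 1)                    # P_0 .. P_{N-1}
    # Normalization: the integral of P_n^2 over [-1, 1] is 2 / (2n + 1).
    norm = (2.0 * np.arange(N) + 1.0) / 2.0
    for t in range(L):
        window = x[max(0, t - W + 1): t + 1]           # causal window
        pad = np.zeros((W - len(window), D))
        w = np.concatenate([pad, window], axis=0)      # left-pad short windows
        # Discrete approximation of <f, P_n> under a uniform window measure.
        coeff = (basis.T @ w) * (2.0 / W)              # [N, D]
        out[:, t] = coeff * norm[:, None]
    return out

mem = lpu_project(np.random.randn(96, 7), N=8, W=24)   # -> shape (8, 96, 7)
```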
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "2PBHualu9cZ", "7pHU1Rq4Pt3", "ss23jqooBgL", "7pHU1Rq4Pt3", "96kGH0vwoK7", "ZSLt0-1jUCs", "bdXdn2TCeL", "CiHej5tkjtL", "nips_2022_zTQdHSQUQWc", "nips_2022_zTQdHSQUQWc", "nips_2022_zTQdHSQUQWc" ]
nips_2022_3MZnNARib5
SAPipe: Staleness-Aware Pipeline for Data Parallel DNN Training
Data parallelism across multiple machines is widely adopted for accelerating distributed deep learning, but it is hard to achieve linear speedup due to heavy communication. In this paper, we propose SAPipe, a performant system that pushes the training speed of data parallelism to its fullest extent. By introducing partial staleness, communication overlaps with computation with minimal staleness in SAPipe. To mitigate additional problems incurred by staleness, SAPipe adopts staleness compensation techniques including weight prediction and delay compensation, with provably lower error bounds. Additionally, SAPipe presents an algorithm-system co-design with runtime optimization to minimize the system overhead of the staleness training pipeline and staleness compensation. We have implemented SAPipe in the BytePS framework, compatible with both TensorFlow and PyTorch. Our experiments show that SAPipe achieves up to 157% speedups over BytePS (non-stale), and outperforms PipeSGD in accuracy by up to 13.7%.
Accept
This paper proposes a new algorithm to speed up data-parallel distributed training, focused on mitigating staleness-induced issues that arise when limiting communication between nodes. All reviewers and myself agree this is a worthwhile contribution, which is backed by both convincing empirical and theoretical results. I consider that the potential novelty concerns that were raised in initial reviews have been addressed by the authors. The main remaining concerns are related to the limitations of the proposed method, that comes with some trade-offs and may not apply to all situations. I believe the authors have adequately answered these concerns by being upfront about these limitations during the discussion period, and I encourage them to make sure this is also clear in the final version of the paper. In spite of these limitations, I believe the novelty and significance of this work meet the bar for acceptance at NeurIPS, since speeding up distributed computations is a very relevant and challenging problem in modern deep learning.
train
[ "wp9hUUHRggo", "W5X2HMWyl8o", "MDyh08MKiqD", "5coHNxf2M98", "Xw8XA41UHYk", "9HLYAPbik5C", "aVTCU2HyZAV", "iDXkG3LgeAv", "G98JD3T5ApH", "Pusx9gIXlTG", "z_UFq0MAZO8", "eWfRcdWc7QG", "hEajH28cCFX" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response and suggestion. \n\n1. why does SAPipe perform (marginally) better than fully synchronous for some tasks (VGG-16 and ResNet-50)? \n\n **A**: We do observe that in a few cases, SAPipe performs slightly better than fully synchronous SGD. For example, in Table 2, SAPipe-WP-OPT1 has a little bit better accuracy than BytePS when training VGG-16 on CIFAR-10. We also observe that this phenomenon happens when we use the local gradients in weight prediction (SAPipe-WP-OPT1 and SAPipe-WP-OPT3). Our intuition is that, by using the local gradients, SAPipe-WP may share some advantages with local SGD, which also uses local gradients. There are some recent research indicating that local SGD can perform better than mini-batch SGD under certain conditions (please refer to https://arxiv.org/pdf/1808.07217.pdf and http://proceedings.mlr.press/v119/woodworth20a/woodworth20a.pdf for more details). However, further research with theoretical analysis and empirical evaluation will be needed in our future work to fully understand it. \n\n2. Please add more yticks when necessary (e.g., ResNet-50 in Figure 9). I think adding some discussion to the final version of the paper would be good (especially the part about when SAPipe does well and when it doesn't). The additional experiments are appreciated.\n\n **A**: Thanks for the suggestion. Since the rebuttal revision is closed, we will make revision in the final version as suggested by the reviewer.\n\n", " Thanks for the responses. I will update my score.\nI read Figure 9 wrong. However, why does SAPipe perform (marginally) better than fully synchronous for some tasks (VGG-16 and ResNet-50)? Also, this is a nit, but please add more yticks when necessary (e.g., ResNet-50 in Figure 9).\nI think adding some discussion to the final version of the paper would be good (especially the part about when SAPipe does well and when it doesn't). The additional experiments are appreciated.", " 1. I did not see the same results when trying to scale GPT-2 and ResNet-50 in the past on comparable hardware on AWS. What was the setup? To confirm, the total batch size is reported per-GPU batch size (e.g., 128 images for ResNet-50) times the number of GPUs used?\n\n **A**: Our setup can be found at Section 5.1 (Line 258). A lot of factors can affect the scalability. Even with same network bandwidth, different network topology can still result in different scalability. We run our experiments on our own clusters, so the results could be different from those on AWS. \n\n Yes, the total batch size is reported per-GPU batch size times the number of GPUs used. \n\n2. When I asked for the effect of hardware on performance, I was looking for qualitative estimates rather than new experiments. I was asking more about the effect of models on throughput. For example, will your technique work better for VGG-16 or ResNet-50, which have different computation-communication profiles when scaling using data parallelism?\n\n **A**: The benefit of using PipeSGD/SAPipe mainly depends on the communication-to-computation ratio, which is affected by many factors, such as the computation power, the network bandwidth, and the model sizes. PipeSGD/SAPipe is suitable for hardware deployment with roughly comparable computation and communication time. 
If the communication time is too long, there will be limited overlapping space to reduce the communication overhead; if the computation time is too long, the communication overhead may be negligible or already covered by the forward and backward pass. \n\n SAPipe achieves larger improvements on models with relatively higher (but no greater than 1) communication-to-computation ratios: 57% for VGG16 and 31% for ResNet50, respectively. \n\n4. For the scaling experiments, doesn't it make more sense to fix the global batch size and then change the per-GPU microbatch size and degree of gradient accumulation accordingly, since the batch size affects semantics. Having said that, 80 tokens per GPU seems very small.\n\n **A**: Our goal is to speed up training throughput with more computation resources, while fixing the global batch size mainly benefits setups with a small number of GPUs. We follow the previous related work to set up the scaling experiments (fixing the per-GPU micro-batch size), e.g., ZeRO-Offload [https://www.usenix.org/conference/atc21/presentation/ren-jie] and BytePS [https://www.usenix.org/system/files/osdi20-jiang.pdf]. \n\n For GPT-2, we use the default hyper-parameters in the examples of BytePS [https://github.com/byteps/examples]. GPT-2 is a large model, and out-of-memory issues occur on the GPU when we attempt to increase the batch size. \n\n5. Can I use SAPipe for any model and expect superior time-to-accuracy compared to synchronous methods?\n\n **A**: No, SAPipe has some requirements for superior performance: \n\n a) Our algorithm for selecting partial staleness can only be used in sequential models, though the staleness compensation and runtime optimizations can be directly applied to a more complicated DAG model. \n\n b) The throughput improvement depends on the communication-to-computation ratios. When training models with relatively higher (but no greater than 1) communication-to-computation ratios, SAPipe can achieve higher speedups than the baseline due to higher overlapping potential. \n\n c) Theoretically, many factors can affect accuracy, which depends on the properties of the datasets and model structures. Our methods can achieve higher performance under certain conditions, e.g., low gradient variance, low gradient diversity, and good smoothness. We discuss the details in Remark 7 of Section 4.2 (Line 248).\n\n6. Why is Horovod slightly different from BytePS on the accuracy vs. epochs graph?\n\n **A**: Horovod does not appear in the accuracy vs. epochs graph (Figure 9 in the appendix), since both Horovod and BytePS do not modify the training algorithm. Could you please inform us which figure you are referring to? \n\n7. As reviewer 9R6e points out, I would also like to see comparisons to PyTorch's DDP.\n\n **A**: In our implementation, we are using the updated version of BytePS, which includes optimizations similar to those in the state-of-the-art version of PyTorch. In some of our earlier investigations, we found that BytePS is still better than the state-of-the-art version of PyTorch. For example (throughput of training the wav2vec model, batch size 4500):\n\n | #GPU | 1 | 8 | 16 | 32 |\n |-------------|-------|-------|--------|--------|\n | PyTorch-DDP | 12429 | 90059 | 123387 | 245256 |\n | BytePS | 12468 | 91520 | 143717 | 311123 |\n\n Hence, we use BytePS as our baseline. ", " 1. I am not sure this state-of-the-art version of PyTorch was included in the comparison reported in the BytePS paper. 
If the author confirmed similar optimizations also exists in BytePS, I would agree that the comparison with BytePS would also support the statement in the empirical study.\n\n\n **A**: Yes, the PyTorch version in the original BytePS paper does not include these system optimizations when comparing with BytePS (we took some time to contact the author to confirm that). However, in our implementation, we are using the updated version of BytePS, which includes similar optimizations of state-of-the-art version of PyTorch. In some of our earlier investigations, we found that BytePS is still better than state-of-the-art version of PyTorch. For example (throughput of training wav2vec model, batch-size 4500):\n\n | #GPU | 1 | 8 | 16 | 32 |\n |-------------|-------|-------|--------|--------|\n | PyTorch-DDP | 12429 | 90059 | 123387 | 245256 |\n | BytePS | 12468 | 91520 | 143717 | 311123 |\n\n Hence, we use BytePS as our baseline. \n\n2. To be more clear about the questions, I was trying to ask if the proposed optimization can be adopted to DNN represented by a more complicated DAG (instead of simply linearly stacked blocks, such as VGG, ResNet40, GPT2)? \n\n\n **A**: Yes, we implied that our work focuses on the sequential models (Line 100), and this is indeed our limitation. Our staleness compensation methods and runtime optimizations can be directly applied to a more complicated DAG model. But the method of selecting partial staleness cannot be directly used in DAG models. In the appendix, we discuss how to extend the partial staleness method for complicated DAG models (see Appendix D). ", " Thank you for the detailed responses to my questions. Some comments / follow-up questions below.\n- Thank you for pointing out the differences compared to staleness mitigation techniques used in pipeline parallelism. The differences make sense to me.\n- I did not see the same results when trying to scale GPT-2 and ResNet-50 in the past on comparable hardware on AWS. What was the setup? To confirm, the total batch size is reported per-GPU batch size (e.g., 128 images for ResNet-50) times the number of GPUs used?\n- When I asked for the effect of hardware on performance, I was looking for qualitative estimates rather than new experiments.\n- I was asking more about the effect of models on throughput. For example, will your technique work better for VGG-16 or ResNet-50, which have different computation-communication profiles when scaling using data parallelism?\n- For the scaling experiments, doesn't it make more sense to fix the global batch size and then change the per-GPU microbatch size and degree of gradient accumulation accordingly, since the batch size affects semantics. Having said that, 80 tokens per GPU seems very small.\n- Thank you for the pointers to the comparisons to SAPipe without staleness mitigation (PipeSGD).\n- Can I use SAPipe for any model and expect superior time-to-accuracy compared to synchronous methods?\n- Why is Horovod slightly different from BytePS on the accuracy vs. epochs graph?\n- As reviewer 9R6e points out, I would also like to see comparisons to PyTorch's DDP.", " Thank the author for their detailed response! Here are some follow-ups:\n\n1. Thanks for the clarification!\n\n2. 
I think recent version of PyTorch-DDP includes a series of system optimization, including tensor flattening and bucketing, overlapping communication and computation, etc., which was report at VLDB 2020 (https://www.vldb.org/pvldb/vol13/p3005-li.pdf), I am not sure this state-of-the-art version of PyTorch was included in the comparison reported in the BytePS paper. If the author confirmed similar optimizations also exists in BytePS, I would agree that the comparison with BytePS would also support the statement in the empirical study. \n\n3. Sorry for triggering some confusion. To be more clear about the questions, I was trying to ask if the proposed optimization can be adopted to DNN represented by a more complicated DAG (instead of simply linearly stacked blocks, such as VGG, ResNet40, GPT2)? GNN is just one example, Perhaps U-net is a better example for this question. Lastly, I want to emphasize that, in my opinion, clear statement of the applicable scope of a methodology would be considered as a strength instead of a weakness.", " **Q**: Are the ideas of delay compensation and weight prediction new? Don't these papers propose similar ideas for pipeline parallelism without flushes (which shouldn't make a difference): PipeMare [https://proceedings.mlsys.org/paper/2021/file/6c8349cc7260ae62e3b1396831a8398f-Paper.pdf] and Kosson et al. [https://arxiv.org/pdf/2003.11666.pdf].\n\n **A**: The algorithms proposed in this paper are novel in several aspects as follows:\n 1. This paper focuses on data parallelism, while PipeMare/Kosson et al. uses model parallellism. Applying staleness mitigation to data parallelism is novel and different from previous work. Note that the distributed training mechanism used in model parallelism papers (PipeMare/Kosson et al.) and data parallelism papers (PipeSGD/SAPipe) are totally different. In brief, we summarize the main differences as follows: For PipeMare/Kosson et al: 1) model parallelism; 2) in the same batch, different micro/mini-batches use different versions of model parameters with different staleness for forward-backward; 3) for the same micro/mini-batch, the model parameters used in forward pass and backward pass are different due to the different staleness; 4) weight prediction is used to close the gap between different versions of model parameters in forward pass and backward pass; 5) no theoretical results for convergence. While for SAPipe: 1) data parallelism; 2) in the same batch, different mini-batches or workers use the same version of model parameters, with exactly the same fixed staleness of 1; 3) same model parameters are used in forward pass and backward pass; 4) weight prediction/delay compensation is used to mitigate the fixed staleness of 1; 5) theoretical results for convergence are provided (and note that our convergence proof could not be applied to PipeMare/Kosson et al., due to the difference between data parallelism and model parallelism). In a nutshell, SAPipe has its own unique properties, which results in novel algorithms using delay compensation and weight prediction, and new challenges in the convergence proof. Furthermore, some options such as weight prediction with local gradients (SAPipe-WP-OPT1/OPT3) are infeasible for PipeMare/Kosson et al.\n\n 2. Weight prediction (WP) is a general concept to mitigate weight inconsistency, and this paper provides several WP methods to solve the staleness issue on data parallelism. 
SAPipe-WP-OPT2 is similar to the linear weight prediction used in previous work, while SAPipe-WP-OPT1 and SAPipe-WP-OPT3 are absent from PipeMare/Kosson et al. and achieve better results in our experiments. \n\n 3. The theoretical analysis of convergence is one of our main contributions, while there is no convergence analysis in PipeMare/Kosson et al., regardless of the difference between data parallelism and model parallelism. Furthermore, it is non-trivial to prove convergence when adding different options of weight prediction to SAPipe. For example, in SAPipe-WP-OPT2, the use of weight prediction with the latest synchronized gradient causes a recursive error term (Lines 524-529 in the Appendix). Such a recursive error term doesn't exist in the previous work on PipeSGD or asynchronous SGD, and handling it is the key to showing that SAPipe-WP has a lower error bound compared to vanilla PipeSGD/SAPipe without staleness mitigation under certain conditions, as we have discussed in Remarks 3, 4, 5, 6 in Section 4.2.\n\n 4. Additionally, we modify the delay compensation (DC) method using the full-matrix form to avoid the additional error caused by the diagonal approximation, compared to the original paper [33], as we explained in Section 3.2, Lines 130-140. Furthermore, the corresponding theoretical analysis of convergence is also different from the original paper. In our proof, we remove an unreasonable and impractical assumption from the original DC paper, which manually sets a "search region" for the model parameters: $\\| x - x' \\|^2_2 \leq \pi^2$, where $x$ and $x'$ are any two versions of model parameters in the training sequence $\\{ x_t, t \in [T] \\}$. Abandoning this assumption incurs extra difficulty in our convergence proof. To overcome this challenge, we adopt a new technique, where the error bound is established by solving a sequence of recursive inequalities (Lines 498-500 in the Appendix). Such a new proof procedure cannot be found in previous work.
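To make the two mitigation ideas above concrete, here is an illustrative single-tensor sketch in our simplified notation (the exact OPT1/OPT2/OPT3 variants and the full-matrix DC form analyzed in the paper differ in detail; names and constants below are illustrative, not our implementation):

```python
import numpy as np

def predict_weights(w, g_sync_latest, lr=0.1):
    # Weight prediction (linear, OPT2-flavored): estimate the weights one
    # step ahead using the latest synchronized gradient, closing the
    # 1-step gap between the stale copy and the up-to-date parameters.
    return w - lr * g_sync_latest

def compensate_gradient(g_stale, w_now, w_stale, lam=0.5):
    # Delay compensation, diagonal approximation as in the original DC
    # paper [33] (our full-matrix form differs): first-order correction
    # of a gradient computed at w_stale toward the current weights w_now.
    return g_stale + lam * g_stale * g_stale * (w_now - w_stale)

w = np.ones(3); g = np.full(3, 0.2)
print(predict_weights(w, g), compensate_gradient(g, w, w - 0.1))
```

The point is only the shape of the two corrections; the convergence claims above concern the exact variants analyzed in the paper.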
", " 1. Why does ResNet-50 scale worse than GPT-2?\n\n **A**: ResNet-50 and GPT-2 have similarly poor scaling ratios (actual throughput with 64 GPUs divided by 8 times the throughput with 8 GPUs) in our experiments, 0.64 and 0.63, respectively (see Figures 3(b) and 3(c)). Though ResNet-50 involves less communication than GPT-2, inter-server communication still becomes the bottleneck when scaling to more than 32 workers.\n\n2. Are the ideas of delay compensation and weight prediction new? Related work: PipeMare and Kosson et al.\n\n **A**: The algorithms proposed in this paper are novel in several aspects, as follows:\n\n - This paper focuses on data parallelism, while PipeMare/Kosson et al. use model parallelism. Applying staleness mitigation to data parallelism is novel and different from previous work.\n\n - Weight prediction (WP) is a general concept to mitigate weight inconsistency, and this paper provides several WP methods to solve the staleness issue in data parallelism. SAPipe-WP-OPT2 is similar to the linear weight prediction used in previous work, while SAPipe-WP-OPT1 and SAPipe-WP-OPT3 are absent from PipeMare/Kosson et al. and achieve better results in our experiments.\n\n - The theoretical analysis of convergence is one of our main contributions, while there is no convergence analysis in PipeMare/Kosson et al. Furthermore, it is non-trivial to prove the convergence of SAPipe with the various WP options.\n\n - Additionally, we modify the delay compensation (DC) method using the full-matrix form to avoid the additional error caused by the diagonal approximation, compared to the original paper [33], as we explained in Section 3.2, Lines 130-140. Furthermore, the corresponding theoretical analysis of convergence is also different from the original paper.\n\n3. Why does the optimization problem at the top of page 4 have a runtime complexity of O(m^2)?\n\n **A**: For each k, evaluating the constraints requires O(m) summations, and k can take up to m values, yielding O(m^2) overall.\n\n4. What batch size was used?\n\n **A**: We specify the batch size for each model in Section 5.1 (Line 262).\n\n5. What is the effect of hardware on performance?\n\n **A**: SAPipe is suitable for hardware deployments where computation and communication times are roughly comparable. We only have one type of hardware, Tesla V100, in our lab; studying the effect of different hardware could be future work. \n\n6. What is the effect of the model on performance?\n\n **A**: The properties of models, e.g., the model structure and the number of parameters, affect the constant values in our assumptions, such as smoothness ($L$ in Assumption 1), gradient variance ($V_1$ in Assumption 2), gradient diversity ($\rho$ in Assumption 3), etc. However, studying the exact effect of the model properties on our assumptions is beyond the scope of this paper and could be future work.\n\n7. How much worse is SAPipe's convergence rate compared to synchronous DP?\n\n **A**: We have added figures with the convergence rate comparison (accuracy vs. #epochs) in Figure 9 in the appendix. As shown in Figure 9, in most cases, without the proposed staleness mitigation methods, vanilla PipeSGD shows significant regression in accuracy/perplexity compared to the non-stale baseline when the same number of iterations is executed. When staleness mitigation is used, SAPipe has the same convergence rate as synchronous DP ("SAPipe" in Figure 9 is the variant with the best choice of staleness mitigation method).\n\n8. How much do the proposed mitigation techniques help with convergence?\n\n **A**: PipeSGD is the 1-stale pipeline method without any mitigation techniques, and it noticeably impacts convergence, as shown in Table 2, Figure 4, and Figure 9 in the appendix.\n\n9. The authors did not discuss limitations of their work.\n\n **A**: One limitation of our work is that we do not identify a priori the best staleness mitigation option for different models and datasets. We previously gave a brief discussion of this limitation in Remark 7. Note that Remarks 3\~7 all mention that the proposed staleness mitigation methods (DC/WP-OPT1/OPT2/OPT3) have lower error compared to vanilla PipeSGD conditional on certain constant values that depend on the model and data. For example, Lines 224-225 state that better convergence of SAPipe-WP-OPT1 requires small gradient divergence; otherwise, vanilla PipeSGD would be better. In the revised version, we have also added this discussion in Lines 289-295 and 335-337.\n\n In Section 6 (related work), we also mention that we haven't tried to combine pipelined training with gradient compression. 
These two methods are orthogonal and could easily be combined; not having done so is another limitation of this work.\n\n For optimizers: since our proposed staleness mitigation methods are applied directly to the gradients before they enter the optimizer, they are compatible with most popular first-order methods, such as SGD and Adam, but not with second-order methods.", " 1. Figure 1 is a little confusing; for the default pipeline part, it seems that v3 begins before the end of b3 visually, which seems inaccurate without mentioning any other potential optimization, e.g., communicating at a finer granularity.\n\n **A**: Thanks for the comment. The arrows denote dependencies between two operators. We have specified this in the caption of Figure 2 to avoid confusion (Line 114).\n\n2. PyTorch-DDP should be included as a baseline, because it is a very popular data-parallel implementation and provides efficient system optimizations such as bucketing and communication overlapping. \n\n **A**: We use BytePS as the non-stale baseline because it has higher training throughput than PyTorch-DDP (referred to as the state-of-the-art all-reduce implementation), with more communication optimization techniques, as shown in the BytePS paper [https://www.usenix.org/system/files/osdi20-jiang.pdf]. \n\n3. Perhaps some discussion about the scope of models should be considered; for example, can this approach be used for graph neural network training, where the layers in the model are not linearly stacked?\n\n **A**: Our method could easily be extended to GNNs, but this may be unnecessary. Since GNNs are shallow, with far fewer parameters than DNNs, their bottleneck is usually data preprocessing (computational graph sampling and feature retrieving), and they have negligible gradient synchronization overhead. ", " 1. Using VGG16 on CIFAR-10 for the ablative study is not nearly as interesting as if they had used a more modern and interesting model (e.g., ResNet or GPT-2). \n\n **A**: ResNet and GPT-2 are also shown in the ablation study in Figure 5(b). An ablative study of the partial staleness experiments with the ResNet model can be found in Figure 10(b) in the appendix. \n\n2. The later part of the paper is quite rushed, with minimal explanation and analysis of the results. For example, Figure 5 has no analysis in the caption and only a bit in the text. \n\n **A**: We have added detailed descriptions of the experimental results in the appendix (E.2, Line 641). \n\n3. None of the proposed optimization schemes performed the best on all benchmarks. This can pose a significant challenge to future application if a researcher would have to conduct an exhaustive test of all optimization methods to determine which may produce the best results for a new application.\n\n **A**: Yes, the performance of the staleness mitigation methods varies across models and datasets. In the theoretical analysis, we have also explained that the best choice of mitigation method depends on the choice of hyperparameters and some unknown constant values such as smoothness, gradient diversity, and variance, which depend on the data and model, as discussed in Remark 7, Lines 251-255. This is a limitation and future work of our paper. In the revised version, we have also added this discussion in Lines 293-295 and Lines 335-337. \n\n4. Figure 1 is introduced early in the text, prior to any significant discussion of partial stale gradient updates.\n\n **A**: Thanks for the comment. We have moved Figure 1 (DNN training pipeline) to Section 3 (Line 114), which is now Figure 2.\n\n5. 
Line 6 of Algorithm 1 is strange. For it to say "same as t > 1", it should be clarified that there is really only a conditional for the PipeSGD case.\n\n **A**: Yes, this conditional only applies in the PipeSGD case. This line is just for comparing the DNN training pipeline and PipeSGD, so that the difference between normal distributed training and PipeSGD is clearer to readers. In an implementation, distributed training doesn't need such a conditional.\n\n6. For subfigure 2a, the fact that the parts of the staleness-aware system are all in the lower box, with no real relationships shown between them and the runtime parts of the system, is not helpful to the reader.\n\n **A**: Thanks for the comment. We have modified this figure and added relationships between our algorithm design and runtime optimizations; it is now Figure 1(a) in the revised version.\n\n7. For 2b, it would be better to represent the communication pattern with some sort of parallel timeline. Again, with the cyclic dependencies of iterative training, the interplay of the communication isn't clearly conveyed by 2b.\n\n **A**: The text below this subfigure (now Figure 1(b) in the revised version) shows the timeline.\n\n8. In the equation below line 111, it is not clear what u is, as it doesn't appear in the table.\n\n **A**: $u$ is the duration of the forward operator, which can be found in Table 1 (in the 3rd row, left column).\n\n9. On line 119, a figure that clearly shows how partial staleness / gradient updates work would improve the discussion.\n\n **A**: Thanks for the suggestion. We have moved the figure of the training pipeline with partial staleness into this section (Line 114, Figure 2).\n\n10. Replace the Figure 5 analysis with a more challenging example for the ablation study, e.g., ResNet on ImageNet or one of the transformers, or both. \n\n **A**: ResNet and GPT-2 are also shown in the ablation study in Figure 5(b). An ablative study of the partial staleness experiments with the ResNet model can be found in Figure 10(b) in the appendix. \n\n11. Figures 3-5 are extremely small. \n\n **A**: Magnified figures can be found in the appendix (Figures 7-10).", " One of the challenges with scaling data-parallel training of neural networks is that the natural dependencies between the forward propagation, backward propagation, and gradient updates limit the benefits of parallel execution. A common solution in the state of the art is to use stale gradients to allow the gradient updates to overlap with the data motion between parallel ranks. However, the use of stale gradients typically leads to a lower quality of solution in the trained network. This paper presents several methods for addressing the staleness of gradients, using three techniques: partial staleness for "earlier" layers of the network, delay gradient compensation, and weight prediction. Using these techniques, they demonstrate that they can achieve runtime performance similar to the SOTA using stale gradients, but with a quality that nearly matches a standard SGD training approach. The paper also provides both a convergence analysis and an experimental analysis with both vision and language models demonstrating the impact of their proposed techniques. This paper leverages multiple advancements in the community and integrates them into a common composition and framework, demonstrating that, in aggregation, they can provide an approach that is performant in terms of both the speed and the quality of the trained network. 
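To illustrate the scheduling idea behind the first of these techniques, here is a self-contained toy (single worker, two "layers"; the buffered assignment stands in for an asynchronous all-reduce, and all names are illustrative rather than the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
w = [rng.normal(size=4), rng.normal(size=4)]  # two "layers" of weights
stale_g = np.zeros(4)                         # buffered 1-step-stale gradient
LR = 0.1

for step in range(100):
    target = rng.normal(size=4)
    g = [wi - target for wi in w]             # toy gradients (quadratic loss)
    # "Earlier" layer: apply the gradient synchronized during the PREVIOUS
    # step, then hand this step's gradient off (stand-in for an async
    # all-reduce), so its communication overlaps with subsequent compute.
    w[0] -= LR * stale_g
    stale_g = g[0]
    # Later layer: synchronous update with the fresh gradient.
    w[1] -= LR * g[1]
```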
Overall, they clearly indicate where the derived ideas come from, how they were used previously, and how they are being integrated into a combined algorithm. The explanation of the algorithm and its visualization via diagrams could use some improvement. Please see the next section for specific comments; the reader is left with a general understanding of how the algorithm executes at runtime, but better figures could significantly improve this.\n\nThe evaluation of the algorithm is mixed. Overall, it is good that the authors include both vision and language models; however, using VGG16 on CIFAR-10 for the ablative study is not nearly as interesting as if they had used a more modern and interesting model (e.g., ResNet or GPT-2). Additionally, the later part of the paper is quite rushed, with minimal explanation and analysis of the results. For example, Figure 5 has no analysis in the caption and only a bit in the text. On a similar note, Figures 3-5 are extremely small and clearly not really meant for a reader to actually read.\n\nOne key challenge for this work is that none of the proposed optimization schemes performed the best on all benchmarks, and thus it is not clear a priori which technique (Opt1 - 3) should be used when. This can pose a significant challenge to future application if a researcher would have to conduct an exhaustive test of all optimization methods to determine which may produce the best results for a new application.\n Figure 1 is introduced early in the text, prior to any significant discussion of partial stale gradient updates, yet it is illustrating partially stale gradients. The figure, text, and caption should be refined to indicate that it is not showing the state of the art, but is actually part of the proposed method.\n\nThe implementation of Algorithm 1 is strange. Specifically, line 6 is weird for the base case. For it to say "same as t > 1", it should be clarified that there is really only a conditional for the PipeSGD case.\n\nFigure 2 doesn't really convey what the authors probably intended for explaining the algorithm and does not contribute much as is. For subfigure 2a, the fact that the parts of the staleness-aware system are all in the lower box, with no real relationships shown between them and the runtime parts of the system, is not helpful to the reader. Furthermore, for 2b, it would be better to represent the communication pattern with some sort of parallel timeline. Again, with the cyclic dependencies of iterative training, the interplay of the communication isn't clearly conveyed by 2b.\n\nIn the equation below line 111, it is not clear what u is, as it doesn't appear in the table.\n\nOn line 119, a figure that clearly shows how partial staleness / gradient updates work would improve the discussion.\n\nReplace the Figure 5 analysis with a more challenging example for the ablation study, e.g., ResNet on ImageNet or one of the transformers, or both. The biggest issue is that, of the proposed techniques, on the use cases shown, one combination does not always win. As a result, it is not clear, if these approaches were to be applied to a new problem area, which set of the staleness compensation techniques should be used. Without some discussion of this, it is not clear how to leverage this work going forward without actually performing something of this analysis. 
Furthermore, the section on sensitivity analysis does not address this.", " This paper presents SAPipe, a system to support efficient data parallelism, where communication is effectively hidden within computation using adaptive staleness and corresponding compensations. Both theoretical analysis and an empirical study are conducted to verify the effectiveness of the proposed solution. Strengths:\n- The idea of improving PipeSGD with adaptive partial staleness is simple but effective.\n\n- Theoretical analysis is provided to justify the design of the algorithm. \n\n- The experiment section is solid, and the performance boost is significant. \n\nWeaknesses:\n- Some writing and illustration can be further polished. For example, Figure 1 is a little confusing; for the default pipeline part, it seems that v3 begins before the end of b3 visually, which seems inaccurate without mentioning any other potential optimization, e.g., communicating at a finer granularity. \n\n- Some reasonable baseline approaches are missing. For example, PyTorch-DDP should be included because it is a very popular data-parallel implementation and provides efficient system optimizations such as bucketing and communication overlapping. \n N.A. Perhaps some discussion about the scope of models should be considered; for example, can this approach be used for graph neural network training, where the layers in the model are not linearly stacked?", " Synchronous data parallelism is widely adopted, but can suffer from poor scaling due to excessive communication overheads. This paper proposes using stale weight updates (weight gradients computed using weights that are not the latest) to better overlap computation and communication. It discusses various mitigation techniques to get bounded-stale data parallelism to work well in practice. #### Strengths\n- The paper offers an alternative to synchronous data parallelism in situations where the network is not fast enough to hide the cost of communication.\n- The paper is well written and clear.\n- Distributed training performance is an important problem in Machine Learning Systems, and this paper proposes one more possible solution to the problem. \n\n#### Weaknesses\n- Evaluation is not entirely convincing: for example, Figure 4 shows accuracy vs. time, but I would have also liked to see accuracy vs. iteration to see the impact on convergence speed.\n- Baselines seem iffy: why does ResNet-50 scale worse than GPT-2? I expect the convolutional layers in a ResNet-50 model to be much more amenable to data-parallel-style communication compared to a GPT-2 model with a lot of linear layers in the attention layers. VGG-16 doesn't scale well exactly for these reasons.\n - Are the ideas of delay compensation and weight prediction new? Don't these papers propose similar ideas for pipeline parallelism without flushes (which shouldn't make a difference): PipeMare [https://proceedings.mlsys.org/paper/2021/file/6c8349cc7260ae62e3b1396831a8398f-Paper.pdf] and Kosson et al. [https://arxiv.org/pdf/2003.11666.pdf].\n- Why does the optimization problem at the top of page 4 have a runtime complexity of O(m^2)?\n- What batch size was used? Communication overhead is smaller with larger batch sizes.\n- What is the effect of hardware (compute accelerator and network) on performance?\n- What is the effect of the model on performance?\n- How much worse convergence rate (accuracy vs. 
number of iterations) does SAPipe have compared to synchronous DP (BytePS or Horovod)?\n- How much do the proposed mitigation techniques (delay compensation and weight prediction) help with convergence? What if I don't use any mitigation techniques? How badly does this do? The authors did not discuss the limitations of their work. Some discussion of situations where they expect SAPipe not to be a useful solution would be helpful (e.g., particular types of models, hardware deployments, optimizers, or other situations where their theoretical analysis breaks down).
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "W5X2HMWyl8o", "MDyh08MKiqD", "Xw8XA41UHYk", "9HLYAPbik5C", "iDXkG3LgeAv", "G98JD3T5ApH", "hEajH28cCFX", "hEajH28cCFX", "eWfRcdWc7QG", "z_UFq0MAZO8", "nips_2022_3MZnNARib5", "nips_2022_3MZnNARib5", "nips_2022_3MZnNARib5" ]
nips_2022_6hzH8pohyPY
Batch-Size Independent Regret Bounds for Combinatorial Semi-Bandits with Probabilistically Triggered Arms or Independent Arms
In this paper, we study combinatorial semi-bandits (CMAB) and focus on reducing the dependency on the batch size $K$ in the regret bound, where $K$ is the total number of arms that can be pulled or triggered in each round. First, for the setting of CMAB with probabilistically triggered arms (CMAB-T), we discover a novel (directional) triggering probability and variance modulated (TPVM) condition that can replace the previously used smoothness condition for various applications, such as cascading bandits, online network exploration, and online influence maximization. Under this new condition, we propose a BCUCB-T algorithm with variance-aware confidence intervals and conduct a regret analysis that reduces the $O(K)$ factor to $O(\log K)$ or $O(\log^2 K)$ in the regret bound, significantly improving the regret bounds for the above applications. Second, for the setting of non-triggering CMAB with independent arms, we propose a SESCB algorithm that leverages the non-triggering version of the TPVM condition and completely removes the dependency on $K$ in the leading regret term. As a valuable by-product, the regret analysis used in this paper improves several existing results by a factor of $O(\log K)$. Finally, experimental evaluations show the superior performance of our algorithms compared with benchmark algorithms in different applications.
Accept
We thank the authors for their submission. The paper studies combinatorial multi-armed bandits with probabilistically triggered arms. This is an MAB setting in which, at each round, the learner chooses a subset of the arms and obtains a reward that is some function of the expected rewards of the chosen arms. In addition, the learner only observes feedback on a random subset of her chosen arms (the triggered arms). The paper relaxes a smoothness assumption made in previous work and further improves the dependence on K in the regret bound, where K is the batch size (the maximum number of triggered arms). The authors provide computationally efficient algorithms based on a Bernstein concentration inequality, facilitating the improved bounds. The paper is well written and organized, and the theoretical results are sound.
test
[ "_VghkKDIDN", "7Gjv3zvTLS", "7qo0wvfkwrF", "OC06JGccak", "tIJnQp3RQgBS", "9zSpvcBfSFd", "ET6xAcroMFx", "3riG_pFDWbI", "vwuK5VKM1ma", "guea8oQS_V5", "JQioC-fcJPp" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer,\n\nWe wonder if our response has addressed your question about the $(\\alpha,\\beta)$-approximation regret and the experiments. We are happy to have a further discussion if you have more questions.", " Thank you for the response. This will be a good addition to the final version of the paper. ", " Thanks for the clarification. I read your detailed response and revisited the relevant papers. This addressed my concern and I updated my review accordingly.", " We thank the reviewer for the positive comments. We agree with the reviewer that our regret does in fact have an extra $\\log K$ term compared with [15], and we apologize for the confusion on this point. However, we believe it is reasonable to compare with [16] whose regret bound is $O(K\\sum_{i}\\frac{\\log T}{\\Delta_{i}})$, rather than with [15] whose regret bound is $O(\\sum_{i}\\frac{\\log T}{\\Delta_{i}})$. The reason is as follows.\n\nThe main difference between these two works lies in the set of feasible actions $\\mathcal{S}$.\nFor [15], $\\mathcal{S}$ is the collection of all permutations whose size equals to $K$ (i.e., a uniform matroid). In this case, the items in the feasible solutions are **exchangeable** (a critical property for matroids), i.e., $S - \\{e_1\\} + \\{e_2\\} \\in \\mathcal{S}$, for any $S \\in \\mathcal{S}, e_1, e_2 \\in [m]$. Based on the exchangeability property, [15] is able to define the item-wise sub-optimal gap $\\Delta_{e,e^*}$ in Eq. (3) of [15] and the critical event $G_{e,e^*,t}$ in Theorem 1 of [15]. With these definitions, their proof is basically bounding the number of times that each suboptimal item $e$ is chosen instead of any optimal item $e^*$, which yields a batch-size independent regret bound $O(\\sum_{i}\\frac{\\log T}{\\Delta_{i}})$.\n\nFor [16], however, $\\mathcal{S}$ (i.e., $\\Theta$ in [16]) consists of arbitrary feasible actions (perhaps with different sizes), e.g., $S \\in \\mathcal{S}$ could refer to any path that connects the source and the destination in network routing applications.\n[16] refers to cascading bandits with this kind of $\\mathcal{S}$ as \"combinatorial cascading bandits\", where items in the feasible actions may not be exchangeable (e.g., deleting an edge from a path and adding another edge may not be a valid path anymore). This means the item-wise definition of $\\Delta_{e,e^*}$ and $G_{e,e^*, t}$ no longer works. Instead of using the $\\Delta_{e,e^*}$, [16] uses another definition for the sub-optimal gap $\\Delta_{e,\\min}$ that is similar to our Definition 1, and applies a more general proof based on [17] that yields an extra $K$ factor. \n\nSimilar to [7], [16], and [17], our CMAB-T formulation focuses on the arbitrary $\\mathcal{S}$. In other words, our BCUCB-T algorithm and its analysis can deal with this general $\\mathcal{S}$ whose feasible actions are not exchangeable. So we believe that we should compare [16] with our result, and it is unfair to compare ours with [15] that explicitly assumes $\\mathcal{S}$ to be the uniform matroid, which enjoys the additional \"exchangeable property\".\nWe will add this discussion in our final version to make it clear.", " We thank the reviewer for raising this good question. The regret in Section 4 is the exact regret, i.e., $\\alpha=\\beta=1$, since our SESCB algorithm (Algorithm 2) enumerates all possible solutions and selects the optimal one. 
\n\nInspired by the reviewer's question, we can actually generalize our SESCB algorithm by allowing an $(\alpha, \beta)$-approximation oracle, similar to line 5 of our BCUCB-T algorithm. In particular, we can change lines 4-7 by defining $\bar{r}\_t(S)$ for all $S \in \mathcal{S}$ instead of explicitly computing them. We then treat $\bar{r}\_t(S)$ as a general set function, described by $2m$ parameters consisting of $m$ empirical means $\hat{\mu}\_{t-1,i}$ and $m$ counters $T\_{t,i}$, for $i \in [m]$. Now we assume an $(\alpha, \beta)$-approximation oracle $\bar{O}:[0,1]^m \times \mathbb{Z}^{m}\rightarrow \mathcal{S}$ which can produce\n\begin{align}\n S=\bar{O}(\hat{\boldsymbol{\mu}}\_{t-1}, \boldsymbol{T}\_{t}) \text{ s.t. } \text{Pr}\left[\bar{r}\_t(S)\ge \alpha \cdot \bar{r}\_t(\bar{S}^\*\_t)\right] \ge \beta,\n\end{align}\nwhere $\bar{S}^\*\_t=\arg\max\_{S\in \mathcal{S}} \bar{r}\_t(S)$.\nIn this case, we can improve the computational efficiency of SESCB when there exists an efficient oracle that can (approximately) optimize the set function $\bar{r}\_t(S)$ over $S \in \mathcal{S}$. For the analysis, it is straightforward to change Eq. (136)-(139) as follows,\n\begin{align}\n\Delta\_S &= \alpha r(S^*; \boldsymbol{\mu}) - r(S;\boldsymbol{\mu})\\\\\n&\le \alpha ( r(S^*;\hat{\boldsymbol{\mu}}\_{t-1}) + \rho\_t(S^*)) - r(S; \boldsymbol{\mu}) \\\\\n&\le \bar{r}\_t(S) - r(S; \boldsymbol{\mu})\\\\\n&= r(S;\hat{\boldsymbol{\mu}}\_{t-1}) + \rho_t(S) - r(S; \boldsymbol{\mu}) \\\\\n&\le 2\rho\_t(S),\n\end{align}\nwhere the first inequality is because of Eq. (135) over $S^*$, the second inequality is due to the $(\alpha, \beta)$-approximation oracle $\bar{O}$ mentioned above, and the last inequality is due to Eq. (135) over $S$.\nSince the last inequality remains exactly the same as Eq. (139) in the previous version, this change does not affect the later analysis. As for the $\beta$ part, we can apply a similar proof to that in Eq. (54) to bound the regret when the oracle fails by $(1-\beta)T\Delta_{\max}$, which is absorbed by our $(\alpha, \beta)$-approximate regret definition, and the final regret bound remains unchanged. The only difference is that we obtain an $(\alpha, \beta)$-approximate regret, instead of the exact regret given by the enumeration. \n\nFor the existence of such efficient oracles, we can give a concrete example for the PMC problem. Specifically, we are able to modify $\bar{r}\_t(S)$ in Algorithm 2 so that it can be proved to be a submodular function. We then use the submodular maximization technique to find an approximate solution to $\bar{r}\_t(S)$, which is essentially a greedy $(1-1/e,1)$-approximation oracle. Solving this problem is quite efficient and achieves the state-of-the-art $(1-1/e,1)$-approximation regret bound for the PMC bandit, removing the $O(\log K)$ factor. For more details, see our response to Reviewer uueH. \n\nWe will add the above clarification and discussion in the final version to make it clear.", " We thank the reviewer for mentioning three related works about variance-aware algorithms, as well as for the kind suggestions on our writing. For the related works, (A) is an interesting concurrent work that studies cascading bandits, which shares a similar variance-aware principle to ours.
(A) studies two settings: the tabular case, which overlaps with ours (though our work focuses on the slightly more general combinatorial cascading bandits [16]; see the discussion above this reply), and the linear contextual cascading bandit case. In the overlapping tabular case, our BCUCB-T achieves a matching regret bound when translated to a gap-independent regret bound, as in Appendix C.5.3. Interestingly, (A) also discusses the regret lower bound, which can show the tightness of our results as well. Moreover, our work is on a more general framework for CMAB-T, summarized by the TPVM condition, while (A) only focuses on cascading bandits.\nFor the linear contextual case studied by (A), we do not consider this setting in our work, and it will be an interesting future direction to see whether the TPVM condition can also be applied to handle linear contextual CMAB. For related works (B) and (C), we will add them to emphasize the timeliness of the current work, as suggested. For the writing, we will add more examples (e.g., combinatorial cascading bandits) in the proofs and discussions to illustrate the intuitions behind our definitions, assumptions, and proof techniques, which would be helpful for possible follow-up works.", " We thank the reviewer for the positive comments. The reviewer raises a concern about the computational efficiency of our SESCB algorithm, which can completely remove the $O(\log K)$ dependence. As we mentioned in line 318, the computational efficiency issue comes from the fact that SESCB enumerates all possible actions, which is the same issue experienced by other ESCB-type algorithms that also use the brute-force method, e.g., [8], [24].\n\nOne way to avoid the brute-force search is to introduce an $(\alpha,\beta)$-approximation oracle that could be efficient, as mentioned in our reply to the first question of reviewer HNeW. \nGenerally speaking, we can define $\bar{r}\_t(S)$ for all $S \in \mathcal{S}$ instead of explicitly computing them. We then treat $\bar{r}\_t(S)$ as a general set function. Now we can assume an $(\alpha, \beta)$-approximation oracle $\bar{O}$ which can efficiently produce $S \in \mathcal{S}$ such that $\Pr\left[\bar{r}\_t(S)\ge \alpha \cdot \bar{r}\_t(\bar{S}^\*\_t)\right] \ge \beta$, where $\bar{S}^*\_t=\arg\max\_{S\in \mathcal{S}} \bar{r}\_t(S)$. In this way, we can easily change the regret to the $(\alpha,\beta)$-approximate regret to trade off efficiency. \n\nFor the existence of such an oracle, the main difficulty lies in whether one can efficiently solve the optimization problem over a non-linear set function $r(S;\hat{\boldsymbol{\mu}}\_{t-1})$ plus another non-linear set function $\rho\_t(S)$.\nIn fact, we can find an efficient $(1-1/e, 1)$ greedy oracle when the reward function $r(S;\hat{\boldsymbol{\mu}}\_{t-1})$ and $\rho\_t(S)$ are both monotone submodular functions.\n\nTake the PMC problem, for example, for which $r(S;\boldsymbol{\mu})$ is monotone submodular. 
To make the interval $\rho\_t(S)=B\_v\sqrt{\sum\_{i \in S} \frac{C\_1}{T\_{t-1,i}}+ \max\left\\\\{8C\_1\sqrt{\sum\_{i \in S}\frac{\log(2|\mathcal{S}|T)}{T\_{t-1,i}^2}}, \frac{8C\_1\log(2|\mathcal{S}|T)}{T^{\min}\_{t-1, S}}\right\\\\}}$ submodular, we can change it to $\rho'\_t(S)=B\_v\sqrt{\sum\_{i \in S} \frac{C\_1}{T\_{t-1,i}}+ 8C\_1\sqrt{\sum\_{i \in S}\frac{\log(2|\mathcal{S}|T)}{T\_{t-1,i}^2}}+ \frac{8C\_1\log(2|\mathcal{S}|T)}{T^{\min}\_{t-1, S}} }$, where the $\max$ is replaced with a sum ($+$). We know that $g(f(S))$ is submodular if $f(S)$ is monotone submodular and $g$ is a non-decreasing concave function, so it suffices to show that the three terms within the (non-decreasing concave) square root in $\rho'\_t(S)$ are submodular. The first term is a modular function, the second term is the square root of a modular function, and the third term can be rewritten as $\max\_{i \in S}\frac{8C\_1\log(2|\mathcal{S}|T)}{T\_{t-1, i}}$, which is also monotone submodular. Now we can use the greedy oracle to maximize a new optimistic reward $\bar{r}\_t(S)=r(S;\hat{\boldsymbol{\mu}}\_{t-1})+\rho'\_t(S)$ in our SESCB algorithm. As for the final regret, using $\rho'\_t(S)$ instead of $\rho\_t(S)$ only worsens the final regret by a constant factor, since it only affects the analysis of Case 1 of Appendix D.2 by multiplying by a factor of two in line 1006 to deal with the larger $\rho'\_t$. Compared with Merlis and Mannor [22], which achieves the $(1-1/e, 1)$-approximate regret bound, our SESCB achieves the same $(1-1/e, 1)$-approximate regret bound but completely removes the $O(\log K)$ dependency. Moreover, our greedy oracle is efficient, with computational complexity $O(TKL)$, where $T$ is the total number of rounds, $K$ is the number of source nodes to be selected in each round, and $L$ is the total number of source nodes; this is much faster than our previous enumeration method. We will add the above discussion to improve the computational efficiency of SESCB.
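For concreteness, the greedy $(1-1/e,1)$ oracle can be sketched as follows (illustrative Python; `reward_ucb` stands for the modified optimistic reward $\bar{r}\_t(S)=r(S;\hat{\boldsymbol{\mu}}\_{t-1})+\rho'\_t(S)$ and is an assumed callable, not our exact implementation):

```python
def greedy_oracle(candidates, K, reward_ucb):
    """Standard greedy maximization of a monotone submodular set function:
    K rounds, each adding the element with the largest marginal gain.
    Guarantees a (1 - 1/e) approximation under a cardinality constraint."""
    S = set()
    for _ in range(K):
        best = max((c for c in candidates if c not in S),
                   key=lambda c: reward_ucb(S | {c}) - reward_ucb(S))
        S.add(best)
    return S
```

Each of the $K$ greedy rounds scans all $L$ candidate source nodes, which matches the $O(TKL)$ count above when one evaluation of `reward_ucb` is treated as unit cost.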
", " This paper considers the combinatorial semi-bandits problem under two different settings: (1) the probabilistically triggered arms setting, where the set of played arms can trigger rewards on other arms, and (2) the non-triggering setting with independent arms. For both of these settings, the authors improve the batch-size dependence of the regret compared to previous work by considering a new variance-based condition on the underlying reward distributions. The authors show that this condition is satisfied in many applications, such as cascading bandits, influence maximization on DAGs, etc., and leads to significant improvements in the regret achievable for these applications. Strengths: The paper provides a good contribution to the literature on combinatorial semi-bandits, as it improves the dependence on the batch size from linear to logarithmic in many practical applications. The paper is very well written and provides clear intuition behind various ideas/assumptions. The comparison with prior work is also adequate. 1. In Section 4, is the notion of regret still the $(\alpha, \beta)$-approximate regret? If yes, then I do not understand the reason for using this notion of regret. In the independent arms setting you are not using the oracle to return an $(\alpha, \beta)$-approximate set of arms; instead, you are computing the argmax using brute force. In that case you should be able to compete with the best set. \n\n2. It would be great if the authors could provide experimental results for some applications that support the theoretical improvements. I do not foresee any negative societal impact. ", " Remark: Throughout my review, [n] refers to the n-th reference in the full paper (with appendix) from the supplementary material. (I mention this because the reference numbers differ in the 9-page submission.)\n\nThis paper studies Combinatorial Multi-Armed Bandit (CMAB) problems with probabilistically-Triggered arms (CMAB-T). In essence, CMAB is a variant of the standard bandit setting where the learner chooses a subset of arms (a.k.a. a “super arm” or “action”) and the mean reward is a function of the chosen subset and of its component arms’ mean rewards, and for CMAB-T the learner only observes feedback on a random subset of the chosen arms. 
This formulation generalizes problems including cascading bandits for ranking search results, online influence maximization, etc.\n\nPrior work [27] introduced a smoothness condition (with respect to the function that maps the component arms’ mean rewards to the subset’s mean reward) that improved existing regret bounds [7] by a factor of $1/p^*$ (see Line 36 for details). This work provides a refined smoothness condition that involves the variance of the arms’ reward distribution, i.e., the condition is less restrictive when the variances are smaller (simply because the mean rewards are easier to learn in such cases). Provided this condition holds, the authors develop a variance-aware UCB-style algorithm based on the empirical Bernstein inequality [1] and show that its regret improves on existing work with respect to the “batch size” K, which is the maximum number of arms that can be triggered at each round (e.g., the number of search results that a cascading bandit algorithm chooses).\n\nIn addition, the authors study non-triggered arms (Section 4) and specialize their results to various application settings (Section 5). STRENGTHS:\n\nIn my opinion, the strengths of the paper are (1) a novel and fairly natural smoothness condition that incorporates variance information, (2) algorithms that exploit low-variance problem instances via empirical Bernstein confidence sets (instead of “variance-unaware” Hoeffding-based confidence sets), and (3) analysis that shows exploiting variance in this manner can dramatically reduce regret in terms of $K$ (e.g., from $K$ to $\log K$ in some settings).\n\nAt a higher level, I feel the main strength of this paper is to show that variance-aware algorithms lead to polynomial improvement in terms of K, and to do so in a fairly general setting (CMAB-T). More specifically, paper A (see below) recently proved something similar for cascading bandits (a special case of CMAB-T), although it focuses on gap-free bounds so is complementary to the current work (which focuses on gap-dependent bounds). More broadly, a number of recent papers have demonstrated similar (in spirit) polynomial improvements resulting from variance-aware algorithms in various bandit and RL settings (e.g., paper C below achieves a polynomial improvement in terms of the horizon for finite-horizon RL; see also the references therein and in paper B below). Thus, the current paper seems rather timely given the similar flavor to these works.\n\n(To be clear, I’m not demanding these papers be cited — reference A is a very recent preprint and the others are on different topics — but rather, emphasizing the timeliness of the current work).\n\n(A) Vial, Sanghavi, Shakkottai, Srikant, “Minimax Regret for Cascading Bandits”, arXiv preprint\n\n(B) Zhang, Yang, Ji, Du, “Improved Variance-Aware Confidence Sets for Linear Bandits and Linear Mixture MDP”, arXiv preprint\n\n(C) Zhou, Gu, Szepesvari, “Nearly Minimax Optimal Reinforcement Learning for Linear Mixture Markov Decision Processes”, COLT 2021\n\nWEAKNESSES:\n\n(1) While mostly well written, the paper is very dense, particularly the first few technical sections (i.e., starting from Section 2). I imagine this is mostly a side effect of packing many technical definitions and results into a short page limit, but it was hard to follow at times. 
For example, Section 2 would have been easier to understand if the authors had used a simple special case of the CMAB-T model (e.g., cascading bandits) as a running example to help illustrate the technical definitions (of which there are many in Section 2).\n\n(2) Along these lines, I did not find the proof sketches very illuminating, and it was difficult to glean much intuition from the full proofs in the appendix given their length and density. In other words, I would have preferred an intermediate explanation -- high-level like the proof sketches, but more detailed like the actual proofs -- to illustrate the key ideas of the analysis (again, perhaps in a simpler special case of CMAB-T).\n\n~~(3) Unless I'm mistaken (in which case I'll gladly update during the discussion period), the improvement over prior work in the disjunctive cascading bandit setting is oversold. (See \"Questions\" section for details.)~~\n\nUpdate post-rebuttal: Weakness (3) has been satisfactorily addressed and I remain in support of acceptance. ~~In the first row of Table 3, the authors claim that their regret bound for disjunctive cascading bandits improves the existing one from [16] by a factor of $K / \log K$. The existing bound is not explicitly shown, but the authors’ bound has the form $\log(K) \log(T) \sum_i \Delta_i^{-1}$, so I assume the existing bound refers to the bound $K \log(T) \sum_i \Delta_i^{-1}$ from [16]. However, as best I can tell, this bound from [16] is specialized to cascading bandits from a more general setting; if one restricts to the special case like in the original cascading bandit paper [15], the bound $\log(T) \sum_i \Delta_i^{-1}$ from Theorems 2 and 3 in [15] actually seems better than the bound from the current paper ... or at least, much better than the bound in [16], since [15] has no explicit multiplicative dependence on $K$? (Note: [15] does not refer to its model as \"disjunctive\", but I believe it's the same formulation.)~~\n\nUpdate post-rebuttal: This question has been satisfactorily addressed and I remain in support of acceptance. Overall, I feel the limitations were acknowledged -- for example, the assumptions are clearly stated in each of the theorems, Remark 3 acknowledges that additional assumptions are needed to ensure the proposed Condition 3 implies the existing Condition 2, etc. In terms of negative societal impact, the authors simply state (in the checklist) that there is no foreseeable impact. Personally, I feel this view is somewhat narrow -- while the paper is theoretical, the algorithms could obviously have real-world impact if deployed -- but I understand the authors' point and view this more as a difference of opinion than an objective weakness.
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "vwuK5VKM1ma", "ET6xAcroMFx", "OC06JGccak", "JQioC-fcJPp", "vwuK5VKM1ma", "JQioC-fcJPp", "guea8oQS_V5", "vwuK5VKM1ma", "nips_2022_6hzH8pohyPY", "nips_2022_6hzH8pohyPY", "nips_2022_6hzH8pohyPY" ]
nips_2022_H-6iczs__Ro
A Unified Diversity Measure for Multiagent Reinforcement Learning
Promoting behavioural diversity is of critical importance in multi-agent reinforcement learning, since it helps the agent population maintain robust performance when encountering unfamiliar opponents at test time, or when the game is highly non-transitive in the strategy space (e.g., Rock-Paper-Scissors). While a myriad of diversity metrics have been proposed, there are no widely accepted or unified definitions in the literature, making the consequent diversity-aware learning algorithms difficult to evaluate and the insights elusive. In this work, we propose a novel metric called the Unified Diversity Measure (UDM) that offers a unified view of existing diversity metrics. Based on UDM, we design the UDM-Fictitious Play (UDM-FP) and UDM-Policy Space Response Oracle (UDM-PSRO) algorithms as efficient solvers for normal-form games and open-ended games. In theory, we prove that UDM-based methods can enlarge the gamescape by increasing the response capacity of the strategy pool, and have a convergence guarantee to the two-player Nash equilibrium. We validate our algorithms on games that show strong non-transitivity, and empirical results show that our algorithms achieve better performance than strong PSRO baselines in terms of exploitability and population effectivity.
Accept
This paper provides a unifying framework for promoting diverse behaviors in multi-agent RL. The framework---the unified diversity measure---is general enough to capture several other recently proposed measures as special cases (associated with specific kernel functions). The paper then provides extensions of two MARL algorithms (PSRO and fictitious play) that make use of UDM to promote diverse behaviors in MARL, shows that they converge asymptotically to relevant equilibria, and provides numerical examples. Reviewers were generally positive about the paper, finding it well written and its idea for promoting diversity in MARL interesting and intuitive.
train
[ "RjPmnhNkGNy", "qSaoXsc-_FWK", "7HOtGmgmQJC", "an9W-MnErtO", "x5AkIfL8-hk", "oCx8P4dSmrq", "p40Jbc8MVnf", "1VoJI3gaWA", "5BsVUTlAulr", "pgECdCfBuT" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response and most of my concerns are addressed. I would raise my evaluation.", " Thanks for the response and the additional experiments! ", " Thank you for the answers.", " **Q6: \"If a game has NE, why do we need to explore the diversity, especially when we can get the whole payoff matrix.\"**\n\n**A6:** In theory, we can compute its NE if we have the whole payoff matrix of a game. However, it is computationally expensive to search for the NE directly when the game size is large since no polynomial-time solution is available even in 2-player cases [2]. An iterative method, such as PSRO, PSRO-rN, etc., is therefore a better solution with lower computational cost, but at the same time, might encounter the diversity issues. As discussed in the RPS-X game (Appendix A.2.1), PSRO-rN fails to find the best strategy X (i.e., the NE), but promoting the strategy diversity in the iterative process can tackle this problem properly. \n\n**Q7: \"I am curious about the experiments on AlphaStar. AlphaStar is trained for about 14 days with 16 TPUs for each agent. How to get meta-payoffs for 888 RL meta-strategies?\"**\n\n**A7:** The work [4] derives the meta-payoff matrices of some complex real-world games including AlphaStar to analyze the non-transitive properties in these games. These meta-payoff matrices have been used to validate the diversity-aware algorithms in subsequent work like [1] [5]. We also use this meta-payoff matrix to validate our methods. \n\n**Q8: About the Minor**\n\n1) Line 203 the meaning of Hadamard product of a vector and a matrix\n\n2) The meaning of $\\Vert\\cdot\\Vert_F$ above line 206\n\n3) What is $m_i$* in line 207\n\n4) Line 186 “similar with” -> “similar to”\n\n**A8:** \n\n1) Hadamard product $\\odot$ is defined as $A\\odot B := [a_{ij} b_{ij}]$, where $A=[a_{ij}], B = [b_{ij}]$. (page 6, line 206, in paper)\n\n2) $\\Vert\\cdot\\Vert_F$ usually refers to the Frobenius norm, which is defined as $\\Vert{A}\\Vert_F:=\\sqrt{\\sum_{i,j}a_{ij}^2}=\\sqrt{\\textrm{Tr}(A^{\\mathsf{T}}A)}$, where $A=[a_{ij}]$. (page 6, line 207, in paper)\n\n3) $m_i$* is the $i$-th row of $\\mathcal{M}^*$. (page 6, line 209, in paper)\n\n4) We have proofreaded the paper again to elliminate any potential typos. (page 5, line 188, in paper)\n\n\n**Q9: \"I didn't find the description of the limitation of the proposed methods. And the authors didn't discuss any potential negative societal impacts. The author can describe the limitation, for example, is the proposed method limited to the two-player zero-sum game.\"**\n\n**A9:** Thank for your comment. We have added the limitations of our method in the Discussions in Appendix A.5 (page 16, lines 242-252, Appendix). \n\n----\n\nref.\n\n[1] Nieves et al., Modelling behavioural diversity for learning in open-ended games, ICML 2021.\n\n[2] Chen et al., Settling the complexity of computing two-player nash equilibria, JACM 2009. \n\n[3] McMahan et al., planning in the presence of cost functions controlled by an adversary, ICML 2003. \n\n[4] Czarneck et al., Real world games look like spinning tops, NeurIPS 2020. \n\n[5] Liu et al., Towards unifying behavioral and response diversity for open-ended learning in zero-sum games, NeurIPS 2021. ", " We thank the reviewer for all these valuable comments. We provide point-by-point responses below.\n\n**Q1: \"Add the background of the geometric intuition.\"**\n\n**A1:** We have added the background of the geometric intuition in the revised version (page 4, lines 163-166, in paper). 
\n\n**Q2: \"Polish the notation and the writing of this paper for more general audiences, not only for the game theory community but also the reinforcement learning/machine learning community.\"**\n\n**A2:** Thanks for this comment. The notations and writing of this paper generally follow previous work [1]. According to your advice, we have added a table of notations in Appendix A.1.1 (page 2, line 24, Appendix) to further impove the readability. \n\n**Q3: \"How to generate strategy feature and diversity kernel when applying UDM in the new games.\"**\n\n**A3:** As for the strategy feature, we can choose $\\phi_{i}=\\mathcal{M}_{[i,:]}$ if we focus on RD (response diversity), \n\nor $\\phi_{i}=\\\\{\\pi_{i}(\\cdot|s)\\\\}_{s}$ for BD (behavioral diversity). \n\nAs for the diversity kernel, we can choose some simple but effective kernel functions such as the linear kernel, polynomial kernel and Gaussian kernel. Since the dimension of the feature vector (i.e. $\\mathcal{M}_{[i,:]}$) in our experiments is large, the computational burden of Gaussian kernel would be much higher than the others. We finally choose $K\\langle{x,y}\\rangle=(\\langle{x,y}\\rangle+1)^{3}$ due to its best performance as shown below. \n\n| kernel function | exploitability | negative PE |\n| :-: | :-: | :-: |\n| linear kernel | 0.032 | 0.013 |\n| 1-order polynomial kernel | 0.037 | 0.012 |\n| 2-order polynomial kernel | 0.029 | 0.012 |\n| 3-order polynomial kernel | **0.025** | **0.010** |\n| 4-order polynomial kernel | 0.038 | 0.013 |\n\nThe above results have been added in Appendix A.5 (page 16, lines 230-241, Appendix). \n\n**Q4: \"What if the number of agents is not two?\"**\n\n**A4:** Theoretically, UDM can still work in n-player games. For each player $n$, UDM measures the diversity of a population through the diversity kernel $[K(\\phi_{i},\\phi_{j})]$, which is determined by the strategy features $\\\\{\\phi_{i}\\\\}$ of the population. Thus, to show that UDM can still work in multi-player games, it suffices to show that the strategy features $\\\\{\\phi_{i}\\\\}$ are independent of the types of games. Concretely, we can choose $\\phi_{i}=\\mathcal{M}_{[i,:]}^{(n)}$, where \n\n$\\mathcal{M}_{i,j}^{(n)}$\n\n$:=\\sum_{S^{n}}\\sum_{S^{-n}}\\pi_{i}^{(n)}(S^{n})\\cdot g^{n}(S^{n},S^{-n})\\cdot\\pi_{j}^{(-n)}(S^{-n})$\n\nis the utility of the $i$-th policy $\\pi_{i}^{(n)}$ of the player $n$ against the $j$-th joint policy $\\pi_{j}^{(-n)}$ of the players $-n$. However, since the length of joint strategy $S^{-n}:=(S^{1},\\cdots,S^{n-1},S^{n+1},\\cdots,S^{N})$ increases with the number of the players, the computational cost of UDM would be expensive. Investigating how to reduce the computational cost when extending UDM to n-player games can be an important future work. \n\nWe have added the above explanations in Appendix A.5 (page 16, lines 242-252, Appendix).\n\n**Q5: \"Line 293, missing the experiments of AlphaGO.\"**\n\n**A5:** In AlphaGO, the following numerical results show that our method performs better than the diversity-aware baselines. \n\n| method | exploitability | negative PE |\n| :-: | :-: | :-: |\n| PSRO-rN | 0.41 | 0.06 |\n| EC-PSRO | 0.13 | 0.02 |\n| FEP-PSRO | **0.09** | 0.02 |\n| UDM-PSRO | **0.09** | **0.01** |\n\nThe above results have been added in Appendix A.4.3 (page 15, lines 225-227, Appendix). \n\n\n", " We thank the reviewer for all these valuable comments. 
We provide point-by-point responses below.\n\n**Q1: \"Can this diversity measure be easily extended to n-player, general-sum, or non-symmetric games?\"**\n\n**A1**: Theoretically, UDM can still work in n-player, general-sum, or non-symmetric games. For each player $n$, UDM measures the diversity of a population through the diversity kernel $[K(\\phi_{i},\\phi_{j})]$, which is determined by the strategy features $\\\\{\\phi_{i}\\\\}$ \nof the population. Thus, to show that UDM can still work in these games, it suffices to show that the strategy features $\\\\{\\phi_{i}\\\\}$ are independent of the types of games. Concretely, we can choose $\\phi_{i}=\\mathcal{M}_{[i,:]}^{(n)}$, where \n\n$\\mathcal{M}_{i,j}^{(n)}$ \n\n$:=\\sum_{S^{n}}\\sum_{S^{-n}}\\pi_{i}^{(n)}(S^{n})\\cdot g^{n}(S^{n},S^{-n})\\cdot\\pi_{j}^{(-n)}(S^{-n})$\n\nis the utility of the $i$-th policy $\\pi_{i}^{(n)}$ of the player $n$ against the $j$-th joint policy $\\pi_{j}^{(-n)}$ of the players $-n$. However, since the length of joint strategy $S^{-n}:=(S^{1},\\cdots,S^{n-1},S^{n+1},\\cdots,S^{N})$ increases with the number of the players, the computational cost of UDM would be expensive. Investigating how to reduce the computational cost when extending UDM to n-player, general-sum, or non-symmetric games can be an important future work. \n\nThe above explanations have been added in Discussions in Appendix A.5 (page 16, lines 242-252, Appendix). \n\n**Q2: \"Exploitability of extensive-form games was not evaluated.\"**\n\n**A2**: Actually, we provided the results of extensive-form games including Kuhn Poker and Tic-Tac-Toe in Apendix A.4.3 (pages 14-15, lines 202-217, Appendix) and the results show that our method achieves the lower exploitability than the non-diversity baselines. \n\n**Q3: \"Experiments Section (Figure 1): I believe all the experiments are made on normal-form games? Therefore it might be good to mention/cite double oracle as well as PSRO.\"**\n\n**A3**: The experiments provided in Section 5 are investigated on normal-form games. As shown in Table 1 (page 3, line 110, in paper), double oracle is an instance of PSRO with $N=2$ and the policy solver $\\mathcal{S}$ set to the NE in normal-form games [1] [2]. Therefore, the performance of double oracle is consistent with PSRO's in normal-form games provided $\\mathcal{S}=\\textrm{NE}$ and $\\mathcal{O}=\\textrm{BR}(\\cdot)$. We have cited double oracle in Table 1 for comparision of the existing main game solvers. \n\n---\n\nref.\n\n[1] Lanctot et al., A unified game-theoretic approach to multiagent reinforcement learning, NeurIPS 2017. \n\n[2] Balduzzi et al., Open-ended learning in symmetric zero-sum games, PMLR 2019. \n\n---\n\n", " We thank the reviewer for all these valuable comments. We provide point-by-point responses below.\n\n**Q1: \"Could you elaborate why the kernel function $K⟨x,y⟩=(⟨x,y⟩+1)^3$ and $f(x)=\\frac{1}{1+exp⁡(−x)}−\\frac{1}{2}$ is chosen? Or, is there any principle for choosing these two functions in UDM?\"**\n\n**A1:** As for the function $f(x)$, the principle of choosing $f(x)$ is that the function should be bounded, monotonically increasing, and $f(0)=0$ (refer to Section 3.1 for more explanations). There are lots of functions that satisfy these properties, e.g., $f(x)=\\frac{g(x)}{\\gamma+g(x)}-\\frac{g(0)}{\\gamma+g(0)}$, where $\\gamma>0$ is a constant, $g(x)$ is a monotonically increasing function and $g(0)\\ge0$. 
In our paper, we choose $g(x)=\\exp(x)$ since $f(x)=\\frac{1}{1+\\gamma\\exp(-x)}-\\frac{1}{1+\\gamma}, \\gamma\\in(0,1]$ has a sufficiently large convergence region $R=(0,\\infty)$. We have added an ablation study on $\\gamma$, and it shows that $\\gamma=1$ is the best, as shown below. \n\n| $\\gamma$ | exploitability | negative PE |\n| :------: | :-------------: | :---------: |\n| $0.25$ | 0.031 | 0.012 |\n| $0.50$ | 0.033 | 0.012 |\n| $0.75$ | 0.031 | 0.012 |\n| $1.00$ | **0.025** | **0.010** |\n\nAs for the diversity kernel, we can choose simple but effective kernel functions such as the linear, polynomial, and Gaussian kernels. Since the dimension of the feature vector (i.e., $\\mathcal{M}_{[i,:]}$) in our experiments is large, the computational burden of the Gaussian kernel would be higher than that of the others. We finally use $K\\langle{x,y}\\rangle=(\\langle{x,y}\\rangle+1)^{3}$ due to its best performance in the ablation study, as shown below. \n\n| kernel function | exploitability | negative PE |\n| :-----------------------: | :------------: | :---------: |\n| linear kernel | 0.032 | 0.013 |\n| 1-order polynomial kernel | 0.037 | 0.012 |\n| 2-order polynomial kernel | 0.029 | 0.012 |\n| 3-order polynomial kernel | **0.025** | **0.010** |\n| 4-order polynomial kernel | 0.038 | 0.013 |\n\n\nAll the above experiments have been added in Appendix A.5 (page 16, lines 230-241, Appendix). \n\n**Q2: \"Will UDM-FP and UDM-α-PSRO perform better (in terms of expl and PE) than the baselines that used FP and α-PSRO respectively?\"**\n\n**A2:** We have made additional experiments on UDM-FP and UDM $\\alpha$-PSRO, and the results in Appendix A.4.3 (page 14, lines 189-201, Appendix) show that UDM $\\alpha$-PSRO and UDM-FP perform better than $\\alpha$-PSRO and FP respectively. Since the solution concept of (UDM-)$\\alpha$-PSRO is $\\alpha$-Rank, PCS-score is adopted as the metric to assess the quality of the population instead of exploitability, as argued in [1]. \n\n* | method | PCS-score |\n | :---------------: | :-------: |\n | $\\alpha$-PSRO | 0.68 |\n | UDM $\\alpha$-PSRO | **0.99** |\n\n* | method | exploitability | negative PE |\n | :----: | :------------: | :---------: |\n | FP | 0.14 | 0.04 |\n | UDM-FP | **0.13** | **0.03** |\n\n**Q3: \"Can UDM incorporate RD and BD simultaneously and achieve better performance than only considering one of RD and BD?\"**\n\n**A3:** Yes, UDM-PSRO can achieve better performance by considering RD and BD at the same time, as shown in the following table. \n\n| method | exploitability | negative PE |\n| :---------------: | :------------: | :---------: |\n| Self-Play | 0.17 | 0.076 |\n| PSRO | 0.04 | 0.015 |\n| PSRO-rN | 0.04 | 0.014 |\n| P-PSRO | 0.04 | 0.014 |\n| EC-PSRO | 0.03 | 0.011 |\n| FEP-PSRO | 0.03 | 0.011 |\n| UDM-PSRO w. RD&BD | **0.02** | **0.008** |\n\nWe have added these results in Appendix A.4.3 (page 15, lines 222-224, Appendix). \n\n**Q4: \"The convergence of UDM-FP is provided by showing it is a GWFP process. Is it possible to show a faster convergence speed with this diversity term?\"**\n\n**A4:** Intuitively, since the diversity term encourages UDM-FP to explore the strategy space, UDM-FP could find the best strategy faster and thus converge faster, which is also validated by the empirical results in Appendix A.4.3 (page 14, lines 192-195, Appendix). However, a strict theoretical proof is not straightforward, and we leave it to future work. 
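To make the instantiation in A1 above concrete, here is a minimal sketch of how a UDM-style diversity value could be computed from an empirical payoff matrix with the 3-order polynomial kernel and the chosen $f$. The aggregation over the eigenvalues of the diversity kernel matrix is our assumption based on the description in this thread rather than the paper's exact formula, and the payoff matrix is a toy example:

```python
import numpy as np

# Toy payoff matrix M; row i is the RD strategy feature phi_i = M[i, :].
M = np.array([[ 0.0,  1.0, -1.0],
              [-1.0,  0.0,  1.0],
              [ 1.0, -1.0,  0.0]])  # rock-paper-scissors payoffs

def poly_kernel(x, y, order=3):
    # The 3-order polynomial kernel K<x, y> = (<x, y> + 1)^3 chosen in A1.
    return (np.dot(x, y) + 1.0) ** order

def f(x, gamma=1.0):
    # f(x) = 1/(1 + gamma*exp(-x)) - 1/(1 + gamma): bounded, increasing, f(0) = 0.
    return 1.0 / (1.0 + gamma * np.exp(-x)) - 1.0 / (1.0 + gamma)

# Diversity kernel (Gram) matrix over the population's strategy features.
features = [M[i, :] for i in range(M.shape[0])]
G = np.array([[poly_kernel(a, b) for b in features] for a in features])

# Assumed aggregation: apply f to the (non-negative) eigenvalues of G and sum.
eigvals = np.clip(np.linalg.eigvalsh(G), 0.0, None)  # G is symmetric and PSD
print("UDM-style diversity of the population:", f(eigvals).sum())
```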
\n\n---\n\nref.\n\n[1] Muller et al., A generalized training approach for multiagent learning, ICLR 2019. \n\n---\n\n", " This paper proposes a diversity measure that unifies three existing diversity measures (ED, PD, and EC), and uses the unification to explain properties of the existing metrics. UDM-PSRO is proposed, which uses the new diversity metric as an oracle. It is compared to other baselines on some simple normal-form games. Strengths: This paper is well-written and clear to follow. The maths and proofs seem sound to me. The key idea is interesting and important research.\n\nWeaknesses: Exploitability experiments show similar performance to other pre-existing methods. Exploitability of extensive-form games was not evaluated.\n\nI think this is a strong paper. More thorough evaluation on (real) extensive-form games would make it stronger.\n\nComments:\n\nExperiments Section (Figure 1): I believe all the experiments are made on normal-form games? Therefore it might be good to mention/cite double oracle as well as PSRO. Q1: Can this diversity measure be easily extended to n-player, general-sum, or non-symmetric games? The usual limits on symmetric two-player zero-sum should be stated again in the conclusion, please.", " This paper offers a unified diversity measure for multi-agent reinforcement learning. The authors first review existing diversity measures and then present the Unified Diversity Measure (UDM) from a geometric perspective to unify all existing diversity measures. After showing the relationship between UDM and existing diversity measures, the authors provide two algorithms, UDM Fictitious Play and UDM PSRO, to produce diverse policies. Experiments on AlphaStar and Blotto show the advantage of the proposed method over baselines. - Strengths\n - The unified framework reveals the similarity among various diversity measures, and provides a new view of the diversity of policies.\n - The overall writing flow is good\n\n- Weaknesses\n - Lack of background on the geometric intuition.\n - There is too much notation, and it is confusing and unfriendly to audiences outside the game theory community.\n - Add the background of the geometric intuition.\n- Polish the notation and the writing of this paper for more general audiences, not only for the game theory community but also the reinforcement learning/machine learning community.\n- How to generate strategy feature and diversity kernel when applying UDM in the new games.\n- What if the number of agents is not two?\n- Line 293, missing the experiments of AlphaGO. \n- If a game has NE, why do we need to explore diversity, especially when we can get the whole payoff matrix.\n- I am curious about the experiments on AlphaStar. AlphaStar is trained for about 14 days with 16 TPUs for each agent. How to get meta-payoffs for 888 RL meta-strategies?\n\n- Minor\n - Line 203 the meaning of Hadamard product of a vector and a matrix\n - The meaning of $|| \\cdot ||_{F}$ above line 206\n - What is $m^*_i$ in line 207\n - Line 186 “similar with” -> “similar to”\n I didn't find a description of the limitations of the proposed methods. And the authors didn't discuss any potential negative societal impacts. The authors can describe the limitations, for example, whether the proposed method is limited to two-player zero-sum games.", " This paper presents a unified diversity measure (UDM) for MARL. By choosing different diversity kernels and the function $f$, UDM can recover different existing diversity metrics. 
The authors establish some convergence properties under UDM and then conduct several experiments (3 in the main text and 2 additional in the appendix), including both transitive and non-transitive games. The experimental results show that UDM outperforms baselines without explicit diversity objectives and is comparable to baselines with diversity objectives in terms of exploitability and population effectivity. **Strengths**:\n\nThis study is closely related to learning in MARL. With a more diverse population, we expect to reach the target objective faster (e.g., lower exploitability, higher population effectivity). However, existing diversity metrics may be motivated by different practical observations, and there is no well-defined unified diversity metric for the learning of multi-agent systems. UDM serves this purpose well by unifying existing diversity metrics into a single function, which may help design new learning algorithms.\n\nI think this paper is well-written and easy to follow. Based on a diversity kernel matrix, the authors show how UDM correlates to and differs from existing diversity measures. Some benefits of UDM are also revealed -- UDM can not only recover ED and PD separately but also tackle some of the notorious problems (ignoring weak but useful strategies, and failing to distinguish redundant strategies).\n\nBesides, UDM also seems technically sound, and the methods and proofs are intuitive.\n\n\n**Weaknesses**:\n\nThe current implementations (the choice of the kernel function and $f$) and experimental results are less informative. In the experiment part, the authors focus on $K\\langle x,y \\rangle=(\\langle x,y\\rangle+1)^3$ and $f(x)=\\frac{1}{1+\\exp(-x)}-\\frac{1}{2}$ and only RD metrics. UDM generalizes RD and BD, while only a small part of this generality is shown in the current version. From the experiments, in terms of exploitability (expl) and PE, the differences among UDM, EC-PSRO, and FEP-PSRO are indistinguishable in AlphaStar888 and Blotto. I noticed that the experiments on two extensive-form games also demonstrated the above results, and the expl and PE of FEP-PSRO seem to be slightly better. The experiments for UDM-FP and UDM-$\\alpha$-PSRO are missing. More insights (empirical or theoretical) from other instantiations of UDM are expected, which can be achieved by an ablation study. The questions mainly concern the instantiation of UDM.\n\n(1) Could you elaborate why the kernel function $K\\langle x,y \\rangle=(\\langle x,y\\rangle+1)^3$ and $f(x)=\\frac{1}{1+\\exp(-x)}-\\frac{1}{2}$ is chosen? Or, is there any principle for choosing these two functions in UDM?\n\n(2) Will UDM-FP and UDM-$\\alpha$-PSRO perform better (in terms of expl and PE) than the baselines that used FP and $\\alpha$-PSRO respectively?\n\n(3) This work seems to focus on RD, while UDM can be readily applied to BD. FEP-PSRO unified BD and RD, and the experiments showed it is helpful. Can UDM incorporate RD and BD simultaneously and achieve better performance than only considering one of RD and BD?\n\n(4) The convergence of UDM-FP is provided by showing it is a GWFP process. Is it possible to show a faster convergence speed with this diversity term? I favor the motivation of this paper, and the proposed method can provide a better perspective on population diversity in MARL (especially in open-ended learning for two-player zero-sum games). It would be very helpful to provide an ablation study so that future researchers can know how to instantiate UDM." ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "an9W-MnErtO", "p40Jbc8MVnf", "oCx8P4dSmrq", "x5AkIfL8-hk", "5BsVUTlAulr", "1VoJI3gaWA", "pgECdCfBuT", "nips_2022_H-6iczs__Ro", "nips_2022_H-6iczs__Ro", "nips_2022_H-6iczs__Ro" ]
nips_2022_wiBEFdAvl8L
GLIPv2: Unifying Localization and Vision-Language Understanding
We present GLIPv2, a grounded VL understanding model, that serves both localization tasks (e.g., object detection, instance segmentation) and Vision-Language (VL) understanding tasks (e.g., VQA, image captioning). GLIPv2 elegantly unifies localization pre-training and Vision-Language Pre-training (VLP) with three pre-training tasks: phrase grounding as a VL reformulation of the detection task, region-word contrastive learning as a novel region-word level contrastive learning task, and masked language modeling. This unification not only simplifies the previous multi-stage VLP procedure but also achieves mutual benefits between localization and understanding tasks. Experimental results show that a single GLIPv2 model (all model weights are shared) achieves near SoTA performance on various localization and understanding tasks. The model also shows (1) strong zero-shot and few-shot adaptation performance on open-vocabulary object detection tasks and (2) superior grounding capability on VL understanding tasks.
Accept
All three reviewers provided positive reviews and scores for this paper. They were happy to see the strong empirical evaluations and improvements over GLIP, were impressed by the zero-shot results, and found the new combination of pre-training objectives interesting. A few questions and concerns brought up by reviewers had to do with differentiation from the GLIP paper and model. These concerns, including the novelty of the loss term, the tasks accomplished, and the need for detection boxes at training time, were well addressed by the authors. The reviewers also acknowledged that their questions were answered. Given these positive reviews and discussions, I recommend acceptance. Note to authors: Please address the comments raised by the ethics reviewer in your final manuscript. Thank you.
train
[ "hzoykbU2tqD", "4EKJULcZ7PM", "PSr22ue1bHo", "0g9F3DQbHPg", "7tSku1rn58a", "DeNN53stpdV", "6VEWeQvTtp", "NmGPYCze6iB", "Ll2znDYLTLm", "5ElAx_bDHgL", "cTZCcEuHSIE" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank Reviewer WQgW for the reply! \nWe will include the ablation of text encoder initialization, e.g., a text-only pretrained model vs. a CLIP/UniCL-like multimodal pretrained model, in the final version. As we presented in the rebuttal, their performance is nearly the same. \n\nFor the second point, if the reviewer has any questions/concerns, we are happy to answer.", " Hi Authors,\n\nThanks for responding to my comments and questions, I just wanted to let you know I have read over your responses. I do not have further questions.", " 1. That's exciting to hear that performance is maintained even with a text encoder initialization from text-only pretraining! To make this argument slightly more convincing to readers, I would suggest including the localization ablation and ideally other text-only and vision-only metrics as well, time permitting. It still does in fairness feel less convincing to initialize the text stream of the multimodal model with a text encoder from another multimodal model.\n\n3. Awesome -- looking forward to reading more about this in the updated section! I understand it's a known limitation of using any pretrained vision model; this just warrants very clear disclosure, in particular for labels coming from a model like GLIP that has not undergone (to my knowledge) any fairness evaluation.\n\nI'll circle back to 2 in a separate comment later on.", " The authors do not explicitly discuss the negative societal impact of their work, as pointed out by one of the reviewers. \n\nSince the paper proposes a method for localization and vision-language understanding, an applied task with a potential dual-use problem, the paper would benefit from a brief comment/discussion about potential dual use/negative societal impact. In their response, the authors did not explicitly commit to discussing the negative societal impact in a revised version of their work. I think it is possible to address the concerns in the current version of the paper. The authors should just add a brief comment/discussion about potential dual use/negative societal impact.", " We appreciate the reviewer for the positive and insightful feedback. Our response to the reviewer’s questions is as follows.\n***\n1. Mentioned in the previous section as well (\"The text transformer appears to use the text transformer from CLIP including its pretrained weights (let me know if I'm misunderstanding). If it does use those weights, this skews these results because CLIP's text transformer already had visual supervision from the contrastive pretraining.\"), is the text transformer from a pretrained CLIP model (L239)? \n\n**Our Response**: For GLIPv2-T, we use the ImageNet pre-trained Swin-Transformer to initialize the image encoder and BERT-base-uncased to initialize the language encoder. For GLIPv2-B, we use the pre-trained paired image-language encoder from UniCL (CLIP-like pre-training, https://github.com/microsoft/UniCL) for initialization. We did an ablation study on the different language encoders (UniCL vs. BERT) and found that their results are nearly the same. Therefore, UniCL initialization **does not skew** the good localization performance. The main reason for us to keep the UniCL (CLIP-like) language encoder is its Pre-LayerNorm (Xiong et al.) operation. We find the UniCL (CLIP-like) language encoder with Pre-LayerNorm is more **stable** during training compared with BERT, which uses Post-LayerNorm. We will include the ablation study in the revised version. A generic sketch of the two layer-norm placements is shown below. 
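The following is a rough illustration of the Pre-LayerNorm vs. Post-LayerNorm distinction referenced above. It is a minimal sketch of the two generic layouts, not GLIPv2's actual encoder code, and the module names are illustrative:

```python
import torch.nn as nn

class PreLNBlock(nn.Module):
    """Pre-LayerNorm: normalize *before* each sublayer (the UniCL/CLIP-style layout)."""
    def __init__(self, d, heads):
        super().__init__()
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, x):
        h = self.ln1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual path stays un-normalized
        return x + self.mlp(self.ln2(x))

class PostLNBlock(nn.Module):
    """Post-LayerNorm: normalize *after* each residual add (the original BERT layout)."""
    def __init__(self, d, heads):
        super().__init__()
        self.ln1, self.ln2 = nn.LayerNorm(d), nn.LayerNorm(d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, x):
        x = self.ln1(x + self.attn(x, x, x, need_weights=False)[0])
        return self.ln2(x + self.mlp(x))
```

Pre-LN keeps the residual path free of normalization, which Xiong et al. connect to more stable optimization and which is consistent with the training stability we observed.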
\n- Xiong et al., On Layer Normalization in the Transformer Architecture.\n***\n2. GLIPv2 uses GLIP (v1) to generate bounding boxes for the unlabeled (image, text) pairs in the pretraining data versus just an off-the-shelf object detector that has not had linguistic supervision. This also clouds the data and pretraining approach a bit given how similar GLIPv2 is to GLIPv1. \"What are the authors' thoughts about requiring another large, multimodal model to generate pretraining data?\"\n\n**Our Response**: An off-the-shelf object detector cannot generate the bounding boxes and their corresponding phrases for the unlabeled image-text pairs because a traditional object detector **cannot be used as an open-vocabulary grounding model**. GLIP and other grounding models (e.g., MDETR) should be used to generate bounding boxes for the unlabeled image-text pairs. We use the GLIP model to get pseudo labels on Cap and CC+SBU data because it had the best grounding performance prior to GLIPv2's work. Furthermore, we can even use the GLIPv2 model itself, which is trained on human-annotated OD and GoldG data, to scale up the pre-training data. This method is self-consistent in terms of the self-training approach to utilizing large-scale unlabeled image-text pair data. \n***\n3. (a) Any biases learned by GLIP (v1) – for instance, having significantly lower detection accuracy for certain demographics of people in images or performing worse for images where people do not fit in the gendered roles of the training data – propagate to the pretraining data for GLIPv2 as well. This is compared to detection datasets that were hand-annotated. It’s a component of using large-scale data, and particularly using large-scale data labeled by another ML model, that the authors should address in their limitations section. \n(b) Section 5 is titled “Conclusion and Social Impacts” without a description of social impacts.\n\n**Our Response**: Thank you for all the reviewers' suggestions! While our paper shows promising results on both object detection and VL understanding tasks, additional analysis of the data and the model is necessary before deploying it in practice because large-scale web data may contain unintended private information, unsuitable images/text, or some bias leakage. We will check the generated pseudo data more carefully and address this in the \"Conclusion and Social Impacts\" section. ", " 4. (a) Although this work shows better performance compared to previous works, I wonder if this is also due to additional data used for pretraining, which is the localized data generated by GLIP.\n(b) It seems there is additional localization data generated using the GLIP model that is used for training. If so, are the GLIP and GLIPv2 models trained in a teacher/student setup? In fact, how does a GLIP model perform if retrained on this additional data (kind of like self-training)? Are the GLIP baselines in Table 1 using this additional localization data? If not, then the comparison is a bit unfair.\n\n**Our Response**: GLIPv2-T does not use additional pre-training data compared to GLIP-T. The pre-training data of GLIP-T / GLIPv2-T consist of two types of data: 1) gold human-annotated data (gold detection data + gold grounding data); 2) image-text pairs with pseudo boxes (Cap4M). The pseudo boxes come from a \"teacher GLIP-T\" model. 
Thus, GLIP-T reported in the GLIP[36] paper (see Section 4 in the GLIP paper) is already trained in such a \"self-training\" manner.\n\nTo account for any implementation differences, we have provided a rigorous comparison between GLIP and GLIPv2 in Table 4. Row 3 ($L\\_{loc}$ + $L\\_{intra}$) is our re-implementation of GLIP-T (including using \"self-training\" data); Row 6 is the GLIPv2-T. Row 3 and Row 6 use the exact same pre-training data (the same image-text pairs and the same pseudo boxes).\n***\n5. Table 4, Row 6: is the MLM objective trained alongside other objectives? Or is there another step of pretraining here? Appendix section 4 mentions that Row 6 has an additional stage of training without MLM. I think it will be beneficial to add more analysis behind the intuition of a second stage of pre-training.\n\n**Our Response**: An additional stage of pre-training is applied for small models (GLIPv2-T and GLIPv2-B) due to limited model capacity. In order to achieve higher performance on both localization and understanding tasks, we find that including all data (even with some noise) and the MLM loss in the first stage of pre-training benefits the model in learning a better representation for both localization and understanding. Since the OD tasks require more accurate localization ability, we eliminate the MLM loss in our second stage of pre-training. The large model (GLIPv2-H) does not need this additional stage because it has enough capacity to learn both word-region alignment and MLM together in a single stage. We will include this analysis in the revised version.\n***\n6. How does the training time get impacted by the newly introduced loss?\n\n**Our Response**: We provide the comparison of the training speed for GLIP and GLIPv2 on V100 below. GLIP-T achieves 1.62 FPS, and GLIPv2-T achieves 1.46 FPS with both the inter-image region-word contrastive loss and the MLM loss; GLIP-L achieves 0.88 FPS, while GLIPv2-B achieves 0.83 FPS with comparable performance. Introducing these new losses has nearly negligible computational cost but provides extra gains on both localization and understanding tasks (Table 1\\&2, GLIP vs. GLIPv2). ", " We appreciate the reviewer for the positive and insightful feedback. Our response is as follows.\n***\n1. I am concerned about the novelty, as the additional loss term is somewhat similar to the well-known region-word loss applied over the full batch in multiple works, and it is the same setup as GLIP but showing performance on new vision+language tasks. \n\n**Our Response**: As far as we know, up to the deadline (05/19/2022) for NeurIPS submission, there are only three published papers (VILD (ICLR22), RegionCLIP (CVPR22), and X-VLM (ICML22)) that have the flavor of a \"region-word\" loss applied over the full batch. We discuss the difference between our work and the three aforementioned works in the following: (1) All three works use a **\"region-sentence\"** loss, i.e., the similarity between a region feature and the [CLS] token of a sentence, instead of the true **\"region-word\"** loss used in GLIPv2. As a result, none of these three works made use of phrase grounding data, which may contain multiple entities in one sentence, during their training. It is the most important point in GLIPv2 to use phrase grounding data and pseudo grounding data to train a unified grounded VL understanding model. 
(2) GLIPv2 has carefully designed the **positive label propagation** in our inter-image region-word contrastive loss to mitigate the wrong assumption that \"every unpaired region-word pair is negative\". We discussed the intuition and necessity of positive label propagation (please refer to our response to Q2 below). As far as we know, no previous work has mentioned this mechanism of positive label propagation. (3) There are some other differences. For example, in VILD, its \"region-sentence\" loss is actually not a contrastive loss over the full batch but a classification loss over a fixed vocabulary per sample (see the definition of $L_{ViLD-text}$). In summary, we believe that our inter-image region-word contrastive loss is novel and has a significant difference from previous works. We will include this discussion in the revision.\n***\n2. It is unclear to me what the authors are trying to say in Lines 184-185. I understand the explanation of why the inter-image region-word loss is different from CLIP, but the methodology is not very clear.\n\n**Our Response**: We introduce the positive label propagation for the inter-image contrastive loss in L184-185. In our inter-image region-word contrastive loss, we **cannot** simply assign all regions and texts coming from unpaired image-text as negative pairs, as done in CLIP. Our datasets contain object detection datasets, such as Objects365. As mentioned in L182-183 and Figure 2, if a region of an image from Objects365 is labeled as \"person\", this \"person\" region should be a positive pair with all \"person\" phrases in language queries from other Objects365 images. \"Positive label propagation\" is such a mechanism to propagate positive labels based on the detection labels, to avoid false negative supervision in the region-word contrastive loss. \n\nHowever, consider two phrases from phrase grounding datasets (Flickr30k-entities), e.g., \"a person with a red hat.\" and \"a person wearing a blue shirt.\". Even though they have the same phrase \"person\", each of them carries semantic context that is unique to that image-sentence pair. Therefore, we do not apply \"positive label propagation\" to grounding-type data, as mentioned in Lines 184-185.\n***\n3. Since the authors discuss how the loss differs from CLIP, did they also compare on vision classification tasks?\n\n**Our Response**: The proposed region-word contrastive loss is specific to learning **region-level** representations. It is inspired by CLIP's sentence-image contrastive loss, which focuses on **image-level** representations. Thus, we do not view our loss as an improvement over CLIP but rather a much-needed extension to region-level tasks. Consequently, we do not compare with CLIP on classification tasks, which only value image-level representations. \n***\n(Please see below for more responses.)", " We appreciate the reviewer for the positive and insightful feedback. Our response to the reviewer’s questions is as follows.\n***\n1. While there are some performance improvements over GLIP, I don’t see significant method changes other than the inter loss. This loss does not seem to contribute that much (1-2 point improvement compared to the ablations without the loss), and the performance improvement compared to GLIP is quite small.\n\n**Our Response**: First of all, the reviewer also agrees that \"The method does outperform its predecessor GLIP consistently, albeit small gains on OD. But it does have large performance improvements on VL understanding tasks compared to prior work MDETR and others\". 
Note that GLIP cannot do VL understanding tasks (e.g., captioning, VQA). GLIPv2 extends GLIP to a unified localization and VL understanding model, which is a significant methodological advance over GLIP. Second, in our ablation study in Table 4, with the same amount of pseudo data added in pre-training, the inter-image word-region contrastive loss achieves consistent gains across both localization and VL understanding tasks. Notice that 1-2 point improvements on OD tasks are not small. Specifically, a 1.0 AP improvement on COCO and a 2.0 AP increase on LVIS are considered quite significant for OD tasks, especially compared with a nearly SoTA baseline such as GLIP. \n***\n2. On L82-84 authors write “Many VL models (e.g., BUTD) (2; 58) rely on a pre-trained localization model as their visual encoder; the downside is the pro-longed “localization$\\to$VLP” pre-training pipeline (41; 48; 13; 47; 34; 32; 60; 37; 35). In contrast, GLIPv2 simplifies the pre-training pipeline and enables grounded VL understanding for better interpretability.\" Doesn’t GLIPv2 use image tags/bbox labels from a pre-trained detector too? Again, the paper states “ triplet format (Img, Text, T)” data inputs where T are (box-label) annotations, so don’t we still start with localization to some extent?\n\n**Our Response**: The localization information is necessary for both traditional BUTD[2;58] models and GLIPv2 models. The major difference is that the traditional BUTD VL models may require two-stage pre-training: first pre-train the detection modules, then pre-train the VL understanding (alignment) modules. However, GLIPv2 unifies detection and VL understanding into a single \"grounded VL understanding\" pre-training task. \n\nThe \"(Img, Text, Boxes)\" data used in GLIPv2 pre-training can be just human-annotated data (see Row1\\&2 in Table 5), with which GLIPv2 pre-training does not involve any pseudo data from a pre-trained grounding/localization model. In order to achieve the best performance, GLIPv2 uses image-text pair data with pseudo boxes from a pre-trained GLIP model (see Row3-6 in Table 4), which is trained with the same \"grounded VL understanding\" task but just with smaller data. Thank you for the suggestion, and we will make this clearer in the revised version!\n***\n3. L183-185 “We do not propagate positives to grounding-type texts (natural sentences) because phrases in sentences carry contexts that are unique to that image-sentence pair.” Out of curiosity, did you try to include these as positives? If so, what happened? Seems like a thoughtful design choice.\n\n**Our Response**: We did not try it for two reasons: (1) It is reasonable not to do it. Our datasets mainly include two types of data in our training: (a) object detection datasets, and (b) phrase grounding datasets with natural sentence descriptions. As mentioned in L182 and Figure 2, if a region from (a) is annotated with \"person\", it should be a positive pair with all \"person\" phrases in detection-type data (a). However, for natural sentences from grounding-type data (b), consider the two phrases \"a person with a red hat\" and \"a person wearing a blue shirt\": even though they share the phrase \"person\", we believe each of them carries a unique semantic context. Therefore, we apply label propagation for detection-type data (a) but not for grounding-type data (b). (2) Practically, it is difficult to determine whether two **free-form** phrases in natural sentences have the same meaning, and it is also hard to align them in the implementation. It is indeed a thoughtful and careful design choice, and the above are the reasons why we apply label propagation only to (a) and not to (b). 
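For illustration, here is a rough sketch of how the propagated positive targets for the batched region-word contrastive loss could be constructed. This is a hypothetical simplification based on the description above, not GLIPv2's actual implementation; the tensor names and the -1 convention for grounding-type entries are our assumptions:

```python
import torch

def propagate_detection_positives(region_cats, word_cats):
    """region_cats (R,) and word_cats (W,) hold detection category ids pooled
    across the batch, with -1 marking regions/words that come from grounding-type
    (free-form) data. A region-word pair is marked positive when both sides carry
    the same valid detection category; grounding-type entries keep only their
    original within-image positives, which are added separately."""
    r = region_cats.view(-1, 1)           # (R, 1)
    w = word_cats.view(1, -1)             # (1, W)
    return ((r == w) & (r >= 0)).float()  # (R, W) propagated positive targets

# Example: regions labeled [person, dog, <grounding>] vs. words [person, person, cat].
targets = propagate_detection_positives(torch.tensor([0, 5, -1]),
                                        torch.tensor([0, 0, 7]))
# The "person" region is positive with both "person" words, even across images.
```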
\n", " This paper proposes a GLIPv2 model trained on vision-language grounding, where localization and vision-language understanding tasks are reframed through the lens of grounding to have a unified model. Additionally, a new inter image-text token loss is introduced which provides some performance gains. Strengths\n- The losses and unified framework introduced in the paper are sound and an important direction for vision-language work. It is ideal to have a single model capable of performing both grounding and understanding type tasks, and making the vision-language understanding tasks more grounded for interpretability and improved performance.\n- The figures are helpful, well-made illustrations and there are extensive experiments for the paper’s method to be validated.\n- The method does outperform its predecessor GLIP consistently, albeit small gains. But it does have large performance improvements on VL understanding tasks compared to prior work MDETR and others (e.g., PhraseCut, VQA)\n\nWeaknesses\n- While there are some performance improvements over GLIP, I don’t see significant method changes other than the inter loss. This loss does not seem to contribute that much (1-2 point improvement compared to the ablations without the loss), and the performance improvement compared to GLIP is quite small.\n Questions\n- On L82-84 authors write “Many VL models (e.g., BUTD) (2; 58) rely on a pre-trained localization model as their visual encoder; the downside is the pro-longed “localization->VLP” pre-training pipeline (41; 48; 13; 47; 34; 32; 60; 37; 35). In contrast, GLIPv2 simplifies the pre-training pipeline and enables grounded VL understanding for better interpretability.\" Doesn’t GLIPv2 use image tags/bbox labels from a pre-trained detector too? Again, the paper states “ triplet format (Img, Text, T)” data inputs where T are (box-label) annotations, so don’t we still start with localization to some extent?\n- L183-185 “We do not propagate positives to grounding-type texts (natural sentences) because phrases in sentences carry contexts that are unique to that image-sentence pair.” Out of curiosity, did you try to include these as positives? If so, what happened? Seems like a thoughtful design choice.\n\nStyle and writing comments\n- The writing grammar is not consistently correct throughout the paper.\n- L56 “world” → word\n- L78 bold section should probably be on a new line. Same with L93?\n- L93: arriving *at* a…\n- L128: “classier” → “classifier”\n- L179: “easy” → “easily”\n- L229: Figure reference is missing.\n- L206: One set of weight*s*, L207 task*s* The authors did not include limitations or negative societal impact discussion. ", " This work proposes a new VL grounding framework called GLIPv2, which unifies several localization and VL understanding tasks in the same unified interface. It shows that doing pretraining on localization data + image-text pairs with this setup improves downstream model performance on all tasks and achieves SOTA performance on most of them. It introduces an inter-sample contrastive loss which improves performance. 
### Strengths\n- Proposed work improves SOTA performance on a variety of localization and vision-language understanding tasks\n- The paper is well written and structured\n- They present both finetuning with individual task-specific heads as well as prompt tuning and using the same weights for different tasks for zero-shot\n- The authors do a thorough ablation on all the different combinations of the pretraining objectives and dataset combinations\n\n\n\n### Weaknesses\n- I am concerned about the novelty, as the additional loss term is somewhat similar to the well-known region-word loss applied over the full batch in multiple works, and it is the same setup as GLIP but showing performance on new vision+language tasks. Although this work shows better performance compared to previous works, I wonder if this is also due to additional data used for pretraining, which is the localized data generated by GLIP. \n- It seems there is additional localization data generated using the GLIP model that is used for training. If so, are the GLIP and GLIPv2 models trained in a teacher/student setup? In fact, how does a GLIP model perform if retrained on this additional data (kind of like self-training)? Are the GLIP baselines in Table 1 using this additional localization data? If not, then the comparison is a bit unfair.\n- It is unclear to me what the authors are trying to say in Lines 184-185. I understand the explanation of why the inter-image region-word loss is different from CLIP, but the methodology is not very clear.\n- Table 2: if there is train-test overlap, I would suggest the authors remove those results. Authors should have been careful about this while preparing the training dataset to deduplicate the train set against any downstream test-validation sets. - Table 4, Row 6: is the MLM objective trained alongside other objectives? Or is there another step of pretraining here? Appendix section 4 mentions that Row 6 has an additional stage of training without MLM. I think it will be beneficial to add more analysis behind the intuition of a second stage of pretraining.\n- How does the training time get impacted by the newly introduced loss? \n- Since the authors discuss how the loss differs from CLIP, did they also compare on vision classification tasks? Yes, they have, but I would encourage them to discuss in detail any potential negative societal impact of their work.", " This paper presents a new V&L model, GLIPv2, that builds upon the original GLIP v1 model centered around multimodal pretraining. GLIPv2 aims to unify visual localization tasks (e.g., object detection) and V&L understanding tasks like visual question answering. To do so, three distinct pretraining tasks are used: 1) phrase grounding, where the model computes the alignment between image regions and tokens, 2) region-word contrastive learning, and 3) the standard masked language modeling task. Notably, task-specific classification heads are not used for pretraining. In finetuning on downstream tasks, GLIPv2 performs competitively. The model can also be used in zero-shot and prompt-tuned settings. **Strengths**\n\n- Similar to CLIP, GLIPv2 can perform open-vocabulary tasks because of the classification-to-matching trick that computes the dot product between the fused visual and linguistic representations. This means the model is particularly adaptable compared to many other V&L models because it can handle new and out-of-domain visual classes.\n- The zero-shot and prompt-tuning experiments have impressive results and are, in and of themselves, exciting to see as tasks for a large V&L model. 
The ability to evaluate models on downstream tasks with either little or no parameter updates and remain comparable to full finetuning shows the model has learned a lot during pretraining.\n\n**Weaknesses**\n\n- GLIPv2 uses GLIP (v1) to generate bounding boxes for the unlabeled (image, text) pairs in the pretraining data versus just an off-the-shelf object detector that has not had linguistic supervision. This also clouds the data and pretraining approach a bit given how similar GLIPv2 is to GLIPv1.\n\n- The text transformer appears to use the text transformer from CLIP including its pretrained weights (let me know if I'm misunderstanding). If it does use those weights, this skews these results because CLIP's text transformer already had visual supervision from the contrastive pretraining.\n *Questions*\n\n- Mentioned in the previous section as well, is the text transformer from a pretrained CLIP model (L239)?\n- What are the authors' thoughts about requiring another large, multimodal model to generate pretraining data?\n\n*Suggestions*\n\n- About half of the prior work section provides details about GLIPv2; space can be saved by compressing some of the details of this paper. For instance, L75-78, which is about half of the localization models subsection, describe the novel contributions of this work instead of prior work and how GLIPv2 differs.\n- This sentence (“See GLIP (36) for more details”) is essentially repeated twice in Section 3.1; one can be removed to save space and for readability.\n\n*Typos*\n\n- L103: output -> outputs\n- caption Figure 2: compute -> computes\n- L118: extract -> extracts\n- L128: classier -> classifier\n- L139: tyeps -> types\n- L206: one set of weight -> one set of weights\n- L207: task -> tasks\n- L210: keep -> keeps\n- L229: broken link to figure\n - Section 5 is titled “Conclusion and Social Impacts” without a description of social impacts.\n\n- Any biases learned by GLIP (v1) – for instance, having significantly lower detection accuracy for certain demographics of people in images or performing worse for images where people do not fit in the gendered roles of the training data – propagate to the pretraining data for GLIPv2 as well. This is compared to detection datasets that were hand-annotated. It’s a component of using large-scale data, and particularly using large-scale data labeled by another ML model, that the authors should address in their limitations section." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "PSr22ue1bHo", "NmGPYCze6iB", "7tSku1rn58a", "nips_2022_wiBEFdAvl8L", "cTZCcEuHSIE", "6VEWeQvTtp", "5ElAx_bDHgL", "Ll2znDYLTLm", "nips_2022_wiBEFdAvl8L", "nips_2022_wiBEFdAvl8L", "nips_2022_wiBEFdAvl8L" ]
nips_2022_08Yk-n5l2Al
Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding
We present Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Our key discovery is that generic large language models (e.g., T5), pretrained on text-only corpora, are surprisingly effective at encoding text for image synthesis: increasing the size of the language model in Imagen boosts both sample fidelity and image-text alignment much more than increasing the size of the image diffusion model. Imagen achieves a new state-of-the-art FID score of 7.27 on the COCO dataset, without ever training on COCO, and human raters find Imagen samples to be on par with the COCO data itself in image-text alignment. To assess text-to-image models in greater depth, we introduce DrawBench, a comprehensive and challenging benchmark for text-to-image models. With DrawBench, we compare Imagen with recent methods including VQ-GAN+CLIP, Latent Diffusion Models, and DALL-E 2, and find that human raters prefer Imagen over other models in side-by-side comparisons, both in terms of sample quality and image-text alignment.
Accept
This paper proposes Imagen, which uses large transformer language models and diffusion models for text-to-image generation. The major finding is that using large language models pretrained only on text data as text encoders is effective. Dynamic thresholding and an Efficient U-Net architecture are proposed to improve the training effectiveness and efficiency of the diffusion model. It received scores of 5, 7, and 8. All the reviewers agree that the image generation results are impressive, and the zero-shot results on COCO are strong. This paper also proposes a new benchmark for comprehensively evaluating text-to-image tasks. On the other hand, Reviewer 9tx5 pointed out that one major concern is that the novelty is quite limited. Overall, the AC thinks that the paper presented impressive results and has great significance; therefore, the AC would like to recommend acceptance of the paper.
train
[ "AJACV6zQL--", "OgTNkT8GesN", "fc0GBiwGFdA", "ChSsMWIrEU", "7O7bFM0O8K1", "mJLdxx1Rtj6", "BapW13bEVBU", "sh_fnD7Jdx6S", "dZzP9BPUpki", "y_Zz3qo1oYm", "qr7gHw8LDvd" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The rebuttal addressed some of my concerns, and I like the results shown in this paper. I have raised the score.\n\nHowever, I am not convinced by the author's response that the proposed idea is novel.\n\nFor example, my question was: `Could the authors explain, except using the massive data and large models, what is the technical contribution of this paper and what is the novelty?`\n\nThe response is: `Our work is a novel combination of frozen large language models and cascaded diffusion models, in addition to a novel sampling technique (dynamic thresholding) and novel architecture modifications (efficient U-Net).`\n\nThe combination of a frozen language model and diffusion models is not novel. Many works have used such a combination before, such as GLIDE. The sampling technique is more like an implementation trick. ", " We apologize if the message was not clearly stated in the paper. Through the comparison between text-only language models and image-text models as text encoders, we wanted to emphasize that at the current state of the community, where language models are much bigger than image-text models and are trained on much larger datasets, they may be a more promising class of models to be used as an off-the-shelf text encoder.\nWe agree with the reviewer that a comparison between text-only and image-text encoders over various amounts of data would be interesting. The ideal chart would show tradeoff FID-CLIP curves for various encoders trained on different amounts of data. We leave this to future work.", " The paper describes a text-to-image diffusion model and a benchmark for assessment of such models. These models rely on LLMs for image generation, often a significant source of ethics issues already well documented by other researchers. The authors have chosen to not release this model publicly due to the potential for unintentional harm, abuse, and intentional misuse. The paper includes a strong description of the many different types of biases/harms that may be inherited based on training data and other factors. The description goes into detail on the limitations of not only usage/deployment of this model for user-facing applications, but also acknowledges the constraints of LLMs more broadly. The authors might describe what criteria need to be met if a model similar to this one would be released publicly in the future. The authors might also want to consult https://openai.com/blog/best-practices-for-deploying-language-models/ if they do seek to release the model at some point. ", " Thanks for the author's response. It is indeed true that text-only data is always more accessible than paired image-text data. It is clear that Imagen benefits from utilizing large-scale text-only data. But I suggest the authors clearly emphasize the technical findings of this paper and avoid guiding the community to incorrect conclusions. To be more specific, suppose we are restricted to the same amount of text data as CLIP, for which paired image data is available: should we train text-to-image generative models in the CLIP fashion or in the Imagen fashion?", " I thank the authors for their responses. I am fairly content with them.\nIt would be good if they include the missing references in their final version.", " We thank the reviewer for their valuable feedback. We address key comments below.\n\n> The novelty is limited. The method is a simple combination of transformer language models and diffusion models.\n\nWe agree that many of the design choices we make are not new. 
Our work is a novel combination of frozen large language models and cascaded diffusion models, in addition to a novel sampling technique (dynamic thresholding) and novel architecture modifications (efficient U-Net). The novelty of our work is not merely in the design choices themselves, but also in the systematic empirical investigation of various design choices (e.g., the choice of CLIP or T5 text encoder, the impact of the size of the text encoder and the diffusion models). The magnitude of the improvement we obtain over previous work indicates that our empirical findings are new to the research community.\nWe agree that our method is simple, but we consider it a strength when simple techniques are able to advance the state of the art. We do not require latent variables, image quantization, or similar complexity to achieve high fidelity and strong image-text alignment. \n\n> Using large models and massive data always seems helpful for better results and zero-shot generalization. This has become an obvious \"fact\" based on recent papers, such as DALLE [1], and CLIP[2] ... \n\nWe show detailed ablations that demonstrate the impact of scaling the size of the text encoder over the U-Net model. We believe it is not an obvious fact that scaling the size of the text encoder has more impact than scaling the size of the U-Net model, and this is a helpful result for the research community. Furthermore, DALL-E [1] is a 12 billion parameter model, while Imagen is a 3B parameter model that achieves significantly better performance. This shows that scaling is not the only important ingredient for better performance. Fundamental choices such as the choice of generative model, the choice of text encoder, the method of text conditioning, etc., have a significant impact on performance. \n\n> The results are impressive, but many text-conditioned image generation works, such as DALLE-2, Parti (just came out), also have good performance ... \n\nDALL-E 2 is concurrent work. Parti was released on arXiv after the NeurIPS submission deadline. We included a significant comparison with DALL-E 2, but this was not required.\nBy comparison, Imagen uses ~2x fewer trainable parameters than DALL-E 2, while achieving better performance on two benchmarks (better MS-COCO FID and better human evaluation on DrawBench). This shows that simply scaling model size may not be the best way to obtain good performance for text-to-image generation, and technical details are important. Similarly, Parti uses 6x more trainable parameters (20B) to obtain decent text-to-image synthesis, suggesting that our approach is much more parameter efficient. In addition, upon close inspection of Parti results, one can observe blurry image outputs, which are less desirable than the output of diffusion models such as Imagen and DALL-E 2.\n\n> Could the authors explain, except using the massive data and large models, what is the technical contribution of this paper and what is the novelty?\n\nOur work is a novel combination of frozen large language models and cascaded diffusion models, in addition to a novel sampling technique (dynamic thresholding) and novel architecture modifications (efficient U-Net).\n\n> What makes it different from DALLE-2 and how do those differences influence the model's performance and use cases?\n\nDALL-E 2 is concurrent work and should not be used to minimize our contributions.\n\nRegardless, there are several important differences between Imagen and DALL-E 2:\n* Unlike DALL-E 2, Imagen does not use a latent prior model. 
DALL-E 2 requires first training a CLIP model to define the latent space. By contrast, our approach is much simpler and more powerful. We outperform DALL-E 2 using approximately half as many trainable parameters.\n* Imagen uses a pretrained frozen language model as a text encoder, while DALL-E 2 uses a CLIP text encoder + another text encoder learned from scratch. We show that using CLIP as a text encoder may harm some capabilities of the model (such as binding attributes to objects, text rendering, etc.). Consequently, Imagen uses a large language model (T5-XXL) as a text encoder, which allows deeper language understanding and several compositional capabilities.\n* DALL-E 2 uses relatively small guidance weights, while Imagen introduces dynamic thresholding, which makes it possible to use significantly higher guidance weights that enable better text alignment than DALL-E 2 while maintaining a high degree of photorealism. \n\n> Will the authors release their code or data for supporting research in this area?\n\nWe refer the reviewer to our Limitations and Societal Impact section regarding our considerations for not releasing the code. A major part of Imagen’s training data is LAION-400M, which is public. \n", " We thank the reviewer for their valuable feedback. We address key comments below.\n\n> [1] showed that scaling the data and the model are important factors in improving performance of zero-shot models. The authors show that scaling the model size in terms of the text encoder is indeed important for improved performance; however, it might be worthwhile to test how well scaling laws hold in the data dimension. Along the same lines, Table 1 of the paper compares zero-shot models with different model sizes and training data. For a fairer comparison of how well the architectural choice is doing, it might be good to compare with an equivalent version of Imagen in terms of model parameters and/or training dataset size.\n\nWe agree with the reviewer that analyzing scaling laws w.r.t. the training dataset size is an interesting avenue to explore. We leave this to future work.\n\n> Interestingly, the paper shows that Imagen achieves a lower preference rate while generating people when compared to the set with no people. It is not clear if the model is finding it difficult to draw people because it has seen a lot of images without people as compared to the images with people in the training dataset, or is it ‘inherently’ difficult to create photorealistic people with this architecture? How does Imagen rate when compared to the existing text-to-image models on the quality of images with and without people?\n\nPrior work [SR3] has shown diffusion models to be capable of generating photorealistic human faces. However, we believe the dip in performance for images with people comes from 1) limited training data with people, 2) the complex structure of a human face and body (such as hands), and 3) our ability to quickly spot imperfections. We believe this issue can be resolved by re-weighting the training data to over-represent faces. \n\nIt will be difficult to compare Imagen to other text → image models on human generation due to limitations (e.g., DALL-E 2 API ToS limits human generation).\n\n[SR3] Image Super-Resolution via Iterative Refinement, 2022.\n", " We thank the reviewer for their valuable feedback. We address key comments below.\n\n> The technical contributions of this paper are limited. The proposed dynamic thresholding heuristic and U-Net architecture are somehow technically incremental. However, dynamic thresholding seems to be very effective for training the diffusion model. If some theoretical justification can be included, I will increase my rating.\n\nWe emphasize that improvements such as dynamic thresholding and the efficient U-Net architecture are critical components to making Imagen work. Furthermore, such general techniques for improving diffusion samplers and making the neural net architectures efficient can be helpful for many other conditional and unconditional diffusion models.\nWe also emphasize other technical contributions, such as a detailed analysis of different types of text encoders for text-to-image generation, scaling laws for text encoders and U-Net architectures, and a new benchmark for evaluating text-to-image models. 
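To make the dynamic thresholding point concrete, the following is a simplified sketch of the sampler step (a paraphrase with illustrative variable names, not our released code):

```python
import numpy as np

def dynamic_threshold(x0, p=0.995):
    """At each sampling step, clip the predicted clean image x0 to [-s, s] and
    rescale by s, where s is the p-th percentile of |x0| per sample (at least 1).
    This keeps large classifier-free guidance weights from pushing pixel values
    far outside the training range, avoiding over-saturated, unnatural images."""
    s = np.percentile(np.abs(x0), p * 100, axis=tuple(range(1, x0.ndim)), keepdims=True)
    s = np.maximum(s, 1.0)         # falls back to static [-1, 1] clipping when in range
    return np.clip(x0, -s, s) / s  # saturated pixels are pulled inward

x0 = 3.0 * np.random.randn(2, 64, 64, 3)  # e.g., an over-saturated guided prediction
assert np.abs(dynamic_threshold(x0)).max() <= 1.0
```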
\n\n> It is not clear if we should pursue the text language encoding models or paired image-text models if we have enough image and text data. In other words, if we have enough paired image-text data in the future, can the paired image-text encoding models outperform the text language encoding models?\n\nIt is difficult to procure paired “image and text data” and we will always have more text-only data compared to paired image-text data. Additionally, even if there is comparable data, it is difficult to scale the size of the text encoder on an image-text model (e.g., CLIP) -- this is due to the memory consumption of the image tower.\n\n", " This paper proposes Imagen, which is a text-to-image diffusion model. The major finding of this work is that using large language models pretrained only on text data as text encoders is effective for text-to-image generation and can benefit from the scaling power of language models. Dynamic thresholding and an Efficient U-Net architecture are proposed to improve the training effectiveness and efficiency of the diffusion model. SOTA experimental results are achieved on COCO FID and the proposed DrawBench benchmark. Strengths:\n\n* The paper is well written and presented.\n* The finding that large language models pretrained only on text data, instead of on paired image-text data, are effective text encoders is insightful.\n* The experimental results in generating photorealistic images with good text alignment are impressive. \n\nWeaknesses:\n* The technical contributions of this paper are limited. The proposed dynamic thresholding heuristic and U-Net architecture are somehow technically incremental. However, dynamic thresholding seems to be very effective for training the diffusion model. If some theoretical justification can be included, I will increase my rating.\n\n* It is not clear if we should pursue the text language encoding models or paired image-text models if we have enough image and text data. In other words, if we have enough paired image-text data in the future, can the paired image-text encoding models outperform the text language encoding models? Please refer to the weaknesses for my questions. The limitations are well-addressed.", " Summary:\n\nThe paper proposes a text-to-image diffusion model (Imagen) that generates high-fidelity images that accurately match the prompts. The work uses powerful language models (T5-XXL) trained on text corpora, which eventually aids improved language understanding of the final model. Additionally, the paper makes modifications to the existing diffusion models by having dynamic thresholding along with some architectural changes to the U-Net to make it more efficient. 
With this work, Imagen becomes the state-of-the-art on MS-COCO based on FID scores, surpassing even the models that are trained on it. The authors further introduce a benchmark, DrawBench, to evaluate the quality and accuracy of the image synthesis for a battery of challenging prompts.\n\n\nStrengths:\n- The paper makes an important contribution by using models trained on surplus text corpora (unpaired data) rather than learning from purely paired image-text data. Their experiments also confirm that scaling the pretrained text encoders improves the image generations.\n- The Imagen model is the first of its kind to achieve such unprecedented zero-shot results on MS-COCO on the FID metric – surpassing all the existing models that are directly trained on the dataset. \n- The paper invents a plethora of ‘tricks’ that facilitate image generation. These include the Efficient U-Net, dynamic thresholding, text cross-attention layers in the super-resolution model, and noise conditioning augmentation. It will be interesting to see if some of these tricks (or principles) can be used for improved generative modeling beyond the scope of the paper.\n- The paper introduces DrawBench, which aids in evaluating the quality of the model generations across various dimensions. It provides us with a way to systematically compare complementary text-to-image generative models along these dimensions. \n- It was nice to see that the paper uses human evaluations to compare Imagen with existing models using fidelity and alignment scores. In my experience, CLIPScore might not be a very reliable metric to judge the image-text alignment every time. \n\nSuggestions and Weaknesses:\n\n- [1] showed that scaling the data and the model are important factors in improving performance of zero-shot models. The authors show that scaling the model size in terms of the text encoder is indeed important for improved performance; however, it might be worthwhile to test how well scaling laws hold in the data dimension. Along the same lines, Table 1 of the paper compares zero-shot models with different model sizes and training data. For a fairer comparison of how well the architectural choice is doing, it might be good to compare with an equivalent version of Imagen in terms of model parameters and/or training dataset size.\n- Interestingly, the paper shows that Imagen achieves a lower preference rate while generating people when compared to the set with no people. It is not clear if the model is finding it difficult to draw people because it has seen a lot of images without people as compared to the images with people in the training dataset, or is it ‘inherently’ difficult to create photorealistic people with this architecture? How does Imagen rate when compared to the existing text-to-image models on the quality of images with and without people? \n\n\nTypos/Edits:\n- Figure 4’s y-axis labels are inconsistent: FID-10K vs FID@10K\n- Line 338 - wrote ‘Imagen’ two times\n\nMissing references:\n- Technically, DALLE-mini also leverages the power of a BART encoder pretrained on text-only corpora. 
The authors should include it in their related work and discuss it a bit.\n\n*References*\n\n[1] Combined Scaling for Open-Vocabulary Image Classification: https://arxiv.org/pdf/2111.10050.pdf\n\n[2] DALLE-mini: https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-Mini-Explained--Vmlldzo4NjIxODA#our-dall-e-model-architecture\n Mentioned in the main review Mentioned in the main review Mentioned in the main review", " This paper uses large transformer language models and diffusion models for text-conditioned image generation. The image generation results are impressive, achieving state-of-the-art FID results on COCO. This paper uses a dynamic thresholding technique to sample images. This technique achieves better results based on the claims. This paper also builds a new benchmark, DrawBench, for text-to-image tasks. (+) This paper combines transformer language models and diffusion models for image generation. Even though the techniques are not novel, the image generation results are impressive. \n\n(+) This paper observes that using larger transformer language models improves the image generation results significantly.\n\n(+) This paper proposes a new benchmark for evaluating text-to-image tasks.\n\n(-) The novelty is limited. The method is a simple combination of transformer language models and diffusion models.\n\n(-) Using large models and massive data always seems helpful for better results and zero-shot generalization. This has become an obvious \"fact\" based on recent papers, such as CLIP [1] and DALLE-2 [2]. However, purely having good results without a solid methodology contribution is hard to accept.\n\n[1] Learning Transferable Visual Models From Natural Language Supervision \n[2] Hierarchical Text-Conditional Image Generation with CLIP Latents\n\n The results are impressive, but many text-conditioned image generation works, such as DALLE-2 and Parti (which just came out), also have good performance. Most of these models use massive data and large models. So it seems like the improvements come from the usage of massive data and large models, which are usually not released. \n\nThis paper is a combination of existing techniques, such as language models and diffusion models. \nCould the authors explain, apart from using massive data and large models, what the technical contribution of this paper is and what the novelty is?\n\nThis paper would have a much greater impact if it could distinguish itself from other recent/concurrent works. What makes it different from DALLE-2, and how do those differences influence the model's performance and use cases?\n\nWill the authors release their code or data to support research in this area?\n\nIs it OK to have such a large figure on the second page? It seems not to fit the NeurIPS template. The paper discusses its limitations and potential negative societal impact.\n\n(-) This paper has solid results to show the effectiveness of the proposed method. However, the good results most likely come from using large data and large models. This paper simply combines transformer language models and diffusion models, which is not novel. The dynamic thresholding technique is more likely a trick for training models.\n\n(-) This is a good paper with impressive results, but it seems the novelty is limited and not a good match for NeurIPS.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "mJLdxx1Rtj6", "ChSsMWIrEU", "nips_2022_08Yk-n5l2Al", "sh_fnD7Jdx6S", "BapW13bEVBU", "qr7gHw8LDvd", "y_Zz3qo1oYm", "dZzP9BPUpki", "nips_2022_08Yk-n5l2Al", "nips_2022_08Yk-n5l2Al", "nips_2022_08Yk-n5l2Al" ]
nips_2022_wKd2XtSRsjl
Mutual Information Divergence: A Unified Metric for Multimodal Generative Models
Text-to-image generation and image captioning have recently emerged as a new experimental paradigm to assess machine intelligence. They predict a continuous quantity accompanied by sampling techniques in the generation, making evaluation complicated and marginal distributions intractable to obtain. Based on a recent trend in which multimodal generative evaluations exploit a vision-and-language pre-trained model, we propose the negative Gaussian cross-mutual information using the CLIP features as a unified metric, coined Mutual Information Divergence (MID). To validate, we extensively compare it with competing metrics using carefully-generated or human-annotated judgments in text-to-image generation and image captioning tasks. The proposed MID significantly outperforms the competing methods by having consistency across benchmarks, sample parsimony, and robustness toward the exploited CLIP model. We look forward to seeing the underrepresented implications of the Gaussian cross-mutual information in multimodal representation learning and future work based on this novel proposition.
Accept
The paper studies evaluation metrics for multimodal generation models. The authors propose a method, MID, based on estimating the mutual information of visual and text embeddings at the sample and distribution levels. In the experiments, MID correlates with human evaluation on multiple tasks (text-to-image and image captioning). The authors provide theoretical intuition and analysis of MID and its relation to other divergence scores. Experiments are solid and convincing. The reliance on CLIP is discussed, though multimodal encoders other than CLIP are not evaluated in the experiments. The author discussion with reviewers is helpful for better understanding the paper. Overall, it is a solid paper with a clearly described, simple, and effective method.
val
[ "koWWaPwRv5e", "xM0lScaUkcb", "-jn2Ex_eCeb", "Q00fAUjt4Cb", "yootlWiiFSp", "yt6S6ezXu0w", "E2ttyYYCFcF", "C9IlKnmEfH", "KskACgyUx2g", "DAxFk28Yff", "xvsw8z5WRyD", "_oJvIlmFgNt", "QrI68_w7YN", "aJ_QBYMN52C", "NkN4kZD9ZdF" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Most of my concerns have been addressed, and I will raise my vote to \"weak accept\".", " The author's response and revisions largely address my concerns. I think CLIP-reliance could still be an issue where a CLIP-like model is assessed, but given the array of potential applications of the method, it is okay to just address this as a limitation. ", " Politely, we remind you that the discussion phase is ending soon. We want to know if the raised concerns may be resolved after the following author's feedback. We have carefully read the reviewers' thoughtful comments and encouraging suggestions, and provided constructive reflections.", " The paper proposes a new metric for evaluating text to image generation and image captioning models. The effectiveness of the proposed metric is then demonstrated through extensive experiments on various datasets. The paper in its nature does not raise any ethical concerns and this is indicated by the majority of the reviewers. There are no issues and this is acknowledged in the authors' checklist. Again, there are no ethical issues here.", " **1. Potential Limitations and Biases of CLIP.** We will add the dedicated section for the limitation as follows:\n\n> **Ethical considerations.** There are multiple reports that the CLIP or the generative models based on this pretrained CLIP has problematic social biases toward racial or ethnic groups (Cho et al., 2022, Wolfe et al., 2022). Unless appropriate measures toward these biases in the CLIP, our metric is potentially subject to the risk of fairness, accountability, and transparency. \n\n> **CLIP-reliant.** Do the CLIP-based models take advantage of overfitting to the metric comparing other models? Sec 4.1.1 “Inspecting possible over-fitting with the CLIP features” partly answers that. Multiple versions of the pretrained CLIP vary the architectural type of visual backbone networks. We found that the normalized scores are generally consistent across generative models and the backbones, especially for our metric (see Fig. 3). However, it is worth noting that we encourage to use of the feature extractor (e.g., CLIP) with the training loss maximizing mutual information of two modalities (See the last paragraph of Sec 3.1; L125-129). Since we expect efficient multimodal representation learning helps for accurate metrics for multimodal alignments.\n\n**2. If the model were scaled to 4 billion images, for example, would MID improve further?** Although it is hard to predict, we observed that the ViT-L/14 consistently outperforms ViT-B/32, expecting there is a room for scaliability, so the increase of data size may result in substantial performance gain.\n", " **1. Relation Analysis to FID and CLIP-Score.** We believe it is worth including in the main paper. We consider the following paragraph for Sec 3.4 (new):\n\n> The Fréchet inception distance (FID) assesses the perceptual quality of fake images from a generative model (Heusel et al., 2017). It is a symmetric distance where two differences between two means and two variances are independently considered and summed. While our method is an asymmetric divergence (Eqn.7), the distances of means and variances are rescaled to have unit variance as in the Mahalanobis distance (Eqn.6), which is the source of domain robustness (Fig.1 Right). Note that the CLIP-S is a cosine similarity distance using CLIP features. 
> While our method is an asymmetric divergence (Eqn.7), the distances of means and variances are rescaled to have unit variance as in the Mahalanobis distance (Eqn.6), which is the source of domain robustness (Fig.1 Right). Note that CLIP-S is a cosine similarity distance using CLIP features. The CLIP features are trained to minimize the InfoNCE loss by approximately increasing mutual information; however, our experiment shows that this approximation significantly degrades the evaluation performance.\n\n**2. Diversity.** This is a good point: the metric might neglect the diversity of generations. In short, our method does not entirely neglect diversity, since it considers the covariance matrices from real and fake samples, similarly to the FID. However, it is possible that a few unique fake samples (e.g., under mode collapse) may have a smaller error in the covariance matrix.\n", " As *Reviewer Ezue* noticed, our metric does not rely on the input or output modality or the choice of backbone networks. Both modalities can be visual or textual for a given context, since the cross-mutual information is based on the defined probability distributions. However, a pretrained model at CLIP scale (large data, high compute) may not be available for video or audio data, so the applications might be limited in those scenarios.\n", " **1. Gaussian Assumption.** After l2-normalization of the CLIP features, these features become unit vectors lying on the hyperspherical surface. In a preliminary experiment, we empirically found that the feature samples projected on the 1st to 3rd eigenvectors of the covariance matrix are (near-)Gaussian for both visual and textual features. We conjecture that the features follow a Gaussian distribution in a sufficiently local area. One may think of a tangent space to calibrate the points far from the center; however, these outliers may not need such extra work for accurate evaluation in our scenario.
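\n\nFor reference, the closed form this assumption buys is the standard Gaussian identity (our shorthand here): for jointly Gaussian $X$ and $Y$ with marginal covariances $\Sigma_X, \Sigma_Y$ and joint covariance $\Sigma$, the mutual information is $I(X;Y) = \frac{1}{2}\log\frac{\det\Sigma_X\,\det\Sigma_Y}{\det\Sigma}$, so the estimate reduces to log-determinants of empirical covariance matrices.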
\n\n**2. Domain Generalization.** We understand the “considering image/text mismatched pre-training scenario\" as asking about the domain generalization capability of MID. Since we measure multimodal alignment by relatively measuring the divergence from the pairs of real images and texts, we believe our metric would retain the majority of its performance; but we do not provide empirical validation, which is a limitation of the current work. Specifically, different languages or a drastic change of image domain would result in underperformance, but this may be shared among the CLIP-based metrics, e.g., CLIP-S. Notably, since our method is not tied to particular pre-trained backbone models, we could trivially employ domain-specific backbones or state-of-the-art few-shot learning techniques for low-resource scenarios.\n\n**3. More Models for Evaluation.** We refer to Table 9 in the Appendix, where we include the MID reports compared with the other metrics and various generative models, i.e., GLIDE, AttnGAN, DM-GAN, OP-GAN, DF-GAN, VQ-Diffusion, and LAFITE. In Sec 4.1, L225-226 points to this table, for your information.\n", " **1. How reliant are the empirical results on CLIP?** We believe our method is not strongly coupled with CLIP; however, we still need good multimodal features. Let us elaborate. In Sec 4.1.1, “Inspecting possible over-fitting with the CLIP features” partly explores this. CLIP consists of visual and textual encoders and the InfoNCE loss maximizing the mutual information of multimodal features. Multiple versions of the pretrained CLIP vary the architectural type of the visual backbone networks. We found that the normalized scores are generally consistent across generative models and the backbones, especially for our metric (see Fig. 3). Although we did not explore the textual encoders, we do not expect largely different results. However, it is worth noting that we encourage the use of two feature extractors with multimodal alignment for accurate metrics (see the last paragraph of Sec 3.1; L125-129). Thus, it is an interesting question whether a non-CLIP-based feature works for MID. For example, we may use unimodal encoders, e.g., the ViT pretrained on ImageNet (Dosovitskiy et al., 2020) and sentence-BERT (Reimers & Gurevych, 2019).\n\n**2. A Related Work.** Thanks for the reference. We will add VIFIDEL (Madhyastha et al., 2019) for the comparison in the image captioning task. \n\n**3. Inferior Foiled.** Our foiling technique is based on Shekhar et al. (2017; see https://foilunitn.github.io). They exploit 73 out of 91 MS-COCO categories, excluding multi-word expressions (e.g., traffic light). Since the MS-COCO categories are distinctive, we believe foiling *severely* hurts the content of the captions. For this reason, we do not think the images generated using the foiled captions can be of any better quality than those using the original captions when we match the generated images with the original captions. In this way, we intend that the foiled fake images are deliberately mismatched with the corresponding original captions. Please see Sec 4.1, L164 (*“the generated images”* includes the foiled fakes along with the fake images).\n\n**4. Ethical considerations.** We acknowledge the raised issue that the pretrained CLIP potentially induces risks regarding fairness, accountability, and transparency. We plan to include a dedicated paragraph in the main paper. For details, please refer to the comment under *Reviewer wkz2*.\n", " Only one reviewer flagged an ethical issue, and they didn't elaborate on their concern. This is a technical paper on a well-established task (image captioning), and while I guess it would be possible to contrive a setting where such captions produced harm, the same is true for any task. I don't see any ethical issues with this paper. Not applicable. Review this paper as-is", " This paper presents an automated metric based on mutual information divergence for multimodal generative models. The method exploits a Gaussian mutual information framework and cross-mutual information. It uses the image and text encoders of CLIP to compute the mutual information divergence. The method is theoretically well motivated and presents promising empirical results. Strengths: \n1. The metric is theoretically well motivated and is consistent across datasets and domains. \n2. The proposal in the paper correlates well with human judgement across several datasets. \n3. Thorough comparison to previous work. \n\n\nPotential concerns: \nThe results currently do not establish whether this metric is absolute: it is not clear how reliant the empirical results are on CLIP. What if there was no CLIP? \nMissing reference: VIFIDEL: Evaluating the Visual Fidelity of Image Descriptions by Madhyastha et al (ACL 2019) \n In section 4.1: there is an assumption being made where it says that fake images generated by 'foiled' captions are inferior compared to fake images created using regular captions. How true is this assumption? FOILed captions are not necessarily \"wrong\" captions. They can indeed be plausible, and hence the generated images can be of better quality. Some validation for this would be useful. The paper does not present critical limitations of the metric (such as reliance on CLIP or lack of validity across languages) or issues relating to potential dual use of the metric. 
", " This paper introduced Mutual Information Divergence (MID), a new metric for measuring the quality of image-to-text and text-to-image generation. Similar to model-based metrics like FID, MID relied on a pre-trained backbone (CLIP) to extract features which are used for computing scores. The score is computed by measuring the point-wise MI between to two feature spaces (with Gaussian assumption). Results show that MID can better measure the quality of image-text pairs by comparing against existing metics on augmented dataset. Pros\n+ The metric is simple and straightforward (in a good way) by combining powerful multi-modal backbone that allows cross-modal relation to be considered and MI to be computed rather easily. (my only concern is the Gaussian assumption, see Questions)\n+ The most important contribution is perhaps the fact that this metric can be applied regardless of the input/output modality and the choice of backbone can potentially be flexible as well.\n+ Experiments showed that MID is better than existing metrics and aligns nicely to human evaluation.\n+ The authors considered many different aspects (such as consistency, overfitting backbone, hallucination sensitivity) of designing a new metric and conducted experiment respectively to show that MID is robust.\n\nCons\n- Only few models are considered for evaluation. It would be nice to see the results when applying MID to evaluate more models since metrics are designed to be used to compare models.\n- While the advantage over simple metrics like BLEU is clear, the disadvantage is not discussed throughout in the paper. (see limitations) The feature from the transformer-based CLIP model (layer-normed) seems to be contradicting the assumption that both X and Y are Gaussians, am I missing something or MID simply works regardless of the normalization of the input feature.\n Besides numerical limitation, there are other limitations not discussed in the paper. For example, as a trade-off for leveraging pre-trained backbone model, this metric is might not be applicable to image/text that mismatches the pre-training scenario of the backbone model. (e.g. different language, different shape of image) In other word, this metric is only feasible for high-resource problems.", " This submission proposes a new evaluation metric for visual-language generation tasks. Compared to previous metrics, the authors propose to leverage negative cross-mutual information with multivariate Gaussian distributions to calculate mutual information. The authors conduct experiments on both text-to-image generation and image captioning datasets and show the effectiveness of the proposed metrics. + The motivation is clear and proposed framework is easy to understand. Besides, the authors also released the source code to reproduce the metric calculation.\n\n+ The technical details are clearly explained with valid experiments' results support.\n\n+ The authors also conduct ablation study to discuss the metrics usage on both text-to-image and image captioning tasks on various datasets, which is much appreciated. Overall I am satisfied with the submission. There is one problem with the metric:\n\nThe title focuses on \"Multimodal Generative Models\". However this metric is limited only within visual and textual modalities. As shown in Figure 1 left, to calculate the metrics, it also needs to use CLIP model to extract embedding first. This may limit generalization of the metrics. In the experiments, the authors also focus on text-to-image and image captioning tasks. 
It would be better to show the usage of the metric on more \"Multimodal Generation\" tasks (e.g., video, sound, point clouds, etc.) to increase its scope. Please check the \"Questions\" section above. ", " This paper introduces the Mutual Information Divergence (MID) metric for multimodal generative models, where the metric attempts to measure the mutual information between conditions and generations under the Gaussian assumption. Various experiments on text-to-image generation / image captioning, and theoretical analysis, are performed to demonstrate the effectiveness and rationality of the proposed metric. It is interesting that human Likert-scale judgment is employed to validate the superiority of MID. - Strengths\n(1) This paper introduces the MID metric for multimodal generative models, which is somewhat interesting and brings more reasonable results.\n(2) The paper conducts solid experiments to validate the effectiveness of MID on both text-to-image generation and image captioning tasks, such as the generated and human Likert-scale judgment correlations, visual reasoning accuracy, Flickr8K-Expert, Flickr8K-CF, Pascal-50S, and FOIL hallucination detection. \n\n- Weaknesses\n(1) It lacks necessary analysis of the relations between MID and the existing metrics listed in Related Work. (1) Is there any analysis of the relation between MID and existing measuring metrics, such as FID, or the image-text matching score? \n(2) Intuitively, MID imposes higher alignment between image and text; does this hurt diversity? Please see questions and weaknesses.", " This paper proposes a new metric for evaluating text-to-image and image-to-text models. The metric is mutual information divergence, which uses CLIP features as a source of ground truth. The authors provide theoretical analysis of their metric and demonstrate that it outperforms other metrics on numerous evaluations of real vs. fake images and captions. Originality\nStrengths: This is a novel area of inquiry and the metric employed achieves better results than any other proposed method.\nWeaknesses: There have been methods that employed CLIP features previously, and the improvement over these is relatively small.\n\nQuality\nStrengths: Outperformance of other methods. Strong theoretical evaluation of the proposed method.\nWeaknesses: No discussion of the potential limitations and biases of CLIP.\n\nClarity\nStrengths: Visualizations are easy to understand. Intro and mathematical descriptions are well presented.\nWeaknesses: I found the paper hard to follow at times, especially as it moves into evaluation and discussions. \n\nSignificance\nStrengths: Outperformance of other methods. Wide variety of environments in which the method might be useful for making sense of the relationship between text and images.\nWeaknesses: Many of the best synthetic generators and image captioners now directly employ the CLIP embedding space to achieve strong results (DALL-E models; GLIDE; VQGAN-CLIP; Antarctic Captions, CLIP prefix-LM and its cousins; etc.). Won’t using the CLIP embedding space as a means of evaluating multimodal representations be a confound whenever these systems that already employ a CLIP model need to be evaluated? To what extent does CLIP’s training distribution play a role in the strong performance observed? If the model were scaled to 4 billion images, for example, would MID improve further? The paper does not have a limitations section. It could use a discussion of the biases in CLIP. See, for example, Wolfe et al. 2022 ACM FAccT; Birhane et al. 2021 arXiv." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 2, 3 ]
[ "yt6S6ezXu0w", "yootlWiiFSp", "nips_2022_wKd2XtSRsjl", "nips_2022_wKd2XtSRsjl", "NkN4kZD9ZdF", "aJ_QBYMN52C", "QrI68_w7YN", "_oJvIlmFgNt", "xvsw8z5WRyD", "nips_2022_wKd2XtSRsjl", "nips_2022_wKd2XtSRsjl", "nips_2022_wKd2XtSRsjl", "nips_2022_wKd2XtSRsjl", "nips_2022_wKd2XtSRsjl", "nips_2022_wKd2XtSRsjl" ]
nips_2022_8RKJj1YDBJT
Neural Surface Reconstruction of Dynamic Scenes with Monocular RGB-D Camera
We propose Neural-DynamicReconstruction (NDR), a template-free method to recover high-fidelity geometry and motions of a dynamic scene from a monocular RGB-D camera. In NDR, we adopt a neural implicit function for surface representation and rendering such that the captured color and depth can be fully utilized to jointly optimize the surface and deformations. To represent and constrain the non-rigid deformations, we propose a novel neural invertible deforming network such that the cycle consistency between any two frames is automatically satisfied. Considering that the surface topology of a dynamic scene might change over time, we employ a topology-aware strategy to construct the topology-variant correspondence for the fused frames. NDR also refines the camera poses in a global optimization manner. Experiments on public datasets and our collected dataset demonstrate that NDR outperforms existing monocular dynamic reconstruction methods.
Accept
This paper received consistently positive reviews from all reviewers, and the weaknesses that were expressed were addressed coherently by the authors. I recommend this paper be accepted.
train
[ "KhgpaKdsgR", "TRrKICKJp5", "-tzpDb1mkRG", "CjATPCB5gNB", "Jio87ct4NhX", "Q5DSgGmSKZW", "jSO1LeE3ojK", "J-QYDerZ9hK", "4B3aiP76Oyj", "773pF6PMVdj", "XA7eSlEmvBk", "pgHMs_GX8dg", "bkvyjmr7ex7", "YzdM1Cu4YEQ", "o-S3mRtToq", "aec0hGZTsmH", "m9v1Y24WmXi", "ch-IGmlL8n5", "_Lw-xdHJm0b" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer NfVp,\n\nThank you for your quick reply and review comments!\n\nBest regards, Paper2241 authors", " Thank you for addressing my concerns.\nI modified my rating.\n\nbest", " Thank you for your quick reply and review comments. For BANMo results shown in Fig.5 of the main paper:\n\n- Before submission, we have contacted the authors and set the suggested experimental parameters, such as the frequency of Fourier Encoding.\n\n- In our experiments, we find the smoothness of results relates to the range of global rigid motion of objects. The BANMo results in Fig.4 of supplementary material show that feature registration may play a more correct role as the angle of camera view changes distinctly.", " My questions are addressed adequately. This is a technically sound paper with impressive results. \n\nAn additional comment on baseline comparison (Fig 5) -- Banmo results look much less smooth than those presented in the paper, containing extruding edges on the human face and symmetric pattens on the cloth. I recommend reaching out to the authors and confirm whether this is expected.", " Dear Reviewer BjMr,\n\nThank you for your quick reply and review comments!\n\nBest regards,\nPaper2241 authors", " Dear Authors,\n\nI have no additional comments. Your rebuttal has addressed my concerns.\n\nSincerely,", " Dear Reviewer 1XXC,\n\nThank you again for your review. We hope that our rebuttal could address your questions and concerns. As the discussion phase is nearing its end, we would be grateful to hear your feedback and wondered if you might still have any concerns we could address.\n\nThank you for your time.", " Dear Reviewer NfVp,\n\nThank you again for your review. We hope that our rebuttal could address your questions and concerns. As the discussion phase is nearing its end, we would be grateful to hear your feedback and wondered if you might still have any concerns we could address.\n\nThank you for your time.", " Dear Reviewer 1SCx,\n\nThank you again for your review. We hope that our rebuttal could address your questions and concerns. As the discussion phase is nearing its end, we would be grateful to hear your feedback and wondered if you might still have any concerns we could address.\n\nThank you for your time.", " Dear Reviewer BjMr,\n\nThank you again for your review. We hope that our rebuttal could address your questions and concerns. As the discussion phase is nearing its end, we would be grateful to hear your feedback and wondered if you might still have any concerns we could address.\n\nThank you for your time.", " We thank the reviewer for thinking our method is conceptually simple and effective, obtaining high-quality results. We also thank the reviewer for thinking our bijective map design is interesting and the paper is well-written. We appreciate the reviewer for the detailed comments and constructive advice. Below are our responses to the questions.\n\n#### **Robustness on camera pose initialization**\n\n- Robust ICP may accumulate errors in long sequences with large rotation (especially more than 90 degrees). Therefore, the initial camera poses still contain a large degree of noises in these cases. However, our NDR can still achieve high-quality reconstruction result, as shown in the right example of Fig.3.\n- As reported in `All-Q2`, we add an experiment to evaluate the NDR's robustness against various degrees of noises of camera poses.\n- We will add more analyses to camera pose initialization in the revised version.\n\n#### **Writing of technical part in Sec. 
\n\n#### **Writing of the technical part in Sec. 3.1**\n\nThanks for pointing this out. The cause of the invertible property is that, after specifying a certain coordinate axis, each block predicts the movement along and rotation around the axis in turn, and the process of predicting the deformation is reversible. According to Fig.2(b), focusing on the reverse process, starting from $(u',v',w')$, each block can infer the rotation around and movement along the axis and invert them in turn to recover the original $(u,v,w)$. We will add clearer discussions and formulas in the revised version.
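\n\nAs a preview of those formulas, one block with $w$ as the designated axis can be sketched as follows (shorthand consistent with the description above, not the final notation of the paper): forward, $w' = w + \delta_w(u, v)$ and $(u', v') = R(\theta(w'))\,(u, v)$, where $\delta_w$ and $\theta$ are predicted by small networks; in reverse, given $(u', v', w')$, we first recover $(u, v) = R(\theta(w'))^{-1}(u', v')$ and then $w = w' - \delta_w(u, v)$, so invertibility holds by construction regardless of the network weights.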
\n\n#### **Property of invertible warping fields**\n\nIn fact, this design will not result in undesirable artifacts. Each block of the bijective map $F\_\textbf{h}$ only represents a motion related to a certain coordinate axis, and the entire network contains multiple blocks, decomposing motion in 3D space into motions related to multiple axes. We have further discussed the design in Sec. A and C.1 of the supplementary material. Taking a plane $w=a$ (constant) as an example, points on the surface have different predicted motions related to the $u$- and $v$-axes at least.\n\n#### **Unclear explanation of Eq.13**\n\n$\textbf{v}\_\textbf{p}$ refers to the view direction in the 3D canonical space, which is $J\_\textbf{p}(\textbf{p}\_i)\textbf{v}\_i$, determined by the view direction $\textbf{v}\_i$ in the observation space and transformed via the Jacobian matrix $J\_\textbf{p}(\textbf{p}\_i)$. The design purpose of the visible loss term is to constrain the angle between the normal vector of the sampled point on the depth map and the view direction to be larger than 90 degrees, which aims to constrain depth points to be visible surface points under the camera view. We have demonstrated and analyzed this loss in Sec. B of the supplementary material. We will modify the related part in the revised version.\n\n#### **Inapparent difference in Fig.6**\n\nThanks for pointing this out. We will add annotations on the results or replace them with other examples in Fig.2 of the supplementary material in the revised version.", " We thank the reviewer for finding our proposed bijective map reasonable and the writing good. We also thank the reviewer for the detailed comments and constructive suggestions. Below are our responses to the questions.\n\n#### **Canonical space**\n\nIn our method, we do not explicitly select one frame as the canonical frame. The geometry of each frame is represented as the canonical geometry plus the deformation of the frame. The canonical geometry and deformations are the variables to be optimized. Meanwhile, the proposed bijective map also regularizes the non-rigid motion. With the well-designed representation formulation, fitting term, and regularization, the optimization tends to drive the canonical space towards the average shape of all frames. In this way, the non-rigid deformation field can be more easily represented and learned.\n\n#### **Cycle consistency in challenging cases**\n\n- As analyzed in Lines 138-147, the cycle consistency of the bijective map is strictly maintained, independent of the data.\n- As reported in `All-Q1`, we evaluate the cycle consistency of the whole deformation field (bijective map and topology-aware network). Seq. *Rotated Body* records a human body rotated by 360 degrees (big motion), while Seq. *Talking Head* contains 500 frames (long sequence). The results show that the cycle consistency of the deformation field is not overly affected by the irreversible topology-aware network, owing to the design of the bijective map.\n\n#### **Failure cases of motions**\n\nOur method might fail for inputs with both large and fast movements, such as running. In this case, the RGB image is blurry and the depth map contains a lot of noise, and thus the captured RGB and depth data do not contain enough effective information to guide the reconstruction. Meanwhile, it is quite difficult to get a reasonable camera pose as initialization in this case. We will add these discussions in the revised version.", " We thank the reviewer for finding that our paper has a great idea, provides solutions to some challenges, and exhibits convincing results. We appreciate the reviewer's detailed comments and constructive suggestions, which can make the paper more solid. Below are our responses to the questions.\n\n#### **Discussion and evaluation of bijective map and topology-aware network**\n\nAs reported in `All-Q1`, we evaluate the effect of the bijective map and topology-aware network on maintaining cycle consistency. It shows that the bijective map does help preserve deforming-path independence among arbitrary three timestamps. We will add a visualization of correspondence estimation results and an ablation study on the topology-aware network to the revision. The ablation study is to evaluate the quality of the estimated correspondence without the topology-aware network, especially in topology-variant regions.\n\n#### **Computational expenses and how to handle them**\n\nThe following strategies might be useful to improve the computation speed.\n- We might not need differentiable rendering at the beginning of optimization, which is a time-consuming step. The training strategy is to let the deformation field and neural SDF converge to a good initial value using only the captured depth maps.\n- Based on the converged result of item 1, we can directly sample some points near the approximate surface, which also saves sampling time in free space. Then we leverage RGB information to jointly refine the reconstruction.\n- We will consider compact geometry representations (e.g., Instant-NGP [Müller et al. 2022]) and combine them with our NDR in future work. It should be pointed out that our method does not need to completely evaluate cycle consistency over all the frames during optimization, since the cycle consistency of the bijective map is preserved naturally.\n\n#### **Robustness on camera pose initialization**\n\nPlease refer to the response in `All-Q2`.\n\n#### **The effect of segmentation results**\n\nLike most dynamic neural implicit methods (e.g., BANMo), inaccurate object segmentation will affect the reconstruction results. As shown in the duck toy of Fig.1, when a small region of the background is incorrectly segmented into the foreground, the reconstructed geometry also varies correspondingly in this region. A possible solution is to further optimize the segmentation results in our reconstruction framework based on the segmentation consistency from different views, which could further improve the reconstruction quality.\n\n#### **Inference time and example settings**\n\nThe testing videos have at least 200 frames and at most 500 frames, which is related to the motion complexity. The optimization time of a single scene on a single A100 GPU is about 12 hours, which has been discussed in Line 228. 
We will add the related experimental settings to the revision.\n\n#### **Stripe artifacts in DynamicFusion results**\n\nAs shown in Fig.3, the artifacts mainly depend on the degree of global rotation around the $y$-axis and the number of deformation nodes, which affect the interpolation continuity of the vertex rotations. On the other hand, they are also related to the smoothness of the dependence weights of vertices on deformation nodes. The stripe artifacts tend to disappear when the object rotates wildly (shown in 2:00-2:12 of the demo video) or only indistinctly (shown in Fig.4).\n\n[Müller et al. 2022] Thomas Müller, Alex Evans, Christoph Schied, and Alexander Keller. Instant neural graphics primitives with a multiresolution hash encoding. *ACM Transactions on Graphics (TOG)*, 41(4):1–15, 2022.", " We appreciate that the reviewer finds our reconstruction network novel. We also thank the reviewer for the detailed comments and constructive suggestions. Below are our responses to the questions.\n\n#### **Qualitative ablation study and relationship with HyperNeRF**\n\n- Besides the ablation studies in the paper, some other qualitative ablation studies are presented in the supplementary material, including Fig.2 and 2:40 to 3:06 of the demo video. The effect of each component has been clearly presented and analyzed.\n- HyperNeRF is a method for novel-view synthesis which does not include a reconstruction module, such as an SDF or occupancy representation. The difference between our method and HyperNeRF is similar to that between NeuS and NeRF. Besides the different targets, we also propose a new representation with a bijective mapping and several terms to utilize the captured depth, which are all specifically designed for our task.\n- Actually, the baseline with 6D motion in the ablation study can be treated as an RGB-D dynamic reconstruction version of HyperNeRF. As shown in Fig.6 of the paper and Fig.2 of the supplementary material, our NDR can not only preserve geometry details but also reduce artifacts.\n- As reported in `All-Q1`, we add a quantitative evaluation of cycle consistency, which also compares the baseline with 6D motion. It can be seen that our bijective map does help preserve cycle consistency.\n\n#### **Lack of quantitative experimental results**\n\nWe have tried our best to do comparisons with existing methods. We have compared DynamicFusion and OcclusionFusion on 2 common datasets and our captured dataset on the most common geometry metric to evaluate reconstruction quality. For some other methods without public code and results, we also did qualitative comparisons (Fig.4).\n- From the qualitative experiments, we can see that our NDR outperforms existing methods by a large margin, which fully demonstrates the superiority of our method.\n- We also conduct quantitative comparisons between our NDR and other methods (Tab.1). As described in Sec. C.1 of the supplementary material, some methods do not release their code and results. This makes it difficult for us to conduct quantitative experimental comparisons with them.\n- For BANMo: Fig.5 clearly shows that our full model outperforms BANMo. We can add a quantitative evaluation: register the reconstructed model of BANMo to the depth frame and calculate the metric.\n- For HyperNeRF: As stated above, the baseline with 6D motion can be seen as a comparison with an RGB-D reconstruction version of HyperNeRF.\n- For VolumeDeform: We have quantitatively compared a state-of-the-art RGB-D method, OcclusionFusion (CVPR 2022). 
On the other hand, as far as we know, the code of VolumeDeform is not publicly available.\n\n#### **Unclarity about cycle-consistency performance**\n\nWe have analyzed the strict cycle-consistency of the bijective map in Lines 138-147. The evaluation of the whole deformation field is reported in `All-Q1`.\n\n#### **Fig.8 of HyperNeRF**\n\nWe will add an ablation study on the topology-aware network and follow the exhibition style of HyperNeRF's Fig.8 (including geometry under novel views) in the revision.\n\n#### **Effects of captured depth noises**\n\n- The noise contained in the captured depth maps will definitely affect the reconstruction quality. To alleviate the effect of noise on the modeling quality, we first perform depth denoising. The denoised depth map provides a reliable geometric constraint for reconstruction, as evaluated in Fig.5.\n- Although a single-frame depth map contains noise, the fusion of depth maps can restore high-quality 3D geometry, which is the key idea of KinectFusion and DynamicFusion. Besides the global fusion of depth maps, our NDR also utilizes image information in a differentiable rendering framework to jointly optimize the geometry, thus achieving better reconstruction results. It can be seen in Fig.6 of the paper and Fig.2 of the supplementary material that the reconstruction quality is improved with the help of color images.\n- It is true that the depth map cannot be treated as GT. However, considering that it is very difficult to obtain accurate geometry shapes of non-rigidly deforming objects in the reconstruction problem, most existing geometric reconstruction works (e.g., DeepDeform and OcclusionFusion) adopt the denoised depth map as GT.\n\n#### **Synthetic dataset**\n\nThanks for proposing to use synthetic data for the ablation study. We will add this kind of ablation study to the revision. We think it is useful to analyze the effect of each component with both synthetic and real data. As shown in Fig.2 of the supplementary material and 2:40 to 3:06 of the demo video, we can clearly see the different role of each component on real captured data. In addition, as reported in `All-Q2`, we evaluate the robustness to camera pose initialization on synthetic data from the AMA dataset (rendering meshes to obtain depth frames). It can be seen that the reconstruction quality is more related to the degree of non-rigid motion of the sequence.", " All reviewers appreciate that our proposed NDR is effective and the results are impressive. We thank the reviewers for their constructive comments. Below are our responses to the common questions.\n\n#### **All-Q1. Evaluation of Cycle Consistency**\n\nWe add a numerical experiment for cycle consistency evaluation on the whole deformation field. In the experiment, we randomly select 3 frames (indexed by $i,j,k$) as a group in a video sequence. Given points on one frame, we calculate the corresponding coordinates on another frame, and record this scene flow as $\mathbf{f}$. There are then 2 deforming paths from frame $i$ to $k$: the direct flow $\mathbf{f}\_{ik}$, or the composite flow $\mathbf{f}\_{ij}+\mathbf{f}\_{jk}$. To evaluate the cycle consistency, we calculate the Euclidean norm of $\mathbf{f}\_{ij}+\mathbf{f}\_{jk}-\mathbf{f}\_{ik}$ as the error. The smaller the error, the better the cycle consistency (invariance over the deforming path) is maintained. We conduct experiments on a human body rotated by 360 degrees from the KillingFusion dataset and a talking head from our captured dataset. 
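\n\nIn display form, the per-point error is $e\_{ijk} = \big\lVert \mathbf{f}\_{ij} + \mathbf{f}\_{jk} - \mathbf{f}\_{ik} \big\rVert\_2$, which vanishes exactly when the composite deforming path $i \to j \to k$ agrees with the direct path $i \to k$.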
In the experiment, we randomly select 100 groups of frames and calculate the mean error on the depth points of the object. Since the topology-aware network is irreversible, we optimize the corresponding coordinates with fixed network parameters and the Adam optimizer. As a comparison, we also evaluate them on our framework with 6D motion:\n\n||w/ Bijective Map|w/ 6D Motion|\n|:----:|:----:|:----:|\n|Rotated Body|5.37|441.12|\n|Talking Head|4.91|261.58|\n\nEach value is the mean error ( $\times 10^{-4}$ ) per point in the unit coordinate system. The quantitative results show that the cycle consistency of the whole deformation field among frames is maintained quite well by our proposed bijective map, although it might be affected by the irreversible topology-aware network.\n\n#### **All-Q2. Robustness to camera pose initialization**\n\n- For examples of small rigid motion (e.g., Fig.4 and Fig.6), our NDR can achieve high-quality reconstruction results even without Robust-ICP-based camera pose initialization.\n- For examples of large rigid and non-rigid motions (e.g., 2:01 to 2:10 of the demo video), the initial poses are not accurate enough even though Robust ICP is adopted. However, our NDR can still refine the camera poses well and obtain impressive reconstruction results.\n- In order to systematically analyze the performance of our camera pose optimization module, we add an experiment to test NDR's robustness under various degrees of noise on both real and synthetic datasets. We choose 2 sequences of small rigid motion, from the DeepDeform dataset (a body with moving joints, 200 frames) and the Articulated Mesh Animation (AMA) dataset [Vlasic et al. 2008] (a Samba dancer, 175 frames), respectively. The AMA dataset is a multi-view dataset that contains a reconstructed mesh corresponding to each video frame. To construct synthetic depth data, we render the meshes to a chosen camera view. In the experiment, we do not utilize any multi-view information but only monocular RGB-D frames. We add Gaussian noise with $5,10,20,40,60$ degrees of standard deviation to the initial Euler angles and calculate the mean geometry errors ( $0$ denotes no added noise, as a reference):\n\n||0|5|10|20|40|60|\n|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\n|Moving Joints|2.95|4.58|6.58|9.17|11.07|30.28|\n|Samba Dancer|3.90|5.49|8.18|11.60|13.46|27.02|\n\nAll values are in *mm*. The results show that our NDR is robust against noisy camera poses to a certain extent, owing to its neural implicit representation and thorough optimization with RGB-D information. If the standard deviation of the Gaussian noise is over 20 degrees, the reconstruction quality will be obviously affected (the geometry error is over 1 *cm*).\n\n[Vlasic et al. 2008] Daniel Vlasic, Ilya Baran, Wojciech Matusik, and Jovan Popović. Articulated mesh animation from multi-view silhouettes. In *ACM SIGGRAPH 2008 Papers*, pages 1-9. 2008.", " This paper introduces a template-free method to reconstruct high-quality geometry and motion of a dynamic scene from a single RGB-D camera. It proposes a bijective deformation map to preserve the cycle consistency between two frames; thus it doesn’t require any scene flow or optical flow map. To handle the topology changes, the deformation network is combined with a topology-aware network. Experimental results show that the proposed method outperforms state-of-the-art RGB-D methods, such as DynamicFusion and OcclusionFusion, and RGB methods, such as BANMo. # Strengths\n\n- A novel 3D reconstruction network of a dynamic scene from a single RGB-D camera. 
The combination of the topology-aware network and the deformation-based network enables the network to model the geometry and motion of a dynamic object.\n- The proposed bijective map is able to preserve the cycle-consistency because it maps the points in the 3D observation space to the points in the 3D canonical space.\n\n# Weaknesses\n\n- There is only a qualitative ablation study. The overall framework is like an extended version of HyperNeRF for RGB-D videos. Thus, it is essential to perform a quantitative ablation study, especially compared to HyperNeRF.\n- Lack of quantitative experimental results. The proposed method only performs a qualitative evaluation against various comparison methods. While qualitative evaluation can be subjective, it is essential to perform a quantitative evaluation, especially with BANMo, HyperNeRF, VolumeDeform, etc.\n- The cycle-consistency performance of the proposed bijective map is unclear. The evaluation method only focuses on a single frame. There should be a way to evaluate the consistency between frames because it is also a part of the proposed contribution.\n- It is recommended to follow HyperNeRF's Fig. 8 to show the performance of the proposed method. It is unclear how the topology-aware and bijective-map-based deformation networks affect the overall performance.\n - While the proposed method utilizes depth from Kinect as the ground truth, is there any effect if the captured depth information is noisy? Note that an active depth camera could not be used as the ground truth due to its noisy characteristics.\n- Why not use a synthetic dataset for the ablation study to prove the proposed idea?\n The authors have addressed the limitations in the conclusion.", " This paper proposes a template-free RGB-D based 3D scene reconstruction method that handles non-rigid changes with deformation in dynamic scenes.\nThe proposed method, Neural Dynamic Reconstruction (NDR), follows similar steps to the classic dynamic scene reconstruction methods.\nIt uses a neural implicit function for surface representation, thus using neural signed distance fields (SDFs), and proposes a novel neural invertible deformation network that utilizes a bijective map between frames and a canonical space in order to constrain the non-rigid deformation of observed surfaces.\nThe paper also adds a topology-aware network that tackles the well-known challenges of dynamic scene reconstruction under free-form deformation, where dramatic changes of topology (or assumptions of topology) could make handling deformation/motion constraints hard.\nThe experiments show that the proposed method outperforms (in the context of the accuracy of surface reconstruction over time) the other existing monocular dynamic scene reconstruction methods.\n I appreciate the research effort from the authors. The results look very impressive and the contributions look very clear. Here are the +/- of the proposed methods and the submitted article.\n\n+Convincing results compared to existing methods\n\n+Provides solutions to each challenging limitation of classic methods (and other latest methods).\n\n+Great idea on the use of a bijective map together with a topology-aware network\n\n-Needs more detailed discussion and evaluation of the bijective map and topology-aware network\n\n-Needs more detailed discussion of computational expenses and how to handle them. \n To make the paper more solid and to help the readers understand the article more clearly, I enumerated several questions below. 
Some questions may focus on how far the use of the proposed method is from real-world applications.\n\n- How much offset can the method handle when refining camera poses, and how much error or residual from a wrongly initialized camera pose can be handled?\n\n- How do the segmentation results affect the overall results? How do the residuals at the boundary of target surfaces affect the quality of the reconstruction results?\n\n- What is the sole inference time (including optimization steps) for each example? In particular, providing some numbers and settings (pose accuracy, number of samples) for each example demonstrated in the result sections would be very helpful for understanding the correlation with the complexity of the scene/topology/motion.\n\n- Is there any number or visualization that shows how accurately the bijective map is constructed?\n\n- How does wrongly handled topology affect the quality of the bijective map?\n\nRegarding the importance of the major contribution of this paper, rather than the comparison to the 6D motion, extra discussion on the bijective map and its dependency on the actual topology-aware network would make the paper more solid.\n\n- The method is obviously very expensive. Is there any potential idea or open discussion to reduce the computational expenses? For example, not completely evaluating cycle consistency over all the frames, making some steps sequentially updatable, etc.\n\n- In Figure 3, the results from DynamicFusion do not look right. What is the main source of the stripe artifacts in most of the results? As addressed in the conclusion, the major limitation of the method is probably the computational expense.\nAt least adding a small section discussing how to make the method scalable (even with a trade-off in quality) or how to make the method easier to use in general use cases (an end-to-end scenario) would make the paper more solid.\nFinally, as the major contribution also lies in the use of the bijective map and topology-aware network, providing more discussion and detailed evaluation (in addition to the ablation test) would make the paper even more solid.", " This paper proposes a method for surface reconstruction from sequential RGB-D input. The main strategy of this method is the cycle consistency between the canonical and observation spaces, which is used to represent non-rigid deformation. Moreover, to be topology-aware, this paper employs [45]. The results show some comparisons with DynamicFusion [42], OcclusionFusion [58], and so on. Both qualitative and quantitative results show that the method proposed in this paper is better than the compared methods in some respects. An ablation study is also conducted, but with qualitative results only. Strengths\n- The idea of cycle consistency between the observation and canonical spaces is reasonable.\n- Topology-awareness improves the results, but it is basically a previously proposed method. \n- The organization and writing are good.\n\nWeaknesses\n- As written in the questions and limitations of this review, I cannot find how the canonical space is defined. Also, I have some concerns about the limitations of this method. As for the canonical space, how is it decided? Moreover, does the choice, decision, or learning of the canonical space affect the performance?\nIn the case of a long sequence with big motion, like a 360-degree rotation, does the cycle consistency work correctly? 
If there are some motions to which this method cannot be applied, I think they should be mentioned as a limitation of this method.\n", " The paper tackles the problem of dynamic object reconstruction from a single RGB-D video. It solves the problem in a differentiable rendering pipeline and designs a novel 3D warping function that is guaranteed to be invertible. It achieves high-quality reconstruction results on KillingFusion, DeepDeform and iPhone videos. **Strengths**\n- The method is conceptually simple and effective. The authors did well in integrating the best existing solutions (neural surface, canonical hyper-space, and invertible NNs) into a system that not only works well but also stays clean. \n- The bijective warping field is an interesting technical improvement over SE(3) fields. It ensures that 3D warping functions are invertible by design. \n- The results are high-quality and state-of-the-art.\n- The paper is well-written with an adequate amount of detail. Design choices are also well-motivated. \n\n**Weaknesses**\n- Some details can be clarified. As there is genuine ambiguity between camera motion and object motion, it is worth explaining the camera pose initialization in more detail and analyzing the failure modes. When does Robust ICP fail? For instance, does it fail when the object exhibits rotational motion? How robust is the method to inaccurate camera pose initialization? \n- Some writing could be improved (also see questions). The technical part of sec. 3.1 is not easy to follow, possibly due to a lack of concise equations in l148-165. It is also not obvious what design choice made it invertible. The key idea of coordinate splitting is only mentioned in l150 and the high-level intuition is not conveyed.\n - For the implementation of invertible warping fields in 155-165, if points in the canonical space happen to have the same $w'$ coordinate, their predicted $R_{uv}$ is constrained to be the same. Similarly, if points in the observation space have the same $(u, v)$ coordinates, their predicted $\delta_w$ is constrained to be the same. Does this cause undesirable artifacts, for instance, when dealing with an object containing a flat surface with the same $w'$ coordinates?\n- Eq(13) is not clearly explained. What is v_p? What does it mean to force points on depth images to be visible from the camera? \n- Fig. 6: the difference between 6D motion fields and full is not obvious. Consider adding more descriptive captions to highlight the difference or choose the example more carefully. Yes" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 5 ]
[ "TRrKICKJp5", "pgHMs_GX8dg", "CjATPCB5gNB", "jSO1LeE3ojK", "Q5DSgGmSKZW", "773pF6PMVdj", "XA7eSlEmvBk", "ch-IGmlL8n5", "m9v1Y24WmXi", "aec0hGZTsmH", "_Lw-xdHJm0b", "ch-IGmlL8n5", "m9v1Y24WmXi", "aec0hGZTsmH", "nips_2022_8RKJj1YDBJT", "nips_2022_8RKJj1YDBJT", "nips_2022_8RKJj1YDBJT", "nips_2022_8RKJj1YDBJT", "nips_2022_8RKJj1YDBJT" ]
nips_2022_K2PTuvVTF1L
Variational inference via Wasserstein gradient flows
Along with Markov chain Monte Carlo (MCMC) methods, variational inference (VI) has emerged as a central computational approach to large-scale Bayesian inference. Rather than sampling from the true posterior $\pi$, VI aims at producing a simple but effective approximation $\hat \pi$ to $\pi$ for which summary statistics are easy to compute. However, unlike the well-studied MCMC methodology, algorithmic guarantees for VI are still relatively less well-understood. In this work, we propose principled methods for VI, in which $\hat \pi$ is taken to be a Gaussian or a mixture of Gaussians, which rest upon the theory of gradient flows on the Bures--Wasserstein space of Gaussian measures. Akin to MCMC, it comes with strong theoretical guarantees when $\pi$ is log-concave.
Accept
This paper proposes a novel method for variational inference based on Wasserstein flows. The key contribution is perhaps the rigorous guarantees that are derived from an assumption of log-concavity. While the initial submission was unaware of some existing work on VI that derives guarantees from similar log concavity or smoothness assumptions, the proof strategy that is given uses novel technical methods, and thus is of interest in any case. Readers would benefit from a detailed discussion that can contextualize this work to previous work, which the authors have committed to doing.
train
[ "xSpQOUOJTAc", "DiyQue009Mb", "0Mco7yh3oAX", "KLEoD4SEK1X", "XkUsLBxJ_na", "HplH0lKp61l", "CXVIwlMkPV", "TnJPSTWDwyK", "ex0_EqVR9Nx", "CbhN-frIqd1" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors address most of my concerns and I raise the score.", " Thank you; I appreciate your commitment to improving your excellent work further.", " Thank you for your kind review. We are glad that you enjoyed reading our submission.\n\n> If I have one point of criticism towards the paper, it would be that the research within the field of variational inference is misrepresented.\n\nThank you for bringing up this omission. As we note in our response to all reviewers, we only became aware of this literature after submission, and many of the inappropriate sentences of the paper regarding VI stemmed from this omission. We will conduct a more thorough literature search, provide a discussion of the placement of our paper in the context of this work, and soften some of our claims accordingly.", " (continued from previous comment)\n\n\n> For the Gaussian mixture, are there objective functionals other than the KL divergence, for instance, maximum mean discrepancy, to make the resulting problem on the measure of the Bures-Wasserstein space convex?\n\nThis is an interesting research direction and other objectives have indeed been proposed in the VI literature, albeit not necessarily in the context of Wasserstein gradient flows (see, e.g., Daudel et al.). Note however, that convexity depends not only on the objective but also on the chosen parametrization (together with the geometry with which the parameter space is endowed). In particular, modifying the objective to MMD for example, may not be sufficient to obtain convexity. On the other hand, simple modifications of the parametrization can lead to convexity; for example choosing the mixing measure itself as a parameter makes the problem convex for the classical L^2 geometry but it is not amenable to good implementation strategies. In this convexity/implementability tradeoff, Wasserstein gradient flows strike a good balance because not only do they lead to tractable interacting particle systems but also to theoretical guarantees in specific cases (single particle with log-concave target).\n\n> For the Gaussian family, the gradient flow in equ. (4) does not require the calculation of the Hessian of the log posterior density. However, the gradient flow of Gaussian mixture in equ. (10) does require the Hessian. As the calculation of the Hessian for high dimensional inference problem can be problematic and the proposed method itself is a first-order method, is it possible to only utilize the gradient information of the log-posterior density?\n\nHessian-free updates for Gaussian mixtures can also be obtained via integration by parts; it is presented in Appendix A.2 due to a lack of space. We have added a pointer in the main body.\n\n> Some literature related to the Bures-Wasserstein space are missing, see [1, 2].\n\nThank you for these pointers. We will add them in our next revision.", " Thank you for your constructive suggestions. Below, we address some of your points in turn.\n\n> For the Gaussian family, the analysis of the continuous-time dynamics seems to be a direct application of the convergence results of Wasserstein gradient flow. For the Gaussian mixture, although the authors acknowledge that the problem is non-convex, some theoretical analysis of the proposed method can strengthen the paper.\n\nOur mathematical analysis for the Gaussian case is indeed a direct application of Wasserstein theory that leverages the fact that the BW manifold is a totally geodesic subset of the Wasserstein space. 
We believe that this constitutes novel and strong evidence that the well-developed theory of Wasserstein gradient flows provides the correct framework to design and analyze VI algorithms (note that it is the first justification of Särkkä’s heuristic which dates back to 2007). While previous work necessitates a clever ad-hoc square-root reparametrization to elicit convexity, the Wasserstein framework is completely natural and directly leads to a new algorithm for mixtures of Gaussians. A theoretical analysis for the landscape of mixtures of Gaussians is a notoriously hard problem that is inherent to parametrization rather than geometry (see below for more comments in this direction).\n\n> The structure of the paper can be improved. The algorithm for Gaussian mixture is of great interest in practice but the detailed description of that algorithm is not presented in the paper.\n\nThe algorithm is a numerical implementation of the two coupled ordinary differential equations (11) and (12). The revision now emphasizes that they constitute the core of the algorithm. In turn, numerical implementation of the ODE and the expectations it contains may be performed following various standard methods: cubature rules and Runge–Kutta. These numerical strategies are somewhat classical and their detailed implementation is provided in Appendix J.2.\n\n> The comparison of the proposed method for Gaussian mixture with other variational inference methods using Gaussian mixture is missing. Some experiments on Bayesian inference can strengthen the paper.\n\nA Bayesian inference example has been considered for our Gaussian mixture model in Figure 1 and in Appendix J.3.1 with the logistic target. We have compared our algorithm with Laplace approximation in the single Gaussian case. In the mixture of Gaussians case, our experiments are a proof of concept that illustrates the practical relevance of Wasserstein gradient flows as a theoretical framework for VI. Detailed comparison with other methods for the mixture model is beyond the scope of this paper and left for future work.\n\n> The interpretation of the Gaussian mixture model as the measure on the Bures-Wasserstein space is interesting. I’m wondering if the target density itself is a Gaussian mixture, can the proposed algorithm exactly fit the posterior distribution?\n\nThe reviewer has raised several questions regarding theoretical guarantees for the Gaussian mixture case. In fact, such difficulties are pervasive to Gaussian mixture modeling in which algorithmic guarantees for maximum likelihood estimation are very limited. In a way, the good interpretability of Gaussian mixtures comes at the cost of a difficult optimization landscape. Overcoming these algorithmic barriers is an active and ongoing area of research that is beyond the scope of this paper. Indeed, while we argue for Wasserstein gradient flows as a generic framework for VI, we do not claim that it can solve all problems. However, akin to expectation/maximization for likelihood maximization, it provides a novel heuristic that performs well in numerical experiments. In particular, it appears that with appropriate initialization it can indeed recover the target in the well-specified case.\n\n(response continued in next comment)", " Thank you for your kind words and helpful suggestions. We will incorporate them into the revised manuscript.", " Thank you to all reviewers for the helpful feedback. 
After submitting this manuscript, we became aware of a line of work (e.g., [1, 2]) which also obtains convex guarantees for Gaussian VI via the square root parameterization of the covariance matrix in the context of Bayesian logistic regression. We will revise our submission to include these references and provide a discussion of the placement of our paper in the context of this work.\n\nAlthough this literature shows that quantitative guarantees for VI can also be obtained via alternative methods, we believe that our approach via Wasserstein gradient flows remains a natural and powerful framework for extensions to other settings, as we demonstrate via the family of mixtures of Gaussians.\n\n[1] E. Challis and D. Barber. Gaussian Kullback–Leibler approximate inference. 2013.\n\n[2] P. Alquier and J. Ridgway. Concentration of tempered posteriors and of their variational approximations. 2019.", " The paper provides an analysis of variational inference algorithms based on Gaussian approximation of variational distributions. The paper exploits the theory around Bures-Wasserstein spaces of Gaussian measures to construct a sequence of Gaussian distributions (or mixtures) that converge to the target measure, i.e., the variational approximation of the target distribution. \nThe paper uses Wasserstein gradient flows for their theoretical analysis and establishes that Saerkkae's heuristic coincides with the sequence of distributions arising in the gradient flow of the KL divergence. \nThis analysis is extended to the use of Gaussian mixture distributions. \nTheoretical guarantees are derived and discussed in detail. The theoretical results rely on the assumption of a log-concave distribution, an assumption that is also made in the study of Langevin-type algorithms. The theoretical contributions of the paper look sound and the work is well situated among other contributions in the field. \n\nTo add: \nIn the related work section it might be worthwhile to add references to the work on sequential Monte Carlo algorithms. \n\nMinor: \nLine 272: generality? \n\nContributions:\n- Add more numerical results in the main body of the paper? This would make the work stronger and spark practical applications\n\nLine 335 : “ …. yields a new Gaussian particle method which appears to be significantly more powerful than classical particle methods.” \nThis is an overly strong statement that requires more justification. You are not explicitly discussing how your method is more powerful. This should happen both from an empirical and a theoretical point of view (neither of which you currently do). \n see above Yes
\n Strength:\n- The paper is well-motivated\n- It provides an interesting interpretation of the variational Kalman filtering from the perspective of Wasserstein gradient flow.\n- The convergence analysis of the proposed Bures-Wasserstein SGD is novel.\n- The numerical results of the proposed method for Gaussian mixture seem promising.\n\nWeakness:\n- The theoretical contribution is not enough. For the Gaussian family, the analysis of the continuous-time dynamics seems to be a direct application of the convergence results of Wasserstein gradient flow. For the Gaussian mixture, although the authors acknowledge that the problem is non-convex, some theoretical analysis of the proposed method could strengthen the paper. See details in the Questions. \n- The structure of the paper can be improved. The algorithm for Gaussian mixture is of great interest in practice but a detailed description of that algorithm is not presented in the paper.\n- The comparison of the proposed method for Gaussian mixture with other variational inference methods using Gaussian mixtures is missing. Some experiments on Bayesian inference could strengthen the paper. \n The interpretation of the Gaussian mixture model as a measure on the Bures-Wasserstein space is interesting. I'm wondering: if the target density itself is a Gaussian mixture, can the proposed algorithm exactly fit the posterior distribution?\n\nFor the Gaussian mixture, are there objective functionals other than the KL divergence, for instance, maximum mean discrepancy, that make the resulting problem on the measure of the Bures-Wasserstein space convex? \n\nFor the Gaussian family, the gradient flow in equ. (4) does not require the calculation of the Hessian of the log-posterior density. However, the gradient flow of the Gaussian mixture in equ. (10) does require the Hessian. As the calculation of the Hessian for high-dimensional inference problems can be problematic and the proposed method itself is a first-order method, is it possible to only utilize the gradient information of the log-posterior density?\n\nSome literature related to the Bures-Wasserstein space is missing, see [1, 2].\n\n[1] Malagò, Luigi, Luigi Montrucchio, and Giovanni Pistone. "Wasserstein Riemannian geometry of Gaussian densities." Information Geometry 1.2 (2018): 137-179.\n\n[2] Modin, Klas. "Geometry of matrix decompositions seen through optimal transport and information geometry." arXiv preprint arXiv:1601.01875 (2016).\n Yes.", " This paper produces an algorithm for computing variational posteriors that are normals or mixtures of normals. As the title suggests, this is achieved via Wasserstein gradient flows. Remarkably, the system of the Langevin-type stochastic process this gives rise to was proposed before---but as a heuristic in the setting of sequential Bayesian inference/Kalman filtering. Relative to existing variational methods, the benefits of the proposed approach are a number of theoretical guarantees that are usually not available, and hold under reasonable (even if somewhat limiting) assumptions so long as one uses a single normal (rather than a mixture of normals) as the variational family. Originality: \nThe work is highly original from a technical point of view, & to the best of my knowledge the first attempt at using Wasserstein gradient flows in a computationally attractive way for the purposes of variational inference. 
Its originality lies not so much in the end goal of the algorithm---there are plenty of other variational methods using Gaussian families/mixtures of Gaussians---but in using an area of mathematics that has hitherto seen little use in the context of variational inference to derive a new algorithm for achieving an (old/well-known) end goal and guaranteeing a number of attractive properties for the resulting algorithm. \n\nQuality:\nThis paper is of the highest technical standard. Beyond that, the authors have made a very serious effort to show that the method is not just a mathematical curiosity, but practically meaningful. It has been a while since a paper has inspired me to dive into a different literature and read around 5-10 other papers to fully appreciate all its details, but this paper has managed it effortlessly.\n\nClarity: \nThe paper articulates its (complicated) subject clearly, and leaves the reader satisfied that she or he can understand every component part of the paper. The supplementary material is extensive, and maintains an exceptionally high standard.\n\nSignificance:\nI think the paper is significant in showing how to use Wasserstein gradient flows for the design of attractive variational methods. If I have one point of criticism towards the paper, it would be that the research within the field of variational inference is misrepresented. For example, the paper's abstract states that "VI is still poorly understood and dominated by heuristics. In this work, we propose principled methods for VI [...]". I think this sentiment does severe injustice to a lot of good work that has happened on VI in recent times. I understand that the authors are focused specifically on optimisation algorithms with theoretical guarantees that are common in optimisation, but other theoretical work from other fields and specifically statistics has recently put VI on a much more solid foundation than it used to be. To name but a few examples, there are PAC-Bayesian bounds for variational objectives [1] as well as asymptotic studies [2], work motivating variational methods as constrained optimisation [3], and even some prior work on convergence guarantees (under conditions that are in some sense milder than those of the current paper) [4]. I think the paper would do well toning down the claims of the abstract & the other claims in the paper aligned with the spirit of the abstract---or at least make them more specific to the kind of guarantees/theory that the paper is focused on. It certainly isn't accurate to describe all of the research on VI as 'dominated by heuristics'. I think the paper would also do well positioning itself in the context of a recent line of work reaching across various disciplines and dedicated to making VI a more principled set of tools as in [1-4].\n\n[1] https://jmlr.org/papers/volume17/15-290/15-290.pdf\n\n[2] https://www.tandfonline.com/doi/full/10.1080/01621459.2018.1473776\n\n[3] https://www.jmlr.org/papers/v23/19-1047.html\n\n[4] https://proceedings.mlr.press/v119/domke20a.html\n\n\nTypos:\n- p.7, l.270 'variation' -> 'variational' I would advise the authors to tone down the claims about VI being a mostly heuristic field without significant theoretical advances; and to embed their paper into the context of [1-4] and related work provided above. 
I think that the paper does not point to the limitations of its results enough, but I don't think this is such a big problem---there is only so much space a NeurIPS submission is allowed to cover, and the paper does a better job than most others. The three most important limitations are\n(1) V has to be convex\n(2) guarantees only hold for the single Gaussian measure case; and not even for the mixture Gaussian case [and to be fair, this one is pointed out explicitly!]\n(3) it is unclear if the algorithm is actually faster/better than standard VI techniques/optimisation methods in practice" ]
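The exchanges above repeatedly contrast the Bures--Wasserstein approach with the square-root (Cholesky) parameterization of Gaussian VI cited in the general response ([1] there). A rough sketch of that alternative is below, assuming the standard reparameterized objective $\mathbb{E}_q[V] - \log\det C$ for $q = \mathcal{N}(\mu, CC^\top)$; the quadratic target, step size, and batch size are illustrative choices of this sketch only.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2
A = np.array([[2.0, 0.5], [0.5, 1.0]]); m_star = np.array([1.0, -1.0])
grad_V = lambda x: A @ (x - m_star)            # pi propto exp(-V), V strongly log-concave

mu, C = np.zeros(d), np.eye(d)                 # q = N(mu, C C^T), C lower triangular
lr = 0.01
for _ in range(5000):
    eps = rng.standard_normal((32, d))
    x = mu + eps @ C.T                         # reparameterized samples x = mu + C eps
    g = np.array([grad_V(xi) for xi in x])
    mu -= lr * g.mean(axis=0)                  # stochastic grad of E_q[V] w.r.t. mu
    gC = np.tril(g.T @ eps / len(eps)) - np.diag(1.0 / np.diag(C))  # + entropy term
    C -= lr * gC

print(np.round(mu, 2))                         # approximately m_star
print(np.round(C @ C.T @ A, 2))                # approximately the identity
```

At the fixed point $CC^\top = A^{-1}$ the two gradient terms cancel, so the sketch recovers the same Gaussian approximation as the flow-based view, just through a different (and, per the cited works, also convex) parameterization.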
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "KLEoD4SEK1X", "0Mco7yh3oAX", "CbhN-frIqd1", "XkUsLBxJ_na", "ex0_EqVR9Nx", "TnJPSTWDwyK", "nips_2022_K2PTuvVTF1L", "nips_2022_K2PTuvVTF1L", "nips_2022_K2PTuvVTF1L", "nips_2022_K2PTuvVTF1L" ]
nips_2022_GCNIm4cKoRx
Finite-Time Analysis of Adaptive Temporal Difference Learning with Deep Neural Networks
Temporal difference (TD) learning with function approximations (linear functions or neural networks) has achieved remarkable empirical success, giving impetus to the development of finite-time analysis. As an accelerated version of TD, the adaptive TD has been proposed and proved to enjoy finite-time convergence under the linear function approximation. Existing numerical results have demonstrated the superiority of adaptive algorithms to vanilla ones. Nevertheless, the performance guarantee of adaptive TD with neural network approximation remains widely unknown. This paper establishes the finite-time analysis for the adaptive TD with multi-layer ReLU network approximation whose samples are generated from a Markov decision process. Our established theory shows that if the width of the deep neural network is large enough, the adaptive TD using neural network approximation can find the (optimal) value function with high probabilities under the same iteration complexity as TD in general cases. Furthermore, we show that the adaptive TD using neural network approximation, with the same width and searching area, can achieve theoretical acceleration when the stochastic semi-gradients decay fast.
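The rebuttals below describe the adaptive scheme as TD semi-gradients fed through an adaptive stepsize with momentum. A tabular toy sketch under that assumption follows; the paper's actual algorithm uses deep ReLU networks, a projection step, and carefully chosen hyper-parameters, all of which are omitted here, and the Adam-style constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, gamma = 5, 0.9
P = rng.dirichlet(np.ones(nS), size=nS)        # toy ergodic Markov chain
r = rng.standard_normal(nS)
phi = np.eye(nS)                               # tabular features; the paper uses deep ReLU nets

theta = np.zeros(nS)
m_t, v_t = np.zeros(nS), np.zeros(nS)
b1, b2, eps = 0.9, 0.999, 1e-8
s = 0
for k in range(1, 50001):
    s_next = rng.choice(nS, p=P[s])            # Markovian (non-i.i.d.) sampling
    delta = r[s] + gamma * phi[s_next] @ theta - phi[s] @ theta
    g = -delta * phi[s]                        # stochastic semi-gradient
    m_t = b1 * m_t + (1 - b1) * g              # momentum
    v_t = b2 * v_t + (1 - b2) * g * g          # adaptive per-coordinate scaling
    theta -= (0.5 / np.sqrt(k)) * m_t / (np.sqrt(v_t) + eps)
    s = s_next

V_true = np.linalg.solve(np.eye(nS) - gamma * P, r)
print(np.round(np.abs(theta - V_true).max(), 3))   # small: theta approaches the value function
```

With tabular features the approximate stationary point coincides with the true value function $(I - \gamma P)^{-1} r$, which is why the final error is small; the paper's analysis addresses the far harder over-parameterized neural case.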
Accept
The reviewers agree that the theoretical results presented in the paper are solid and advance our understanding of the behavior of temporal difference (TD) methods, which are at the core of most reinforcement learning algorithms. The contributions of the paper can be summarized in two main results: - Adaptive TD combined with a ReLU neural network converges when the width of the network is sufficiently large; - Adaptive TD combined with a ReLU neural network converges faster than its non-adaptive counterpart. Both results are important and novel. One consistent complaint among the reviewers was the paper presentation, which was considered slightly sloppy and not very accessible. We strongly encourage the authors to perform a thorough revision of the paper, paying special attention to the definition and consistency of the notation adopted. We also suggest the authors add intuitive explanations wherever possible to make the paper accessible to a wider audience.
val
[ "xpFAo13JjMP", "k6AvAalI9KR", "xZFg2ravERtE", "GWIUvemofC", "wpf-RpFgc3b", "mC1Gpk0Kcbl", "4-wA-BkHQXk", "PNOdrntt5xJ", "gevetM6LZLI", "c75PLnSOj-s" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear AC and reviewers,\n\nThanks for your thoughtful reviews and valuable comments, which have helped us improve the paper significantly. We are encouraged by the endorsements that: 1) The main result of our paper is significant and highly non-trivial (tHmB), which is the first analysis of the convergence of adaptive TD with multi-layer ReLU NNs (Npn2). 2) The paper introduces a new proof technique that may prove useful beyond the scope of this paper (Npn2). 3) The paper is well written with a clear motivation and contribution (Npn2). \n\n-----\n\nIncorporating the comments and suggestions from all reviewers, we have made the following changes in the revised paper.\n\n\n- 1. We have included a Notation paragraph at the beginning of Section 2.\n\n- 2. We have added some numerical experiments in the appendix to illustrate the tightness of the bound established in Theorem 1.\n\n- 3. We add Proposition 2 in the revision to characterize the difference between MSPBE and MSBE by using the difference between our results and the optimal action-value function.\n\n- 4. We have clarified the differences between our proofs and existing works in Section 1.2 of the revised paper.\n\n- 5. We have added a Nomenclature in the appendix to clarify the notations in the proof.\n\n-----\n\nWe are glad to answer any further questions you have on our submission.", " \n**Q7. It is in general hard to understand the intuitive meaning of Theorem 1. For example, what is the meaning of the objective function being small? How can we interpret the solution $\\theta^{*}$? Moreover, it is not easy to conclude that the bound is small in practice. More discussions are needed to clarify the above points.**\n\n\nReply: We stress that (neural) TDs aim to search the approximate stationary point $\\theta^*$, see Definition 1. In lines 158-162, we have explained the approximate stationary point, which is repeated as follows: ``In [R2], it has been proved that such an approximate stationary point is well-defined, exists, and minimizes the mean squared projected Bellman error (MSPBE). This fact becomes more straightforward if the function $f$ is linear; in particular, the approximate stationary point of TD is identical to the unique solution to the projected Bellman equation [R3]''.\n\nNow, we explain why our theoretical results guarantee the adaptive neural TDs to find the approximate stationary point. Due to over-parameterization, the matrix of the linear function $\\mathbb{E}\\hat{f}(s,a)$ is always non-singular, and thus $\\mathbb{E}||\\theta^k-\\theta^*||^2 \\leq \\nu \\mathbb{E}\\left(\\hat f(\\theta^k;s,a)-\\hat f(\\theta^*;s,a)\\right)^2$ with $\\nu>0$ being a constant. Therefore, Theorem 1 indicates that $\\mathbb{E}||\\theta^k-\\theta^*||^2$ can be sufficiently small. However, in Theorem 1 we only prove the property of the MSPBE rather than the mean squared Bellman error (MSBE). As suggested by Reviewer Npn2, we have also added Proposition 2 in the revision, whose proof is based on Theorem 1. In Proposition 2, we show that the difference between action-value function obtained by $\\theta^K$ and the optimal action-value function ${\\bf Q}^*$ (${\\bf Q}^*$ minimizes the MSBE). As proved in Proposition 2, if the function family $\\mathcal{F} _{{\\bf V},m}$ contains ${\\bf Q}^*$, the adaptive neural TD can find the ${\\bf Q}^*$, coinciding with intuitions. \n\n------\n\n**Q8. Overall the overall presentation is sloppy. 
There are many math expressions whose definitions are lacking.**\n\nReply: We have revised our paper according to your comments and added explanations for all basic notations, which significantly improves our paper's exposition.\n\n------\n\n**Q9. There is notational explosion. It is recommended to simplify the notations.**\n\nReply: In the revised paper, we have presented a nomenclature for the notations in the appendix.\n\n------\n\n[R1] Yuan Cao and Quanquan Gu. Generalization bounds of stochastic gradient descent for wide and deep neural networks. In Advances in Neural Information Processing Systems, pages 10835–10845, 2019.\n\n[R2] Qi Cai, Zhuoran Yang, Jason D Lee, and Zhaoran Wang. Neural temporal-difference learning converges to global optima. In Advances in Neural Information Processing Systems, pages 11312–11322, 2019.\n\n[R3] JN Tsitsiklis and B Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 1997.\n\n------\n\nWe have revised our paper according to your comments, and hopefully, we have cleared your concerns about our paper. We look forward to and appreciate your further feedback.\n", " Thank you for your thoughtful review, valuable feedback, and endorsement. Below we address your concerns, which mainly revolve around notation issues and technical details. \n\n------\n\n**Q1. It includes the projection step, which is in general hard to perform.**\n\nReply: The constrained set is a product of per-layer norm balls and the projection is very simple. \nNote that the set ${\bf V}$ is defined as follows:\n$$\n{\bf V}:=\n\left\\{\n{\bf \theta}= \n[{\bf W}_1,\ldots,{\bf W}_L] | ||{\bf W}_l - {\bf W}_l^{\rm init}||\n\leq \omega\ \mbox{for}\ 1\leq l\leq L \n\right\\}.\n$$\nGiven a point $\tilde{{\bf \theta}}=[\tilde{{\bf W}}_1,\ldots,\tilde{{\bf W}}_L]$, the projection can be performed as follows:\n\n$$\n{\bf Proj}_{\bf V}(\tilde{\bf \theta}) = \left[ \n\tilde{\bf W} _l\cdot{\bf 1} _{\{||\tilde{\bf W} _l-{\bf W} _{l}^{\rm init}|| \leq \omega\}} +\n\left( \n\omega\,\frac{\tilde{{\bf W}} _l - {\bf W} _{l}^{\rm init}}{||\tilde{{\bf W}} _l-{\bf W} _{l}^{\rm init}||} +{\bf W} _{l}^{\rm init}\n\right)\n\cdot{\bf 1} _{\{\|\tilde{{\bf W}} _l - {\bf W} _{l}^{\rm init}\|> \omega\}}\n \right] _{1\leq l \leq L}\n$$\n\nwhere \n\n$$\n{\bf 1}_{\{\|\tilde{\bf W} _l-{\bf W} _{l}^{\rm init}\|\leq \omega\}}=\n\begin{cases}\n1 & \ \mbox{if} \ ||\tilde{\bf W} _l - {\bf W} _{l}^{\rm init}||\leq \omega; \\\\\n0 & \ \mbox{otherwise}. \n\end{cases}\n$$\n\nand\n\n$$\n{\bf 1}_{\{||\tilde{\bf W} _l - {\bf W} _{l}^{\textrm{init}}||> \omega\}} = \n\begin{cases}\n1 & \ \mbox{if}\ ||\tilde{\bf W} _l - {\bf W} _{l}^{\textrm{init}}||> \omega; \\\\\n0 & \ \mbox{otherwise}.\n\end{cases}\n$$\n\n------\n\n**Q2. $\phi$ is not defined in the definition of the neural approximated ${\bf Q}$ in line 124. How can we understand the feature vector in the neural approximation? Please clarify it in the paper.**\n\nReply: We have defined the function $f(\theta;{\bf x})$ in the line above Line 130 \nof the revised paper. $\phi(s,a)$ is the feature vector for the pair $(s,a)$, whose definition has been given in Line 105 of the revised paper. 
Thus, in neural TD, we use the following approximation\n\n$$\n{\\bf Q} _{\\pi}(s,a) \\approx \\sqrt{m}{\\bf W} _{L}\\sigma({\\bf W} _{L-1}\\cdots\\sigma({\\bf W} _{1}\\phi(s,a))\\cdots).\n$$\n\nAs you suggested, we have clarified this notation in the revised paper.\n\n------\n\n**Q3. Why do you use boldface for value function while the normal font for Q-function?**\n\nReply: We use normal font for Q-function because we often use $Q(s,a)$ in the following, which is a real number. As suggested, we have used boldface for $Q$ in the revised manuscript.\n\n------\n\n**Q4. Why do we have square root of $m$ in the definition of $f$? Please clarify it in the paper.**\n\nReply: This is because in [Lemma 4.4, R1], it is proved that $f(\\theta;{\\bf x})=\\tilde{\\mathcal{O}}(1)$ as $m$ is large and $\\theta$ is randomly initialized. If we remove $\\sqrt{m}$, the function value is then of the order $\\tilde{\\mathcal{O}}(1/\\sqrt{m})$, which tends to 0 as $m$ is large. Such a use is standard in deep ReLU network theory. As you suggested, we have clarified it in the revised paper.\n\n------\n\n**Q5. The norm $|| ||_F$ in eq (5) is not defined. **\n\nReply: It is the Frobenius norm. We have revised $|| \\cdot||_F$ as $||\\cdot||$ according to Notation paragraph in Section 2. \n\n\n------\n\n**Q6. The set ${\\bf B}$ in Lemma 1 is not defined.**\n\nReply: ${\\bf B}(\\theta^*,\\omega)$ is the ball centered at $\\theta^*$ with radius $\\omega$, and we have made it clear in the revision.\n", " Thank you for your thoughtful review, valuable feedback, and endorsement. Below we address your concerns. \n\n------\n\n**Q1. My main concern is the work is likely to be incremental, in the sense that the proof framework and the method of proving technical assumptions, and lemmas are quite similar to those existing papers as cited by the authors, especially [7,18,48]. I expect an explicit discussion about the differences in the techniques used for proofs.**\n\nReply: We stress that all papers mentioned by the reviewer only consider the basic TD, while our paper uses adaptive stepsize and momentum, i.e., we study a different scheme. Notice that even for the simple nonconvex optimization, the adaptive stepsize and momentum involve a much more complicated analysis than the vanilla stochastic gradient descent (SGD), see, e.g. [R1,R2,R3,R4], let alone analyzing adaptive TD using neural network approximators and non-i.i.d. samples. A similar lemma to the existing papers, i.e., Lemma 1, is used to describe the neural tangent kernel (NTK) region of the ReLU networks rather than the properties of the iterates. Because both our paper and the papers mentioned by the reviewer use ReLU networks. We have explicitly discussed the differences between our proofs from existing works in the revised paper.\n\nExisting related works contain two categories: adaptive TD with linear approximations (ATD-L) and neural TD. However, our work is significantly different from these related works. 1) In contrast to ATD-L, we consider the neural network approximation, in which case we do not have nice properties that linear approximation enjoys, and we have to consider the NTK regime and develop a new analysis leveraging the semi-Lipschitz continuity property. 2) Compared to neural TD, we use the adaptive stepsize and momentum, which has never been considered in neural TD.\n\n------\n\n**Q2. 
The main theoretical result is not informative; it is known that usually adaptive learning rate strategies can improve the convergence rate by choosing appropriate hyper-parameters (as the non-adaptive/plain version is simply a special case). But the result does not provide any additional insight into what designs/hyper-parameter choices in the adaptive method can help improve convergence.**\n\nReply: We respectfully disagree, and let us clarify our theoretical results. Our results first show that adaptive TD also works with deep ReLU network approximation. In lines 229-245 of the revised manuscript, we have explained that the convergence rate of adaptive TD with deep ReLU network approximation is the same as that of the vanilla TD with ReLU network approximation [R5] under the same assumptions and conditions. Notice that the result in [R5] achieves the optimal $\\mathcal{O}(1/\\sqrt{K})$ convergence rate if the neural network approximator is sufficiently overparameterized (see page 2 of [R5]); thus, our result is tight. Moreover, we show that the adaptive TD with DNN will be faster than the non-adaptive one when the stochastic gradients are ``sparse'', see the discussion in lines 246-254 \nin the revision.\n\n------\n\n[R1] Bartlett, P. L., Hazan, E., and Rakhlin, A. Adaptive online gradient descent. Proceedings of the 20th International Conference on Neural Information Processing Systems, pp. 65–72, 2007.\n\n[R2] Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.\n\n\n[R3] Li, X. and Orabona, F. On the convergence of stochastic gradient descent with adaptive stepsizes. The 22nd International Conference on Artificial Intelligence and Statistics, pp. 983–992, 2019.\n\n[R4] Ward, R., Wu, X., and Bottou, L. Adagrad stepsizes: Sharp convergence over nonconvex landscapes. In International Conference on Machine Learning, pp. 6677–6686. PMLR, 2019.\n\n[R5] Xu P, Gu Q. A finite-time analysis of Q-learning with neural network function approximation. International Conference on Machine Learning. PMLR, 2020: 10555-10565.\n\n------\n\nWe have revised our paper according to your comments, and hopefully, we have cleared your concerns about our paper. We look forward to and appreciate your further feedback.", " Thank you for your thoughtful review, valuable feedback, and endorsement. Below we address your concerns.\n\n------\n\n**Q1. My main issue with this paper is that it does not delineate the difference between MSBE (which is not analyzed) and MSPBE (for which the results hold).**\n\n\nReply: Thank you for your suggestion. We have added Proposition 2 in the revision. In particular, we characterize the difference between MSPBE and MSBE using the difference between our established results and the optimal action-value function.\n\n------\n\n**Q2. The bounds obtained depend quadratically (in fact, cubically) on the diameter of the set ${\\bf V}$. The authors avoid this issue by carefully setting a whole host of magic constants to have them magically vanish, yielding the final bound. However, it is unlikely that any practical application can actually make these choices. Do the authors see an avenue forward for this line of research that avoids such strong depends on the diameter of the projection set?**\n\nReply: The set ${\\bf V}$ is the neural tangent kernel (NTK) region, in which DNN enjoys certain nice properties that help to establish our theoretical results. 
However, we cannot guarantee these nice properties when the iterates are out of the NTK region. In a related paper [R1], the authors also use the projection. As far as we know, there is still no analysis that does not use such a projection set. We believe that the key lies in the foundational theory of NTK, and the projection can be removed only for very special cases.\n\n------\n\n**Q3. The authors mention a novel proof technique, do you believe this technique can be useful in other cases as well?**\n\nReply: Our technique may be useful for analyzing the adaptive scheme with highly nonconvex objective functions with non-i.i.d. sampling. In contrast, notice that TD has no fixed objective to optimize in each iteration and is not a standard optimization problem. \n\n------\n\n**Q4. Do the authors have an intuition of how hard it would be to extend these results to also include policy improvement iteration?**\n\nReply: The challenge lies in the changing policy in the iterations. Our paper considers TD, in which the policy is fixed. When the policy is time-varying, we need to deal with many extra items and may need additional conditions or assumptions.\n\n------\n\n[R1] Xu P, Gu Q. A finite-time analysis of Q-learning with neural network function approximation. International Conference on Machine Learning. PMLR, 2020: 10555-10565.\n\n------\n\nWe have revised our paper according to your comments, and hopefully, we have cleared your concerns about our paper. We look forward to and appreciate your further feedback.", " Thank you for your thoughtful review, valuable feedback, and endorsement. Below we address your concerns.\n\n------\n\n**Q1. The analysis is limited to ReLU networks. Might the authors briefly explain why the analysis is limited to ReLU networks?**\n\nReply: Our theory is built on the semi-Lipschitz continuity of the ReLU networks [Lemma 1]. Without the ReLU activation, we cannot guarantee Lemma 1, and thus we only consider deep ReLU networks. As the reviewer suggested, we have clarified this point in the revision. Indeed, there has been a line of theoretical research on deep ReLU networks leveraging their particular properties, see e.g., [R1,R2,R3,R4,R5,R6]. \n\n------\n\n**Q2. It is not clear if the bound in Theorem 1 is tight. The authors might consider running some experiments to illustrate this. Might the authors briefly explain the tightness of the bound in Theorem 1?**\n\nReply: In lines 228-238, we have shown that the bound in Theorem 1 is as large as the vanilla TD with neural network approximation [R6] under the same assumptions and conditions. Notice that the result in [R6] achieves optimal $\\mathcal{O}(1/\\sqrt{K})$ convergence rate if the neural network function approximator is sufficiently overparameterized (see page 2 of [R6]), thus our result is tight. Also, $\\mathcal{O}(1/\\sqrt{K})$ is the optimal rate for SGD for general nonconvex cases [Theorem 4, [R7]]. Moreover, we show that the adaptive TD with neural network function approximator is faster than the non-adaptive one provided the stochastic gradients are ``sparse''.\n\nWe have also added some numerical experiments in the appendix to illustrate the tightness of the bound in Theorem 1.\n\n------\n\n**Q3. The writing of this paper can be further improved. In particular, the current version of Section 2.4 is a little bit hard to follow. This paper also has a complicated notation system; however, some notations are used without first defining them. 
Examples include $\\hat{f}$ in Definition 1 and ${\\bf B}$ in Lemma 1.**\n\nReply: Thank you for your suggestion, and we have revised our paper to make it more clear and easy to follow. We have clarified the notations in the revised paper by listing them in the Notation paragraph at the beginning of Section 2. Furthermore, we have presented a nomenclature for the notations in the appendix.\n\n$\\hat{f}$ is a function in the function family $\\hat{f}\\in \\mathcal{F}_{\\mathbf{V},m},$ \nsee line 157 of the revised paper. \n\nAnd the function family $\\hat{f}\\in \\mathcal{F}_{\\mathbf{V},m}$ is introduced in line 151 in the revision. $\\mathbf{B}(\\theta^*,\\omega)$ denotes the ball centred at $\\theta^*$ with radius $\\omega$.\n\n------\n\n[R1] Yarotsky D. Optimal approximation of continuous functions by very deep ReLU networks. Conference on learning theory. PMLR, 2018: 639-649.\n\n[R2] Du S, Lee J, Li H, et al. Gradient descent finds global minima of deep neural networks. International conference on machine learning. PMLR, 2019: 1675-1685.\n\n[R3] Schmidt-Hieber J. Nonparametric regression using deep neural networks with ReLU activation function. The Annals of Statistics, 2020, 48(4): 1875-1897.\n\n[R4] Cao Y, Gu Q. Generalization error bounds of gradient descent for learning over-parameterized deep relu networks. Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(04): 3349-3356.\n\n[R5] Zou D, Cao Y, Zhou D, et al. Gradient descent optimizes over-parameterized deep ReLU networks. Machine learning, 2020, 109(3): 467-492.\n\n[R6] Xu P, Gu Q. A finite-time analysis of Q-learning with neural network function approximation. International Conference on Machine Learning. PMLR, 2020: 10555-10565.\n\n[R7] Drori Y, Shamir O. The complexity of finding stationary points with stochastic gradient descent. International Conference on Machine Learning. PMLR, 2020: 2658-2667.\n\n--------\n\nWe have revised our paper according to your comments, and hopefully, we have cleared your concerns about our paper. We look forward to and appreciate your further feedback.", " This paper proves that the adaptive temporal-difference (TD) learning with ReLU neural network approximation converges when the width of the network is sufficiently large. Moreover, this paper proves that adaptive TD is faster than TD with the ReLU neural network approximation. Strengths:\n\n- To the best of my knowledge, Theorem 1, the main result of this paper, is significant and highly non-trivial. In particular, this paper has extended the analysis of adaptive TD with linear function approximation to multiple layers neural network approximation.\n\nWeakness:\n\n- The analysis is limited to ReLU networks.\n\n- It is not clear if the bound in Theorem 1 is tight. The authors might consider running some experiments to illustrate this.\n\n- The writing of this paper can be further improved. In particular, the current version of Section 2.4 is a little bit hard to follow. This paper also has a complicated notation system; however, some notations are used without first defining them. 
Examples include $\\hat{f}$ in Definition 1 and $\\mathbf{B}$ in Lemma 1.\n\n - Might the authors briefly explain why the analysis is limited to ReLU networks?\n\n- Might the authors briefly explain the tightness of the bound in Theorem 1?\n The authors have adequately addressed the limitations and potential negative societal impact of this paper.", " This paper studies the adaptive TD method in the nonlinear function approximator setting, specifically MLPs with ReLU activations. It essentially bridges two prior works, one of which establishes the convergence of TD under MSPBE to the global optimum, while the other proves accelerates rates of convergence for adaptive TD in the linear setting. The current submission combines these the two and proves accelerated rates of convergence to the minimizer of MSPBE for adaptive TD in the nonlinear setting. The paper provides a natural next step for a recent body of work that have analyzed finite-time convergence rate for (projected) TD methods in the non-linear setting. In terms of originality and significance, the main result itself, if somewhat incremental, is novel and useful. Beyond the result itself, the authors introduce a new proof technique that may prove useful beyond the scope of this paper. The paper is technically polished with a clear exposition of the results and its derivations (I did look at the proofs, but not in great detail). Overall, the paper is well written with a clear motivation and contribution. \n\nMy main issue with this paper is that it does not delineate the difference between MSBE (which is not analyzed) and MSPBE (for which the results hold). This is extremely important because the proof technique that this paper uses (introduced in prior work) relies on an implicit linearization. The approximation error induced by linearization grows with diameter of the projection set. This is the main limitation of this type of analysis, because tight convergence rates requires the projection set to be small, and hence the minimiser to which it converges is often very different from the minimizer of MSBE. Consequently, these result are really only relevant for MSPBE. While understanding MSPBE is in itself useful, it important to delineate the contribution made to not con that the paper fails to make. With that said, this can be resolved by adding a paragraph or two. - The bounds obtain depends quadratically (in fact, cubically) on the diameter of the set V. The authors avoid this issue by carefully setting a whole host of magic constants to have them magically vanish, yielding the final bound. However, it is unlikely that any practical application can actually make these choices. Do the authors see an avenue forward for this line of research that avoids such strong depends on the diameter of the projection set?\n\n- The authors mention a novel proof technique, do you believe this technique can be useful in other cases as well? \n\n- Do the authors have an intuition of how hard it would be to extend these result to also include policy improvement iteration? See above. Overall, the authors are very clear about assumptions under which their results hold.", " Disclaimer: I do not have enough background to check the technical details and hence basically make a guess. There may be another reviewer (sorry for my late notice) for this paper. I will go through other reviews carefully and adjust my score accordingly.\n\nThe paper is purely theoretical. It provides the finite-time analysis for the adaptive TD with multi-layer ReLU networks. 
\n Strength: \n\nThis should be the first work in analyzing the convergence of adaptive TD with multi-layer ReLU NNs.\n\nWeaknesses: \n\nMy main concern is the work is likely to be incremental, in the sense that the proof framework and the method of proving technical assumptions and lemmas are quite similar to those of existing papers as cited by the authors, especially [7,18,48]. I expect an explicit discussion about the differences in the techniques used for proofs. \n\nThe main theoretical result is not informative; it is known that usually adaptive learning rate strategies can improve the convergence rate by choosing appropriate hyper-parameters (as the non-adaptive/plain version is simply a special case). But the result does not provide any additional insight into what designs/hyper-parameter choices in the adaptive method can help improve convergence. NA. NA", " This paper establishes the finite-time analysis for the adaptive TD with multi-layer ReLU network approximation whose samples are generated from a Markov decision process. Overall, the paper includes interesting results. However, it needs modifications to improve its quality.\n Strengths: New analysis of TD learning with neural network. The established theory shows that if the width of the deep neural network is large enough, the adaptive TD using neural network approximation can find the (optimal) value function with high probabilities under the same iteration complexity as TD in general cases. \n\nWeaknesses: \nIt includes the projection step, which is in general hard to perform. \nThe overall presentation is sloppy. 1) \phi is not defined in the definition of the neural approximated Q in line 124. How can we understand the feature vector in the neural approximation? Please clarify it in the paper.\n2) Why do you use boldface for value function while the normal font for Q-function?\n3) Why do we have square root of m in the definition of f? Please clarify it in the paper.\n4) The norm || ||_F in eq (5) is not defined. \n5) The set B in Lemma 1 is not defined. \n6) It is in general hard to understand the intuitive meaning of Theorem 1. For example, what is the meaning of the objective function being small? How can we interpret the solution theta^*? Moreover, it is not easy to conclude that the bound is small in practice. More discussions are needed to clarify the above points. \n7) Overall, the presentation is sloppy. There are many math expressions whose definitions are lacking. \n8) There is notational explosion. It is recommended to simplify the notations. It seems that it is hard to conclude meaningful results in practice from the analysis in this paper. " ]
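The rebuttal above argues that the projection in the algorithm is simple because the constraint set factorizes over layers. A minimal sketch of that layer-wise Frobenius-ball projection, with the radius-omega rescaling made explicit (the layer shapes and omega below are illustrative):

```python
import numpy as np

def project_onto_V(theta, theta_init, omega):
    """Layer-wise projection onto {theta : ||W_l - W_l^init||_F <= omega for all l}."""
    projected = []
    for W, W0 in zip(theta, theta_init):
        diff = W - W0
        n = np.linalg.norm(diff)               # Frobenius norm of this layer's deviation
        projected.append(W if n <= omega else W0 + (omega / n) * diff)
    return projected

theta0 = [np.zeros((3, 3)), np.zeros((2, 3))]  # W_l^init (random in practice)
theta  = [np.ones((3, 3)), 0.1 * np.ones((2, 3))]
for W, W0 in zip(project_onto_V(theta, theta0, omega=1.0), theta0):
    print(np.linalg.norm(W - W0))              # 1.0 and ~0.245: both within the radius
```

Because the constraint decouples across layers, the projection is a closed-form per-layer rescaling with negligible overhead compared to the gradient computation itself.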
[ -1, -1, -1, -1, -1, -1, 6, 7, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 1, 4 ]
[ "nips_2022_GCNIm4cKoRx", "xZFg2ravERtE", "c75PLnSOj-s", "gevetM6LZLI", "PNOdrntt5xJ", "4-wA-BkHQXk", "nips_2022_GCNIm4cKoRx", "nips_2022_GCNIm4cKoRx", "nips_2022_GCNIm4cKoRx", "nips_2022_GCNIm4cKoRx" ]
nips_2022_Yul402KcD5d
Multi-Granularity Cross-modal Alignment for Generalized Medical Visual Representation Learning
Learning medical visual representations directly from paired radiology reports has become an emerging topic in representation learning. However, existing medical image-text joint learning methods are limited by instance or local supervision analysis, ignoring disease-level semantic correspondences. In this paper, we present a novel Multi-Granularity Cross-modal Alignment (MGCA) framework for generalized medical visual representation learning by harnessing the naturally exhibited semantic correspondences between medical image and radiology reports at three different levels, i.e., pathological region-level, instance-level, and disease-level. Specifically, we first incorporate the instance-wise alignment module by maximizing the agreement between image-report pairs. Further, for token-wise alignment, we introduce a bidirectional cross-attention strategy to explicitly learn the matching between fine-grained visual tokens and text tokens, followed by contrastive learning to align them. More importantly, to leverage the high-level inter-subject relationship semantic (e.g., disease) correspondences, we design a novel cross-modal disease-level alignment paradigm to enforce the cross-modal cluster assignment consistency. Extensive experimental results on seven downstream medical image datasets covering image classification, object detection, and semantic segmentation tasks demonstrate the stable and superior performance of our framework.
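The instance-wise alignment module described in the abstract maximizes agreement between true image-report pairs against in-batch negatives. A plausible sketch of such a symmetric InfoNCE objective follows; the temperature `tau=0.07` and the 128-dimensional embeddings are illustrative choices, not values confirmed by the abstract.

```python
import numpy as np

def xent_diag(logits):
    logits = logits - logits.max(axis=1, keepdims=True)       # numerical stability
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(logits))
    return -log_p[idx, idx].mean()                            # true pairs sit on the diagonal

def instance_alignment_loss(img_emb, txt_emb, tau=0.07):
    """Symmetric InfoNCE between paired image/report embeddings."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / tau                                # (B, B) cosine similarities
    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))    # image-to-text + text-to-image

rng = np.random.default_rng(0)
z_img, z_txt = rng.standard_normal((8, 128)), rng.standard_normal((8, 128))
print(instance_alignment_loss(z_img, z_txt))                  # chance-level for random features
```

Minimizing this loss pulls each image embedding toward its own report and away from the other reports in the batch, which is the instance-level granularity the framework then complements with token- and disease-level alignment.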
Accept
A multi-granularity cross-modal alignment framework is proposed, which learns data representations from medical scans paired with the corresponding text reports. The reviewers find the approach novel and the paper well-written with an overall clear structure. Extensive experimental results show the effectiveness of the proposed model and experimental details are provided. After the discussion with the authors, all reviewers voted towards acceptance of the paper.
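The disease-level alignment the abstract and meta-review refer to enforces cross-modal cluster assignment consistency over a set of learned prototypes. A SwAV-style sketch of what that could look like is below; the prototype count, the Sinkhorn temperature, and the swapped-prediction loss form are assumptions of this sketch, not necessarily the paper's exact formulation.

```python
import numpy as np

def sinkhorn(scores, n_iters=3, eps=0.05):
    """Balanced soft cluster assignments Q of shape (B, K) from prototype scores."""
    Q = np.exp(scores / eps).T        # (K, B)
    Q /= Q.sum()
    K, B = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=1, keepdims=True); Q /= K   # each prototype receives mass 1/K
        Q /= Q.sum(axis=0, keepdims=True); Q /= B   # each sample carries mass 1/B
    return (Q * B).T                  # rows sum to 1

rng = np.random.default_rng(0)
proto = rng.standard_normal((10, 128))              # 10 hypothetical disease-level prototypes
z_img = rng.standard_normal((16, 128))
z_txt = rng.standard_normal((16, 128))
for z in (proto, z_img, z_txt):
    z /= np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarities below

q_img = sinkhorn(z_img @ proto.T)                   # image-side cluster assignments
p_txt = z_txt @ proto.T                             # text-side prototype logits
log_p = p_txt - np.log(np.exp(p_txt).sum(axis=1, keepdims=True))
loss = -(q_img * log_p).sum(axis=1).mean()          # text branch predicts image assignments
print(loss)
```

Symmetrizing this loss (images also predicting text-side assignments) would enforce the consistency constraint: paired scans and reports should fall into the same high-level (e.g., disease) cluster even though they come from different modalities.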
train
[ "Vmi1_HERtqZ", "mrdTA_98M_", "qVUocUgDYEK", "xrkjDSUqGCc", "0nFuglSgXcZ", "l_IOABEa1qI", "nEYk53267b3", "FhN9uD6Q4m", "ON7N8BsTmho", "7gg7j4H-KsM", "FN6dNtlrMUZ" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewer ZGvT,\n\nThanks a lot for your time and valuable feedback! We will include your suggested experimental results in our paper. ", " Thank you for the response. I appreciate the extra ablation study regarding the dense prediction task. It is indeed an interesting finding that further corroborates the claims made in the paper. I would suggest finding a way to include it in the eventual camera-ready version.\nThe error bars computed using the approach you mentioned seem to be significantly lower than the margin by which your approach outperforms GLoRIA, so I would not hold it against the performance itself, however, as a general principle, you should include error bars in results tables, particularly when comparing against other works. \nI am inclined to agree on the novelty of the disease-level cross-modal prototypical alignment module, as well as the overall framework, and that the combination of the three components is not trivial from a technical standpoint. \nOtherwise, thank you for answering the other questions. I have also noted the additional ablation performed at the request of reviewer RZTv, and the reply regarding the comparison with existing works. \nI am inclined to keep the original rating.", " Dear reviewer RZTv,\n\nThank you for your valuable comments again. We have responded to your comments as below.\nWe are looking forward to your feedback or any further discussion on your concerns.\n", " We sincerely thank you for the comments. We would like to address your questions below one by one.\n\n> Q1: The authors incorporate several well-known technologies, including instance-wise alignment, fine-grained token-wise alignment, and disease-level alignment, into contrastive learning to enhance the generalizability of learned visual representations. In this case, which parts are new? Thus, the overall novelty needs to be strengthened.\n\nThank you for your comments. We would like to highlight the overall novelty of our proposed approach from the following three aspects.\n\n- i. Our MGCA framework is designed **for the first time**, to leverage multi-granularity semantic correspondence for generalized medical visual representation learning to facilitate versatile downstream tasks. For the novelty, our whole framework design based on the prior knowledge of multi-granularity is not trivial and needs careful balance among each component to make full use of their complementary properties at three different levels to learn better visual representations. We thereby carefully design different unsupervised learning strategies (e.g., contrastive losses, cross-attention, clustering with Sinkhorn-Knopp), instead of simply incorporating existing techniques on the contrastive learning paradigm. Contrastive learning is only one of the manners for achieving cross-modal alignment in our framework. Such framework novelty **is acknowledged** by reviewers ZGvT and D1Wu.\n\n- ii. We design a novel disease-level cross-modal prototypical alignment (CPA) module to leverage the high-level inter-subject relationship semantic correspondences by enforcing the cross-modal cluster assignment consistency. To the best of our knowledge, this is the first time of its kind in cross-modality learning. Such CPA design novelty **is also acknowledged** by reviewers ZGvT and D1Wu.\n \n- iii. For the experimental design, as we know, **label-efficient learning** is significantly essential in the medical image field. 
Our proposed framework can effectively leverage multi-granularity cross-modal correspondence for medical image pre-training and significantly boost the performance of several downstream tasks even trained with only 1% annotated data. Also, according to our ablation study in Table 4, these three different granularity correspondences all benefit downstream tasks and these benefits are complementary. \n \n Based on the above three points, we believe that our work will bring new insights for medical imaging and machine learning researchers on the task of medical image and report processing, as well as assist the development of the relevant clinical (radiology) fields. \n\n\n> Q2: In Table 1, it can be seen that the proposed model performs better than other methods. However, because the proposed model has been pre-trained on a large-scale medical image and report dataset, whether other methods have been trained on the large dataset?\n\n*A*: Thanks for your comments. Our experiment comparison is fair and based on a similar pre-training strategy on large-scale medical image and report datasets. The methods in Table 1 are **all pre-trained** on two large-scale medical image and report datasets, as indicated by “pre-trained on CheXpert” and “pre-trained on MIMIC-CXR”. Note that the results of other methods on CheXpert and RSNA are from the original paper [1] [2]. The results of ConVIRT [1] on the RSNA dataset are from the reimplemented results in GLoRIA [2]. The results on COVIDx are implemented by ourselves. \n\nAlso, for a fair comparison with the main competing approach GLoRIA, we also evaluated a variant of our model that uses the same image encoder and pre-processing step as GLoRIA did. This is also acknowledged by Reviewer ZGvT, thus our experimental comparisons are extensive and fair.\n\n\n> Q3: The generated visual and text token embeddings are projected into normalized lower-dimensional embeddings, and the dimension d is set to 128. How to set the dimension? The parameter study should be included.\n\n*A*: The embedding dimension d is set to 128, following the previous work [3][4]. Per your suggestion, we also conducted an ablation study on this hyperparameter and the results are shown in the following table. As we can see, d=128 performs better than other configurations in most of the settings. \n \n| | | ChexPert (AUC) | | | RSNA (AUC)|| \n| :--: | :--: | :--: | :--: | :--: | :--: | :--: |\n| d | 1% | 10% | 100% | 1% | 10% | 100% |\n| 32 | 88.2 | 88.8 | 88.9 | 88.5 | 89.2 | 90.3 |\n| 64 | 88.6 | **89.2** | 89.3 | 88.8 | 89.4 | 90.5 |\n| 128 | **88.8** | 89.1 | **89.7** | **89.1** | **89.9** | **90.8** |\n| 256 | 88.2 | 88.7 | 88.8 | 87.6 | 89.5 | 90.5 |\n| 512 | 88.5 | 88.9 | 89.0 | 88.9 | 89.4 | 90.7 |\n", " > Q4: From the results in Table 1, it can be seen that the proposed model obtains large improvements over DSVE and VSE++. Please explain the reasons for these situations.\n\n*A*: We think the main reason for this situation is the different learning objectives. DSVE and VSE++ optimize a triplet ranking loss to learn image-text alignment. These methods only focus on minimizing the distance between the representations of true image and text pairs, without exploring the relationship with other samples. Therefore, as mentioned in [2], when applying the triplet ranking loss to medical images with high inter-class visual similarities, these methods can easily overfit by learning irrelevant patient/case-specific visual cues. 
These misleading cues will degrade its performance when transferring to downstream tasks.\n\nIn contrast, instance-wise contrastive learning (including ConVIRT, the global loss of GLoRIA, and the ITA of our method) optimizes an InfoNCE loss to learn instance-wise alignment, which contrasts positive image-text pairs against negative image-text pairs. As mentioned in relevant papers [4], InfoNCE maximizes the mutual information between true image-text pairs, which enables the image encoder to learn more transferable features. \n\n\n> Q5: It is better to provide more discussions on using multi-granularity cross-modal alignment.\n\n*A*: Our motivation for this paper is to leverage the **naturally existing multi-granularity semantic correspondence** between medical images and radiology reports to learn better medical visual representations. As illustrated in Q1, we notice that this problem is important yet challenging in the medical image domain.\n To fill this gap, we are the **first** to propose the MGCA framework and achieve **state-of-the-art performance** on several downstream tasks (i.e., classification, object detection, and semantic segmentation) with limited annotated data. The ablation study results in Table 4 also support our idea that the multi-granularity alignment significantly boosts the performance of the learned medical visual representations.\n\nReferences:\n\n[1]. Y. Zhang, H. Jiang, Y. Miura, C. D. Manning, and C. P. Langlotz. Contrastive learning of medical visual representations from paired images and text. arXiv preprint arXiv:2010.00747, 2020.\n\n[2]. S.-C. Huang, L. Shen, M. P. Lungren, and S. Yeung. Gloria: A multimodal global-local representation learning framework for label-efficient medical image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3942–3951, 2021.\n\n[3]. Z. Wu, Y. Xiong, S. Yu, and D. Lin. Unsupervised feature learning via non-parametric instance discrimination. In CVPR, 2018. Updated version accessed at:\nhttps://arxiv.org/abs/1805.01978v1.\n\n[4]. K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729–9738, 2020.\n\n[5]. A. v. d. Oord, Y. Li, and O. Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.\n", " We highly appreciate your insightful reviews and positive comments on the novelty of our framework, the extensive evaluation, and the paper presentation. Our responses are as follows.\n\n> Q1: Mostly, I believe the weaknesses in this work are in some missing pieces.\n\n*A*: Thank you very much for pointing out these relevant references! We have added and discussed these related references in the revised paper. Please feel free to take a look. \n\n> Q2: It would be interesting to discuss in the paper how the ITA level behaves with the existence of CPA. These two losses have a somewhat contradictory behavior. The contrastive loss in ITA pushes the different instances (negatives) apart from positive instances. However, CPA clusters instances in the feature space to learn disease (prototype) clusters. CPA disregards the instance's identity and tries to capture high-level unifying topic models. How do they behave then? It seems to me in Table 4 that the addition of the CTA loss is essential for improved performance (rows 2 and 4). 
But adding CPA and ITA alone (in row 3) seems to perform a little worse than row 2. Please discuss how you imagine the losses are behaving.\n\n*A*: We think the joint optimization of ITA and CPA leads to a somewhat balanced embedding space that **retains the local smoothness** of instances and the **semantic structure** of the whole dataset. In the training stage, we optimize ITA to retain the property of cross-modal smoothness. This strategy has been proven effective in several previous works [1][2][3]. As discussed in [4], we think the local smoothness of instance features is significantly important for transferring to downstream tasks. However, this cross-modal smoothness ignores the semantic structure in the dataset. This is why we also incorporate CPA into our framework. CPA can provide a constraint (or regularization) for the embedding space and encode more high-level semantic information into it. Thus, the learned features can demonstrate more generalized and transferable performance.\n\nFor the results in Table 4, we notice that row 3 (ITA+CPA) is better than row 2 (ITA + CTA) on the CheXpert dataset while performing slightly worse on the RSNA dataset. We believe there might be two reasons leading to this phenomenon: (1) The downstream task of the RSNA dataset (binary classification) is **easier** than that of CheXpert (multi-label classification). In this case, the high-level semantic structure learned by CPA does not significantly affect the downstream performance on the RSNA dataset. (2) The disease-level semantic structures (prototypes) of the MIMIC-CXR dataset are more **similar** to CheXpert than to RSNA. Thus, the learned high-level semantic information might be more applicable to CheXpert than to RSNA. This might also partly explain why CPA trained on the MIMIC-CXR dataset is more powerful on CheXpert than on RSNA.\n\nReferences:\n\n[1]. J. Li, P. Zhou, C. Xiong, and S. C. Hoi. Prototypical contrastive learning of unsupervised representations. arXiv preprint arXiv:2005.04966, 2020.\n\n[2]. Y. Guo, M. Xu, J. Li, B. Ni, X. Zhu, Z. Sun, and Y. Xu. Hcsc: Hierarchical contrastive selective coding. arXiv preprint arXiv:2202.00455, 2022.\n\n[3]. Y. Li, P. Hu, Z. Liu, D. Peng, J. T. Zhou, and X. Peng. Contrastive clustering. In 2021 AAAI Conference on Artificial Intelligence (AAAI), 2021.\n\n[4]. N. Zhao, Z. Wu, R. W. H. Lau, and S. Lin. What makes instance discrimination good for transfer learning? arXiv preprint arXiv:2006.06606, 2020.\n", " We highly appreciate your insightful comments and positive feedback on the extensive evaluation, the novelty of our proposed framework, and the detailed implementation. We would like to address your questions below one by one. \n\n> Q1: Lack of an ablation study involving individual learning tasks in an object detection or segmentation setting (as was done under the classification setting). Their approach does seem to more convincingly outperform GLoRIA in an object detection and segmentation setting (compared to the classification featuring the ResNet-50 encoder), but it would have been interesting to see an ablation study that explicitly shows how relevant, for instance, CTA is in a dense prediction setting.\n\n*A*: Thanks for your constructive suggestion. We have conducted an extra ablation study to validate the importance of CTA under dense prediction settings (medical image segmentation) on the SIIM dataset. The results are shown as follows. 
\n\n| | Training tasks | | | SIIM (Dice) | | \n| :--: | :--: | :--: | :--: | :--: | :--: |\n| ITA | CTA | CPA | 1% | 10% | 100% |\n| &#10004; | | | 25.0 | 43.2 | 59.9 |\n| &#10004; | &#10004; | | 47.6 | 55.4 | 61.3 | \n| &#10004; | | &#10004; | 37.4 | 46.7 | 55.0 |\n| &#10004; | &#10004; | &#10004; | **49.7** | **59.3** | **64.2** | \n\nIt is observed that CTA and CPA can both improve semantic segmentation performance when combined with ITA. When we train ITA, CTA, and CPA jointly, we achieve the best performance. \n\n> Q2: Lack of error bars may be a potential issue, since the differences in performance between their approach and baselines (in some settings) are sometimes small.\n\n*A*: Thanks for your suggestions. We have re-run the downstream task experiments (linear classification) of our method three times and calculated the means and standard deviations. \nWe show the error bars of our experimental results in the following table.\n\n| | 1% | 10% | 100% |\n| :--: | :--: | :--: | :--:|\n| CheXpert (AUC) | 88.7 $\\pm$ 0.18 | 89.13 $\\pm$ 0.16 | 89.5 $\\pm$ 0.21 |\n| RSNA (AUC) | 89.03 $\\pm$ 0.11 | 89.92 $\\pm$ 0.14 | 90.77 $\\pm$ 0.04 | \n| COVIDx (ACC) | 73.9 $\\pm$ 0.64 | 84.75 $\\pm$ 0.22 | 92.85 $\\pm$ 0.50 |\n\nAccording to the results, we notice that the error bars are relatively small, which shows that our proposed method performs stably on these downstream tasks. \n\n\n> Q3: How is CTA implemented in cases when ResNet50 is used as the image encoder? In what way are visual tokens obtained? To what degree would you say the difference in the way the tokens are obtained can account for the difference in performance between the ResNet50 and ViT variant of your approach?\n\n*A*: As indicated in GLoRIA [1], when ResNet50 is used as the image encoder, we take the feature maps ($f \\in \\mathbb{R}^{1024\\times19\\times19}$) of the third bottleneck building block as token-level features. Then, we reshape the token-level features into $f \\in \\mathbb{R}^{1024\\times361}$, where $361$ is the total number of image regions.\n\nWe think the different network architecture, rather than the token acquisition strategy, mainly contributes to the performance difference between the ResNet50 and ViT variants of our approach. In [2], the authors find that using a convolutional stem in ViT achieves much better performance than the ResNet50 counterpart, which also supports our view that the difference in architecture design plays an important role in the performance gap. Specifically, ViT-based methods can retain more spatial and global information than ResNet [3], and thus achieve better performance. \n\n> Q4: Does the use of CTA give an even bigger performance improvement in a segmentation or object detection setting (compared to classification)?\n\n*A*: The use of CTA can give a larger performance improvement in a segmentation setting. The specific ablation study can be seen in Q1. In general, both CTA and CPA can enhance the dense prediction performance of downstream tasks.\n", " > Q5: This particular setup for a contrastive learning based approach for visual representation learning (also in the context of paired radiographs and radiology reports), as well as the idea and motivation behind the token-wise cross-modal alignment objective, at its core, seem to be inspired by ConVIRT and/or GLoRIA, so the work can, to a degree, be seen as an extension of existing work and not entirely novel.*\n\nResponse_5: Thank you for your comments. We are glad to restate the contributions of our proposed approach. 
(1) Our MGCA framework is designed, **for the first time**, to leverage multi-granularity semantic correspondence for generalized medical visual representation learning to facilitate versatile downstream tasks. The whole framework design, based on the prior knowledge of multi-granularity, is not trivial and requires a careful balance among the components to make full use of their complementary properties at three different levels to learn better visual representations. We thereby carefully design different unsupervised learning strategies (e.g., contrastive losses, cross-attention, clustering with Sinkhorn-Knopp). (2) As you mentioned, the design of the disease-level cross-modal prototypical alignment (CPA) module, which leverages the high-level inter-subject relationship semantic correspondences by enforcing cross-modal cluster assignment consistency, is interesting and novel. To the best of our knowledge, this is the first of its kind in cross-modality learning. (3) For the experimental design, as we know, **label-efficient learning** is particularly important in the medical image field. Our proposed framework can effectively leverage multi-granularity cross-modal correspondence for medical image pre-training and significantly boost the performance of several downstream tasks even when trained with only 1% of the annotated data. Based on the above three points, we believe that our work will bring new insights for medical imaging and machine learning researchers on the task of medical image and report processing, as well as assist the development of the relevant clinical (radiology) fields. \n\nReferences:\n\n[1]. S.-C. Huang, L. Shen, M. P. Lungren, and S. Yeung. Gloria: A multimodal global-local representation learning framework for label-efficient medical image recognition. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3942–3951, 2021.\n\n[2]. T. Xiao, M. Singh, E. Mintun, T. Darrell, P. Dollar, and R. Girshick. Early convolutions help transformers see better. In NeurIPS, 2021.\n\n[3]. A. Dosovitskiy, et al. \"An image is worth 16x16 words: Transformers for image recognition at scale.\" arXiv preprint arXiv:2010.11929 (2020).\n", " This paper presents a novel Multi-Granularity Cross-modal Alignment (MGCA) framework to seamlessly leverage the multi-granular semantic correspondences between medical images and radiology reports for generalized medical visual representation learning. Extensive experimental results on different downstream datasets show that the proposed framework achieves strong performance with limited annotated data. Strengths: \n(1) A multi-granularity cross-modal alignment framework is proposed for learning generalized medical visual representations from free-text radiology reports.\n(2) The overall structure is clear.\n(3) The experimental results show the effectiveness of the proposed model.\n \nWeaknesses:\n(1) The authors incorporate several well-known technologies, including instance-wise alignment, fine-grained token-wise alignment, and disease-level alignment, into contrastive learning to enhance the generalizability of learned visual representations. In this case, which parts are new? Thus, the overall novelty needs to be strengthened.\n(2) In Table 1, it can be seen that the proposed model performs better than other methods. However, because the proposed model has been pre-trained on a large-scale medical image and report dataset, have the other methods also been trained on such a large dataset? 
\n(3) The generated visual and text token embeddings are projected into normalized lower-dimensional embeddings, and the dimension d is set to 128. How to set the dimension? The parameter study should be included. \n\n (1) The authors incorporate several well-known technologies, including instance-wise alignment, fine-grained token-wise alignment, and disease-level alignment, into contrastive learning to enhance the generalizability of learned visual representations. In this case, which parts are new? Thus, the overall novelty needs to be strengthened.\n(2) In Table 1, it can be seen that the proposed model performs better than other methods. However, because the proposed model has been pre-trained on a large-scale medical image and report dataset, have the other methods also been trained on such a large dataset? \n(3) The generated visual and text token embeddings are projected into normalized lower-dimensional embeddings, and the dimension d is set to 128. How to set the dimension? The parameter study should be included. \n(4) From the results in Table 1, it can be seen that the proposed model obtains large improvements over DSVE and VSE++. Please explain the reasons for these situations. \n(5) It is better to provide more discussions on using multi-granularity cross-modal alignment. See weakness. ", " The authors present a visual representation learning approach based on contrastive learning of correspondences between visual radiographs and textual radiology reports at multiple levels of granularity using three different learning objectives. The first objective features standard cross-modal alignment between radiographs and their corresponding radiology report pairs, by which they enforce instance-level agreement. The second objective features a cross-attention approach that enforces the alignment between the visual token embeddings (local representations obtained using the Vision Transformer) and aggregated text token representations (where the visual token is used as a query for attention-based aggregation of word token representations), and likewise, the alignment between text tokens and aggregated visual tokens. This enforces alignment between mutually informative local image regions in the radiograph and parts of the textual radiology report, and the authors argue that this region-level alignment improves the performance of the resulting visual representations on downstream tasks that involve dense predictions. The final objective features alignment on a higher-than-instance level, by enforcing consistent cross-modal clustering of representations of corresponding visual and text data. This implicitly encourages the radiographs and reports that share high-level semantics to have similar representations, regardless of instance-level pairing and modality. The authors also demonstrate that pretraining on in-domain images (medical) is important for performance on downstream tasks, and that general-domain pretraining is not as effective. Strengths:\n- The paper is clearly written and well structured.\n- Implementation details and reproducibility info are clearly provided, with the appendix containing additional ablation studies justifying parameter and design choices.\n- Extensive experiments (both qualitative and quantitative) as well as ablation studies. The choice of downstream tasks appears to be justified given the stated focus on learning visual representations, and they seem diverse enough to put different individual aspects of their approach to the test. 
The comparison with the main competing approach, GLoRIA, seems fair, as the authors also evaluated a variant of their model that uses the same image encoder as GLoRIA as well as the same pre-processing step.\n- The idea and implementation of the disease-level cross-modal alignment module in this setting, to the best of my knowledge, can be seen as novel.\n\nWeaknesses:\n- Lack of an ablation study involving individual learning tasks in an object detection or segmentation setting (as was done under the classification setting). Their approach does seem to more convincingly outperform GLoRIA in an object detection and segmentation setting (compared to the classification featuring the ResNet-50 encoder), but it would have been interesting to see an ablation study that explicitly shows how relevant, for instance, CTA is in a dense prediction setting.\n- Lack of error bars may be a potential issue, since the differences in performance between their approach and baselines (in some settings) are sometimes small.\n- This particular setup for a contrastive learning based approach for visual representation learning (also in the context of paired radiographs and radiology reports), as well as the idea and motivation behind the token-wise cross-modal alignment objective, at its core, seem to be inspired by ConVIRT and/or GLoRIA, so the work can, to a degree, be seen as an extension of existing work and not entirely novel. - How is CTA implemented in cases when ResNet50 is used as the image encoder? In what way are visual tokens obtained? To what degree would you say the difference in the way the tokens are obtained can account for the difference in performance between the ResNet50 and ViT variant of your approach?\n- Does the use of CTA give an even bigger performance improvement in a segmentation or object detection setting (compared to classification)? The authors list the lack of consideration of retrieval tasks as one of the limitations of their work. However, the outlined scope of their experiments is in my opinion reasonable. With regards to potentially negative societal impacts, the authors mention the possible use of sensitive data in their framework. This is a general concern when medical data is used and is therefore not particularly specific to their work. 
The paper reports results on multiple dataset benchmarks, which include multiple types of tasks (classification, detection, and segmentation). The work also evaluates two types of image encoders (ResNet & ViT).\n- The paper is clearly written, and the language used is easy to understand. \n\nWeaknesses:\nMostly, I believe the weaknesses in this work are in some missing pieces:\n1- Missing relevant references from the literature. The following are some works that perform contrastive learning on medical images:\n* Taleb, Aiham, et al. \"ContIG: Self-supervised Multimodal Contrastive Learning for Medical Imaging with Genetics.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n* Chaitanya, Krishna, et al. \"Contrastive learning of global and local features for medical image segmentation with limited annotations.\" Advances in Neural Information Processing Systems 33 (2020): 12546-12558.\n* Han, Yan, et al. \"Pneumonia detection on chest x-ray using radiomic features and contrastive learning.\" 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI). IEEE, 2021.\n* Feng, Ruibin, et al. \"Parts2Whole: Self-supervised contrastive learning via reconstruction.\" Domain Adaptation and Representation Transfer, and Distributed and Collaborative Learning. Springer, Cham, 2020. 85-95.\n* Xu, Jiarui, et al. \"GroupViT: Semantic Segmentation Emerges from Text Supervision.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\nThere are quite a lot of works that are similar along this line; it would be great to cite the most relevant ones from the medical imaging literature. \nAlso, I believe some references related to the methodological part (mainly cross-attention) are missing, such as:\nChen, Yen-Chun, et al. \"Uniter: Learning universal image-text representations.\" (2019).\nLu, Jiasen, et al. \"Hierarchical question-image co-attention for visual question answering.\" Advances in neural information processing systems 29 (2016).\n\n It would be interesting to have a discussion in the paper on how the ITA level behaves with the existence of CPA. These two losses have a somewhat contradictory behavior. The contrastive loss in ITA pushes the different instances (negatives) apart from positive instances. However, CPA clusters instances in the feature space in order to learn disease (prototype) clusters. CPA disregards the identity of the instance and rather tries to capture high-level unifying topic models. \nHow do they behave then? It seems to me in Table 4 that the addition of the CTA loss is essential for improved performance (rows 2 and 4). But adding CPA and ITA alone (in row 3) seems to perform a little worse than row 2.\nPlease discuss how you imagine the losses are behaving. The authors list the limitations adequately " ]
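For concreteness, the instance-wise image-text alignment (ITA) objective discussed throughout the responses above is, in essence, a symmetric InfoNCE loss over paired embeddings. The following is a minimal PyTorch-style sketch; the function name, the temperature value, and the use of in-batch negatives are illustrative assumptions rather than the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def ita_loss(img_emb, txt_emb, temperature=0.07):
    # img_emb, txt_emb: (B, d) projected embeddings (d = 128 in the thread above);
    # row i of each tensor is assumed to form a positive image-report pair,
    # and all other rows in the batch serve as negatives.
    img_emb = F.normalize(img_emb, dim=-1)        # L2-normalize, as described above
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature  # (B, B) scaled cosine similarities
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    # contrast in both directions (image-to-text and text-to-image) and average
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```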
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "mrdTA_98M_", "nEYk53267b3", "ON7N8BsTmho", "ON7N8BsTmho", "ON7N8BsTmho", "FN6dNtlrMUZ", "7gg7j4H-KsM", "7gg7j4H-KsM", "nips_2022_Yul402KcD5d", "nips_2022_Yul402KcD5d", "nips_2022_Yul402KcD5d" ]
nips_2022_-3Pg7QNIF1S
An Embarrassingly Simple Approach to Semi-Supervised Few-Shot Learning
Semi-supervised few-shot learning consists in training a classifier to adapt to new tasks with limited labeled data and a fixed quantity of unlabeled data. Many sophisticated methods have been developed to address the challenges this problem poses. In this paper, we propose a simple but quite effective approach for predicting accurate negative pseudo-labels of unlabeled data from an indirect learning perspective, and then augmenting the extremely label-constrained support set in few-shot classification tasks. Our approach can be implemented in just a few lines of code using only off-the-shelf operations, yet it is able to outperform state-of-the-art methods on four benchmark datasets.
Accept
This paper aims to improve semi-supervised few shot learning by utilizing negative pseudo-labels. The authors report significant improvement over the previous methods in this setting. The reviewers originally had concerns about the significance of the results, but after the discussion period they all supported acceptance more than they supported rejection. Given the simplicity of the method, the size of the improvements, and the unanimous agreement from the reviewers, I support the acceptance of this paper. While the authors improved the paper significantly during the discussion stage, I would urge them to keep working on the presentation and writing for the camera-ready version. There are still writing mistakes throughout the paper, and the meaning of some of the sentences is not clear.
train
[ "i-52FAUiE-o", "8GejR_tbRcF", "VJn-iSHXyF", "F2PXPP-gfOT", "6tjlFl725k5", "85BShUdciPD", "Z-R_ws7GJW8", "kBgTkJ8pHHf0", "KghI6PaRzGO", "gXrrUoc9G1s", "KovXDvRR4R4", "_i89ciFLXVq", "1bpoBdoXfwh", "jjGSW4yO_z_", "BFq-3JfeWUk" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " After careful consideration of the rebuttal, the author's comments regarding my concerns, and the discussions pursued by other reviews, I will be maintaining my current recommendation for acceptance.", " Thank you so much for the valuable feedback and your recommendation to accept our work. We will clarify these mentioned points in the final version.", " Thanks to the author for the clarification. My original low rating comes from two aspects:\n1. It was unclear about what pre-training the network as in [38] means and made it suspicious as an incremental work to [38]\n2. I misinterpret \"using only positive pseudo-labels\" as \"not using negative pseudo-label at all, so not excluding the k-th class\", which makes it looks like a basic method with pseudo-label\n\nThe detailed responses from the author alleviate these concerns, thus, I will recommend a weak acceptance of the paper instead, but I will suggest the author clarify these points in the main paper.", " Thank you so much for the feedback and your recommendation to accept our work.", " I want to thank the authors for the rebuttal and the new significance test. My major concerns have been convincingly addressed. Therefore, I would recommend a Weak Accept.", " Thank you so much for the feedback. Below please find our responses to the additional comments.\n***\n*Comment_2: It is unclear to me why you want to emphasize that you use FC layers rather than LR. Is there any drawback using LR?*\n\nResponse_2: There is no drawback using LR, on the contrary, LR will bring higher accuracy. Our previous response just wanted you to notice that the final classifier used by [38] and ours is different (or even unfair), and this classifier has an impact on the classification accuracy of SSFSL tasks. In fact, in our original experiments, we chose to use FC only because it can be more easily coupled with the overall network training. In the rebuttal stage, thanks to your suggestion that we \"need to compare the baseline work [38] clearly to show a more convincing result\", we noticed that [38] was using the off-the-shelf classifier LR for the final classification. Moreover, we also find that LR leads to better classification accuracy as shown in these works, e.g., [38, 15, 31, 36]. Therefore, we replace the original FC classifier with LR for fair and sufficient experimental comparisons with [38], whose results are as presented in these tables above. As shown, our MUSIC with LR does further improve the SSFSL accuracy over our MUSIC with FC, which also significantly outperforms [38] in most cases (15 wins and 5 losses), especially for the realistic distractive SSFSL setting.\n\n***\n*Comment_3: For your \"only positive pseudo-labels\", do you mean you use loss (3) and ignore (5), but would still exclude the $k$-th class, i.e., the negative pseudo-label in the previous iteration?*\n\nResponse_3: Yes. We will clarify this statement in the final version. Thank you.", " Comment_1: Thanks for clarification\n\nComment_2: Thanks for the detailed result. However, it is unclear to me why you want to emphasize that you use FC layers rather than LR. 
Is there any drawback to using LR?\n\nComment_3: For your \"only positive pseudo-labels\", do you mean you use loss (3) and ignore (5), but would still exclude the k-th class, i.e., the negative pseudo-label from the previous iteration?\n\nComment_4: Thanks for the clarification\n", " *Comment_1: When you say pretraining the network as in [38], do you mean you use their method to pre-train the model, or do you mean you pre-train it in the same way but do not use pseudo-labels at all during pre-training?*\n\nResponse_1: When pre-training the network as in [38], we only pre-train it in the same way and do NOT use any pseudo-labels at all during pre-training. We will clarify this statement in the final version.\n***\n*Comment_2: The author needs to compare the baseline work [38] clearly to show a more convincing result.*\n\nResponse_2: We would like to point out that [38] employs logistic regression (LR) as the classifier, while our MUSIC uses a linear fully connected (FC) layer as the classifier. Many state-of-the-art SSFSL methods have chosen LR as the classifier for better accuracy, e.g., [38, 15, 31, 36]. Thus, following your comment, we change our original FC classifier to LR for fair and comprehensive comparisons with [38], and further conduct experiments on the distractive semi-supervised setup (cf. Table 3 of the paper). For [38] with the distractive setup, we use the source code released by its authors for experiments. The results are shown in the following tables.\n\nResponse-Table-1: _Comparisons of the basic semi-supervised few-shot setup._\n||_miniImageNet_|_miniImageNet_|_tieredImageNet_|_tieredImageNet_|_CIFAR-FS_|_CIFAR-FS_|_CUB_|_CUB_|\n|:----------------------:|:--------------:|:--------------:|:----------------:|:----------------:|:----------:|:----------:|:---------:|:---------:|\n||1-shot|5-shot|1-shot|5-shot|1-shot|5-shot|1-shot|5-shot|\n|[38]|73.12|83.28|78.99|86.76|**80.74**|87.16|**92.12**|94.52|\n|**Ours (Original FC)**|74.96|85.99|85.40|90.79|78.96|87.25|90.76|93.27|\n|**Ours (LR)**|**75.60**|**86.61**|**86.03**|**91.23**|79.67|**87.97**|91.69|**94.60**|\n\nResponse-Table-2: _Comparisons of the transductive setup._\n||_miniImageNet_|_miniImageNet_|_tieredImageNet_|_tieredImageNet_|_CIFAR-FS_|_CIFAR-FS_|_CUB_|_CUB_|\n|:----------------------:|:--------------:|:--------------:|:----------------:|:----------------:|:----------:|:----------:|:---------:|:---------:|\n||1-shot|5-shot|1-shot|5-shot|1-shot|5-shot|1-shot|5-shot|\n|[38]|72.39|83.27|77.48|86.84|**79.19**|**86.66**|**90.89**|94.36|\n|**Ours (Original FC)**|72.01|83.49|83.57|89.81|77.56|85.49|89.40|92.91|\n|**Ours (LR)**|**72.69**|**84.14**|**84.18**|**90.45**|78.24|86.18|90.23|**94.38**|\n\nResponse-Table-3: _Comparisons of the distractive semi-supervised setup._\n||_miniImageNet_|_miniImageNet_|_tieredImageNet_|_tieredImageNet_|\n|:----------------------:|:--------------:|:--------------:|:----------------:|:----------------:|\n||1-shot|5-shot|1-shot|5-shot|\n|[38]|68.09|77.71|73.47|82.09|\n|**Ours (Original FC)**|68.62|80.67|79.69|88.50|\n|**Ours (LR)**|**69.14**|**81.20**|**80.23**|**89.11**|\n\nAs shown, in most cases (15 wins and 5 losses), our MUSIC with LR outperforms [38] by a large margin, especially on the two most widely used benchmark datasets, i.e., _miniImageNet_ and _tieredImageNet_. In addition, even in the several cases where our method is worse than [38], the accuracy drop is basically no more than 1%. 
More importantly, in the realistic setting, i.e., distractive SSFSL, our method achieves great improvements over [38], which reveals its good robustness.\n***\n*Comment_3: Improvement from using only positive pseudo-labels to using both negative and positive pseudo-labels seems trivial.*\n\nResponse_3: We would like to emphasize that what our work proposes is a novel way of annotating pseudo-labels based on negative learning, rather than simply emphasizing the effect of negative pseudo-labels. In our method, both positive and negative pseudo-labels are obtained by our negative learning-based strategy. Compared with previous works, our MUSIC with only positive pseudo-labels (i.e., \"Ours (only pos)\" in Table 1 & 2 of the paper) can still outperform them, which validates the effectiveness of the positive pseudo-labels obtained by our negative learning-based pseudo-labeling strategy. Moreover, as noticed by the other three reviewers (cf. the paper strengths of their reviews), they also praised the novelty and contribution of our negative pseudo-labeling strategy.\n***\n*Comment_4: What if you apply only (6) and do not use (5)?*\n\nResponse_4: We conducted experiments applying only (6) without using (5), and report the results of the basic semi-supervised few-shot setup as follows.\n||_miniImageNet_|_miniImageNet_|_tieredImageNet_|_tieredImageNet_|_CIFAR-FS_|_CIFAR-FS_|_CUB_|_CUB_|\n|:--------:|:--------------:|:--------------:|:----------------:|:----------------:|:----------:|:----------:|:---------:|:---------:|\n||1-shot|5-shot|1-shot|5-shot|1-shot|5-shot|1-shot|5-shot|\n|Only (6)|72.84|83.21|83.54|88.36|76.83|85.47|89.50|92.15|\n|**Ours**|**74.96**|**85.99**|**85.40**|**90.79**|**78.96**|**87.25**|**90.76**|**93.27**|\n\nAs shown, although applying only (6) can still work, its results are significantly worse than those of our MUSIC.", " *Comment_1: In the ablation study, the authors only investigate whether the reject option $\\delta$ is effective or not. It would be better to further study what would be the optimal value of $\\delta$ and whether this hyperparameter is agnostic to different datasets' distributions.*\n\nResponse_1: As stated in the experiments (cf. Ln. 181 of the paper), the reject option $\\delta$ is set to $1/c$ (i.e., 0.20) as the default value for all experiments. We had indeed conducted ablation studies on different values of $\\delta$ to examine its sensitivity. The results on *miniImageNet* and *CUB* are reported as follows. We can clearly see that the results are not sensitive to the value of $\\delta$ and that $\\delta$ is agnostic to different datasets. We will also add these experiments and discussions to the final version, following your constructive suggestion. Thank you.\n| _Value of $\\delta$_ | _miniImageNet_ | _miniImageNet_ | _CUB_ | _CUB_ |\n|:------:|:--------------:|:--------------:|:--------:|:--------:|\n| $K$-shot | 1-shot | 5-shot | 1-shot | 5-shot |\n| 0.15 | 74.84 | 85.80 | 90.63 | 93.11 |\n| 0.20 | 74.96 | 85.99 | 90.76 | 93.27 |\n| 0.25 | 75.04 | 86.17 | 90.69 | 93.20 |\n| 0.30 | 74.85 | 85.73 | 90.61 | 93.17 |\n***\n*Comment_2: In Table 4, the authors conduct an interesting experiment for the order of negative and positive pseudo-label learning. I am curious what is the optimal number of iterations for the neg -> pos -> neg ... in the MUSIC method, or is it dependent on different datasets?*\n\nResponse_2: It is not dependent on different datasets. 
In this ablation study, since the task is 5-way few-shot classification, we fix the number of iterations to four rounds of \"neg->pos\" (or \"pos->neg\"). More specifically, in each iteration, our MUSIC returns the most confident negative pseudo-label of the current iteration and excludes it in the next iteration. Thus, after four rounds of negative pseudo-labeling by our MUSIC, all negative labels can be obtained (cf. Ln. 151-152 of the paper). Therefore, the number of iterations is determined by the number of classes (ways) in the few-shot classification task.", " *Comment_1: There are some typos, such as \"detailedly\" (261), \"can performs\" (208), and most important \"logits = f.forward(x)\" and \"loss = F.nll_loss(F.log_softmax(logits), labels)\" in the algorithm blocks where \"x\" should be \"S\" and \"labels\" should be \"targets\" to my understanding of the procedure.*\n\nResponse_1: Thank you so much for pointing out these issues. We will fix them in the final version.\n***\n*Comment_2: The algorithm block is in PyTorch which I personally appreciate but can be difficult to navigate if the reader doesn't have existing PyTorch proficiency; I believe a pseudo-code algorithm block would be more appropriate with the PyTorch code moved to the supplementary material. I recognize that this was done to reinforce the fact that the method can be implemented in a few lines of code.*\n\nResponse_2: Thank you for the constructive suggestion. We will follow the suggestion and polish the algorithm block in the final version.\n***\n*Comment_3: The choice to decouple the training of the feature extractor from the few-shot learning algorithm itself is an interesting one. It is often seen that end-to-end training of the extractor through episodic procedures where the few-shot updates are also applied results in better performance. More specifically, the updates shown in the PyTorch code block could be applied directly to f and F together where the inputs are just the raw images. Was this something the authors explored? If so, what was the outcome? If not, why not?*\n\nResponse_3: In fact, we follow [31] of the paper in decoupling the training of the feature extractor from the few-shot learning algorithm itself, and [31] empirically encourages not fine-tuning the feature extractor (aka the embedding model) during the meta-testing stage.", " *Comment_1: Although some negative labels may indeed be easier to predict than the positive one, there still exist hard negative classes that are equally hard to recognize. Those hard negative classes are in fact the most important information for learning a good classifier. I am missing how this work can handle this case.*\n\nResponse_1: Our strategy for dealing with hard negative classes is a series of successive exclusion operations. Specifically, after excluding easily predicted negative labels, the probability of falsely pseudo-labeling the remaining classes is reduced. Thus, the originally hard negative classes may be correctly pseudo-labeled in such a gradual negative-label-rejecting way. However, the reality is that all algorithms face hard (negative) classes that cannot be solved. For these unsolvable hard classes, our MUSIC adopts a conservative strategy, i.e., introducing the reject option to ensure the correctness of the pseudo-labels. On the other hand, we agree that hard negative classes are in fact the most important information for learning a good classifier. 
However, explicitly and effectively handling the hard negative classes is challenging and would require a sophisticated method; that would, in turn, destroy the simplicity and scalability of our approach.\n\n***\n*Comment_2: Although the method is simple and novel, the achieved improvement over SOTA is marginal (less than 1%) in most cases, see Table 1.*\n\nResponse_2: As stated in Ln. 188 of the paper, all the results are the **_average_** accuracy rather than the result of a single experiment. Therefore, even for the cases in Table 1 whose improvement is less than 1%, the improvement is statistically significant. To clearly show a convincing comparison in Table 1, we conduct pairwise $t$-tests at a 95% significance level between our MUSIC and the compared methods, which are presented in the following table. “$\\bullet$ ($\\circ$)” indicates that MUSIC is significantly better (worse) than the corresponding method.\n\n| | _miniImageNet_|_miniImageNet_|_tieredImageNet_|_tieredImageNet_|_CIFAR-FS_|_CIFAR-FS_|_CUB_|_CUB_|\n|:-----------:|:--------------:|:--------------:|:----------------:|:----------------:|:--------------:|:--------------:|:--------------:|:----------------:|\n||1-shot|5-shot|1-shot|5-shot|1-shot|5-shot|1-shot|5-shot|\n|MAML|48.70$\\bullet$|63.11$\\bullet$|51.67$\\bullet$|70.30$\\bullet$|58.90$\\bullet$|71.50$\\bullet$|54.73$\\bullet$|75.75$\\bullet$|\n|ProtoNet|49.42$\\bullet$|68.20$\\bullet$|53.31$\\bullet$|72.69$\\bullet$|55.50$\\bullet$|72.00$\\bullet$|50.46$\\bullet$|76.39$\\bullet$|\n|LEO|61.76$\\bullet$|77.59$\\bullet$|66.33$\\bullet$|81.44$\\bullet$|—|—|—|—|\n|CAN|63.85$\\bullet$|79.44$\\bullet$|69.89$\\bullet$|84.23$\\bullet$|—|—|—|—|\n|DeepEMD|65.91$\\bullet$|82.41$\\bullet$|71.16$\\bullet$|86.03$\\bullet$|74.58$\\bullet$|86.92|75.65$\\bullet$|88.69$\\bullet$|\n|FEAT|66.78$\\bullet$|82.05$\\bullet$|70.80$\\bullet$|84.79$\\bullet$|—|—|73.27$\\bullet$|85.77$\\bullet$|\n|RENet|67.60$\\bullet$|82.58$\\bullet$|71.61$\\bullet$|85.28$\\bullet$|74.51$\\bullet$|86.60$\\bullet$|82.85$\\bullet$|91.32$\\bullet$|\n|FRN|66.45$\\bullet$|82.83$\\bullet$|72.06$\\bullet$|86.89$\\bullet$|—|—|83.55$\\bullet$|92.92|\n|COSOC|69.28$\\bullet$|85.16$\\bullet$|73.57$\\bullet$|87.57$\\bullet$|—|—|—|—|\n|SetFeat|68.32$\\bullet$|82.71$\\bullet$|73.63$\\bullet$|87.59$\\bullet$|—|—|79.60$\\bullet$|90.48$\\bullet$|\n|MCL|69.31$\\bullet$|85.11$\\bullet$|73.62$\\bullet$|86.29$\\bullet$|—|—|85.63$\\bullet$|93.18|\n|STLDeepBDC|67.83$\\bullet$|85.45|73.82$\\bullet$|89.00$\\bullet$|—|—|84.01$\\bullet$|**94.02$\\circ$**|\n|TPN|52.78$\\bullet$|66.42$\\bullet$|55.74$\\bullet$|71.01$\\bullet$|—|—|—|—|\n|TransMatch|60.02$\\bullet$|79.30$\\bullet$|72.19$\\bullet$|82.12$\\bullet$|—|—|—|—|\n|LST|70.01$\\bullet$|78.70$\\bullet$|77.70$\\bullet$|85.20$\\bullet$|—|—|—|—|\n|EPNet|70.50$\\bullet$|80.20$\\bullet$|75.90$\\bullet$|82.11$\\bullet$|—|—|—|—|\n|ICI|69.66$\\bullet$|80.11$\\bullet$|84.01$\\bullet$|89.00$\\bullet$|76.51$\\bullet$|84.32$\\bullet$|89.58$\\bullet$|92.48$\\bullet$|\n|iLPC|70.99$\\bullet$|81.06$\\bullet$|85.04|89.63$\\bullet$|78.57|85.54$\\bullet$|90.11$\\bullet$|—|\n|PLCM|72.06$\\bullet$|83.71$\\bullet$|84.78$\\bullet$|90.11$\\bullet$|77.62$\\bullet$|86.13$\\bullet$|—|—|\n|**Ours**|**74.96**|**85.99**|**85.40**|**90.79**|**78.96**|**87.25**|**90.76**|93.27|\n\nAs observed, except for very few cases, our MUSIC is significantly better than the other compared methods across different datasets. 
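The pairwise significance test described above can be reproduced with a few lines of SciPy. The sketch below uses synthetic per-episode accuracies purely as placeholders; the episode count and the accuracy values are assumptions for illustration, not the authors' data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder per-episode accuracies of two methods evaluated on the same episodes;
# in practice these would be the recorded accuracies of MUSIC and a compared method.
acc_ours = rng.normal(loc=0.75, scale=0.05, size=600)
acc_base = rng.normal(loc=0.72, scale=0.05, size=600)

t_stat, p_value = stats.ttest_rel(acc_ours, acc_base)  # paired (pairwise) t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, "
      f"significant at the 5% level: {p_value < 0.05}")
```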
Furthermore, to test the significance of the differences between these SSFSL methods, we also employ the Friedman test (at significance level $\\alpha = 0.05$) and show the result at https://anonymous.4open.science/r/MUSIC-Friedman-Test/FriedmanTest.png We can see that our MUSIC ranks first and significantly outperforms the other methods.", " This work proposes a novel negative pseudo-labeling algorithm to tackle semi-supervised few-shot learning. The key insight is that negative labels are easier to predict; therefore, pseudo-labels on unlabeled samples can be better predicted by iteratively predicting negative labels until all the negative ones are excluded. Extensive experiments have been conducted on four few-shot learning benchmarks and show better performance than SOTA. Strengths\n\n1. The idea of generating pseudo-labels by gradually rejecting negative labels is novel and interesting. \n2. The experiments are quite extensive, including results on four public benchmarks and many analyses. \n3. The paper is written well and easy to follow.\n\nWeakness\n\n1. Although some negative labels may indeed be easier to predict than the positive one, there still exist hard negative classes that are equally hard to recognize. Those hard negative classes are in fact the most important information for learning a good classifier. I am missing how this work can handle this case. \n\n2. Although the method is simple and novel, the achieved improvement over SOTA is marginal (less than 1%) in most cases, see Table 1.\n\nPost-rebuttal \n\nMy concerns about the weaknesses have been addressed. I would suggest the authors address my concerns regarding the marginal improvement and hard negative classes, as listed in the weaknesses. \n\nPost-rebuttal \nThe authors addressed all of my concerns. Therefore, I would recommend a Weak Accept. I cannot find the limitations and potential negative societal impact in this paper. ", " The paper proposes a simple approach to SSFSL that employs negative label prediction when producing pseudo-labels for unlabelled examples. The model consists of a standard ResNet12 architecture that is pre-trained on the base data, followed by an L2-normalizing single-layer classifier that is fine-tuned based on the support data. The fine-tuning process first updates parameters based on the standard cross-entropy loss of the support examples. It then iteratively removes negative pseudo-labels from the unlabelled examples using a modified cross-entropy negative label predictor loss, and finally performs updates based on the positive pseudo-labels of the unlabelled examples for which all negative labels have been identified. Various experiments and ablation studies are reported that demonstrate the efficacy of the approach. 
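To make the iterative procedure summarized above concrete, here is a hedged sketch of the negative pseudo-labeling loop for a single unlabeled sample. The function names, the reading of the reject option $\delta$, and the omission of the per-round network updates are simplifying assumptions, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def negative_ce(logits, neg_label):
    # Negative-learning loss: push down the probability of a class
    # believed to be absent (the "modified cross-entropy" above).
    p = F.softmax(logits, dim=-1)
    return -torch.log(1.0 - p[..., neg_label] + 1e-8).mean()

def assign_pseudo_label(logits, delta):
    # logits: (c,) class scores for one unlabeled sample; delta: reject option
    # (1/c in the discussion above). In the actual method the classifier is
    # updated between exclusion rounds; this sketch keeps the logits fixed.
    c = logits.numel()
    excluded = []
    for _ in range(c - 1):                   # c - 1 rounds exclude all negatives
        probs = F.softmax(logits, dim=0).clone()
        probs[excluded] = float("inf")       # skip classes already excluded
        k = int(torch.argmin(probs))         # most confident negative label
        if probs[k] > delta:                 # not confident enough: reject sample
            return None
        excluded.append(k)
    return next(i for i in range(c) if i not in excluded)  # remaining positive
```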
Strengths:\n- Paper is overall very well-written.\n- Algorithmic choices are well-motivated, backed up by both good intuition and supportive ablation studies.\n- Method is simple to understand on the first go but also very empirically powerful, as demonstrated by a series of experiments.\n- Negative labelling is a very interesting insight and can prove consequential specifically in the domain of SSFSL, which is very applicable to applied settings where labelling can be expensive but lots of unlabelled data is available.\n\nWeakness:\n- There are some typos, such as \"detailedly\" (261), \"can performs\" (208), and most important \"logits = f.forward(x)\" and\n\"loss = F.nll_loss(F.log_softmax(logits), labels)\" in the algorithm blocks where \"x\" should be \"S\" and \"labels\" should be \"targets\" to my understanding of the procedure.\n- The algorithm block is in PyTorch which I personally appreciate but can be difficult to navigate if the reader doesn't have existing PyTorch proficiency; I believe a pseudo-code algorithm block would be more appropriate with the PyTorch code moved to the supplementary material. I recognize that this was done to reinforce the fact that the method can be implemented in a few lines of code.\n\nMiddleground:\n- The algorithm is simple and effective, but as a result doesn't contain very significant technical novelty and contribution. That being said, the authors have embraced its simplicity in the language of the paper throughout, which addresses this potential problem. The choice to decouple the training of the feature extractor from the few-shot learning algorithm itself is an interesting one. It is often seen that end-to-end training of the extractor through episodic procedures where the few-shot updates are also applied results in better performance. More specifically, the updates shown in the PyTorch code block could be applied directly to f and F together where the inputs are just the raw images. Was this something the authors explored? If so, what was the outcome? If not, why not? The authors have adequately addressed the technical limitations of the work (although further studies on empirical biases based on data domain would have been interesting to see). However, there is no discussion of the potential negative societal impact of the work; in fairness to the authors, this is an algorithmic work and the societal impacts can be speculative at times; but they could benefit from a short discussion of what their method, as an effective SSFSL classifier, can enable in applied industrial settings.", " This submission proposes a simple yet effective learning method for Semi-Supervised Few-Shot Learning (SSFSL) called MUSIC. Compared to previous methods, the authors propose to learn negative labels first and then focus on positive label learning. The underlying logic is that, under the few-shot learning scenario, it is easier to exclude negatively predicted labels than to select positive labels. The experiments show that the simple method achieves state-of-the-art performance on four benchmark datasets. + The authors propose a straightforward learning method for SSFSL. The motivation is clear and the technical details are illustrated sufficiently.\n\n+ The authors conduct experiments on four benchmark datasets and achieve state-of-the-art performance.\n\n+ The authors further dive into different aspects of the MUSIC method and provide an ablation study, which is much appreciated. Overall, I have little confusion about the methodology and technical details. 
There are two suggestions:\n\n1) In the ablation study, the authors only investigate whether the reject option $\\delta$ is effective or not. It would be better to further study what would be the optimal value of $\\delta$ and whether this hyperparameter is agnostic to different datasets' distributions.\n\n2) In Table 4, the authors conduct an interesting experiment for the order of negative and positive pseudo-label learning. I am curious what is the optimal number of iterations for the neg -> pos -> neg ... in the MUSIC method, or is it dependent on different datasets? I am basically satisfied with the submission in terms of methodology, motivation, and experiments. I provided some constructive suggestions in the above section (\"questions\"). ", " This work applies negative learning to the problem of semi-supervised few-shot learning. It uses negative pseudo-labels to gradually get rid of unlikely predictions. It also minimizes the entropy of the predicted probabilities of unlabeled target-domain data to promote pseudo-labeling. Strength:\nIn general, the paper is easy to follow and highlights the key point clearly.\n\nWeakness: \n1. The major problem of this work lies in the experiment section. The authors mentioned that this work pre-trains the network as in previous work [38], so it should be the baseline of this work. However, the table does not include that work [38] at all. Furthermore, comparing Table 2 in the baseline work [38] and Tables 1 & 2 in this work, we can find that the baseline work [38] actually performs better in some settings. \n\n2. While negative learning is the key point of this work, as shown in Tables 1 & 2, the improvement from using only positive pseudo-labels to using both negative and positive pseudo-labels seems trivial. Since the pseudo-label itself is a well-known idea, the novelty will be very limited if the negative pseudo-labels do not really make a difference.\n\n 1. The entropy loss in (6) seems to share a similar spirit with pseudo-labeling (also known as soft pseudo-labeling in other places). What if you apply only (6) and do not use (5) at all? \n\n2. The author of the baseline work [38] also uses pseudo-labels, and they rank pseudo-labels in a sophisticated way. When you say pretraining the network as in [38], do you mean you use their method to pre-train the model, or do you mean you pre-train it in the same way but do not use pseudo-labels at all during pre-training? The author needs to compare the baseline work [38] clearly to show a more convincing result." ]
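For readers following the exchange above about losses (5) and (6): the entropy term in question is a standard Shannon-entropy minimization over the predictions on unlabeled data, which sharpens the predicted distribution and thereby promotes confident pseudo-labels. A minimal sketch follows; the function name and the numerical epsilon are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def entropy_loss(logits):
    # Mean Shannon entropy of the predicted class distributions; minimizing it
    # encourages confident (low-entropy) predictions on unlabeled samples.
    p = F.softmax(logits, dim=-1)
    return -(p * torch.log(p + 1e-8)).sum(dim=-1).mean()
```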
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "gXrrUoc9G1s", "VJn-iSHXyF", "85BShUdciPD", "6tjlFl725k5", "KovXDvRR4R4", "Z-R_ws7GJW8", "kBgTkJ8pHHf0", "BFq-3JfeWUk", "jjGSW4yO_z_", "1bpoBdoXfwh", "_i89ciFLXVq", "nips_2022_-3Pg7QNIF1S", "nips_2022_-3Pg7QNIF1S", "nips_2022_-3Pg7QNIF1S", "nips_2022_-3Pg7QNIF1S" ]
nips_2022_FhWQzNY2UYR
Geo-SIC: Learning Deformable Geometric Shapes in Deep Image Classifiers
Deformable shapes provide important and complex geometric features of objects presented in images. However, such information is oftentimes missing or underutilized as implicit knowledge in many image analysis tasks. This paper presents Geo-SIC, the first deep learning model to learn deformable shapes in a deformation space for an improved performance of image classification. We introduce a newly designed framework that (i) simultaneously derives features from both image and latent shape spaces with large intra-class variations; and (ii) gains increased model interpretability by allowing direct access to the underlying geometric features of image data. In particular, we develop a boosted classification network, equipped with an unsupervised learning of geometric shape representations characterized by diffeomorphic transformations within each class. In contrast to previous approaches using pre-extracted shapes, our model provides a more fundamental approach by naturally learning the most relevant shape features jointly with an image classifier. We demonstrate the effectiveness of our method on both simulated 2D images and real 3D brain magnetic resonance (MR) images. Experimental results show that our model substantially improves the image classification accuracy with an additional benefit of increased model interpretability. Our code is publicly available at https://github.com/jw4hv/Geo-SIC.
Accept
Although there were a couple of initial questions/concerns about certain aspects of the paper, all reviewers appreciated the approach, the quality of presentation and the empirical results. After reading all responses by the authors, my impression is that all questions have been answered satisfactorily during the rebuttal period. Hence, I do recommend acceptance of this paper.
train
[ "U1wrvfGxlha", "VoGoCqDLJt", "3UfWWNE_x5b", "8NfQ2I1GdN2", "XpfoBEiI_vg", "_AS5C0J-Sj0" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank R3 for all the positive comments and constructive feedback. We will (i) add more details of the geometric learning (atlas-building-based) loss function in the supplementary material, (ii) add descriptions of the CNN model parameters, and (iii) clarify that the network error propagates iteratively in the phase currently being trained in our revised manuscript. Regarding R3's question about computational cost, we kindly point out that the main difference is the additional geometric learning network developed in Geo-SIC, which is included in the right panel of Fig. 6.\n\nAll typos will be corrected in the revised manuscript. ", " We thank R2 for the positive comments and suggestions. Please see our responses to all the questions below: \n\n*Since Geo-SIC focuses on deriving the features of geometric changes/deformations between objects in images, the data metric could be on images (as we present in the paper) or on point-based data (once the deformed model point is mathematically formulated). \n\n*Yes, we carefully removed the affine transformations on all image data, which is typically done for deformation-based models. We have not observed any sensitivity issues empirically so far. \n\n*We will show the atlases of brains for each class in the revised manuscript. \n\n*Thanks for this great point! The heatmaps in Fig. 5 are initial results of Geo-SIC and suggest that our network pays better attention to identified AD regions (such as the ventricles, hippocampus, etc.) than the baselines. Our next step is to develop further analysis (i.e., utilizing feature gradient flows) to quantify and better interpret shape-specific changes, particularly in brain regions that are indicative of dementia. We hope to include this work in an extended journal paper. \n\n*Great question on the robustness of Geo-SIC to variations in image intensities! We designed a simple test by adding universal adversarial noise to the Google Draw images (with five classes) and found that Geo-SIC consistently achieves better accuracy (~10% higher) than classifiers that focus on image intensities. We will try to add this interesting experiment to our revised manuscript if space allows. ", " We thank R1 for the comments and suggestions. Please see our responses to all questions below: \n\n*Aside from developing a joint optimization of geometric learning (based on atlas building) and classification, our proposed Geo-SIC has other major contributions that have been rarely explored in the literature. As R2&R3 also pointed out, Geo-SIC is the first to learn deformation-based geometric descriptors within an image classifier. It provides an image distance function of both intensity and geometric changes that are most relevant to classifying different groups. In contrast to previously used two-step approaches, Geo-SIC has multiple advantages, such as (i) a direct optimization of geometric feature learning and group classification; (ii) a reduced computational cost of training and inference by employing a low-dimensional parameterization of deformation fields in a bandlimited space; and (iii) improved model accuracy and robustness. We will make sure to clarify these contributions in the introduction section. \n\n*The current work of Geo-SIC focuses on deriving geometric features that measure the differences between two objects; we carefully removed affine (rigid) transformations for all datasets. We agree that it would be interesting to see how rigid transformations play a role in the classification task. This could be considered an extension on top of our current work. \n\n*At inference time, our boosted classifier, with learned parameters in both the image and geometric feature spaces, will predict the label of each testing image. While a registered template is not necessary for testing, a user can pass the predicted image with a label (and the revealed atlas) to the geometric learning network to predict transformations (to generate a deformed atlas). \n\n*Thanks for bringing this up! We will add an expanded discussion on why our atlas building has improved performance over current approaches. To summarize, our model Geo-SIC parameterizes the high-dimensional deformation fields in a well-studied low-dimensional bandlimited space [44], which provides faster convergence with superior accuracy by mitigating the local minima issues of the original high-dimensional space. \n\n*We thank R1 for pointing out the potential limitations. We will include a paragraph covering broader options for the image similarity metric, an extension to multiple templates within each class, etc., in the revised manuscript. ", " This paper proposes to solve a classification task by jointly solving a problem of atlas building for each specific class. Thus the classifier network uses the shape features that are used to deform the templates as well as the features learned directly by the classifier. The trained model exhibits good classification performance as well as a good representation of the data (sharp templates). Strengths: the paper is well written, with a solid mathematical basis, and the proposed concept of joint learning is simple to grasp. The results are solid, with extensive comparisons with baseline methods in various aspects: classification performance, as well as representation by deformable templates. Two datasets are chosen, the former being toy examples and the latter being related to neuroimaging MRIs.\n\nWeaknesses: the main contribution is to solve two tasks jointly instead of solving them sequentially (two-step method). This makes it a fairly limited contribution for this conference. Another weakness is that during inference, the choice of the atlas is not known and therefore comparison of the input image with the deformed atlas cannot be done. Besides, the paper assumes that each class can be well represented by the deformation of a single template (unimodal distribution). \n\n- The authors should comment on the use of a Unet (including skip connections) as opposed to (variational) autoencoders in order to generate the shape features. \n\n- How sensitive is the proposed approach to the rigid transform between the template and image? How is this taken into account in the proposed framework?\n\n- At inference time, the choice of the template to be deformed towards the image is unknown. How can the classification network and the shape reconstruction network be combined to get the registered template?\n\n- The authors should provide explanations for the slightly improved performance of the atlas generation, since the atlas generation method is similar to ones proposed before. \n\n- The authors do not provide a set of potential limitations of their work. \n- Limitations of the work include: \ni) the proposed approach is only valid for same-modality images, since simple similarity criteria (SSD) are used; \nii) it represents shape as the deformation of a single template, which is only valid for a simple unimodal distribution of shapes; \niii) it does not allow estimating the template deformation at inference time.\n ", " This paper presents a deep learning approach that improves classification performance using shape priors that are jointly trained with the classification learning task, without the need for pre-extracted shapes. The core assumption is that class-specific shape information can be encoded by matching the input image to a representative atlas; in this paper, this atlas is learned. The efficacy of the proposed method is showcased on synthetic and real medical data.\n Strengths:\n- Jointly learning atlases and sample-specific shape representations to boost classification performance is a novel idea.\n- The use of a deformation-based representation allows for detailed shape descriptors and avoids pre-extracting shapes from images.\n- Implicitly incorporating shape information within image classifiers should improve the robustness of the classifiers to intensity variations.\n- The method provides explicit classification interpretations in terms of the geometric features that drive the classification task.\n- The method shows classification improvements in addition to interpretability. \n\nWeaknesses:\n- Although claimed, it is not clear how the proposed method would be generalized to other representations (e.g. point-based models).\n- The robustness to variations in image intensity is not demonstrated in the experiments.\n- The method seems to only operate on aligned samples.\n - How would the proposed method be used for / adapted to point-based shape models?\n- Are the images (training and testing) assumed to be roughly aligned? How is this accomplished? How sensitive is the method w.r.t. misalignments in training? And how can misalignment in testing be handled?\n- The learned multiple atlases of the brain data are not shown. In figure 6, only one atlas is shown. Shouldn't we expect an atlas per class?\n- It is not clear if the heat maps in Figure 5 show shape-specific changes that are indicative of dementia; Geo-SIC is different from the others, however, there is no discussion (or detailed figures) provided to demonstrate the specificity of these heatmaps to the dementia problem. No. Authors are encouraged to discuss the limitations of the methods and delineate potential negative societal impact. ", " The article presents an approach to image classification using deformable atlases. This not only improves the classification itself but also makes it possible to better explain the decisions made by the models. In addition, the authors show that this approach is computationally less expensive than some previous approaches.\nFor the introduction of the atlas information into the deep learning model, they first train an encoder-decoder capable of predicting the atlas from the input images. In parallel, with the embeddings generated by the first one, they train a second network that will be in charge of making the predictions. \n Strengths:\n*.- It appears to be a robust approach.\n*.- For results similar to (or somewhat better than) previous versions, it has a lower computational cost.\n\nWeaknesses:\n*.- Equations (1-4) (basic for the construction of the loss functions) would require a somewhat more detailed explanation to facilitate the comprehensibility of the proposal. (Maybe it could be added in the annexes.) \n*.- The models are not sufficiently explained. It seems that there are two loss functions that are applied alternately depending on the model being trained. It is also not clear whether the error propagates to all models or only to the one in the phase currently being trained, freezing the rest. \n *.- Improve the explanation of the equations used to build the loss functions. Include them in the annexes if you need extra space.\n*.- References to Tables 1 and 2 are wrong.\n*.- A comparison of the computational cost between the base models (AlexNet, CNN, ...) and the models with Geo-SIC should be presented.\n*.- Because the CNN base model is not a standard one, a better description of it should be added. \n The limitations of the model are correctly addressed. \n\n" ]
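The low-dimensional bandlimited parameterization that the Geo-SIC rebuttal above credits for its reduced training cost can be illustrated with a toy sketch. Everything below (the grid size, the band size `k`, the zero-padded-FFT construction, and the normalization) is an illustrative assumption, not the implementation from the paper or from [44]:

```python
import numpy as np

def bandlimited_velocity(coeffs, full_shape):
    """Expand a small block of low-frequency Fourier coefficients into a
    full-resolution 2-D velocity field (zero-pad the spectrum, invert the FFT).

    coeffs:     complex array of shape (2, k, k), one block per component.
    full_shape: (H, W) of the image grid.
    """
    H, W = full_shape
    k = coeffs.shape[1]
    field = np.zeros((2, H, W))
    for c in range(2):
        spectrum = np.zeros((H, W), dtype=complex)
        spectrum[:k, :k] = coeffs[c]  # keep only the low frequency band
        # Taking the real part is a shortcut; a careful construction would
        # enforce Hermitian symmetry so the field is exactly real-valued.
        field[c] = np.real(np.fft.ifft2(spectrum)) * (H * W) / (k * k)
    return field

rng = np.random.default_rng(0)
coeffs = rng.standard_normal((2, 8, 8)) + 1j * rng.standard_normal((2, 8, 8))
v = bandlimited_velocity(coeffs, (128, 128))
print(v.shape)  # (2, 128, 128): a smooth field from only 2 * 8 * 8 parameters
```

Optimizing only the 2·k·k coefficients instead of a dense 2·H·W field is what makes the search space low-dimensional, which is the source of the faster convergence claimed in the rebuttal.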
[ -1, -1, -1, 6, 7, 7 ]
[ -1, -1, -1, 4, 5, 3 ]
[ "_AS5C0J-Sj0", "XpfoBEiI_vg", "8NfQ2I1GdN2", "nips_2022_FhWQzNY2UYR", "nips_2022_FhWQzNY2UYR", "nips_2022_FhWQzNY2UYR" ]
nips_2022_zGvRdBW06F5
On-Device Training Under 256KB Memory
On-device training enables the model to adapt to new data collected from the sensors by fine-tuning a pre-trained model. Users can benefit from customized AI models without having to transfer the data to the cloud, protecting the privacy. However, the training memory consumption is prohibitive for IoT devices that have tiny memory resources. We propose an algorithm-system co-design framework to make on-device training possible with only 256KB of memory. On-device training faces two unique challenges: (1) the quantized graphs of neural networks are hard to optimize due to low bit-precision and the lack of normalization; (2) the limited hardware resource (memory and computation) does not allow full backpropagation. To cope with the optimization difficulty, we propose Quantization-Aware Scaling to calibrate the gradient scales and stabilize 8-bit quantized training. To reduce the memory footprint, we propose Sparse Update to skip the gradient computation of less important layers and sub-tensors. The algorithm innovation is implemented by a lightweight training system, Tiny Training Engine, which prunes the backward computation graph to support sparse updates and offloads the runtime auto-differentiation to compile time. Our framework is the first practical solution for on-device transfer learning of visual recognition on tiny IoT devices (e.g., a microcontroller with only 256KB SRAM), using less than 1/1000 of the memory of PyTorch and TensorFlow while matching the accuracy. Our study enables IoT devices not only to perform inference but also to continuously adapt to new data for on-device lifelong learning. A video demo can be found here: https://youtu.be/XaDCO8YtmBw.
Accept
In this work the authors propose a framework for training CV models on tiny IoT devices with very limited memory. The reviewers agreed that the paper is well written and represents a valuable contribution to the area of efficient / on-device ML. Questions raised by reviewers were sufficiently addressed in the response.
test
[ "EVrNEZEm6HG", "-kprvNikHnq", "GvACt6b4VsY", "EAvL32MkiUR", "Iqvc6cecH7", "B8DEmvoaQ_v", "8MOdtHxw1YV", "g_FQeq5bHtH", "F_HROsgu5Ic", "Tbs8ORY1yk7", "N7RIhEOalVK", "W547II4O81w", "Uuc9sZ_FtYa", "3j7oseeqOvI", "P5n4a2nqXk4", "99ibPJaWTBI" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Authors,\nThank you for your response!\nBased on your replies and your promise of open-sourcing the code, I am raising my score to 6.\nGood luck!\n\n", " Dear Reviewer Dz6f,\n\nThanks again for your insightful suggestions and comments. We have not heard from you, and the rebuttal window is going to close. We believe our rebuttal should help clarify the concerns about related work, ablation studies (more datasets, QAS vs. w/o gradient scaling), and the statement of open sourcing. Please let us know if you have more questions.\n\nBests,\n\nAuthors\n", " Thanks for the authors' response. You have addressed my concerns.", " I have read all the reviews and author response; the authors made significant efforts to address all the raised concerns. The demo videos are interesting, and training on MCUs seems to be a promising technique for AI applications. I would keep my decision as strong accept.", " I have read other reviews and author response. The authors have provided good clarification and insightful new experiments during the rebuttal. They have satisfactorily answered my questions. As a result I have increased the rating to \"6: Weak Accept\". If the paper is accepted, I would like the authors to just show a simple comparison between standard quantization-aware training (QAT, e.g., where BNs are still present and are fused after QAT is complete) and QAS on a real quantized graph (e.g., the one without BN). While I can see that on a real quantized graph, QAS has benefits for non-transfer learning scenarios (based on new results), it would be useful to see the gap between regular QAT and QAS on real quantized graphs. Indeed, this can help motivate future works that may improve upon QAS to close the gap between QAT and quantization algorithms that directly work on real quantized graphs.", " Dear reviewer JJZV,\n\nThanks again for your insightful suggestions and comments. As the deadline for discussion is approaching, we are glad to provide any additional clarifications that you may need.\n\nIn our previous response, we added the following results:\n\n1. The codebase to reproduce our on-device training demo;\n2. Comparison between BN-only update and our method;\n3. Study of QAS under non-transfer learning scenarios;\n4. Comparison with gradient sparsification.\n\nWe hope the provided new experiments and additional explanations have convinced you of the merits of our work. Please do not hesitate to contact us if there are other clarifications or experiments we can offer.\n\nThank you for your time again!\n\nBest,\n\nAuthors\n", " Dear AC and reviewers:\n\nThanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of the paper.\n\nSince the discussion phase started four days ago, we have not heard any post-rebuttal response yet. Please don't hesitate to let us know if there are any additional clarifications or experiments that we can offer, as we would love to convince you of the merits of the paper. We appreciate your suggestions. Thanks!", " We sincerely appreciate all reviewers' time and efforts in reviewing our paper and for the constructive feedback.
In addition to the response to specific reviewers, here we would like to thank reviewers for their acknowledgment of our work and highlight the new results added during the rebuttal:\n\nWe are glad that the reviewers appreciate and recognize our contributions:\n- First work to train vision models on microcontrollers, together with a live demo [Dz6f, JJZV]\n- Support the training of real quantized graphs with a novel scaling method [Dz6f, JJZV]\n- Complete system-level support and significant memory saving over existing frameworks [HCuR, JJZV]\n- Sufficient and detailed experiments to support the proposed method [JJZV, HCuR, xaNn, Dz6f]\n- Well-written and easy to understand [Dz6f, HCuR]\n\nIn this rebuttal, we have added more supporting results following the reviewers’ suggestions.\n\n- Open source code to reproduce the on-device training demo [JJZV, Dz6f]\n- Comparison between BN-only update and our sparse update [xaNn]\n- Train-from-scratch results with QAS [xaNn, JJZV]\n- Speedup evaluation of sparse update on Qualcomm mobile CPU, Jetson Nano, and Raspberry Pi [xaNn]\n", " **Q1: The comparison between QAS and LSQ (Esser et al., 2019)**\n\nWe have cited the LSQ paper in the revised version. Our paper has a **fundamentally different** setting compared to LSQ; QAS also **consistently outperforms** the scaling method in LSQ in our setting (73.5% vs. 39.8% accuracy).\n\nSetting-wise:\n1. LSQ is doing QAT on a **fake** quantization graph: the full precision weights need to be stored and updated during training, which costs 4x larger memory than int8 weights and does not lead to training memory saving. In fact, this is the setting of most existing papers aiming to train an accurate quantized model for efficient **inference** where our goal is efficient **training**.\n2. In our setting, we are updating a **real** quantized graph, where all the weights and biases are integers from the very beginning and stay as integers after updates. We are not targeting the quantization process itself (like LSQ); instead, we are _given_ a quantized graph and want to update it for transfer learning.\nPlease also kindly refer to supplementary Section D for the comparison between real quantized graphs and fake quantized graphs.\n\nWe perform experiments to compare QAS with the scaling method in LSQ:\n\n1. QAS consistently outperforms LSQ scaling. QAS can achieve an average accuracy of 73.5% on 8 datasets, while LSQ scaling only achieves 39.8% despite grid tuning on hyper-parameters.\n2. QAS is based on the mathematical derivation of the quantization process, while LSQ scaling is based on a heuristic to keep the same weight/gradient scale across parameters. We think such a heuristic does not apply to the **real** quantized graph with very different tensor ranges (e.g., int32 has a $10^7\\times$ larger range than int8) and leads to unstable training. This is a unique challenge when we update the real quantized graph and has not been observed in previous work.\n\n**Q2: Compare with HFP8 and the follow-up FP4 work.**\n\nWe have cited the two papers in the revised version. The two papers target a fundamentally different setting to ours. The two papers use hybrid and customized FP8 or FP4 formats, which are only supported in specialized hardware. It cannot run on general-purpose hardware like MCUs, Arm CPUs, Intel CPUs, etc. 
While our work focuses on int8 quantization, which can efficiently run on general-purpose hardware and has wider applications like tinyML.\n\nWe kindly remind the reviewer that QAS is just one part of our methods. Sparse update and training engine optimization bring **much larger memory saving** (17.9x, 1.9x in Figure 1), which should not be neglected. Limiting the scope to quantization does not provide a holistic view of our work.\n\n**Q3: Regarding using “tricks” to enable on-device learning.**\n\nOur proposed methods are fundamental principles instead of simple tricks. Our framework is the **first** solution to actually enable tiny on-device training of CNNs under a 256KB memory budget. We believe our design principles will shed light on later studies.\n\n1. We are targeting a new setting of updating the **real** quantized graph instead of using QAT to train a fake quantized graph for inference. Existing optimization methods will lead to inferior accuracy while QAS successfully addresses the difficulties.\n2. Sparse layer/tensor update is a novel memory-efficient update method. It consistently outperforms existing work like BN-only update [25], bias-only update [12], fine-tuning last k layers, etc. in terms of accuracy-memory trade-off.\n3. Efficient training systems on edge are rarely explored. Our Tiny Training Engine provides fundamental designs like compile-time differentiation, backward graph pruning, op reordering, etc. to improve memory and computation efficiency.\n\n**Q4: Experiments on a broader range of datasets.**\n\nWe have thoroughly verified the effectiveness of our proposed method on 8 datasets: Cars, CIFAR-10, CIFAR-100, CUB, Flowers, Food, Pets, and VWW, where our method achieves consistent improvement. These datasets are widely used to benchmark transfer learning in previous work (e.g., [13, 39]). So we believe the results we provided are comprehensive. Note that for the experimental results in the paper, we report the **average** accuracy on the 8 datasets to reflect the overall performance.\n\n**Q5: Ablation study on QAS vs. w/o gradient scaling.**\n\nWe have already provided the ablation study in Table 1, comparing vanilla SGD-M (i.e., w/o gradient scaling) and SGD-M+QAS. QAS consistently improves the average learning accuracy on 8 datasets (73.5% w/ scaling v.s. 64.9% w/o scaling).\n\n**Q6: Statement of open sourcing.**\n\nOur method is fully reproducible. We will open source the code upon publication. We have uploaded the [code to reproduce the video demo](https://anonymous.4open.science/r/on-device-training-0FE3) of transfer learning to the VWW dataset on a microcontroller (see supplementary Section A). We hope the code will help the community reproduce our work and inspire more later studies.", " We thank reviewer xaNn’s useful comments and would like to respond as follows:\n\n**Q1: Can the proposed methods generalize to other commodity devices (e.g., NVIDIA Jetson or Neural chips, not just STM series)? Does the sparse update rely on specific chips to achieve true speedup?**\n\n- **The int8 operations are well-supported on commodity devices**: The int8 operations used in our method follow a standard TF-Lite protocol [33]. It is not specific to the STM series; instead, it is widely supported on a wide range of hardware like NVIDIA GPUs (CUDA Cores, Tensor Cores), Arm CPUs, Intel’s CPUs, Qualcomm’s SoCs, etc. 
Therefore, our quantized training method is general and can be extended to different devices.\n- **Our techniques can bring general speedup**: The other components, like sparse update, compile-time differentiation, backward graph pruning, operator reordering, etc., are not specific to any device or platform. They are general techniques to make on-device adaptation more efficient and do not require special support from the underlying hardware.\n- **Speedup from the sparse update on more edge platforms**: Our sparse update uses channel/op-level sparsity and does **not** involve fine-grained sparsity. Therefore it can accelerate training on various hardware platforms. We report the latency (ms) of training MobileNetV2-w0.35 with Tiny Training Engine (TTE) using input size 1x3x128x128 in the following table. Our sparse update scheme shows consistent speedup (1.4x-3.0x) over the conventional dense/full update while maintaining the same accuracy.\n\n| | Dense Update | Sparse Update (ours) | Speedup |\n|---------------------|:------------:|----------------------|:-------:|\n| Raspberry Pi CPU | 74.0ms | 24.4ms | 3.0x |\n| Qualcomm S8Gen1 CPU | 39.9ms | 28.4ms | 1.4x |\n| Jetson Nano GPU | 5.2ms | 2.8ms | 1.8x |\n\n**Q2: The performance of segmentation and detection tasks.**\n\nTinyML suffers from an extremely small memory size. It is very difficult to support even the **inference** of applications like segmentation and object detection [47] because of the large input resolution. For example, a single 640x640x3 image (a widely used resolution for the MSCOCO dataset) takes 1200 kB to store and already exceeds the available memory on a microcontroller (320KB), let alone the cost of the neural network's forward and backward passes. Segmentation and detection need special designs to handle the large-resolution challenge, and we leave this for future work.\n\n**Q3: Can this method train from scratch instead of fine-tuning the pre-trained models?** \n\nOur method can also support training from scratch. Experiments in supplementary Section K show that our QAS (with pre-measured weight and activation ranges) is still effective for these non-transfer learning scenarios: training with QAS consistently improves the average accuracy on 8 datasets from 48.3% to 65.4%. Please refer to Section K for details.\n\n |Method | Average of 8 Datasets | Cars |CF10 |CF100 |CUB |Flowers |Foods | Pets |VWW | \n|---------------------|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| SGD-M | 48.3 | 15.8 | 63.5 | 41.0 |30.3 |69.8 | 34.1 |52.5 | 79.2 |\n| SGD-M+QAS | 65.4 | 52.0 |85.5 |58.7 |47.1 |78.2 |53.1 |63.3 |85.6 | \n\nWe use pre-training because it usually delivers better performance on downstream tasks compared to training from scratch. Recent lifelong learning work [a, b] also performed fine-tuning from pre-trained weights rather than training from scratch.\n\n[a] Mehta et al., An Empirical Investigation of the Role of Pre-training in Lifelong Learning\n\n[b] Wang et al., Learning to Prompt for Continual Learning\n\n**Q4: Show the training speed or time cost when applying to the downstream tasks.**\n\nWe have provided the training latency in Figure 9(c). Compared with baselines, our method achieves more than 20x training speedup. For example, it takes 325ms to update MobileNetV2-w0.35 on one image, which is 29x faster than the existing baseline (9456ms).
So if a new sample comes in, we can finish the training within a second for a timely update.\n\nFor tinyML applications, we usually need a small number of samples for training, so the total training time is also modest. For example, we have built a real demo of fine-tuning the model for a new task (Visual Wake Words) with 100 training images in supplementary Section A. The total training time for MobileNetV2-w0.35 would be just 33s, well within a minute. Given the modest total training time, we believe our framework can enable a vast range of realistic low-power on-device training applications.", " We thank the reviewer very much for the positive comments and insightful suggestions.\n\n**Q1: The necessity of on-device training and its applications. Compare training on the cloud and update the parameters locally.**\n\nOn-device training has a broad range of applications: it can customize the deep learning models for personalized use cases. For example, Google uses on-device learning to improve the smart keyboard prediction; voice assistants like Alexa and Siri can adapt to different users’ accents; cameras can continually recognize more objects to facilitate smart home and smart manufacturing.\n\nOn-device training also has unique advantages. Firstly, it can protect users’ privacy by keeping data locally, especially for sensitive scenarios like healthcare. Secondly, it reduces the data transfer cost and cloud operating cost (which could be huge considering the billions of IoT devices in our daily life). Thirdly, it enables new learning paradigms like life-long learning and federated learning.\n\nOn the contrary, uploading the data to the cloud for training can lead to serious privacy and security issues. It may not be feasible as the regularizations become more strict (e.g., the EU’s GDPR makes it very difficult to collect user data in a centralized cloud for training). It also leads to higher cloud operating costs. Therefore, we believe efficient on-device learning has unique advantages and will enable various novel applications.\n\n**Q2: Is the training time reasonable for on-device learning applications?**\n\nOur framework can enable practical on-device training applications even on a small microcontroller with limited computation capacity. We have built a video demo (see [here]([https://drive.google.com/drive/folders/1E0MU7rPHy7NbnlZDKVAIGuN1TTwogF9U?usp=sharing](https://drive.google.com/drive/folders/1E0MU7rPHy7NbnlZDKVAIGuN1TTwogF9U?usp=sharing)) and the supplementary Section A) showing that we can transfer a model to a new target task (Visual Wake Words) within only 200 seconds (which could be further reduced by 3x if we make a good synchronization).\n\nFor the detailed timing breakdown of finetuning MobileNetV2-w0.35, we need 325ms to train on one sample. For tinyML applications, we usually need a small number of samples (e.g., <100) to recognize an object, so the total training time is quite acceptable (within minutes). For lifelong learning settings, the training on streaming new data is amortized during a very long period, so the training cost for a certain period is still very affordable.\n\nWe also compare the training time for the same workload with a cloud GPU NVIDIA GeForce RTX 3090 using PyTorch, and it takes 20.4ms to train on one sample, which is an order of magnitude faster than our tinyML setting. However, the GPU power is 350W, **three orders of magnitude** higher than microcontrollers (360mW), which is not suitable for tinyML. 
Given the modest total training time analyzed above, we believe our framework can enable a vast range of realistic low-power on-device training applications.\n\n**Q3: The choice of the used neural network structures.**\n\nOur proposed techniques do not rely on a specific model architecture. They are **general techniques** and can be applied to different CNN model architectures. We chose the three network architectures since they are widely used vision models in tinyML settings. For example, the MCUNet model [45] achieves state-of-the-art accuracy for tinyML applications, outperforming other network architectures in terms of accuracy vs. memory tradeoff. Note that to be friendly for quantization and deployment, the three models do not have attention blocks or Swish activation functions.\n", " We thank reviewer JJZV for the informative discussion. The comments and questions are very relevant.\n\n**Q1: Compare with only fine-tuning the Batch Normalization layers.**\n\nBN-only update is only **parameter**-efficient, but not **memory**-efficient. Training only the norm layers will not lead to further memory savings compared to our proposed approach.\n\nFor the BN-only update, the number of trainable parameters is small. However, we need to save the intermediate activations to calculate the gradients of the BN parameters. Consider the scale $s$ and shift $b$ in BN: $y=sx + b$, to update $s$, we need to save the intermediate activation $x$ ($ds=x * dy$). The intermediate activation is much larger than the parameters of BNs and becomes the major memory bottleneck for transfer learning [12]. While our sparse update scheme allows us to skip a large part of the intermediate activation storage, leading to better memory saving.\n\nTo further address your concerns, we experimented with updating the BN layers of MobileNetV2-w0.35. The results are added to supplementary Section J. As shown in the trade-off curve (can be viewed [here](https://anonymous.4open.science/r/on-device-training-0FE3/assets/figures/compare_bn_only.png)), BN-only update achieves a best average accuracy (on 8 datasets) of **66.4%** at **414kB** memory (OOM), while our sparse update scheme achieves **67.4%** at **49kB**, which is **1% higher on accuracy** with **8.4x smaller memory usage**. Our sparse update consistently outperforms BN-only update in both memory efficiency and accuracy.\n\n**Q2: Is QAS general for non-transfer learning scenarios?**\n\nThanks for the suggestion. QAS can be applied as long as we are updating a **real** quantized graph, where parameters are quantized to integers (also no BNs). However, quantizing a randomly initialized model is not feasible since we cannot reliably measure the weight and activation range before training [33]. To tackle the issue, we first perform warm-up training for a small number of iterations (<5 epochs) to get a reasonable weight and activation range; and then perform model quantization to get the real quantized graph. We then train on the downstream tasks w/ and w/o QAS. Experiments show that **QAS is effective for the non-transfer learning scenario**: training w/ QAS leads to an average accuracy of **65.4%** on 8 datasets, while training w/o QAS can only achieve **48.3%**, as shown below. QAS effectively improves the training results by 17% for non-transfer learning scenarios. 
Please find the detailed results in supplementary Section K.\n\n |Method | Average Accuracy | Cars |CF10 |CF100 |CUB |Flowers |Foods | Pets |VWW | \n|---------------------|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\n| SGD-M | 48.3 | 15.8 | 63.5 | 41.0 |30.3 |69.8 | 34.1 |52.5 | 79.2 |\n| SGD-M+QAS | **65.4** | 52.0 |85.5 |58.7 |47.1 |78.2 |53.1 |63.3 |85.6 | \n\n**Q3: Compare with gradient sparsification techniques like DGC.**\n\nThanks for the suggestion. We would like to clarify that our sparse update is fundamentally different from gradient sparsification, from both the motivation and implementation perspectives:\n\n- Gradient sparsification is designed for large-scale distributed training to save **communication costs** between servers, while our sparse update focuses on tiny-scale on-device learning to handle the limited **memory and computation**.\n- Gradient sparsification methods like DGC [a] perform gradient pruning **after** all the gradients are computed; thus they **cannot save** memory or computation (and potentially add memory overhead due to sorting and accumulation).\n- Our sparse update applies sparsification at compile time, **before** gradient computation. It can skip the less important gradient computation and intermediate activation storage, leading to real memory savings and speedup (e.g., 17.9x memory saving and 3x speedup for MobileNetV2-w0.35).\n\nDue to space limitations, we will reference and compare with the paper in the final version.\n\n[a] Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training\n\n**Q4: Will the Tiny Training Engine code be open sourced?**\n\nYes, we will open source the implementation upon publication. We have cleaned and uploaded the code to reproduce the video demo of transfer learning to the VWW dataset on a microcontroller (see supplementary Section A): [https://anonymous.4open.science/r/on-device-training-0FE3](https://anonymous.4open.science/r/on-device-training-0FE3). We hope the code will help the community reproduce our work and inspire later studies.\n\nWe hope our response has resolved all of your concerns. Please let us know what other experiments or clarifications we can offer to convince you to increase the rating.\n", " This paper presents a quantization-aware-scaling mechanism to perform on-device training in extremely limited memory settings. The system leverages sparse update through the Tiny Training Engine by pruning the backward computation graph. The experiments on Visual Wake Words demonstrate the efficacy of the proposed system. \n This is a very well-written paper with extensive descriptions and many practical "tricks" to enable on-device training, namely QAS, sparse tensor update, DCE, in-place update... The demo shows a functioning system on a real microcontroller. The work presented in the paper contains significant effort.\n\nThe individual tricks themselves are not new; e.g., for the idea of QAS, I think the authors should cite LSQ <Esser et al. 2019>. On the whole, making all the parts work together systematically is a non-trivial effort. \n\nThe authors seem to have neglected <HFP8, Sun et al. 2020> and their follow-up 4-bit ultra-low-precision work for comparison. \n Q: The evaluation is done on a rather limited set of datasets; should the authors do more evaluation on a broader range of datasets to make it more convincing?\n\nThe ablation does not seem to have ablated QAS vs. w/o gradient scaling?\n\n\n I don't see any statement about open-sourcing the implementation; without such a promise, I feel this work would be extremely difficult to reproduce. This is a glaring issue with the quantization community, so I would really like to see the code open-sourced some day. ", " This paper presents a system-algorithm co-design approach towards conducting transfer-learning training on tiny microcontroller-based systems with less than 256KB of memory. The ideas focus on: (1) improving the optimization characteristics of quantization-aware training, (2) gradient sparsification, (3) new compile-time optimizations with the Tiny Training Engine. Results are shown on several tinyML applications. The paper has several strengths:\n\n1. Training on-device is much more difficult than inference on-device. Hence, doing simple transfer-learning kinds of tasks on tiny devices is interesting.\n\n2. The paper makes significant contributions towards the optimization of quantized networks as well as training-support compilers for tiny devices.\n\n3. The results demonstrate significant memory savings compared to standard deep learning frameworks (TensorFlow/PyTorch).\n\nDespite the strengths, the paper does have several significant weaknesses:\n\n1. One of the main motivations that the authors use to disregard several existing training-time/adaptation techniques (e.g., the ones that rely on updating only the normalization layers (reference [25] in paper)) is that the normalization layers are not present once the model is deployed on tiny devices. This happens because deployment tools like TFLITE, etc., fuse the batchnorm layers into convolution weights. However, this fusing process is extremely cheap, and the compiler needs to do it only once for inference. Can we not keep the norm layers for on-device training purposes and fuse them only when running inference? Would training only the norm layers result in significant savings compared to the proposed approach? This is a major class of upcoming cheap training-time/adaptation techniques, and solid evidence must be presented to show that the proposed approach indeed outperforms this new class of models. The baselines considered for comparison ("update last k layers", "update last k biases", "update only the classifier") are not strong. If "update only norm layers" kinds of techniques are effective, it may also reduce the motivation behind the proposed quantization scaling method.\n\n2. The improvements to the optimization problem for quantization-aware training seem general. However, the authors have not provided any evidence that this would result in general improvements in quantization-aware training (and not just for tinyML applications). Specifically, would the proposed scaling method reduce the gap between FP32 and INT8 models for non-transfer learning scenarios? This kind of experiment can clarify whether the proposed technique is limited to transfer learning only or applies to general quantization-aware training. If it only works for transfer learning and not in general, why? Some discussion around the specific conditions under which the proposed method works would strengthen the paper.\n\n3. There are many existing gradient sparsification techniques. The authors have not presented any comparison against existing gradient sparsification techniques. For instance, this paper and the many references it cites are relevant: https://arxiv.org/pdf/1712.01887.pdf. Some newer papers probably exist in this space. The above paper is from 2017. I would recommend the authors do a thorough comparison against SotA gradient sparsification techniques.\n\n4. Will the code for the Tiny Training Engine be open sourced?\n\nOverall, I like the idea in this paper and the fact that this is the first paper that trains models on-device for realistic applications under 256KB. This is a significant contribution. However, I would like to make sure that the paper is not missing significant baselines and comparisons. I can increase the rating if the above weaknesses are adequately addressed.\n\n\n---Update after rebuttal---\nI have read other reviews and author response. The authors have provided good clarification and insightful new experiments during the rebuttal. They have satisfactorily answered my questions. As a result I have increased the rating to \"6: Weak Accept\". If the paper is accepted, I would like the authors to just show a simple comparison between standard quantization-aware training (QAT, e.g., where BNs are still present and are fused after QAT is complete) and QAS on a real quantized graph (e.g., the one without BN). While I can see that on a real quantized graph, QAS has benefits for non-transfer learning scenarios (based on new results), it would be useful to see the gap between regular QAT and QAS on real quantized graphs. Indeed, this can help motivate future works that may improve upon QAS to close the gap between QAT and quantization algorithms that directly work on real quantized graphs.\n Please refer to the last section. Mostly yes. Other things like point 2 above (see weaknesses) can help further.", " This paper proposes a novel and efficient framework for on-device training with 256KB of memory. In practice, this paper introduces the Quantization-Aware Scaling (QAS) technique to stabilize quantized training with mixed-bit precision, and presents the sparse update technique to save memory footprint. Besides, the Tiny Training Engine (TTE) is designed to implement the proposed methods in an MCU system. Strengths\n1. This paper is well-written and easy to understand.\n2. This paper proposes a complete hardware-software co-design scheme for training on MCU devices.\n3. The experiments and analysis are sufficient to support the proposed methods, and the experimental setup is detailed.\n\nWeaknesses\n1. Even with much optimization, the inference speed is very slow on the MCU device; is training really necessary and suitable? 1. Could you show the total training time on the device for the three models on different datasets, and show the comparison with a GPU? If the training time is too long, the application of these methods may be limited.\n2. Can you explain the necessity of the on-device training technique and its application scenarios? Training on the cloud and updating the parameters on the IoT device seems to be more promising.\n3. Does the on-device training have strict requirements on the structure of the neural network models? The plain network seems to be more memory-saving, but this paper does not adopt it. The datasets and models used in the paper are small; some tricks for large models and large datasets seem to be not so important for accuracy, like the shortcut connection, attention block, and swish activation function. \n Yes.", " This paper considers a timely topic, especially with the explosion of tiny devices with limited computational resources. It is a trend to deploy lifelong learning tasks directly on devices and realize user customization. The authors propose an algorithm-system co-design to fit this rigorous setting, even with only 256KB of memory. The model adaptation methods for gradient calibration and sparse update are straightforward but experimentally work well. This framework matches the practical demands of an edge environment, and thus is a meaningful solution for machine learning analysis on IoT devices and MCUs. The experiments take commodity on-device learning settings and present promising performance in terms of model size, memory cost, and accuracy. Generally, this is an interesting paper with a clear methodology design and performance evaluation. However, I have some concerns (about the weaknesses) as follows.\n\n1. The effectiveness of the proposed model compression and learning strategy is highly related to the processing pattern of the underlying hardware. For example, not all commodity processing chips can support the specific int8 operations (or efficiently handle these operations). The true acceleration of sparse computing relies on specific chips. So I am interested in whether the proposed methods can generalize to other commodity devices (e.g., extending to the NVIDIA Jetson or a mobile phone's neural chips, not just the specific STM series).\n\n2. The experiments are mainly based on classification and adopt transfer learning with limited epochs on downstream tasks. What about the performance on segmentation and detection tasks? \n\n3. Also, supporting long-time (a large number of epochs) training is critical for on-device lifelong learning. Can this method train from scratch, instead of fine-tuning pre-trained models? \n\n4. Handling the learning procedure in a timely manner is also critical. I think it is better to present the training/inference speed or time cost when applying the method to the downstream tasks. Please see the four questions in the weakness part. None." ]
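The activation-memory argument behind the BN-only-update comparison above (the $ds = x \cdot dy$ point in the authors' reply to reviewer JJZV) is small enough to sketch. The shapes and byte counts below are illustrative assumptions, not measurements from the paper:

```python
import numpy as np

def bn_affine_backward(x, dy):
    """Backward pass of y = s * x + b (scale and shift of a normalization layer).

    d_b needs only the upstream gradient dy, but d_s needs the *saved input
    activation* x. Keeping x alive for every normalization layer is the
    activation memory that dominates a "BN-only" update, and exactly what
    the sparse update scheme is designed to skip.
    """
    d_s = np.sum(dy * x)   # requires x from the forward pass to be stored
    d_b = np.sum(dy)       # needs no saved activation at all
    return d_s, d_b

x = np.random.randn(1, 32, 32)    # one 32x32 activation map: ~4 KB at fp32
dy = np.random.randn(1, 32, 32)
print(bn_affine_backward(x, dy))
```

This is why updating only the (few) normalization parameters is parameter-efficient but not memory-efficient, as the rebuttal argues.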
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 5 ]
[ "-kprvNikHnq", "Uuc9sZ_FtYa", "Tbs8ORY1yk7", "N7RIhEOalVK", "W547II4O81w", "3j7oseeqOvI", "nips_2022_zGvRdBW06F5", "nips_2022_zGvRdBW06F5", "Uuc9sZ_FtYa", "99ibPJaWTBI", "P5n4a2nqXk4", "3j7oseeqOvI", "nips_2022_zGvRdBW06F5", "nips_2022_zGvRdBW06F5", "nips_2022_zGvRdBW06F5", "nips_2022_zGvRdBW06F5" ]
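The gradient-scale mismatch that Quantization-Aware Scaling corrects, discussed at several points in the rebuttal above, can be made concrete with a short sketch. The `s**-2` rule below follows from the chain rule for `w = s * w_q`; it mirrors the ratio-calibration idea behind QAS but is a hedged reconstruction, not the paper's exact per-tensor formula (the real method also covers biases and per-channel scales), and the function name, scale, and learning rate are illustrative assumptions:

```python
import numpy as np

def qas_sgd_step(w_q, g_q, s, lr):
    """SGD on an int8 weight tensor with quantization-aware gradient scaling.

    With w = s * w_q, the chain rule gives g_q = s * g_w, so stepping with
    -lr * g_q directly corresponds to a step lr * s**2 times too large in
    w-space. Dividing by s**2 restores the floating-point-equivalent update.
    """
    w_new = w_q.astype(np.float32) - lr * g_q / (s ** 2)
    return np.clip(np.round(w_new), -128, 127).astype(np.int8)

rng = np.random.default_rng(0)
w_q = rng.integers(-128, 128, size=16).astype(np.int8)   # real quantized weights
g_q = rng.standard_normal(16).astype(np.float32)         # gradient on the int8 graph
print(qas_sgd_step(w_q, g_q, s=0.05, lr=1e-3))
```

Because different tensors in a real quantized graph carry very different scales `s`, an uncalibrated optimizer sees wildly mismatched weight/gradient ratios, which is the instability the rebuttal attributes to training without QAS.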
nips_2022_StzAAh8RuD
Independence Testing for Bounded Degree Bayesian Networks
We study the following independence testing problem: given access to samples from a distribution $P$ over $\{0,1\}^n$, decide whether $P$ is a product distribution or whether it is $\varepsilon$-far in total variation distance from any product distribution. For arbitrary distributions, this problem requires $\exp(n)$ samples. We show in this work that if $P$ has a sparse structure, then in fact only linearly many samples are required. Specifically, if $P$ is Markov with respect to a Bayesian network whose underlying DAG has in-degree bounded by $d$, then $\tilde{\Theta}(2^{d/2}\cdot n/\varepsilon^2)$ samples are necessary and sufficient for independence testing.
Accept
The manuscript studies the independence testing problem, given samples from a distribution over several binary random variables. While the sample complexity is exponential (in the number of variables), this paper shows that when the distribution is a Bayesian network with small in-degree, the sample complexity is linear. All reviewers asked for a clarification in the motivation, and some reviewers asked for comparison to literature and/or possible alternatives. The authors addressed this well during the rebuttal phase. I recommend adding this to the camera-ready version of the paper, as well as other discussions and clarifications raised by all the reviewers.
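To make the testing problem above concrete, here is a toy sketch: it draws samples from a distribution over $\{0,1\}^n$, restricts them to a subset of coordinates (the marginal-sampling step the authors describe in their replies below), and computes a naive pairwise statistic. The statistic is a stand-in for the paper's actual test, and the example distribution is an assumed illustration:

```python
import numpy as np

def samples_from_marginal(samples, T):
    """Samples from P_T given i.i.d. samples from P over {0,1}^n:
    no intervention is needed, just keep the coordinates in T."""
    return samples[:, T]

def empirical_cov(samples, i, j):
    """Toy dependence statistic: empirical covariance of bits i and j
    (a stand-in for the paper's test statistic, to show the oracle's use)."""
    xi = samples[:, i].astype(float)
    xj = samples[:, j].astype(float)
    return (xi * xj).mean() - xi.mean() * xj.mean()

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(10_000, 8))   # a product distribution on {0,1}^8
X[:, 1] = X[:, 0]                          # inject one strong dependency

X_T = samples_from_marginal(X, [0, 1, 2])
print(empirical_cov(X_T, 0, 1))  # about 0.25: far from any product distribution
print(empirical_cov(X_T, 0, 2))  # about 0: consistent with independence
```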
train
[ "jHzqcRhTHc", "KV7CmgAITG", "UWQxNchdiLT", "143DfUKRo1J", "KhO50N2O3xn", "9_yfcEaLaut", "JTWXV-H20F", "U_qQi_-erNE" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " ... we will make sure to incorporate these to the paper.", " I would like to thank the authors for their detailed reply. \n\nI feel my comments have been properly addressed. I would suggest incorporating your reply to certain points into the manuscript, especially on point 1(ii) (why focusing on the total variation) and point 3 (using localization in TV leads to sub-optimal results).", " We thank the reviewer for their feedback, and respond in detail to the points they raised below. We hope that they address the reviewer’s concerns, and will lead to their reevaluating their score. \n\n1. We would like to clarify that while both a Bayes net and a product distribution satisfy certain factorization properties, they are not equivalent, e.g., a distribution P is a Bayes net with bounded in-degree $d$ does not necessarily imply that it is a product distribution (the converse is true). Specifically, the factorization property the reviewer refers to (p.4 in the paper) expresses the joint distribution as a product of *conditional* distributions; while for a product distribution, this is just the product of the individual marginals. Put differently, the question we address is whether a Bayes net with maximum in-degree $d$ (unknown graph structure) can be expressed as a Bayes net with maximum in-degree 0. \n\\\nRegarding the second question: no, our tester would not distinguish between the two cases (and, more generally, we are not addressing the question of structure learning or identification). Our tester would distinguish between either of the two cases (degree-1 Bayes nets) and the corresponding product distribution on $X_1, X_2, X_3$ (no edges), assuming the edges in the proposed example encode non-trivial dependencies (i.e., removing them significantly changes the resulting distribution). \n\n2. While we do not know the structure (graph) of the Bayes net, for fixed $n,d$, the number of possible graphs is finite, and therefore, it is possible to iterate through them; the resulting algorithm is computationally inefficient, but as stated in the paper our focus here is the *sample complexity*. \n\n3. We defined the notion of degree used (bound on the maximum in-degree) on l.172. We followed here the literature on learning and testing Bayesian networks (see, e.g., https://arxiv.org/abs/2011.04144 or https://proceedings.mlr.press/v65/canonne17a.html); we will clarify this by adding “in-degree” explicitly when defining the degree, on lines 172-173. \n\n4. We are not sure to understand what the reviewer is suggesting. If the suggestion is the (related, but different) question of estimating how far to a sparse Bayes net a given distribution is, we note that this is orthogonal to our current paper (we work under the assumption of sparse Bayes net, not trying to test that assumption), but also that this is partially addressed in Theorem D.1, where we show that even testing if a distribution is *exactly* a sparse Bayes net requires exponentially many samples (in the dimension $n$). \n\n5. The main focus of this paper is on the theoretical analysis and understanding of the sample complexity of the problem, and especially the lower bound. We believe our algorithms to be practical, and agree that an extensive evaluation of this aspect would be interesting. However, we leave this for future work, as this would distract from the main message of the current paper.\n\nWe are happy to engage in more discussion if the reviewer has further questions. 
", " We thank the reviewer for their time and useful comments, especially for pointing out some typos in the paper (their question 4 (2)) and for their feedback and suggestions on the presentation of our lower bound constructions. Please find our response, clarifications and corrections below: \n\n1. (i) This is a great question — indeed, testing independence is a special case of testing graphical structure (testing the Bayes net has in-degree 0 on every node, or as the reviewer writes, testing if the network has no edge). We refer the reviewer to our motivation section in line 55-60: by answering this (fundamental) question of independence testing, we are making an important step towards the general (and even more challenging) problems of testing whether a sparse network has even sparser connections (not just the sparsest, as in the case of independence testing). Put differently, a long-term goal is to address the more general question, as stated in ll. 55-60; yet, even the “base case” of independence testing was completely open prior to our work, and is a first and necessary step towards a full understanding of the general case. We hope that our paper will inspire others to study related graphical testing problems, e.g., testing the maximum in-degree of $k$, where $k \\ll d$. \n\\\n
(ii) Total variation distance is a standard metric used in hypothesis testing, due to its relation to Type I and II error (Pearson–Neyman lemma), indistinguishability, and its other properties (data processing inequality, relation to Hellinger/KL/$\chi^2$/$L_2$, etc.). It does make sense, in some cases, to consider other notions of distance (see, e.g., reference [Daskalakis, Kamath, and Wright. *Which distribution distances are sublinearly testable?*]), often as a proxy for TV distance or in combination to obtain some additional robustness. We note that our upper bound result, while stated for TV, does imply a result for Hellinger distance as well, since our analysis relies on Hellinger distance as a proxy for TV. \n\\\n(iii) While our paper has a strong theoretical flavor, we strongly believe that our algorithm and result can have some implications for a range of hypothesis testing questions, given the fundamental nature of the problem solved and the pervasiveness of the sparse Bayes net assumption/modelling in natural sciences and science at large. We refer the reviewer to lines 37-43 of the submission for some pointers to such applications. \n\n2. Thank you for pointing this out; indeed, in some special cases, the results overlap, though the conversion between parameterizations makes the results difficult to compare even in these cases (e.g., dependence on the parameter $\beta$ (maximum edge value) in the results for the Ising model case, and max-undirected-degree vs. max-in-degree). We will add a more thorough discussion and comparison in the final version. \n\n3. There are analogous localization results on TV distance. But these translate to much weaker results compared to the ones provided in Daskalakis and Pan (2017) -- using the localization results for TV, we would get a worse dependency on $\varepsilon$, which, given how we set that parameter when applying the result, in turn leads to a worse dependence on $n$. From a technical standpoint, the issue with trying to use or derive such localization results for TV distance directly is that TV distance does not "tensorize" (while, say, Hellinger, $\chi^2$, and KL do): *even for product distributions*, TV does not tensorize, and there is no tight localization result possible; so, obtaining a sufficiently tight one (for our purposes) for more general Bayes net structures is impossible. \n\n4. (1) Thank you for the feedback: we will clarify this point in the final version. Namely, we use the term "tree" to refer to the structure of degree-$1$ Bayes nets (as in the lower bound construction), while this is indeed a forest (the max degree is 1). This technically still factorizes as a Bayesian path (tree), but not all edges are necessary.\n\\\n
(2) It is a typo on our end; indeed, it should be $\\\\{0, 1\\\\}$.\n\\\n(3) By “pointer”, we mean that the first $d$ nodes with a support of $\\\\{0,1\\\\}^d$, can be seen as a $d$-bit string that acts as a pointer to one of the $2^d$ distributions, i.e., the value of the first $d$ nodes encodes which distribution the rest of the $(n-d)$ nodes are on. We will clarify this in the final version, and follow the reviewer’s suggestion about the construction by adding an illustration (figure). \n\n 5. This is a fair point; we will do our best to make this proof sketch more concise and less dense, and if enough space remains, we will add a short discussion section.\n\nWe are happy to engage in more discussion if the reviewer has further questions. ", " We are thankful to the reviewer for their reviews and thoughtful comments. Please find our response and clarifications below: \n\n1. Besides applications to hypothesis testing (such as the ones discussed in the introduction, e.g., checking whether variables are independent in medical settings; see line 37-41), we would give one example scenario for applying our algorithms in the real world, namely, *model selection*: suppose the ML practitioner might have some rough ideas (valid assumptions) on the generation process and would like to leverage this piece of information to save on data collection. Namely, they know, based on well-established prior knowledge of the domain that the data collected does have sparse dependency structures (satisfying our low in-degree assumption) and would like to obtain good estimates of the distribution (density estimation). They also suspect (based on less certain, yet plausible knowledge/modeling assumptions) that the variables are actually statistically independent, i.e., the Bayes net they want to learn might in fact be a product distribution.\\\nIn this case, they have two strategies: one is to be safe and learn the distribution as a Bayesian Network, which requires (up to constants) $2^d n/\\varepsilon^2$ observations. The other is to first check whether that less-certain hypothesis does holds, and, if so, estimate the density with much fewer observations, only $n/\\varepsilon^2$ (as we then know that it is close to a product distribution). With our algorithm, the second approach is possible using an additional $2^{d/2} n/\\varepsilon^2$ observations, which dominates the total number and leads to significant savings over the first approach. \n\n2. The fact that Bayesian networks arise naturally in a variety of settings (see ll .37-41) makes this assumption a very natural one. In that sense, these savings (in terms of sample complexity, i.e., data size; and the resulting savings in time and storage to process, collect, and store this data) are paramount. Since this structure is readily available in a lot of applications, having algorithms to leverage it in such a fundamental application as testing statistical independence can be incredibly useful in any data-starved setting. Put differently: if your goal is to perform independence testing, then leveraging all the structural assumptions you know you have is very natural, especially (as we show) since it brings huge savings in data requirements. \n\n3. We will refer the reviewer to Line 644-648 and onwards, where we use existing and establish new stochastic dominance results (which we believe are of independent interest) to prove the upper bound on the MGF of truncated Binomials. 
\\\nAt a higher level, our analysis combines a variety of techniques which, while not separately new per se, are used in a non-trivial way: decoupling inequality, stochastic dominance, and coupling of random variables, for instance. \\\nFinally, Theorem D.1 is also a new and very general result on testing, which we believe could find other applications to easily obtain testing lower bounds for other classes of high-dimensional probability distributions from known learning results. \n\n4. No; no intervention is needed for our algorithm. For “Use the $m$ samples from $S$ to generate $m$ i.i.d. samples from $P_T$\" in Algorithm 1”, we are looking at marginals of the full distribution $P$, and are interested in a subset of variables $T$. To get (generate) one sample for $P_T$, we can simply take one sample of $P$, keep the corresponding node values of $T$ and ignore the rest. \n\n \n\nWe are happy to engage in more discussion if the reviewer has further questions. ", " The paper studies the independence testing problem. Given samples from a distribution $P$ over $n$ binary random variables, one should detect where all $n$ random variables are statistically independent or not (i.e., at least $\\epsilon$-far in TV from any product distribution).\n\nFor a general $P$'s, the sample complexity is in the order of $\\exp(n)$. The main contribution in this paper is to show that when $P$ is a Bayesian network with in-degree at most $d \\ll n$, the sample complexity is in the order of $n \\exp(d) / \\epsilon^2$. Strengths:\n- In section 1.2, I truly appreciate that the authors spent a good amount of space comparing their results with relatively simpler alternatives, but that have worse sample complexity than their result.\n- I also appreciate how the authors provide some intuition about the novelty of the techniques used in this paper, specifically in Lines 114-120 and Lines 147-156.\n- The theoretical result seem sound and thorough.\n\nWeaknesses:\n- The theoretical result is interesting, but the (possible) Machine Learning application is unclear.\n- It is unclear (at least to me) whether the problem/algorithm assumes observational data or it requires interventions.\n I would appreciate if the authors would address the following:\n\n1) Please comment on why the problem would be interesting from the viewpoint of applied Machine Learning. Specifically, why a Machine Learning practitioner would like to run the independence testing algorithm?\n\n2) Learning the structure of Bayesian networks is useful for understanding independence between variables (e.g., genes, brain regions, etc.) in exploratory research. Why are Bayesian networks useful in the independence testing problem (besides the fact that it leads to a better sample complexity)?\n\n3) Besides the bounds on the MGF of squared binomials and the result for the Hellinger distance. Could the authors point out to other specific novel technical results (lemmas, equations, etc.)? Any argument related to the challenge of solving this problem as well as the technical novelty will be highly appreciated.\n\n4) Please clarify if the problem/algorithm assumes that data is observational, or whether the algorithm needs to perform interventions (fixing some variables and sample other variables). See for instance \"Use the $m$ samples from $S$ to generate $m$ i.i.d. samples from $P_T$\" in Algorithm 1. 
As the paper is theoretical, there is no clear and direct societal impact.", " This paper shows that for a distribution $P$ over $n$ binary variables such that it factorizes according to an underlying degree-$d$ Bayesian network, testing that $P$ is a product law versus $P$ is $\\varepsilon$-away from any product law in total variation requires $O(2^{d/2} n / \\varepsilon^2 )$ samples. Further, the bound is shown to be tight. ### Strengths\n1. The paper solves the problem studied: the bound is tight.\n2. The problem and techniques are connected to and built upon many recent developments, e.g., Diakonikolas and Kane (2016), Daskalakis and Pan (2017), and Canonne et al. (2020). \n3. The proofs seem solid. \n\n### Weakness\n1. The authors should better motivate the problem studied, e.g., why focus on testing complete independence, why use the total variation, etc. \n2. The presentation of certain sections can be improved. 1. The problem studied should be better motivated. \n\n(i) Why focus on testing complete independence (i.e., a product measure)? Perhaps elaborate on *testing graphical structure* --- the problem is testing that the underlying Bayesian network has no edge, I suppose? \n\n(ii) Why focus on the total variation distance? Does the result also extend to the problem posed in Hellinger distance?\n\n(iii) Does this problem bear any practical implication? \n\n2. Relation to Daskalakis et al. (2019) for Ising models. \n\nThe relation is dismissed as \"Ising models and Bayes nets are incomparable modeling assumptions\". But certain Ising models are Bayesian networks, e.g., when the undirected graph is chordal. Perhaps, at least for these cases, it is worth more clarification?\n\n3. Total variation vs. Hellinger\n\nThis issue arises again when reading the high-level description of techniques in $\\S$1.2, where the challenge seems to be that while previous testing algorithms are developed for total variation, the localization result in Daskalakis and Pan (2017) is in Hellinger. The authors chose to stick with Hellinger and devised a testing algorithm for Hellinger. \n\nMy question is about the alternative: can the localization result be extended to the total variation?\n\n4. Definition 2.2 is confusing. \n\nI am confused by the following. \n\n(1) $\\lambda$ is disconnected (as a perfect matching) but called a \"tree\". \n\n(2) The expression for $\\text{Cov}(X_i, X_j)$ does not make sense to me: $(-1)^{\\mu_{l,k}} = (-1)^{\\pm 1}$ is always $-1$?\n\n(3) What does \"pointer\" mean here?\n\nIn general, it is very difficult for me to parse how the construction works. For better clarity, I would suggest moving the text description from page 3 to follow the definition. Or, even better, illustrate with an example. \n\n5. The proof sketch in $\\S$2.1 is too dense. \n\nIt should be reduced and simplified to make it a readable **sketch**. Meanwhile, the paper should end properly with some discussion, which is currently impossible due to the space limit. I do not foresee any potential negative societal impact of their work.\n\nI have listed my suggestions in the previous section. ", " This study focuses on the independence test of whether a given Bayesian network distribution is a product distribution or is $\\epsilon$-far in total variation distance from any product distribution. This study shows that the required sample size is $O(2^{d/2} n/\\epsilon^2)$ rather than $\\exp(n)$, using the sparsity of a directed acyclic graph. \n Strengths: Better sample complexity. \nWeaknesses: Some unclear statements. 
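To put the claimed sample-complexity gain in perspective, here is a back-of-the-envelope instance (illustrative numbers of ours, not taken from the paper), with $n=100$, $d=10$, $\\varepsilon=0.1$:

$$\\underbrace{2^{d}\\, n/\\varepsilon^{2}}_{\\text{learn the full Bayes net}} \\approx 1.0\\times 10^{7} \\qquad \\text{vs.} \\qquad \\underbrace{2^{d/2}\\, n/\\varepsilon^{2}}_{\\text{test independence first}} \\approx 3.2\\times 10^{5},$$

i.e., roughly a $2^{d/2}=32\\times$ saving in observations when the test confirms (near-)independence.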
1. Unclear main problem: This paper states that the main problem is whether the joint distribution of a Bayesian network is a product distribution. However, as stated on page 4 of the paper, it satisfies the factorization property, and hence the joint distribution is a product distribution. It would be better to clarify the problem in focus. I am not sure if I understand correctly: can the proposed tester distinguish $P$ from a Bayesian network with $X_1 \\to X_2 \\to X_3$, and $Q$ from a Bayesian network with $X_1 \\leftarrow X_2 \\leftarrow X_3$?\n2. This paper does not assume a known structure. However, I do not understand how this study can handle an unknown ordering of the graph, which is in general not solvable. \n3. This paper misuses the term degree as in-degree. This makes the paper really unclear. The maximum in-degree is $d_{in} = \\max_{j \\in V} | \\Pi_j |$, the maximum degree is $d = \\max_{j \\in V} | \\Pi_j \\cup \\{\\text{children of } j\\} |$, and the maximum degree of the moralized graph is $d_{m} = \\max_{j \\in V} | \\Pi_j \\cup \\{\\text{children of } j\\} \\cup \\{\\text{spouses of } j\\} |$. \n4. It would be better to discuss how to determine whether a given distribution comes from a sparse Bayesian network.\n5. For enhanced clarity, the theoretical findings would better be confirmed through numerical experiments. None." ]
[ -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "KV7CmgAITG", "143DfUKRo1J", "U_qQi_-erNE", "JTWXV-H20F", "9_yfcEaLaut", "nips_2022_StzAAh8RuD", "nips_2022_StzAAh8RuD", "nips_2022_StzAAh8RuD" ]
nips_2022_yewD_qbYifc
PCRL: Priority Convention Reinforcement Learning for Microscopically Sequencable Multi-agent Problems
Reinforcement learning (RL) has played an important role in tackling the decision problems emerging in agent fields. However, RL still has challenges in tackling multi-agent large-discrete-action-space (LDAS) problems, possibly resulting from large agent numbers. At each decision step, a multi-agent LDAS problem is often faced with an unaffordable number of candidate actions. Existing work has mainly tackled these challenges utilizing indirect approaches such as continuation relaxation and sub-sampling, which may lack solution-quality guarantees from continuation to discretization. In this work, we propose to embed agreed priority conventions into reinforcement learning (PCRL) to directly tackle microscopically sequenceable multi-agent LDAS problems. Priority conventions include position-based agent priority to break symmetries and prescribed action priority to break ties. In a microscopically sequenceable multi-agent problem, the centralized planner, at each decision step of the whole system, generates an action vector (each component of the vector is for an agent and is generated in a micro-step) by considering the conventions. The action vector is generated sequentially when viewed microscopically; such generation will not miss the optimal action vector and can help RL's exploitation around the lexicographic-smallest optimal action vector. Proper learning schemes and action-selection schemes have been designed to make the embedding a reality. The effectiveness and superiority of PCRL have been validated by experiments on multi-agent applications, including the multi-agent complete coverage planning application (involving up to $4^{18}>6.8\times 10^{10}$ candidate actions at each decision step) and the cooperative pong game (state-based and pixel-based, respectively), showing PCRL's ability to handle LDAS and its higher ability to find optimal solutions than joint-action RL methods and heuristic algorithms.
Reject
While the ideas in this paper are promising, there are issues with the paper's presentation and experimental results. The paper needs to be (further) updated to clarify the proposed method and discuss additional related work. More extensive experimental results are also needed to show the benefits of the proposed approach.
val
[ "s9l-GsDV5JT", "pZ656sf2bdq", "EXIzaCEW8eM", "ejvZv4lIcqE", "-7v8OzRsCJS", "OedQjZPIxbh", "IZfFewvZaH3", "drCSeutgpmU" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the suggestions and help. We will do as recommended.\n\nFor issue 1: The question is very good. Since we did not express it clearly, we are sorry for the misunderstanding. We will modify the corresponding parts of the manuscript to make the expression more accurate, clearer, and more backed up. \n\nFor issue 2: The macroscopic view of PCRL treats a decision of the multi-agent system as one macro-step, and this macro-step is the decision step of the MDP: The planner maps the current state (including map state + agent state) to an optimal action vector whose first component is for the highest priority agent, etc. That is \\begin{equation}\n\t\t\t\\vec{a_t}=f(s_t).\n\t\t\\end{equation} The macro-step microscopically consists of several micro-steps.\n\nFor issue 3: Yes. Your suggestion is more accurate.\nFor issue 4: The main contribution of this paper is about large discrete action spaces. We deal LDAS with microscopical sequencing. As multiple optimal sequences (i.e., action vectors) exist, we introduce priority to break ties, learning convergence to the lexicographic-smallest optimal action vector.\n\nFor issue 5: Sorry for the unclearness. This means that the parameters for LDAS will not increase exponentially with the number of agents but linearly or remain constant.\n\nFor other things: Thanks very much for pointing out these errors. We apologize for these grammatical problems and have corrected them based on the suggestions.\n", " Thanks to the authors for this response. I think this is a better paper (and that there's a good paper inside of the current paper, just one that still needs a lot of work on the writing to get to) though I do not believe I can improve my score to an accept as the contributions of the paper are still extremely unclear. I believe that this is a writing issue / insufficient clear demonstration of the ideas in the experiments. However, before getting to the issues I still have, I **highly recommend coloring the changes between papers to make it easy for reviewers to see what has changed in the paper.** This is just a useful practice and a friendly recommendation to the authors.\n\n**Issue 1:**\nPutting that aside, I take issue with several pieces of the claim *\"If multiple optimal joint-actions exist in the right side of the “=”, RL for LDAS is difficult to train because multiple return-maximum peak landscapes may impact the convergence and exploitation of RL, so new action selection and training schemes are required. The social conventions, especially the position-based priority can introduce partial ordering and break ties\"*. \n\nMy issue is that this claim is, as stated, an opinion and should instead be backed up with an experiment demonstrating clearly that this is a problem and that the proposed methods solve this issue rather than leading to improvements by solving some other issue.\n\nBesides this, there are other places in the paper where opinions are stated as facts. \n\n**Issue 2:**\nThe paper proposes to do something about microscopic sequenceability but only contains an explanation in the text. Since this is basically the key concept in the paper it should stand alone as a definition. Additionally, what it means to view the system `macroscopically` should be described more carefully. \n\n**Issue 3**:\nLine 181 still contains a claim about \"physical\" meaning. 
I do not think the authors are using the term \"physical\" correctly; I think they mean that it has an intuitive meaning?\n\n**Issue 4**:\nIs the contribution of this paper about tie-breaking or about large discrete action spaces? I can't tell from the introduction or the motivation, and the authors seem to flip-flop between the two. I'm assuming that the underlying argument is actually that multi-agent systems inherently have large action spaces due to the exponential blow-up of the joint action space, but that is not gotten across through the introduction.\n\n**Issue 5**:\nWhat is the importance of `linearly expressible space complexity`? This term is not defined and simply introduced in one of the theorems.\n\nSome other small things:\n- the axes in Figure 6 are too small and I am not convinced that there is a statistically significant difference between PCRL and DQN on 6a.\n- in Figure 2b the text overlaps with one of the boxes \n- amd ryzen should be capitalized as AMD Ryzen\n- Table 1 could round the values to 1 decimal place without any significant loss of information. There should be more spacing around the text in the explanation. \n- The axes in Fig. 3b should probably be adjusted ", " Thanks for the reviewer's comments and questions.\n\nFor Q1: Some multi-agent problems are microscopically sequenceable, i.e., an optimal joint action can be selected one component at a time in micro-steps according to the conventions, instead of only being selected simultaneously; i.e., \"microscopically sequenceable\" defined in RL language means that the joint action $\\pi^*(s)=(a_1,a_2,...,a_n)$ can be produced component-wise as $a_1=\\pi^*_1(s),\\ a_2=\\pi^*_2(s|a_1),\\ a_3=\\pi^*_3(s|a_1,a_2),\\ldots$, where the $\\pi_i$ functions are related. Lexicographic order, cargmax, and other inaccuracies have all been further clarified.\n\nFor Q2: \nThe agreed priority conventions for a specific multi-agent problem are prescribed by humans before the learning starts, and can be designed problem-specifically. The priority conventions usually consist of (1) \\textbf{Agent priority convention}: This is for determining the order in which agents decide actions and for breaking symmetries: Dynamically, at each decision step of the centralized system, the agents' priorities are determined by current states, say, the relative positions of agents. For example, in \\fref{example}(c), the agents' priorities can be determined from left-to-right and upper-to-lower position, so the priorities (deciding order) of agents are $R_{1}\\prec R_{2}...\\prec R_{16}$. Here, $x\\prec y$ means $x$ has a higher priority than $y$. If the priority is not position-based but ID-based, then the state space will have many mirror states. (2) \\textbf{Agent ID convention}: This is for coinciding agents and breaking symmetries. Statically assign each agent a unique ID before learning to distinguish it from others. When at least two agents coincide in the same position during the application, the smaller-ID agent has the higher priority. (3) \\textbf{Action priority convention}: This is for breaking ties between equally optimal actions. The actions of an agent have prescribed priority, say, up action$\\prec$left action$\\prec$down action$\\prec$right action, to break action ties if two actions would both bring about optimality, and to converge to one minimum lexicographic action vector. 
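As a concrete illustration of this tie-breaking (a sketch of ours, not the authors' implementation), one can encode action priorities as integers, e.g., up = 0 < left = 1 < down = 2 < right = 3, and compare candidate action vectors lexicographically:

```python
def precedes(alpha, beta):
    # alpha precedes beta iff, at the first component where they differ,
    # alpha takes the higher-priority (smaller) action.
    for a, b in zip(alpha, beta):
        if a != b:
            return a < b
    return False  # identical vectors: neither strictly precedes

# Among equally optimal candidates (a hypothetical set), the convention
# keeps the lexicographic-smallest action vector; Python tuple
# comparison is already lexicographic.
candidates = [(1, 2, 0), (0, 3, 1), (0, 3, 0)]
chosen = min(candidates)  # -> (0, 3, 0)
```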
Hereafter, we define that two action vectors satisfy $\\vec{\\alpha}\\prec \\vec{\\beta}$ (or say \"$\\vec{\\alpha}$ is \\textbf{lexicographically smaller} than $\\vec{\\beta}$\") if and only if $\\exists i\\in [1,N],\\ \\alpha_i\\prec \\beta_i$ and $\\forall j\\in [1,i-1],\\ \\alpha_j=\\beta_j$. \n\nThe robots are identical, and we need a precedence relationship to break symmetries and to ensure consistency of the optimal joint-action among the agents. It does not matter if you change the position priority, say, letting the bottommost-rightmost agent have the highest priority. Different priorities will yield different $\\pi^*(s)$, but the maximum return from the same current state is the same.\n\nFor Q3: \nThe reason we still need RL is the NP-completeness of the problems concerned. As the appendix pointed out, many planning paths for CCPP exist, and finding the best one is complicated with exhaustive or DFS search, so RL can help optimize toward (sub-)optimal solutions via sample-based optimization. We base our RL on priority conventions. The objective of each agent is not defined, but the objective for those agents is to visit the shaded cells the quickest.\n\nFor Q4: Our method is centralized-training-centralized-execution. It directly deals with large discrete action spaces, so we did not originally compare to the CTDE framework. We are now conducting experiments to comprehensively compare our work.\n", " Thanks for the reviewer's comments and questions.\n\nFor the \"General Comments\" part: \n\nThank you very much for your suggestion, and sorry that we wrote this improperly. We have modified the original statement in the revised version. We want to express that independent reinforcement learning methods will face nonstationarity problems in multi-agent environments.\n\nThe grammar issues have all been carefully revised.\n\nCompared with algorithms of the CTDE learning paradigm (for example, QMIX and RODE), we propose a multi-agent sequential decision algorithm inspired by social conventions, and our algorithm does not need value decomposition constraints. In order to reflect the superiority of our method, we have been conducting experiments these days comparing PCRL with existing CTDE algorithms in benchmark environments. We also want to note that our approach directly faces the LDAS problem and optimizes toward the lexicographic-smallest optimal joint action. Our work is to help exploitation around the lexicographic-smallest joint action. For the CTDE framework, some work needs the IGM principle, some uses a factorization mechanism, and some cannot tackle LDAS, so we cite some representative work of them in the related work section. These are the novelty and application-oriented settings of our work; though not commonly observed, such settings are practicable.\n\nFor the \"Comments on Experimental Section\" part: The paper now has the contributions of (1) Priority conventions and proofs: this paper proposes the concept of \"microscopically sequenceable\" to tackle LDAS multi-agent problems without missing the optimality, which can help exploitation around the lexicographic-smallest optimal action vector. (2) New schemes: it proposes new learning schemes that fully exploit the agreed conventions, such as an auxiliary equality constraint, and neural network action-selection schemes for PCRL. 
(3) Proof-of-concept practices: applying PCRL to tackle seemingly different problems, including $10^{10}$-magnitude LDAS multi-agent path planning problems and the pixel cooperative task, achieving 20\\% fewer steps and competitive performance. In order to further highlight the advantages of our method, we have formulated problems such as the 27m_vs_30m StarCraft game scene as relative-position-based microscopically sequenceable problems and have been conducting experiments to verify the algorithm on a larger-action-space problem.\n\nFor the \"Questions\" part:\nThe agreed priority conventions for a specific multi-agent problem are prescribed by humans before the learning starts, and can be designed problem-specifically. The priority conventions usually consist of (1) \\textbf{Agent priority convention}: This is for determining the order in which agents decide actions and for breaking symmetries: Dynamically, at each decision step of the centralized system, the agents' priorities are determined by current states, say, the relative positions of agents. For example, in \\fref{example}(c), the agents' priorities can be determined from left-to-right and upper-to-lower position, so the priorities (deciding order) of agents are $R_{1}\\prec R_{2}...\\prec R_{16}$. Here, $x\\prec y$ means $x$ has a higher priority than $y$. If the priority is not position-based but ID-based, then the state space will have many mirror states. (2) \\textbf{Agent ID convention}: This is for coinciding agents and breaking symmetries. Statically assign each agent a unique ID before learning to distinguish it from others. When at least two agents coincide in the same position during the application, the smaller-ID agent has the higher priority. (3) \\textbf{Action priority convention}: This is for breaking ties between equally optimal actions. The actions of an agent have prescribed priority, say, up action$\\prec$left action$\\prec$down action$\\prec$right action, to break action ties if two actions would both bring about optimality, and to converge to one minimum lexicographic action vector. Hereafter, we define that two action vectors satisfy $\\vec{\\alpha}\\prec \\vec{\\beta}$ (or say \"$\\vec{\\alpha}$ is \\textbf{lexicographically smaller} than $\\vec{\\beta}$\") if and only if $\\exists i\\in [1,N],\\ \\alpha_i\\prec \\beta_i$ and $\\forall j\\in [1,i-1],\\ \\alpha_j=\\beta_j$. \n\nDifferent priorities will yield different $\\pi^*(s)$, but the maximum return from the same current state is the same.\n\n", " Thanks for the reviewer's comments and questions.\n\nFor the \"Questions\" part: \n\nresponse to Q1: “Multi-Agent Reinforcement Learning is a Sequence Modeling Problem” was uploaded on May 30th, while our PCRL was submitted on May 19th, so the authors of each paper were not aware of the other. This interesting coincidence again shows the significance and novelty of our paper. The authors of the two papers independently developed their own ideas and experiments. The differences between our method and MAT are: (1) in PCRL, the agents' order is relative-position-based and can reduce the symmetries of the state mapping into the joint action. This priority ordering can help exploitation around the smallest lexicographic joint-action. (2) PCRL considers situations with multiple optimal joint-actions, which are commonly encountered, as Figure 1 shows. (3) MAT is advantage-decomposition-based while ours is Q-value-based, and both have independently shown correctness. 
(4) We use an LSTM rather than a Transformer, with fewer parameters, and the number of parameters grows more slowly. So PCRL is lightweight and scalable. (5) The idea of MAT borrows from the Transformer and large sequence models, while ours borrows from social conventions and partial ordering in discrete mathematics.\n\nresponse to Q2 and Q3: For example, many decomposition-based RL methods require the IGM principle, i.e., $\\operatorname{argmax} Q_{\\text {total }}(\\boldsymbol{s}, \\boldsymbol{u})=\\left[\\operatorname{argmax} Q_{i}\\left(o_{i}, u_{i}\\right)\\right]_{i=1}^{N}$. If multiple optimal joint-actions exist on the right side of the “=”, RL for LDAS is difficult to train because multiple return-maximum peak landscapes may impact the convergence and exploitation of RL, so new action-selection and training schemes are required. The social conventions, especially the position-based priority, can introduce partial ordering and break ties. This priority ordering can help exploitation around the smallest lexicographic joint-action. More responses can be found in our revised paper.\n\nFor the \"Grammar+sundry things\" part: \n\nThe grammar issues have now been revised and modified; please refer to the corresponding sections. The hypothesis means that we should set the reward and discount factor in RL properly so that maximizing the return of the MDP $\\leftrightarrow$ minimizing our real-world objective, say, the number of steps to cover all shaded cells. In the appendix, we give an example of an improper setting and a proper setting for the CCPP MDP.\n\nFor the \"Writing\" part: \n\nWe have now made Fig. 2 clearer and added words to explain our motivation; please see the revision. Social conventions have nothing to do with a macroscopic view. We break a macro-step (i.e., a decision event) of the MDP into micro-steps. At each micro-step, an optimal joint action can be selected one component at a time according to the conventions, instead of only being selected simultaneously; i.e., \"microscopically sequenceable\" defined in RL language means that the joint action $\\pi^*(s)=(a_1,a_2,...,a_n)$ can be produced component-wise as $a_1=\\pi^*_1(s),\\ a_2=\\pi^*_2(s|a_1),\\ a_3=\\pi^*_3(s|a_1,a_2),\\ldots$, where the $\\pi_i$ functions are related. By doing so, the action-selection process breaks the exponential action space into linearly expressible space complexity. Moreover, the action representation scheme and selection process will not lose the optimality of the action space and can select out the lexicographic-smallest optimal action vector.", " The paper aims to tackle the problem of large discrete action spaces in multi-agent MDPs (MMDPs), where the number of actions rises exponentially with the number of agents. The paper proposes to embed agreed priority conventions (agent priorities, agent IDs, action priorities) into reinforcement learning (PCRL) to tackle large action spaces. The paper leverages RNNs to generate Q-values for the system one dimension of the action vector at a time sequentially (while maintaining priority conventions), and proposes an equality auxiliary loss to make sure that the V*(s) are equal for each RNN output corresponding to each action dimension. The paper claims that the priority conventions do not miss the optimal actions. The authors evaluate their proposed approach on coverage planning and pong domains. 
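To make the summarized scheme concrete, one micro-step loop could look roughly like the following sketch (our reading of the description, not the authors' implementation; all sizes and modules are assumed placeholders):

```python
import torch

# An RNN emits Q-values for one agent at a time, in priority order;
# exact ties in argmax fall back to the smallest index, matching the
# prescribed action priority.
n_agents, n_actions, hidden = 4, 4, 32
rnn = torch.nn.GRUCell(n_actions, hidden)
q_head = torch.nn.Linear(hidden, n_actions)

h = torch.zeros(1, hidden)        # stand-in for the encoded state s
prev = torch.zeros(1, n_actions)  # one-hot of the previous component
actions = []
for _ in range(n_agents):         # agents visited in priority order
    h = rnn(prev, h)
    q = q_head(h)                 # Q-values for this agent's actions
    a = int(q.argmax())
    actions.append(a)
    prev = torch.nn.functional.one_hot(
        torch.tensor([a]), n_actions).float()
```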
## General Comments \n- The proposed approach seems sound, and it is interesting to see the authors' approach match up to the performance of DQN with a significantly smaller number of parameters required.\n\n- The paper is a bit hard to understand at places, with grammatical mistakes. Figure 2 makes it easier to understand the paper. It would be great if the authors could take some time to focus on the writing of the paper. Some points to consider:\n\n - For example: “some researchers resorted to independent reinforcement learning (RL) regardless of other agents, but that will form loose cooperation and yield suboptimal solutions” → Independent MARL has been shown to be a surprisingly good baseline in recently published papers in cooperative MARL. [6]\n - Some minor points:\n - Line 161: \"where h_0 = C_0 is the featured and supplied state s\" → I am not sure what the authors mean by featured and supplied state?\n - Line 166: ungaurenteed → unguaranteed; inconsitent → inconsistent\n - Line 198: (??) reference missing\n\n- The paper is missing several key citations from recent MARL literature which especially focus on improving exploration in MARL large action spaces with role-based learning (RODE: [1]) as well as papers leveraging the centralised training decentralised execution (CTDE) framework (an intermediary between fully decentralised learning and centralised learning) [2,3,4,5]\n\n- The idea of leveraging RNNs to predict one dimension of the action vector at a time seems to be very similar to other works [7,8], with equality constraints similar to those proposed by the authors in Eq. 11. Can the authors comment on the novelty of their approach?\n\n- The considered problem in the paper seems to be difficult from the perspective of a large number of agents (thereby large action spaces), and therefore the proposed approach seems to be only useful for scenarios with full observability and homogeneous agents, which is not commonly observed in real-world multi-agent problems.\n\n## Comments on Experimental Section\n\n- The authors only compare their approach against the joint-action DQN-based approach. There have been several recent works on tackling large action spaces in MARL with role-based learning [1], and other CTDE-based approaches [2,3,4,5], which can tackle large action spaces pretty effectively, especially on the coverage and pong domains in the paper. The authors should compare against these approaches to fully showcase the effectiveness of their approach.\n\n- The authors do not evaluate on any large-scale multi-agent RL domains like StarCraft II (SMAC) [7] (with some maps having 27 agents and each agent allowed to take 8 actions), which would help prove the efficacy of their method on large-action-space domains.\n\n[1] Wang, Tonghan, et al. \"RODE: Learning roles to decompose multi-agent tasks.\" ICLR 2020.\n\n[2] Son, Kyunghwan, et al. \"QTRAN: Learning to factorize with transformation for cooperative multi-agent reinforcement learning.\" International Conference on Machine Learning. PMLR, 2019.\n\n[3] Mahajan, Anuj, et al. \"MAVEN: Multi-agent variational exploration.\" Advances in Neural Information Processing Systems 32 (2019). \n\n[4] Rashid, Tabish, et al. \"Weighted QMIX: Expanding monotonic value function factorisation for deep multi-agent reinforcement learning.\" Advances in Neural Information Processing Systems 33 (2020): 10199-10210. \n\n[5] Son, Kyunghwan, et al. 
\"QTRAN++: improved value transformation for cooperative multi-agent reinforcement learning.\" arXiv preprint arXiv:2006.12010 (2020).\n\n[6] de Witt, Christian Schroeder, et al. \"Is independent learning all you need in the starcraft multi-agent challenge?.\" arXiv preprint arXiv:2011.09533 (2020).\n\n[7] Samvelyan, Mikayel, et al. \"The starcraft multi-agent challenge.\" arXiv preprint arXiv:1902.04043 (2019).\n - Can the authors answer questions from the weakness/strengths section?\n\n- Additional Questions\n\n - It seems like the different priority conventions (agent priorities, agent ID, action priorities) conventions are predefined before learning. Would it affect the convergence if conventions are changed? For example, if action priorities are changed, would it change the final solution? How would one define such priority conventions for large scale domains like Starcraft II [7]?\n\n - Line 235: the authors mentions that “PCRL network can successfully learn the conventions”. Aren’t the conventions already predefined for the domains? Yes, the authors does mention the limitations of their work via a future work paragraph. Additionally, the authors should consider partial observable domains and possibly learning priority conventions.", " This paper looks to tackle multi-agent reinforcement learning problems operating in large-discrete-action spaces. The authors do so by introducing a priority convention into the RL loop. The priority convention results in generating a sequence of actions that are somehow better. The authors demonstrate the effectiveness of their algorithm by demonstrating experiments on various multi-robot problems. Strengths\n1. The paper is thorough with its literature review. \n\nWeaknesses\n1. The paper is extremely poorly written and throws around a lot of words that carry no meaning scientifically. For example in the abstract alone \"In this work, we propose to embed agreed priority conventions into reinforcement learning (PCRL) to directly tackle the microscopically sequencable multi-agent LDAS problems\" To the best of my knowledge I have never heard the term microscopically sequencable (and after reading the paper I still do not know what the authors are trying to say with the term microscopically sequencable means in a rigid mathematical sense.) Further phrases such as \"can help exploitation around the lexicographic-smallest optimal action vector\" make no sense to me. Such phrases and terms occur in the paper at multiple places and either serve very little purpose or only stand to confuse the reviewer when not rigidly defined. The phrase cargmax or consistent argmax at a first glance only confuses the reader. I would very sincerely request the authors to overhaul Section 3 and define everything instead of vaguely throwing out terms that either mean nothing or only confuse the reader which is to say nothing of the poor choice of notation (writing out \"one hot\", ? in Line 198, writing out \"length\") that only makes things worse.\n2. Now to come to the actual methodology devised in the paper here; the authors say nothing about what the actual priority convention is anywhere in the paper or in their experiments. How is the priority convention decided? How is it set. What are the parameters. How does it affect the training. In the experiments the robots all look identical and how does interchangeable convention set precedence to anything; i.e if Robot 1 is the same as Robot N then how does it matter if I switch priority from 1 to N and vice-versa. \n3. 
Further, the paper itself doesn't say anything about what happens if you force in a priority convention, i.e., why do you still need RL. In most cases in multi-robot problems, setting a priority convention is akin to setting a heuristic and removes the need for any RL. One chooses multi-agent RL if there exists a hard-to-define objective function for each robot but there exists a global objective function we wish to optimize for. If one is able to discretize this and say R1 is more important than Rn, then it is fairly trivial (especially in the grid world situations) to design a controller without any RL or machine learning that simply optimizes for the R1 > Rn objective. If one thinks back to the problem in Fig. 1c: if there exists a priority convention for the 18 robots, you do not need a learning-based solution and can simply write a DFS solution with some edges carrying more weight (corresponding to the priority convention) and have the optimal solution. In the case where you have stochasticity, it still seems to me that one can get away without doing any RL and set up an objective function that accounts for the stochasticity.\n4. Lastly, the experiments are extremely poorly designed. The proposed method needs to be compared with a relevant algorithm. By biasing the proposed algorithm and not providing the baseline DQN any reward based on the priority convention or information about the priority convention, the baseline is handicapped. Further, the experiments themselves are not compelling enough to determine that the proposed method here outperforms any standard non-heuristic-enabled multi-agent RL algorithm. \n I do not think this paper in its current form meets the high bar for publication at NeurIPS. The paper is weak in many areas, from writing and novelty to actual methodology and experimentation. \n\nI recommend the authors take a step back and re-evaluate the proposal here, what introducing priorities does to the problem, and whether it even needs RL if one were to introduce a heuristic like a priority convention. Please refer above. ", " The authors attempt to tackle the problem of RL in settings with large action spaces caused by there being many agents in the system. The authors provide a mechanism to pick an ordering over which actions for each agent are generated by a planner and show that they can pick the action that is \"lexicographically smallest\". Unfortunately, this paper is written in a manner that makes it not possible to assess the quality of the paper. The main figure in the paper is unreadable, basic definitions are missing, spelling errors are abundant in almost every paragraph, and the figures are of low enough pixel density that I have trouble reading them on my laptop (they appear to be screenshots). This is unfortunate because the underlying idea, to the extent that I can figure out what it is (that actions can be selected sequentially in centralized, cooperative multi-agent problems), is interesting, and the related work suggests the authors have expertise on this topic. I would ask the authors to do a more thorough job in preparing their paper for submission and provide some suggestions on how to do so. What is the connection between the results in this work and the results in “Multi-Agent Reinforcement Learning is a Sequence Modeling Problem”? 
and how does this paper go beyond the results there?\n\nWhat is the importance of tie-breaking and the connection to social conventions posited in the introduction?\n\nWhat is the importance of small lexicographic orderings?\n\n\n## PCRL: Priority Convention Reinforcement Learning for Microscopically Sequencable Multi-agent Problems\n\n### Grammar + sundry things:\n\n- I think in the title the word should be sequenceable?\n- In the abstract it should be “existing work has mainly tackled”\n- Figure 2 is too low resolution and has to be improved to be legible. This is essential.\n- The paper is full of what I would call “unbacked opinions”. The authors will say things like “the probability is hard to model” (line 104). Hard how? Why? The reviewers are asked to believe this on faith rather than with some backing. Or in line 178 the authors say “the approximation has physical meaning” but don’t state what it is.\n- The paper should probably have a reference to **“Multi-Agent Reinforcement Learning is a Sequence Modeling Problem”**, which contains quite similar ideas.\n- What is the relevance of the reward hypothesis in lines 185-186? Also, there’s a paper to cite for this rather than a footnote.\n\n### Writing:\n\n- There are many components in the introduction that need to be rewritten to make the contributions of the paper clearer:\n - What do social conventions have to do with microscopic and macroscopic sequenceable models?\n - Why are priority conventions important?\n - Why is the sequencing a solution to action-space blow-up?\n - More intuition on what microscopic sequenceability is\n- This paper is not written in standard academic form and employs colloquialisms in many places, such as lines 129-130\n- There are spelling mistakes throughout the paper. Please employ a spell-checker. The authors, in the conclusion, point out that they investigate their results on a relatively limited number of systems." ]
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "pZ656sf2bdq", "-7v8OzRsCJS", "IZfFewvZaH3", "OedQjZPIxbh", "drCSeutgpmU", "nips_2022_yewD_qbYifc", "nips_2022_yewD_qbYifc", "nips_2022_yewD_qbYifc" ]
nips_2022_9GXoMs__ckJ
On the Effect of Pre-training for Transformer in Different Modality on Offline Reinforcement Learning
We empirically investigate how pre-training on data of different modalities, such as language and vision, affects fine-tuning of Transformer-based models to Mujoco offline reinforcement learning tasks. Analysis of the internal representation reveals that the pre-trained Transformers acquire largely different representations before and after pre-training, but acquire less information about the data in fine-tuning than the randomly initialized one. A closer look at the parameter changes of the pre-trained Transformers reveals that their parameters do not change that much and that the bad performance of the model pre-trained with image data could partially come from large gradients and gradient clipping. To study what information the Transformer pre-trained with language data utilizes, we fine-tune this model with no context provided, finding that the model learns efficiently even without context information. Subsequent follow-up analysis supports the hypothesis that pre-training with language data is likely to make the Transformer acquire context-like information and utilize it to solve the downstream task.
Accept
The paper unanimously receives positive ratings thanks to its strong motivation and interesting results. As the reviews show satisfaction with the authors’ feedback, the final draft should reflect it accordingly, for example, regarding the limitations of this research.
train
[ "kkdyDv2XqhE", "CySwqYtQqw_", "wDZSGV701W", "2-4wzhJhzvb", "kEQVXVQlyA2", "MWM3nIsoBWd", "FcEs-_BBTEQ", "tKQXHngFi_A", "7_nJ9oNMlDKB", "qiTK1XqKwc", "W09RvgbxlOQ", "rU_0tC1Ux2h", "EBYe_O1W4X", "j43DHndu18D", "549CGljMccL", "PWjgJYjylPh" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely appreciate your positive evaluation of our response. We changed a sentence in the limitation section a bit so that we emphasize the importance of studying the average result of many more seeds.", " We deeply appreciate the positive feedback on our response. We would gladly open source the code for the research community.", " I would like to thank the authors for their extensive rebuttal response that tackled many of my questions. Primarily, I appreciate their modifications to the paper that better highlight the findings from their analysis; and the insightful responses regarding the performance of the models specifically relating to discrepancy between label prediction and actual returns of a policy; the effect of context etc. which are added to the appendices. Given the burgeoning popularity of Transformers for control/RL tasks, I think the extent of the analysis performed in the paper as well as the discussion together form a good contribution to the community. Furthermore, the collection of techniques used in this work can be applicable for other domains where the pretraining-finetuning paradigm is common. I am happy to upgrade my score, and would recommend the authors to open source their code so others can build upon this work. ", " Thank you for the response and the revised manuscript. The rebuttal partially addresses my concerns. I have increased my score to reflect that. My main remaining concern is regarding evaluation with multiple seeds. While it is nice to see that the results hold for an additional seed it would be more convincing to show all results averaged across multiple different seeds (e.g., 5).", " > “Why does the random init model with no context (K = 1) perform significantly worse than the pretrained ones only in Hopper?”\n\nThank you for your question. We speculate that this may be because *how much of the range of context that needs to be looked at* changes depending on the data set. For example, prior studies on decision transformer reported that the effect of context varied depending on the task [3]. Improvement by context means that the context was important information for solving the task. Thus, we believe that Hopper may be the dataset that requires a longer look at context than the other two data sets.\n\nAs a test of this hypothesis, we randomly sampled a batch sample and calculated the mutual information between action and state or return-to-go at the same time step and compare them between different environments; note that this is the mutual information between the data, not between the data and representation. The higher the value of the mutual information, the higher the mutual dependence between state or return-to-go and action at the same time step. Hence, higher mutual information suggests that the model could predict action better only from the information at the current time step.\n\nAs a result of this analysis, we found that mutual information between return-to-go and action was smaller for Hopper than for the others, though that between action and state does not differ between different environments that much. Prior research has indicated that return-to-go information seems to be important for prediction [1]. Hence, we can say that models have to use more information from the other steps to solve the Hopper task than models do for other environments. This result supports our hypothesis above. We will summarize the procedure and the result in the Appendix H.2.\n\nReferences\n\n[1] Machel Reid, Yutaro Yamada, and Shixiang Shane Gu. 
Can Wikipedia help offline reinforcement learning? arXiv preprint arXiv:2201.12122, 2022.\n\n[2] Karthik Abinav Sankararaman, Soham De, Zheng Xu, W. Ronny Huang, and Tom Goldstein. The impact of neural network overparameterization on gradient confusion and stochastic gradient descent. In International Conference on Machine Learning, pages 8469–8479. PMLR, 2020.\n\n[3] Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021.", " > “models that are trained on large corpuses of RL data (similar to the concepts in multi-game decision transformer, or Gato) would fare better at RL and shed light on pretraining-finetuning”\n\n> “The analysis could change a bit if the models were to be trained on a different corpus, such as not language or images, but rather trajectories from different RL tasks in a multi-task learning fashion”\n\nAs you say, pre-training on RL data is expected to perform better, and it is a very important direction to pursue. We would like to do this in the future. We have noted in the discussion section that this is important for future work.\n\nOne of the contributions of our work is that we have utilized analysis methods that can be applied to these RL pre-trained models as well, as you pointed out. For example, by examining CKA, we can determine whether and in which layer feature-reuse occurs for RL pre-trained models. One significance of analyzing multimodality is to deepen our understanding of the limitations and applicability of the foundation model, as you pointed out. Another benefit, we believe, is to clarify information that can be used across different modalities, in order to consider what kind of inductive bias we should instill in general-purpose models.\n\n> “Have the authors experimented with changing the gradient norm clipping value for iGPT? It seems unlikely that it would cause a major shift in the performance of iGPT, but it might help with understanding the role of gradient magnitudes and gradient confusion in the final performance of these models.”\n\nAt submission, we had not done any experiments changing gradient clipping. Because we thought your point was essential, we took it up and performed a simple experiment. \n\nThe experiments of the previous study [1] and our experiments both used a PyTorch function called `torch.nn.utils.clip_grad_norm_` for gradient clipping. This function divides each gradient by the “total” norm of all gradients and multiplies by the clipping constant. However, as we pointed out in the paper, the gradient norm values are dominated by only a few parameters (the layer normalization layer in the first block). This large norm affects the normalization of all gradients, decreasing the informational value of most gradients. Therefore, we hypothesized that one of the possible reasons why iGPT is difficult to learn may be the use of the `torch.nn.utils.clip_grad_norm_` function. Thus we trained the image-pre-trained model without using gradient clipping. Since we did not have much time, we trained for only 10 epochs (instead of the original 40 epochs). 
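For reference, the behavior being described can be sketched as follows (a simplified rendering of the PyTorch semantics on our part, not the training code): every gradient is rescaled by a single coefficient computed from the total norm, so one parameter with a huge gradient norm shrinks the gradients of every other parameter.

```python
import torch

def clip_grad_norm_sketch(parameters, max_norm):
    # Simplified sketch of torch.nn.utils.clip_grad_norm_: one global
    # coefficient, derived from the total norm over all gradients,
    # rescales every gradient in place.
    grads = [p.grad for p in parameters if p.grad is not None]
    total_norm = torch.norm(torch.stack([g.norm(2) for g in grads]), 2)
    clip_coef = max_norm / (total_norm + 1e-6)
    if clip_coef < 1:
        for g in grads:
            g.mul_(clip_coef)  # dominated by the largest contributor
    return total_norm
```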
At the same time, we did find that the learning process seemed to be more stable and efficient than before the clipping was applied. We also found that the performance seemed to improve, albeit very slightly. Based on these results, we have weakened the statement from \"too large gradients may be the cause of the catastrophic performance\" to \"large gradient and gradient clipping may be one of the causes of the poor performance\". We then have clearly stated that the essential cause of the catastrophic performance should be pursued, and added the results and procedure of the simple experiment explained above in Appendix G.3. Thank you very much for your very significant suggestions.\n\nFor gradient confusion, a previous study claims that widening layers lessens the impact of gradient confusion [2]. Thus, checking if employing a wider network when pre-training mitigates the difficulty of the training might help to understand the importance of gradient confusion in performance. Since we do not have the time to re-train the wider network we leave this as future work. Instead, we have weakened the statement of the paper to be consistent with our findings.\n", " > “it looks like the randomly initialized model does a better job at predicting the appropriate action given the states. Yet, random init does noticeably worse than GPT-2 on Hopper medium”\n\nThank you for your valuable feedback. Of course, in general, decreasing action error is associated with increasing mean return (e.g., as in Figure 9 a, where a decrease in action error and an increase in mean return seem to be generally related as a trend). However, in the current problem setup, it appears that more accurate action prediction does not always lead to higher performance improvement. For example, looking at Figure 9, it seems that block 9 (the pink line in Figure 9 a) with the lowest action error does not achieve the highest mean return (the pink line in Figure 9 b).\n\nThe most likely cause of this observation is the current problem setup. The D4RL mujoco task used as offline RL data for our analysis has *expert*, *medium*, *medium-replay*, *medium-expert*, and *random* datasets. The *expert* dataset is the one trained by Soft Actor-Critic, *medium* is the one partially trained by Soft Actor-Critic and was stopped early, *medium-replay* is the one accumulated in the replay buffer before the model reached medium’s level, and *medium-expert* is a mixture of medium and expert results. The *random* dataset is the trajectory collected by the random policy. We conducted our experiments on the *medium* data set since it was used by the previous study [1] and our analysis is based on the observations of this previous study. As mentioned earlier, this dataset is early stopped and so does not necessarily converge to an optimal policy. This means that if the model accurately learns trajectory and predicts the next action, it does not always mean that it is the best action. Thus, we can see that a low action error does not necessarily mean a large mean return. We do not believe that the use of medium will hurt the validity of the analysis using this data since this dataset is not that pathological as explained above. We have added the explanation of the data in more detail in Appendix B.\n\nAnother cause might be related to mutual information. The current analysis of mutual information shows that the representation of the hidden layer of the randomly initialized model holds more information as well of the *input* as well as the labels. 
Since the neural net representation is a vector, if a single vector contains both types of information, these types of information might have been mixed. This may make it difficult to properly utilize only the label information, making it harder to accurately predict the action. Another possibility is that the pre-trained model and the randomly initialized model differ in terms of which input token type (return-to-go, state, or action) information they encode. For example, in Fig. 2 (b), the language pre-trained model encodes information uniformly from the same token type: the hidden representation with high mutual information is the representation corresponding to the input from the triangles, i.e., the part corresponding to the return-to-go. On the other hand, in the random init models, there are variations (*triangle*, *circle*, and *cross*). It was noted in a previous study that attention weight was strong between the same token types [1]. So, perhaps the pre-trained model might encode less total information, but the amount of information effectively utilized is not that different. Finally, as pointed out in the explanation about the limitation, the limitations of mutual information as a metric might be a cause. We have included the discussions above in Appendix P.\n\nIn any case, what we can say from the analysis in Section 5.2, at least, is that the reason pre-trained models work well does not seem to be because the models acquire more information about the data. In Section 5.6, we found that preserving the attention distance enables efficient learning. However, this alone does not rule out the possibility that acquiring more data is \"also\" beneficial. We believe that the results in Section 5.2 lower that possibility and highlight the possibility that model performance is uniquely dependent on attention distance.\n\nThank you for providing your thought-provoking question. It has given us an opportunity to delve into the results of the analysis in Section 5.2. \n
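As an aside on the metric itself, the kind of mutual-information estimate discussed above can be sketched with a simple histogram estimator (our illustration, not the authors' analysis code; the bin count is an arbitrary assumption):

```python
import numpy as np

def mutual_information(x, y, bins=32):
    # Histogram estimate of I(X; Y) for two scalar signals.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    mask = pxy > 0
    return float((pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])).sum())
```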
For the pre-trained model, the CKA of the pre-trained model in Figure 1 in Section 5.1 also has a few high values in the shallow layers. This again indicates that the changes of representations are relatively \"small\" in this part of the model, as we have just explained above. As you said in the review, the figures in Section 5.3 shows that most of the changes in the parameters of the pre-trained model occur in the shallow layer, in the sense of the l2 distance of the parameters.\n\nThis observation of 5.1 and 5.3, i.e., that the parameters change and the representation does not change or vice versa, is not necessarily contradictory. First, the representation of layer $\\ell$ is affected by all parameters and input data before layer $\\ell$. Hence, even if the parameters of layer $\\ell$ have not changed, if the parameters before layer $\\ell$ have changed, the representation of layer $\\ell$ may change. Second, since not all of the parameters of a neural network necessarily contribute to the output, the output may not change even if some parameters change. For example, if the input value to ReLU is negative, the output will remain 0 no matter how much the parameters involved change. Therefore, for example, it is possible that even if the parameters of the $\\ell$-layer change, the representation of the $\\ell$-layer remains the same. We have added this complementary explanation of this relationship between parameters and representation in Appendix O. We are not sure if this is the answer to your question, but we would appreciate it if you could point out if we misunderstood your intention. \n\n\n\n\n\n\n", " Thank you very much for reading our manuscript so carefully and giving us so much constructive feedback. We would like to respond to each of your points below.\n\n> “the paper can benefit from a bit of reframing and rewrite”\n\n> “It is a bit unclear exactly what is leading to what in terms of performance”\n\n> “It would also be useful to have a summary of the findings at the end of the results section that summarizes the main takeaway points from all the subsections combined”\n\n> “exactly which objective ends up mattering the most for performance”\n\nWe appreciate your significant input to improve the readability of our paper. As you suggested, we have added a summary of findings at the end of the results section. We have paid particular attention to the implications of each finding for the performance findings. The added summary is below:\n\n- 5.1:Re-using representation is not the cause of the good performance of GPT2.\n- 5.2:Fitting to data better is not the cause of the good performance of GPT2.\n- 5.3:The good performance of GPT2 might come from some unchanged parameters.\n- 5.4:Gradient clipping and large gradient might be a cause of the bad performance of iGPT.\n- 5.5 :GPT2 can learn efficiently even without the context provided.\n- 5.6.1:Even a single pre-trained Transformer block makes training more efficient.\n- 5.6.2:A cause of the good performance of GPT2 is a pre-acquired way to use context.\n\nAdditionally, we have added explanations on the relationships between findings of each section in Appendix N to make it easier to get how each section contributes to our claim. We believe that these re-framing and re-writing would help to understand which finding lead to what in that of performance. 
\n\n> “the paper would benefit from a summary of the baseline results in the main text itself (moving Table 2 from appendix to section 5).”\n\nWe have moved the table of baseline results (Table 2 in Appendix 1) to the main text (Table 1 in Section 4). Thank you for your suggestions to make our manuscript clearer.\n\n", " Thank you for your positive feedback on our paper. We would like to answer the comments you have raised below.\n\n> “does not 182 encode more information about the input or the label\". This may be because of limitation of mutual information”\n\nAs you mentioned in the review, we agree that this conclusion can come from the limitations of mutual information. We again emphasized in the text that this conclusion can be affected by those limitations. Thank you for pointing this out.\n\n> “In section 5.3, we can see parameters between pre and post-fine-tuning have changed for random initialization. Then in section 5.1, why representations for random initialization did not change much”\n\nWe appreciate you asking the question on this point. We suspect that this is because neural networks in general do not necessarily change its representation when the parameters are changed. For example, if the input value to ReLU is negative, the output will remain 0 no matter how much the parameters involved change. Hence, for instance, it is possible that even if the parameters of the $\\ell$-layer change, the representation of the $\\ell$-layer remains the same. Another example is that when the values of two weights $w_1^\\ell$ and $w_2^\\ell$ in layer $\\ell$ do not change the output of the vanilla feedforward network because of its symmetry. \n\nAnother possible cause of this observation unique to the current analysis is that we use CKA to measure representational similarity. CKA is designed to be invariant to some transformations on the representation matrix [1]. Thus, different representation matrix is regarded as the same under these transformations even when parameters are changed. We have added this complementary explanation of the relationship between parameters and representation in Appendix O.\n\n> “In line 263, I am guessing, fine-tune without context.”\n\nYou are correct. We added the explicit statement in the paper that it’s fine-tuning without context. Thank you for the comment to improve clarification.\n\n> “it is more like all blocks can contribute only in varying amount. That is, the observation is more in line with lines 303-304.”\n\nYour suggestion sounds more appropriate than our current description. We have rewritten the statement to “we find that the information improving learning efficiency is preserved in all blocks only in the varying amount and that is probably context-related” so that we reflect your comment. Thank you so much for your recommendations for better wording.\n\nReferences\n\n[1] Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. Similarity of neural network representations revisited. In International Conference on Machine Learning, pages 3519–3529. PMLR, 2019.", " > “It would be good to confirm that the observed trends hold when using more than one seed”\n\nWe appreciate you for pointing out this very important point. To reflect your feedback, we have done some experiments with the additional seed (seed = 42 in the manuscript) as well and confirmed that our conclusions are largely unaffected by these results. 
Specifically, we have added in Appendix J results from the new seed for the following analyses:\n\n- CKA between pre and post-fine-tuning (Figs. 25 - 27)\n- Mutual information between hidden representation and data (Fig. 46)\n- Parameter similarity analysis (Figs. 51 - 56)\n- Gradient analysis (Figs. 59 - 67)\n- Replacement by the pre-trained block (Figs. 71 - 73)\n- Attention distance analysis (Fig. 77)\n\nWe believe that we have made our manuscript more convincing thanks to your suggestion. We thank you again for your valuable feedback.\n\nReferences\n\n[1] Machel Reid, Yutaro Yamada, and Shixiang Shane Gu. Can Wikipedia Help Offline Reinforcement Learning? arXiv preprint arXiv:2201.12122, 2022.\n\n[2] Shibani Santurkar, Dimitris Tsipras, Andrew Ilyas, and Aleksander Madry. How Does Batch Normalization Help Optimization? In Advances in Neural Information Processing Systems, 2018.\n\n[3] Sheng Shen, Zhewei Yao, Amir Gholami, Michael W. Mahoney, and Kurt Keutzer. PowerNorm: Rethinking Batch Normalization in Transformers. In Proceedings of the 37th International Conference on Machine Learning. PMLR, 2020.", " Thank you for taking the time to review our manuscript. Below we would like to respond to your individual feedback one by one.\n\n> “it feels that it often stops too early and presents a hypothesis without trying to explore it further”\n\nThank you for your valuable feedback. We have added analyses where further exploration could bring more insights. For some points where we could not conduct an additional experiment, we have added explanations that may lead to practical insights. In particular, we have added a new analysis of the effect of gradient clipping on performance (Appendix G.3) and of why the randomly initialized model struggles to learn Hopper with no context (Appendix H.2). We have also added notes in Appendix O and P regarding other results where further exploration of the observation would be beneficial or where the results of different sections cross over to provide implications (e.g., the relationship between changing parameters and changing representations). \n\n> “in the case of the gradient analysis the paper suggests that iGPT does not train well due to large gradients in early layers. It would be nice to explore this further and see if, e.g., this observation can inform a better training recipe for offline RL with iGPT pre-training.”\n\nWe found your feedback very significant, so we conducted an experiment to provide some implications for better training. \n\nThe experiments of the previous study [1] and our experiments both used a PyTorch function called `torch.nn.utils.clip_grad_norm_` for gradient clipping. This function computes the total norm of all gradients and, whenever it exceeds the clipping constant, rescales every gradient by the ratio of the clipping constant to the total norm. However, as we pointed out in the paper, the gradient norm values are dominated by only a few parameters (the layer normalization layer in the first block). This large norm affects the normalization of all gradients, decreasing the informational value of most gradients. Therefore, we hypothesized that one possible reason why iGPT is difficult to train may be the use of the `torch.nn.utils.clip_grad_norm_` function. Thus, we trained the image-pre-trained model without using gradient clipping. Since we didn't have time, we only trained for 10 epochs (instead of the original 40 epochs). 
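To illustrate the global-norm clipping behavior described above, here is a minimal PyTorch sketch (illustrative only; the two toy parameter tensors are hypothetical stand-ins, not the actual model):

```python
import torch

# Two parameter tensors; one carries a much larger gradient, as in the
# first-block layer norm described above.
w_small = torch.nn.Parameter(torch.zeros(10))
w_large = torch.nn.Parameter(torch.zeros(10))
w_small.grad = torch.ones(10)          # per-tensor norm ~ 3.2
w_large.grad = 100.0 * torch.ones(10)  # per-tensor norm ~ 316.2, dominates

# clip_grad_norm_ computes one global norm and rescales *every* gradient
# by max_norm / global_norm when the global norm exceeds max_norm.
total = torch.nn.utils.clip_grad_norm_([w_small, w_large], max_norm=1.0)
print(float(total))            # ~316.2, dominated by w_large
print(float(w_small.grad[0]))  # ~0.0032: the small, informative gradient
                               # is crushed by the shared scaling factor
```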
\n\nThe results showed that, although eliminating gradient clipping did not immediately solve the catastrophic performance, the learning process seemed more stable and efficient than with clipping applied. We also found that the performance seemed to improve, albeit very slightly. Thus, removing gradient clipping might be a recipe for better training. We have added the details in Appendix G.3. Thank you very much for your constructive feedback.\n\nAlthough we did not confirm its validity experimentally, another possible way to improve training efficiency would be smoothing the loss landscape, because a loss landscape with large gradients dominated by a few parameters is thought to have a distorted topography. Given that batch normalization is known to smooth loss landscapes [2], using PowerNorm [3], which allows batch normalization to be applied to the Transformer, may make the learning process more stable.\n\n> “Writing and presentation are sometimes a bit hard to follow”\n\nThank you for your comment on this important point. We have taken your points seriously and have made changes to the writing style of the manuscript and added supplementary explanations. We suspected that part of the difficulty in understanding the text might be due to the difficulty in grasping the main argument of each section. Therefore, we summarize the main takeaways of Section 5 in bullet-point format at the end of the section. We also thought that the difficulty in understanding how the claims in these sections relate to each other may have added to the difficulty of reading them. Thus, we have added a supplemental description in Appendix N that briefly explains the role of each section (Sections 1 - 5) of this paper. In addition to that, we have added explanations for sentences where the meaning was not clear.", " Thank you very much, all reviewers, for taking time out of your busy schedules to review our work. We have revised the manuscript to reflect your feedback and uploaded it. Changes in the revised manuscript are colored in red. We summarize below the main changes in the revised manuscript:\n\n- We have moved the table showing baseline results from Appendix 1 to Section 4 of the main text (Table 2 → Table 1).\n- We have moved the figure and description of CKA between different models from Section 5.1 to the Appendix (Fig. 2 → Fig. 12).\n- We have added the takeaways of Section 5 at the end of the section.\n- We have added a description of the dataset we used in Appendix B.\n- We have added a new analysis of gradient norm in Appendix G.3.\n- We have added an analysis of why the randomly initialized model fails for Hopper with no context in Appendix H.2.\n- We have added the results of another seed (seed = 42) that we were able to conduct during this rebuttal period in Appendix J: CKA analysis, mutual information analysis, parameter similarity analysis, gradient analysis, block replacement analysis, and attention distance analysis.\n- We have added notes on areas that may be difficult to understand in interpreting the results of the analysis in Appendix N, O, and P.\n\nWe have made other revisions to wording and typos and added notes to reflect the results of our additional analysis and the points raised in the reviews. We will provide a more detailed explanation of the points raised by each reviewer separately to each of them. In the following, unless otherwise noted, figure and section indexes are those of the revised manuscript (e.g., Fig. 
10 in the previous manuscript is now Fig. 9 in the new manuscript).\n\nWe hope that we have been able to improve the manuscript by incorporating your feedback. We would like to offer our sincerest thanks again to all reviewers.", " The paper presents an empirical study of language pre-training (GPT-2), image pre-training (iGPT), and training from scratch for offline RL. The study looks at the differences in activations, parameters, and gradients as well as the impact of the context length. Strengths:\n- The study considers an interesting question\n- The experiments show interesting results (e.g., impact of context)\n\nWeaknesses:\n- The paper considers a number of aspects (e.g., activations, parameters, etc.) and reports interesting observations. However, it feels that it often stops too early and presents a hypothesis without trying to explore it further. For example, in the case of the gradient analysis, the paper suggests that iGPT does not train well due to large gradients in early layers. It would be nice to explore this further and see if, e.g., this observation can inform a better training recipe for offline RL with iGPT pre-training.\n- Writing and presentation are sometimes a bit hard to follow Suggestions:\n- It would be good to confirm that the observed trends hold when using more than one seed\n The paper discusses limitations and potential negative societal impact.", " In this work, the authors empirically investigate how pre-training on language and image data can affect fine-tuning of Transformers on offline reinforcement learning tasks. Previous studies reported that, for offline RL tasks, pre-training a Transformer-based model on language data either maintains or helps performance, whereas pre-training on vision data deteriorates performance. To understand the effect of fine-tuning, the paper compares models that are randomly initialized, pre-trained with language data (GPT2), and pre-trained with image data (iGPT). For that purpose, they used the GPT2 architecture. For offline RL tasks, the paper employs medium datasets of the HalfCheetah, Walker2d, and Hopper environments. The authors analyze various network artifacts, such as how layer activations and model weights change during training, initial gradient norms, and what a language-pre-trained model learns and utilizes for downstream tasks. The conclusion, roughly, is that pre-trained model parameters do not change much during fine-tuning, though their layer representations change substantially. Pre-trained models also do not acquire more knowledge about the input or the label during fine-tuning. The authors also hypothesize, with empirical support, that the image-pre-trained model fails probably because of large gradients, and that language pre-training probably gives the Transformer context-like information that can be taken advantage of during downstream tasks. Strengths:\n- The topic investigated in this work is important and has room to be explored.\n- The paper presents a well-written, detailed study.\n- Limitations, mainly concerning the validity of the metrics used, are clearly pointed out. - Lines 181-182, \"does not encode more information about the input or the label\". This may be because of limitations of mutual information, as the authors themselves pointed out in line 326. 
So one should take the finding that pre-trained Transformers do not acquire more knowledge about the input or the label during fine-tuning with a pinch of salt.\n- In Section 5.3, we can see that parameters have changed between pre- and post-fine-tuning for random initialization. Then, in Section 5.1, why did the representations for random initialization not change much, i.e., why are the CKA scores for representations from random initialization high?\n- In line 263, I am guessing this means fine-tuning without context.\n- In line 281, the authors noted \"other than a particular block\". But considering results from all three datasets, it seems more that all blocks can contribute, only in varying amounts. That is, the observation is more in line with lines 303-304. The limitations concerning the validity of the metrics used are already pointed out in line 326.", " This paper presents an analysis of how the pretraining-finetuning paradigm works for Transformer models in the context of reinforcement learning. Pretrained Transformer models trained on two different modalities (GPT-2 for language, and iGPT for image) are applied to Mujoco offline RL tasks - Hopper, HalfCheetah and Walker2D. The paper examines how the representations change during finetuning - which is achieved by using several techniques such as activation similarity, distance between pretrained and finetuned weights, mutual information between training data and learned representations, gradient norms and magnitudes, etc. Through this analysis, the paper uncovers some findings: primarily, how different language and image-based pretrainings fare during finetuning for RL, how different layers of the transformers exhibit different magnitudes of changes - demonstrating how pretraining knowledge trickles down to finetuning, and potentially why iGPT underperforms compared to the language GPT. It is also shown that the language GPT version achieves reasonable performance on RL tasks even under the case of constrained context length, thus highlighting the fact that it might already be aware of context-related information from pretraining. Strengths:\n\n- The paper presents a very interesting and thought-provoking analysis of representation dynamics between pretraining and finetuning. Given the current interest in large foundational models being the starting point for perception and perception-action, the paper is a good contribution to the field, and can aid in understanding how large Transformers can transfer between different domains. \n- The paper identifies a set of relevant techniques to perform analysis of representation similarity, training dynamics through gradient analysis, etc. which can be useful to apply to other large models as well. By applying these techniques to large Transformer models, the paper extends the findings in papers such as [1] to understand why or why not these models perform well at RL tasks. \n\nWeaknesses: \n\n- I believe the paper can benefit from a bit of reframing and rewrite. After reading through the paper, some of the initial points end up sounding a bit confusing. For instance, the paper states in several places that \"pretrained models largely change their representation\", and then \"pretrained models do not change parameters much\". It is a bit unclear exactly what is leading to what in terms of performance. \n\n1. In terms of parameters, for the pretrained models, it seems that the only significant changes are in the shallow layers, and to a lesser extent, in the layer norms (section 5.1, section 5.3). 
Whereas for the randomly initialized model, most of the changes are in the layer norms. Does this still explain the consistency of representation in the shallow to middle layers of random init? (section 5.1)\n\n2. The analysis in 5.2, which investigates the amount of label information being encoded, can benefit from a little more clarity. As it stands, it looks like the randomly initialized model does a better job at predicting the appropriate action given the states. Yet, random init does noticeably worse than GPT-2 on Hopper medium in table 2 or figure 9(a) (with K=20). It would be good to shed some light on exactly which objective ends up mattering the most for performance. \n\n3. In section 5.4, the paper hypothesizes that the gradient confusion being higher for the pretrained models makes them hard to train, which explains the smaller changes in parameters. But this does not again seem to affect the performance in a detrimental way. \n\n- While I acknowledge that the analysis is meant to build upon the findings from [1], one can't help but wonder whether, because reinforcement learning performance requires encoding state transitions, models that are trained on large corpora of RL data (similar to the concepts in multi-game decision transformer, or Gato) would fare better at RL and shed light on pretraining-finetuning that's directly relevant. \n\n- The analysis being done in the paper is quite extensive and nuanced. It would also be useful to have a summary of the findings at the end of the results section that summarizes the main takeaway points from all the subsections combined, otherwise it can be hard to keep track. Similarly, the paper would benefit from a summary of the baseline results in the main text itself (moving Table 2 from appendix to section 5). \n\n[1] Machel Reid, Yutaro Yamada, and Shixiang Shane Gu, \"Can Wikipedia help Offline Reinforcement Learning?\" - Have the authors experimented with changing the gradient norm clipping value for iGPT? It seems unlikely that it would cause a major shift in the performance of iGPT, but it might help with understanding the role of gradient magnitudes and gradient confusion in the final performance of these models. \n\n- Why does the random init model with no context (K = 1) perform significantly worse than the pretrained ones only in Hopper? For the other two environments, it actually performs a bit better than the pretrained ones. Mostly yes. The analysis could change a bit if the models were to be trained on a different corpus, such as not language or images, but rather trajectories from different RL tasks in a multi-task learning fashion. " ]
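For readers following the CKA analysis discussed throughout this thread, here is a minimal linear-CKA sketch in the spirit of Kornblith et al. [1] above (illustrative only; this is not code from the paper):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between representation matrices X (n x p1) and Y (n x p2)."""
    X = X - X.mean(axis=0, keepdims=True)  # center each feature column
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)

# Identical representations give CKA = 1; an orthogonal rotation leaves it
# unchanged, which is the invariance mentioned in the author response above.
R = np.random.randn(100, 32)
Q, _ = np.linalg.qr(np.random.randn(32, 32))  # random orthogonal matrix
print(linear_cka(R, R))      # 1.0
print(linear_cka(R, R @ Q))  # also 1.0 (up to floating point)
```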
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "2-4wzhJhzvb", "wDZSGV701W", "7_nJ9oNMlDKB", "W09RvgbxlOQ", "MWM3nIsoBWd", "FcEs-_BBTEQ", "tKQXHngFi_A", "7_nJ9oNMlDKB", "PWjgJYjylPh", "549CGljMccL", "rU_0tC1Ux2h", "j43DHndu18D", "nips_2022_9GXoMs__ckJ", "nips_2022_9GXoMs__ckJ", "nips_2022_9GXoMs__ckJ", "nips_2022_9GXoMs__ckJ" ]
nips_2022_BK0O0xLntFM
Estimating and Explaining Model Performance When Both Covariates and Labels Shift
Deployed machine learning (ML) models often encounter new user data that differs from their training data. Therefore, estimating how well a given model might perform on the new data is an important step toward reliable ML applications. This is very challenging, however, as the data distribution can change in flexible ways, and we may not have any labels on the new data, which is often the case in monitoring settings. In this paper, we propose a new distribution shift model, Sparse Joint Shift (SJS), which considers the joint shift of both labels and a few features. This unifies and generalizes several existing shift models including label shift and sparse covariate shift, where only marginal feature or label distribution shifts are considered. We describe mathematical conditions under which SJS is identifiable. We further propose SEES, an algorithmic framework to characterize the distribution shift under SJS and to estimate a model’s performance on new data without any labels. We conduct extensive experiments on several real-world datasets with various ML models. Across different datasets and distribution shifts, SEES achieves significant (up to an order of magnitude) shift estimation error improvements over existing approaches.
Accept
The authors study the important problem of distribution shift under a new SJS model. Identifiability results are proved and empirical experiments illustrate the value of the proposed model. During discussion, some concerns on the experiments were addressed. Overall, there was a weak consensus to accept this paper, which I concur with.
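To illustrate the performance-estimation idea at the heart of this line of work, namely reweighting labeled source examples by estimated density ratios w(x, y), here is a minimal sketch (illustrative only; the weights and outcomes below are hypothetical):

```python
import numpy as np

def estimate_target_accuracy(correct_src, w):
    """Importance-weighted estimate of target accuracy from labeled source data.

    correct_src: 0/1 array, whether the model was right on each source example.
    w: estimated density ratios w(x, y) = p_tgt(x, y) / p_src(x, y).
    """
    return float(np.mean(w * correct_src))

# Hypothetical values: the model is right 80% of the time on the source, but
# its errors concentrate in a region that is upweighted on the target.
correct = np.array([1, 1, 1, 1, 0])
w = np.array([0.5, 0.5, 0.5, 0.5, 3.0])  # averages to 1, as weights should
print(estimate_target_accuracy(correct, w))  # 0.4: much worse on the target
```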
train
[ "kvB-2y1jjpB", "82LgxpouEx1", "gHHJn1gdB1-", "QuDCKfDxJh", "Gn4xPAJ4rl8K", "hE8t1eeXRg", "ZYQa9pmpT_K", "7MpvQhIyNbC", "naBd5YC48jC", "EwuaoMnRAuN", "CXRKpmiTQpz", "o6o11tnV0mB", "54nPj1Q9xg", "_aTdShEtStJ" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the additional sensitivity analysis. I have increased my score based on the current paper status and the following reasons.\n\n* Based on Figure 6 of Appendix, the SEED-d is robust if there is small parameter mismatch. I believe that the proposed algorithm can be further improved by increasing the searching space (i.e., |J| <= m instead of |J|=m); this significantly increases the computational time, but sampling may help. Given this computationally expensive algorithm, we can say that a user needs not to consider the sparsity parameter too much, assuming d/2 features are at most shifted (as also mentioned in line 722 of Appendix). In short, the algorithm needs to be improved more to be robust to the choice of the sparsity parameter, but I believe that it is not too difficult. (BTW, it is better to highlight the limitations of the algorithm in the main paper)\n* The controlled experiment in Table 5 considers the best case of the proposed algorithm (e.g., when the true sparsity is known). It’s better to have the same experiment when the true sparsity is unknown (BTW, DLU is well-known at least in [R1]; [23] may not be a right citation unless there is a reason). \n* Assuming that the paper demonstrates the possibility of detecting SJS by proposing a few (but possibly naive) algorithms (i.e., proof by demonstration), I think the limitation on the current algorithm is okay. But, the paper may need to be written in this way.\n* Finally, but most importantly, I think that the new concept SJS (and related theories) could contribute to the community to detect shifts better under the sparsity assumption, which compensates for the algorithmic limitations. \n\nThanks for the contributions! \n", " Thank you for the feedback after our initial response! We conducted additional sensitivity experiments as suggested by reviewer 9XVT. Overall, we find that SEES-d is robust to mismatch in the sparsity parameter. Across 100 experiments, there is relatively little change in the estimation error when the sparsity hyperparameter is 2, 3, 4, or 5 when the true number of shifted features is 3. More details can be found in the Appendix (see line 706 - line 723 and Figure 6, page 23-page 24).\n\nIn general, all methods need some assumptions on the distribution shifts since the general data distribution shifts are not identifiable (as discussed on line 108 - line 112, in Section 3). Previous approaches are actually more sensitive to the sparsity parameter due to their stronger assumptions (e.g., BBSE assumes label shift and no feature shifts are allowed). In practice, users can empirically verify whether the sparsity hyperparameter in SEES is close to the true sparsity by comparing the target distribution with the source distribution adjusted by the selected shift features. We have added discussions of this to the text. \n\nThank you again for your feedback, we really appreciate it! Please let us know if you have further questions and we are happy to follow up. \n", " We thank the reviewer for the prompt response and clarification. To further address the sensitivity question, we conducted additional experiments as the reviewer suggested: on the COVID-19 dataset, 3 randomly chosen features and the labels were shifted, and we evaluated SEES-d with the sparsity parameter ranging from 0 to 7 (the total number of features). Both source and target datasets contained 10,000 samples. 
Overall, we observe that SEES-d is robust to small parameter mismatch: there is little change in the estimation error when the sparsity parameter (2, 3, 4, 5) is close to the true number of shifted features (3). \n\nWhen the parameter mismatch is too large, a relatively larger change in the estimation error can be observed (though SEES-d still works better than BBSE here). This is because a too small sparsity parameter restricts the search space, while a too large parameter often incurs an identifiability issue, as our theory shows (i.e., different feature-label joint distributions correspond to the same observed target feature distribution). We provide more details in the Appendix (see line 706 - line 723 and Figure 6, page 23-page 24).\n\nIn general, all methods need some assumptions on the distribution shifts since general data distribution shifts are not identifiable (as discussed on line 108 - line 112, in Section 3). Previous approaches are actually more sensitive to the sparsity parameter due to their stronger assumptions (e.g., BBSE assumes label shift and no feature shifts are allowed). In practice, users can empirically verify whether the sparsity hyperparameter in SEES is close to the true sparsity by comparing the target distribution with the source distribution adjusted by the selected shift features. We have added discussions of this to the text. \n\nThank you again for your feedback, we really appreciate it! Please let us know if you have further questions and we are happy to follow up.\n\n", " I appreciate the author's response. I think the response almost addresses my concerns, but there is a remaining concern about the algorithm's sensitivity to the sparsity parameter (which is also related to my main concern about the controlled experiment). Before the revision, I saw there was already a sensitivity analysis, but what I meant was that the result is not thorough enough to be convincing; in particular, I believe that the COVID-19 dataset has at least 3 features, but Table 4 only considers three different sparsity parameters (i.e., 1, 2, 3), assuming the true sparsity parameter is 3. As can be seen, when the sparsity parameter of the algorithm is 1, the result is quite a bit worse than the others. Letting the feature dimension be d, if the true sparsity parameter is d/2, what is the sensitivity of SEES-d over the sparsity parameter of the algorithm from 0 to d? \n\nI believe that this sensitivity analysis is important as we never know the right sparsity parameter in advance. In this case, a user might simply choose the sparsity parameter of the algorithm s = d, which is equivalent to covariate shift, and thus it would not be necessary to consider sparse covariate shift.\n\n\n====== EDIT\nAlso, when the true sparsity parameter is d/2, d/2 features need to be randomly chosen. ", " Dear reviewer 9XVT,\nWe would like to follow up to see if our response addresses your concerns or if you have any further questions. We would really appreciate the opportunity to discuss this further if our response has not already addressed your concerns. Thank you again!", " ***[Compare with additional baselines]***: We thank the reviewer for suggesting the additional baseline. We added the baseline suggested in [R2], based on discriminative learning on the union of source and target datasets (and we refer to it as DLU in the following). For the case study on the COVID-19 dataset, we found that DLU’s performance was similar to KLIEP and worse than SEES-d. 
We also measured DLU’s performance for the suggested controlled experiments. Details can be seen in our answer to the next question and the updated appendix (see line 715-line 740, Table 5 and Table 6, page 23-page 24). \n\n [R2] Sangdon Park et al. Calibrated Prediction with Covariate Shift via Unsupervised Domain Adaptation, 2020.\n\n***[Compare all methods for covariate shifts, label shifts, and joint shifts]***: We measured the performance of SEES-c, SEES-d, BBSE, KLIEP, and the suggested baseline DLU [R2] on the dataset COVID-19 for all three different shifts. The results and detailed discussions can be found in the updated appendix (see line 723-line 738, page 23-page 24). Overall, our observations matched the reviewer’s expectation: for label shift, SEES-c and SEES-d matched BBSE’s performance and outperformed all other baselines, and for (sparse) covariate shift, the performance of SEES-c and SEES-d is close to KLIEP and DLU and is better than BBSE. When both covariates and labels shift, both SEES-c and SEES-d significantly outperformed existing methods. \n\n*Thank you for your detailed feedback, which has enabled us to improve the paper. We would greatly appreciate it if you would consider increasing your score based on our detailed response. Please let us know if you have any further questions and we are happy to follow up.*", " Thank you for your thoughtful review. We answer your questions as follows.\n\n***[change “s (|I| < s)” to “s (i.e., |I| < s)” for clearness]***: Thanks for the suggestion. We have edited it accordingly and updated the draft.\n\n***[Provide real and motivational examples when SJS happens. Explain the real meaning of the invariance.]***: We gave one motivating example in Section 1 (line 41). Other examples include:\n\n*Cancer diagnosis*: Suppose we wish to build an ML model to diagnose cancer based on patient health records. The model is developed based on a labeled dataset in some developed countries. However, when deploying it to hospitals in a developing country, there might be many more young patients, and the cancer rate for the elderly can also increase. Suppose the other features’ distribution remains unchanged given age and cancer diagnosis. Then the distribution shift is naturally an SJS.\n\n*Toxic text recognition*: Consider a mobile app that detects and filters toxic texts based on the content and senders’ information. Due to unexpected events (for example, disappointing football games), the toxic text rate, as well as the total number of texts, may both significantly increase in some locations at different time periods. The shift of text locations and toxic text rate is thus another example of SJS.\n\nThose two examples have been added in the appendix, and a reference has been given after Definition 1 (line 124). The invariance introduced in line 122 implies that only features in the set I and the labels cause the distribution shift. One practical scenario is when the target dataset is a mixture of two datasets, where labels shift in the first and a few features shift in the second, compared to the source dataset. We also added more discussions in the appendix (see line 645-line 657, page 21). \n \n***[Define the sparse covariate shift better. How is sparse covariate shift related to real-world examples and adversarial patches?]***: We added the formal definition of sparse covariate shift in line 152. Sparse covariate shift occurs when the shifts are caused by a few variables. For example, consider two census datasets collected in two periods. 
If a large population moved from one city to another between the two periods and everything else remains the same, then there is a sparse covariate shift (location alone). Adversarial patches are related: if adversarial noises are added to a few features (or a small number of pixels in image domains), it also corresponds to the sparse covariate shift. We added a detailed discussion in the appendix (see line 658-line 663, page 21). \n\n***[Is the identifiable (p_s, p_t) in Theorem 1 actually observable in practice?]***: Yes. The identifiable condition basically requires conditional independence of non-shifted features, given the shifted ones. To verify this in practice, one can estimate the probability mass from the empirical dataset, and check if the matrix consisting of all corresponding probability mass is full rank. \n\n***[How sensitive is the algorithm to sparsity s? If sensitive, how to choose s?]***: We provided a sensitivity analysis of SEES-d on the COVID-19 dataset in the appendix (see line 706-line 715, page 23). On a source-target pair where labels and 3 features shift, we evaluated the performance of SEES-d with sparsity parameter 3,2,1, the last two corresponding to model mismatches. Overall, the performance drops mildly. For example, setting the sparsity parameter to 2 only increases the error from 0.0026 to 0.0029. This result suggests that our algorithm is robust to some model mismatch. In general, we recommend picking the sparsity parameter to match the maximum number of shifted features derived from the applications. ", " Thank you for your strong support of our paper, and we are happy that you enjoyed reading it!\n\n***[use a different letter to denote the size of the set]***: Thank you for this suggestion. Yes, we have changed the notation to m. \n", " Thank you for your helpful feedback and support for the paper! We answer your questions below and we have updated the paper to incorporate your suggestions.\n\n***[Using linear features in SEES-c appear odd as x_k can be negative, making the weight function w(x,y) negative. Should there be extra assumptions?]***: x_k needs to be non-negative, which holds in our empirical studies (e.g., age/salary is always non-negative). In general, we can transform negative x_k to non-negative values, e.g., via affine transformation for finite data. We clarified this point in the paper (see line 201, page 6). \n\n***[The authors should provide some evaluations when the sparse-shift assumption is not entirely correct]***: We evaluated the sparsity robustness of SEES-d on the COVID-19 dataset. Specifically, on a source-target pair where the labels and 3 features all shifted, we measured the accuracy estimation error of SEES-d with sparsity parameter being 3, 2, and 1, where the last two correspond to model mismatches. Overall, the performance drops mildly as the sparsity parameter decreases. For example, SEES-d with the sparsity parameter being 2 only increases the l-2 error from 0.0026 (when the parameter matches the true sparsity) to 0.0029. This suggests that the method is relatively robust to model mismatch. The details can be found in the Appendix (see line 706-line 723, page 23). \n", " Thank you for your helpful feedback and support for the paper! We answer your questions below and we have updated the paper to incorporate your suggestions.\n\n\n\n***[Can you expand the empirical results with robustness measures (e.g. variance over multiple experimental runs)?]***: Yes. 
We measured the variance of the performance estimation error over 200 experimental runs with 10,000 samples on the COVID-19 dataset. Overall, we observe that the variance for all of the methods is small: the variance of SEES-c and KLIEP is less than 0.00003 and the variance for SEES-d is 0.003 and for BBSE is 0.0003. The variance is relatively large for SEES-d and BBSE. We have added this information and more details in the updated appendix (see line 700-line 706, page 23).\n\n***[How should we consider extensions of this framework (for instance, when we have access to multiple dataset shifts and possible task representations)?]***: Our framework is extendable to other scenarios and opens many interesting follow-up research questions. For multiple dataset shifts, for example, it would be interesting to augment the identification of shifted covariates and labels by correlations between different datasets. The proposed framework may also help explain how different task representations relate to each other. \n", " This paper deals with the interesting and important problem of distribution shift. The authors formalize a model (Sparse Joint Shift, SJS) which unifies existing shift models by considering joint shift of both labels and a few features.The authors show conditions for identifiability of models under these shifts, and propose Shift Estimation and Explanation under SJS (SEES), a method to estimate the empirical performance gap and explain it in terms of shifted features. The authors show empirical benefits of SEES compared to BBSE and KLIEP baselines. Strengths:\n\nThe paper is written well, with straightforward assumptions and conclusions, and was enjoyable to read. The problem addressed is important and the provided SJS framework appears to be a useful way to approach this problem. I also appreciate the identifiability condition and the discussion of implications.\n\n\nWeaknesses:\n\nThe empirical measurements are rather shallow, although measured over a few different datasets. All empirical measurements should have robustness measures, but are currently presented as only point estimates. Finally, when claiming seeking as a desired characteristic of the method, it would be good to evaluate the explainability.\n - Can you expand the empirical results with robustness measures (e.g. variance over multiple experimental runs)?\n\n- How should we consider extensions of this framework (for instance, when we have access to multiple dataset shifts and possible task representations)? This paper is appropriately presented in the context of related work. Limitations of the SJS framework are briefly discussed in the Conclusion but could be discussed more straightforwardly throughout the paper.\n\n\n\n===Edit after revision===\n\nAfter reading the other reviews and responses, I would keep my recommendation of acceptance.\n\nThank you for reporting the variance in experimental results. These suggest very stable training.\n\nI agree with reviewer 9XVT about comparing the methods across many types of shift and thank the authors for including some of this comparison in the updated appendix. I believe this is an important topic. 
I am also interested in reviewer 9XVT's question about the sensitivity to the sparsity parameter s, and it now seems to me that this may limit the model to applications where we can evaluate performance w.r.t. this hyperparameter (e.g., by cross-validation).", " The authors propose an interesting model of data shift in which only the label and a small set of features change between the training and test distributions. They design simple distribution matching algorithms enforcing shift sparsity and show that they work well in sparse shift scenarios. \n\n - The proposed model of shift is realistic and covers target shift and a part of covariate shift. \n- The proposed algorithms SEES-c and SEES-d are natural for continuous and discrete features, and manage to discover the underlying shifts in the scenarios evaluated in the experiments. \n- For weakness, SEES-d does not scale well with the size of the shifted feature set s, as it requires searching over all subsets J of size at most s. \n- The authors evaluate mostly on artificially created shift scenarios where they know the ground truth has only a few features shifted. It is not clear how the algorithm would perform in more complicated scenarios, such as medical data collected from different hospitals. \n - Using linear features in SEES-c appears odd as x_k can be negative, making the weight function w(x,y) negative. Should there be extra assumptions? \n- The authors should provide some evaluations when the sparse-shift assumption is not entirely correct, as the sparse shift assumption can be difficult/expensive to verify in practice. For example, what would happen if we assume a shift of 2 features only when there are 4 shifted features? Would the performance of the method degrade gracefully? \n - The main limitation of the method is knowing when the sparse-shift assumption is true so that the method can be applied. The discrete version SEES-d can also grow computationally expensive as the size of the shifted feature set s grows. \n\n\n==========\n\nAfter revision, I am keeping my recommendation of weak accept. The authors have addressed most of the questions raised by the reviewers and performed additional experiments. \n", " This paper introduces Sparse Joint Shift (SJS), a new distribution shift model that detects both label and covariate shifts simultaneously. The authors show how the proposed framework includes existing distribution shift models and discuss under what assumptions SJS is identifiable. Overall, this paper is very novel and generalizes the existing distribution shift approaches in an elegant way. The authors also introduce a new algorithm able to detect SJS which performs better than the existing distribution shift models. The paper has strong theoretical and empirical discussion and I enjoyed reading it very much. Strengths: 1- Novelty, 2- Theoretical foundation, 3- Experiments.\n\nWeakness: 1- If space allowed, it would have been refreshing to see experiments on more complex data sets. To prevent confusion with the source s, I suggest the authors use a different letter to denote the size of the set. N/A", " The performance of ML models may be degraded due to distribution shifts. To handle covariate shift and label shift, prior work assumes the type of the shift is known. In this paper, the authors propose a joint way to measure covariate and label shift and explain model performance changes due to shifts without knowing whether it is covariate or label shift. 
First, the paper describes when a joint shift is identifiable rigorously, then proposes an algorithmic framework to measure the shift in terms of importance weights (aka density ratio and likelihood ratio) under the assumption on the sparseness of the joint shift, and use the importance weights to estimate the model performance change. The proposed algorithm is evaluated based on six datasets along with two comparing methods, demonstrating the efficacy of the proposed algorithm. The claimed contributions include (1) proposing a new distribution shift model, i.e., a sparse joint shift (SJS), (2) proposing a general framework for performance shift estimation and explanation under SJS, and (3) demonstrating the efficacy of the proposed approach. ### originality\nI found that the attacking problem on estimating and explaining model performance under shifts is interesting and timely. Moreover, jointly considering the label and covariate shift under the umbrella of a sparse joint shift (SJS) looks like a novel attempt though the motivation on SJS is weak currently. The algorithm on estimating w(x,y) well exploits known techniques while adding a new component on the sparseness of SJS; but, comparison results are a bit weak. Measuring the performance shift by using the estimated w(x,y) is quite standard. See Questions.\n\n### quality\nThe paper introduces a new shift, called SJS, which combines label and (sparse) covariate shifts, and proposes an algorithm for SJS, claiming that the proposed approach is good for handling both label and sparse covariate shifts. However, the empirical results do not support this claim well. I expect to see controlled experiments (e.g., experiments on a dataset with only covariate shift, a dataset with only label shift, and a dataset for both) to show the benefit of the proposed approach, but these results are missing. See Questions.\n\n### clarity\nThe paper is overall well written, though I found a few points that would improve the clarity if fixed (e.g., the definition of sparse covariate shift, motivational examples on s-SJS, whether the identifiable SJS cases are practical, etc.); I’ll clarify more in the Questions section.\n \n### significance\nI think the crux of this paper is the introduction on SJS and its analysis on identifiability; it would be great if the paper includes clear justification that SJS can be frequently seen in practice and empirical supports that the proposed approach is useful for SJS. \n * (minor) line 122: “s (|I| < s)” is confusing; s is an integer but looks like it is used as a function. I guess it’s better to write “s (i.e., |I| < s)”.\n\n* Definition 1 is introduced in this paper; then real and motivational examples when SJS happens are needed. In particular, what is the real meaning of the invariance (i.e., the equation in line 122)? Is this invariance actually observable in practice?\n\n* It would be better if the definition of the sparse covariate shift is clearly stated; it is partially appeared in line 153 (and it’s kind of understandable based on the context), but having a full definition would be better (e.g., p_s(x_I) \\neq p_t(x_I) and p_s(y|x) = p_t(y|x). Moreover, what’s the real world example for the sparse covariate shift? I think this paper introduces many interesting concepts without motivational examples. It may be interesting if the sparse covariate shift can be motivated using adversarial patches. \n\n* Is the identifiable (p_s, p_t) in Theorem 1 actually observable in practice? 
It would be better if we can frequently observe the identifiable distributions in real datasets. \n\n* The algorithm requires to know the sparsity of SJS, which cannot be known in advance. In this case, the sensitivity of the algorithm performance would be useful to justify the algorithm. Figure 5 embeds partial results on this, but it’s better to see when 0<=s<=d. If the algorithm is quite sensitive to s, what’s the rule of thumb on choosing s?\n\n* The paper claims KLIEP [30] is the state of the art method for estimating the covariate shift; but KLIEP is a quite old paper (i.e, published in 2007), and it works under some assumptions (e.g., a basis (or kernel) function is properly chosen, which is always difficult). In this case, comparing other approaches is required. In particular, there are many papers on “Density Ratio Estimation” [R1], and especially a probabilistic classifier based density ratio estimation is quite simple and still used (e.g., [33] or [R2]). This could be a good baseline for the covariate shift experiments. \n\n\n* I believe, to justify the efficacy of the proposed approach for SJS, three controlled experiments are required (at least): when the dataset contains (1) only covariate shift, (2) only label shift, and (3) both (sparse) covariate shift and label shift. For (1), the proposed approach needs to be as good as known density ratio estimators for covariate shift. For (2), the proposed approach needs to be as good as known approaches (e.g., [21]), But for (3), the proposed approach should be better than other shift dedicated approaches. Without these results, the proposed approach may not claim to be effective for both shifts when we don’t know the type of shifts. I believe synthetically generating datasets for (1-3) is not difficult (e.g., use exponential tilting in [33] for covariate shift).\n\n[R1] Density Ratio Estimation in Machine Learning by Masashi Sugiyama and others.\n[R2] https://arxiv.org/abs/2003.00343\n I think the authors adequately addressed the limitations and potential negative societal impact of their work. " ]
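For context on the classifier-based density-ratio estimation behind the DLU baseline discussed in this thread (see [R2] above), here is a minimal sketch (illustrative only; a generic construction, not the authors' implementation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio_via_classifier(X_src, X_tgt):
    """Estimate w(x) = p_tgt(x) / p_src(x) by discriminating target vs source."""
    X = np.vstack([X_src, X_tgt])
    z = np.concatenate([np.zeros(len(X_src)), np.ones(len(X_tgt))])  # 1 = target
    clf = LogisticRegression(max_iter=1000).fit(X, z)
    p = clf.predict_proba(X_src)[:, 1]  # P(target | x) on source points
    prior = len(X_tgt) / len(X_src)     # correct for sample-size imbalance
    return (p / (1.0 - p)) / prior      # Bayes: w(x) = [p/(1-p)] / (n_tgt/n_src)

# Toy usage with hypothetical Gaussian data shifted in a single coordinate.
rng = np.random.default_rng(0)
X_src = rng.normal(size=(500, 3))
X_tgt = rng.normal(size=(500, 3))
X_tgt[:, 0] += 1.0
w = density_ratio_via_classifier(X_src, X_tgt)
print(w.mean())  # should be roughly 1 if the estimate is well calibrated
```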
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 4 ]
[ "gHHJn1gdB1-", "CXRKpmiTQpz", "QuDCKfDxJh", "ZYQa9pmpT_K", "_aTdShEtStJ", "ZYQa9pmpT_K", "_aTdShEtStJ", "54nPj1Q9xg", "o6o11tnV0mB", "CXRKpmiTQpz", "nips_2022_BK0O0xLntFM", "nips_2022_BK0O0xLntFM", "nips_2022_BK0O0xLntFM", "nips_2022_BK0O0xLntFM" ]
nips_2022_GWcdXz0M6a
PopArt: Efficient Sparse Regression and Experimental Design for Optimal Sparse Linear Bandits
In sparse linear bandits, a learning agent sequentially selects an action from a fixed action set and receives reward feedback, and the reward function depends linearly on a few coordinates of the covariates of the actions. This has applications in many real-world sequential decision making problems. In this paper, we devise a simple, novel sparse linear estimation method called $\textrm{PopArt}$ that enjoys a tighter $\ell_1$ recovery guarantee compared to Lasso (Tibshirani, 1996). Our bound naturally motivates an experimental design criterion that is convex and thus computationally efficient to solve. Based on our novel estimator and design criterion, we derive sparse linear bandit algorithms that enjoy improved regret upper bounds upon the state of the art (Hao et al., 2020), especially in terms of the geometry of the given action set. Finally, we prove a matching lower bound for sparse linear bandits in the data-poor regime, which closes the gap between upper and lower bounds in prior work.
Accept
The paper is motivated by the design of low-regret algorithms for high-dimensional sparse linear bandit problems. The challenge is to obtain regret guarantees even in the data-poor regime where the number of samples the learner can gather may be smaller than the dimension. This challenge had been investigated in [12] with a regret scaling as $(sn/C_{min})^{2/3}$ ($s$ is the sparsity of the problem, $n$ the number of samples, and $C_{min}$ is the maximum over all possible arm distributions of the resulting average variance). The authors propose a scheme whose regret scales at most as $(sn H)^{2/3}$ where $H$ is a new (minimax) constant, proven to be smaller than $1/C_{min}$. The paper also presents a matching minimax regret lower bound. To achieve this improved regret upper bound, the authors develop a new parameter estimation procedure, based notably on Catoni’s estimator (this kind of estimator has been recently advocated in RL with linear function approximation, see “Reward-Free RL is No Harder Than Reward-Aware RL in Linear Markov Decision Processes”, Wagenmaker et al., ICML 2022, and the authors could mention this paper and stress the differences in the use of this estimator). The derivation of the lower bound also relies on new techniques (as mentioned by one of the reviewers). Overall, this is a solid contribution, even though compared to [12], the improvement is not that spectacular.
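For readers unfamiliar with the Catoni-style robust mean estimation mentioned above, here is a minimal sketch (illustrative only; a simplified bisection solver with a hand-picked alpha, not the paper's exact estimator):

```python
import numpy as np

def catoni_mean(x, alpha=0.1, iters=60):
    """Catoni-style robust mean: root of sum_i psi(alpha * (x_i - theta)) = 0."""
    def psi(u):  # Catoni's influence function; grows only logarithmically
        return np.sign(u) * np.log1p(np.abs(u) + u * u / 2.0)
    lo, hi = float(x.min()), float(x.max())
    for _ in range(iters):  # bisection: the objective is decreasing in theta
        mid = 0.5 * (lo + hi)
        if psi(alpha * (x - mid)).sum() > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(0.0, 1.0, 200), [50.0]])  # one big outlier
print(catoni_mean(samples))  # close to 0; the empirical mean is pulled to ~0.25
```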
val
[ "zAL3v9R2WKK", "L-BRE2ZL2PD", "qFNMb5fYTXW", "pvzfPSWUj4f", "OTVrWM6s6oY", "dGo1AfQc-ew", "9q_S2Q6hjZ", "pQAhh0KTwKv" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for following up!\nI guess saying 'comparison between PopArt and Lasso' was a bit confusing. It's more like comparing how one can do design of experiments with PopArt vs Lasso.\n\nTo answer your question, it depends on what 'algorithm' you mean. Two possibilities are: (a) an algorithm that computes an estimator given a data generated from an external source and (b) an algorithm that designs a distribution over the arm set, sample from it and observe their labels, and then computes and estimator.\n\nWe meant (b) in our response. We aim to have an experimental design algorithm for minimizing the L1 recovery error. Specifically, the L1 recovery error of Lasso depends on the compatibility constant, so it is natural to find a design (i.e., probability distribution over the arms) that would minimize the compatibility constant. Of course, you can just use an arbitrary design like the uniform probability distribution, but there is no guarantee that it will work well. Hao et al. (2020) go around the computational difficulty issue by maximizing the minimum eigenvalue of the design, but it results in an inferior bound than our proposed method.\n\nIf you meant (a), then, yes, you do not need to compute the compatibility constant.", " Thank you very much for your detailed reply. \n\nI have one question regarding your reply. Why do you mention the complexity of computing the compatibility constant? As far as I know, no algorithm requires the computation of the compatibility constant in their algorithms. I believe compatibility constant is only required for the theoretical guarantees.\nHaving an assumption on the compatibility constant does not mean that an optimal algorithm should compute the compatibility constant.\n\nBest wishes,\n\nReviewer 7Qj2", " \nThank you for your feedback and questions. We will incorporate your comments on typos in the final version.\n\n>In the results, the lower bound condition in Hao et al is removed but the terms with the diagonal components of Q are used. Does this mean that the lower bound on the eigenvalue condition can be eliminated? Or do you still need to assume that the action set has to span $R^d$?\n\nFor your main quesiton on the action set that does not span the whole $\\mathbb{R}^d$:\n\n* For our current algorithm, we need to take the inverse of $Q(\\mu)$. Therefore, we need the condition that the action set spans $\\mathbb{R}^d$ since $Q(\\mu)$ should be invertible.\n\n\n* Some other sparse linear bandit algorithms can still deal with arm sets that do not span $\\mathbb{R}^d$ if one is fine with regret order of $\\sqrt{sdn}$, which is meaningful in the data-rich regime only. See the 'OFUL with SeqSEW' algorithm of 'Online-to-Confidence-Set Conversions and Application to Sparse Stochastic Bandits' by Abbasi-Yadkori et al. (2012) for an example. \n\n\n>(Although the same is true of Hao et al.) the action set has to span the entire high dimension so the requirement for the action set can be a bit strict for the high dimensional setting. The compatibility constant is, in this sense, a much weaker assumption as it does not require the lower bound on the minimum eigenvalue. \n\nFor comparison between PopArt and Lasso in the sparse linear bandit settings: \n* In the estimation perspective where the data is given and one has to calculate the optimal estimator, we agree that it is possible that Lasso-based linear bandit algorithms may be applicable to linearly degenerate action spaces, whereas PopArt cannot. 
\n* However, in the sparse linear bandit and experimental design perspective, experimental design using the compatibility condition criterion (Eq. (2) in our paper) suffers from a computational issue: the compatibility constant is computationally intractable to calculate, at least naively, let alone designing a distribution $\\mu$ that maximizes it. In this sparse linear bandit setting, this intractability of calculating the compatibility constant also makes the learner hard to calculate an optimal exploration schedule. See also the discussions in lines 186-190 in our paper.", " Thank you for your comments on improving the readability of the paper. We will incorporate your comments in the final version.\n\n>Please explain why knowledge of the signal strenght is necessary for an algorithm to achieve $\\sqrt{T}$ regret around Theorem 4.\n\nOn item 5, For the algorithmic perspective, we need the minimum signal condition to ensure the length of exploration is long enough so that we can exactly recover $\\mathrm{supp}(\\theta^*)$. This fact helps us to decrease the length of exploration compared with Algorithm 3 and hence achieve a $\\sqrt{n}$ regret bound. \n\nIf you are asking about whether it is possible to achieve $\\sqrt{n}$ regret bound using Algorithm 4 without the knowledge of $m$, the answer is yes: we can do so by modifying the exploration length of Algorithm 4 to, say, $n_2 = \\sqrt{n}$. As long as $n$ is larger than a constant ( $n> \\frac{2^{16} H_\\star^{4} \\sigma^4 }{m^4} \\log^4 \\frac{2d}{\\delta}$ ), we still guarantee that $\\hat{S} = \\mathrm{supp}(\\theta^*)$ with high probability, and we can still enjoy the same order of $O(s\\sqrt{n})$ regret.", " \nThanks for your positive review and valuable feedback! \n\nFirst, to prove that the optimization problem that defines $H_*^2$ (Eq. (5)) is an convex optimization, we show that the mapping $f(\\\\mu) = \\\\max_i ((Q(\\mu))^{-1})_{i,i}$ \n\nis convex. It suffices to show that $g(X) = \\max_i (X^{-1})_{i,i}$ is convex.\n\n\nTo see this, first note that the inverse of a matrix is convex, i.e. for two positive definite matrices $X, Y$,\n\n$$ ((1-\\lambda)X + \\lambda Y )^{-1} \\preceq (1-\\lambda) X^{-1} + \\lambda Y^{-1}$$\n\nand therefore maximum entry of the inverse is also a convex function since\n\\begin{align*} \\max_{i \\in [d]} e_i^\\top ((1-\\lambda)X + \\lambda Y )^{-1} e_i & \\leq \\max_{i \\in [d]} e_i^\\top [(1-\\lambda) X^{-1} + \\lambda Y^{-1}] e_i \\\\\\\\ &\\leq (1-\\lambda) \\max_{i \\in [d]} e_i^\\top X^{-1} e_i + \\lambda \\max_{i \\in [d]} e_i^\\top Y^{-1} e_i\n\\end{align*}\n\nTherefore, we can apply traditional convex optimization methods to solve the optimization problem that defines $H_*^2$ (Eq. (5)).\n\nTo implement this, we solve the following optimization problem.\n\n\\begin{align*}\n\\min_{\\mu \\in \\Delta^d , T \\in \\mathcal{S}^{d\\times d}} &\\max_{i \\in [d]} T_{ii}\\\\\\\\\n\\text{Subject to } & T \\succeq Q(\\mu)^{-1}\n\\end{align*}\n\nwhere $\\Delta^d:=\\\\{ v\\in \\mathbb{R}_+^d: \\|v\\|_1 = 1\\\\}$ and $\\mathcal{S}^{d \\times d}$ is the set of symmetric matrices. 
We used CVXPY, a popular convex optimization tool in Python, with code as follows:\n```\nimport cvxpy as cp\nimport numpy as np\n\n# d is the dimension and A is the (number of arms) x d action set matrix;\n# both are assumed to be defined beforehand.\nmu = cp.Variable(d, pos=True)\nQ = A.T @ cp.diag(mu) @ A\nT = cp.Variable((d, d), symmetric=True)\nM = cp.bmat([[Q, np.eye(d)],\n             [np.eye(d), T]])\nconstraints = [M >> 0, cp.sum(mu) == 1]\nobjective = cp.Minimize(cp.max(cp.diag(T)))\nprob = cp.Problem(objective, constraints)\nprob.solve(solver=cp.MOSEK, verbose=True)\n```\n\nHere, we used the fact that $M\\succeq 0 \\Leftrightarrow T \\succeq Q^{-1}$. Below is a proof of this fact:\n\nFirst, note that:\n$M \\succeq 0 \\Leftrightarrow v^\\top M v \\geq 0, \\forall v \\in \\mathbb{R}^{2d} \\Leftrightarrow u^\\top Q u + v_{d+1:2d}^\\top (T-Q^{-1}) v_{d+1:2d}\\geq 0$\nwhere $u=v_{1:d}+Q^{-1}v_{d+1:2d}, \\forall v \\in \\mathbb{R}^{2d}$.\n\nNow:\n* $(\\Leftarrow)$ If $T-Q^{-1}\\succeq 0$, then by the above observation, $M \\succeq 0$ obviously holds. \n* $(\\Rightarrow)$ If $T-Q^{-1}$ is not positive semidefinite, then there exists $v'\\in \\mathbb{R}^{d}$ such that $(v')^\\top (T-Q^{-1})v'< 0$. Let $v_f = -Q^{-1}v'$ and $v=\\begin{bmatrix}v_f \\\\ v'\\end{bmatrix}$; then $v^\\top M v <0$, showing that $M$ is also not positive semidefinite. \n\n\n\nAbout the computational complexity, CVXPY uses the MOSEK solver, which in turn uses a primal-dual interior point method with polynomial running time. \n\nWe will add these discussions in the final version.\n", " The authors consider the sparse linear regression problem with an application to experimental design in linear bandits. They propose a new sparse linear estimator of the true model based on Catoni's estimator, assuming knowledge of the population covariance.\nThe authors bound the error of the proposed estimator in terms of the spectral properties of the covariance matrix. They also propose a warm-start estimator and improve the error bound further. Using the error bound of this estimator, the authors then propose a criterion for designing the sampling distribution in the linear bandit problem. Using this sampling distribution, they propose an explore-then-commit algorithm for experimental design in linear bandits. The authors then bound the regret of this strategy and further provide a matching lower bound. Finally, under an additional assumption on the minimum signal of the unknown model, they design a phase-based algorithm using the estimator and provide its regret bound. Strengths:\n1. The authors provide a novel estimator for sparse linear regression and prove a tighter error bound for it compared to the well-known Lasso estimator.\n\n2. The tighter error bound for the estimator also leads to a tighter regret bound for linear bandits.\n\n3. They also improve the lower bound for experimental design in linear bandits compared to prior work and establish that it is nearly minimax optimal.\n\n4. The results are well-positioned compared to the existing work and the paper is well-written.\n\nWeakness:\n\nThe bandit algorithm for experimental design needs to solve an optimization problem to find the sampling distribution. I find the implementation details and complexity of that procedure missing from the paper. It would be great if a discussion on the implementation of the convex optimization problem is presented. Yes", " The paper studies multi-armed linear bandits in high dimensions under sparsity\nassumptions. This problem has been studied before (notably by Hao et al '20), but the\npaper presents newer (tighter) upper and lower bounds. 
\n\nWe will add these discussions in the final version.\n", " The authors consider the sparse linear regression problem with an application to experimental design in linear bandits. They propose a new sparse linear estimator of the true model based on Catoni's estimator, assuming knowledge of the population covariance.\nThe authors bound the error of the proposed estimator in terms of the spectral properties of the covariance matrix. They also propose a warm-start estimator and improve the error bound further. Using the error bound of this estimator, the authors then propose a criterion for designing the sampling distribution in the linear bandits problem. Using this sampling distribution, they propose an explore-then-commit algorithm for experimental design in linear bandits. The authors then bound the regret of this strategy and further provide a matching lower bound. Finally, under an additional assumption on the minimum signal of the unknown model, they design a phase-based algorithm using the estimator and provide its regret bound. Strengths:\n1. The authors provide a novel estimator for sparse linear regression and prove a tighter error bound for it compared to the well-known Lasso estimator.\n\n2. The tighter error bound for the estimator also leads to a tighter regret bound for linear bandits.\n\n3. They also improve the lower bound for experimental design in linear bandits compared to prior work and establish that it is nearly minimax optimal.\n\n4. The results are well-positioned compared to the existing work and the paper is well-written.\n\nWeakness:\n\nThe bandit algorithm for experimental design needs to solve an optimization problem to find the sampling distribution. I find the implementation details and complexity of that procedure missing from the paper. It would be great if a discussion of the implementation of the convex optimization problem were presented. Yes", " The paper studies multi-armed linear bandits in high dimensions under sparsity\nassumptions. This problem has been studied before (notably by Hao et al '20), but the\npaper presents newer (tighter) upper and lower bounds. Key to the algorithm is a novel\nestimator for sparse linear regression which allows us to design a sampling distribution\nso as to minimize the estimation error. While an s^(2/3) T^(2/3) regret (where s is the\nsparsity and T is the time horizon) seems unavoidable in general, the authors show that\na \sqrt{sT} rate is possible with a modified version of the algorithm if there is enough\nsignal from the non-zero coefficients. Finally, the authors present some simple\nsimulations to corroborate their theoretical results.\n\nWhile the results are not earth-shattering, the paper makes solid progress on a\nwell-established and useful problem. While I didn't have the time to read the proofs in\ndetail (hence the lower confidence score), the results appear believable and the\nintuitions are presented clearly. Hence, I would like to see this paper accepted.\n I don't have any major comments/criticisms about the paper. But here are some suggestions\nwhich might help improve the presentation.\n1. There were too many in-line equations, and generally, the presentation was somewhat\nequation-driven. While I appreciate that this may be unavoidable for a paper such as this,\nI felt that the authors could have done a better job of making the material easier to\nread.\n2. Please try to use words to describe symbols/notation when they are being introduced\n(e.g., explain what R0 is in Assumption 1).\n3. Maybe introduce the compatibility condition (equation 2) closer to the discussion\naround Theorem 2. Right now, it doesn't directly relate to the results in the paper and I\nwas left confused as to what its purpose was.\n4. Instead of using large equations in the algorithm (e.g. line 6), consider defining the\nquantities in the text and referring to them in the algorithm pseudo-code.\n5. Please explain why knowledge of the signal strength is necessary for an algorithm to\nachieve \sqrt{T} regret around Theorem 4. See above. See above.", " This paper revisited the analysis of the sparse regression and the sparse linear bandit problem. \nFirstly, they presented the improved linear regression algorithm POPART by combining the idea of Catoni's estimator and thresholding.\nThen, they showed that using POPART, a simple explore-then-commit-type algorithm improves the regret bound of Hao et al in the data-poor regime.\nThirdly, by using POPART for the support estimate and running a linear bandit algorithm, they presented an improved regret upper bound for the data-rich regime.\nMoreover, a tighter lower bound for the data-poor regime is presented.\n The main contribution of this paper is in its theoretical aspect.\nThe novel algorithm/analysis of the linear regression and its consequences are significant.\nTheir guarantee has tighter dependence than the classical Lasso guarantee by focusing on different quantities (especially Q^{-1}_{ii}). \n\nWith the help of the novel offline regression analysis, the regret bounds for the sparse linear bandit literature are further improved. For the bandit upper bound part, the algorithmic part may not be so novel, but the resulting tighter analysis is significant.\nThey further presented a tighter lower bound for the data-poor regime with a novel application of the change-of-measure technique combined with a symmetrization approach; this technique is noteworthy.\n\nAlthough there may be some parts of the paper that can be improved, I think the paper is worth accepting for NeurIPS.\n \n\n### general questions\n\n(Although the same is true of Hao et al.) 
The action set has to span the entire high-dimensional space, so the requirement on the action set can be a bit strict for the high-dimensional setting.\nThe compatibility constant is, in this sense, a much weaker assumption as it does not require a lower bound on the minimum eigenvalue. \nIn the results, the lower bound condition in Hao et al is removed, but terms involving the diagonal components of Q are used. Does this mean that the lower bound on the eigenvalue condition can be eliminated? Or do you still need to assume that the action set spans R^d? My impression was that the algorithm utilized the idea of Hao et al, but thanks to the improvement with POPART, the requirement on the action set can be further relaxed.\n\n\n ### minor, typo\nline 53, which can is\n\nI think a subgaussian random variable should have zero mean, so there is no need to state "zero-mean \sigma-subgaussian".\n\n\nEq. (3) I think \lambda is not defined so far.\n\nAlgorithm 2, Samples are indexed until n, but should be n_0.\n\nAlgorithms 1-4, \sigma is also an input.\n\nAlgorithm 4, the phased elimination algorithm is not defined in the main paper. A reference or pointer should be added.\n\nEq. (6) I think \lambda_min is not defined. \n\nline 228 and and\n\n \n\n\n See Questions." ]
[ -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "L-BRE2ZL2PD", "qFNMb5fYTXW", "pQAhh0KTwKv", "9q_S2Q6hjZ", "dGo1AfQc-ew", "nips_2022_GWcdXz0M6a", "nips_2022_GWcdXz0M6a", "nips_2022_GWcdXz0M6a" ]
nips_2022_UDmPRm-P1nL
Distinguishing Learning Rules with Brain Machine Interfaces
Despite extensive theoretical work on biologically plausible learning rules, clear evidence about whether and how such rules are implemented in the brain has been difficult to obtain. We consider biologically plausible supervised- and reinforcement-learning rules and ask whether changes in network activity during learning can be used to determine which learning rule is being used. Supervised learning requires a credit-assignment model estimating the mapping from neural activity to behavior, and, in a biological organism, this model will inevitably be an imperfect approximation of the ideal mapping, leading to a bias in the direction of the weight updates relative to the true gradient. Reinforcement learning, on the other hand, requires no credit-assignment model and tends to make weight updates following the true gradient direction. We derive a metric to distinguish between learning rules by observing changes in the network activity during learning, given that the mapping from brain to behavior is known by the experimenter. Because brain-machine interface (BMI) experiments allow for precise knowledge of this mapping, we model a cursor-control BMI task using recurrent neural networks, showing that learning rules can be distinguished in simulated experiments using only observations that a neuroscience experimenter would plausibly have access to.
Accept
This paper explores the question of experimentally distinguishing between different hypothesized classes of learning rules in the brain (specifically biased supervised learning and unbiased reinforcement learning). It derives a metric to distinguish between such learning rules based on changes in neural activity seen during learning with a brain-computer interface. The authors show that this metric can be used to identify which learning rules are the best account of the observed activity changes. The reviewers agreed that this paper makes an original and important contribution to the field, and the decision to accept was unanimous.
train
[ "kEXE-gptXR", "8Lbyxh7Edz", "9yNyxqRstZ3", "Awvv_1e5zxN", "Mei7dKLAIlh", "vn73Ji5GrDz", "mlB6ZrCMaVo", "QcRhljZmtPA", "09yURtam7T1", "LULcw35yl0", "0pslcBXGOHQ", "jtHQyhnNI9X", "k59hCcLXDy", "BeeVVG38BK" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for the clarifications they provided and agreeing to incorporate some of my suggestions. I am glad I could help improve the quality of this work. \n\nThe points about noise is clear to me and given the author's updates during the rebuttal+discussion period, I believe my concerns are addressed. \n\nThank you for elaborating on the early vs late hidden state distributions. I must admit I was not aware of Golub et al. (2018) and was not aware of this phenomenon that RNNs tend to change their activity very little while learning -- thank you for the pointer, I will read more about this to understand better. A supplementary figure about this would be very helpful. \n\n\"We apologize that we are not able to complete this in the few hours that the reviewer has left us with before the response deadline.\" -- Apologies from my end for delay in responding to your comments. I do not expect the supplementary figure in the current version, but if the authors could add it in the camera-ready version, that would be great. \n\nGiven these developments, I will update my score to 7. I wish the authors good luck and would like to congratulate them on their work. ", " We thank the reviewer for their careful reading of our paper and their response, as well as for their positive overall assessment of our work and willingness to consider revising their previous score. We address the individual points below.\n\n1. Scope of the work\n\n> I would appreciate if the authors could make this explicit in their abstract and introduction.\n\nWe agree with the reviewer that this is an important point about the scope of our work that should – in addition to being discussed in our Discussion section – be made more clearly early in the paper. We will be happy to make this point more explicitly in the abstract and introduction of our final version.\n\n2. Noise in RL learning rule\n\n> If the authors are willing to update the presentation of their work, I am happy to increase my score.\n\nAs we mentioned in our response to Weakness 3 above, we have already made updates to the new text to clarify the misunderstanding about noise averaging of the RL update. Specifically, we have clarified the RL rule in the paragraph beginning on line 138, and clarified the flow field calculation in the paragraphs of our updated submission beginning with line 175. We have also updated the language around RL in Appendix B.2. We hope that this addresses the reviewer’s concern. If not, we would be happy to add additional clarifications about this that the reviewer thinks are necessary to the final version. \n\nRegarding the comment about measurement noise, the reviewer’s point seems to be that measurement noise and exploratory noise for RL are different things. In a sense, yes, since the latter is used for learning slowly over many trials. But this difference is irrelevant if one is asking about how noise corrupts observations of neural activity within each trial. Hence, if the reviewer is concerned about measurement noise, then we believe that the new Fig S8 is relevant for addressing this concern.\n\n3. Concerns over the framework's rigour\n\nWe believe that the reviewer’s concerns about our paper’s rigor are largely based on misunderstandings, which we attempt to address point by point below. 
We apologize for the misunderstanding and will make clarifications to our final version to avoid such confusions by other readers.\n\n> The network state could be different at different time points and I think it is actually unreasonable to assume that the same $h$ can be obtained at any arbitrary time point.\n\nWe agree that this would be an unreasonable assumption, since, as the reviewer points out, the whole point of training the RNN is to change its activity trajectory in a useful way. However, this is not an assumption that we are making. Essentially, our flow field is a model, and $h^{n,t}$ are the data points used to train the model, while $h$ is a generic point at which we evaluate the trained model. If the reviewer’s point is that the model will give poor predictions if the training and testing points are in completely different parts of the space of possible $h$, then this is certainly true. However, our simulations (data not shown, though we would be happy to include this as a supplemental figure in the final version if the reviewer thinks it is important) show that this is generally not the case. During training, we find that RNNs tend to change their activity as little as possible while learning to perform a task correctly, so that the distribution of training and testing points is not drastically different. (Interestingly, this has also been shown to be the case for neural population activity in motor cortex in BMI experiments–see Golub et al. (2018).) Indeed, the fact that we obtain conclusive results from our simulated data gives evidence that the model we are fitting is doing a decent job.\n\n> The authors claim that the feedback is the same for early vs late stages of training. This assumption does not make sense to me.\n\nThis is essentially the same misunderstanding as the previous point. Our assumption, which we point out explicitly in the manuscript, is that the feedback weights are not changed, not that the activity $h$ itself is the same early and late in training. As described above, $h^{n,t}$ are the data points used to train the model, while $h$ is a generic point at which we evaluate the trained model.\n\n> One possible way to mitigate this issue is to consider how different $h$ is for early vs late stages…\n\nAs we mentioned above, we would be happy to include a supplemental figure on this in the final version. We apologize that we are not able to complete this in the few hours that the reviewer has left us with before the response deadline.\n\n> My suggestion would be to address this point as a note to the readers…\n\nWe will be happy to add a clarification about the above points to the final version of our manuscript. We apologize for not being clearer about these points in our original submission.", " I thank the authors for responding to my review. I am particularly grateful to them for clarifying the impracticality of applying their method to real data given the current climate of data-sharing in monkey neuroscience; I was not aware of this. I will raise my score as my concerns have been alleviated.", " Thanks to the authors for the detailed response! \n\nMy main concern, as you sussed out, is that the paper felt incremental relative to claims. This response clarifies the context behind some of the decision making and experimental design and I buy that it's well-motivated and a useful contribution. I especially appreciate the context about three-factor learning rules, since my knowledge about this space is likely somewhat outdated. 
\n\nIn terms of language, I do still feel like statements such as \"method for distinguishing biased SL from unbiased RL under the assumption that the mapping from the brain to behavior is known\" overstate what we learn about the brain from these experiments (we know it's supervised learning?) but on re-read I do agree that the majority of the paper is careful about the specific insights. So I'm not going to push on this.\n\nScore raised. ", " I would like to thank the authors for responding to my comments. I would also like to apologize for the delay from my end in getting back to the authors. \n\n1. **Scope of the work:** I appreciate the authors' clarification about their method being useful for distinguishing biased vs unbiased learning rules. I think this is a critical point in defining the scope of their work and I would appreciate if the authors could make this explicit in their abstract and introduction. Although the authors mention that their aim is to distinguish supervised vs reinforcement learning rules, I think it would be helpful to mention the biased vs unbiased point (I am willing to increase my score by 1 point if the authors make this change or agree to make this change).\n\n2. **Noise in RL learning rule:** I apologize for my misunderstanding and thank the authors for clarifying this point. From the original submission, I failed to grasp this, but this clarification was quite helpful. As mentioned above, this clarification is key to defining the scope of this work from the perspective of experimental validation of the proposed framework, and if the authors are willing to update the presentation of their work, I am happy to increase my score to reflect my views.\n - As an additional clarification about the last point in limitations: I meant noise in measuring the neural activity, not the noise in the sense that the authors use, e.g. in the RL rule. But given the discussion about applying their framework to real world neuroscience data brought up by reviewer Jh97, I agree that this is beyond the scope of the current work.\n\n3. **Concerns over the framework's rigour:** Although I believe the authors do a commendable job in presenting their motivation and their solution and in demonstrating that their proposal is useful, I have my concerns over the mathematical details of the framework, and I must admit that I found some of the authors' responses slightly hand-wavy. \n - First, the authors changed their notation from $h^t$ to $h$ while defining their flow, although $h$ depends on $t$. I feel this slightly obscures a key detail and could be misleading to the reader. Specifically, the network state could be different at different time points, and I think it is actually unreasonable to assume that the same $h$ can be obtained at any arbitrary time point. This point has other effects as discussed below. \n - The authors claim that the feedback is the same for early vs late stages of training. This assumption does not make sense to me. For the same input, the model output should be different after learning. Alternatively, no (effective) learning has happened in your model, i.e. the model has not improved in performing the task. The authors' assumption misrepresents this issue by dropping the $t$ subscript from $h$ and assuming that the same $h$ can exist in both early and late stages; and because the feedback and BMI weight matrices are held fixed, the feedback would then also remain fixed. 
I believe that as the system learns, the state space statistics would also change and therefore, for the same input $x$, the trajectory of $h$ would be different. Consequently, the model output would be different and therefore the feedback would be different. I believe this is a core issue in computing the change in flow.\n - One possible way to mitigate this issue is to consider how different $h$ is for early vs late stages, and it is possible that the authors' description of $\Delta F$ is a pretty good approximation to the true change in flow that can be measured. Ideally, the authors could characterize this approximation to make their proposal more rigorous. But given the time constraints of the rebuttal period (and to add to that my own delays), I believe it could be a tough ask. \n - My suggestion would be to address this point as a note to the readers while describing the mathematical details of the method (or add a sentence that this is an assumption for the method). Please feel free to refute me if I misunderstood something. If the authors agree to add this note, I am happy to increase my score to 6 because I believe that with some caution in mind, this work is a good contribution to the NeurIPS community. \n", " Thanks for the thorough response and for the provided clarifications.\n\nI especially appreciate the additional analysis performed by the authors that shows the method's capability to distinguish the two learning rules while observing the activities of only a subset of the network's neurons. I think this result makes the proposed method more ready to face the challenges of real-world BMI data.\n\nAs to whether the real-world experiments should be included in this paper, I tend to agree with the Authors' arguments that: 1) the BMI data may be difficult to obtain and 2) if such experiments were included, the paper would've been more relevant to neuroscience journals, whose readers are likely less interested in the model. For these reasons, I believe that the model alone is a sufficient contribution to NeurIPS.", " We thank the reviewer for the detailed response.\n\nWeaknesses:\n1. We have made the suggested change in the updated manuscript, which was merely a confusing bit of notation and does not affect our results.\n\n2. This observation is correct. In the updated version, we have included the following statement below Eq (7) to clarify this: \"While the absolute magnitudes of this quantity are not informative on their own, the relative values can be compared to infer which of the two learning rules is more likely to have generated the data.\"\n\n3. The RNN simulations do not use a noise-averaged version of the RL weight update (Eq 3). We only use the noise-averaged version of the RL weight update to motivate our proposed metric (Eqs 5-7) and to derive under what conditions the node-perturbed weight updates follow the gradient defined by the BMI decoder (Appendix B.2). We have clarified the language in the main text and appendix in our new version. The noise $\xi$ that we use for most simulations is relatively high, as can be seen from the noisy cursor trajectories in Figs 2A,D, and is necessary for RL exploration. \n\n4. In general, roughly, the results are significant with p<0.05 whenever the shaded regions (SEM) in our plots are non-overlapping. To avoid clutter, we have chosen not to indicate statistical significance on every result that we present. 
However, we have explicitly computed statistical significance in the new Fig S8, which shows the effects of varying noise on our main results.\n\nQuestions:\n1. (Addressed in Weakness 1)\n\n2. Each point in Figure 2F is the average of different instantiations of matrices $M$ and $W^{bmi1}$, where we randomly choose matrices $\\hat{M}$ to have the desired alignment with $(W^{bmi1})^T$.\n\n3. The flow field $F$ is a matrix that maps a point $h$ in neural activity space to another point in activity space. It is defined for all $h$, regardless of time. $\\Delta F(h)$ is therefore the change in flow field, i.e. the difference between these two matrices, for any $h$.\n\n4. We have fixed our notation in Eq 4 and the sentence following it. \n\n5. We clarified in our revised text in line 169 that the relation between $\\Delta F$ and $\\Delta W$ only strictly holds when the other weight matrices are unchanged. We mention this as a challenge in line 316 of our updated version.\n\n6. The feedback in our model is given by $y = W^{fb} W^{bmi} h$, and, as stated above, we assume that $W^{fb}$ and $W^{bmi}$ (determined by the experimenter) are fixed. When we perform the autoregression to compute $A$, part of this matrix will be a contribution from $W^{fb} W^{bmi}$, and that contribution will be the same for early and late trials, so that $\\Delta F$, which subtracts these two quantities, will be unaffected. Thus, there are no necessary assumptions regarding $y$ beyond those already discussed above.\n\n7. (Addressed in Weakness 2)\n\n8. We found that networks with alignment between M and $W^{bmi}$ of less than 0.3 mostly were unable to learn the cursor control task. On the other hand, our results show that learning rules become difficult to distinguish when the alignment is $\\gtrsim 0.8$. This leaves a significant range of values in which our framework produces meaningful results. \n\nLimitations:\n1. (See response to Weakness 2)\n\n2. This was addressed in detail in our response to Reviewer SqGf. The short version is that (i) there aren’t many biologically plausible learning rules for training vanilla RNNs in the literature, and (ii) we have included an appendix on “biased BPTT” as an example to show that our framework applies to biased algorithms other than RFLO. Based on our theory and simulations, we think that there are good reasons to expect that biased vs. unbiased learning rules can be distinguished in general within our framework. We don’t mean to claim, though, that our approach would necessarily be capable of distinguishing more subtle differences between learning rules within each of these classes. Hence, our view is that two-way comparisons such as the ones we presented between biased and unbiased learning rules are more likely to lead to meaningful insights when applied to experimental data than trying to identify the one true learning rule from a large number of candidates.\n\n3. Our RL rule is, in fact, a high-variance learning rule (Eq 3). This is likely why the correlation values for the RL-trained networks are much smaller than 1. The noise used for most simulations is relatively large, as can be seen from the noisy cursor trajectories in Figs 2A,D. This noise is necessary for RL exploration at each timestep and over trials. \n\n4. As mentioned in the preceding response, the noise in our simulations was chosen to be fairly large in large part for this reason. We have also added Fig S8 to show how our main results change as the level of noise is increased still further. 
In general, we find that increasing noise causes our RNNs to no longer train successfully before it causes our flow field metric to fail.", " The reviewer’s major concern is that, while our approach enables us to distinguish between the two learning rules that we consider, we are not able to definitively distinguish between _classes_ of learning rules. We fully agree that this is a limitation of our work and discussed it in some detail in the second paragraph of our Discussion section. The point that we attempted to make there is that any possible learning rule either does or does not make use of a credit assignment mapping. If the algorithm does make use of such a mapping, then, given that there is no plausible way for the brain to instantly have perfect information about how its neural activity maps onto behavior, this mapping will necessarily be biased, and this bias will leave a signature in the neural activity. If future researchers have different candidate learning rules that they would like to test, they will be able to use our framework for those learning rules. Though we don’t have a fully general proof, we conjecture that our approach will be capable of distinguishing biased from unbiased learning rules with a fair degree of generality (cf. our discussion below on Appendix C.2, where we apply our approach to distinguish biased vs. unbiased versions of BPTT). \n\nRelatedly, the reviewer seems to be concerned that there are a large number of biologically plausible learning rules in the literature for training RNNs, and that we are cherry picking. From a systematic review of approximate gradient-based learning rules for vanilla RNNs (Marschall et al, 2020), RFLO is the only one that is fully local, and hence, according to our criteria, biologically plausible. In the last two years, the most prominent biologically plausible algorithm for training RNNs has been e-Prop (Bellec et al, 2020), which is essentially a generalization of RFLO to spiking networks. For RL, the only other algorithm that we are aware of besides the simple node perturbation that we use is from Miconi (2017), which is so similar that it would be highly unlikely to change our main results.\n\nWhile the reviewer’s assessment that “The setup and idea are also completely worthwhile…The fact that this setup led to conclusive experiments about learning rules may be significant enough to warrant publication” is encouraging, we aren’t sure what changes would satisfy the reviewer’s main criticism. We don’t believe that a complete proof for all possible learning rules is attainable, and, while we would be happy to perform additional simulations using extra learning rules, it isn’t clear how useful this would be since there aren’t many others in the literature for vanilla RNNs, and the ones that are in the literature are minor variations on the ones that we consider and are unlikely to give different results. 
If the reviewer feels that our work is satisfactory but our claims are unsupported, then we would consider amending our language, but we have already tried to be careful about this in the paper and discussed it appropriately as a limitation, so we would appreciate more concrete suggestions about which specific changes the reviewer would like to see.\n\n>“We don’t know enough about biological learning rules to...be highly productive...”\n\nWe interpret the reviewer's point to be that the brain may be using very different learning mechanisms than the ones that we consider, in which case our results would have limited relevance to neuroscience. In fact, there is a substantial amount of experimental evidence for so-called 3-factor learning rules in the brain, in which plasticity depends on a multiplicative combination of pre- and postsynaptic activity, as well as a third factor that contains information about error or reward (a schematic sketch of such an update is given at the end of this response). The learning rules that we consider fall within this framework, and we have added a citation in line 497. Thanks for helping us to clarify this important fact. \n\nWe have divided our Results section into separate Theory and Simulation results sections. \n\nQ1, Q3: The experimenter knows the decoder because they get to define it, and abruptly changing it is a standard feature of BMI experiments, creating a learning problem that the experimental subject has to solve. We have added minor clarifications in the paragraph at line 41 to make these points clearer.\n\nQ2: In Appendix C.2, we show that our systematic bias claim extends to biased and unbiased versions of BPTT. Although BPTT is nonlocal, this shows that we are able to distinguish between biased and unbiased algorithms other than RFLO and node-perturbation RL.\n\nQ4: This idea is not specific to RFLO and applies to gradient-based algorithms generally. A supervised algorithm can only learn if there is positive alignment between the readout and feedback weights (Lillicrap et al, 2016). Thus, the fact that monkeys are able to learn BMI tasks means that, if they are using SL, there must be positive alignment between these matrices (presumably due to learning of $M$, since $W^{bmi}$ is fixed).
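\n\nFor concreteness, a three-factor node-perturbation update of the kind described above has the schematic form below (an illustrative sketch with made-up sizes and reward values, not our exact implementation or hyperparameters):\n```\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn = 50                               # illustrative number of recurrent units\nh = np.tanh(rng.standard_normal(n))  # presynaptic activity (made up)\nxi = 0.1 * rng.standard_normal(n)    # exploratory noise injected into each unit\nR, R_bar = 1.3, 1.0                  # reward on this trial and its running baseline\neta = 1e-3                           # learning rate\n\n# Three factors: presynaptic activity, postsynaptic perturbation, and a global reward signal.\ndelta_W = eta * (R - R_bar) * np.outer(xi, h)\n```\n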
", " We thank the reviewer for their enthusiastic review and thoughtful comments. Below are responses to specific suggestions and questions:\n\n> “The correlations … are evaluated using the network’s states h visited in the experiment. Why?” \n\nWhen dealing with a high dimensional neural activity space, we wanted to be careful about sampling from the part of the space where activity plausibly exists. As the reviewer points out, we considered such states h as representative of the distribution of the possible states of the network. We were careful to sample from non-overlapping subsets of points in neural activity space when generating predictions of the change in flow field for the matrices $M$ and $W^{bmi}$ (equations 5 and 6 of our revised submission), and when calculating the final correlation metric (equation 7). We have clarified our language around this in the main text of our revised submission, beginning with section 2.2 “Characterizing changes in neural activity with vector flow fields.”\n\n> “...it seems reasonable to assume that i) there’s an additional input to the network and ii) parts of the network’s activations are not observed. What modifications, if any, should be introduced to the proposed algorithm to account for such omission of the data?”\n\nThe simulations in this study have assumed that the decoder $W^{bmi}$ reads out the neural activity from all the neurons in the RNN. This assumption is not realistic with respect to neuroscience experiments, as the reviewer has pointed out. We completely agree with the reviewer on this point, and address it in our revised appendix. We ran simulations for RNNs where the observed neurons are only a subset of the full network, and find that this does not affect our conclusions.\n\nMore specifically, we ran simulations for RNNs with 200 recurrent units using SL and varied the number of units read out by the decoder between 25 and 200, setting the decoder weights of all non-readout units to zero. Network and training hyperparameters were otherwise the same as in Figure 2. Figure S6 shows that, in this more realistic scenario where a BMI only decodes a subset of the neurons in the neural population, our correlation metric continues to distinguish between the SL and RL training algorithms.\n\nThe reviewer also brought up the possibility that there is additional input to the circuit that is unknown and likely difficult to measure. We acknowledge that this is more difficult to take into account. In our modeling, we have made the assumption that the circuit inputs are the same throughout learning. In equation 1, this would correspond to variables $x$ and $y$ remaining roughly the same throughout learning.\n\n> “Although it’s outside the scope of this work, …how much data is needed to perform the proposed distinction on the biological data, and how does it compare to existing datasets?”\n\nOur modeling assumed access to 50 - 200 neurons, and between 500 - 1,500 trials for SL and 2,500 - 15,000 trials for RL. While the specific number of learning trials depends on the choices of hyperparameters, we think these numbers fall within the range of chronic BMI experiments.\n\nAn ideal BMI experiment motivated by our framework would involve (i) chronic recordings in motor cortex before, during, and after BMI learning, (ii) proficient learning of at least two decoders $W^{bmi0}$ and $W^{bmi1}$, and (iii) BMI decoder mappings that are difficult but learnable over multiple days. \n\nIt is ideal to have at least two decoders so that two matrices can be compared - for example, when learning $W^{bmi1}$ after having proficiently learned $W^{bmi0}$, we can either (i) estimate $M$ using neural activity recorded during the learning of the first decoder or (ii) equate $M$ with $(W^{bmi0})^\top$, and ask whether changes in neural activity lie in the subspace defined by the image of $M$ or in the subspace defined by the image of $W^{bmi1}$. Finally, because there is the possibility that the cerebellum is involved when learning “easy” BMI mappings and perturbations, it would be ideal to make the decoder sufficiently difficult that it requires multi-day learning, presumably involving long-term plasticity in motor cortex.\n\n> \"Figure 3(E) caption\"\n\nThank you for catching the typo in the caption of Figure 3E; we have corrected it in the updated version.\n \n", " Thanks to the reviewer for their supportive review and insightful feedback. \n\nThe single major criticism by the reviewer was that we should apply our theory to real neuroscience data. We absolutely agree, but we feel that this is beyond the scope of this paper for the following reasons:\n\n1. 
Because the derivation and application of our metric to simulated data requires a significant amount of calculation and explanation, it is our view that the work presented here on its own represents a substantial contribution. A paper including data would necessarily have much more focus on experimental details and less on the development of the theory, to which we wanted to give due attention. Such a paper would have been more appropriate for a neuroscience journal rather than NeurIPS.\n\n2. Our ideal experiment would be chronic recordings in M1 (~2 weeks) where experimenters change the decoder more than once. Such experiments take years to design and train monkeys on. Only a few research groups are capable of running such experiments, and we are hoping this simulation-based study will lead to collaborations with these groups. Such data is rare and extremely valuable. For example, by our count, one of the leading research groups in this field has published half a dozen papers in high-impact neuroscience journals using a single dataset of this type. The culture of openly sharing data has unfortunately not yet caught on in monkey neuroscience to the extent that it has in other fields. Publicly available monkey BMI data, to our knowledge, does not exist.\n\nThe reviewer also made several constructive suggestions which gave us opportunities to clarify our presentation. Below are responses to the reviewer’s more-minor comments:\n\n>“There's a typo in Equation 1…” \n\nThank you for catching these details. We have corrected the superscript on $y$ to $t-1$, define $\\tau$ as the RNN time constant and $\\phi$ as the activation function in the updated version. In the paragraph starting with line 537 of Appendix C.1 we state the $\\tau$ value we used as 10 and the activation function $\\phi$ as $\\tanh$.\n\n>“Linearity is assumed to get to equation 4. Can you justify why this is a valid thing to do both in the simulated settings you consider, as well as how this assumption may or may not be valid when you move to real data.”\n\nIt is important to emphasize that, although the task performed by our RNN is simple enough to be achieved by a linear network (cf. Fig. S4, which we have included to illustrate this), in which case our linearized theory would be fully justified, we chose to perform simulations in a nonlinear RNN for precisely the reason that the reviewer mentions, i.e. to model the nonlinearities in the brain and to show that our approach, despite the linear approximation, still enables us to draw conclusions about the learning rules used to train the RNN.\n\n>“On line 170, you say that M can be readily obtained from experimental data. It was not obvious to me how you would do this.”\n\nWhile we did propose and explain a procedure for this later on in the paper (Sec. 3.4 in the updated version), we failed to signpost the result appropriately where we made the claim in line 170. We have fixed this in the updated version. \nIn Sec. 3.4, we propose one way to estimate M, and show that it works on our simulated data. This proposal requires learning two separate decoders, $W^{bmi0}$ and $W^{bmi1}$, and uses the observed neural activity during the learning of the first decoder to estimate the credit assignment mapping $M$. We then ask whether the network is using some estimate of $M$ during the learning of the second decoder. 
\n\nWe think that this approach could plausibly be applied to real BMI experiments with non-human primates, by training the monkey on two separate decoders, and estimating a possible “credit assignment mapping” from the neural activity observed during the learning of the first decoder. An even simpler setup would be to equate $M$ with $(W^{bmi0})^\\top$, instead of estimating $M$, and then analyze neural activity during the learning of $W^{bmi1}$.\n\nAn ideal BMI experiment motivated by our framework would involve (i) chronic recordings in M1 before, during, and after BMI learning (ii) proficient learning of at least two decoders $W^{bmi0}$ and $W^{bmi1}$ and (iii) BMI decoder mappings that are difficult but learnable over multiple days. It is ideal to have at least two decoders so that two matrices can be compared - for example, when learning $W^{bmi1}$ after having proficiently learned $W^{bmi0}$, we can either estimate $M$ or equate $M$ with $(W^{bmi0})^\\top$ and ask whether changes in neural activity lie in the subspace defined by the image of $M$ or in the subspace defined by the image of $(W^{bmi1})^\\top$.\n\nFinally, we have named our proposed metric “FFCC” for Flow Field Change Correlation, and thank the reviewer for this suggestion.\n", " This paper presents a method for distinguishing learning rules in the brain using data observed via brain-machine interface (BMI) experiments. The attempt to distinguish learning rules is motivated by the well-known fact that backpropagation is biologically implausible due to (potentially among other reasons) the weight-transport problem.\n\nThe weight transport problem is especially acute when fitting to data from BMI experiments because the weights mapping neural activity read by the BMI to behavior exhibited by the subject may change abruptly and cannot be immediately estimated for global error propagation. This approach therefore requires setting up a machine learning experiment that avoids the weight transport problem. \n\nThe authors model a BMI task using a vanilla recurrent neural network (RNN) with input weights W_in, recurrent weights W_rec, feedback weights W_fb, and BMI decoder weights W_bmi. W_bmi is fixed and learning only occurs in W_rec. In one set of experiments, the RNN is trained with supervised learning, specifically the Random Feedback Local Online (RFLO) rule that avoids the weight-transport problem by assuming that the weights carry an imperfect approximation of the ideal credit assignment mapping and dropping local terms from the weight update. This is expected to be a biased estimator. In the other experiments, the RNN is trained with reinforcement learning with node perturbation and subtracting off a baseline, yielding a somewhat noisy but unbiased estimator. Running these training processes yields a synthetic version of the data that would come from a BMI.\n\nThe two different learning paradigms are distinguished via a metric based on vector field flows over training. The authors hypothesize that supervised learning will have bias emerge over the course of training because of RFLO while reinforcement learning will not, making them distinguishable. They present the results of the training performance and distinguishability via flow field-based metrics. \n Edit: author comment + context from other reviews is more convincing of the value of this experiment for the field. 
Raising score.\n\n**Strengths**\n*Originality*: This paper takes the well-known problem of understanding and modeling biological learning rules and the existing tool, brain-machine interfaces, and creates a novel application. This application requires creative training formulations and metrics to test and evaluate. To my knowledge, this is novel – the problem and tool exist, but in different fields. The application is non-trivial and unique. The authors show a representative sample of related prior work regarding investigating biological learning rules from experimental data, including the most similar studies I know of (Lim et al. and Nayebi et al.), and distinguish their work from them. Furthermore, they present some related work on modeling BMI experiments with RNNs; I am not aware of any BMI work that is more related than this. \n\n*Quality*: This paper justifies all its small claims (i.e. details) correctly. The experiment setup with supervised learning and reinforcement learning seems correct to my understanding, the hypotheses about bias specifically from RFLO and noise without bias from node-perturbation-based RL are reasonable, results from execution make sense, and the metrics are presented and used to validate the approach convincingly. \n\nIn my opinion, the paper does not deliver on larger claims in a cascading way. First, the central hypothesis explaining the distinguishability seems to be that supervised learning (not just RFLO) will show bias emerge in training trajectories and reinforcement learning won't. However, only one local supervised learning rule is shown and the explanation for bias emerging is specific to it. The related works cited here present other local supervised learning rules; they should at least be discussed to make a claim about supervised learning that avoids the weight-transport problem. This then cascades into a larger problem – being able to distinguish between some local supervised learning and some reinforcement learning doesn't support the larger claim that this method can distinguish between learning rules, at least not in my opinion of the spirit of this claim. Finally, even if we assume these claims are supported, this cascades into the fact that we don't know enough about biological learning rules to know that the ability to distinguish between local supervised learning and reinforcement learning will be highly productive; there is no biological experiment to support this. The authors cite some related work about the biological plausibility of each of these, but basing claims from purely simulated experiments on these fit experiments seems insufficient. \n\nEven if others disagree with me on the spirit of the claim the paper makes in its title, intro, and discussion, ultimately we have seen that simulated BMI experiments with flow field-based metrics can distinguish between (an example of) biased and unbiased learning trajectories. I consider this far from the scope of the main claims. \n\n*Clarity*: The writing in this paper is clear and useful for understanding. I didn't notice any concerning typos, and the ideas get across. The clarity suffers somewhat on the structure and figures. For structure, the paper basically has two qualitative sections (intro and conclusion, total ~2 pages) and one very long section (termed "results") with everything else in it; the results only arrive in the second half, after methods have all been communicated. 
At the moment, the fact that “Results” are signposted before the reader knows experiment setup, the fact that the results don’t come until a long time after “Results” has begun, and the general lack of structure make this paper confusing and a bit overwhelming to read. It would be as simple as breaking this section into “Methods” and a couple results sections (e.g. one for performance, one for distinguishability) to fix that.\n\nThe figures contain the right information, but also aren’t clearly presented. Most of the figures contain many different subfigures, which is fine, but they aren’t well-annotated – it is helpful to show significant points on the charts directly. Furthermore, schematic parts of the figures (like all of figure 1) could benefit from less visual detail in the icons – lots of squiggly lines may look nice and even be more faithful to reality, but if they don’t improve our understanding of the point of the figure, they aren’t necessary and can be visually cluttered. \n\n*Significance*: As stated in the quality section, the claims of this paper are big and worthwhile. The setup and idea are also completely worthwhile: the idea of doing ML-analogous real experiments with BMIs is promising and this paper gives a good example of setting such a direction up. The fact that this setup led to conclusive experiments about learning rules may be significant enough to warrant publication (I’m unsure). But not only do the results not live up to the broad claims of the paper, they don’t make a very significant *result contribution* for machine learning for the reasons discussed in the quality section. We know simulated BMI experiments are a promising tool because of this paper, but not that they will provide publication-level value. Combined with the fact that this promise hasn’t been validated in real BMI experiments, I feel that we end up without a real “get” in this paper. \n -\tWhy does the weight matrix mapping neural activity onto cursor position change abruptly?\n-\tHow much does the systematic bias claim extend to anything beyond RFLO vs. node-perturbed RL?\n-\tHow does the experimenter know the decoder? In general, more background on BMIs would be useful – not because I doubt the claims, but because it would help to have a better mental model.\n-\tHow is the assumption about partial alignment with W_bmi^T justified? For those not deeply familiar with RFLO, could you expand a little bit on this detail?\n The experimental and ethical limitations are stated sufficiently. I consider there to be more limitations, but these are likely the products of a difference of opinion between myself and the authors. The fact that they aren’t here is not a knock on the quality of this section. ", " In this paper, the authors propose a method for distinguishing two biologically-relevant learning rules (unsupervised- and reinforcement-learning) based on neuronal activities in a recurrent neural network throughout an experiment similar to conventional studies with brain-machine interfaces. The authors verify their theoretical results in a series of simulated experiments where they confirm the model’s ability to distinguish the considered learning rules under various assumptions of different strengths. The goal of this paper is extremely worthy: whereas there’s plenty of theoretical work proposing various biologically plausible learning rules, little consideration is given to verifying these theories in practice using brain data. 
To address this issue, the authors here focus on two prominent learning rules (i.e. representative algorithms for supervised and reinforcement learning), for which they derive a theory and perform synthetic tests that allow distinguishing such rules based on typically-available data.\n\nThe paper has multiple strengths. The text is clearly written and well structured; the math is fully described in the appendix (requires no further reading!) and appears correct. The two considered learning approaches are representative of the learning algorithms, given that learning algorithms are often split into supervised, unsupervised, and reinforcement learning. The hypotheses in this work are introduced gradually (e.g. in the initial theory and simulations the credit assignment matrix M is considered known, while later on, this assumption is relaxed to "correlated with the real one", and then to "learned from the data"). This approach streamlines the reading of the paper and also helps estimate the proposed method's sensitivity to various parameters in the data. To this end, the authors propose criteria for the method's applicability. After reading the manuscript, I have a few minor questions remaining.\n\n-the correlations of the observed and predicted changes in the flows (delta F observed and delta F predicted; Equation 5) are evaluated using the network's states h visited in the experiment. Why? Is it because such states h are considered representative of the distribution of the possible states of the network, or is there another logic involved? Please clarify.\n\n-Unless I'm missing something, in the model, all the network's states h are considered observable and the only input x to the model is considered to be the task-relevant visual input. In actual biological experiments, however, it's hard to imagine recording from an entire functional circuit. Moreover, due to the unknown wiring of neurons on the individual-cell level, it may be hard to know what part of the functional circuit is being recorded. Therefore, it seems reasonable to assume that i) there's an additional input to the network and ii) parts of the network's activations are not observed. What modifications, if any, should be introduced to the proposed algorithm to account for such omission of the data?\n\n-Although it's outside the scope of this work, it would be great to see some estimate of how much data is needed to perform the proposed distinction on the biological data. How does this volume of data compare to existing datasets, and which ones could be used to run the proposed algorithm?\n\n-Figure 3(E) caption: looks like it should say RL, not SL.\n\n-The paper's title is slightly confusing as there is no actual BMI data for now.\n The limitations appear to be described/addressed fully. Overall, I think that this work is a thorough study of a problem highly relevant to the NeurIPS community.", " This paper introduces a novel metric for distinguishing biological learning rules from changes in neural activity. The proposed metric is the correlation between the observed change in network activity and the predicted change in network activity for each learning rule under consideration. Predictions are made by assuming that the brain can be modeled as a vanilla RNN, and by then deriving the expected change in neural activity for each learning rule of interest. 
The authors show that their metric allows them to distinguish a biologically-plausible variant of backpropagation (\"RFLO\", Random Feedback Local Online) from the REINFORCE learning rule of Williams (1992) in simulation for a wide variety of parameter settings. Strengths:\n- Clarity: this paper, with the few exceptions noted below, is well written. The figures were clear and easy to understand.\n- Quality: the paper is rigorous and examines the usefulness of the proposed metric in a wide range of settings and in cases where the initial assumptions are invalid. (As an aside: you might want to come up with a name for your metric! The ML field loves an acronym and right now, I am not sure how best to reference it, other than \"proposed metric\".)\n- Significance: the paper tackles an important problem in the computational neuroscience community, namely that of identifying biological learning rules from neural activity. As noted by the authors (line 64), there is a lot of interest in proposing biologically plausible learning rules. Having new ways to distinguish competing hypotheses is useful.\n- Originality: to the best of my knowledge, this paper tackles an open problem in the computational neuroscience community with a novel approach.\n\nWeaknesses:\n- Significance: the major limitation of this paper is the lack of application to real data. The authors demonstrate the usefulness of their metric only when applying it to data simulated from an RNN. Thus, it is hard to judge how useful their metric will be in real-world scenarios, where some of the assumptions made by the authors may not hold. For example, in lines 170-172, the authors say: \"By averaging over noise, we effectively assume that the cumulative updates during learning that make up differ only in having different noise realizations, but are otherwise in a consistent direction.\" Real world data is always messy, and I would be surprised if this assumption holds there.\n\nMinor:\n- Clarity: there's a typo in Equation 1, where the superscript on y is $t$, but I think it should actually be $t-1$ (as in Equation 15 in the supplement). For equation 1, neither $\tau$ nor $\phi$ is explicitly defined. For reproducibility purposes, it would be useful for these quantities to be defined and for their values to be explicitly provided. \n My questions are mostly clarification questions:\n- Linearity is assumed to get to equation 4. Can you justify why this is a valid thing to do both in the simulated settings you consider, as well as how this assumption may or may not be valid when you move to real data.\n- On line 170, you say that $M$ can be readily obtained from experimental data. It was not obvious to me how you would do this. My understanding is that $M$ is the matrix that the subject (e.g. monkey) $\it{thinks}$ maps neural activity to the cursor location, as opposed to the true readout matrix.\n\nWhile answering these questions would help me better understand the paper, the biggest thing that would make this paper a clear accept for me would be a demonstration of the metric on real-world data. 
The authors were upfront about the limitations of their work, namely that:\n- They only demonstrated the ability of their metric to distinguish two different learning rules (RFLO and REINFORCE)\n- They only demonstrated the application of their metric in simulation", " The authors provide a framework to distinguish biased vs unbiased learning rules by observing only the unit activity and demonstrate the usability of the framework for recurrent neural networks. The authors demonstrate that observing the changes in the flow field is indicative of the learning rule that leads to synaptic weight changes and can thus be used to distinguish whether the learning rule was biased or unbiased wrt the true gradient signal. Subsequently, the results show that the framework can distinguish the two classes of learning rules when there is sufficient bias in one of them, i.e. the similarity between the credit assignment mapping and the true readout weights is low. Furthermore, they add more factors, namely the feedback weights, anisotropic noise as well as a changing credit assignment mapping matrix, and show that the proposed framework can identify bias in the learning rule. Given the results, it would be interesting to see how this framework extends to realistic settings with neural data from BMI experiments. Strengths:\n1. The idea of using flow field changes to infer changes in synaptic weight is neat and seems promising. The authors explain the key learning rules that they explore as well as the flow field well.\n2. Overall, the paper is well-motivated and well-presented. It is easy to understand the main goal and the general approach of the paper.\n3. The authors systematically add various aspects and complexities of the system, which provides an insight into how the different components interact with the proposed framework.\n4. The results are encouraging enough for the community to extend this work and either use it to analyze other learning rules or use it for BMI experiments.\n\nWeakness:\n1. I felt the latter half of the results section was a bit hard to read or follow. Specifically, it was not very clear how the solution to the autoregression problem in h translates into the mapping from h to F(h). If $h^{t+1} = A h^{t}$, then shouldn't $F(h^{t}) = h^{t+1}-h^{t} = (A-I)h^{t}$, i.e. shouldn't the mapping from $h$ to $F(h)$ be $(A-I)$ instead of $A$? (A short numerical sketch illustrating this point is included after this list.)\n2. The correlation metric values are not setting-invariant, i.e. the absolute values are not necessarily indicative of the correct learning rule. For instance, the correlation values for the RL-predicted flow changes are similar at low values of $sim(M,(W^{bmi1})^T)$ and $sim(\hat{M},(W^{bmi1})^T)$ in Fig 2C and 2F respectively, although the ground truth learning rule is different. Therefore, it is not very clear how to use the value in realistic experimental settings to identify the correct learning rule. In my understanding, the metric can be used to compare between candidate learning rules, but given that correlation values of the ground truth can also be as low as 0.25, it is unclear how reliable it would be for other learning settings. \n3. This work provides a comparison of two learning rules - biased Supervised Learning and unbiased Reinforcement Learning. The proposed framework is able to distinguish (barring some specific cases) between the two rules. However, it is unclear how noise in the learning rule would impact the framework. Specifically, the authors used a noise-averaged version of the RL weight update as the learning rule. 
But in a realistic scenario, weight updates could be characterized by noise around the true gradient direction (high variance update as noted in Fig 1). It is unclear if the framework is relevant for distinguishing such learning rules.\n4. The plots are missing statistical significance tests to infer if the metric is statistically significant in the cases where the authors claim the usability of the framework and if it is statistically insignificant in the cases where the authors claim the framework is not usable. Having the statistical tests will also make it possible to characterize the limits of the framework. 1. See weakness point 1\n2. I am unsure why the RL line in Fig 2F fluctuates with changing $\hat{M}$. In my understanding $\hat{M}$ is only used by the SL predictions and therefore I can understand the different values of the SL line.\n3. While computing $\Delta F$, don't you need the same $h$ for both $F_{late}$ and $F_{early}$? Is this achieved in the current setting by setting $t$ to be the same for early and late stages? How can this be achieved in a realistic scenario?\n4. In the RHS of Eq 4, I am assuming by $h$, you mean $h^{t}$. Is that correct?\n5. The relation between $\Delta F$ and $\Delta W^{rec}$ only holds when it is assumed that $W^{in}$ or $W^{fb}$ is unchanged. Is that a reasonable assumption for realistic BMI settings? If yes, kindly add citations. If not, I'd suggest adding that to the limitations section of your work. \n6. Furthermore, the relation between $\Delta F$ and $\Delta W^{rec}$ assumes the same values of $x,y$. Assuming that the early and late stages of your estimation deal with similar inputs (hence the same $x$), the output of the system should surely change given that there is some learning. So isn't it unreasonable to assume that $y$ would be the same in both early vs late stages?\n7. It is a bit hard to grasp the correlation values in the plots. Although the relative values are higher for the correct learning rule, the absolute values are not very informative. Is it possible to add some sort of \"noise ceiling\" to indicate what could be the best achievable correlation value from theory such that it is possible to compare the efficacy of the framework in correctly identifying the ground truth learning rule?\n8. The framework seems to be able to distinguish between two learning rules when one has significant bias and the other does not. Can it also provide an indication as to what amount of bias is the \"correct\" amount of bias in the weight updates wrt the gradient? Given the current results, it seems unlikely but I am curious to know if you have further insights into this. Overall, I think the authors do a commendable job in presenting their framework. If the issues highlighted above can be addressed, I feel this framework could have significant impact in evaluating the plausibility of different learning rules in the brain. Currently, this work has certain limitations:\n1. A major limitation is the presentation of the flow field change as a way to understand the change in synaptic weights and how the correlation metric is a good summary metric that is indicative of this change. Specifically, I feel it is not immediately clear how the absolute values can be used to identify the correct learning rule or indicate if a learning rule is not plausible. \n2. 
If the authors choose to present their framework as being a good way to compare the biological realism of learning rules, I feel they would need to add more learning rules and show how the metric can either rank the \"closeness\" to the true learning rule or select the correct learning rule from several candidate learning rules. Although this could involve significant experimentation, I feel this would be helpful in establishing the credibility of the framework as a way to order the different proposals currently acknowledged in the field.\n3. The framework lacks analysis of high variance learning rules, which would be crucial in establishing the tradeoff/impact of bias and variance in bio-plausible learning rules and thereby guide the field into proposing learning rules with specific constraints in mind. \n4. It would be great if the authors could add a potential simulation of how the framework would perform in realistic BMI settings where the recording SNR could be a major bottleneck. However, I understand that this point is more a future work and might not fit in the current scope, if the authors choose to present it as such." ]
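(An aside on Weakness 1 of the second review above: in the linear toy case, regressing $F(h^t) = h^{t+1} - h^t$ on $h^t$ indeed recovers $A-I$ rather than $A$. A short check — ours, with assumed toy dimensions and stable dynamics:)

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 5, 50
A = 0.9 * np.linalg.qr(rng.standard_normal((d, d)))[0]  # assumed stable linear dynamics
h = np.zeros((T, d))
h[0] = rng.standard_normal(d)
for t in range(T - 1):
    h[t + 1] = A @ h[t]

# regress F(h^t) = h^{t+1} - h^t on h^t: the recovered map is A - I, not A
F, *_ = np.linalg.lstsq(h[:-1], h[1:] - h[:-1], rcond=None)
assert np.allclose(F.T, A - np.eye(d), atol=1e-8)
```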
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "8Lbyxh7Edz", "Mei7dKLAIlh", "LULcw35yl0", "QcRhljZmtPA", "mlB6ZrCMaVo", "09yURtam7T1", "BeeVVG38BK", "0pslcBXGOHQ", "jtHQyhnNI9X", "k59hCcLXDy", "nips_2022_UDmPRm-P1nL", "nips_2022_UDmPRm-P1nL", "nips_2022_UDmPRm-P1nL", "nips_2022_UDmPRm-P1nL" ]
nips_2022_wYgRIJ-oK6M
BiT: Robustly Binarized Multi-distilled Transformer
Modern pre-trained transformers have rapidly advanced the state-of-the-art in machine learning, but have also grown in parameters and computational complexity, making them increasingly difficult to deploy in resource-constrained environments. Binarization of the weights and activations of the network can significantly alleviate these issues but is technically challenging from an optimization perspective. In this work, we identify a series of improvements that enable binary transformers at a much higher accuracy than what was possible previously. These include a two-set binarization scheme, a novel elastic binary activation function with learned parameters, and a method to quantize a network to its limit by successively distilling higher precision models into lower precision students. These approaches allow, for the first time, fully binarized transformer models that are at a practical level of accuracy, approaching a full-precision BERT baseline on the GLUE language understanding benchmark within as little as 5.9%. Code and models are available at: https://github.com/facebookresearch/bit.
Accept
This paper proposes an innovative pipeline for quantizing transformers to extremely low precision (1-2 bits), while reducing the gap of previous methods to full precision by ~3X. This result has important implications for resource-restricted inference, especially if memory is of concern, but 1-bit quantization has a significant effect on inference latency as well. This work reaches these strong results through careful normalization, separate quantization for non-negative activations, and a combinatorial optimization over various distillation paths. Overall, the paper demonstrates an important albeit incremental advance in the field and is of general interest to the wider community; therefore I propose its acceptance at NeurIPS 2022.
train
[ "ApH2Z7_c9za", "4-mSzBoMFgR", "_8mdxKRA93O", "fmT20z7_wUl", "XxOB92px-n", "X7rLd5FK-jB", "QnFRmiWhdz9", "xNeokIJRg8O", "ZB2GSywGDFN" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank the authors for the response. These clarifications are important to understand the contribution of this work. I now see that small changes to existing proposals can lead to significant improvements in quality. I will raise my rating to borderline accept.", " Thank you, the authors have addressed my questions during rebuttal.\n", " We thank the reviewers for their time and efforts in reviewing our paper. We are encouraged that the reviewers recognized that the proposed method is simple yet effective [Reviewer **2rah**], reduces the gap in GLUE score to the full-precision model by ~3X percentage points [Reviewer **4xBa**], improves the quality of Binarized BERT model significantly [Reviewer **2rah**] and delivers best practices for model replicability [Reviewer **Yoaz**]. \n\nAlso, we appreciate reviewers’ constructive comments, e.g., more discussion on the previous work and hardware complexity [Reviewer **2rah**]; and insightful questions e.g., whether ternary quantization yields significant improvements compared with binary quantization [Reviewer **4xBa**], whether using other pre-training objectives will change the results [Reviewer **Yoaz**], etc.\n\nIn addition to the pointwise response below, we summarize our updates to the paper as follows:\n\n[**Finer-grained comparison to the previous work**] We further provide more ablation studies to demonstrate the improvements brought by each component and discuss the crucial difference between the proposed method and the previous works, e.g., ReActNet and BiBERT. \n\n[**Hardware complexity analysis**] We add the discussion of how to implement the {0, 1} activation binarization using the previous hardware implementation for {-1, 1} binarization without incurring any extra complexity.\n\nWe hope our responses below address all the concerns, and we thank all reviewers' efforts again.\n", " Thanks for the supportive comments and detailed review.\n\n**Q1**: Elaborate on the hyper-parameter selection for the teacher-student.\n\n**A1**: For knowledge distillation (KD) hyperparameters, we set the KD temperature ($\\mathcal{T}$) to 1 in the KD-loss: $-\\sum \\rm{softmax}(\\frac{logits_{teacher}}{\\mathcal{T}})\\rm{logsoftmax}(\\frac{logits_{student}}{\\mathcal{T}})$ in all the distillation experiments. We also tried using $\\mathcal{T}=3$, which is another common setting for distillation, but did not find much difference in the final accuracy.\nFor the training batch size and learning rate. We choose the learning rate in {1e-4, 2e-4, 5e-4} and batch size in {16, 32} from the dev set (as mentioned in A.6), following the common practice in fine-tuning BERT models. Different hyperparameter settings yield ~1.5% accuracy difference.\n\n**Q2**: A possible extra contribution is to perform multiple random runs and report variance. However, how expensive could this exercise become? \n\n**A2**: Thanks for the question. We tried running the same experiment for two trials, but that produced identical accuracy across all GLUE datasets. We deem that is because the student networks are all initialized from the pre-trained BERT model instead of random initialization, which overcomes the potential variance brought by the randomness.\n\n**Q3**: Please speculate if by using other pre-training objectives in the LM task (e.g. 
\n\n**Q2**: A possible extra contribution is to perform multiple random runs and report variance. However, how expensive could this exercise become? \n\n**A2**: Thanks for the question. We tried running the same experiment for two trials, but that produced identical accuracy across all GLUE datasets. We believe this is because the student networks are all initialized from the pre-trained BERT model instead of random initialization, which removes the potential variance brought by the randomness.\n\n**Q3**: Please speculate if using other pre-training objectives in the LM task (e.g. next sentence prediction) would change any findings or results?\n\n**A3**: We discussed this interesting topic and think that the pre-training objectives will affect the final absolute accuracy, but will not have much influence on the relative accuracy gap between the real-valued model and the binarized model. This is because the accuracy gap depends mostly on three factors: (1) model capability, (2) compression algorithm, and (3) the fine-tuning dataset and scheme. Since these three factors remain unchanged, we think the accuracy gap should be relatively consistent across different pre-training objectives.\n", " Thanks for the constructive feedback and detailed comments. We have clarified the differences from recent work and provided an analysis of hardware complexity in the paper and response, which should address the concern. We respectfully ask the reviewer to reconsider the rating, since this review is quite positive otherwise.\n\n**Q1**: The relationship to Bi-Attention.\n\n**A1**: We agree that the proposed two-set binarization and Bi-Attention both use {0, 1} to represent the binarized attention. However, Bi-Attention replaces softmax with the bool function (Eq. 11 in BiBERT), while we find that simply binarizing the softmax output to {0, 1} works better (1.7% higher on the GLUE dataset). Moreover, we discover that binarizing the ReLU output to {0, 1} is also important, which brings a further 2.3% accuracy improvement.\n\nWe did these comparison experiments but did not include them in the paper due to the page limit. The results are as follows:\n\n|Method|Attention|ReLU output|MNLI$_{-m/mm}$|QQP|QNLI|SST-2|CoLA|STS-B|MRPC|RTE|**Avg.**|\n|-|-|-|-|-|-|-|-|-|-|-|-|\n|Bi-Attention with bool function, no softmax|{0, 1}|{-1, 1}|48.1/50.0|60.1|60.6|78.8|14.0|22.3|68.4|58.1|**51.3**|\n|Binarize attention to {0, 1}, keep softmax|{0, 1}|{-1, 1}|51.9/52.6|76.2|60.5|79.6|11.6|18.1|70.6|55.6|**53.0**|\n|Two-set binarization (Table 2 row 6 in paper)|{0, 1}|{0, 1}|57.4/59.1|68.3|64.7|81.0|18.2|24.7|71.8|56.7|**55.3**|\n\nWe have added the discussion with BiBERT in Sec. 3.1 and included the above ablation study in Appendix (Sec. A.3) to give readers a more comprehensive understanding.\n\n**Q2**: Elastic binarization vs. ReActNet.\n\n**A2**: We agree that both ReActNet and the proposed elastic binarization use a learnable bias. The difference is that elastic binarization proposes a novel learnable scaling factor that contributes to the major accuracy improvement. In our experiments, the learnable scaling factor brings a 13.9% improvement on GLUE compared to 1.8% for the learnable bias, as shown in the table below:\n\n|Method|MNLI$_{-m/mm}$|QQP|QNLI|SST-2|CoLA|STS-B|MRPC|RTE|**Avg.**|\n|-|-|-|-|-|-|-|-|-|-|\n|Our two-set binarization (Strong Baseline)|57.4/59.1|68.3|64.7|81.0|18.2|24.7|71.8|56.7|**55.3**|\n|+ learnable scale $\alpha$|76.5/76.8|82.7|85.1|88.1|26.6|62.3|74.3|58.1|**69.2**|\n|+ learnable scale $\alpha$ and bias $\beta$ (BiT$\ddagger$)|77.1/77.5|82.9|85.7|87.7|25.1|71.1|79.7|58.8|**71.0**|\n\nNote that the proposed learnable scale is especially important for the case $A_B\in$ {0, 1}, because in that case the scale of real-valued activations matters for the binary outputs, i.e., $\lfloor Clip(\alpha X,0,1)\rceil\not=\lfloor Clip(X,0,1)\rceil$. However, this scenario has seldom been studied before, since in most previous works $A_B\in$ {-1, 1}, for which scaling real-valued activations makes no difference, i.e., $Sign(\alpha X)=Sign(X)$.
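A tiny sketch of this point (our illustration only; the exact parameterization of $\alpha$, $\beta$ and the straight-through backward pass used in training are omitted and may differ from the paper):

```python
import torch

def elastic_binarize(x, alpha, beta=0.0):
    # scale (and optionally shift) the real-valued input, then clip to
    # [0, 1] and round -- so the binary output depends on alpha, unlike
    # sign(alpha * x), which is scale-invariant
    return torch.round(torch.clamp(alpha * x + beta, 0.0, 1.0))

x = torch.tensor([0.1, 0.3, 0.7, -0.2])
print(elastic_binarize(x, alpha=1.0))   # tensor([0., 0., 1., 0.])
print(elastic_binarize(x, alpha=2.0))   # tensor([0., 1., 1., 0.])
```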
We have added these discussions and experiments to Appendix (Sec. A.2) to make the paper clearer.\n\n**Q3**: Hardware implementation complexity of the BiT model.\n\n**A3**: In BiT, weights are binarized to {-1, 1} and the two-set binarization scheme only applies to activations. Binarizing activations to {0, 1} requires no additional hardware adjustment compared to binarizing to {-1, 1}, because we can represent the binary activation $A_B\in$ {0, 1} with $A'_B\in$ {-1, 1} through a simple linear mapping:\n$$A_B=\frac{A'_B+1}{2}$$\nThus the matrix multiplication between binary weights ($W_B\in$ {-1, 1}) and binary activations ($A_B\in$ {0, 1}) can be converted to operations between $W_B\in$ {-1, 1} and $A'_B\in$ {-1, 1} as:\n$$W_B^T A_B=W_B^T \frac{A'_B+1}{2}=\frac{1}{2}\big(W_B^T A'_B+\Sigma W_B\big), \qquad W_B^T A'_B = 2\,popcnt(xnor(W_B,A'_B))-n,$$\nwhere $n$ is the vector length. Here $\Sigma W_B$ sums the values in $W_B$; both $\Sigma W_B$ and $n$ can be pre-computed and stored as a bias. Therefore, implementing BiT incurs no additional complexity.
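As a quick numerical check of this identity (a toy sketch; the vector length $n=64$ and NumPy stand in for an actual bit-packed kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                # assumed toy vector length
W_b = rng.choice([-1, 1], size=n)     # binary weights in {-1, 1}
A_b = rng.choice([0, 1], size=n)      # binary activations in {0, 1}
A_p = 2 * A_b - 1                     # remapped activations in {-1, 1}

# popcount of the xnor: number of positions where the two signs agree
popcnt = int(np.sum(W_b == A_p))

# standard XNOR-popcount identity for {-1, 1} vectors
assert W_b @ A_p == 2 * popcnt - n

# hence the {0, 1}-activation product is the {-1, 1} kernel result
# plus a pre-computable bias (the sum of the binary weights)
assert 2 * (W_b @ A_b) == (2 * popcnt - n) + W_b.sum()
```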
\n\n**Q4**: The training cost of multi-step distillation.\n\n**A4**: For the multi-step distillation, we use the same number of training epochs as BiBERT in each step, i.e., 50 for CoLA, 20 for MRPC, STS-B and RTE, 10 for SST-2 and QNLI, 5 for MNLI and QQP, as mentioned in Sec. A.6.\n\nMulti-step distillation does incur more training time. However, we have to stress that the better performance of multi-step distillation does not come from more training. We compared two-step distillation (BiT) with using 2$\times$ training epochs for single-step distillation (BiT$\ddagger$). It turns out that simply doubling the training time in single-step distillation yields no accuracy improvement, suggesting that the original recipe is already sufficient for training the 1-bit model to fully converge. In contrast, multi-step distillation further improves the accuracy by closing the gap between student and teacher models.\n\nSince BiT$\ddagger$ with single-step distillation already exceeded the state-of-the-art by 7.8%, the multi-step distillation can also be regarded as a plus for scenarios with greater training time tolerance but higher requirements on inference accuracy.", " Thank you very much for the detailed comments and the precise summarization of our work.\n\n**Q1**: It would be interesting to see whether a -1, 0, 1 quantization rather than the proposed -1, 1 would yield significant improvements at a moderate cost. Has this been attempted?\n\n**A1**: Yes, we did an experiment on ternary quantization; the result is as follows:\n\n| Method | MNLI$_{-m/mm}$ | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | Avg.|\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | \n| Ternary-W-Ternary-A | 80.9/81.3 | 86.6 | 87.6 | 89.9 | 29.1 | 81.2 | 76.0 | 59.6 | **73.9** |\n| W1A1 (BiT $\ddagger$ )| 77.1/77.5 | 82.9 | 85.7 | 87.7 | 25.1 | 71.1 | 79.7 | 58.8 | **71.0** |\n\nCompared to the binary model distilled from the full-precision BERT model, the ternary model with the same settings achieves 2.9% higher accuracy on the GLUE dataset. \n\nFor the cost of ternary models, to our knowledge, there are two methods to implement ternary computation: (1) treating the '0' value as sparsity, which has the potential of further saving memory, but requires a certain level of sparsity and a specially designed kernel/implementation to realize the actual memory savings; (2) packing the ternary values in 2-bit memory. The latter method is easier to implement, but in that case the computational complexity of the ternary kernel is about 4 times that of binary kernels [ref1]. Thus, there is an accuracy-computation trade-off between binary and ternary models.\n\n[ref1] Fast matrix multiplication for binary and ternary CNNs on ARM CPU (ICPR 2022)\n\n**Q2**: How does quantization affect the quality of vision transformers?\n\n**A2**: Vision transformer (ViT) quantization could be a very good future direction to explore, which we have not studied yet. One difference between ViT and NLP transformers is that the feature map size of ViT is usually larger. This may pose new challenges in quantizing the ViT. \nIn the literature, a DeiT-T [ref2] (a common ViT) with weights and activations quantized to 8-bit incurs a 0.6% accuracy drop compared to the full-precision DeiT-T [ref3]; quantizing DeiT-T to ~3-bit leads to a 3.2% drop [ref4]; and quantizing the weights to ternary and activations to 8-bit leads to a 5.6% accuracy drop on DeiT-T [ref5].\nThus, we believe binarizing the vision transformer is a challenging task and may require combining the techniques found in this paper with additional adjustments customized for ViTs.\n\n[ref2] Training data-efficient image transformers & distillation through attention (ICML 2021)\n\n[ref3] FQ-ViT: Post-Training Quantization for Fully Quantized Vision Transformer (IJCAI 2022)\n\n[ref4] Q-ViT: Fully Differentiable Quantization for Vision Transformer (Arxiv)\n\n[ref5] TerViT: An Efficient Ternary Vision Transformer (Arxiv)", " This paper addresses the task of quantizing transformers to very low precision (1/2 bit(s)). Quantizing neural networks can be essential for model size reduction (especially for mobile devices), but also for the speed of evaluation on hardware without floating point accelerators. In addition, binary activations/computations open new possibilities for high-performing, low-power, special purpose hardware for evaluating neural networks with drastically improved parallelism and power consumption.\n\nWhile these considerations and motivations are not new, the paper gives a concise summary of and references to prior efforts, especially in the context of convolutional networks.\n\nHere this work addresses a set of cumulative, technical improvements for successfully quantizing transformer networks to very low precision, including normalization, separate quantization for non-negative activations and exploring distillation paths over medium precision (e.g. 8 bits) quantization. Using a combination of such methods, this paper manages to reduce the gap in GLUE score to the full-precision model by ~3X percentage points. Also the paper gives a thorough ablation analysis of the effect of the employed methods. Originality: Moderate. The motivation for extreme quantization of neural networks is not new and has been proposed many times over the past few years. These include model size reduction and the development of new special purpose hardware that could reduce the power consumption and latency of inference of large deep learning models. In fact the feasibility of extreme quantization has been pioneered for convolutional networks with high success, but with transformers becoming the main workhorse for most deep learning applications, it has become an important question how to transfer the above results over to them.\n\nWhile there has been an initial set of work on binarizing BERT and other transformer-based models, those efforts typically resulted in a huge drop of language-modeling quality. 
\n\nThis work does not propose one single solution to those issues, but a collection of relatively common-sense solutions, the combination of which has a large effect on the quality of the resulting low-precision model. These ideas include:\n- Normalizing activations before binarization to ensure that low-precision activations are maximally informative.\n- Improved gradient clipping for training.\n- Specialized quantization for non-negative and general activations.\n- Attention quantization\n- Intermediate-layer distillation (employed in prior work)\n- Multi-step distillation path over medium-precision models.\n\nQuality: High. The paper has one very clear goal and a very well-defined experimental setup, with thorough experiments and ablation analysis that seem consistent. The work presents a clear analysis of the importance of all the employed methods. While most of the ideas/methods are not very novel in isolation, this work gives a clear, well-tested recipe for the extreme quantization of transformers, the quality of which is significantly beyond whatever was reported earlier. While the paper omits speculating about the potential gains by special-purpose hardware, this is not a real issue, as the implications are relatively clear and these results give a clear motivation to further work in that direction.\n\nClarity: High. The paper is well motivated. References to prior work are quite extensive, although references to very new, concurrent works (that have not been peer-reviewed yet) might be missing, but this does not affect the final conclusion. The paper presents a lot of experimental evidence in concise tables that back up the intuition behind the decisions and highlight the significance and motivation for all of the decisions made for this work.\n\nSignificance: High. While the employed methods are relatively well known, the fact that transformers can be quantized to 1-bit precision is extremely important, and this work gives a clear, well-documented measurement point and clear, well-tested recipes. This is a valuable baseline for future work, and it also increases the motivation for building special-purpose hardware for extremely low-precision neural networks. - It would be interesting to see whether a -1, 0, 1 quantization rather than the proposed -1, 1 would yield significant improvements at a moderate cost. Has this been attempted?\n- How does quantization affect the quality of vision transformers? - While every improvement has some potential to increase the risk of powerful technologies, it is unclear/unlikely to me that this technology would have any adverse effects beyond the generic issues of making high-impact technologies easier/cheaper to deploy by bad actors and that approximate inference might become less reliable than high-quality models. These are general concerns regarding any performance improvement and performance trade-off. In this sense, I don't think that this approach has any non-generic risks that should have been considered in particular by the authors.", " This paper proposes several changes to the design of the binary BERT model. The main proposal consists of three parts: 1) use different quantization schemes based on the output distribution of a layer (e.g., activations are quantized to 0 and 1 for multi-head self attention layers); 2) adopt an elastic binarization function which allows re-scaling and shifting; 3) use a multi-step distillation scheme jointly with a multi-step quantization schedule. 
The resulting BiT model demonstrates strong performance on downstream tasks in the GLUE and SQuAD datasets. Strengths: This paper proposes several simple yet effective changes to improve the quality of the binarized BERT model significantly. The proposed techniques are sound. The writing of the paper is clear and easy to follow. The authors also provide enough empirical evaluation of the proposed BiT models.\n\nWeaknesses: My biggest concern with this paper is that the authors do not clearly discuss the relationship to other recent works. In the absence of a clear discussion of the relationship to other related work, I am not convinced that the contributions in this work are new and original.\n\nThis paper does cite a large number of relevant works. However, the relationship between the proposals in this work and related work remains unclear. First, the proposal to use different quantization schemes depending on the output distribution of the layers is quite similar to Bi-Attention proposed in BiBERT [1]. Bi-Attention suggests using a bool function (Equation 11 in the BiBERT paper) rather than a sign function to binarize the output of a multi-head self-attention layer. Second, the elastic binarization function looks akin to the activation function proposed in Reactnet [2]. Reactnet and some follow-up work have already shown that introducing additional scaling and shifting capabilities in the activation function can help improve the quality of the binarized CNN. The authors do not point out the similarity with Reactnet when discussing the elastic binarization function, which could be misleading.\n\nIn addition, I think the paper also needs to discuss the new hardware required for the BiT model, which is different from traditional binary neural networks that require only simple XNOR gates to implement the computational engine/processing element. The computational engine for BiT models needs to handle computation between {-1, 1} and {0, 1} values. I suggest that the authors should at least compare the required hardware with traditional binary and ternary neural networks. Otherwise, it would be difficult to justify not using a ternary neural network.\n\n[1] BiBERT: Accurate Fully Binarized BERT, ICLR'22\n[2] Reactnet: Towards precise binary neural network with generalized activation functions, ECCV'20 I would like to hear a response on the relationship to related work and also an analysis of the hardware complexity/cost of the BiT model. In addition, the authors could clarify the training cost of the proposed multi-step distillation scheme. N/A", " The authors propose novel binarisation techniques for transformer models based on knowledge distillation. The goal is to achieve competitive performance for an efficient student transformer model. The main contributions are: i) a binarisation technique that improves knowledge distillation in transformer models, ii) a multi-step technique to distil binarised models to improve intermediate performance. The study shows that the student model achieves competitive performance compared to a standard BERT on the GLUE benchmark and question answering. Strengths\n\n\n- Clear description of background knowledge and related work needed to understand the proposed approach. 
\n\n- Clear description of the proposed approach.\n\n- Best practices for model replicability.\n\n- The authors perform a comprehensive comparison with related work on the GLUE benchmark.\n\n- The findings show that an efficient transformer model has competitive performance compared to previous SOTA.\n\n\nWeaknesses\n\n\n- It is not clear how the parameter initialization and selection of hyper-parameters could affect the model performance. Questions to the Authors\n\n\nPlease address the following questions during the rebuttal:\n\n\n- Could you elaborate on the hyper-parameter selection for the teacher-student setup?\n\n- A possible extra contribution is to perform multiple random runs and report variance. However, how expensive could this exercise become? \n\n- Please speculate if using other pre-training objectives in the LM task (e.g. next sentence prediction) would change any findings or results? The authors have addressed the limitations of the proposed approach." ]
[ -1, -1, -1, -1, -1, -1, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "XxOB92px-n", "fmT20z7_wUl", "nips_2022_wYgRIJ-oK6M", "ZB2GSywGDFN", "xNeokIJRg8O", "QnFRmiWhdz9", "nips_2022_wYgRIJ-oK6M", "nips_2022_wYgRIJ-oK6M", "nips_2022_wYgRIJ-oK6M" ]
nips_2022_1C36tFZn7sR
Learning Chaotic Dynamics in Dissipative Systems
Chaotic systems are notoriously challenging to predict because of their sensitivity to perturbations and errors due to time stepping. Despite this unpredictable behavior, for many dissipative systems the statistics of the long term trajectories are governed by an invariant measure supported on a set, known as the global attractor; for many problems this set is finite dimensional, even if the state space is infinite dimensional. For Markovian systems, the statistical properties of long-term trajectories are uniquely determined by the solution operator that maps the evolution of the system over arbitrary positive time increments. In this work, we propose a machine learning framework to learn the underlying solution operator for dissipative chaotic systems, showing that the resulting learned operator accurately captures short-time trajectories and long-time statistical behavior. Using this framework, we are able to predict various statistics of the invariant measure for the turbulent Kolmogorov Flow dynamics with Reynolds numbers up to $5000$.
Accept
This paper proposes a neural network-based approach to estimate the Markov operator of dissipative chaotic systems. It introduces a novel combination of Sobolev and dissipativity losses. While the reviewers had initial concerns about clarity, assumptions and application conditions, and the choice of learning the Markov operator versus modelling continuous dynamics, the author-reviewer discussion addressed most concerns, and all reviewers agree this work exceeds the bar for publication. I would encourage the authors to take into consideration the remaining concerns from the reviewers, and to incorporate key conclusions of the discussions and the limitations of the work in their final version.
train
[ "AdlRoTm8K9", "QZ8ayFQmA5", "JjhgkMSyuXZ", "SyqxVADB2BM", "4HLdAwuoTbb", "U1ZCj5yqUcO", "i7Al0vvN2xU", "LS6w473Eo0v", "l1LTOEv0wf", "kJGAOd3AdzN", "QhriKhkTBpj", "KsH2jVcBq1m", "nxkhXiCnVag", "a1Bgv70IakJ", "wXJKJIZnS8sF", "jUPWRjV3sOR", "gmdLSnUEqSB", "SUPNpaxV3Zs", "Q1-_MnTeh6_", "FWQNWdTEl-" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. Your feedback helped us make the paper stronger!\n\nThe authors-reviewers discussion period may have ended, but allow us to post a brief response regarding the slope. \n\n7. If we understand correctly, the slope of spectrum is $k^{-5/3}$ in the inverse cascade range ($k_a << k << k_f$), and $k^{-3}$ in the direct-cascade range $(k>>k_f)$, which is exactly written in your reference (page 431, equation 12 & 13 https://courses.physics.ucsd.edu/2019/Winter/physics116_216/annurev-fluid-120710-101240.pdf). In our setting, we use 128x128 grid in the model, so we only compute the wavenumber k up to 100. It is possible $k_f > 100$, so the $k^{-3}$ range does not show up. We will certainly add the spectrum of the ground truth DNS simulation in the paper.\n\n8. We initialize the system from a Gaussian random field that has low initial energy. The energy is injected from the force term over time, so we can observe energy at the small scales is increasing and converging to a stable state (turbulence builds up).", " 7. I appreciate the authors' motivation to address this problem from a principled approach. My point was that for the first two systems, the instability problem does not exist with simpler RC-like methods can get stable solutions. Hence, they are not suitable test cases. For the third system, the authors' analysis seems fine. I'm a bit confused about the slope of the spectrum: it should be k^-3 not k^(-5/3). The slope of -5/3 is for 3D turbulence when there is a direct cascade. There's something not right here. \n\n8. In the new figure that the authors added it seems that the energy at the smallest scales is increasing with time. This is a stationary problem, if the model is really stable the spectrum should remain the same (at least the structure), but a significant curl-up can be seen in the figure. \n\nI still think that this is an interesting contribution. Obviously, this is a challenging problem, but the approach is rigorous and in my opinion, in the correct direction as well. For that, I would increase my score and recommend that this paper gets accepted. \n\nPlease fix the slope of the spectrum --- something is definitely wrong there ( take a look at this: https://courses.physics.ucsd.edu/2019/Winter/physics116_216/annurev-fluid-120710-101240.pdf) and I recommend that the authors plot the true DNS spectrum on top of the predicted spectrum in Fig 6 (or whatever is going to be the figure number for the Re=5K figure) showing the curl up. I also recommend removing the Re=400 figure from the main paper and replacing it with the Re=5K results. There is no value in looking at such a simple system and talk about instability. Even if the Re=5K results look relatively poor, it is a truly challenging case and a physics audience -- the kind that understands turbulence--would appreciate it. \n\n", " We really appreciate your support and encouragement! Your feedback helped us improve the paper. We agree with the reviewer that the major assumption is dissipativity. Such assumption does not hold for energy-conservative hamiltonian systems such as the solar systems. In this case, it will be wrong to impose a dissipativity constraint. Besides, in the multiple-attractors scenarios, it can be non-trivial to obtain a representative dataset for each of the sub-attractor, which requires gathering the train data actively or using importance sampling techniques.", " ... 
I'd still be interested to hear the authors' opinion on the caveats I mentioned in my last reply!", " The dissipativity aspect is indeed novel as far as I can see. What I meant is that limit behavior, invariant statistics and high-dimensional problems have been studied before in the RNN literature (some of the Vlachas papers, for instance, or https://arxiv.org/abs/2207.02542).\n\nI also got the point about the “sub-attractors”. The issue I have here is that practically, for an empirically observed complex physical or biological system, one usually wouldn’t know in advance whether it is multistable or not. If it is, imposing a dissipativity constraint would be the wrong thing to do, unless one is confident it’s restricted to a particular basin E.\nAlso, the dissipativity constraint induces a strong bias in the vector field farther away from the attractor and potentially close to the border of E for a multiple attractor system. This is another non-trivial issue I think, it may destroy certain topological properties of the system.\n\nNevertheless I feel this is a worthwhile direction to consider, and either way a technically superb contribution. I also appreciate the authors widened their discussion of related work.\nI therefore decided to raise my score further.", " Thanks for your feedback and your support. We are very glad to see you find our contribution valuable. The notion of dissipativity that we use includes problems, such as the Chaffee-Infante problem and the Brusselator, which possess multiple $\\omega$-limit sets for individual points. However, because we also invoke **ergodicity**, such systems are excluded from our current study. We agree that the study of non-ergodic dissipative systems, and their basins of attraction, is an important direction for future study; we have added this outlook in Appendix C.\n\nThe **multiple attractors** setting is also a very interesting direction. We would like to take the chance to clarify that our dissipativity formulation is applicable to multiple attractors. The definition of dissipativity we use in our paper defines the existence of an absorbing set $E$ for the system that consists of an open ball of radius $\\sqrt{\\alpha / \\beta + \\varepsilon}$, for any $\\varepsilon > 0$ (see equation 2 in the paper).\n\nWhereas there can be multiple attracting subsets $E$ that may compete and provide differing dynamics within $E$, these systems are still dissipative with respect to $E$. In this paper, we seek to learn/enforce dynamics that remain globally dissipative with respect to $E$. As long as there is sufficient training data on the areas of interest within $E$ (e.g., multiple subsets of $E$ that form \"sub-attractors\"), MNO can be trained on these dynamics. Specifically, we will collect a dataset on each of the attractors, so the model can learn the dynamics on each of them.\n\nAs for the RNNs for long trajectories in high-dimensional systems, we have cited this literature (~lines 58-59) and are grateful for the pointers. Nonetheless, as we clearly state (~lines 59-61) we believe our paper is the first paper to study the systematic imposition of dissipativity (according to our widely used definition) within machine learning.", " We really appreciate your positive feedback. These comments encourage us and help us to improve our paper. 
We agree that there may be a trade-off between the number of parameters required to represent the solution operator and the time over which the solution operator is represented; quantifying this trade-off, and including the continuous-time limit and the vector field driving it, will be of value and almost certainly needs to be addressed on a case-by-case basis. However, for PDEs, where the operator learning problem is for an unbounded operator, our numerical results strongly suggest the value of learning the solution operator map at a positive time, and not its infinitesimal generator.\n\nLearning continuous dynamics for complex PDE systems is not only costly, but also limited for two reasons: (1) the time-derivative (infinitesimal generator) is an unbounded operator, which is more difficult to learn; (2) the continuous dynamics only apply to finite-dimensional ODE systems. For PDE systems, the integrator (such as the forward-time centered-space method, etc.) needs to incorporate the spatial derivatives, which are usually unavailable in reduced-order models. Works such as SINDy have to discretize the space using POD/PCA, which limits their performance on complex systems with high-dimensional attractors such as the Navier-Stokes equation.\n\nAs a comparison, the foundational work [Discovering governing equations from data by sparse identification of nonlinear dynamical systems](https://www.pnas.org/doi/10.1073/pnas.1517384113) considered a laminar cylinder flow with Re=100. It simplifies the complicated system into only 3 modes: two PCA modes plus the shift mode. The work is the first to generate attractors [Fig 2] similar to ours [Figure 5.ab], but this simplified model cannot simulate the full-field velocity and spectrum as we did [Figures 6, 9, 10, 11].\n\nReviewer NdmN points out the works by the Brunton group or Edward Ott's group to argue that \"The authors are not the first to put the focus on invariant statistics.\" Specifically, of the three papers below learning continuous dynamics, none of them consider complex chaotic PDE systems such as the Navier-Stokes equation (with Reynolds number up to 5000, like we do):\n1. [Discovering Governing Equations from Partial Measurements with Deep Delay Autoencoders](https://arxiv.org/pdf/2201.05136.pdf): This paper combines deep learning to uncover effective coordinates with the sparse identification of nonlinear dynamics (SINDy) for interpretable modeling. They focus on ODE systems such as **Lorenz**.\n2. [Data-driven discovery of coordinates and governing equations](https://arxiv.org/pdf/1904.02107.pdf): Similarly, the paper combines the strengths of deep neural networks for flexible representation and sparse identification of nonlinear dynamics (SINDy) for parsimonious models. They focus on simpler systems such as **Lorenz**, **Reaction-diffusion**, and **Nonlinear pendulum**.\n3. [SINDy-BVP: Sparse Identification of Nonlinear Dynamics for Boundary Value Problems](https://arxiv.org/pdf/2005.10756.pdf): This work mainly considers boundary value problems (BVPs) such as **Sturm-Liouville**, **Poisson equation** and **Euler-Bernoulli Beam theory**.\n\nSimilarly, few works on reservoir computing or Koopman operators can handle such non-linear systems. If you have any specific model in mind that can run on the turbulent Navier-Stokes equation or an equivalent chaotic system, we would love to compare with it.", " I thank the authors for their extensive revisions. 
\nI wouldn’t agree with some of the points in the reply though: For instance, except for some strong simplifications of biological systems (e.g. Lotka-Volterra, SIR), most molecular, ecological or neuronal systems exhibit multiple attractors to my knowledge, and the same is true for many relevant complex physical systems (climate). Also, some of the lit. I collected in my initial review *does* study RNNs for long trajectories on high-dimensional systems. Thus, although the discussion of related lit. has been extended, I'm not sure the authors always appropriately acknowledge or do justice to this previous lit.\n\nThese are not major points, I find the authors’ contribution valuable regardless, but something that the authors may want to acknowledge and discuss, perhaps with an outlook on how to deal with systems with multiple attractors.\n\nI'm still in favor of this work and certainly agree it’s very relevant for Neurips. But I’m not sure yet the revisions are fundamental enough to increase my rating. I feel some of the authors’ responses (1-3) slightly tend to navigate around my points rather than fully addressing them.", " Dear reviewers,\n\nThis is a gentle reminder that the discussion period is going to end soon. Please feel free to let us know if our responses help to clarify your questions and concerns, so we can make further explanations and add new experiments if needed.\n\nWe really appreciate the reviewers' actionable feedback, and we have made great efforts to address it with additional numerical studies on hard-constraint dissipativity, the ablation study for the regularization loss, and the turbulent Re=5000 system. We hope that the reviewers will take these new results into account and raise our score if we have adequately addressed the main concerns.\n\nThank you,\nAuthors", " I have read the rebuttal and I would like to thank the authors for their response. 
\n\n In this paper we work within a mathematical setting that incorporates a variety of problems, including our three examples, and demonstrate two principled methods to enforce dissipativity: (1) by augmenting the training procedure; (2) by post-processing standard methodologies. Furthermore the method in (2) leads to provably dissipative (in our precisely defined sense) models, something that has not been achieved in the literature before.\n6. Thanks for the references on varying the choice of $h$; we have cited these in our paper.\n7. Thanks for your detailed comments. Our goal in this work is not to address large complex models directly, but rather to introduce---on a few test problems---new methodologies to ensure dissipativity which can then be deployed in more complex models. \n\n To address your concerns, we have plotted the vorticity spectrum over time (also in a higher Reynolds number) in Figure 6. The model captures the energy spectrum that converges to the Kolmogorov energy cascade rate of $k^{-5/3}$.\n \n As the reviewer pointed out, none of the models capture the auto-correlation of the PCA modes, probably because the PCA modes does not really have any physical meaning in this case. We will make sure to plot the probablity density functions in the semilog scale in the camera ready version. \n\n \n8. Thank you for the references. Kochkov et. al. study DNS at $Re=1000$ and $4000$, and LES at $Re=10^5$; Guan et. al. study LES; Maulik et. al. study the local closure at $Re=10^5$. To match their setting, we add a DNS at $Re=5000$ to show our model's performance under more complex conditions, and we have plotted the vorticity spectrum over time. Since we are learning the solution operator, it makes it easier to scale to more complicated systems with higher Reynolds numbers. The main constraint is to generate the datasets with DNS.\n \n ![](https://i.imgur.com/DTutGuU.jpg)\n \n We want to emphasize our goal in this work is not to show that we can simulate turbulence, but rather to introduce a new framework to systematically learn the attractor of chaotic systems and ensure dissipative learned dynamics. We test our methods on test problems (e.g., Lorenz-63 being a simple baseline to visualize the results).\n \n9. We have added the evolution of the energy spectrum at Figure 6.\n10. As the reviewer pointed out, none of the models capture the distribution of TKE exactly, but MNO is closer. The ground truth kinetic energy has two peaks. Both the MNO and UNet models predict one peak. Still the MNO model is more accurate compared to UNet. Similar to the KS equaiton, the PCA modes of NS are hard to capture too. It's probably because PCA modes do not really have any physical meaning neither. We will make sure to plot the probablity density functions in the semilog scale in the camera ready version. \n11. Thanks for the suggestion. We are happy to try another turbulent system that has a dominant mode and coherent structures. Suggestions are welcome. Meanwhile we would like to re-emphasize that our goal in this work is to introduce new methodologies for learning chaotic systems (enforcing dissipativity, training with Sobolev norms, etc.), which can later be applied to more complex systems. However, we hope you find interesting our MNO results on Kolmogorov Flow with $Re=5000$, as this is a more complex flow than for $Re=500$.\n\n", " \nThanks for your comments. 
Due to the recent advances in deep learning, the problem of learning chaotic dynamics from data (e.g., modeling turbulent flows for weather applications or for control) has become of major importance in machine learning and in applied science in general. \n\nIn our work, we propose a systematic framework for learning chaotic dynamics in realistic dissipative systems. We believe that our work sits at the intersection of research in dynamical systems and the modern advances of deep learning, and we advocate for continued communication between the fields. Reviewers 3spS, NdmN, and Hc1s all show interest in this work, which suggests Neurips can be a proper venue.\n\n1. We appreciate the suggestions. We have defined our Sobolev norm step-wise loss term (eqs. 5-6) and the overall loss function used to train our MNO (eq. 5). We have also renamed the \"dissipativity loss\" to \"dissipativity regularization.\" See Figure 2.\n\n2. Learning continuous dynamics is certainly an interesting avenue of study, with much prior work focused in that direction (e.g., SINDy, Neural ODE, OnsagerNet, etc.). This amounts to learning $F$ in eq. 1 of our paper.\n \n However, there are two reasons why we wish to instead learn the solution operator for the system: (1) If we learn $F$, we still need an integrator to compute predictions over long time intervals, which is computationally expensive, (2) In infinite dimensions, $F$ is unbounded, and learning unbounded operators is a considerably more challenging problem. [de Hoop et al., 2021] show, in the context of linear operators, that learning is easier (in terms of sample complexity) for compact and bounded operators than for unbounded operators.\n \n To clear up confusion, we have added this point more explicitly in our paper. See Section 2, paragraph 2.", " #### Minor points:\n1. RNN: Yes, the setting of our work is similar to recurrent neural networks. However, prior works either fit the RNN on extremely short trajectories or simple, lower-dimensional systems such as the Lorenz-63 and KS equations. For Markovian chaotic systems, we show memory is not necessary to learn the solution operator.\n2. Figure 3: We use Figure 3 to demonstrate the dissipativity of Lorenz-63 system. The statistical evaluation can be found in Figure 7 in the appendix. In the updated version, we zoom in the attractor and add a new sub-plot for the hard constraint case.\n3. Figure 4: By residual, we mean learning the time-derivatives. In the standard setting, we directly model the solution operator (flow map) $\\phi:u(t) \\mapsto u(t+1)$. The alternative setting is to learn $\\frac{du}{dt} \\approx (u(t+1) - u(t))*dt$. The two settings are equivalent. But when $dt$ is small, $u(t)$ and $u(t+1)$ are close, so the residual $u(t+1) - u(t)$ provides a stronger signal for learning.", " \n1. Under our dissipativity assumptions (which many real-world systems satisfy, including most biological systems) in eq. 2, there exists a unique global attractor. We note that a global attractor *can contain* multiple attracting regions. The existence of a global attractor implies that the global attracting region is connected.\n \n For systems with separate and distinct \"sub-attractors\" (e.g., systems with bifurcations) within the global attracting set, data can be collected separately across the entire global attractor so that the training set includes trajectories in the regions of interest.\n \n Note that, although the global attractor is unique under the dissipativity assumption in our paper (eq. 
2), the $\\omega$-limit set (set of accumulation points of trajectories) of a single initial condition may differ across initial conditions [Stuart and Humphries, 1998; Section 2.8]. In this paper we concentrate on the ergodic setting where the same $\\omega$-limit set is seen for almost all initial conditions with respect to the invariant measure. In future work it will be of interest to study non-ergodic problems in which different initial conditions lead to different $\\omega$-limit sets; in particular it will be of interest to learn about the unstable manifolds which partition the state space into regions with differing $\\omega$-limit sets. In future work it will also be of interest to study Hamiltonian, energy-conserving problems [Stuart and Humphries, 1998; Section 2.9].\n \n \n2. Thank you for your time in collecting the relevant literature. We have added citations and highlighted key differences in our paper. Overall, we aim to systemtically study chaotic systems and enforce dissipaticity, which has not been studied in the previous works. We classify those works into four groups: (1) those learning continuous dynamics (e.g., SINDy), (2) techniques using reservoir computing, (3) technqiues using RNNs, and (4) methods for learning Koopman operators.\n\n (1) **Learning continuous dynamics:** We purposefully avoid learning continuous dynamics (the time derivatives) for two reasons:\n * To produce long-time predictions an integrator must still be used, which is computationally expensive.\n * Learning continuous dynamics for PDEs is undesirable because it is an unbounded operator, which is more difficult to learn. [de Hoop et al., 2021] show, in the context of linear operators, that learning is easier (in terms of sample complexity) for compact and bounded operators than for unbounded operators. In contrast, our methods scale to infinite-dimensional PDE cases because we are learning the solution operator.\n\n (2) **Reservoir computing:** Reservoir computing is similar to learning continuous dynamics. Usually the approximation power is limited: it requires a much smaller time-step (hence effectively learning continuous dynamics) and limits to a simpler model. To our knowledge, so far no reservoir computing model can predict the full velocity/vorticity field of a turbulent NS problem.\n \n (3) **Recurrent neural networks:** The setting of our work is similar to the recurrent neural networks. However, prior works either fit the RNN on extremely short trajectories or simple, lower-dimensional system such as the Lorenz and KS equations. For Markovian chaotic systems, we show memory is not necessary to learn the solution operator.\n \n (4) **Koopman operators:** In this line of work, the authors encode the dynamics into a latent space, and learn a linear operator on the latent space. However, even when the state space is finite dimensional, the Koopman approaches require approximation of a linear operator on a function space; when the state space is an infinite-dimensional approximation of this operator is particularly challenging.\n\n3. Thanks for pointing out the reference! For chaotic systems, the Lyapunov exponents are greater than one which causes the system to diverge over a period. This is also the main conclusion of this paper: if the dynamics are chaotic, gradients of RNN will always explode. This is the main reason we do not use RNN/memory in our formulation.\n4. Thanks for your comments. We have added a more precise description of our loss function (see eq. 5). 
We have also added ablation experiments for the hyperparameters of the dissipativity term in Appendix A.1, where we empirically show that the hyperparameters can be varied significantly without producing a large change in the performance of the model in terms of step-wise or dissipative error.\n", " \n1. Thank you for your questions. To clarify, we have empirically found that our trained models appear robust to variations in the shell radius of the enforced dissipativity. We have added ablation experiments in Appendix A.1 to corroborate our claims. In general, we believe it is ideal to select an inner radius that is solely dependent on the training set and on $\\lambda$. We choose our inner radius to be $r_i = \\sup \\frac{\\|x\\|}{\\lambda}$ over all $x$ in the training set. Note that if $\\|x\\| < r_i$, then $\\lambda x$ would lie inside the outer-most point of the attractor (assuming the training set adequately captures the attractor). We have found that the outer radius $r_o$ of the shell typically does not matter significantly because the models tend to learn dissipative dynamics outside of $r_o$ as well.\n2. We appreciate your questions on balancing the Sobolev loss and dissipativity regularization. To clarify this point, we have clearly defined our Sobolev norm step-wise loss term (eqs. 13-14) and the overall loss function used to train our MNO (eq. 5).\n3. We have added further ablation experiments in Appendix A.1 to address this point. We have found that the step-wise error and dissipativity error of our trained models is not highly sensitive to the choice of $\\lambda$ and $\\alpha$ (the weight between the Sobolev norm and dissipativity loss terms). As for the choice of $\\lambda$, this may be application-dependent. For certain systems where returning to the attractor quickly is important, larger values of $\\lambda$ may be used. Our results suggest that reasonable values of $\\lambda$ should not significantly increase the error of the trained model.\n4. We acknowledge that we are only focusing on learning Markovian systems in this work. However, there are many systems of practical scientific and engineering value (e.g., Navier-Stokes as presented in our paper) for which a priori we know there exists a Markovian solution operator, on which our techniques could be applied.", " We thank the reviewers for their helpful comments and suggestions. We have updated the paper and used color code to show the changes. For better presentation, please open the link to view the full response with figures: https://hackmd.io/@anonymous-author/B1NR1ePaq\n\nOverall, we make three major updates:\n### Hard-constraint: Enforcing dissipativity via post-processing:\n\n![](https://i.imgur.com/jbiCVme.jpg)\n\nWhile the previous dissipativity regularization encourages a stable map in practice, it does not guarantee that dissipativity is enforced. For an additional safeguard against model instability, we enforce a hard *dissipativity constraint* far from the attractor and from the shell where $\\nu$ is supported. This allows for provably dissipative dynamics on a ball sufficiently far from the attractor. \n\nSpecifically, we post-process the model: whenever the dynamic moves out of an a priori defined stable region, we switch to the second model $\\Psi$ that pushes the dynamic back. 
The new model combines the learned model $\\hat S_h$ and the safety model $\\Psi$, via a threshold function $\\rho$:\n\\begin{equation}\n \\hat S_h'(u) = \\rho(\\|u\\|) \\hat S_h(u) + (1 - \\rho(\\|u\\|)) \\Psi(u),\n\\end{equation}\nwhere $\\Psi$ is some dissipative map and $\\rho$ is a partition of unity. For simplicity we define\n\\begin{equation}\n \\Psi(u) = \\lambda u \\hspace{4em} \\rho(\\|u\\|) = \\frac{1}{1 + e^{\\beta(\\|u\\| - \\alpha)}},\n\\end{equation}\nwhere $\\alpha$ is the effective transition radius between $\\hat S_h$ and $\\Psi$ and $\\beta$ controls the transition rate. Note that this choice of $\\Psi$ is consistent with the regularization term in the loss (eq. 5).\n\n### Kolmogorov flow with $Re=5000$:\n\n![](https://i.imgur.com/DTutGuU.jpg)\n\nThe proposed dissipative MNO model can learn stable dynamics with a high Reynolds number. As shown in the figure above, the model correctly captures the build-up of the turbulence. The first column corresponds to the initial condition and is sampled from a Gaussian random field. Around $t=10$ to $t=20$, we can see the energy injected from the source term $\\sin{(4y)}$ (see equation 10). Around $t=10$ to $t=20$, the energy dissipates into the lower frequencies, and the spectrum converges to the ground-truth Kolmogorov energy cascade rate of $k^{-5/3}$. The learned model is stable with the dissipative regularization. Even with an $O(10^4)$ perturbation, the dynamics quickly return to the attractor of the system. In contrast, the model without dissipative regularization stays outside the attractor even after a much smaller $O(10)$ perturbation.\n\n### Ablation study on the hyperparameters for the dissipativity regularization:\n\n![](https://i.imgur.com/DnJ9LmK.png)\n\n\nAblation experiments show the effect of varying dissipativity hyperparameters. Error rates are given as relative $L^2$ error. Per-step error, per-second error, and dissipativity error are defined as in Table 1 in the paper. Unaltered hyperparameters are held constant at default values of a radius of $40$, $\\alpha=1$, and $\\lambda = 0.5$.", "This paper proposes a machine learning framework, named Markov neural operator (MNO), to learn the Markov operator of dissipative chaotic systems. Different from conventional neural networks, they use Sobolev norms in operator learning and add dissipativity losses to ensure dissipativity. They experimentally show that the MNO can accurately approximate the global attractor and estimate various statistics of the invariant measure for dissipative chaotic systems. Besides, they provide a theoretical guarantee that this model is rich enough to approximate many chaotic dynamical systems for an arbitrarily long period. This is a novel method to approximate the global attractor and estimate the invariant statistics of dissipative chaotic systems. They pioneered the combination of the Sobolev losses and the dissipativity losses, which improves the performance for preserving the invariant statistics. The dissipativity losses they proposed can enforce the dynamics to stay close to the attractor, which improves the robustness against large perturbations. They also investigate the effect of time steps and different Sobolev losses in operator learning, which makes this work more systematic. However, there are still some confusing details. The choice of the hyperparameter lambda and the inner and outer radii of the shell in the dissipativity losses remains to be discussed. 
There is no description of the proportion of the Sobolev and the dissipativity losses in the loss function. In addition, sometimes we do not know whether the target system is Markovian in a model-free task. As a result, this paper lacks a method to judge whether the MNO can be directly applied to a certain target system. 1. How does the choice of the inner and outer radii of the shell in the dissipativity losses influence the performance of the MNO? Does the choice significantly affect the MNO’s step-wise error and the estimation of the statistical properties of the attractor? How to choose the radii based on the training data?\n2. How to balance the influences of the Sobolev and the dissipativity losses? Is there a hyperparameter to adjust their proportion in the loss function?\n3. Does the choice of lambda significantly influence the performance of the MNO? How to choose the optimal lambda according to the training data?\n4. Is it possible to provide a method to judge whether the MNO can be directly applied to a certain target system?\n The MNO can be used only when the dynamical system is Markovian, but actually we do not know whether that assumption holds in a model-free task. This paper lacks a method to judge whether the MNO can be directly applied to a certain target system.", " This article discusses robust estimation of chaotic dynamical systems, in particular their invariant statistics, using Markov neural operators (MNOs). In the case of simple ODE systems, feedforward NNs instantiate the MNO, while Fourier operators are used for spatially extended (PDE) systems. The major innovation in the present work is the addition of a ‘dissipativity loss’ that enforces attracting behavior within a shell around the data, and thereby tends to enforce globally attracting properties. Essentially, the dissipativity loss tends to penalize larger flow vectors away from the data (why this works was not fully clear to me, however). This enables learning highly challenging systems like the strongly turbulent Kolmogorov flow. Strengths:\nIn general I find this a sophisticated, concisely written and well presented contribution to the field from which I learned various things. This specific idea of enforcing dissipativity seems novel and useful to me.\n\nWeaknesses:\n... at the same time, for many systems this global form of dissipativity seems to be a quite strong and often too restrictive assumption, see my detailed Q below.\nMuch related lit. is not discussed, some methodological details are missing, for some results only single examples but no stats are presented. 1) If I understood correctly, a crucial premise of this approach is that there is indeed only one globally attracting limit set. This will be violated in most biological systems at least, for instance in molecular biology or neuroscience, as well as in many other complex dynamical systems of current interest, like climate models. For many real-world data from complex systems we would not know this in advance. 
Isn’t this a serious limitation of the approach?\nAlso, how does the vector field far from the attractor actually compare to that of the true system under this approach?\n\n2) I felt that a lot of relevant literature was left aside (all the work by the Brunton group or Edward Ott’s group for instance, https://arxiv.org/abs/2201.05136, https://arxiv.org/abs/1904.02107, https://arxiv.org/abs/1710.07313, https://arxiv.org/abs/2110.07238, https://arxiv.org/abs/1712.09707, https://arxiv.org/abs/1910.03471, https://arxiv.org/abs/2110.05266, https://arxiv.org/abs/2005.10756, https://arxiv.org/abs/1707.01146, https://arxiv.org/abs/2207.02542, or the recent developments surrounding neural ODEs and PDEs, e.g. https://iopscience.iop.org/article/10.1088/1367-2630/abeb90). The authors are not the first to put the focus on invariant statistics; power spectra, Lyapunov spectra, or geometrical statistics have been discussed before in the context of reconstructing dynamical systems. Likewise, although the novelty about Theorem 1 may be its extension to spatio-temporal operators (function spaces), similar ideas, I believe, appeared before in the works of Funahashi and Nakamura or, more recently, by Raginsky and colleagues (https://proceedings.mlr.press/v120/hanson20a.html and references therein). Also, much stronger competitors are likely to be found among these previous methods than the plain LSTMs and GRUs used in Fig. 4 or the models used in Table 1. Many groups (like Brunton or Vlachas) have tested their approaches on similar systems like the Lorenz-63, Kuramoto-Sivashinsky or Navier-Stokes systems. These, in my mind, would have been the more relevant comparisons. This is especially relevant when the claim is made that other ML approaches fail to capture invariant statistics (lines 255-256).\n\n3) If I understood correctly, the specific regularization solution is partly also motivated by the fact that iterating the MNO across many time steps (eq. 3) may easily lead to divergence. But this, it seems to me, is really a standard problem in RNN training, especially if the loss is only based on consecutive time steps. Specific algorithms to deal with this issue had been suggested before in the context of dynamical systems, e.g. https://arxiv.org/abs/2110.07238. It would be nice to know how these approaches differ in performance \\& properties.\n\n4) More methodological and algorithmic details in the main text would be appreciated. What is the loss function, apart from the novel regularization term? What precisely is the setup and parameterization of the forward NNs and the Fourier NOs, and how were hyper-parameters of the architectures determined (especially $\\lambda$ in the dissipativity loss)?\n\nMinor points:\n\n- In my understanding RNNs also learn a Markov operator that is used as in eq. 3; what is the difference?\n\n- Fig. 3 provides just single examples, as, it appears, does Fig. 4. It would be good to see statistically more comprehensive evaluations with error bands on many different training runs. Also, it would be good to see reconstructions of the actual attractors as well (the Lorenz is barely visible in Fig. 3). \n\n- I found it somewhat difficult to follow parts of sect. 4.2 and Fig. 4, simply because in my mind too few details were provided. 
What, for instance, is the ‘residual model’?\n The major limitation I pointed out above was not discussed.\n", "This paper proposes a machine learning framework (MNO) by using neural networks or Fourier neural operators to learn the underlying Markov operator of dissipative chaotic systems. Numerical results show that MNO has lower loss and outperforms the other neural network models, including U-Net, LSTM-CNN, and GRU. This paper proposes to learn the underlying Markov operator of dissipative chaotic systems using deep learning. Several architectures are investigated in detail. The results are interesting; however, I'm not sure whether learning the Markov operator of dissipative chaotic systems is relevant for deep learning. In my view, this paper is more suitable for some dynamical journals, such as SIAM Journal on Applied Dynamical Systems, Chaos, ... . 1. Could you provide the formula of the Sobolev Loss and Dissipativity Loss in Figure 2? I guess that the Dissipativity Loss is given by Equation 5. Then, does the ground truth exactly match Equation 5? If not, does \"lower loss\" make sense? I think it might be more appropriate to treat it as a regularization term.\n\n2. How about using continuous models to learn continuous dynamics? For example, Neural ODE or its structure-preserving extensions [1]. In [1], they also consider learning the Lorenz-63 system. For infinite-dimensional PDE systems, in my view, we can directly replace the neural network in a continuous model by a neural operator to obtain an infinite-dimensional continuous model. \n\n[1] Haijun Yu, Xinyuan Tian, Weinan E, Qianxiao Li, OnsagerNet: Learning stable and interpretable dynamics using a generalized Onsager principle, Physical Review Fluids 6, 114402 (2021)\n Yes, the authors addressed the limitations.", " This is a very interesting paper and addresses an important challenge in terms of long-term stability of high-dimensional turbulent flow. The authors show that adding a dissipative term in the loss and training it with a Sobolev loss would improve the stability of autoregressive data-driven models (here they show that a Markov neural operator is the best skeleton model). 
-- The issue is not just with time-stepping, but with the fact that, because of a positive Lyapunov exponent, the initial perturbation grows exponentially. \n\n2. Line 19-20. There seems to be other work (one at least) that looks at long-term statistics of chaotic systems, e.g. PDFs including the tails that capture extreme events. Even more so, it turns out RCs can do both short- and long-term seamlessly. https://npg.copernicus.org/articles/27/373/2020/. Surely, there could be other papers that do the same. The authors' statement is not accurate here. They are however correct when they say that RCs definitely do not scale for more complex problems. Even so, the authors are still looking at toy systems that RCs excel at (except for the NS case). \n\n3. Line 25-26 What's the rationale behind calling it \"ill-posed\"? Citations to relevant papers are essential. \n\n4. Line 40. Typo. charges->changes\n\n5. For Lorenz 63, simpler methods are long-term stable, even for KS, and for other more complex Lorenz-type systems as shown above. The first two examples are not interesting or relevant when we want to talk about long-term statistics. 2D Kolmogorov is a great example, and there are many other relevant problems that have been studied for long-term stability along those lines, e.g., QG turbulence, other climate models, and real weather data. There is a big community working on this specific problem. The authors need to look at that literature:\n(i) https://arxiv.org/abs/2205.04601\n(ii) https://arxiv.org/abs/2202.11214\n(iii) https://arxiv.org/abs/2202.07575\n(iv) https://dl.acm.org/doi/10.1145/3429309.3429325\n(v) https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2020MS002109\n\n\nIn the purely ML community, there are a few other papers as well (who tried and improved stability but eventually couldn't make it infinitely stable) for Kolmogorov turbulence (more complicated cases than the current paper). The manuscript would benefit from putting the problem in the context of this bigger goal. \n https://arxiv.org/abs/2112.15275\n\n6. The effect of \"h\" is very interesting in the KS problem. There are a few other papers that have found results that might contradict this specific paper, or maybe not if I have misunderstood. But either way, that's good. It generates discussion and highlights the importance of exploring this problem more carefully. However, I feel that the authors should acknowledge those papers as well. While not done in the context of operator modeling, they have been done on more complex, realistic, and practical problems of chaos and turbulence. Here are a few of those papers:\n\n(i) https://royalsocietypublishing.org/doi/10.1098/rsta.2021.0200 (Section 2)\n(ii) https://agupubs.onlinelibrary.wiley.com/doi/epdf/10.1029/2020MS002203 (Figure 2)\n(iii) https://gmd.copernicus.org/articles/15/2221/2022/ (Section 4.3)\n\n7. I appreciate how the authors interpret stability as something beyond just running the model and not getting \"NaN\". Stability means that the flow is both physical and non-drifting and has the right variability. However, I am still not 100% convinced that this model can serve as a real long-term stable model for a real complex flow. Here are a few things the authors can show to remove my doubts:\n --- While the model captures the spectrum quite well, it deviates in the high wavenumbers, an essential reason for instability (Figure 4b). Can the authors show how the spectrum deviates as we move forward in time? 
Basically, take the best model and plot the output's spectrum at various time steps during prediction and compare it with the truth. Furthermore, I don't think that KS is that complicated a system, wherein \"spectral bias\" (the fact that you lose high wavenumbers as you integrate in time) is that big of a problem. However, in the N-S case with high Re, this would become a definite problem especially because you have inverse cascade. Also, for KS, I think Pathak et al., PRL, had long-term statistics, without doing anything? Does this KS have the same dimension or is it higher?\n ---- It's great that the authors look at the PC1 autocorrelation curve. It must be pointed out that the authors' model does not capture it, or am I mistaken? I also understand that it's difficult to capture PC1 autocorrelation in KS because, in reality, I don't think PC1 in this system has any physical meaning (e.g., in a real system like Earth's climate PC1 has connections to ENSO and so on...). Still, the fact that the model doesn't go absolutely haywire is promising. Yet, the authors should clarify this.\n --- I don't think the PDF is captured correctly either. Please plot the PDF in semilog, so that the tails (extreme events) are more prominent; that's what matters. But here, I think even the bulk is wrong in some portions. It's still impressive that you have the right range of values. \n\n8. For the N-S case, I have some major confusion. In the main paper (not the appendix), the spectrum is not plotted. In the appendix, I see some images with Re=40. Re=40 is not nearly turbulent enough. That's hardly interesting in the context of turbulence and its long-term predictability. Even Re=500 is not a realistic set-up. I must say here that I understand the difficulty of scaling up the turbulent flow to high Re; it's both difficult computationally and probably very hard to predict. But based on the context of the problem, if this model really is going to serve practical problems in terms of long-term stability, the 2D N-S case should have Re=10000 or higher. That's the regime in which we have all the complexities of this flow, including when the effect of the inverse cascade would be prominent. Here are a few papers that look at NS cases in realistic regimes. \n (i) https://www.pnas.org/doi/10.1073/pnas.2101784118\n (ii) https://www.sciencedirect.com/science/article/pii/S0021999122001528\n (iii) https://www.cambridge.org/core/journals/journal-of-fluid-mechanics/article/abs/subgrid-modelling-for-twodimensional-turbulence-using-neural-networks/10EDED1AEAA52C35F3E3A3BB6DC218C1\n\n9. For the N-S case, the energy spectrum at different time steps needs to be shown for the best model and the baseline. That would clearly show how well the spectrum is captured. It would also only be interesting for a case where the flow is turbulent (Re>10000).\n\n10. Again, the PDFs should be shown in semilog plots to highlight the tails. Also, what's going on with the PDF of KE for the N-S case? It's not really getting captured, right? In this case, as well, there's no dominant mode of variability, so PC1 does not have any meaning, right? But still, plot the true PC1 autocorrelation plot to show how well/poorly it's captured. \n\n11. The authors might benefit from looking at another system. A system that has a dominant mode and coherent structures while being turbulent enough. I think the discussion around stability would have far more importance there with PC1, autocorr, etc., even if they are not perfectly captured. 
\n\nFinally, despite my long review and many questions, I enjoyed reading the paper. The authors have done a great job in addressing the question of stability \"correctly\" by looking at the right set of metrics. While there are these lingering questions, once properly addressed this would be an important contribution to the ML and physics community. N/A" ]
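To make the post-processing construction from the authors' global response above concrete, the following is a minimal NumPy sketch of the blended map $\\hat S_h'(u) = \\rho(\\|u\\|) \\hat S_h(u) + (1 - \\rho(\\|u\\|)) \\lambda u$. The stand-in learned map and the numerical values of the transition radius and rate are illustrative placeholders, not the authors' released implementation.

```python
import numpy as np

def rho(r, alpha=40.0, beta=1.0):
    # Smooth switch: close to 1 well inside the trusted region (r << alpha),
    # close to 0 far outside it; clipping avoids overflow in exp.
    x = np.clip(beta * (r - alpha), -50.0, 50.0)
    return 1.0 / (1.0 + np.exp(x))

def dissipative_step(u, learned_step, lam=0.5, alpha=40.0, beta=1.0):
    # Blend the learned one-step map with the contraction u -> lam * u, so the
    # composite map is dissipative far from the attractor by construction.
    w = rho(np.linalg.norm(u), alpha, beta)
    return w * learned_step(u) + (1.0 - w) * lam * u

rng = np.random.default_rng(0)
A = 0.2 * rng.standard_normal((8, 8))
learned_step = lambda u: u + A @ np.tanh(u)  # placeholder for a trained MNO
u = 1e4 * rng.standard_normal(8)             # large off-attractor perturbation
for _ in range(100):
    u = dissipative_step(u, learned_step)
print(np.linalg.norm(u))  # trajectory is pulled back into a bounded region
```

Because $\\rho$ vanishes for large $\\|u\\|$, the composite map contracts any state far from the attractor, which is the hard dissipativity guarantee described in the response.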
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 5 ]
[ "QZ8ayFQmA5", "QhriKhkTBpj", "SyqxVADB2BM", "4HLdAwuoTbb", "U1ZCj5yqUcO", "LS6w473Eo0v", "kJGAOd3AdzN", "a1Bgv70IakJ", "nips_2022_1C36tFZn7sR", "KsH2jVcBq1m", "FWQNWdTEl-", "Q1-_MnTeh6_", "a1Bgv70IakJ", "SUPNpaxV3Zs", "gmdLSnUEqSB", "nips_2022_1C36tFZn7sR", "nips_2022_1C36tFZn7sR", "nips_2022_1C36tFZn7sR", "nips_2022_1C36tFZn7sR", "nips_2022_1C36tFZn7sR" ]
nips_2022_9YasTgzma8c
Trading off Image Quality for Robustness is not Necessary with Regularized Deterministic Autoencoders
The susceptibility of Variational Autoencoders (VAEs) to adversarial attacks indicates the necessity to evaluate the robustness of the learned representations along with the generation performance. The vulnerability of VAEs has been attributed to the limitations associated with their variational formulation. Deterministic autoencoders could overcome the practical limitations associated with VAEs and offer a promising alternative for image generation applications. In this work, we propose an adversarially robust deterministic autoencoder with superior performance in terms of both generation and robustness of the learned representations. We introduce a regularization scheme to incorporate adversarially perturbed data points to the training pipeline without increasing the computational complexity or compromising the generation fidelity by leveraging a loss based on the two-point Kolmogorov–Smirnov test between representations. We conduct extensive experimental studies on popular image benchmark datasets to quantify the robustness of the proposed approach based on the adversarial attacks targeted at VAEs. Our empirical findings show that the proposed method achieves significant performance in both robustness and fidelity when compared to the robust VAE models.
Accept
This paper received generally positive reviews that, after discussion, all backed acceptance. The paper was praised for its empirical evaluations, potential significance, clarity, and applicability. While some questions and lower-level issues were raised, I do not feel that the reviewers raised any significant issues that would be a barrier to acceptance, with the small number of issues that were raised well addressed by the authors' responses. My own personal view of the work is also very positive (perhaps more so than the reviewers themselves): I think this is strong work that will be of significant interest to the community. The empirical results are a particular highlight, in terms of the performance shown, the comprehensive set of experiments considered, and the numerous and appropriate baselines compared to. While I do have some suggestions and minor gripes (see below) that I would like addressed in the final version of the paper, I have no hesitation in enthusiastically recommending its acceptance. Suggestions and minor issues: - My most important complaint with the paper is that the title is too strong and overclaiming. It suggests a trade-off will never occur and that the result applies to all deterministic auto-encoders, rather than the specific type considered. I do not think either of these is true: just because robustness has been improved with minimal change in FID score does not mean there will not be a trade-off with future developments (i.e. there may well be ways of improving the image quality of [26] that would no longer give good robustness when combined with the suggested approach). Please, therefore, change the title to something more measured and precise for the camera-ready paper. - It would be good to provide more explicit timing information about the training times of all the different models, rather than just SE. Claims are made in the intro and conclusions about speed, but, unless I have missed something, I did not really feel these were properly supported. - While I think the paper is mostly quite clear and well written, I do think the writing could be improved in some of the key technical sections; I generally found Section 3 to be the worst written part of the paper. In particular, I think more high-level explanation was required. While the maths itself is not too difficult to follow, it took me quite a few reads through to get a feel for the intuitions. To give an example of a specific issue, the right-hand side of Figure 1 comes too early and lacks context: the reader will naturally assume that they should be able to understand what is happening when the figure is first referenced, but actually they need to get to Section 3.3 first to get an idea of what is going on. - It might be useful for the authors to have a look at https://openreview.net/forum?id=nzvbBD_3J-g because it actively argues against using GMM priors in the more conventional VAE setting, on the basis that such inductive bias can be more effectively incorporated through a customised decoder architecture (and not treating the latents as the representation itself), than through regularisation. Of course, their insights may well not carry over to the deterministic auto-encoder setting, but it does hint at an interesting alternative approach and may be worth discussing or at least acknowledging. - Please increase the text size in the figures; these are very difficult to see at present.
train
[ "5gOb7pzb_3g", "sViJMGPIp2m", "RJXXR_dm8L-", "0CkDvT5aw4x", "ecOjbnUm7Px", "rAs_cm_2uDi", "9HRpqM3UHC8", "DSYg4e-kFa8", "gZuiP5RtPrM", "fpnzsgQp1Xj", "_BaGPVntLGM", "dPg7UjEUsms", "ywT6DgyjiYa", "HolsdAfBbUs" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Got it, thank you for the clarification.", " The general trend observed in Saseendran et al. is that the FID improves when the number of modes in the prior is increased. It can also be observed from their sensitivity experiments that even when the number of modes is further increased from 10, the generation performance is further improved. \n\nIn the present paper, we use a fully connected architecture for training on MNIST and FASHIONMNIST images in contrast to the convolutional architecture in Saseendran et al. (Please refer to Section A.4 in the Appendix for architectural details.)\nIn our experiments, we observe large improvements in terms of robustness when the number of modes is increased from 1 to 10, and a slight improvement when the number of modes is increased further. Although we do not observe a large improvement in terms of the generation when increasing the number of components from 1 to 10, the FID score still drops from $42.45$ to $39.37$. We assume that this effect is less significant in our work compared to Saseendran et al. due to the architectural choices.", " Thank you for the extra experiments.\n\nI was expecting results to show a maximum around the number of modes in the dataset (10 for MNIST) or at least level off once they reach the actual number of modes in the dataset. Saseendran et al. seem to show similar results so I'm not sure if I'm missing something.\n\nHaving said that, the robustness following a similar trend to image quality is expected and it is nice to have some extra confirmation.\n\nI think this improves the paper slightly and I still maintain my score.", " I thank the authors for the detailed explanations! The response addresses all my concerns. \n\nThe added experiments on the sensitivity analysis of alpha on MNIST are very helpful for readers who want to try your methods. My only suggestion is that, since in the updated Appendix it is suggested that hyper-parameter tuning for alpha was done for each dataset and the optimal alphas are different across datasets, it would be even better and more informative to put the results of Table 3 for all datasets. I would assume it does not require too much work as you already have the data. Nevertheless, the current results are sufficient, and it is fine not to add them.\n\nThank you for the clarifications/confirmation on other questions as well. \n\nBecause of these, I will increase the score.", " With the rebuttal discussion period approaching a close, we kindly ask the reviewers whether our answers and additional experiments have clarified their questions. Please let us know if there are any further queries or concerns.", " 1. Sensitivity analysis of the hyperparameter $\\alpha$ - Thank you for suggesting this experiment. \nWe have summarized the results of the sensitivity of our model towards the coupling strength $\\alpha$ in Table 3, Section A.6 in the Appendix.\nWe observe that a larger coupling strength $\\alpha$ leads to improved robustness against both latent space and maximum damage attacks.\nHowever, as mentioned in the limitations section, a strong coupling strength, i.e. $\\alpha=1$, compromises the generation fidelity of the model. \nIn our experiments, we tuned the coupling strength for each individual dataset.\nWe observed that a coupling strength in the range of $0.9\\leq\\alpha<1$ yields the best trade-off between generation and robustness across all datasets.\nIn our experiments, we chose $\\alpha=0.95$ for MNIST and FASHIONMNIST images, and $\\alpha=0.92$ for SVHN and\nCELEBA images.\n\n2. 
Maximum damage attack - The adversarial image $x_a$ is the image obtained by adding an $\\epsilon$-perturbation to the original input image $x=x_r$ in the case of maximum damage attacks.\nWe assume that the amount of noise/local change needed to cause maximum damage scales with the model robustness. Specifically, to attack a robust model, an attacker would need to impose more change on the input $x$. Therefore, MSSSIM$(x_a,x_r)$ should be low for robust models. \nWe will elaborate on the explanation in the main paper to clarify this point.\n\n3. Line 158 - $z_n$ and $z_n^{adv}$ are the latent representations of the input ($x_n$) and its corresponding adversarial sample ($x_n + \\delta_{x_n}$). \nWe will add this definition to the main paper accordingly.\n\n4. Line 162 - \nAs you pointed out correctly, when \"ignoring the cross-covariance between the latents $z_{1,...,n}$ and $z_{1,...,n}^{adv}$\", the off-diagonal elements in the covariance matrix of the GMM prior, that is the last matrix mentioned in equation (8), are zero. We will clarify this point in the revised version of the paper.\n\n5. Eqn 6 - Indeed, $g$ denotes the encoder of the model! We will include this definition in the corresponding section.\n \n6. Minor questions - We have included the missing title of the mentioned reference and indicated the best metrics in Table 2 in bold in the revised version. \n\n7. Figure 1 explanation - We have edited Figure 1 and its title in the revised version of the paper.\nMost importantly, in the new version, we visually distinguish between the adversarial examples before and after regularization to visualize the effect of joint regularization of the original and adversarial samples.\n\n8. Denominators of Eq 8 and Eq 10 - Yes, that is absolutely correct. Thanks for pointing this out! We have corrected this error in the revised version of the paper.", " \n1. Ablation study - In the ablation study in Section 4 of the main paper, we compare three different variants of regularized deterministic autoencoders to evaluate the importance of joint regularization of the original and adversarial samples. \nWe begin with the model proposed by Saseendran et al., which we denote as GMM-DAE.\nSecond, we study the *augmented* model defined by equations (7) and (8) in Section 3.2 of the main paper, but *without* the coupling of original and adversarial latent representations (Augmented).\nWe compare the robustness and fidelity of these models with our proposed model, i.e. the final variant, where original and adversarial latent representations are not only regularized towards the same prior but coupled according to equations (9), (10), and (11) in Section 3.3 (Ours).\nThe observed metrics are reported in Table 2 in the main paper. \nWe observe that the proposed method yields comparatively better performance in terms of robustness while still maintaining the generation fidelity when compared to the non-robust version, GMM-DAE. \nIt can also be seen that enforcing coupling between the latent representations of the original and adversarial samples (Ours) leads to better performance than simply augmenting them (Augmented). It is worth pointing out that the GMM-DAE maintains better performance than a standard VAE model (please refer to Table 1 in the main paper, MNIST VAE results).\n This is due to the well-structured latent space in GMM-DAE. 
This observation further confirms our hypothesis in Section 3 that when similar-looking samples are modelled together in the latent space of the model, robustness can be improved.\n\n\n2. Robustness to the downstream task - We would like to point out that the network and implementation details of the classifier network used in this section are provided in Section A.4 of the Appendix (lines 101-103).\n\n 3. Computational complexity - The proposed method is comparatively less expensive than the robust VAE baselines considered. \nAs we have pointed out in the limitations section of the main paper, the method is still expensive when compared to the non-robust model by Saseendran et al. \nWe will further clarify this point in the abstract of the paper.\n\n4. Additional evaluation of decoder image quality - As suggested, we have evaluated the MSSSIM between the reference images $x_r$ and their reconstructions $\\tilde{x}_r$, and the adversarial images $x_a$ and their corresponding reconstructions $\\tilde{x}_a$, for both attack modes.\nThe results are reported in Table 4 in Section A.7 in the Appendix.\nCompared to the non-robust variants (VAE, $\\beta$-VAE, $\\beta$-TCVAE), the quality of the reconstructions of the reference images is compromised in robust VAE models (LipschitzVAE, SE, AVAE), whereas the proposed model exhibits comparatively better performance. \nThis further aligns with the observation that our model yields better reconstruction fidelity than all the baselines. (Please refer to the comment section of Reviewer qDc3 (R1) for more details.) \nFurther, we observe a higher similarity between the adversarial images and their reconstructions for all robust VAE models.\nThis is due to the fact that all these models employ adversarial training.\nThank you for suggesting this new line of evaluations of our model.\nWe will add these results to the Appendix of the camera-ready version.\n\n5. Exploring unexplored regions via adversarial samples -\nIn our understanding, from the perspective of the attacker, adversarial examples are optimized to be close to the decision boundaries and thus tend to end up in unexplored regions of the latent space in *non-robust* networks. \nThat is, adversarial samples are encoded to latent representations that are not in close proximity to the learned latent representations of the original training data.\nHowever, from the perspective of building robust models, even adversarially optimized samples should follow the same distribution as benign samples so that the model can make a robust, correct decision. There might still be unexplored regions of the latent space, in which, for example, samples from new, previously unseen classes might end up. Yet, $\\epsilon$-perturbations of the data are regularized to not end up in such regions. We will clarify this aspect in the paper.", " 1. Sensitivity analysis of the number of modes in the chosen GMM prior - As correctly pointed out, the number of modes in the\nchosen prior is a hyperparameter of our model. We agree that a\nsensitivity analysis of the number of modes of the GMM prior and the\nobserved robustness of the model is an interesting experimental\nanalysis. We report the robustness and generation performance of our\nmodel on MNIST images for different numbers of components in the chosen\nprior in Table 2, Section A.5 in the Appendix. We use the same number of modes\nas in Saseendran et al. for all our experiments. 
As expected\nfrom previous analysis in Saseendran et al., with an increased number of\nmodes in the GMM prior, the generation performance of our extended model is\nalso improved. Most importantly, we observe a similar trend for\nrobustness as well. That is, with a higher number of components in the\nchosen GMM prior, the model exhibits improved robustness. Thank you for\nsuggesting this line of experiments. We will add the results of the\nsensitivity analysis to the paper and discuss this point in the\nlimitations section. \n\n2. Improving clarity in the evaluation subsection - In the\nexperimental section of the paper, the variables $x_r$ and\n $x_a$ denote the reference image and the perturbed variant\nof the input image for two types of attacks in VAEs. The definition of\nthe reference image varies with the mode of attack.\n\n 1. In maximum damage attacks, the reference image $x_r$ is\n simply an original input image provided to the model.\n\n 2. For latent space attacks, the reference image $x_r$ is\n chosen by an adversary to attack the model, also referred to as the\n target image.\n\n We agree that the notation in this section is rather involved. In the\n camera-ready version of the paper, we will move the qualitative analysis\n from Section A.3 in the Appendix to the main paper. With the help of\n image samples, we will clarify the definitions of reference and\n adversarial images for both attacks. We will also consider simplifying\n the notation.\n\nThank you for noticing the missing title in the reference \\[26\\]. We\nhave added it in the revised version of the paper.", " 1. Maintaining both reconstruction and generation fidelity - The robust VAE models sacrifice generation performance for improved robustness. That is, the sampling FID is compromised in these models. Hence, in our model, we seek to focus on improving the robustness of autoencoders while still maintaining the generation performance. We have clarified this point in the revised version of the paper. To showcase that our model effectively maintains both reconstruction and sampling FID, we have benchmarked its reconstruction fidelity against important baseline models on MNIST and FASHIONMNIST images; please refer to Table 1 below. Our results show that the reconstruction performance is also compromised in the robust VAE models when compared to the non-robust variants. However, the proposed model maintains both reconstruction and sampling FID when compared to the baseline models.\n| Method | MNIST(↓) | FASHIONMNIST(↓) |\n|:-------------|:---------:|:---------------:|\n| VAE | 25.10 | 41.23 |\n| *β*-VAE | 26.21 | 41.98 |\n| *β*-TCVAE | 26.19 | 41.23 |\n| LipschitzVAE | 30.15 | 47.21 |\n| SE | 28.80 | 43.31 |\n| AVAE | 27.29 | 42.98 |\n| Ours | **19.91** | **35.74** |\n\nTable 1: Reconstruction fidelity of MNIST and FASHIONMNIST images.\n\n2. Ethical review flag - Our paper proposes a method to improve robustness in autoencoders. \nWe could not see any immediate ethical concerns arising from this fundamental research.\nUpon the suggestion of the other reviewers, we have included a discussion on the potential societal impact of our paper in the common comment section above.\nCould you please specify your concern regarding the ethics flag?", " We would like to thank the reviewers for the generally positive reviews\nof our paper and for their constructive remarks. We will begin with\nanswering common questions that were raised by multiple reviewers and\nafterwards answer individual questions. 
We uploaded a revised version of\nthe main paper with the following corrections:\n\n1. We have edited Figure 1 and its title to enhance the readability.\n\n2. We have fixed all the typos and made the clarifications suggested by the\n reviewers.\n\nWe also uploaded a revised Appendix pdf in the supplementary section with the following quantitative results:\n\n1. A sensitivity analysis of the number of modes in the GMM prior.\n\n2. A sensitivity analysis of the coupling strength parameter.\n\n3. Additional evaluation of decoder quality.\n\n**Common questions from the reviewers** \n\n1. Potential societal impact - Variational autoencoders enable\nlearning meaningful representations of complex high-dimensional data\nwithout any supervision. This further enhances the usability of these\nlearned representations for various downstream tasks and applications\nwhere limited data is available, for example due to privacy/security\nconcerns. Hence, it is important to study the robustness of these\nrepresentations along with their accuracy, especially when employed in\nreal-world applications. In the present paper, we propose a method to\ntrain robust VAEs with high fidelity in the learned latent space. Since\nour work is a step towards more robust models, we hope to see a positive\nsocial impact. There is currently limited work in this direction and we\nbelieve that our method encourages potential future work in developing\nrobust VAE models. On the other hand, we should take into consideration\nthe possible negative social impact of this research, especially in\nsafety-critical applications. Although we observe superior robustness in\nour model against the existing attacks, similar performance cannot be\nguaranteed on newly discovered attacks on VAEs. Hence, when deployed in\nreal-world applications, we highly recommend testing the model\ncontinuously against newly designed attacks. We also urge the machine\nlearning community to pursue this work responsibly to enable potential\nfuture research without any misuse. We kindly ask the reviewers if there\nare any specific topics to be discussed in this section that we have\nmissed.\n\n2. Qualitative analysis - We would like to point out that we provide visual results for MNIST, FASHIONMNIST, SVHN, and\nCELEBA images for both adversarial attacks in Section A.3 of the Appendix.\nDue to the lack of space, we have not yet added a qualitative analysis to the initial version of the main paper.\nUpon acceptance, we will move the CELEBA image samples to the additional page that is allowed for the camera-ready version of the paper.", " This paper studies the robustness of deterministic autoencoders by introducing a regularization scheme to incorporate adversarially perturbed data points into the training pipeline. The proposed method improves the generation quality and robustness of the learned representations without increasing the computational complexity. Strengths:\n\n1. The writing is good, and the paper is easy to follow.\n2. The experiment part demonstrates that the proposed method yields a large improvement in robustness most of the time.\n\nWeaknesses:\n\nIn the paper, the authors may need to show some qualitative results. About the sentence in lines 45 and 46, ``However, regularizing the VAE objective ..., Hence, ... ''. The traditional methods lead to poor reconstruction fidelity, while you aim at maintaining the generation performance? Not both the reconstruction and generation quality of the image? Yes", " This paper proposes a technique to improve the robustness of VAEs. 
It builds upon earlier work by Saseendran et al. and Ghosh et al., which provide a deterministic alternative to the variational formulation of VAEs. The formulation by Saseendran et al. uses the Kolmogorov-Smirnov (KS) test to regularize the latent space of deterministic autoencoders and impose a multimodal Gaussian prior. \nThis paper extends this idea by imposing the same multimodal Gaussian prior on the pair ($z_n$, $z^{adv}_n$) where $z^{adv}_n$ is the adversarial latent code obtained via perturbation at the input image level (added before the encoder). This is as opposed to just imposing the prior on $z_n$ as in earlier work.\nThis means that the regularizer goes from the one-point KS test used in Saseendran et al. to a two-sample KS test between latent codes and their adversarial versions.\nThe models used in the paper are VAE, beta-VAE, Smooth Encoder, AVAE, etc., evaluated on MNIST, FashionMNIST, SVHN, CelebA. \nThere are two attack vectors considered:\n1. Latent space attack: adding perturbation to an image such that its latent code moves close to the latent code of a target image.\n2. Maximum damage attack: adding perturbation to an image such that its reconstruction is now far away from its vanilla reconstruction.\nResults show that there is consistent improvement in robustness across the board. This is evidenced by close similarity scores between image reconstructions. Strengths:\n1. Comprehensive Evaluation - The paper uses a variety of datasets to prove their point. A minor criticism could be that larger datasets are not considered (which is valid); however, the datasets used in the paper are also used by prior work. \n2. The problem is well motivated and the exposition is clear. \n3. Improving robustness in generative models is an important problem, so this result is significant. Experiments show large improvements in FID, so the results seem solid.\n\n\nWeaknesses:\n1. Unclear scope - How to pick modes to match the prior? What happens if they don't match?\n2. Section 4 seems a little unclear in an otherwise well written paper.\n3. The paper builds upon prior work in the area so the novelty of the technique is not very high. Having said that, it is an application of prior work into a new area which makes this point less important.\n 1. As it stands, the paper does not have a single image about what samples look like or even what an adversarial example would look like for the model. I think having pictures for these kinds of papers makes them read better.\n2. I found the paper fairly clear overall except the part in the evaluation subsection of Section 4, where the authors introduce a bunch of symbols at the same time. I think having clear definitions for each of reference image ($x_r$), target image ($x_t$) and adversarial image ($x_a$) in addition to clarifying the relationship between each of them would clear things up. (Maybe a diagram/image?)\n3. It seems that the number of modes in the dataset has to match those in the prior. I would like to see some comments on this from the authors. What happens if the number of modes doesn’t match? We know from Saseendran et al. that image quality drops, but does robustness also fall in a similar fashion? Or maybe robustness falls faster or slower than image quality?\n\nTypo:\nRef [26] does not have a paper name. Listed several times in the paper. Key reference.\n I think the authors should list that the number of modes in the prior has to match the number of modes in the dataset as a limitation. 
", " The authors extend existing methods for training variational autoencoders to be robust against adversarial attacks. They implement this by training on adversarial examples during training. They show that the resulting networks are less susceptible to adversarial attacks have better MSSSIM metrics and finally that the latent representations are better suited for downstream classification tasks. Here I have to admit that I am not very familiar with the current literature and the math behind the statistics of variational autoencoders. I have some difficulties understanding the exact novelty of the approach the authors have taken. It seems the basic idea is to prevent adversarial input examples during training and ensure some statistical properties of these examples.\nBut I assume the method is explained well for a reader with the sufficient background.\n\nWhat comes a bit short tough is the explanation of the experiments under the headers \"Ablation Study\" and \"Robustness to downstream applications\". There the authors seem to omit the details.\n\nThe results seem to lead to a significant improvement in the results. The authors claim that their method does not increase the computational complexity, yet they use adversarial examples during training. Should the generation of the adversarial examples not increase the computational complexity?\n\nThe authors should provide more details on the sections \"ablation study\" and \"robustness to downstream applications\". Here the text is not clear what they did. I do not understand what the \"ablation study\" is exactly about and for the downstream applications it might help to know what kind of MLP the authors are using.\n\nThe authors present how their method improves the robustness against adversarial attacks but they show little how this affects the baseline performance. How is the MSSSIM for x_r and x_r tilde? Does the new approach provide similar decoded image quality? Also would not MSSSIM for x_a and x_a tile be interesting, how well the adversarial images can be reconstructed by the autoencoder? \n\nThe authors say that the adversarial samples should not explore unexplored regions of the latent space. I do not quite understand why this is beneficial. Shouldn't the newly generated samples try to explore more regions then the normal samples already explore? The authors discuss the limitations of their work mainly in terms of computational power and additional hyper parameters that need to be tuned and how datasets with different distributions might not work as well with the approach.\n\nThe authors do not discuss potential negative societal impact of their work and just state N/A. Surely if their work is relevant, it should have a potential societal impact of which a part can also be negative. The authors could elaborate a bit on this to address this point.", " The paper proposes new approaches for improving the robustness of Variational Autoencoders (VAEs). The paper finds that deterministic autoencoder with multimodal prior (proposed in prior work) not only improves the fidelity of generated samples, but is also more robust to adversarial attacks. On top of that, the paper proposes to incorporate adversarial samples into the training process, and add regularizations to ensure that the latents from the adversarially perturbed samples also follow the same prior distribution. Results show that the proposed method achieves better fidelity and robustness across several datasets. 
\nOriginality: The key techniques of this paper (incorporating adversarial samples in training, KS test) are simple adaptations from prior work. However, this specific combination for improving the robustness of VAEs is interesting.\n\nQuality: Overall, the paper is technically sound, and most of the claims are well-supported by experiments.\n\nClarity: The paper is easy to read. However, there are some typos and some unclear statements. See questions below.\n\nSignificance: The paper could be of interest to practitioners as it shows that the proposed technique could improve the robustness without sacrificing fidelity. \n \nI appreciate that the authors point out the downside of having a new hyperparameter alpha. However, I think it is still important to at least mention how alpha was tuned for all the experiments, and provide a sensitivity analysis of alpha. This would make it easier for people to adopt the method. I will adjust the score according to the response to this question.\n\n\n\nMy other questions are:\n\n* Table 1: I don't understand how you get x_r and x_a for the maximum damage attack, and why lower MSSSIM(x_r,x_a) is better. I could not find related explanations or descriptions in the main text.\n\n* Line 158: z_n and z_n^{adv} are undefined. I suppose they are g(x_n) and g(x_n + \\delta_{x_n})?\n\n* Line 162: Does \"ignoring cross-covariance between z_{1,...,n} and z_{1,...,n}^{adv}\" mean the 0s in the last matrix in Eq 8? Please make it more explicit.\n\n* Eq. 6: g is undefined. I suppose it means the encoder?\n\n\n\nSome other minor typos and questions:\n\n* The title of the citation [26] is missing.\n\n* Figure 1 and its caption are hard to understand by themselves. They become clearer after I read Section 3.2, which appears much later in this paper.\n\n* Should the denominators of Eq 8 and Eq 10 be 4D^2 instead of 2D^2?\n\n* Line 171: left -> right\n\n* Table 2: Consider putting the best metrics of each column in bold?\n\n\n \nThe paper adequately addressed the limitations. The paper did not discuss the potential negative societal impact of the work." ]
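The "maximum damage attack" referenced repeatedly in the reviews and rebuttals above can be written down generically. The sketch below is a PGD-style formulation, assuming `model` is any encoder-decoder `nn.Module`; the perturbation budget, step size, and iteration count are illustrative and are not the evaluation settings used in the paper.

```python
import torch

def maximum_damage_attack(model, x, eps=0.1, steps=10, lr=0.02):
    # Search for an eps-bounded perturbation delta such that the
    # reconstruction of x + delta is far from the clean reconstruction of x.
    with torch.no_grad():
        x_rec = model(x)  # vanilla reconstruction of the reference image x_r
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        damage = torch.norm(model(x + delta) - x_rec)
        damage.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()  # ascend on the damage objective
            delta.clamp_(-eps, eps)          # keep the perturbation bounded
        delta.grad.zero_()
    return (x + delta).detach()              # the adversarial image x_a
```

On a robust model, a perturbation found this way should change the reconstruction only mildly, which is why a low MSSSIM between the reference and the adversarial input is read as a sign of robustness in the discussion above.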
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 2, 3 ]
[ "sViJMGPIp2m", "RJXXR_dm8L-", "DSYg4e-kFa8", "rAs_cm_2uDi", "nips_2022_9YasTgzma8c", "HolsdAfBbUs", "ywT6DgyjiYa", "dPg7UjEUsms", "_BaGPVntLGM", "nips_2022_9YasTgzma8c", "nips_2022_9YasTgzma8c", "nips_2022_9YasTgzma8c", "nips_2022_9YasTgzma8c", "nips_2022_9YasTgzma8c" ]
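As background for the Kolmogorov-Smirnov (KS) coupling discussed in the thread above, the empirical two-sample KS statistic between two one-dimensional batches of latent codes can be computed as follows. This is only the textbook statistic; the paper's actual coupled objective (equations 8-11 are referenced above but not reproduced in this thread) builds on it in ways this sketch does not capture.

```python
import numpy as np

def two_sample_ks(a, b):
    # Empirical two-sample KS statistic: the sup-distance between the
    # empirical CDFs of the 1-D samples a and b.
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    Fa = np.searchsorted(a, grid, side="right") / a.size
    Fb = np.searchsorted(b, grid, side="right") / b.size
    return np.abs(Fa - Fb).max()

rng = np.random.default_rng(0)
z = rng.standard_normal(512)                 # stand-in for clean latent codes
z_adv = z + 0.3 * rng.standard_normal(512)   # stand-in for adversarial codes
print(two_sample_ks(z, z_adv))
```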
nips_2022_Upt5wsECVJe
Mean Estimation in High-Dimensional Binary Markov Gaussian Mixture Models
We consider a high-dimensional mean estimation problem over a binary hidden Markov model, which illuminates the interplay between memory in data, sample size, dimension, and signal strength in statistical inference. In this model, an estimator observes $n$ samples of a $d$-dimensional parameter vector $\theta_{*}\in\mathbb{R}^{d}$, multiplied by a random sign $S_i$ ($1\le i\le n$), and corrupted by isotropic standard Gaussian noise. The sequence of signs $\{S_{i}\}_{i\in[n]}\in\{-1,1\}^{n}$ is drawn from a stationary homogeneous Markov chain with flip probability $\delta\in[0,1/2]$. As $\delta$ varies, this model smoothly interpolates two well-studied models: the Gaussian Location Model for which $\delta=0$ and the Gaussian Mixture Model for which $\delta=1/2$. Assuming that the estimator knows $\delta$, we establish a nearly minimax optimal (up to logarithmic factors) estimation error rate, as a function of $\|\theta_{*}\|,\delta,d,n$. We then provide an upper bound for the case of estimating $\delta$, assuming a (possibly inaccurate) knowledge of $\theta_{*}$. The bound is proved to be tight when $\theta_{*}$ is an accurately known constant. These results are then combined into an algorithm which estimates $\theta_{*}$ with $\delta$ unknown a priori, and theoretical guarantees on its error are stated.
Accept
The paper addresses the problem of high-dimensional statistical inference from dependent samples. This is a recently emerging area, and the authors establish nearly tight minimax error rate bounds for a basic statistical model (Gaussian hidden Markov model). The reviewers appreciated the technical strength of the paper, but there were some questions about the framing and context of the problem. The authors clarified these issues suitably in their rebuttal, clearing the way for acceptance.
train
[ "8bMLQJcoBY3", "FVPxv6qd50", "VVavxvCiec", "RK3slRY4jB7", "YuG_etspwmx", "B4KDbVC2HbL", "PfK6GX0hMP4", "tpfslCGR8W-", "OJSIeDUewxw", "iASiU_dxlR8", "sVzEXvtPVe", "RKG6GFv5PU2", "ok2GPY4Los9", "24hh7f15xMQ", "YAfcjBTVfv8" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We have added a short paragraph (due to space constraints) to the end of the Contribution section to reflect the reviewer's suggestions. \nIn particular, the usefulness of our techniques in more general models and the connections/differences with prior related work are highlighted. \nWe thank the reviewer for helping us situate our results in a broader context. ", " In light of your response, I am inclined to raise my score to 6 from 5. I would ask the authors in their contribution section to highlight the usefulness of their techniques compared to previous work wrt Ising models and the work on Markovian Linear Regression, in light of the above discussions. ", " 1. \"It would be great if the authors could comment on the generality of their techniques and the model. The Binary Markov model seems useful to model sample dependency in symmetric two-component mixture models mainly; is there evidence that these Binary Markov models are previously studied/useful in practice?\" \n* **Generality:** In a general model, the state $S_{i}$ takes $k\\\\ge2$ possible values, and the component mean at time $i$ is determined from $k$ possible vectors $\\\\{\\\\theta_{*,j}\\\\}_{j\\\\in[k]}$ by setting $j=S_i$. In a Markov model, the state transitions form a (hidden) Markov chain. We elaborate below.\n* **Previous studies:** The *two-component* symmetric case is addressed in almost all the papers on the Expectation Maximization (EM) algorithm for memoryless models (see our revised Introduction). In the context of HMMs, [Yang et al., 2015] studied the EM (Baum-Welch) algorithm. They derive results for general models ($k\\\\geq2$), but eventually focus on the binary case, which we focus on too. As mentioned in the paper (cf. Sec. 1.4), even in this binary case, the error rate obtained in [Yang et al., 2015] by analyzing the Baum-Welch algorithm is suboptimal in $(n,d,\\\\delta,\\\\|\\\\theta_{*}\\\\|)$, and this was one of our motivations for this work.\n* **Utility in practice:** The binary model is widely applicable since in many cases there are two types of components (note that general component means $\\\\theta_{ * ,1},\\\\theta_{ * ,2}$ can be accurately reduced to symmetric means $\\\\pm\\\\theta_*$ by empirical centering). For example, consider a list of medical vector samples, each collected from either a male or a female, but this is not labeled. In some cases, it is reasonable to assume that there are long sequences of consecutive male samples followed by long female sequences, and so the Markov model is a good fit. The same applies to a communication signal, for which phase inversions sparsely occur at unknown times, while the phase otherwise remains fixed over time. Naturally, the $k>2$ case is also very practical, and we find this generalization to be an important one. \n2. \"It would be great to understand the motivation of studying dependence through a Binary Markov model and, for instance, what model could potentially be used to study a k-component mixture model?\" \nA natural generalization to $k>2$ is a chain that remains at its current state with probability $1-\\\\delta$, and changes to one of the other $k-1$ possible states with equal probability $\\\\frac{\\\\delta}{k-1}$. Therefore, for blocks of size $\\\\Theta(1/\\\\delta)$ symbols, the model roughly acts as a Gaussian location model with a fixed mean (of course, there will also be \"bad\" blocks). We used this idea to reduce the binary Markov chain to a two-component symmetric Gaussian mixture model. 
Similarly, the Markov chain described above can be reduced to a $k$-component Gaussian mixture model, for which efficient estimators exist (e.g., ones based on tensor decomposition), and theoretical guarantees are available. However, in the non-binary case, the dependence of the estimation error on the separation between the components is delicate. For example, one can suggest the minimum pairwise distance. However, [Romanov et al., 2022] recently criticized that, and proposed a different property to quantify \"separation\". Therefore, we have focused on the binary case, which leads to clear and complete results, and the estimation error is clearly characterized by the signal strength $\\\\|\\\\theta_{*}\\\\|$ (besides $n,d,\\\\delta$). That being said, the $k>2$ case is not an insurmountable barrier, and we believe that our model will be generalized in multiple ways, both by us and by other researchers. We also believe that with some innovation, general Markov kernels and general structures of dependencies (e.g., graphical models) can also be solved.\n3. \"I agree that this message is useful and in the context of the above question I posed, could the authors clarify under what conditions you will not have this wastage? Is it necessary to use the Binary flip model for the above message?\" \nThe message of \"no wastage\" is general. For instance, assume that $\\\\{S_{i}\\\\}_{i=1}^{n}$ form an Ising model. A natural idea that follows our current paper is to partition the underlying graph of the Ising model in such a way that samples in the same partition are highly dependent, and so the model is accurately approximated by a Gaussian location model in each of the subsets. Then, averaging samples in each subset of variables will reduce the effective noise variance by a multiplicative factor of the size of the partition. Taking one sample per partition (\"wasting samples\") is then suboptimal compared to averaging samples per subset. The Ising model is rather general, and thus we hope that this demonstrates the fundamental difference between memory in the mean estimation problem and the linear regression problem. \n\nIn light of the above, we would kindly ask you to re-evaluate our\npaper.", " I thank the authors for their detailed responses and their prompt revisions. It would be great if the authors could comment on the generality of their techniques and the model. The Binary Markov model seems useful to model sample dependency in symmetric two component mixture models mainly, is there evidence that these Binary Markov models are previously studied/ useful in practice? It would be great to understand the motivation of studying dependence through a Binary Markov model and for instance what model could be potentially used to study a k-component mixture model? \n\n\"We thank the reviewer for pointing out this seemingly counterintuitive result compared to the linear regression problem, and agree it is surprising that no data is wasted. We believe that this is an important message, and presenting it to the community will impact and benefit researchers working in this area.\"\n\nI agree that this message is useful and in the context of the above question I posed, could the authors clarify under what conditions you will not have this wastage? Is it necessary to use the Binary flip model for the above message?\n\n", " We are grateful for the reviewer's appreciation of the revised version of the manuscript. \n1. The reviewer is indeed correct regarding the definition of $\\\\overline{X}_i$.
The second equality in the equation on line 196 is in fact an exact equality, not only equality in distribution, given the revised definitions of $\\\\overline{S}_i$ and $\\\\overline{Z}_i$. We have revised the equation on line 196 in the updated manuscript. \n2. We thank the reviewer for pointing out the misformatted citations. They have been corrected in the updated manuscript. \n", " I wish to thank the authors greatly for their work revising the paper and in particular adding substantial information on the context and motivation. This material is very helpful, and accordingly I am happy to revise my rating to 6 (weak accept) (previously 4). I have also changed the presentation and contribution scores to 3 (both previously 2).\n\nJust a couple of minor notes:\n\n- On the revised re-randomization explanation—would I be correct in believing that given the revised definitions of $\bar{S}\_i$ and $\bar{Z}\_i$, the equality on line 196 (previously eq. (8) in the original manuscript) $R\_i \cdot \frac{1}{k} \sum\_{j \in \mathcal{I}\_i} X\_j \overset{d}=\bar{S}\_i \theta_* + \bar{Z}\_i$ now holds with sure equality, not just equality in distribution?\n\n- In the revised introduction, some of the citation formatting is slightly off: I think the author names are meant to be outside the brackets in some places, specifically on line 28 (citing Györfi et al. [2002]), line 37 (citing Balakrishnan et al. [2017] etc.), line 41 (Balakrishnan et al. again), 43 (Wu and Zhou [2019]) and 44 (Balakrishnan et al.).", " We have answered all the comments made by the reviewers, and also\nnotably revised the paper. We believe that the paper has significantly\nimproved in the main aspects raised by the reviewers. Specifically: \n* Re reviewer KdtU (Rating 4): The reviewer said that the paper is solid\nand clear, and her/his main claim is that the paper lacks contextualization\nand motivation. We fully agree that the presentation in the original\nsubmission was lacking in this aspect, and we have significantly revised\nthe paper. Specifically, the motivation and context of the problem are now highlighted in the opening two paragraphs\nof the paper. These paragraphs present the broad research goal that we address, and two important motivations -- one that is practice-oriented,\nwhich stems from the availability of side information in modern-day estimation\nproblems, and the second is theory-oriented, which stems from a recent\nongoing effort to understand the minimax rates of high dimensional\nmixture models. We then conclude this part with a sentence describing\nthe possible broader impact of our obtained results. We believe that\nthis explanation serves as a significant motivation and contextualization\nfor the problem studied in this paper. We have also explained this in\nmore detail in our response to the reviewer (within the space limitation).\n \n* Re reviewer 9nBR (Rating 5): The reviewer was surprised that our estimation\ntechniques and analysis are capable of establishing minimax rates.\nWe have made our best attempt to clarify the doubts of the reviewer,\nand explain why our techniques are both correct and natural. We believe\nthat having our techniques surprise an expert in this problem domain\nis nothing but a strong motivation for the paper to be presented to the research community. We have also\nfully addressed all the other questions of the reviewer, both as detailed\nanswers to the reviewer, as well as in a revision of the paper (e.g.,\nRemark 3 on pp.
6-7 and Appendix G that address the Gaussian assumption). \n\n* Re reviewers nbjp and fjjw (Rating 6): Both the reviewers appreciated\nthe ambitious goal of the paper, its quality, its writing and its\nresults. We believe that our revision based on their comments further improved the paper. ", " 1. \"However, some parts of the paper are less clear to the reader.\" \nWe would be grateful if the reviewer could point out the parts that were less clear. We have already revised a few unclear parts and will be glad to further improve the presentation.\n2. \"I wonder how good the estimate of $\\\\theta$ should be to facilitate the following steps? Do more samples in Step A essentially help?\" \nThe condition for entering Step B is given by $(d/n)^{1/4}\\\\lesssim\\\\|\\\\hat{\\\\theta}^{(A)}\\\\|\\\\lesssim1$. Otherwise, the algorithm terminates at Step A. We have proved guarantees on the estimation error rate for each of the steps. The final rate is obtained by properly analyzing all cases and combining the error rates incurred in different steps. Adding more samples to Step A will not change the obtained minimax rate because it already uses $n$ samples, and even if all $3n$ samples are used just for Step A, the improvement in the error of Step A will be by a constant factor, which is immaterial for minimax rate analysis. \n3. \"How strong is the assumption of a Gaussian distribution, and can it be replaced with weaker assumptions?\" \nThe Gaussianity assumption is mild, and can be relaxed in multiple ways:\n * The Gaussian noise can be relaxed to subGaussian noise: Our minimax upper bound can be shown to hold for subGaussian $Z_i$, since all the concentration bounds for Gaussian random variables we use admit subGaussian analogues. The impossibility result on the minimax rate trivially holds since Gaussian noise is a special case of subGaussian noise.\n * Isotropic noise can be relaxed to anisotropic noise: Suppose $Z_i\\\\sim N(0_d,\\\\Sigma)$ i.i.d. for some $\\\\Sigma$. Our results can be trivially extended to the case where $\\\\Sigma\\\\succ0$ is _known_. The estimator multiplies the samples by $\\\\Sigma^{-1/2}$ and reduces the problem to the isotropic setting. Then it applies the isotropic estimator we propose, and obtains its final estimate by multiplying this estimate by $\\\\Sigma^{1/2}$. The loss in this case, however, is gauged by the Mahalanobis distance (parameterized by $\\\\Sigma$). The problem becomes significantly more challenging if $\\\\Sigma$ is _unknown_. A well-known example shows this in the context of maximum likelihood estimation (MLE) (see, e.g., [Ferguson 1982](https://www.jstor.org/stable/2287314)): Consider the problem of estimating the mean $\\\\mu$ and variance $\\\\sigma^2$ of a Gaussian mixture with two components $N(0,1)$ and $N(\\\\mu,\\\\sigma^2)$ where $\\\\sigma>0$. For any given sample $X_1,\\\\ldots,X_n$ from this distribution, the likelihood function can be made arbitrarily large by taking e.g. $\\\\mu=X_1$ and $\\\\sigma$ arbitrarily small, and so the MLE does not exist. This is why typical papers on problems similar to ours are limited to mean and weight parameters. For example, there are many recent papers on the expectation maximization (EM) algorithm for estimation of Gaussian mixture parameters. Almost all of them are restricted to isotropic noise (or known covariance matrix) -- we refer the reviewer to the second paragraph in the revised paper for a partial list.
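For concreteness, the degeneracy in the Ferguson-type example above can be written out in one line. Here $\phi_{\mu,\sigma}$ denotes the $N(\mu,\sigma^2)$ density; the notation is ours, introduced only for this sketch:

$$ \prod_{i=1}^{n}\Big[\tfrac{1}{2}\phi_{0,1}(X_i)+\tfrac{1}{2}\phi_{\mu,\sigma}(X_i)\Big] \;\ge\; \tfrac{1}{2}\phi_{\mu,\sigma}(X_1)\prod_{i=2}^{n}\tfrac{1}{2}\phi_{0,1}(X_i), $$

and setting $\mu=X_1$ gives $\phi_{\mu,\sigma}(X_1)=\frac{1}{\sigma\sqrt{2\pi}}\to\infty$ as $\sigma\downarrow0$, while the remaining product does not depend on $(\mu,\sigma)$, so the likelihood is unbounded.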
As an exception, fitting both mean and scale parameters was studied in [Dwivedi et al 2019](https://arxiv.org/abs/1902.00194) in the context of the EM algorithm, and in a very restricted setting: The distribution of the samples is standard Gaussian $N(0,I_d)$, yet the estimator is allowed to (over)fit a two-component Gaussian mixture, with symmetric means $\\\\pm\\\\theta$ and a covariance matrix $\\\\sigma^2I_d$ with any $\\\\sigma>0$. Even in this restricted setting, the result is rather delicate, and there are differences, e.g., between one- and multi-dimensional models. This demonstrates the challenges associated with fitting scale parameters. We finally remark that even method-of-moments based estimators (such as the one we use in our paper) are typically analyzed under isotropic noise, e.g. [Anandkumar et al 2014](https://arxiv.org/abs/1210.7559) and [Wu and Yang 2020](https://arxiv.org/abs/1807.07237). Since the trade-off in the minimax rate already involves four parameters $(n,d,\\\\delta,t)$, we assumed known variance (normalized to $1$) to highlight the role of the memory ($\\\\delta$).\n * The Gaussian distribution can be relaxed to a heavy-tailed distribution: There has been some recent progress on this topic (see Appendix G in the revised paper for references). In such a model, the estimation error rate will also depend on the decay rate of the tail. We leave it for future research, because: (1) Our minimax rate already depends on four parameters $(n,d,\\\\delta,t)$, and adding another one will obscure the main message of the paper. (2) Additional ideas are needed to handle heavy tails, and this will make the analysis cumbersome and unapproachable.\n\nIn the revised manuscript, we added Remark 3 on p. 6 which discusses possible extensions, and also added a new section to the appendix (Appendix G) that discusses more challenging ones (as open problems). ", " 1. \"It would be good to see why this is $ \\Theta(1/\\delta) $ for the sake of completion.\" \nThe block size is chosen to be $ k=\\Theta(1/\\delta) $ because this is the largest value such that the ``average gain'' $ \\overline{S}_i = \\frac{1}{k}\\sum_j S_j $ of each block is close to 1 in expectation (cf. Lemma 8 and Eqn. (63)) and with high probability (cf. Eqn. (67)). \n Having $ \\overline{S}_i $ close to 1 makes sure that the block-wise averaging reduces the noise variance but not the signal strength. We have explained the trade-off in choosing the block size in the first paragraph of page 6. We hope that it is clear, but we would be happy to further clarify if needed.\n2. \"Also it is surprising that there is no data wastage. Even if you assume ergodicity, you might have to discard some amount of samples initially.\" \nWe thank the reviewer for drawing the connection to the work of [Nagaraj et al. 2020](https://arxiv.org/abs/2006.08916) which studies the effect of Markovity on linear regression, and uses similar ideas. On a technical level, we assume that the Markov chain is at its stationary distribution already from its first sample ($S_0$ is uniform Rademacher), and so no samples are wasted at initialization. On a higher level, in our problem, the _average_ of all the samples in the block is used to reduce the variance of the Gaussian noise by a multiplicative factor of the size of each block (which is chosen to be $\\Theta(1/\\delta)$). Since this achieves the minimax rate, it also shows that the \"one sample per block\" strategy used in [Nagaraj et al.
2020] for the linear regression setting is suboptimal in our problem. On an intuitive level, the blocks in which there is a sign change, and so the \"average gain\" $ \overline{S}_i = \frac{1}{k}\sum_j S_j $ is not close to $\pm1$, can be considered as ``bad blocks'', which deteriorate the estimation error. However, due to the optimized choice of block-length as $k=\Theta(1/\delta)$, the effect of bad blocks on the estimation error is provably negligible. We thank the reviewer for pointing out this seemingly counterintuitive result compared to the linear regression problem, and agree it is surprising that no data is wasted. We believe that this is an important message, and presenting it to the community will impact and benefit researchers working in this area. \n3. \"I am unable to see this argument here and it feels less convincing as to why the data is fully i.i.d. after splitting it into batches according to the Markov chain mixing time and the errors due to some residual correlation don't seem to appear.\" \nOur estimator partitions the samples into blocks, and then each block is randomized with an i.i.d. Rademacher (the random sign is common within each block but independent across blocks). This randomization effectively ``restarts'' the Markov chain at the beginning of each block, thus decorrelates it from all previous blocks (due to the Markov property). In other words, instead of using a single instance of a length $n$ Markov chain, the estimator uses $n/k$ independent instances (blocks) of the Markov chain, each of length $k$. This transformation works for any chosen block length, but it is optimal in terms of estimation error to choose $k$ proportional to the mixing time. Without this randomization, there will indeed be residual correlation between the blocks. However, since our estimator is minimax optimal, eliminating this correlation is essentially optimal.\n4. \"The results seem to apply only for isotropic Gaussians.\" \nThe results were presented for isotropic Gaussians for the sake of clarity, but this assumption can be relaxed in multiple ways:\n First, the Gaussian noise can be easily relaxed to subGaussian noise, and the isotropic noise can be relaxed to anisotropic noise, if the covariance matrix $\Sigma\succ 0$ is _known_ (by pre-multiplying the samples by $ \Sigma^{-1/2} $ and then post-multiplying the isotropic estimator by\n$ \Sigma^{1/2} $). In the revised manuscript, we added Remark 3 on p. 6 that explains this in detail. It is also perceivable that the results can be generalized to an unknown covariance matrix, as well as to heavy-tailed distributions. This is, however, more challenging and will require additional ideas, will make the analysis unapproachable, and ultimately obscure the main message of the paper (memory between samples). We have included a new appendix (Appendix G) that discusses these more challenging open problems. In addition, our answer to Reviewer fjjw contains more details on this matter (omitted here due to space limitation). ", " 1. \"Contextualization and motivation.\" \nWe agree that our presentation did not reflect well the context and motivation of the problem, and we have made a substantial effort to revise it. In particular, the opening paragraphs of the introduction are now exclusively devoted to this matter.
We refer the reviewer to those paragraphs, and here further explain:\n * The broad question: Statistical dependencies in the data are ubiquitous in practice, and understanding how this memory can be used to improve statistical inference was extensively studied for classical, fixed-dimensional models, but much less for modern, high-dimensional models. Our research goal is to understand how to optimally exploit memory to improve high-dimensional statistical inference. The problem studied in this paper is an instance of this general theme. The fact that even this basic model was not previously understood implies that there is much room for research in this direction.\n * Motivation: We next describe two problems which were our primary motivation and inspiration. The first one is practice-oriented, and the second one is an on-going theoretical research thread.\n * Improving estimation using social network data: Consider a population of $n$ individuals, each belonging to one of a few types. Each individual is characterized by a set of features, and the goal is to estimate the means of each type. Without further information, this can be cast as a standard mixture estimation problem. However, in modern applications, one typically has information on the social connections between the $n$ individuals. This can be used to significantly reduce the estimation error, since close friends are typically of the same type. We have focused on the simplest network structure possible – a homogeneous Markov chain, which is important and practical on its own. We expect that our results can be generalized to more complex structures. Since network data is widely available, such an analysis will have a broad practical impact.\n * The seminal paper of [Balakrishnan et al 2017](https://arxiv.org/abs/1408.2156) provided the first finite-sample guarantees on the expectation-maximization (EM) algorithm applied to memoryless models with latent variables. This has spurred a large body of research in this direction (see references in the second paragraph of the paper). However, when the results of [Balakrishnan et al 2017] are applied to a two-component Gaussian mixture, they are not minimax optimal, and such optimality was later proved in [Wu and Zhou 2019](https://arxiv.org/abs/1908.10935). Analogously to [Balakrishnan et al 2017], [Yang et al 2015](https://arxiv.org/abs/1512.08269) studied the EM algorithm (Baum-Welch) applied to the Markov setting that we address. Again, [Yang et al 2015] provided the first theoretical guarantees on Markov models, but not minimax tight bounds. The goal of our paper is to determine the minimax rates in this Markov model - a challenging task. Future research on the EM and other algorithms should use our precise minimax results as a benchmark.\n * Impact: Our message is that Markov memory in the samples should be exploited to reduce the estimation error compared to an i.i.d. model. Nonetheless, it also shows that for high dimensions (specifically $d\\\\gtrsim\\\\delta n$), this improvement is negligible (as the minimax rate is as for the GMM $\\\\delta=1/2$). One may conclude, e.g., that practitioners should focus their attempts on using memory to improve error in low-dimensional models, before such an attempt is made for high-dimensional ones.
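(A quick numeric illustration of the impact statement above, with hypothetical values of our own choosing: for $n=10^4$ samples and flip probability $\delta=0.01$, the threshold is $\delta n=100$, so the Markov memory can substantially reduce the error only when the dimension satisfies $d\lesssim100$; for, say, $d=10^3$ the minimax rate is essentially that of the memoryless GMM with $\delta=1/2$.)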
\n\n To conclude, our setting stems from a broad and under-explored research theme (“high dimensional statistical inference with memory between samples”); has a practical motivation (“improving performance using social network side information”); is tightly related to an active research thread (“EM and other computationally efficient algorithms for mixture models”); and has important messages to convey (“what are the regimes in which memory is useful”). We believe that this serves as a strong motivation and contextualization for our paper. \n2. \"I was a bit confused about the argument for why $\\\\overline X_i$ is i.i.d.\" \nThe confusion is justified, and is due to our sloppy use of the re-randomization variables $\\\\{R_i\\\\}_{i=1}^n$. In the revised manuscript, we have corrected this, and properly re-defined $\\\\overline{S}_i$ and $\\\\overline{Z}_i$ as the averages of $S_i$ and $Z_i$, respectively, over a block _multiplied by an independent Rademacher variable $R_i$_. This guarantees that $\\\\{\\\\overline{S}_i\\\\}$ and $\\\\{\\\\overline{Z}_i\\\\}$ are all i.i.d. In fact, due to the re-randomization step, one can w.l.o.g. assume that the averages of $S_i$ and $Z_i$ in different blocks are independent (without residual correlation across blocks), \nand therefore we occasionally omit $R_i$ in the rest of the paper.\n\n3. All the typos and grammatical mistakes have been fixed. Thank you.", " 1. \"Section 1.3 (Contributions) could be made a bit more reader-friendly.\" \nWe have thoroughly revised this section. \n \n2. \"Would it make sense to choose the $ S_i $ values from some other distribution?\" \nYes, though we focused on the binary uniform case as it allows us to obtain a clear view of the trade-off between $(n,d,t,\\\\delta)$. Indeed, in the two-component case, the components can be centered and then the norm of the mean vector $t$ is a measure of \"separation\". If $ S_i $ takes values in a larger finite set, then the model has multiple components, and the resulting minimax rates are much more delicate. First, the ``separation'' of the multiple components needs to be parameterized. Minimum pairwise distance is a natural choice, but there are recent papers [Romanov et al., 2022](https://arxiv.org/abs/2202.07707) which doubt that and propose global parameters. Second, it is known [Jin et al., 2016](https://arxiv.org/abs/1609.00978) that even with just three components, the likelihood can contain local maxima, suggesting that the multi-component problem is significantly more challenging. Another possible generalization is a non-uniform distribution. Here too it is known [Weinberger and Bresler, 2022](https://arxiv.org/abs/2103.15653) that local maxima exist even for memoryless models. We presume that any result on multiple or non-uniform components will hinge on our result (and novel ideas), and we hope that presenting the paper at the conference would spur such research. \n3. \"Page 2: Lines 55-56 are not justified, and there is no citation.\" \nThe analysis of the estimator $\\\\hat\\\\theta=0$ is trivial, and the analysis of the averaging estimator leads to the parametric error rate of $\\\\Theta(\\\\sqrt{d/n})$ known for the Gaussian location model. We have added a citation to a full rigorous proof. \n\n4. \"Page 5: Could you please elaborate on the second equality of Equation (13)?\" \n The definition of $ \\\\overline{X} $ in Eqn.
(8) implies \n $$ \\\\mathbb{E}[\\\\overline{X}\\\\, \\\\overline{X}^\\\\top] \n = \\\\mathbb{E}[(\\\\overline{S}\\\\theta_*+\\\\overline{Z})(\\\\overline{S}\\\\theta_*+\\\\overline{Z})^\\\\top] \\\\\\\\\n = \\\\mathbb{E}[\\\\overline{S}^2] \\\\theta_*\\\\theta_*^\\\\top + \\\\mathbb{E}[\\\\overline{S}\\\\theta_*\\\\overline{Z}^\\\\top] + \\\\mathbb{E}[\\\\overline{S}\\\\,\\\\overline{Z}\\\\theta_*^\\\\top] + \\\\mathbb{E}[\\\\overline{Z}\\\\,\\\\overline{Z}^\\\\top] \\\\\\\\\n = \\\\xi_k \\\\theta_*\\\\theta_*^\\\\top + \\\\frac{1}{k} I_d . $$\n The last equality follows since $ \\\\overline{Z}\\\\sim N(0, I_d/k) $ and is independent of $\\\\overline{S}$ (therefore the cross terms vanish). This was omitted due to space limitations.\n\n5. \"Footnote 2: I do not understand the last sentence.\" \n The PCA-based estimator analyzed in Theorem 1 yields a rate higher than the minimax rate of Theorem 2 whenever the signal strength is too low (see also Eqn. (7)). \n However, since the estimator is assumed to know $t$ (a common formulation in high-dim statistics), it can just output zero, and obtain the promised rate in the low signal strength regime. \n Thus, for any signal strength, the minimax rate is achieved by the minimum of the rates of the PCA-based estimator and the zero estimator.\n We have revised the footnote in the paper accordingly. \n\n6. \"Page 7: Please explain why Equation (19) yields a natural estimator.\" \n This is a natural estimator since it replaces the population mean \n \\\\begin{align}\n \\\\mathbb{E}[X_i^\\\\top X_{i+1}] &= \\\\mathbb{E}\\\\left[(S_i \\\\theta_* + Z_i)^\\\\top(S_{i+1} \\\\theta_* + Z_{i+1})\\\\right] \n = \\\\rho \\\\|\\\\theta_*\\\\|^2 , \\\\notag\n \\\\end{align}\n with an empirical mean computed over pairs of adjacent samples. We have revised the sentences leading to Eqn. (19) (now Eqn. (11)) to better explain this. \n\n7. \"Page 8: Please further explain lines 280-283.\" \nPer the minimax rates of GLM and HMM (Eqn. (5) and (6), respectively, see also Fig. 1), we see that if $ \\\\|\\\\theta_*\\\\|\\\\gtrsim1 $, then the best rate possible is the parametric rate $ O(\\\\sqrt{d/n}) $ of GLM, regardless of the value of $\\\\delta$. In that case, it is information-theoretically impossible to exploit the Markov structure to reduce the error rate. This rate, however, can already be achieved by the PCA estimator in Step A, and so Algorithm 1 terminates at this step. \n\n8. \"Page 9: Can you please elaborate on the claims of \"The impact of lack of knowledge of $\\\\delta$\"?\" \nWe have revised this paragraph in the paper. In Sec. 1.3 we state our bound on the minimax rate (Eqn. (7)) and explain how memory ($\\\\delta<1/2$) improves the estimation error compared to $\\\\delta=1/2$ in three aspects. This bound, however, is for an estimator which knows $\\\\delta$. In the paragraph in question, we revisit these three improvements in the case where $\\\\delta$ is unknown and Algorithm 1 is used. The claims made in the paragraph can be deduced from Theorem 7, in the same way they have been deduced from Theorem 2.\n\n9. All the typos and grammatical mistakes have been fixed.", " This paper is about the problem of estimating the mean in high-dimensional binary Markov Gaussian mixture models. If $\\\\delta$ denotes the associated flip probability of the stationary homogeneous Markov chain (and is known), the authors explore the regime of $\\\\delta$ between the previously studied values of 0 and ½.
Moreover, in the other direction, they show that given the sampled (to be estimated) vector $\\\\theta$, one can compute some rough estimate of the parameter $\\\\delta$. These two results nicely combine into a method for estimating $\\\\theta$ even if $\\\\delta$ is not known.\n Originality: The originality of this paper mostly stems from its ambitious goal, namely, to understand the problem of mean estimation in high-dimensional binary Markov Gaussian mixture models for the case where the flip probability $\\\\delta$ can be any value in the interval [0,½]. Prior to this work, only the cases $\\\\delta = 0,½$ had been studied.\n\nQuality: The quality of the paper is good. It is well written and the results seem strong.\n\nClarity: The presentation of the paper and writing are generally clear. Section 1.3 (Contributions) could be made a bit more reader-friendly.\n\nSignificance: The work is significant: see Originality above. Another non-trivial reason that this paper is significant concerns the techniques employed therein; in particular the use of a principal component of a suitable covariance matrix that is used in the aforementioned smooth interpolation between $\\\\delta$ = 0 and $\\\\delta$ = ½. I found that quite interesting!\n Would it make sense to choose the $S_i$ values from some other distribution?\n\nPage 2: Lines 55–56 are not justified, and there is no citation.\n\nPage 5: Could you please elaborate on the second equality of Equation (13)?\n\nPage 6:\n\nLine 204: “blcok” should be “block.”\n\nLine 205: “elemantary” should be “elementary.”\n\nLine 206: “crucuial” should be “crucial.”\n\nFootnote 2: I do not understand the last sentence.\n\nPage 7: Please explain why Equation (19) yields a natural estimator.\n\nPage 8: Please further explain lines 280–283.\n\nPage 9: Can you please elaborate on the claims of “The impact of lack of knowledge of $\\\\delta$”? I did not see an explicit section addressing this. However, this may not be relevant to this paper. ", " The authors study an interpolation between Gaussian location models and Gaussian mixture models, in which samples are Gaussian noise-corrupted versions of a sequence in $\\\\{-1, 1\\\\}^n$ with each sequence element being the flipping of the previous one with probability $\\\\delta \\\\in [0, \\\\frac12]$. They characterise the minimax risk under Euclidean distance error to the nearer of $\\\\theta_*$ and $-\\\\theta_*$. They thus generalize known results for the GLM and GMM, extending these results to estimators and risk parameterized by $\\\\delta$, with the GLM and GMM at the extremes. Their ultimate result is to provide a three-step estimator to estimate $\\\\theta_*$ when $\\\\delta$ is not known, and to provide asymptotic results for its loss. This involves estimators for $\\\\theta_*$ under known $\\\\delta$, and for $\\\\delta$ under approximately known $\\\\theta_*$, for which they provide both achievability and lower bounds on worst-case risk within a logarithmic factor of each other. I thought the technical elements of the paper were sound. The problem and its relationship to prior results were clear, there was no ambiguity in notation, and all concepts were explained with suitable precision. I haven't had a chance to walk through the proofs in the appendices step by step, but I read through the proof of Theorem 1 and the approach, based on concentration bounds on a decomposition of the loss, seems sound to me.\n\nWhere I think the paper could do with improvement is in the contextualization it provides for the problem and results.
Essentially none is given—the authors set up the problem straight away, and the only context given is in Section 1.2, where they state prior results for the GLM and GMM. Why is this problem interesting or significant? Does it have potential applications, or is it inspired indirectly by any potential applications or open questions, or does it get us a step closer to understanding some larger problem? I basically think the lack of contextualization or motivation substantially holds back this paper, and it would be a reasonably strong paper if material was added to address its significance.\n\n-----------------------------\n\n_Edit, 7 August 2022, after author revision:_\n- _Increased presentation score to 3 (was 2)_\n- _Increased contribution score to 3 (was 2)_\n- _Increased overall rating to 6 (was 4)_ 1. What motivated or inspired the problem? I'm fairly open to different sorts of motivations; there need not necessarily be concrete practical applications. But it would be good to have an understanding of where this fits into statistical learning, and/or any broader open questions that this contributes towards, and/or any actual or imagined applications. The fact that it is an interpolation between two known extremes doesn't by itself make it interesting: what sorts of questions fall in between the two extremes?\n\n2. I was a bit confused about the argument for why $\\\\overline{X}_i$ is i.i.d. Equation (8) says that $\\\\overline{X}_i$ is equal to $\\\\overline{S}_i \\\\theta^* + \\\\overline{Z}_i$ only in distribution. I think this is correct, because $R_i$ is independently drawn, so would break sure equality but maintain equality in distribution. But then the argument for $\\\\overline{X}_i$ being i.i.d. reasons via $\\\\overline{S}_i$ and $\\\\overline{Z}_i$. I don't think this carries over an equality in distribution, unless $\\\\overset{d}{=}$ was intended to indicate equality of the joint distribution of the entire sequence, as opposed to of each element. But also, I don't think it needs to, because I think $\\\\overline{X}_i = R_i \\\\cdot \\\\left[ \\\\overline{S}_i \\\\theta^* + \\\\overline{Z}_i \\\\right]$, and since $R_i$ is also i.i.d., that makes $\\\\overline{X}_i$ i.i.d., right? Please feel free to correct me; any clarification would be appreciated.\n\nTypos/grammar:\n- 95: \"allows to achieve\" should be \"allows us to achieve\" (\"to allow\" takes either object + infinitive, or a noun phrase, but not just an infinitive)\n- 162: I think Bachmann is spelled with two n's\n- 204: \"blcok\"\n- 205: \"elemnatary\"\n- 206: \"crucuial\" I think the authors were reasonably upfront about questions that were left open. The work was entirely theoretical so I don't expect any societal impact, though I'd still encourage the authors to explain what motivates the problem.", " This work focuses on obtaining minimax rates in high-dimensional Binary Markov Gaussian Models. Each sample is multiplied by a random sign, and this sign is generated by Markovian coin flips w.p. $\\\\delta$. The true distribution is a high dimensional Gaussian distribution with identity covariance. Thus this model interpolates between a standard mean estimation problem for Gaussians when $\\\\delta=0$ and, when $\\\\delta=1/2$, a GMM with two components. The authors provide an estimator that needs to account for the Markovian samples to obtain near optimal minimax error rates (up to log factors) when $\\\\delta$ is known, which degenerate to the known minimax rates when $\\\\delta = 0$ or $\\\\delta=1/2$.
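(In symbols, the observation model that both of these summaries describe, as reconstructed from the rebuttals above, is

$$ X_i = S_i\,\theta_* + Z_i, \qquad Z_i \overset{\mathrm{i.i.d.}}{\sim} N(0, I_d), \qquad \Pr(S_{i+1}=-S_i \mid S_i)=\delta, $$

with $S_0$ uniform on $\{-1,+1\}$.)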
Additionally, when $\\delta$ is unknown they provide an upper bound of the minimax rates. Strengths:\n\nThis is one of the few recent works that study estimation under dependent samples, with Markovian dependency specifically. The model is interesting as it interpolates between a standard Gaussian location model and Gaussian Mixture model simultaneously. They obtain near optimal minimax rates from the estimators. The estimate relies on carefully splitting the data into batches of appropriate size, so that the local estimate in each batch forms an i.i.d sequence of estimates.\n\nWeaknesses:\n\n1) The main argument relies on the mixing time of the Markov chain that generates the signs and each batch length is decided based on that. It would be good to see why this $\\Theta(1/\\delta)$ for the sake of completion. [Nagaraj et al. 2020], apply a variant of SGD for the least squares regression problem with Markovian data and that relies on the mixing time of the Markov chain. Their argument relies on wasting certain amount of data and beyond which there is an approximate. i.i.d behavior in the samples and the errors due to correlation are taken care of. I am unable to see this argument here and it feels less convincing as to why the data is fully i.i.d after splitting it into batches according to the Markov chain mixing time and the errors due to some residual correlation doesnt seem to appear. Also it is surprising that there is no data wastage. Even if you assume ergodicity, you might have to discard some amount of samples initially. I hope this is clarified better.\n\n2) The results seem to apply only for isotropic Gaussians.\n\nUpdate:\nAfter the rebuttal, I think the authors have clarified my questions convincingly and I am inclined to increase my score.\nReferences :\n\nNagaraj, Dheeraj, et al. \"Least squares regression with markovian data: Fundamental limits and algorithms.\" Advances in neural information processing systems 33 (2020): 16666-16676. See weaknesses above. Yes, the limitations are somewhat adequately addressed. Please see weaknesses for some clarifications.", " The paper considers high-dimensional mean estimation problem over hidden Markov models (HMMs). It considers whether and to what extent the memory between samples can benefit the estimation problem. Specifically, given a Markov chain S^n with fixed flip probability \\delta and correlation \\rho between adjacent samples, the goal is to infer the parameter \\theta* of the model. \n\nIt studies three different memory cases: (i) \\delta = 0, (ii) \\delta = 1/2, and (iii) \\delta\\in(0,1/2), where (i) corresponds to infinite memory, (ii) to no memory, and (iii) to simple versions of HMMs. When \\delta is known, the authors show that a principal component computation based estimator achieves asymptotically optimal rate. When \\delta is unknown, the authors propose an algorithm that first obtains a gross estimate of \\theta by assuming the worst memory case, i.e. (iii); then estimates \\delta using the estimate of \\theta; at last, it refines the estimate of \\theta using the new \\delta. It shows that the information of knowing \\delta plays a role in the performance of the estimator.\n The authors were able to demonstrate how memory between the samples affect the rate of estimation. The paper shows sharp bounds on error rates in both low and high-dimensional scenarios. The paper is generally well written. 
However, some parts of the paper are less clear to the reader.\n In Algorithm 1, Step B relies on the estimate obtained from Step A. I wonder how good the estimate of $\\\\theta$ should be to facilitate the following steps? Do more samples in Step A essentially help? How strong is the assumption of a Gaussian distribution, and can it be replaced with weaker assumptions?\n The authors may address the limitation of the Gaussian assumption.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 2, 4, 2 ]
[ "FVPxv6qd50", "VVavxvCiec", "RK3slRY4jB7", "OJSIeDUewxw", "B4KDbVC2HbL", "iASiU_dxlR8", "nips_2022_Upt5wsECVJe", "YAfcjBTVfv8", "24hh7f15xMQ", "ok2GPY4Los9", "RKG6GFv5PU2", "nips_2022_Upt5wsECVJe", "nips_2022_Upt5wsECVJe", "nips_2022_Upt5wsECVJe", "nips_2022_Upt5wsECVJe" ]
nips_2022_d0stFTU2dTI
Exploration via Planning for Information about the Optimal Trajectory
Many potential applications of reinforcement learning (RL) are stymied by the large numbers of samples required to learn an effective policy. This is especially true when applying RL to real-world control tasks, e.g. in the sciences or robotics, where executing a policy in the environment is costly. In popular RL algorithms, agents typically explore either by adding stochasticity to a reward-maximizing policy or by attempting to gather maximal information about environment dynamics without taking the given task into account. In this work, we develop a method that allows us to plan for exploration while taking both the task and the current knowledge about the dynamics into account. The key insight to our approach is to plan an action sequence that maximizes the expected information gain about the optimal trajectory for the task at hand. We demonstrate that our method learns strong policies with 2x fewer samples than strong exploration baselines and 200x fewer samples than model free methods on a diverse set of low-to-medium dimensional control tasks in both the open-loop and closed-loop control settings.
Accept
All reviewers acknowledged to have read the rebuttal. Reviewer iWun's reply isn't visible to the authors (posted too late), see end of metareview. The most important concerns of the reviewers have been addressed by extensive replies and additional experiments. Overall the method is sound and performs well. As acknowledged by the authors, the method comes with inherent limitations to low-to-medium dimensional problems through the use of GPs. The method is useful on its own, and serves as proof-of-concept for the overall idea also for different function approximators - but replacing GPs will require quite a bit of additional work. *** 18 Aug 2022, NeurIPS 2022 Conference Paper2167 Reviewer iWun "Thanks to the authors for the response. I still don't like non-strict mathematical language of the paper, but this doesn't seem to be a problem for other reviewers. In addition, I like the results of the paper, and therefore I increase my score."
train
[ "th6NMaJiFE", "I7hDU6H56hf", "w5O8NSRq0dw", "wbNOuxQFFqd", "8ejPJ5IVSsm", "JEDREWMG-KE", "WWOvxLYDyk", "H7KhpTwO041", "Y9a5ft_Mh4d", "T_3EZFbjBn", "nKGEJ4gQnNl", "Q1MYZ6Bbuio", "giDOo5jADpd", "qYZjBE37trX", "ma7gU0XI80W" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for pointing that out. I am suggesting that you perform a thorough evaluation of your method which would make your paper more useful for readers. I am updating my score to weak accept.", " Thank you for your reply! Please note that one of the two suggested environments that you listed in your reply, cart-pole, is indeed in our experiments (see row 2 of Table 1).\n\nWe agree that this environment is more challenging (sparse rewards, underactuated dynamics, etc.), and indeed, several of the baselines we compared our method to could not solve this task (HUCRL, TS, FEEF, RHC).", " Dear Authors -- thanks very much for answering the questions. In my opinion, I think the evaluation of your method needs more work. You need to show performance on more complex environments. Inverted pendulum and reacher is too simplistic, even if you show more underactuated domains like acrobot or cart-pole would work. ", " I thank the authors for addressing my questions and comments. The updates made to the manuscript have resolved my concerns. Therefore, I raised my score.", " We have additionally implemented and updated our paper with results from comparing another suggested baseline method, RHC [1], on our benchmark environments. Though we did a moderate amount of hyperparameter tuning, RHC doesn’t perform well on our benchmark environments, either failing to solve the problem or solving the problem using many times more data than TIP. We hope that including both methods you noted as most relevant to our work in our updated version of the paper has addressed your concerns as to its quality.\n\n[1] Schultheis, M., Belousov, B., Abdulsamad, H., & Peters, J. (2020, May). Receding horizon curiosity. In Conference on robot learning (pp. 1278-1288). PMLR.\n", " Thank you for your review. We’re glad you found our method interesting and original and agree that the TIP algorithm is the most effective on our intended goal of sample efficiency. We’ll address your questions in order below, starting with a few edits we’ve made to our mathematical presentation.\n\n### **Questions**\n\n1. Thanks for pointing out that we didn’t explicitly define $P(T|D)$. As is typically the case when GPs are used, $P(T|D)$ is the posterior distribution over the transition function $T$, given the GP prior and data observed. We’ve added a note to our preliminaries explicitly stating this.\n\n2. We take $\\\\pi_g$ to be the MPC policy (i.e. the policy produced by running the MPC algorithm) given in Equation 3, with the cost function $C_g$ (described in line 233). We have made an edit explicitly defining this around line 170.\n\n3. We changed $S$ to $X$ and $\\\\tilde{\\\\mathcal{S}}$ to $\\\\mathcal{X}$ for the domain of our cost function throughout our paper. Thanks for the suggestion!\n\n4. (+ 5., 8.) The confusion here is that variables $S’$ and $\\\\tau^*_{ij}$ are themselves made of dynamics data in the sense that they encode transitions that come from our dynamics.\n\nIn this work we assume the unions between the dataset D and other mathematical objects such as the trajectories $\\\\tau_{ij}^*$ or the next states $S’$ given $X = (s_i, a_i)$ and some data do the natural thing and construct the state, action, and next state triples that allow them to fit as elements of a set of data. We added a sentence at the end of the preliminaries clarifying the meaning of this operation.\n\n6. 
One classic example where naive exploration cost functions such as $C_e$ work poorly is the noisy-TV problem, where the agent is presented with unpredictable noise that is irrelevant to the task. TIP will correctly identify that this information does not affect the dynamics which matter to the task and ignore it.\n\n7. This equality follows from the conditional-entropy definition of mutual information and the fact that it is symmetric, i.e. $I(X;Y)=H(X)-H(X\mid Y)=H(Y)-H(Y\mid X)$.\n\nHowever, we felt that a derivation is unnecessary here since these are both well-known properties of mutual information (for example, listed as properties on the [Wikipedia page](https://en.wikipedia.org/wiki/Mutual_information#Properties) on mutual information).\n\n9. In line 220 we say explicitly that this covariance is easy to calculate using a GP and give a reference. We also mention in Section A.3 that we use a squared exponential kernel and give details on the kernel fitting procedure.\n\n10. We give full hyperparameters for TIP in Tables 4 and 5 in the appendix. As we have over a dozen comparison methods, we believed it would be excessive to include tables of hyperparameters for each of the other methods used. We will release code for our method and all comparison methods, which will allow the experiments to be easily repeated by anyone interested.\n\nThanks again for your feedback on the paper. Comments like these allow us to make concrete improvements that help people understand our work more clearly. We hope that our modifications and clarifications have resolved your questions.\n\nFinally, if you find our response satisfactory, we respectfully ask that you consider increasing your score. If it is still unsatisfactory, please let us know if there is anything else that we can do or clarify to improve this paper.\n", " Thank you for your detailed and helpful review. We appreciate the positive feedback and hope to address your concerns in a way that improves the quality of our submission. We largely agree with your summary of the paper and its contributions. \n\nPlease see our note to all reviewers regarding high dimensional RL problems as it was a common concern.\n\n### **Regarding** $\tau^*$:\nUnder our assumptions $\tau^*_{ij}$ is the optimal trajectory for a particular sample from the posterior $T_j \sim P(T | D)$ and start state $s_i$. As long as the true dynamics are in the support of the posterior, the support of the true optimal trajectory (the states visited by executing an optimal policy from the start state distribution on the GT dynamics, which are by definition reachable) should be a subset of the support of $P(\\\\tau^*|D)$, since the GT dynamics $T$ must be in the support of $P(T|D)$. The core objective of the cost function is to identify the dynamics needed to plan the optimal trajectory as quickly as possible.\n\nThat being said, we are certainly constrained to only collect data from places that are reachable by the exploration policy. This is an inherent constraint of the rollout RL setting and we do not avoid this.\n\n### **How much of TIP's performance is attributable to GPs?**\nGPs have been shown in PILCO [1] and many other works [2][3] to be notably sample-efficient dynamics models in regimes where they are statistically and computationally capable of fitting the MDP dynamics.
As mentioned elsewhere, they also allow us to compute an estimate of $C_{\\\\tau^*}$ using only Monte Carlo sampling and the assumption of a good planner on known dynamics; we aim to extend to more scalable models using other techniques that compute a similar quantity.\n\nPETS [4] uses probabilistic neural ensembles and a stochastic MPC algorithm, and is comparable to PILCO [1] and our MPC baseline, which are similar methods built on a GP model. In our experience, the GP has better sample complexity on the more limited set of problems with well-shaped reward functions and dynamics that are amenable to being fit by a GP, but it fails in many problems where PETS succeeds due to PETS's more flexible model. We believe that an approximation of the $C_{\\\\tau^*}$ cost function that works with neural network models would explore more effectively than simply exploiting the model as in PETS or our MPC baseline; however, we leave such extensions for future work.\n\n### **Why does TIP sometimes outperform BARL?**\nBARL optimizes its acquisition function by simply uniformly sampling the domain, evaluating the acquisition function at each sampled point, and choosing the maximum. This will work for very low-dimensional problems, but as the dimensionality increases to the 10D beta + rotation problem, this very simple optimization algorithm becomes much less effective. The combination of an iterative optimization algorithm (CEM) and the simple fact that TIP is forced to start from the initial state helps mitigate the difficulty of optimizing the (nonconvex) mutual information quantity. \n\n### **Odds and ends**\nWe think our sentence in Section 5 was unclear. We define solving the problem as reaching performance equivalent to an MPC controller given GT dynamics. This is a definition used in our evaluation procedure and not a claim of anything further. We have updated the wording to make this less ambiguous.\n\nWe are actively looking into various approaches for including a terminal value function for the cost function we introduce in the paper. We believe this could both improve the scalability of the method (as NN models would work) and its ability to handle infinite-horizon problems. Thanks for the suggestion!\n\nWe appreciate your review and feedback!\n\n[1] PILCO: A Model-Based and Data-Efficient Approach to Policy Search, Deisenroth & Rasmussen, ICML 2011\n\n[2] Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control, Kamthe and Deisenroth, AISTATS 2018\n\n[3] Gaussian Processes in Reinforcement Learning, Kuss & Rasmussen, NeurIPS 2003\n\n[4] Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models, Chua et al, NeurIPS 2018\n", " ### **Questions:**\nQ1: The correct interpretation of the modeling error is that the error on the points queried by the planner on the ground truth dynamics is much smaller than the error on the randomly chosen points in the domain. As the domain could be arbitrarily larger than the support of the optimal trajectory, it is likely that predictions would be poor on a uniform sample of the much larger set. We believe that this is an unavoidable consequence of task-oriented model-based reinforcement learning with very limited data and not unique to our method.\n\nQ2: Though we include error bars in our plots in Figure 3, we agree that box plots are a good way to present the statistical information around performance. We have added these plots to the appendix in the updated version of our paper.
Thanks for the suggestion!\n\nQ3: We disagree with the premise that MPC performs comparably. MPC takes 2.2x, 1.7x, 1.5x, >2.5x, and 3x more samples than TIP to solve our benchmark problems. However, model-based reward maximization algorithms using GPs have consistently shown strong results with few samples on simple to moderately complex tasks. We are therefore not too surprised that MPC performs well on the examples here, especially since we made sure the planner was sufficiently strong to find a good solution to each problem.\n\nQ4: The problems that DIP performs well on are the lower dimensional problems where it is possible to simply fully explore the state space and solve the problem at test time. Once the dimensionality increases it is no longer possible to easily fully explore the state space, and therefore DIP doesn’t perform well. MPC is able to be more directed, but suffers due to its focus on maximizing reward rather than directly optimizing for additional information. As a result, it doesn't explore as quickly as TIP but, at the same time, doesn't get stuck like DIP.\n\nQ5: In the BARL paper, BARL performs slightly better on Pendulum (16 datapoints vs 21) and cartpole (91 points vs 111), exactly the same on Reacher, and substantially better on the beta tracking environment (96 vs 186), compared with the results we report in this paper. We believe the pendulum and cartpole results are due to small fluctuations in performance and the fact that we evaluate performance periodically as data is acquired. We are sure that, for the classic control environments, we used the same ones as in the BARL paper. \n\nFor the beta tracking environment, we trained the dynamics using our plasma data to try to be similar to those from the BARL paper, but there ended up being differences in the ultimate environment. We added a comment to our description of the control problem clarifying this.\n\nFinally, we hope our response and updates to the paper addressed most of your concerns with the submission. If so, we respectfully request that you increase your score. Please let us know if there is anything else we could clarify. Thanks again for your feedback as it was instrumental in improving this paper.\n\n[1] Batch Bayesian Optimization via Local Penalization, Gonzalez et al, AISTATS 2016\n\n[2] The Parallel Knowledge Gradient Method for Batch Bayesian Optimization, Wu & Frazier, NeurIPS 2016\n\n[3] Shyam, P., Jaśkowski, W., & Gomez, F. (2019, May). Model-based active exploration. In International conference on machine learning (pp. 5779-5788). PMLR.\n\n[4] Pathak, D., Gandhi, D., & Gupta, A. (2019, May). Self-supervised exploration via disagreement. In International conference on machine learning (pp. 5062-5071). PMLR.\n\n[5] Schultheis, M., Belousov, B., Abdulsamad, H., & Peters, J. (2020, May). Receding horizon curiosity. In Conference on robot learning (pp. 1278-1288). PMLR.\n\n[6] Tschantz, A., Millidge, B., Seth, A. K., & Buckley, C. L. (2020). Reinforcement learning through active inference. arXiv preprint arXiv:2002.12636.\n\n[7] Efficient Model-Based Reinforcement Learning through Optimistic Policy Search and Planning, Curi et al, NeurIPS 2020\n\n[8] PILCO: A Model-Based and Data-Efficient Approach to Policy Search, Deisenroth & Rasmussen, ICML 2011\n", " Thank you for your detailed review. We appreciate your feedback and aim to satisfy your concerns while also improving the paper. We broadly agree with your summary of the paper’s contents.
We’ll address your concerns below and highlight the changes we made in response to them, which are now present in the updated version of the paper.\n\n### **Claims about algorithm performance**\nWe understand that the claims in the abstract and introduction were imprecise and might give the wrong impression. We have changed them to be more specific about how much we improve over different types of algorithms, so that the improvements are clearer.\n\n### **Originality**\nThough the idea of maximizing information gain with respect to the optimal trajectory is introduced in the BARL paper, we believe that the generalization of the acquisition function from a single point to a joint information gain over a batch used for planning is one which wasn’t obvious in the BARL paper. We see similar contributions of this type in multiple other works in the BO literature [1][2]. We address the scope of our novelty on line 183, saying:\n\n> “this is the same overall goal as that of Mehta et al., where the $EIG_{\\\\tau^*}$ acquisition function was introduced… However, in this paper we generalize this acquisition function in order to allow for sequential information collection that accounts for the redundant information that could be collected between timesteps.”\n\nHowever, we also added an additional citation to BARL to the introduction and some text in the related work emphasizing the connection. \n\n### **Baselines**\nThe exploration algorithms you cited [3][4][5][6] can be broken down into two categories. We see [3], [4], and [5] as pure exploration algorithms that don’t consider the task being solved and are closely related to our DIP baseline, though they use various tricks to approximate information gain with neural networks or, in the case of [5], a Bayesian linear model. As we mention on line 168, [3] is essentially an algorithm that optimizes an EIG criterion (equation 4 from that paper) that can be shown to be equivalent to $C_e$ in our paper. [4] similarly optimizes the model disagreement in a predictive ensemble, which is a quantity analogous to the posterior predictive variance of a GP model, and [5] is quite similar, using a curiosity objective over future timesteps (equation 2 in that paper is very close to the DIP objective). We believe that DIP is the best member of this family of algorithms to compare against as it removes the modeling aspects (especially since GPs perform well) and allows the two exploration strategies to be compared with the same dynamics estimation and planning algorithm. We added a short note to our updated draft clarifying this.\n\n[6] is a paper that we hadn’t seen before, and we agree that it is of a similar spirit. We have conducted additional experiments where we ran the FEEF algorithm from [6] on our benchmark tasks under several hyperparameter settings similar to those used in their experiments. TIP solves each task with a small fraction of the data that FEEF uses, and FEEF fails to solve 3 of our 5 tasks. We have included these results in our paper.\n\nWe believe that HUCRL and TS are similar to TIP and FEEF as methods which attempt to gather task-relevant information about the dynamics and are therefore specifically relevant comparisons. We agree that the results we observe from these algorithms are disappointing. However, we spent significant effort attempting to tune the authors’ implementations of HUCRL/TS, and even consulted with the first author of the HUCRL paper [7] over Zoom to ask for advice on tuning hyperparameters.
After this effort we decided to include the best results we could achieve using the code provided.\n\nFinally, in order to make FEEF fit and with the aim of deemphasizing the comparison to the model-free methods, we moved PPO and TD3 to the appendix.\n\n", " Thank you for your review of our paper. We appreciate the positive feedback on our method and results.\n\nPlease see our note to all reviewers regarding high-dimensional RL problems, as it was a common concern.\n\nWe’ll address your other comments below:\n\n### **Model-Free Methods**\nWe agree that it is widely accepted that model-free methods have comparatively poor sample complexity on classic control problems where dynamics are smooth. However, we also evaluate all methods on plasma control environments, which are typically solved by model-free methods [1][2] for a number of reasons. We believe that in general it is informative to include results that could plausibly be used to solve many RL problems even when they are not necessarily the strongest in the dimension being analyzed. Given that we include seven model-based closed-loop baselines and three model-free ones, we believe that our set of comparative methods is robust. However, in response to another reviewer’s concern, we added a related directed exploration technique, FEEF, to our experiments and took the opportunity to deemphasize the comparison to model-free methods by moving PPO and TD3 to the appendix.\n\n[1] Jaemin Seo et al 2021 Nucl. Fusion 61 106010\n\n[2] Magnetic control of tokamak plasmas through deep reinforcement learning, Degrave et al, Nature\n\n### **Robust MPC Methods**\nBoth chance-constrained MPC and tube MPC are robust MPC methods which take the model uncertainty into account when synthesizing trajectories for execution and can achieve various guarantees for performance. These methods are in some sense complementary to the progress in TIP. TIP is a method of exploring the dynamics to solve a problem using few samples. One configuration that could make sense is to use TIP to obtain data during an agent’s exploration phase and then a robust MPC method during deployment to achieve good performance given the remaining model uncertainty. Further integrating these methods by exploring explicitly to find points that improve the robust performance of the MPC controller is an interesting direction we leave for future work.\n\nFinally, if our response satisfied your concerns we respectfully ask that you consider increasing your score. Please let us know if there is anything else that we can do or clarify to improve this paper. Thanks again for helping us improve the paper.\n", " Many of the reviewers asked for additional evaluations of our method on high-dimensional environments. We agree that they are an interesting and important subset of RL tasks and that scaling to high-dimensional environments up to text and images is an important goal of the field. \n\nOur work focuses on a new method for task-directed exploration where actions are chosen with the goal of maximizing the future information gain about the optimal trajectory. In order to best understand the performance of this method, in this paper we have focused on the setting where we needed to make the smallest set of assumptions to apply these techniques: problems where the dynamics can be modeled by a Gaussian process and where a relatively simple trajectory optimization method suffices to find a good trajectory given GT dynamics. 
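To make this concrete, here is a minimal sketch of the resulting computation (illustrative only, not our exact Algorithm 2: it fixes the kernel hyperparameters, treats a single output dimension, omits observation noise in the entropies, and approximates the expectation over tau* with a few posterior samples):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel between two sets of (state, action) inputs.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def posterior_cov(Xq, Xd, noise=1e-3, ls=1.0):
    # GP posterior covariance at query inputs Xq given data inputs Xd.
    K = rbf(Xd, Xd, ls) + noise * np.eye(len(Xd))
    Kq = rbf(Xq, Xd, ls)
    return rbf(Xq, Xq, ls) - Kq @ np.linalg.solve(K, Kq.T)

def gauss_entropy(cov, jitter=1e-8):
    # Entropy of a multivariate Gaussian with the given covariance.
    k = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov + jitter * np.eye(k))
    return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)

def cost_tau_star(S, D_inputs, tau_star_samples, ls=1.0):
    # C_tau*(S) = -I(y_S; tau*) ~ -(H[y_S | D] - E_tau* H[y_S | D u tau*]),
    # where tau_star_samples are input locations of optimal trajectories
    # found by planning on posterior function samples.
    h_prior = gauss_entropy(posterior_cov(S, D_inputs, ls=ls))
    h_cond = np.mean([
        gauss_entropy(posterior_cov(S, np.vstack([D_inputs, tau]), ls=ls))
        for tau in tau_star_samples])
    return -(h_prior - h_cond)
```

The property exploited here is that a GP's posterior covariance depends only on input locations, so conditioning on a sampled optimal trajectory requires no fantasized observation values, which keeps the estimator cheap.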
In these settings we attempted to fully ablate our method and tease out the contribution of the $C_\\\\tau^*$ cost function and its effects on the learned dynamics model compared to a wide range of methods which often used the exact same prior and planner. We believe that this is the most controlled evaluation possible for understanding the method.\n\nOne unfortunate consequence of our choice of Gaussian process dynamics is that GPs are widely known to have sample complexity exponential in the dimensionality of the domain [1] and computational complexity cubic in the number of datapoints. Across the field of RL using Gaussian processes this has restricted methods to evaluate on environments with low to moderate dimensionality [2][3][4]. Though we would have liked to evaluate our method on high-dimensional experiments such as Ant or Atari, it is not computationally practical for GP-based methods using exact computations and a sampling-based planner.\n\nThere are also quite a number of methods for scaling GPs [5] or using hybrid models with neural networks or other approaches to Bayesian dynamics modeling. Each of these requires some thought about the best way to estimate the $C_\\\\tau^*$ cost function given a new kind of dynamics model. As we mention in the conclusion, we are actively working on exploring the space of methods to further scale this approach, but it will take substantial trial and error, implementation effort, and methodological improvement. We hope to one day present an RL algorithm that is simultaneously state-of-the-art in sample efficiency, scalability, and robustness to misspecifications. But we believe that it is not necessary for a contribution to solve all problems at once. \n\nWe have added language to the paper making it clearer that our control problems are low-to-medium dimensional. \n\n[1] Gaussian Processes for Machine Learning, Rasmussen and Williams, MIT Press 2006\n\n[2] PILCO: A Model-Based and Data-Efficient Approach to Policy Search, Deisenroth & Rasmussen, ICML 2011\n\n[3] Data-Efficient Reinforcement Learning with Probabilistic Model Predictive Control, Kamthe and Deisenroth, AISTATS 2018\n\n[4] Sample Efficient Reinforcement Learning with Gaussian Processes, Grande et al, ICML 2014\n\n[5] Conditioning Sparse Variational Gaussian Processes for Online Decision-making, Maddox et al, NeurIPS 2021\n", " This paper presents an exploration method for model-based RL called trajectory information planning (TIP). The authors present an algorithm to maximize the expected information gain for the trajectory during planning. The method has been demonstrated on several control problems and compared with some baselines. Strengths:\n1. The information gain metric is interesting and seems novel to me.\n2. The authors can show improvement over some chosen baseline methods\nWeaknesses:\n1. I think the evaluation of the paper needs more work.\n2. Would it be possible for the authors to perform comparisons on some high-dimensional problems? Also, I think it is unfair to compare to model-free methods, as they are known to be slow compared to model-based methods on these classical control tasks. If you would like to compare your method to model-free methods, please select some tasks where the SOTA MBRL techniques don't perform well compared to model-free methods, and try to show that your method succeeds. Otherwise the comparison seems unfair. 1. How would your method work for high-dimensional tasks? How would your method compare against a stochastic MPC approach, 
such as chance-constrained MPC, or tube-MPC-style approaches? In general, such approaches tend to work well on classical control systems.\n2. How would your method scale to high-dimensional systems? Did you try implementing your method on acrobot- or humanoid-like systems? 1. I think the authors could address some limitations with respect to computation and scaling of the method to high dimensions. That could be useful insight for the readers.", " The paper extends Bayesian Active Reinforcement Learning (BARL) [38] by lifting its Transition Query RL (TQRL) assumption (TQRL means that the agent can sample individual transitions $(s,a,r,s')$ by querying the simulator at an arbitrary $(s,a)$ pair). Instead, only state-action pairs from collected trajectories are used in this paper.\n\nThe main contribution is Algorithm 2 for computing the cost function $C_{\\tau^*}(S)$, the negative joint expected information gain about the optimal trajectory. The average cost is subsequently optimized via the cross-entropy method (CEM). The computational complexity of one iteration is derived, and experiments on 5 environments for closed-loop policies (the algorithm applied in MPC fashion) and 3 environments for open-loop policies are reported.\n\nThe proposed method Trajectory Information Planning (TIP) is shown to perform comparably or even better than BARL in the sense of requiring fewer samples. The fact that TIP is even better than BARL is surprising since BARL can sample the state-action space more freely, but the authors hypothesize that this is because BARL samples uniformly whereas TIP has a more sophisticated scheme and forces initialization at the start distribution.\n\nTo demonstrate that explicit exploration is useful, TIP is shown to require fewer samples than the methods which don't explicitly encourage exploration (MPC, PETS, SAC, TD3, PPO).\n\nFurthermore, TIP is compared to a few baselines — HUCRL and TS — that rely on upper confidence bound (UCB) and Thompson sampling (TS) over the dynamics and shown to provide better exploration by requiring significantly fewer samples.\n\nFinally, a few ablations of the proposed method are considered that either only use the cost function $C_g$ (used in MPC) or the entropy objective $C_e$ (used in DIP, EIG$_T$). A variant of TIP where the information gain is split over time steps called sTIP is also considered. All these methods perform comparably, according to Table 1, however TIP shows the best sample complexity. Strengths\n- clarity: well-written paper, with good structure and clear presentation\n\nWeaknesses\n1) Significance: medium to low. Too dramatic and misleading claims in the abstract\n - \"2-200x fewer samples compared to 14 baselines\" — the 200x improvement is w.r.t. methods that don't compute information gain (PETS/SAC/TD3/PPO). Such a comparison does not seem fair. There have been a lot of algorithms that propose various versions of information gain and artificial curiosity; therefore a comparison should be with respect to those algorithms, or at least w.r.t. BARL or MPC with an entropy objective. Compared to these algorithms, the contribution is still valuable but the statement would be more sober.\n2) Originality: fair\n - Reading the paper briefly makes an impression that the whole approach of maximizing the information gain about the optimal trajectory is new. In particular, this is also claimed in the abstract. However, all the derivations and ideas are already presented in BARL [1]. 
The main contribution of this paper compared to BARL is to compute the InfGain using only sampled trajectories. It should be made much clearer what the contribution of the present paper is and what is actually novel here.\n3) Quality: fair. A lot of evaluations but comparison to wrong baselines\n - As mentioned above, the baselines are either from a different category (SAC/TD3/PPO) or don't use InfGain (MPC/PETS) or seem to be badly tuned or not applicable (HUCRL/TS) or are versions of the proposed algorithm (sTIP, DIP, EIG$_T$). On the other hand, a variety of exploration bonuses and surrogates for the InfGain have been proposed, e.g., [2,3]. In particular, the proposed method appears very similar to [4] and [5], and therefore either a comparison would be desirable or a statement of why such a comparison is not necessary.\n\n\nReferences \\\n[1] Mehta, V., Paria, B., Schneider, J., Ermon, S., & Neiswanger, W. (2021, September). An Experimental Design Perspective on Model-Based Reinforcement Learning. In International Conference on Learning Representations. \\\n[2] Shyam, P., Jaśkowski, W., & Gomez, F. (2019, May). Model-based active exploration. In International conference on machine learning (pp. 5779-5788). PMLR. \\\n[3] Pathak, D., Gandhi, D., & Gupta, A. (2019, May). Self-supervised exploration via disagreement. In International conference on machine learning (pp. 5062-5071). PMLR. \\\n[4] Schultheis, M., Belousov, B., Abdulsamad, H., & Peters, J. (2020, May). Receding horizon curiosity. In Conference on robot learning (pp. 1278-1288). PMLR. \\\n[5] Tschantz, A., Millidge, B., Seth, A. K., & Buckley, C. L. (2020). Reinforcement learning through active inference. arXiv preprint arXiv:2002.12636. 1) In Fig. 3, \"Model MSE on Random Set\" is bigger than \"Model MSE on Current MPC\" by 4 orders of magnitude. This seems to imply that the model is quite bad outside of the observed data. How should one interpret it? Can one then rely on such a model?\n2) Table 1 shows only the median value over 5 runs. It would be useful to see the whole distributions, at least in the appendix. E.g., display a boxplot for each environment. Otherwise, it is hard to judge how reliable the numbers in Table 1 are and what the variance in these results is.\n3) MPC seems to perform quite comparably to TIP, and it requires significantly fewer samples than SAC/TD3/PPO/HUCRL/TS, despite the fact that it has no notion of exploration and is only optimizing the cost function $C_g$. This seems to indicate that explicit exploration may not be so necessary in the considered tasks or that MPC somehow performs sufficient exploration anyway. It would be worth providing a discussion of this result.\n4) DIP appears very similar to MPC in Table 1 despite the fact that they are using completely different objective functions. DIP is only using the entropy objective $C_e$ whereas MPC is only maximizing the reward $C_g$. Such similarity seems to indicate that the reward is somehow correlated with the exploration in the considered tasks. Or is there a different explanation?\n5) Results for BARL are better in the BARL paper (Table 1 here https://arxiv.org/pdf/2112.05244.pdf) compared to the results in Table 1 in the present paper. What is the reason for that?\n Technical limitations are described sufficiently. There is no discussion of the negative societal impact. 
The authors could indicate potential dangers of active exploration in applications of RL.", " The paper aims at improving data efficiency for RL, targeting real systems where data acquisition is normally costly.\nThe authors consider model-based control for MDPs with explicitly modelled dynamics.\nControl is relegated to MPC, the procurement of parametric (amortized) policies is left out of scope.\nThe focus is on obtaining useful data for learning the dynamics model as fast as possible, so that MPC can successfully control the agent.\nThe method shows significant improvement in data efficiency in 5 different environments with simple to moderately complex dynamics.\n\nThe main premise is that explorative actions are selected such that maximum information (in the sense of infogain, mutual information, MI) is gained for the optimal future trajectory $\\tau^*$ under the **current** dynamics model, noting that this is better than plain max. entropy RL, as it takes the task into account.\nThis is captured by the *Expected Information Gain* (EIG) objective, which has been proposed for the RL setting in prior work [1, 2].\nIn this paper, the objective is extended to account for the acquisition of data in sequence, i.e. in the form of rollouts in the real system.\nTo me it appears that this is a step forward from the cited prior work [1], where the system could be queried arbitrarily (at any state of the agent, coined the TQRL setting), which is unrealistic on real hardware.\nOne implication of this treatment is that mutual information is evaluated between the optimal future trajectory $\\tau^*$ and joint sets of future states corresponding to candidate future control sequences, i.e. it serves as the objective for evaluating MPC rollouts and accounts for possible information overlaps into the future.\nThe method is called *Trajectory Information Planning* (TIP).\n\n**References**\n\n[1] Mehta, V., Paria, B., Schneider, J., Ermon, S. and Neiswanger, W., 2021. An Experimental Design Perspective on Model-Based Reinforcement Learning. arXiv preprint arXiv:2112.05244.\n\n[2] Neiswanger, W., Wang, K.A. and Ermon, S., 2021, July. Bayesian algorithm execution: Estimating computable properties of black-box functions using mutual information. In International Conference on Machine Learning (pp. 8005-8015). PMLR. **Strengths**\n- The posed exploration objective appears sound and widely applicable. I also find it fitting, as it is about reducing the predictive entropy of the optimal trajectories MPC will actually produce, by virtue of training the dynamics model on data that is informative about them.\n- While the core idea of the objective is not novel (see [1]), I believe the extension to realistic data acquisition, i.e. in sequences, is a step forward.\n- The authors make it abundantly clear that information overlap in the acquired data sequences needs to be accounted for, and the method complies with that.\n- Motivation is easy to follow across the manuscript, exposition is clear.\n\n**Weaknesses**\n- The method hinges on the assumption that the posterior over the dynamics model can be updated in closed form (eq. 6 and 7) and that the entropy of predictions is easy to evaluate. This is required for the evaluation of the posed MI objective. 
Therefore the method limits itself to GP dynamics (just like in [1]); black-box neural dynamics models would encumber the evaluation of the objective.\n- GPs can be limiting both in terms of computation and expressivity for high-dimensional state spaces;\n- The authors use a variation of the cross-entropy method (CEM) for MPC optimization. MC-based optimization techniques are easy to apply, but might scale poorly to high-dimensional control spaces.\n- While I find the selection of experimental tasks reasonable, a lot of the environments are low-dimensional and with nice and smooth dynamics. How would the method fare on something more complex, like the ant locomotion task? Would the GP still be able to capture the transition accurately? For me it remains an open question whether the scheme is applicable to more complex systems (i.e. scalability).\n- The mutual information objective is formulated for the optimal trajectory $\\tau^*$ w.r.t. the current dynamics posterior. I think it is important to highlight that the \"optimal trajectory\" in that sense is not the ideal trajectory, but the best one available based on the current model fit. So overall acquisition success is still subject to the reachability properties of the state-space of the system.\n\nOverall I find this a good contribution that further explores an interesting idea already introduced in prior work [1]. Thus I lean towards acceptance.\n\n**Minor remarks**\n- The qualifier \"simply\" appears too often in the text (11 times).\n - The **MPC** ablation variant from section 5, i.e. a GP + MPC with no exploration considerations, seems to perform very well in terms of data efficiency. This is expected, as the interpolation properties of GPs are a strong prior and aid generalization when the system dynamics behave nicely in the Lipschitz sense (PILCO is a good example of this). Do you believe the TIP objective would work just as well if this aspect was not there, i.e. without the inherently fast adaptation of a GP due to its interpolation properties?\n- Can you further elaborate on why TIP performs better than BARL for the plasma control tasks? The text mentions that this could be due to the inefficiency of sampling a finite candidate set for BARL; do you believe that ends up being worse than the inherent proximity of sequential trajectory states acquired by TIP?\n- I don't think I agree with the statement that the median amount of data used by algorithms to solve the task is equivalent to an MPC controller with ground-truth dynamics, as stated in sec. 5 -- what is the justification?\n- Have you considered using a critic, to account for infinite horizons?\n\nAlso see my remarks above.\n I don't see any major negative societal impact. The authors acknowledge the main methodological limitations of the method in the conclusion, although these could be emphasized further.", " The authors develop Bayesian Model-Predictive Control in which a new cost function is proposed. The idea of the cost function goes back to the information-gain notion. The aim of the paper is to make the algorithm more sample-efficient. The problem considered in the article seems relevant, and the algorithms proposed by the authors seem interesting and original. The experiments show that, apparently, the algorithms are the most effective in terms of using samples. Nevertheless, in my opinion, the main weakness of the article is its descriptive part. 
To describe the ideas, the authors use mathematical language very carelessly, which can cause the reader serious problems in understanding the article (for more information, see the questions section). Thus, the article seems to be promising, but it requires significant refinement, in particular in the mathematical presentation, which should be more rigorous. 1. There is no description of what $P(T|D)$ is in formula (2). Also it is not clear how the distributions differ for open-loop and closed-loop problems.\n\n2. At the end of Algorithm 1, it is unclear how to create the policy $\\pi_g$ for the posterior distribution $P(T'|D)$.\n\n3. On lines 203 and 204, the use of the notation $S$ for pairs and $S'$ for one state does not seem intuitive. Maybe it would be better to use another letter instead of $S$?\n\n4. In formula (5), it is not clear what the result of $D \\cup S'$ is and what the symbol $p(S'|S,D)$ stands for, given that $D$ is a set of triples, $S$ a set of pairs, and $S'$ a set of single states. What is the connection between $D$ and $S$?\n\n5. In formula (6), it is unclear what the result of the union $D \\cup \\tau^*$ is, given that $D$ is a set of triples while $\\tau^*$ is a set of state-action pairs and a final state.\n\n6. Perhaps, since line 207 gives an idea of a new cost function, it would be better to give a simple example of when it works better than the previous ones.\n\n7. In line 210 the non-obvious equality $C_{\\tau^*}(S) = -I(S'; \\tau^*) = -I(\\tau^*; S')$ is given. It would be better to prove it or provide a link to the proof. \n\n8. In formula (7) it is unclear what the result of the union $D \\cup \\tau^*_{ij}$ is, given that $D$ is a set of triples while $\\tau^*_{ij}$ is a set of state-action pairs and a final state.\n\n9. In Algorithm 2, it is unclear how to compute the joint posterior covariance matrices. If a Gaussian process is used for this, then it would be better to specify the kernel function.\n\n10. What hyperparameters were used to run the experiments and baseline algorithms (so that they could be repeated)? The article does not have potential negative societal impact." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ "I7hDU6H56hf", "w5O8NSRq0dw", "Q1MYZ6Bbuio", "Y9a5ft_Mh4d", "H7KhpTwO041", "ma7gU0XI80W", "qYZjBE37trX", "Y9a5ft_Mh4d", "giDOo5jADpd", "Q1MYZ6Bbuio", "nips_2022_d0stFTU2dTI", "nips_2022_d0stFTU2dTI", "nips_2022_d0stFTU2dTI", "nips_2022_d0stFTU2dTI", "nips_2022_d0stFTU2dTI" ]
nips_2022_UZJHudsQ7d
Robust Calibration with Multi-domain Temperature Scaling
Uncertainty quantification is essential for the reliable deployment of machine learning models to high-stakes application domains. Uncertainty quantification is all the more challenging when training distribution and test distribution are different, even if the distribution shifts are mild. Despite the ubiquity of distribution shifts in real-world applications, existing uncertainty quantification approaches mainly study the in-distribution setting where the train and test distributions are the same. In this paper, we develop a systematic calibration model to handle distribution shifts by leveraging data from multiple domains. Our proposed method---multi-domain temperature scaling---uses the heterogeneity in the domains to improve calibration robustness under distribution shift. Through experiments on three benchmark data sets, we find our proposed method outperforms existing methods as measured on both in-distribution and out-of-distribution test sets.
Accept
Reviewers find the paper original, useful, thorough in its numerics (in the revision), and clearly written.
test
[ "Co-zHTguCs", "Gj5QtyPQs-U", "MsSI0qVIriB", "3TbWl_n8M5", "ZLvUmmIqjXG", "zaAewhEVLso", "0UoOIq10f8", "TZINaXREZsc", "yU6GRpgT9zj", "-TcaRtWdNYVF", "vpovcWJjAMb", "HXaWa_JkgYe", "uGzvs24tUfF" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank you again for your thoughtful review and valuable feedback!", " I appreciate the additional work that the authors have done during the review session. All of the concerns have been dealt by the authors.", " Thank you for engaging with us and helping us improve the paper. \n\nThank you for your suggestion on the discussion of the validation dataset splitting. We will add a discussion of this point to the main text of our final version.", " Hi, authors, Thanks for your updates.\n\n* For 'node', I mean 'neural ordinary equations'.\n* Thanks for your adding results for resolving my comments, which I think would help you improve this paper. \n* As for the validation dataset, it must be lack of data on a targeted domain so it motivates you to do multi-domain. I disagree with 'your MD-TS is not adding additional requirements' beacuse MD-TS is applied on a new setting that lacks data on a new domain. Therefore, the validation dataset splitting might somehow limits your contribution of this paper. Still, I will remain my score as a weak accept. No matter this paper is accepted or not, I suggest adding a discussion on this validation dataset splitting, which involves how the MD-TS performs without a validation dataset.", " >**Q3**: *In Definition 2.2, is the number of pooled data identical in each domain? If not, taking the average should not be enough. It should reflect the number of examples in each domain as well. Hence, taking expectation over K domains should be modified.*\n\n**A3**: Thank you for pointing this out. The number of samples could be different across domains. In our definition, we weight each domain equally to balance across domains on purpose, which we hope reflects how the calibration method performs on each individual domain. For example, if domain A has 100x the data as domain B, we still choose to weight them equally – our motivation for this choice is that we are hoping to achieve balanced coverage across many domains, to hopefully generalize to entirely new domains. Furthermore, in our experiments, we also visualize the ECE measured on each domain to provide additional information on model performance on every domain. We have added this discussion into Remark C.1 in Appendix C.1 to our revised submission to address this point to readers.\n\n>**Q4**: *The concept of the proposed method is tested on image domain. It would be interesting to see the method on other domains, such as NLG (translation).*\n\n**A4**: Thank you for your suggestion – this is very interesting to us. We have conducted new experiments on a NLP dataset – WILDS-Amazon (Amazon review dataset, multi-class sentiment classification task on review text, with a DistilBERT model). We added the new experimental results to Table 8 and Figure 9 in Appendix B.8 of revised submission. The results are exciting; we find that our proposed approach also outperforms existing methods on this NLP dataset. We think that the fundamental reason for the improved performance in the NLP experiment is again that using heterogeneous data leads to more robustness to new domains, and it is encouraging to see this play out in the empirical results. We will add more results (other models such as RoBERTa) on this dataset in the main text of our final version.", " We thank the reviewers for their careful reading of our paper and help with improving our manuscript. 
We sincerely appreciate that you find our work '*well-motivated*' (**Reviewer xbg7**, **Reviewer 4JGV**), that it proposes '*a simple and effective*' method for calibration under distribution shift (**Reviewer oWGL**), conducts '*extensive/comprehensive*' experiments to support the findings (**Reviewer oWGL**, **Reviewer xbg7**), and provides '*interesting theoretical justification*' (**Reviewer xbg7**). Two big changes to the paper are the addition of new calibration baselines (MC dropout and deep ensembles) in Appendix B.7 and the inclusion of a new experiment on text data in Appendix B.8.\n\nIn what follows, we try to address your concerns/questions and provide a detailed item-by-item response to your comments.\n\n======================================================================================\n\n>**Q1**: *The assumption in the Theorem 5.2 is strong. The assumption starts by assuming that Out of distribution is similar to a mixture of in-domain distributions.*\n\n**A1**: Thank you for your valuable feedback. As you say, our upper bound in Eq. (4) is sharpest when the OOD domain is similar to a mixture of in-domain distributions. Still, the theorem has informative content even when this doesn’t hold. In particular, even if the OOD domain is very different from the in-distribution domains, Eq. (4) still implies that we could decrease the risk upper bound on the OOD domain if we perform multi-domain calibration. More specifically, if we could achieve good calibration performance on each individual domain by using multi-domain calibration (meaning that the first term on the RHS of Eq. (4) is small), then the sum of the second and third terms on the RHS is always smaller than or equal to $\\frac{1}{2}d_{\\bar{\\mathscr{H}}}(\\mathscr{P}^{\\prime}, \\widetilde{\\mathscr{P}_X}) + \\lambda(\\mathscr{P}^{\\prime}, \\widetilde{\\mathscr{P}_X})$, where $\\mathscr{P}^{\\prime}$ is the pooled distribution or any individual domain distribution. We have added a remark (Remark C.2 in Appendix C.2) to our revised submission for better clarification.\n\nWe also wish to provide more context for Theorem 5.2. We view this mainly as a sanity check, proving that our approach is theoretically sound in some particular case. This is complementary to our extensive experimental results that show that our approach is effective in practice. We view the experimental results as the heart of our paper, and we see that the method performs well even in cases where the assumptions of Theorem 5.2 likely do not hold. \n\nIn summary, we partly agree with the reviewer’s criticism of Theorem 5.2, but wish to emphasize that we are not at all claiming that the assumptions are necessary for our method to succeed. Theorem 5.2 is one setting where theory can provide useful information about our method, but we observe good empirical performance much more broadly.\n\n>**Q2**: *This assumption, however, is too strong without proper citations or empirical results. If this assumption holds, then simply learning a universal temperature with data collected from the in-domains may be enough.*\n\n**A2**: Thank you for pointing this out. As suggested by Eq. (4) of Theorem 5.2, larger risks on in-distribution domains will lead to a larger upper bound for the risk evaluated on the OOD domain. On the other hand, as shown in Figure 1, a universal temperature is not sufficient to achieve good calibration performance on each individual in-distribution domain. 
In that experiment, there is a temperature that does well on the pooled data, but it does very poorly when we evaluate it separately on its component domains. Therefore, even in the mixture of in-distribution domains setting, a universal temperature is suboptimal and applying multi-domain temperature scaling could be better than using a universal temperature. We have added a remark (Remark C.3 in Appendix C.2) to discuss this point explicitly in our revised manuscript. Thank you for surfacing this discussion.\n", " >**Q4**: *Like TS, MD-TS should also adopt a validation dataset, right? Can we do multi-domain without the validation dataset?*\n\n**A4**: This is a great point and gets to the heart of MD-TS. The core reason that we are able to generalize better to new test domains is that the calibration is done with domains that are “fresh” to the model. That way, the uncertainty information is calibrated correctly for OOD tasks, so to speak. As you are suggesting, this is a critical aspect of our approach.\n\nCan we do this without the validation set? We have tried many other versions of MD-TS during development, and found that MD-TS with a validation set had consistently better results than other variants. (We do not report on all the other less effective versions in the manuscript for lack of space, but we experimented with many of them in the course of this project.) Our current understanding is that the validation set approach gives the best uncertainty information for the reasons above. It may be possible to accomplish this without the validation set, and we will continue experimenting with this. Still, at present, the most robust way is to use a validation set.\n\n\nLastly, we point out that even regular temperature scaling typically requires a validation set to avoid overfitting, so we are not adding any additional requirements above the usual temperature scaling approach.\n", " We thank the reviewers for their careful reading of our paper and help with improving our manuscript. We sincerely appreciate that you find our work '*well-motivated*' (**Reviewer xbg7**, **Reviewer 4JGV**), that it proposes '*a simple and effective*' method for calibration under distribution shift (**Reviewer oWGL**), conducts '*extensive/comprehensive*' experiments to support the findings (**Reviewer oWGL**, **Reviewer xbg7**), and provides '*interesting theoretical justification*' (**Reviewer xbg7**). Two big changes to the paper are the addition of new calibration baselines (MC dropout and deep ensembles) in Appendix B.7 and the inclusion of a new experiment on text data in Appendix B.8.\n\nIn what follows, we try to address your concerns/questions and provide a detailed item-by-item response to your comments.\n\n======================================================================================\n\n>**Q1**: *The baselines only include TS and max-softmax. Can you compare with more recent methods, such as node, sde-net, and others?*\n\n**A1**: Thank you for your suggestions on experiments. Besides TS and max-softmax, we also compared our proposed approach with histogram binning (HistBin), isotonic regression (Isotonic), and Bayesian Binning into Quantiles (BBQ) in Table 5, Appendix B.4 in our initial submission. 
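For context on exactly what is being compared in these experiments, MD-TS itself is only a two-step procedure; the following is a minimal sketch (simplified: the choice of embedding, any clipping of predicted temperatures, and other preprocessing details differ in our actual implementation):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.linear_model import LinearRegression

def fit_temperature(logits, labels):
    # Step 1: standard temperature scaling on one domain (NLL minimization).
    def nll(t):
        z = logits / t
        z = z - z.max(axis=1, keepdims=True)
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(labels)), labels].mean()
    return minimize_scalar(nll, bounds=(0.05, 20.0), method="bounded").x

def fit_md_ts(embeds, logits, labels, domains):
    # Step 2: linear regression from validation-set feature embeddings to
    # the per-domain temperatures fitted in Step 1.
    X, y = [], []
    for k in np.unique(domains):
        idx = domains == k
        t_k = fit_temperature(logits[idx], labels[idx])
        X.append(embeds[idx])
        y.append(np.full(idx.sum(), t_k))
    return LinearRegression().fit(np.concatenate(X), np.concatenate(y))

# Test time: t = regressor.predict(embed(x)); calibrate with softmax(logits / t).
```
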
Meanwhile, we have conducted new experiments (in comparison to deep ensembles and MC dropout) as *Reviewer oWGL* suggested, and the new results have been added to Table 7, Appendix B.7 of our revised submission.\n\nThank you for pointing out the SDE-Net [1] reference; we have added it to the related work section in our revised submission. We tried to implement SDE-Net on the RxRx1 and GLDv2 datasets, and we have not managed to get satisfactory results due to the time limit. We only get less than 5% classification test accuracy on both datasets, whereas the networks considered in the paper achieve more than 35% test accuracy on both datasets. We think this is mainly due to the fact that: (1) SDE-Net requires network architecture changes and applies neural SDE models that are different from the standard models we considered in this paper; (2) we could not find SDE-Net models that are pre-trained on the ImageNet dataset, and we applied ImageNet pre-trained models for RxRx1 and GLDv2 in our experiments. We will add the comparison results (SDE-Net) in our final version. \n\n[1] SDE-Net: Equipping Deep Neural Networks with Uncertainty Estimates. Lingkai Kong, Jimeng Sun, Chao Zhang. Proceedings of the 37th International Conference on Machine Learning, 2020.\n\n>**Q2**: *How does the optimization go? Is MD-TS still efficient compared with TS?*\n\n**A2**: Thank you for pointing this out. The optimization in MD-TS is computationally efficient: the linear regression problem (Step 2 of MD-TS) is cheap to solve, and Step 1 simply runs TS on smaller subsets. \nOverall, MD-TS is almost as efficient as TS. Using the standard sklearn.linear_model, it takes 39.8s/3.2s/4.1s on ImageNet-C/WILDS-RxRx1/GLDv2 for solving the linear regression problem of MD-TS, and the overall running time of MD-TS is 49.7s/4.6s/6.6s on ImageNet-C/WILDS-RxRx1/GLDv2. Standard TS takes 7.8s/1.2s/3.6s on ImageNet-C/WILDS-RxRx1/GLDv2. We have included the discussion on efficiency in Appendix B.9 and summarized the comparison results in Table 9, Appendix B.9 in our revised submission. \n\n>**Q3**: *In the introduction, there is only one example of multi-domain uncertainty. I suggest a more detailed elaboration on this motivation, since it’s the main contribution and claim.*\n\n**A3**: Thank you for your suggestion. We will add another two examples (listed below) of multi-domain uncertainty to the first paragraph of the introduction in our final version. \n\n'*As another example, a centralized model is trained on training data from existing clients in federated learning. It is important for the central server to provide uncertainty quantification for every client. Similar to the fMRI example, the centralized model should still produce valid uncertainty quantification for unseen new clients. Another example is applying animal recognition models on images in wildlife monitoring, where one set of camera traps corresponds to one domain, and the model will be deployed under distribution shift, i.e., new camera traps.*'\n\n", " We thank the reviewers for their careful reading of our paper and help with improving our manuscript. 
We sincerely appreciate that you find our work '*well-motivated*' (**Reviewer xbg7**, **Reviewer 4JGV**), that it proposes '*a simple and effective*' method for calibration under distribution shift (**Reviewer oWGL**), conducts '*extensive/comprehensive*' experiments to support the findings (**Reviewer oWGL**, **Reviewer xbg7**), and provides '*interesting theoretical justification*' (**Reviewer xbg7**). Two big changes to the paper are the addition of new calibration baselines (MC dropout and deep ensembles) in Appendix B.7 and the inclusion of a new experiment on text data in Appendix B.8.\n\nIn what follows, we try to address your concerns/questions and provide a detailed item-by-item response to your comments.\n\n======================================================================================\n\n>**Q1**: *Compare against deep ensembles and MC dropout.*\n\n**A1**: Thank you for your suggestions on the experiments. We have conducted new experiments and compared our proposed approach to deep ensembles and MC dropout on both WILDS-RxRx1 and GLDv2. We have included these new results in Table 7, Appendix B.7 of our revised submission. We found that our proposed approach outperforms deep ensembles and MC dropout on both datasets. Also, in our revised submission, we summarize these results for the reader in the main text in Section 4.1, saying “Further comparisons in Appendix B.7 show that these improvements continue to hold relative to two other calibration techniques: MC dropout and deep ensembles.” \n\nDue to computational constraints, we have not finished the experiments on ImageNet. We will add the comparison results (deep ensembles and MC dropout) on all datasets in the main text of our final version. \n\n\n>**Q2**: *Why was the temperature prediction model chosen to be linear?*\n\n**A2**: In our experiments, we found that simple linear regression, i.e., ordinary least squares (OLS), achieves competitive performance on all datasets we considered in this paper. We also investigated other approaches in Table 3, including non-parametric approaches such as kernel ridge regression (KRR) and K-nearest neighbors (KNN). We did not observe significant gains by using more complex approaches. Meanwhile, it is computationally efficient to solve linear regression and apply linear models for predicting the temperature for new test samples. \n\n\n>**Q3**: *Prior work has experimented with methods to learn to predict a temperature value (https://arxiv.org/pdf/1903.00802.pdf): they use a 2-layer MLP to predict the temperature.*\n\n**A3**: Thank you for pointing out this reference. We have included this reference in the related work section of our revised submission. Also, we have conducted new experiments using the 2-layer MLP to replace the linear model in our algorithm for predicting the temperature. Our preliminary results suggest that this 2-layer MLP does not outperform the linear model. For example, on WILDS-RxRx1 with ResNet50, the per-domain ECE of the 2-layer MLP is 5.62% whereas the per-domain ECE of the linear model is 5.25%. We will include the results of the 2-layer MLP in our final version. \n\n\n>**Q4**: *The ECE metric can be very sensitive when the number of examples in the test set is small and the examples are not distributed over all bins. In such cases it might be interesting to also compare on other more standard metrics (apart from MAE) like the Brier Score.*\n\n**A4**: Thank you for your valuable suggestion. 
The alternative of the Brier score is a great idea, and it makes our evaluations more complete. We will include the Brier score results in our final version. \n\nAs you say, we observe that the ECE requires a large number of data points, and we have been careful to find examples with enough data to reliably evaluate it. For the datasets we consider in this paper, most of the domains contain more than 2000 samples, where the ECE metric can provide reasonably good evaluations of different calibration methods. \n\n\n>**Q5**: *The authors do not have a section detailing potential societal impacts.*\n\n**A5**: Thank you for your suggestion. We have added the Societal Impact section (Appendix D) to our revised submission. \n", " > Can you compare with more recent methods, such as node, sde-net, and others?\n\nThank you for suggesting these recent methods! Is this (link: http://proceedings.mlr.press/v119/kong20b/kong20b.pdf) the **sde-net** method you were referring to? Could you also clarify the reference for the **node** method? Thank you! ", " The paper proposes a simple method for calibration that is robust to distributional shifts in the data. The proposed method extends the temperature scaling approach by learning to predict a suitable temperature value on an unseen domain. \nThe authors show empirically that their approach (MD-TS) can outperform other approaches (including vanilla temp scaling) on both in- and out-of-distribution test sets. \nAlong with this, the authors also provide a theoretical justification for their proposed method. \n\nCalibration methods generally fail to remain robust on out-of-distribution examples, and this paper works towards solving an important problem.\n Strengths:\n- The proposed method is both simple and effective at calibrating several models over multiple domains (an important problem) (+originality, +quality, +significance)\n- The paper provides an interesting theoretical justification for their proposed approach which gives a better intuition regarding calibration over multiple domains (+originality, +quality, +significance)\n- The analysis seems comprehensive; the authors also provide ablations for their approach\n- The paper is well explained and easy to follow (+clarity)\n\nWeaknesses:\n- I felt that the chosen baselines were not very strong. [1] show that temperature scaling by itself isn't very robust on out-of-distr. examples. Perhaps the authors could compare against some of the methods shown in [1] to be more robust, such as deep ensembles and MC dropout. \n\n\n[1] Ovadia Y, Fertig E, Ren J, Nado Z, Sculley D, Nowozin S, Dillon J, Lakshminarayanan B, Snoek J. Can you trust your model's uncertainty? evaluating predictive uncertainty under dataset shift. Advances in neural information processing systems. 2019;32.\n\n\nOverall, I feel this is a worthy contribution and vote for acceptance. Willing to increase my score if there's a more comprehensive comparison to previous methods. Why was the temperature prediction model chosen to be linear? \nPrior work has experimented with methods to learn to predict a temperature value (https://arxiv.org/pdf/1903.00802.pdf): they use a 2-layer MLP to predict the temperature. \n\nThe ECE metric can be very sensitive when the number of examples in the test set is small and the examples are not distributed over all bins. 
In such cases it might be interesting to also compare on other more standard metrics (apart from MAE) like the Brier Score.\n The authors do not have a section detailing potential societal impacts, but they talk about how calibration can be useful in critical applications in the introduction. ", " This paper proposes a type of temperature scaling that is more robust on out-of-distribution datasets. The work first learns K temperatures for K in-domain datasets. Then a linear regressor is learned to predict the K temperatures given feature embeddings, hence learning to predict a suitable temperature given a sample. The linear regressor is then applied to out-of-distribution samples, calibrating a model’s output. The paper verifies the idea on three datasets and illustrates improved calibration results by the proposed method. Strengths\n-\tThis paper asks an important question: How should one choose a temperature for predictions from out-of-distribution inputs? \n-\tThe empirical results show improved calibration scores with the proposed method over the baselines.\n\nWeakness\n-\tThe assumption in Theorem 5.2 is strong. It starts by assuming that the out-of-distribution data are similar to a mixture of in-domain distributions. This assumption, however, is too strong without proper citations or empirical results. If this assumption holds, then simply learning a universal temperature with data collected from the in-domains may be enough. Questions\n1.\tIn Definition 2.2, is the number of pooled data identical in each domain? If not, taking the average should not be enough. It should reflect the number of examples in each domain as well. Hence, taking expectation over K domains should be modified.\n -\tThe concept of the proposed method is tested on the image domain. It would be interesting to see the method on other domains, such as NLG (translation).", " This paper proposes a multi-domain temperature scaling method for distribution calibration with distribution shifts. The heterogeneity of temperature is used to improve the robustness. Numerical experiments and theoretical results are presented to validate the method. strengths:\n\n1. well-motivated\n2. clear presentation and good writing\n3. extensive results to support\n\nweakness\n\n1. more rigorous experimental results (more baselines and efficiency, detailed in the limitations)\n2. more introduction to highlight the motivation Like TS, MD-TS should also adopt a validation dataset, right? Can we do multi-domain without the validation dataset? 1. The baselines only include TS and max-softmax. Can you compare with more recent methods, such as node, sde-net, and others? 2. How does the optimization go? Is MD-TS still efficient compared with TS? 3. In the introduction, there is only one example of multi-domain uncertainty. I suggest a more detailed elaboration on this motivation, since it’s the main contribution and claim." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "Gj5QtyPQs-U", "zaAewhEVLso", "3TbWl_n8M5", "0UoOIq10f8", "HXaWa_JkgYe", "HXaWa_JkgYe", "uGzvs24tUfF", "uGzvs24tUfF", "vpovcWJjAMb", "uGzvs24tUfF", "nips_2022_UZJHudsQ7d", "nips_2022_UZJHudsQ7d", "nips_2022_UZJHudsQ7d" ]
nips_2022_Q6DJ12oQjrp
Sparse Interaction Additive Networks via Feature Interaction Detection and Sparse Selection
There is currently a large gap in performance between the statistically rigorous methods like linear regression or additive splines and the powerful deep methods using neural networks. Previous works attempting to close this gap have failed to fully consider the exponentially growing number of feature combinations which deep networks consider automatically during training. In this work, we develop a tractable selection algorithm to efficiently identify the necessary feature combinations by leveraging techniques in feature interaction detection. Our proposed Sparse Interaction Additive Networks (SIAN) construct a bridge from these simple and interpretable models to a fully connected neural network. SIAN achieves competitive performance against state-of-the-art methods across multiple large-scale tabular datasets and consistently finds an optimal tradeoff between the modeling capacity of neural networks and the generalizability of simpler methods.
Accept
This paper proposes a scheme to augment a trained neural network (considering in particular the case of unstructured, tabular data) by extending generalized additive models to the multi-layer neural setting in an unusual manner: higher-order derivatives from an initial deep neural network are used to select a sparse set of higher-order feature interactions on which to fit their augmented network. Reviewers considered the paper well written and easy to follow, found the method sound and well-motivated, and praised the experiments as thorough and the level of detail as adequate for reproducibility. Q2gD wondered specifically how the interpretability of these models measures up against post-hoc DNN interpretation methods; the authors responded with a new section in the appendix, causing Q2gD to raise their score. FZvZ had questions about the selection of the model order and how that might affect interpretability, which were adequately addressed in rebuttal. wcBj points to "relatively weak direct technical novelty", to which the authors reasonably respond that their forward selection method for interaction terms stands apart from typical approaches that involve backward selection or pruning; several confusing aspects and a suggestion for an explanatory visualization are addressed in a new section in the Appendix. The experimental results seem well chosen and convincing; even if the proposed method, SIAN, is not uniformly the best method on all tasks considered, it is a strong contender overall and in several cases handily outperforms the DNN baseline. The selection procedure seems well motivated and clever, while the architecture of the resultant additive model seems like one particular choice in a sea of possibilities. I doubt this paper will be the last word on the matter. Nonetheless, this seems like a valuable contribution to the literature on applying DNNs to tabular data and the intersection of GAM techniques with deep learning. I thus recommend acceptance.
train
[ "YO1lK65iRu9", "5gV1n3W2km", "y3MIeTLDesi", "eHLiaLYS4Ot", "vcx_6iB2oA_", "ha_tllF_MEy", "e7-B-aRBi", "TH2GxjpHDRm", "TuPSaTJ8CFg", "jSfws9BiuJk", "lME5n4dCg5", "i4AOlX6HnQA" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for helping us improve the manuscript with your suggestions. We hope we were able to sufficiently answer the majority of your questions.", " We are happy to have addressed all of your major concerns.", " We greatly appreciate your reconsideration and are glad to have addressed your concerns regarding interpretability. Also, we are further working on applying these techniques to other specific tasks, including computer vision, which were previously unthinkable for additive models. One of the key reasons to keep such a work separate is because of the need to focus on vision-specific interpretability and vision-specific architectures.\n", " Thank you for the response and revision to the paper. I think my questions are properly addressed. ", " I would thank the authors for the detailed responses, which addressed all my concerns. Thus I tend to accept this paper and keep my score.", " I'd like to thank the authors for their response. Especially, I find the discussion of the (newly added) Appendix C.3 very helpful. I have raised my score to 6.\n\nSorry if this sounds like a classic reveiwer #2 cliche, but I think that the paper can be further improved if the authors could test SIAN on harder tasks beyond MNIST, e.g., on CIFAR. Apparently, both SIAN and baselines solve the MNIST task near-perfect (~99% accuracy), implying that this benchmark is pretty much saturated. Training on a harder task will either further demonstrate the impressive performance of SIAN or reveal its limitation -- either way, it would be a great contribution to the community.", " > To my understanding, the main advantage of SIANs is not their data-fitting ability (DNNs are better in these perspectives) but their interpretability. \n\nWhile we agree that DNNs have a higher functional capacity, we believe that SIAN has not only higher interpretability, but higher robustness. In Figure 2b, we can see when we increase the functional capacity of SIAN (to be closer and closer to that of a DNN), we decrease the robustness of the SIAN model (note the widening gap between training/validation performance.) This ultimately leads to SIAN having a higher effective data-fitting ability, because it does not easily overfit to spurious correlations. Please note that SIAN outperforms MLPs on 5 out of the 7 datasets we consider.\n\n> Would it be possible to have a discussion / empirical comparison between the interpretability of SIANs and DNNs undergone posthoc feature-attribution?\n\nYes, we have added a brief section to the Appendix C.3 discussing this. Importantly, the key distinction between SIAN/ additive model interpretability and posthoc feature interpretability is that SIAN is a global method while feature attribution is a local method. Therefore, while SIAN displays the exact decision-making process of the model, posthoc attribution only gives a local approximation of the feature importance given a specified data sample. This means that to produce a similar plot to Figure 3, we can only consider a scatter plot of training samples instead of a heatmap of the true model. Oftentimes, these scatter plots can be too noisy to discern a clear trend as in Figure 3. 
Further, these attributions can sometimes be misleading, since MLPs do not naturally disentangle their features from one another; see [1], [2].\n\n> The applicability of SIANs, however, is currently confined to tabular datasets and MLPs.\n\nAlthough it is the primary focus of this work, we feel that the framework of SIAN is ultimately more general than both MLPs and tabular data. In section 5.2, we apply our framework to a different type of model besides MLP models, using instead the NODE architecture and NODE-GAM setup. Using this setup, we are able to advance the state of the art on one of the datasets we consider. We mainly focus on tabular datasets in this work because of their easily interpretable features; however, the techniques from SIAN can likely extend beyond tabular data. In preliminary experiments on MNIST, we have accuracies of: MLP, 98.9%; CNN, 99.6%; SIAN, 99.3%.\n\n\nUltimately, a comparison of interpretability is fundamentally both a task-dependent and a subjective matter. Nevertheless, we hope we have addressed many of your concerns and can follow up with additional details surrounding these questions.\n\n[1] “Sanity Checks for Saliency Maps”\n[2] “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead”\n", " > Table 2 is a bit confusing…\n\nThat is a very good point regarding Table 2; we have updated the table with arrows for each dataset to help the reader more easily interpret the results.\n\n> It might be nice to show the visualization in the discussion via some \"automated\" process\n\nYou will be glad to know that our codebase can automatically visualize the learned shape functions from a SIAN model in a style similar to Figure 4. (Other figures were given visual enhancement for the manuscript.) We have previously considered visualizing the entire SIAN-2 model for MIMIC-III in the appendices and have now added it to a new section, Appendix D.\n\n> Relatively weak direct technical novelty as it mostly combines Archipelago and generalized additive models.\n\nRegarding your main concern of technical novelty, we certainly admit that both Archipelago and GAMs are existing methods; however, it has been standard practice for decades to fit GAMs using all available terms. Historically, this has limited their application to only 1D or 2D functions. Even as recently as 2021, prominent papers fitting GAMs using DNNs have used the mantra of “train then remove” rather than using a feature interaction selection algorithm before training (see, for example [49] and [50]). To the best of our knowledge, despite the recent surge in interest in neural additive models, no existing work has been able to train 3D functions and higher. It is for this reason we believe there is novelty not in the individual methods, but in our application of the FIS algorithm to GAMs, where we combine Archipelago with heredity in a novel way.\n\n> it is also possible to produce these with some \"manual\" exploratory data analysis methods.\n\nYes, it is partially true that manual data exploration can produce plots similar to the ones in the discussion section. We have added a brief section C.3 to illustrate the differences a little further. The main difference between SIAN and post-hoc attribution of an uninterpretable model is that SIAN gives a global explanation of exactly why the model makes its predictions, whereas attribution only gives a local approximation of the decision, centered around the given data point. 
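To make the "global explanation" point concrete: a SIAN prediction is, by construction, an exact sum of low-dimensional shape functions over the selected feature sets. The following minimal PyTorch sketch (illustrative only: it omits our block-sparse implementation, and the layer sizes and names are placeholders rather than our released code) shows why each term can be read off and plotted directly:

```python
import torch
import torch.nn as nn

class SIAN(nn.Module):
    # Additive network over a pre-selected, sparse set of feature tuples.
    def __init__(self, feature_sets, hidden=32):
        super().__init__()
        self.feature_sets = feature_sets  # e.g. [(0,), (3,), (0, 3), (1, 2, 5)]
        self.shape_fns = nn.ModuleList([
            nn.Sequential(nn.Linear(len(s), hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for s in feature_sets])

    def forward(self, x):
        # The prediction is an exact sum of per-set shape functions f_S(x_S).
        terms = [f(x[:, list(s)]) for f, s in
                 zip(self.shape_fns, self.feature_sets)]
        return torch.stack(terms).sum(dim=0).squeeze(-1)
```

Because the output is literally this sum, each shape function can be evaluated over a grid of its one or two inputs and plotted exactly, with no local approximation involved.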
In terms of visualization, this essentially results in a scatter plot of attributions, rather than a heatmap or continuous plot. Often, these scatter plots are too noisy on their own to reveal the real trend, and fitting a model to these attributions could be met with the question: why not just fit an additive model in the first place?\n\n[49] \"Sparse Neural Additive Model: Interpretable Deep Learning with Feature Selection via Group Sparsity\"\n[50] \"GAMI-Net: An Explainable Neural Network Based on Generalized Additive Models with Structured Interactions\"", " > So how to effectively choose the feature interaction order used in SIAN for better performance?\n\nThe feature interaction order for best performance can easily be selected by looking at the validation performance. We find that, as a function of K, the validation error is typically convex: decreasing at first as we add capacity, but ultimately increasing as we begin to overfit. This pattern can be seen in all datasets and visualized at greater resolution in Figure 2b. \n\n> will it reduce the model's interpretability to some extent?\n\nAs you importantly mention, when we increase the feature interaction order, the shape functions become more difficult to interpret. Previously, practitioners were forced to choose between overly simple but interpretable bivariate models and complex but uninterpretable deep networks. Our work is able to bridge/interpolate between these two extremes, allowing practitioners greater control over the interpretability-accuracy tradeoff to suit their target application.\n\n> How do the training speed and storage change with increasing feature interaction order in SIAN?\n\nThere is a linear increase in storage size as we increase the number of feature interactions, but our block-sparse formulation allows for sublinear increases in training time. For example, on the Wine dataset in Table 4: increasing the interactions from 12 single terms to 78 pair and single terms (a 6.5x increase) results in a size increase from 86KB to 537KB (a 6.24x increase), whereas the training time only increases from 40 seconds to 42 seconds. This demonstrates the significant improvement in speed from our block-sparse technique.\n\nNevertheless, when considering higher-order interactions, this might still be insufficient. For the Song Year dataset, there are as many as 44 million possible quintuples to consider (90 choose 5). Training even a linear regression model with this many terms becomes infeasible. It is for this reason that we developed the FIS selection algorithm, which leverages Archipelago and heredity in a novel way to automatically select the most important feature interactions. Using this small selection of important feature interactions, we can easily train a SIAN-5 model.", " This paper proposes Sparse Interaction Additive Networks (SIANs). On an array of tabular datasets, SIANs perform comparably to neural networks and offer more interpretability than neural networks. **Strengths**. The paper is very well written. The goal of the paper -- scaling up interpretable additive models while making their training tractable -- is very well motivated. \n\n**Weaknesses**. In my opinion, this paper can be further improved by incorporating a discussion / empirical comparison between the interpretability of DNNs and SIANs. To my understanding, the main advantage of SIANs is not their data-fitting ability (DNNs are better in these respects) but their interpretability. 
However, there has been a surge of research in the post-hoc interpretation of neural networks, such as saliency maps. These DNN-based saliency maps (or other attributed features) give rise to meaningful interpretations while being very flexible: they can be applied to DNNs in a model-agnostic way. The applicability of SIANs, however, is currently confined to tabular datasets and MLPs. 1. In Tables 2, 3, and 4, are the DNN baselines here also the reference DNNs used in Feature Interaction Selection (FIS)?\n\n2. Would it be possible to have a discussion / empirical comparison between the interpretability of SIANs and DNNs that have undergone post-hoc feature attribution? As a concrete example, the authors can train a standard DNN on the California Housing dataset and visualize the key features crucial to its performance. Are these features meaningful as well? And do they give rise to the same interpretation as by SIANs in Figure 3? In my opinion, the interpretability of SIANs comes at the cost of being only able to deal with tabular datasets. In fact, even for tabular datasets, it is unclear to me whether the interpretability of SIANs is indeed greater than equipping conventional DNNs with feature attribution methods -- I encourage the authors to provide such a comparison. ", " This paper proposes Sparse Interaction Additive Networks (SIAN), which can effectively tackle the exponential number of high-order feature interactions and scale up to larger datasets. By leveraging heredity and interaction detection, SIAN achieves competitive performance across multiple datasets. The work also provides further insights into the generalization and capacity trade-off and a block-sparse implementation of neural additive models, which achieves better training speed and memory efficiency for neural additive models. #### Strengths:\n\n1. The paper is well organized and easy to follow.\n2. Sufficient implementation details are provided for reproduction.\n3. The experiment section consists of both quantitative results and several visualization analyses.\n\n#### Weaknesses:\n\n1. As shown in Table 2, SIAN-1 performs best on the Appliances Energy dataset, while SIAN-2/3 is better on Bike Sharing and SIAN-5 on the rest. So how to effectively choose the feature interaction order used in SIAN for better performance?\n2. When we increase the interaction order of additive models, will it reduce the model's interpretability to some extent, which may degrade the value of the performance improvement?\n3. Letter case should remain consistent, e.g., sian-1/2/3/5 in Table 2, but SIAN in the main text. How do the training speed and storage change with increasing feature interaction order in SIAN? The authors have adequately addressed the limitations and potential negative societal impact of their work.", " The paper discusses how to generalize neural additive models to higher-order interactions and how to do so efficiently. This is done in three steps: 1) a general DNN is trained; 2) a feature interaction algorithm based on higher-order derivatives with a large norm is used to choose which features to focus on for learning neural additive models; 3) these are then applied to an additive network to learn the final features. Experimental results demonstrate reasonable performance on regression and classification datasets. Strengths:\n- A sound solution for addressing the efficiency issues in selecting higher-order interactions for generalized additive models. 
\n- Extensive experiments on various classification and regression datasets. \n- The explanation of MIMIC III is particularly interesting. \n- The block-sparse implementation massively improves training, and compression reduces memory cost.\n\nWeaknesses:\n- Relatively weak direct technical novelty, as it mostly combines Archipelago and generalized additive models.\n\nQ1: Table 2 is a bit confusing. Both regression and classification metrics and datasets are listed, and for one higher is better while for the other lower is better. It would be clearer to explicitly indicate which is which, as the reader might have a hard time switching context between Section 4 and it. Also, it is not all datasets, since MIMIC III is in Table 3.\n\nQ2: It might be nice to show the visualization in the discussion via some \"automated\" process, as it would illustrate that this type of data analysis/visualization can be done automatically (what are the shared components in these post-processing steps that can be written into programs easily?). The results in the discussion section are nice, but it is also possible to produce these with some \"manual\" exploratory data analysis methods. Yes" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 3 ]
[ "eHLiaLYS4Ot", "vcx_6iB2oA_", "ha_tllF_MEy", "TH2GxjpHDRm", "TuPSaTJ8CFg", "e7-B-aRBi", "jSfws9BiuJk", "i4AOlX6HnQA", "lME5n4dCg5", "nips_2022_Q6DJ12oQjrp", "nips_2022_Q6DJ12oQjrp", "nips_2022_Q6DJ12oQjrp" ]
nips_2022_vfR3gtIFd8Y
Fast variable selection makes scalable Gaussian process BSS-ANOVA a speedy and accurate choice for tabular and time series regression
Many approaches for scalable GPs have focused on using a subset of data as inducing points. Another promising approach is the Karhunen-Loève (KL) decomposition, in which the GP kernel is represented by a set of basis functions which are the eigenfunctions of the kernel operator. Such kernels have the potential to be very fast, and do not depend on the selection of a reduced set of inducing points. However, KL decompositions lead to high dimensionality, and variable selection thus becomes paramount. This paper reports a new method of forward variable selection, enabled by the ordered nature of the basis functions in the KL expansion of the Bayesian Smoothing Spline ANOVA kernel (BSS-ANOVA), coupled with fast Gibbs sampling in a fully Bayesian approach. It quickly and effectively limits the number of terms, yielding a method with competitive accuracies, training and inference times for tabular datasets of low feature set dimensionality. The new algorithm determines how high the orders of included terms should reach, balancing model fidelity with model complexity using $L^0$ penalties inherent in Bayesian and Akaike information criteria. The inference speed and accuracy make the method especially useful for modeling dynamic systems, by modeling the derivative in a dynamic system as a static problem, then integrating the learned dynamics using a high-order scheme. The methods are demonstrated on two dynamic datasets: a 'Susceptible, Infected, Recovered' (SIR) toy problem, with the transmissibility used as a forcing function, along with the experimental 'Cascaded Tanks' benchmark dataset. Comparisons on the static prediction of derivatives are made with a random forest (RF), a residual neural network (ResNet), and the Orthogonal Additive Kernel (OAK) inducing-points scalable GP, while for the time-series prediction comparisons are made with LSTM and GRU recurrent neural networks (RNNs). The GP outperforms the RF and ResNet on the static estimation, and is comparable to OAK. In dynamic systems modeling it outperforms both RNNs, while performing many orders of magnitude fewer calculations. For the SIR test, which involved prediction for a set of forcing functions qualitatively different from those appearing in the training set, BSS-ANOVA captured the correct dynamics while the neural networks failed to do so.
Reject
Reading the reviews, I think there are ultimately two challenges for the authors to address in this work. The first, I think, ends up being a somewhat simple "background for the community" problem: as both several reviewers and the authors in their general comments point out, significantly more background on KL-decomposed kernels may be warranted, and this lack of background (along with perhaps some simple notational differences) led to what I felt were some challenges in understanding the full paper. With that being said, I don't think the above alone should be sufficient to result in rejection, despite this lack of context likely being a primary contributor to the final review scores. However, I do agree with reviewers' concerns that there are some concrete comparisons to the existing scalable GP literature missing. I think the inclusion of OAK + inducing points is a start, but e.g. the use of m=40 inducing points in Section 3.4 is surprising to me, partly perhaps because it's not clear which task this was a problem for (you state earlier that you use m=200 for the cascaded tanks task), and partly because none of the dataset sizes involved appear to me to be even remotely beyond the capability of these existing scalable GP approximations in the literature -- even up to m=512 or m=1024 inducing points is fairly standard practice. Given the reasonably good performance of OAK in some of the new results even with limited inducing point sets, I think that clearly further investigation is warranted there. Beyond inducing point methods, there are also NNGP / Vecchia models (which have also been recently made variational via Wu et al., 2022), and even exact GPs seem readily applicable to some of the tasks considered here with access to even a single moderately powerful GPU.
val
[ "KZ0p14mfw_y", "LfVjcjOs-H1", "SZuXSsx9HTJ", "e4uiJsv_T3Sw", "y23PVFjYMS9", "VDLdbBSuJc_", "W1-GD1btXlB", "NdJeKCUoCFK" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the perceptive assessment of our work. We have tried to improve the presentation to make it more germane for the GP community in machine learning, and we have added comparisons to state-of-the-art inducing points-based scalable GPs.\n\n1. $\\vartheta$ is commonly used as notation for model inputs in our community but we are happy to switch to $x$.\n2. We will present the formula for the $k^\\textrm{\\small th}$ Bernoulli polynomial, however instead of presenting them graphically we felt it was more appropriate to present the first few KL basis functions. This is the plan for the revision.\n3. We will clean up the typos -- thanks for pointing them out.\n4. The eigenvalues and eigenfunctions come from the eigendecomposition of $\\Gamma_1$ as defined in (3), as evaluated for an input spanning the interval [0,1] and discretized on a dense grid with 500 intervals. This will be made clearer.", " Thank-you for this interesting perspective on our work. The method can indeed be thought of as a latent factor decomposition, where the mediating model is a Gaussian process. Similarly the combination of the Gibbs sampler and the variable selection routine can be thought of as an optimizer of a BIC/AIC objective function, which contain an $L^0$ penalty. We will emphasize this perspective in the rewrite, particularly the manner in which the forward variable selection algorithm manages dimensionality issues.\n\n1. We have heard from other reviewers about notation and we are committed to making it clearer for the audience at NeurIPS. However we are confused by references to \"objective criterion $J$\" depending on $A$, $B$ and $D$, and a function \"CalculateD.\" It is not clear to us which superscripts could be missing in (9).\n\n2. The equality constraints in (8) are satisfied automatically in the Gibbs sampler through the use of a single prior for $\\sigma$ and $\\tau$. The same priors are used for each iteration in Algorithm 1.\n\n3. We are not familiar with a method called FANC_OB, but we have added a new, high-performing inducing points GP (OAK) to the set of methodological comparisons.", " Thank-you for the attentive review of our paper. Here are our responses to your questions:\n\n1. We have read the paper suggested and included comparisons with this method in our results. OAK outperformed BSS-ANOVA on the static estimation method for the cascaded tanks by slight margins (5-10%), using a small number of inducing points (30-40 for 2000 instances). However when we tried to incorporate OAK into the integrator it was not practical from a time standpoint: BSS-ANOVA is orders of magnitude faster.\n\n2. Of course the use of BIC and AIC in model selection is not new, but this is not claimed as the novel aspect of the variable selection routine. The most important aspect is the exploitation of the ordering of basis functions to perform variable selection on the KL-decomposed kernel, which addresses the dimensionality issues facing KL-decomposed GPs. It is these decomposed terms which are selected, not the terms in the additive kernel. We will emphasize this in the revised text.\n\n3. When combined with fast and automatic variable selection the new method outperforms many other methods in both speed and accuracy for low to moderate-dimensional, continuous tabular datasets. This combined with the speed of training and inference is why the application to dynamic systems identification is highlighted.\n\n4. 
We compared with the OAK GPs on the dynamic datasets that appear in the original submission. We have also used BSS-ANOVA on other static datasets available in the UCI database, for which the OAK authors reported results. Our results look promising; however, we chose not to report them in this work since our implementations for datasets with larger input dimensionalities use an additional variable selection methodology. The dynamic systems datasets are the kind of problems for which the method as presented is ideal: a small number (5 or fewer) of continuous inputs, coupled with an integrator that takes advantage of fast model evaluation. These advantages and limitations will be made clearer in the revised paper. We will introduce the additional variable selection methodology along with results on static tabular datasets in future contributions.", " We appreciate the efforts of the reviewers and the chairs in reading and commenting on our work. Most of us are new to the ML space, and as such we are not sure about functional groupings -- it seems as if we ought to have emphasized the dynamic systems aspect of this work in the title, abstract and introduction, as opposed to the 'tabular data' aspect. We believe the method is a promising one for tabular data, and indeed we have experimented with some of the UCI datasets with promising results. However, success in that space will require additional variable selection methods for larger input space dimensionality and a mix of categorical and continuous inputs and targets. We have been working on such methods but have not presented them here -- as presented in the paper, the method works best for continuous static datasets of small dimensionality (five or fewer).\n\nWe understand better than before that the community will be less familiar with Karhunen-Loève decomposed GP kernels: while additive, ANOVA-decomposed kernels may be common, KL-decomposed kernels are not, as discussed in this recent preprint: arXiv:2108.05924v1. KL-decomposed kernels have the potential to outperform inducing-point scalable GPs in certain contexts since they avoid the reduction of the data inherent in selecting the inducing points. They are also highly interpretable, since the contribution of each input and input combination is clear from the model form. However, the advantage we would like to highlight in this contribution is the manner in which forward variable selection greatly improves the speed of training and inference of KL-decomposed GPs, such that the method becomes readily adaptable to intrusive applications in domain science, particularly dynamic systems.\n\nAs discussed in the above-referenced paper, the main challenges of KL-decomposed kernels are the estimation of the basis functions (which can be computationally onerous) and dimensionality issues arising from the KL expansion. The former issue has already been dealt with for this kernel by Brian Reich and co-workers, who used a dense grid for discrete basis function evaluation, which we fitted to splines. The main contribution of this work is dealing with the dimensionality issue, which we handle by taking advantage of the strongly ordered nature of the basis functions in a forward variable selection routine incorporating $L^0$ penalties (inherent in the BIC/AIC functions). The resulting models are fast, both in training and inference -- orders of magnitude faster than the fastest inducing-point GPs. 
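To make these two ingredients concrete -- gridded eigendecomposition of the kernel, then cheap evaluation of a handful of selected terms -- here is a minimal NumPy sketch. The function and variable names are ours, `np.interp` stands in for the spline fits, and the discrete eigenvectors are used directly as basis functions without the grid-measure rescaling a careful implementation would apply:

```python
import numpy as np

# Dense grid on [0, 1] with 500 intervals, as used for the basis functions.
t = np.linspace(0.0, 1.0, 501)

def gamma1(u, v):
    # One standard form of the first-order BSS-ANOVA kernel, built from
    # Bernoulli polynomials; any PSD kernel on [0, 1] decomposes the same way.
    b1 = lambda x: x - 0.5
    b2 = lambda x: x**2 - x + 1.0 / 6.0
    b4 = lambda x: x**4 - 2 * x**3 + x**2 - 1.0 / 30.0
    return b1(u) * b1(v) + b2(u) * b2(v) / 4.0 - b4(np.abs(u - v)) / 24.0

# Karhunen-Loeve step: eigendecompose the gridded kernel operator once.
lam, phi = np.linalg.eigh(gamma1(t[:, None], t[None, :]))
lam, phi = lam[::-1], phi[:, ::-1]  # order by decreasing eigenvalue

def basis(x, k):
    # k-th scaled KL basis function, interpolated off the grid.
    return np.sqrt(max(lam[k], 0.0)) * np.interp(x, t, phi[:, k])

def predict(X, terms, beta):
    # terms: list of (input_index, basis_index) kept by forward selection;
    # beta: their sampled coefficients. Cost is O(n_samples * n_terms).
    return sum(b * basis(X[:, j], k) for b, (j, k) in zip(beta, terms))
```

Since `predict` only touches the terms retained by forward selection, a model evaluation is a few hundred floating-point operations per sample.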
This strongly suggests applications in dynamic systems, as the GP can be trained on state derivatives and then used in an integrator, and more generally in an intrusive fashion in domain-specific modeling contexts.\n\nWe propose to revise the paper to emphasize these aspects in the title, abstract and introduction, along with other changes outlined in responses to individual reviewers.", " The paper proposes a scalable GP based on Karhunen-Loeve expansions of kernels. The paper takes a fully Bayesian approach where Gibbs sampling is used to infer the kernel parameters, thereby leading to forward variable selection.\n Strengths:\n\n- Variable selection using GPs is an interesting problem.\n\nI think the weaknesses outweigh the strengths; they include:\n\n- The paper seems to be not well structured and lacks a proper description of the problem it tries to address. It would be better to have an introduction section that would clarify this for readers.\n- The paper does not provide sufficient descriptions or analyses of the method; for example, it lacks a literature review and comparisons with alternative methods. \n How to scale $\mu$ and $\Sigma$ in Eqs. (10-11), which involve matrix inversion? If they are evaluated on 500 data points, such estimations are just local. \n Besides the presentation issues, I think the paper does not provide an extensive comparison with existing models. For example, an obvious baseline is sparse inducing-point GPs. \n\nI believe the paper needs an additional revision to be ready for publication.\n", " Regarding the scalable Karhunen-Loeve decomposed kernel BSS-ANOVA for Gaussian process modeling, this paper proposes a BIC/AIC-assisted forward variable selection method to obtain fast and accurate predictions for large tabular datasets. The performance of the proposed model was tested on two datasets. Pros:\n1) A forward variable selection method via BIC/AIC is developed in the ANOVA regime for GP modeling;\n2) The new method has been applied to time series regression.\n\nCons:\n1) Benefits brought by forward variable selection are unclear;\n2) Differences from existing works have not been well discussed;\n3) Insufficient numerical experiments to showcase the superiority of the proposed model.\n 1) Additive Gaussian processes within the ANOVA formulation for high-dimensional inputs have a long history of study; see a recent paper\n[1] Lu X, Boukouvalas A, Hensman J. Additive Gaussian Processes Revisited. arXiv preprint arXiv:2206.09861, 2022.\nThese, however, have not been discussed and compared in this paper. \n2) Besides, the usage of BIC/AIC to perform truncation is not a new idea; what is the difference from previous works? Please highlight it.\n3) What benefits can we have when using the ANOVA-type decomposition for GPs? Reducing the computational budget? Figuring out important variables? The authors should make it clear rather than just showcasing the accuracy of time series regression.\n4) The proposed model should be compared to ANOVA-type GP competitors on more benchmarks to offer solid conclusions.\n yes", " The authors describe a method for variable selection in a GP-type model for large datasets called BSS-ANOVA. This (existing) approach uses a kernel that has a basis decomposition where the basis functions can be precomputed and the training complexity becomes O(NP), where P is the number of terms from the kernel decomposition used. 
The paper's contribution is on selecting the components in this decomposition with non-zero coefficients, using the natural order of complexity and a forward selection paradigm. The coefficients and an information criterion are calculated via Gibbs sampling. The paper describes a novel approach to fit a GP-type model for large datasets and I think as such has a place in the literature.\n\nI would like the authors to be clearer about the impacts of the proposed Gibbs sampler, as described below, i.e., both with regard to its novelty and whether it changes the fitted model. My assumption is that this method is new, but if it already exists in the literature, I do not think the use of an information criterion for variable selection alone is contribution enough to warrant acceptance. What is the worst-case complexity of this algorithm? I.e., BSS-ANOVA has complexity O(NP) if I pre-specify the number of terms P, but your algorithm appears to determine that number dynamically. Would it naturally stop at N^2, could it be unbounded, or is it guaranteed to be better than N^2?\n\nIs the Gibbs sampling algorithm described in Section 1.3 an original contribution of this paper or is it based on previous work? Does the crucial assumption (8) that enables the use of Gibbs sampling reduce the expressiveness of this model in a meaningful way, or can it be shown that this assumption is wlog?\n\nPlease change the formula references to (N) to avoid confusion and clean up the citations ([Reich et al., 2009] vs Reich et al. [2009]).\n\nAre there any other variable selection methods for BSS-ANOVA in the literature? Is there a way to use e.g. L1 penalties to find solutions faster? I cannot see any negative societal impact of this work. The authors do address some limitations in that the paper underperforms on some datasets.", " The authors develop a Bayesian approach to the Karhunen-Loève (KL) decomposed kernel BSS-ANOVA. Strengths and Weaknesses:\nKey weakness. Most people in the GP community will not be familiar with the methods used. So we need a _much_ clearer background, and perhaps more standard notation:\n* Why are we using curly $\theta$ rather than $x$ for the input locations?\n* Define the $k$th Bernoulli polynomial.\n* Tell us the shape of $\mathcal{B}_1$ etc.\n* \"the full kernel for a system with n inputs is written $\delta \sim MVN(0, \Gamma)$\" is a very strange statement. The kernel is written $\Gamma$, not $\delta \sim MVN(0, \Gamma)$? Where did $\delta$ come from? Why are we using MVN rather than $\mathcal{N}$ (especially given that it can cause confusion with a matrix-variate Normal distribution)?\n* $\Gamma = \sigma_0^2 \tau_0^2 + ...$ makes no sense. $\Gamma$ is a covariance matrix, while $\sigma_0^2 \tau_0^2$ are scalars?\n* $\Gamma_{1, i}$ and $\Gamma_{2, ij}$ aren't defined.\n* Where are the eigenvalues and eigenfunctions coming from?\n\nWithout these clarifications I do not believe most of the target audience will be able to understand the paper. See strengths and weaknesses. See strengths and weaknesses." ]
[ -1, -1, -1, -1, 3, 4, 5, 4 ]
[ -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "NdJeKCUoCFK", "W1-GD1btXlB", "VDLdbBSuJc_", "nips_2022_vfR3gtIFd8Y", "nips_2022_vfR3gtIFd8Y", "nips_2022_vfR3gtIFd8Y", "nips_2022_vfR3gtIFd8Y", "nips_2022_vfR3gtIFd8Y" ]
nips_2022_Rqe-fJQtExY
Efficient and Effective Multi-task Grouping via Meta Learning on Task Combinations
As a longstanding learning paradigm, multi-task learning has been widely applied into a variety of machine learning applications. Nonetheless, identifying which tasks should be learned together is still a challenging fundamental problem because the possible task combinations grow exponentially with the number of tasks, and existing solutions heavily relying on heuristics may probably lead to ineffective groupings with severe performance degradation. To bridge this gap, we develop a systematic multi-task grouping framework with a new meta-learning problem on task combinations, which is to predict the per-task performance gains of multi-task learning over single-task learning for any combination. Our underlying assumption is that no matter how large the space of task combinations is, the relationships between task combinations and performance gains lie in some low-dimensional manifolds and thus can be learnable. Accordingly, we develop a neural meta learner, MTG-Net, to capture these relationships, and design an active learning strategy to progressively select meta-training samples. In this way, even with limited meta samples, MTG-Net holds the potential to produce reasonable gain estimations on arbitrary task combinations. Extensive experiments on diversified multi-task scenarios demonstrate the efficiency and effectiveness of our method. Specifically, in a large-scale evaluation with $27$ tasks, which produce over one hundred million task combinations, our method almost doubles the performance obtained by the existing best solution given roughly the same computational cost. Data and code are available at https://github.com/ShawnKS/MTG-Net.
Accept
The overall idea of using a meta-learning network with an active learner for grouped multi-task learning is interesting. The experimental results provided in the original submission and rebuttal are extensive and verify the effectiveness of the proposed method. A major limitation of the proposed method is the high computational cost, especially when each task has its own dataset. Overall, this is a well-written paper that presents an interesting idea for multi-task learning.
train
[ "Csn_spxolPI", "jxVdHnzNfxW", "GO6Fku8luy", "jWO46h8SPXzV", "j_vR85lEt6P", "enD1QLhNxsx", "Tg79K3Lkfd", "SH8eaYqntKf", "oNS8uq9Ug4H", "Q4KvrQYyeu5", "ZvCqYW2p9_9", "Yt8ze0G-Yzh", "gNoQo1Vo6f7", "JAwIpZKHAA1", "vj14BMNbwXq", "HQ9wRwGJTHv", "yHgmcNLX2-", "0LuOi2ObgZ", "IRz-3J3c4Xl", "Rqdt7d8VRrS" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We very appreciate this suggestion and will take your advice to discuss the limitations of this work in the revised paper. We also agree with you that when each task has its own dataset, the computational cost of N-task MTL will be significantly larger than that of two-task MTL.\n\nAs you have mentioned, this work focuses on a basic setup of multi-task grouping with a hard-sharing architecture and overlapped task data. Please kindly note that the key motivation of this setup is to ensure a fair and consistent comparison with state-of-the-art task-grouping solutions (e.g., HOA, TAG) and to reveal a critical finding: for a fixed MTL procedure, the transferring relationships induced by task combinations lie in some low-dimensional spaces and thus can be learnable with limited samples.\n\nWith this new finding, we believe that the proposed framework together with the insightful questions mentioned in your reviews have opened many appealing research opportunities, including but not limited to zero-shot task grouping, generalization across multi-task model architectures, generalization across different datasets (especially non-overlapped task datasets), etc. We will also include a discussion about these potential opportunities.\n", " I would like to thank the authors for the response. \n\nRegarding your point that \"Compared with a two-task MTL model, an N-task MTL model (N>2) does have more computational costs due to more task-specific decoders\". I would like to remark that this is only true in the case where the task data overlap (almost) completely. In another setting of MTL, where each task has its own dataset (for example, classification and segmentation where each task has a dataset of 224x224 images), running MTL with N tasks is much more expensive than 2 tasks. Note that this setting is also very popular for MTL.\n\nI would suggest the authors to include a discussion about the scope/limitation of the paper: it focuses on the case of hard parameter sharing and overlap of tasks' datasets.", " Thank you for the detailed answers. I would like to keep my score.", " I want to thank the authors for clarifications on the questions and am satisfied with the response. I am inclined to keep the score the same and wish the authors the best of luck with their submission.", " Thanks for your efforts in reviewing this paper.\n\nRegarding your concern of “the technical contribution is limited”, we would like to emphasize a few unique technical contributions of this paper, as elaborated below, beyond a simple application of existing meta-learning framework:\n\n- In the first place, this paper reveals a critical finding: the relationships between task combinations and transferring gains are essentially some low-dimensional manifolds. This finding stands as the foundation for building the meta learning framework for multi-task grouping with high generalization capability. 
\n- Furthermore, this paper recognizes a new yet critical meta-learning problem on task combinations, i.e., learning to predict the transferring gains given a task combination.\n- To address this problem, this paper introduces a novel meta model, MTG-Net, that is capable of generalizing to unseen task combinations simply by taking task tokens as the input.\n- To further ensure the efficiency of this framework, this paper introduces an active-learning algorithm that is used to facilitate more efficient exploration of the exponentially growing space of task combinations.\n\nFurther indication of the performance of the active learning algorithm in the early epochs can be found by inspecting Figure 3 (a, b), in which we plot the results of MTG-Net under different $K$s. Specifically, $K$=1 denotes a very early epoch, and we can observe that MTG-Net can still provide some useful estimations of transferring gains for the grouping selection algorithm to obtain reasonable performance. Besides, you can find the convergence of MTG-Net under different $K$s in Section D.3 of the appendix. These results indicate fast convergence of MTG-Net in early epochs.\n", " Thanks so much for all your thorough reviews, insightful comments, and many constructive suggestions, based on which we will further enrich the technical and experimental content of this paper. Besides, we hope the following responses can help answer your questions and address your concerns.\n\n**Response to Question 1**\n\nIn accordance with your suggestion, we will append the detailed task combinations selected by MTG-Net at each step of active learning to the revised appendix to ease understanding. Here we provide a few snapshots of the active learning procedure as a quick response. (Please refer to the appendix to align task names with task indicators.)\n\n|Dataset| Round k | Task j | Selected Combinations | Ground-truth Gain | Predicted Gain Before Selection | Predicted Gain After Selection|\n| --- | ---| ---| ---| ---| ---| --- |\n|Taskonomy-5|1|$s$|[$s$,$k$]|6.3%|1.6%|3.2%|\n|Taskonomy-5|2|$s$|[$s$,$d$,$n$,$e$]|7.2%|15.9%|9.1%|\n|Taskonomy-5|1|$n$|[$s$,$d$,$n$]|-16.7%|-3.4%|-5.1%|\n|Taskonomy-5|2|$n$|[$n$,$e$]|1.6%|-4.5%|0.2%|\n|ETTm1-7|1| HF |[HF,MF,LL]|-5.5%|-2.3%|-6.5%|\n|ETTm1-7|3| HF |[HF,ML,LF]|19.2%|5.4%|7.3%|\n|ETTm1-7|1| LL |[LL,OT]|-3.5%|-5.5%|-2.8%|\n|ETTm1-7|3| LL |[HF,HL,MF,LF,LL]|-2.0%|4.1%|-3.5%|\n|MIMIC-III-27|1|$t_1$|[$t_{1}$,$t_{2}$,$t_{3}$,$t_{5}$,$t_{8}$,$t_{9}$,$t_{10}$,$t_{16}$,$t_{17}$,$t_{19}$,$t_{21}$,$t_{23}$,$t_{24}$,$t_{25}$]|-8.0%|-2.0%|-7.3%|\n|MIMIC-III-27|7|$t_1$|[$t_{1}$,$t_{3}$,$t_{6}$,$t_{7}$,$t_{9}$,$t_{10}$,$t_{11}$,$t_{14}$,$t_{16}$,$t_{26}$,$t_{27}$]|4.4%|6.8%|5.7%|\n|MIMIC-III-27|13|$t_1$|[$t_{1}$,$t_{12}$,$t_{13}$,$t_{14}$,$t_{15}$,$t_{17}$,$t_{18}$,$t_{19}$,$t_{24}$,$t_{26}$]|-7.7%|-9.3%|-8.4%|\n|MIMIC-III-27|1|$t_{24}$|[$t_{4}$,$t_{8}$,$t_{9}$,$t_{12}$,$t_{13}$,$t_{14}$,$t_{17}$,$t_{20}$,$t_{24}$]|-3.2%|-36.1%|-5.1%|\n|MIMIC-III-27|7|$t_{24}$|[$t_{8}$,$t_{10}$,$t_{11}$,$t_{13}$,$t_{15}$,$t_{19}$,$t_{21}$,$t_{22}$,$t_{23}$,$t_{24}$,$t_{25}$]|20.1%|20.7%|20.5%|\n|MIMIC-III-27|13|$t_{24}$|[$t_{1}$,$t_{2}$,$t_{3}$,$t_{4}$,$t_{7}$,$t_{8}$,$t_{9}$,$t_{11}$,$t_{12}$,$t_{16}$,$t_{20}$,$t_{22}$,$t_{23}$,$t_{24}$]|21.1%|23.5%|22.3%|\n\nFrom the table above, we can observe diversified combinations being selected for different tasks at different rounds. 
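To make the loop behind these snapshots easier to follow, here is a schematic Python sketch of one round of the procedure. All names (`meta_net`, `train_and_eval_mtl`, the random sampler, and the plain argmax scoring) are illustrative stand-ins rather than our released code, which implements the full selection criterion described in the paper:

```python
import random

def sample_combo(tasks, task):
    # Draw a random combination that contains `task` (illustrative sampler).
    others = [t for t in tasks if t != task]
    k = random.randint(1, len(others))
    return tuple(sorted(random.sample(others, k) + [task]))

def active_round(meta_net, meta_set, tasks, train_and_eval_mtl, pool_size=4096):
    """One round: for each task, pick one combination, run MTL, update MTG-Net."""
    for task in tasks:
        pool = {sample_combo(tasks, task) for _ in range(pool_size)}
        pool = [c for c in pool if (c, task) not in meta_set]
        # Keep the candidate whose predicted per-task gain is largest
        # (a deliberately simplified placeholder for the selection criterion).
        chosen = max(pool, key=lambda c: meta_net.predict_gain(c, task))
        # Ground truth: actually train an MTL model on this combination.
        meta_set[(chosen, task)] = train_and_eval_mtl(chosen, task)
        meta_net.fit(meta_set)  # recalibrate on the enlarged meta set
    return meta_set
```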
More importantly, the snapshots above intuitively reveal that MTG-Net can calibrate its predictions after being updated with the ground-truth gains.\n\nBesides, we do not find a clear pattern of variation in the size of selected task combinations as the active learning process iterates. The sizes of selected task combinations seem to be consistent across different $K$s. The following table includes the mean$\pm$std of the size of selected task combinations for each round (K) on all datasets.\n\n|Dataset| K=1 | K=2 | K=3| K=5| K=7| K=9| K=11| K=13|\n| --- | ---| ---| ---| ---| ---|--- |--- |--- |\n|Taskonomy-5|2.6$\pm$0.5|2.8$\pm$0.8 | - | - | - | - | - | - |\n|ETTm1-7|3.3$\pm$1.0|3.1$\pm$0.6|3.0$\pm$0.8| - | - | - | - | - |\n|MIMIC-III-27| 12.1$\pm$1.0 | 11.8$\pm$0.5 | 12.1$\pm$0.6 | 12.4$\pm$0.9 | 12.7$\pm$0.6 | 12.9$\pm$0.7 | 12.4$\pm$0.9 | 12.1$\pm$0.5 |\n\nMoreover, you can check the convergence of test errors on gain predictions in Section D.3 of the appendix. Intuitively, MTG-Net converges very fast in early iterations and can still obtain relatively slight yet robust improvements as $K$ gets close to $(N-1)/2$. Besides, in the table below, we show the final estimation errors (measured in Mean Absolute Error) of MTG-Net on all datasets for your reference. \n\n| Meta Dataset | Taskonomy-5 | ETTm1-7 | MIMIC-III-27 |\n| --- | ---| ---| ---|\n| Meta Train | 0.068 | 0.044 | 0.045 |\n| Meta Test | 0.083 | 0.073 | 0.057 |\n\nWe can observe that there is no overfitting issue, and MTG-Net can obtain reasonable generalization on massive numbers of unseen task combinations.\n", " **Response to Question 2**\n\nWe greatly appreciate your suggestion of adding more hyper-parameter analyses to make this work more convincing. In the following, we provide detailed explanations and supplement more hyper-parameter testing results for your reference. Generally speaking, our framework is quite robust against varying hyper-parameters of MTG-Net. From another point of view, such robustness implies that a fixed set of hyper-parameters, as presented in our paper, can work consistently well across three different datasets with diversified tasks.\n\nRegarding the sensitivity to the initialization of MTG-Net, Figures 2 and 3 in the paper already revealed some insights, since we re-ran the MTG-Net training with five different random seeds and report the average results there. Please note that in Figure 3 (a, b), we also include the standard deviation (shadow area). These results validate that MTG-Net is robust to different initializations.\n\nMoreover, MTG-Net is also robust to different configurations of the number of encoding layers and the hidden dimension (D), which in this paper are set as 2 (#layers) and 64 (D), respectively. 
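For reference, these two hyper-parameters enter the meta model roughly as in the following schematic PyTorch sketch of an encoder over task tokens (the class name, head count and layer choices are illustrative, not the exact released architecture):

```python
import torch.nn as nn

class TinyMTGNet(nn.Module):
    """Schematic meta learner: task tokens in, per-task gain estimates out."""

    def __init__(self, num_tasks, d_model=64, num_layers=2, num_heads=4):
        super().__init__()
        self.task_embed = nn.Embedding(num_tasks, d_model)  # one token per task
        layer = nn.TransformerEncoderLayer(d_model, num_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.gain_head = nn.Linear(d_model, 1)              # predicted gain

    def forward(self, combo_ids):
        # combo_ids: (batch, combo_size) indices of the tasks in a combination.
        h = self.encoder(self.task_embed(combo_ids))
        return self.gain_head(h).squeeze(-1)                # gain per task slot
```

Both `num_layers` and `d_model` (the D above) can then be swept exactly as in the tables below.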
Below we show the max-budget grouping performance of multiple MTG-Net models with different setups of these two hyper-parameters on all three datasets.\n\n| Dataset | Meta Model | #layers = 1|#layers = 2|#layers = 3|#layers = 4|#layers = 5|\n| --- | ---| ---| ---| ---| ---| --- |\n|Taskonomy-5| MTG-Net (K=1) |+12.17% |+12.43% |+12.43% |+12.35% |+12.35% |\n|Taskonomy-5| MTG-Net (K=2) |+17.90% |+18.34% |+18.34% |+17.90% |+18.34% |\n|ETTm1-7| MTG-Net (K=1) |+10.52% |+10.61% |+10.61% |+10.57% |+10.52% |\n|ETTm1-7| MTG-Net (K=3) |+14.44% |+14.52% |+14.50% |+14.44% |+14.50% |\n|MIMIC-III-27| MTG-Net (K=1) | +6.25% | +6.28% | +6.29% | +6.29% | +6.31% |\n|MIMIC-III-27| MTG-Net (K=13) | +8.63% | +8.73% | +8.74% | +8.76% | +8.78% |\n\n| Dataset | Meta Model | D=8 | D=16 | D=32 | D=64 | D=128 |\n| --- | ---| ---| ---| ---| ---| --- |\n|Taskonomy-5| MTG-Net (K=1) |+12.35% |+12.43% |+12.43% |+12.43% |+12.35% |\n|Taskonomy-5| MTG-Net (K=2) |+17.90% |+18.34% |+18.34% |+18.34% |+18.34% |\n|ETTm1-7| MTG-Net (K=1) |+10.52% |+10.59% |+10.61% |+10.61% |+10.61% |\n|ETTm1-7| MTG-Net (K=3) |+14.42% |+14.50% |+14.50% |+14.52% |+14.52% |\n|MIMIC-III-27| MTG-Net (K=1) | +6.07% | +6.10% | +6.17% | +6.28% | +6.32%|\n|MIMIC-III-27| MTG-Net (K=13) | +8.12% | +8.45% | +8.62% | +8.73% | +8.79%|\n\nWe have a few interesting observations from the two tables above.\n\nFirst, we find that different setups of the number of layers and the hidden dimension all give rise to significant and consistent improvements in the final grouping performance. This observation implies that our framework is robust to the hyper-parameters of MTG-Net.\n\nSecond, we can observe that some different setups even lead to the exact same grouping performance, especially on Taskonomy-5 and ETTm1-7 (where the number of task combinations is relatively small). The underlying reason is that although these different MTG-Net models produce different gain predictions, these predictions can be consistent in selecting task combinations for a specific task, especially given limited combinations, and thus lead to the same grouping result.\n\nBesides, we observe certain correlations between model capacity and grouping performance. For example, as #layers or D increases, which indicates increasing model capacity, we usually observe associated improvements in grouping performance. This observation indicates that there exist some highly non-linear transferring relationships in the space of task combinations. Since increasing model capacity also enlarges the risk of overfitting, we also observe some performance drops for large values of #layers and D.\n\nMoreover, we can see that some setups of hyper-parameters (e.g., D=128) even slightly outperform the one presented in this paper. This observation implies that we did not overly fine-tune the hyper-parameters of MTG-Net, and the setup presented in this paper is a modest choice.\n", " **Response to Question 3**\n\nLike the setup of MTG-Net's hyper-parameters discussed in the response to Question 2, we also employ a modest choice of $\alpha$ and $\eta$ in the active learning procedure, and our framework can still work well with other options. In the following, we provide concrete results and detailed explanations to give you more insights.\n\nFirst, we strongly agree with your intuition on using a relatively large $\alpha$, which stimulates MTG-Net to prefer task combinations with larger gains and then leverage them as anchors to interpolate gains for other combinations. 
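As a rough illustration only -- the exact criterion is the one defined in the paper, and this is merely one plausible instantiation -- such a preference can be imposed by exponentially tilting the acquisition score with $\alpha$:

```python
import math

def selection_score(predicted_gain, alpha=10.0):
    # Illustrative acquisition score: exponentially prefer candidates whose
    # predicted per-task gain is large; alpha sets the preference strength
    # (alpha -> 0 recovers a uniform preference over candidates).
    return math.exp(alpha * predicted_gain)
```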
Below we include the max-budget grouping performance of MTG-Net with different $\alpha$s to reveal the impact of $\alpha$.\n\n|Dataset | Meta Model| $\alpha$=0.01 | $\alpha$=0.1 | $\alpha$=1 | $\alpha$=5 | $\alpha$=10 | $\alpha$=25 | $\alpha$=100 |\n| --- | ---| ---| ---| ---| ---| --- | ---| --- |\n|Taskonomy-5| MTG-Net (K=1) |+11.35% |+11.50% |+11.95% |+12.35% |+12.43% |+12.43%|+12.43%|\n|Taskonomy-5| MTG-Net (K=2) |+17.75%|+17.90% |+18.14% |+18.20% |+18.34% |+18.34%|+18.34%|\n|ETTm1-7| MTG-Net (K=1) |+10.11% |+10.15% |+10.52% |+10.57% |+10.61% |+10.61%|+10.61%|\n|ETTm1-7| MTG-Net (K=3) |+14.30% |+14.37% |+14.46%|+14.50%|+14.52% |+14.52%|+14.52%|\n|MIMIC-III-27| MTG-Net (K=1) |+5.86%|+6.07%|+6.15%|+6.23%|+6.26%|+6.28%|+6.28%|\n|MIMIC-III-27| MTG-Net (K=13) | +7.91%|+8.13%|+8.61%|+8.65%|+8.70%|+8.73%|+8.74%|\n\nWe can observe that MTG-Net can still obtain reasonable performance with other setups of $\alpha$, and that by significantly preferring task combinations with larger gains, MTG-Net is able to obtain slightly larger improvements in the final grouping performance. The rationale behind this observation is that estimation errors on larger transferring gains cause more direct error propagation to the grouping selection algorithm; thus, paying more attention to these estimation errors helps most in the final grouping performance (this point aligns with a similar question raised by Reviewer QRvu).\n\nMoreover, the hyper-parameter $\eta$ controls the frequency of updating MTG-Net with actively selected meta samples, which allows a trade-off between efficiency and effectiveness for MTG-Net training. In this work, we adopt a dynamic strategy: 1) in the earliest round (k = 1), we set $\eta$ as 1 to encourage more frequent MTG-Net updating per active selection; 2) in later rounds (k > 1), we set $\eta$ as N to further improve efficiency by updating MTG-Net once per N active selections. In the following table, we compare this dynamic strategy with two fixed strategies ($\eta$=1 or $\eta$=N) to reveal the impact of $\eta$.\n\n| Meta Model | $\eta$ | # MTG-Net Updating Steps| Taskonomy-5 | ETTm1-7 | MIMIC-III-27 |\n| --- | ---| ---| ---| ---| ---|\n| MTG-Net (K=1) | $\eta$ = 1 | K*N $\rightarrow$ N |+12.47% | +10.65%| +6.29%|\n| MTG-Net (K=1) | $\eta$ = N | K $\rightarrow$ 1 |+11.27% | +9.97%| +5.97%|\n| MTG-Net (K=1) | $\eta$ = 1 if k <=1 else N | N+K-1 $\rightarrow$ N |+12.43% | +10.61%|+6.28% |\n| MTG-Net (K=(N-1)/2) | $\eta$ = 1 | K*N $\rightarrow$ N*(N-1)/2 |+18.34% |+14.55% |+8.75% |\n| MTG-Net (K=(N-1)/2) | $\eta$ = N | K $\rightarrow$ (N-1)/2 |+17.75% | +14.26%| +8.69%|\n| MTG-Net (K=(N-1)/2) | $\eta$ = 1 if k <=1 else N | N+K-1 $\rightarrow$ (N-1)*3/2 |+18.34% |+14.52% |+8.73% |\n\nThe above results demonstrate that our dynamic strategy for $\eta$ helps MTG-Net with different $K$s achieve performance competitive with setting $\eta$ as 1, while ensuring that the number of MTG-Net updating steps is not an order of magnitude larger than that of setting $\eta$ as $N$.\n", " **Response to Question 4**\n\nWe find that these transferring relationships among tasks are not model-agnostic, and this finding is consistent with the phenomenon observed by HOA [1].\n\nOne of the experiments in [1] compared two kinds of MTL procedures on the same dataset, in which the only difference was the model size (a large Xception network vs. a small one). 
According to their results, the transferring gains collected from these two types of models were rather different: the correlation coefficient is merely around 0.23 (calculated by us based on their released results). We have also tried other architectures on ETTm1 and MIMIC-III-27 and obtained similar observations. Moreover, these task relations are not data-agnostic (experiments in [1] also demonstrated this claim), which means that when the data distribution or the data scale changes, we may obtain different transferring gains.\n\nTherefore, HOA, TAG, and our work are all studying a “static” multi-task grouping problem, in which the setup of the MTL procedure is fixed (data, model, optimization, etc.). In such a “static” setup, our method contributes to reducing the complexity of performing MTL procedures from $O(2^N)$ to $O(KN)$.\n\nIn the meanwhile, we think this question is very interesting and inspiring because it points out a new direction worthy of more research attention, which is to study the transferring relationships across MTL model architectures. (Please kindly note that this question is also related to the 4th question raised by Reviewer ZASQ.)\n\n**Response to Question 5**\n\nThis question is more related to the grouping selection algorithm. If a task is involved in multiple groupings, the grouping selection algorithm will assign its inference group to the one with the largest transferring gain (or gain prediction).\n\nYou may refer to the HOA paper [1] and its appendix (http://proceedings.mlr.press/v119/standley20a/standley20a-supp.pdf) for more details about the grouping selection algorithm.\n\n[1] Standley, Trevor Scott et al. “Which Tasks Should Be Learned Together in Multi-task Learning?” ICML (2020).\n", " In addition to answering your specific questions, we want to provide more discussions on other weaknesses you have mentioned.\n\n**Response to Weakness 1**\n\nThe huge computational cost is indeed the biggest challenge in the multi-task grouping problem. Essentially, this huge computational cost is two-fold: one part is the exponential dependence on the number of tasks, and the other is the cost of each MTL procedure. One of the major contributions of this paper lies in addressing the former aspect. The key contribution of this paper is to reduce the complexity of conducting MTL procedures from $O(2^N)$ to $O(KN)$ ($K$ is an adjustable hyper-parameter) while still maintaining reasonably good performance. Previous studies [1, 2], in contrast, relied on predefined heuristics, which resulted in biased estimations for high-order transferring gains. For larger datasets and many more tasks, as long as the computational resources allow us to perform $O(KN)$ rounds of MTL procedures (K < 1 may also work), our framework can provide much better grouping options than random sampling or other heuristic-based approaches.\n\n**Response to Weakness 2 & 3**\n\nWe really appreciate your suggestion of adding more hyper-parameter analyses. All the supplemented experimental results have been provided in the responses to Questions 2 and 3, and we will also add them to the revised paper. We hope these results can help to address your concerns on “missing hyper-parameter analyses” and “instability of MTG-Net learning due to very limited meta samples”. \n\n**Response to Weakness 4**\n\nWe totally agree with you that some actual resource usages, such as inference time and memory, are of more interest in practice. 
In fact, it is relatively straightforward to combine our framework with these practical considerations. For example, we can conduct the MTL procedures of multiple models with different inference times and memory consumptions and then feed tuples of (a task combination, inference time, memory used, the associated transferring gains) to an adjusted grouping selection algorithm for decision making. While this paper mainly focuses on the most fundamental setup (e.g., exemplifying the budget as the number of groupings) to highlight our critical findings and key ideas, we will continue the research under practical considerations as future work.\n\n**Response to Weakness 5**\n\nWe demonstrate the effectiveness of our framework on three datasets with diversified tasks, including not only indoor-scene recognition tasks used by [1, 2] but also energy time-series regression tasks as well as clinical classification tasks. Moreover, the setups of the number of tasks also cover the small (5), moderate (7), and large (27) scales. Specifically, 27 tasks can produce over one hundred million task combinations, and none of the existing studies has seriously studied the multi-task transferring relationships at this scale. Moreover, due to the curse of combinatorial explosion, we have taken thousands of GPU hours in total to collect the transferring gains for all task combinations (except for MIMIC-III-27, where we sample 3000 task combinations to approximate the huge space). Such extensive experiments on various tasks have provided very solid verification of the effectiveness of the critical ideas and the new MTG framework proposed by this paper. We strongly believe that all the thorough findings, together with the novel framework, can be of great benefit to the whole MTL community.\n\n[1] Standley, Trevor Scott et al. “Which Tasks Should Be Learned Together in Multi-task Learning?” ICML (2020).\n\n[2] Fifty, Chris, et al. \"Efficiently identifying task groupings for multi-task learning.\" NeurIPS (2021).\n", " Thanks very much for your insightful comments, careful inspection, and your endorsement of this submission. We will fix all the typos in the revised manuscript. Besides, we hope the following explanations may address the other two concerns.\n\n**Response to Question 1 & Weakness 1**\n\nThe rationale behind preferring meta samples with large transferring gains is that the estimation errors on larger transferring gains would cause more direct error propagation to the grouping selection algorithm. For example, if the best combination for a specific task identified by MTG-Net produces a moderate or even negative transferring gain, the grouping selection algorithm will pick this combination if the budget allows, but in fact make a serious mistake. Thus, we should be careful about making errors on task combinations with large values of predicted transferring gains. 
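A toy numeric illustration of this asymmetry (all numbers are made up; `pick` mimics the greedy choice):

```python
true_gain = {"AB": -0.05, "AC": 0.02, "AD": 0.08}
over_bad  = {"AB": 0.10, "AC": 0.02, "AD": 0.08}   # +0.15 error on AB: tops the ranking
under_bad = {"AB": -0.20, "AC": 0.02, "AD": 0.08}  # -0.15 error on AB: ranking unchanged

pick = lambda pred: max(pred, key=pred.get)
print(pick(over_bad), true_gain[pick(over_bad)])    # AB -0.05 -> harmful selection
print(pick(under_bad), true_gain[pick(under_bad)])  # AD  0.08 -> harmless
```

An equal-size error only hurts when it inflates a prediction to the top of the ranking, which is exactly why errors on combinations with large predicted gains matter most.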
Thanks again for raising this suggestion; we will further polish the sentences in the revised version to make this point much clearer.\n\n**Response to Weakness 2**\n\nTo ensure the reproducibility of this work, we will not only release our data and code but also take several extra steps.\n\nSpecifically, in addition to opening the results of MTL procedures and the code of MTG-Net to help reproduce the results in this paper, we will open the specific code to conduct MTL procedures, which can help other researchers double-check the transferring gains of our MTL procedures and make more explorations on other aspects, such as more task combinations, different configurations of MTL procedures, etc. In this way, there is no need to reproduce all the MTL procedures that we have spent thousands of GPU hours to run. Other researchers can focus on more advanced explorations, such as zero-shot task grouping.\n", " Thanks very much for your thorough comments. We hope the following specific explanations can help to address your concerns.\n\n**Response to Question 1**\n\nWe value your suggestion of including computational cost as a reference and will attach associated information to the revised manuscript. In this work, we prefer to use the number of MTL procedures as a major indicator of computational complexity because this indicator is more intuitive and aligns well with the magnitude of computational cost. The number of tasks in a combination only has marginal effects on the computational cost, which does not affect the major contributions of this paper. We can explicitly mention this point in the revised paper to avoid misunderstanding. Below we would like to give you detailed justifications to address your concern of “unfair evaluation”.\n\nCompared with a two-task MTL model, an N-task MTL model (N>2) does have more computational costs due to more task-specific decoders. However, in a typical MTL architecture, the encoder takes up most of the computation. For example, on the MIMIC-III-27 benchmark, the computational time of a decoder only takes up 0.3% of the time spent on the encoder part. Thus, the additional decoder-side computations only have marginal effects on the total computational cost. Besides, all these decoders can be computed in parallel. Thus, with proper implementation, the additional computational time due to the introduction of more tasks can be further limited.\n\nMoreover, as shown in Figures 2 & 3(a), you may find that for those $K$s that are significantly less than $(N-1)/2$ (even when $K=1$), we can still obtain remarkable improvements over HOA with largely reduced computational costs. These results explicitly demonstrate the contributions of this paper.\n\n**Response to Question 2**\n\nYes, your understanding is correct. In this work, we focus on building the structured latent space that can reflect the relationships between task combinations and transferring gains, so we simply employ randomly-initialized task embeddings and optimize these embeddings along with other parameters in MTG-Net solely from meta-training data.\n\nWe strongly agree with your comments that it is pretty appealing and invaluable to establish the connection between “task data” (we guess you mean input data and task labels, because all tasks share the same data) and task embeddings, since it can endow the meta model with the zero-shot capability to estimate transferring gains for unseen tasks. 
In the meantime, we have to note that this new research direction faces some critical challenges, such as how to encode an effective task representation from multiple data instances and associated task labels, and how to ensure that such a task representation captures the transferring effects across tasks.\n\nGenerally speaking, while this work still focuses on building the fundamental meta learning framework for multi-task grouping, we consider zero-shot task grouping a truly valuable future direction worthy of greater research attention.\n", " **Response to Question 3**\n\nWe appreciate your careful inspection of this paper, and we are confident that there is no risk of “cherry picking”, especially given the extensive experiments on various scenarios and tasks. Instead, the improvements across these diversified tasks validate the generalization capability of our framework.\n\nTo be more specific, we evaluate the proposed framework across different applications, including pixel-level image tasks, time-series regression tasks, and clinical classification tasks. These tasks have very different evaluation metrics, and we follow the corresponding evaluation criteria described in the original papers. Thus, we allow the framework to customize the transferring gain as the improvement on a specific metric. In this way, our framework can easily adapt to all kinds of heterogeneous tasks.\n\n**Response to Question 4**\n\nYour question essentially raises an important research direction concerning the generalizability of multi-task grouping. \n\nAs mentioned in HOA [1], when the data or the model architecture changes, the transferring effects among tasks can also change distinctly. We have similar observations in our experiments. Meanwhile, we also note that there exist certain correlations among the transferring gains obtained from different network architectures. For example, according to the experiments of [1] on the Taskonomy-5 dataset, the correlation coefficient between the transferring gains of a large model and those of a small model can be as large as 0.227. These observations imply that multi-task transferring relationships may be a function of the data, the model, and some other factors, such as optimization algorithms.\n\nReturning to your question, when employing a soft-sharing architecture, the underlying transferring relationships are very likely to be different from those based on a hard-sharing MTL architecture. Nonetheless, one important value of this paper lies in establishing the basic framework to capture multi-task transferring relationships given a pre-defined MTL procedure (starting from the hard-sharing architecture in this paper), which can be the foundation to explore generalization across different configurations of MTL procedures (e.g., different datasets, different model architectures, etc.).\n\nGenerally speaking, we value this insightful question very much since it raises a new direction towards more comprehensive and generalizable multi-task grouping.\n\n[1] Standley, Trevor Scott et al. “Which Tasks Should Be Learned Together in Multi-task Learning?” ICML (2020).\n", " Thanks very much for your in-depth comments, especially from the perspective of connecting our paper to the related work in NAS/AutoML.\n\n**Response to Question 1 & Weakness 1**\n\nPer our understanding, the first question is about how to leverage the parameter-sharing idea in ENAS to speed up the process of measuring MTL transferring gains in our case. 
(Please kindly correct us if we misunderstand this question.) \n\nWhile the idea of parameter sharing across MTL procedures may help save computational cost, it meanwhile risks generating biased estimates of the transferring gains relative to training from scratch. The reason is that introducing parameter sharing across different MTL procedures may bring implicit knowledge transfers among different task combinations, which do not accurately reflect the transferring effects purely from a specific task combination. These biased estimates of transferring gains, being inconsistent with the ground-truth gains obtained by training from scratch, are hence likely to lead to sub-optimal grouping selections.\n\nTherefore, to ensure that the essential transferring gains for different task combinations can be accurately estimated, in this paper we did not leverage the parameter-sharing mechanism when performing MTL procedures. In other scenarios that do not require the accurate results of training from scratch, we may consider increasing the efficiency of MTL procedures via multiple ways, including but not limited to parameter sharing, pre-training & fine-tuning, early stopping, etc.\n\n**Response to Question 2 & Weakness 3**\n\nWe really appreciate your proposal of adapting AutoML methods to the multi-task grouping (MTG) problem. Taking ENAS as an example, we can treat all tasks as output nodes and leverage RL to search for a hyper network architecture with separate sub-networks covering specific task combinations. Such an RL-based AutoML view indeed opens a new direction worthy of research for MTG, including a few challenging issues, such as 1) how to design proper actions to explicitly separate tasks into different groups and assign informative rewards to these actions; 2) how to design efficient exploration mechanisms to ensure low sample complexity (note that each sample here corresponds to a group of MTL procedures). \n\nMoreover, compared with the RL-based AutoML approach for MTG, our framework has certain distinctive advantages:\n\n- Flexibility. Our framework consists of two consecutive steps: 1) the estimation of transferring gains and 2) the grouping selection based on these gains (a minimal sketch of such a selection step is shown after the reviews below). Such a decoupled formulation allows us to adopt more flexible grouping strategies (e.g., imposing constraints on the number of groups or assigning different inference budgets to specific tasks) without re-training the gain estimation model (MTG-Net). Furthermore, we can easily identify the causes of improper groupings, rooted in the inaccurate estimations of gains. This property can help us diagnose the whole procedure and involve human experts in the loop as needed.\n- Interpretability. As shown in Figure 4 of our main paper, our approach can reveal the inherent manifold of multi-task transferring relationships, which can intuitively inform which group of tasks should be put together and which should be separated from another. Particularly, we can find more insights by further inspecting this structured latent space. For example, as shown in Figure 2 of the appendix, we can identify certain task pairs that consistently lead to positive or negative transfers no matter whether other tasks are involved.\n- Controllable Efficiency. Our framework can offer an explicit tradeoff between efficiency and accuracy via the adjustment of K, which controls N*K rounds of computationally intensive MTL procedures. 
As shown in Figure 2 of our paper, we demonstrate that even when K = 1, we can still obtain reasonable groupings. Moreover, as Figure 3(a) shows, MTG-Net helps to boost the grouping performance significantly with the increase of K. Owing to the structured latent space of multi-task transferring relationships, a relatively small K (K <= (N-1)/2) can obtain high-quality grouping results that are close to the optimal groupings searched in the space of 2^N task combinations. In contrast, RL-based AutoML approaches require very sophisticated designs of action spaces, exploration mechanisms, and reward schemes to control sample complexity.\n", " **Response to Weakness 2**\n\nOur work is fundamentally different from OBOE [2] in the literature of AutoML.\n\nThe OBOE system aims to speed up AutoML model selection over an unseen dataset by efficiently collecting the behaviors (e.g., performance, runtime) of a group of models. By assuming that the behaviors of various models have some common patterns across different datasets, given any new dataset, OBOE can select only a subset of models to train and then use its meta model (meta-learned on available datasets) to quickly infer the results of other models. Essentially, the low-rank hypothesis behind OBOE ensures it can generalize the common model behaviors from existing datasets to unseen datasets.\n\nDifferent from OBOE, our framework focuses on the multi-task grouping problem, in which the critical challenge is to deal with the exponentially growing space of task combinations. More importantly, the low-rank hypothesis used by OBOE does not have the capability to tackle this combinatorially exploding space. In contrast, our unique finding is that the relationships between task combinations and associated transferring gains lie in some low-dimensional manifolds. This crucial finding motivates the design of our meta learning framework for multi-task grouping.\n", " This paper studies the problem of task grouping by using an efficient and effective meta-learning framework. The authors propose to use a meta-learning algorithm, which predicts the per-task performance gains of multi-task learning over single-task learning for any combination, to find optimal task combinations. The key insight is the low-rank hypothesis on the task combination space. Based on the hypothesis, the authors develop MTG-Net, which uses an active learning strategy to select the meta-learner’s training examples. Experimental results conducted on multiple multitask datasets show significant improvement over state-of-the-art task grouping algorithms. \n\n Strengths\n1. This paper is well written and easy to follow. \n2. The approach of introducing a meta-learner to learn task grouping strategies is novel and reasonable.\n3. Multiple MTL datasets are used to evaluate the proposed method.\n\nWeaknesses\n1. On the efficiency side, even though an active learning algorithm is introduced to reduce the number of samples needed to train the meta-learner, the cost can still be large for training models and measuring the gains. I am wondering if a similar hypothesis from ENAS [1] can be used to further save cost by sharing the parameters of the pair of models for measuring gains.\n2. There exists some related work [2] in AutoML that also leverages the low-rank hypothesis to learn AutoML algorithms based on task-task relationships. Even though the application is slightly different, the novelty of building a meta-learner based on the low-rank hypothesis is limited.\n3. 
Even though the authors compared with the SOTA for task grouping, I am wondering how the proposed active learning framework compares with the existing AutoML framework, which models the meta-learner as an RL algorithm [1].\n\n[1] Pham, Hieu, et al. \"Efficient neural architecture search via parameters sharing.\" International Conference on Machine Learning. PMLR, 2018.\n[2] Yang, Chengrun, et al. \"OBOE: Collaborative filtering for AutoML model selection.\" Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019.\n Q1: Can a similar hypothesis from ENAS [1] be used to further save cost by sharing the parameters of the pair of models for measuring gains?\nQ2: How does the proposed active learning framework compare with the existing AutoML framework, which models the meta-learner as an RL algorithm?\n Yes.", " This paper introduces a simple yet effective meta-learning framework for multi-task grouping. The main goal is to avoid computing the performance gains of multi-task learning (MTL) over single-task learning (STL) for all possible task combinations, which is intractable for a large number of tasks. 
During meta-training, a mapping function between task combinations and performance gains is learned, where the meta-training set is selected using an active learning strategy to keep it small in size while still being effective. During meta-testing, the performance gains of the rest of the task combinations are predicted using the learned mapping function. Finally, the task groupings are selected based on the meta-training (ground-truth) and meta-testing (predicted) performance gains. Experiments on 3 datasets demonstrate the effectiveness of the proposed method. Strengths\n- The proposed framework to learn the mapping between task combinations and performance gains is novel.\n- The contribution is quite significant as, based on their experiments, the proposed framework significantly improves over the pairwise non-parametric approaches under similar computational costs. The visualization also helps to reveal why it works.\n- The paper is well-written and very easy to follow.\n- Most of the presentation is clear.\n\nWeaknesses\n- Certain parts of the paper lack clarity (see suggestions below).\n- The idea of selecting task groupings based on performance gains is very costly in nature as it involves a lot of MTL training. Given that the experiments cost “thousands of GPU hours in total” (line 268), the experimental results might be difficult to reproduce.\n I think this is a rather decent submission with a novel idea and convincing experimental results. Apart from the whopping computational cost of conducting the MTL procedures and perhaps the difficulty of scaling to deeper models, I only have one minor suggestion for the authors to improve the paper.\n\n- The proposed active learning strategy assigns sample weights to prioritize task combinations with larger (absolute) performance gains. According to my understanding, here the performance gains are computed using the learned MTG-Net (instead of the ground truth). Is there a rationale behind why doing so (i.e., prioritizing task combinations with larger predicted gains) is more helpful for learning MTG-Net?\n\nI also identify the following typos.\n- Line 196, “work tokens” should be “word tokens”.\n- Line 198, the times sign $\times$ in the shape of $\mathbf{X}_i$ is written as an $x$.\n- Line 205, the shape of $H_i$ should be $|C_i| \times D$.\n- In Figure 1, the STL procedure in the left subfigure, you may change the title to \"for $t^i_j \in C_i$\" to keep the notation consistent with your text in line 132.\n- In Figure 2, performance vs budget for Taskonomy-5, the oracle line is missing.\n The authors have pointed out the limitations in lines 353-356.", " The paper proposes a method for multi-task grouping, which is to find the best set of multi-task networks that covers the given target tasks. Specifically, the paper trains a model that predicts performance gains as a function of task combinations. Based on this prediction, the best task combinations that maximize the total performance gain can be obtained for given N tasks and budget B through the grouping selection algorithm. To save training cost, the paper proposes an active learning algorithm that selects task combinations to use for training based on the predicted performance gains. Experiments show that the proposed method outperforms existing MTG methods and predicts total performance gains reasonably well on three MTL datasets (Taskonomy-5, ETTm1-7, MIMIC-III-27). I think this paper proposes an interesting approach for the multi-task grouping problem. 
Compared to [41], which also takes the compute budget constraint into account, this paper does not consider the compute budget in a strict manner; however, the computational complexity is reduced from O(2^n) to O(n^2).\n\n1. Strengths\n- The key finding of the paper is that the relationships between task combinations and performance gains lie in some low-dimensional space and can be approximated reasonably well by the proposed self-attention encoder architecture. I think this is an original finding and also provides good insights.\n- Unlike existing works which approximated relations of task combinations using pairwise task relations, this paper proposes to predict the task gains as functions of higher-order task combinations directly using the self-attention encoder and task representations. I think this design is novel and reasonable.\n- The paper presents extensive experiments on three datasets, ranging from 5 to 27 tasks. Ground-truth performance gains are collected and will be released, which I believe will be useful for future MTL research.\n\n2. Weaknesses\n- I think one weakness is the high computational cost. The method requires multi-task network training for O(NK) task combinations. Though this is much smaller than the number of all possible combinations, it could still be expensive, especially when the number of tasks is large or the datasets are large.\n- Analysis of the effect of some design choices (hyperparameters, MTL model architecture, MTG model design) is missing.\n- As also mentioned in the paper, the prediction accuracy is less reliable with a smaller number of meta-training data selections. Since MTG-Net is trained using only a small set of (task combination, accuracy gain) pairs, the MTG-Net design (initialization, architecture) can strongly affect the prediction accuracy of the performance gains. For example, at the first round of MTG-Net training (k=1 in Algorithm 1), the MTG-Net used to sample the meta-training set is trained using only one (task combination, accuracy gain) pair.\n- It is good that the paper takes into account the budget B, which is defined as the allowed number of task groups. However, the actual resource usage (e.g. inference time, memory), which is of more interest in practice, may not be proportional to this budget B because the multi-task network for each group may have different resource usages.\n- Experiments look relatively weak in terms of dataset size and task diversity. It would be nice to have experiments on datasets with a larger number of data points and tasks.\n- It would be interesting to see which task combinations are selected at each step of active learning. How similar are they for each task, and do their sizes change over iterations? Also, it would be interesting to see the average train/test error of gain predictions over the iterative training-combination selection procedure, how it improves, and what the final error is.\n- How sensitive is the method to the different design choices of MTG-Net (initialization, number of layers, feature dimension, etc.)? Since it is trained using only a small set of (task combination, accuracy gain) pairs, the MTG-Net design can strongly affect the prediction accuracy of the performance gains. For example, at the first round of MTG-Net training (k=1 in Algorithm 1), the MTG-Net used to sample the meta-training set is trained using only one (task combination, accuracy gain) pair. 
I am curious how much they affect the results.\n- How does the method work for different hyperparameter choices (D, alpha, eta)? For example, alpha is currently set to 25, which I guess will greedily choose the task combinations with the best gain for each task. This makes sense to me because these task combinations with extreme gains likely serve as anchors and help interpolate gains for other combinations. At the same time, selecting diverse combinations could be more sample-efficient when training MTG-Net. How important are the current hyperparameter choices, and what happens when they are set to other values?\n- It would be interesting to see if task relations are model-agnostic. Will the gain estimation obtained from a smaller MTL backbone generalize to larger MTL backbones? This may reduce the training cost.\n- Which task prediction is selected at test time if that task is included in several task combinations and there are multiple predictions?\n The authors discussed the limitations in the paper. I do not see potential negative societal impact.", " This paper studies the task grouping problem in Multi-task Learning (MTL). Task grouping matters for MTL performance and has gained much attention recently. This paper proposes a novel method to address the task grouping problem, which formulates a meta-learning problem for task grouping. The proposed method has achieved superior performance. Pros\n\n1. The proposed method is experimentally verified to be effective on various applications.\n2. The paper has conducted extensive experiments.\n3. The paper is easy to follow.\n\nCons\n\n1. The technical contribution of this paper is limited, as it mainly utilizes the well-known meta-learning framework. What’s the performance of the active learning strategy in the early epochs? The technical contribution of this paper is limited, as it mainly utilizes the well-known meta-learning framework." ]
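The responses and reviews above repeatedly reference the grouping-selection step that turns (predicted) per-task transferring gains into a final set of at most B task groups. As a concrete illustration, here is a minimal sketch of one plausible greedy version of that step; the toy gain table, the coverage rule (uncovered tasks fall back to single-task training with gain 0), and the greedy strategy are illustrative assumptions, not the paper's exact algorithm.

```python
from itertools import combinations

# Hypothetical predicted per-task gains, e.g. from a meta-model such as
# MTG-Net: gains[combo][task] = predicted improvement of MTL over STL for
# `task` when it is trained jointly with the other tasks in `combo`.
tasks = ["seg", "depth", "normals"]
all_combos = [frozenset(c) for r in range(1, len(tasks) + 1)
              for c in combinations(tasks, r)]
gains = {c: {t: 0.1 * len(c) - (0.25 if t == "depth" and len(c) == 3 else 0.0)
             for t in c}
         for c in all_combos}  # toy numbers for illustration only

def select_groups(gains, tasks, budget):
    """Greedily pick at most `budget` groups; each task's final gain is the
    best gain over the selected groups containing it (0.0 = fall back to STL)."""
    selected = []
    best_gain = {t: 0.0 for t in tasks}  # 0.0 is the single-task baseline
    for _ in range(budget):
        def improvement(combo):
            return sum(max(gains[combo][t] - best_gain[t], 0.0) for t in combo)
        combo = max(gains, key=improvement)
        if improvement(combo) <= 0.0:
            break  # no remaining combination improves any task
        selected.append(combo)
        for t in combo:
            best_gain[t] = max(best_gain[t], gains[combo][t])
    return selected, best_gain

groups, per_task_gain = select_groups(gains, tasks, budget=2)
print(groups)
print(per_task_gain)
```

Replacing the toy `gains` table with a meta-model's predictions over all 2^N - 1 combinations recovers the decoupled two-step pipeline the authors describe: gain estimation first, grouping selection second.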
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3, 5 ]
[ "jxVdHnzNfxW", "yHgmcNLX2-", "Q4KvrQYyeu5", "ZvCqYW2p9_9", "Rqdt7d8VRrS", "IRz-3J3c4Xl", "IRz-3J3c4Xl", "IRz-3J3c4Xl", "IRz-3J3c4Xl", "IRz-3J3c4Xl", "0LuOi2ObgZ", "yHgmcNLX2-", "yHgmcNLX2-", "HQ9wRwGJTHv", "HQ9wRwGJTHv", "nips_2022_Rqe-fJQtExY", "nips_2022_Rqe-fJQtExY", "nips_2022_Rqe-fJQtExY", "nips_2022_Rqe-fJQtExY", "nips_2022_Rqe-fJQtExY" ]
nips_2022_nLKkHwYP4Au
CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds
We present a novel two-stage fully sparse convolutional 3D object detection framework, named CAGroup3D. Our proposed method first generates some high-quality 3D proposals by leveraging the class-aware local group strategy on the object surface voxels with the same semantic predictions, which considers semantic consistency and diverse locality abandoned in previous bottom-up approaches. Then, to recover the features of missed voxels due to incorrect voxel-wise segmentation, we build a fully sparse convolutional RoI pooling module to directly aggregate fine-grained spatial information from the backbone for further proposal refinement. It is memory- and computation-efficient and can better encode the geometry-specific features of each 3D proposal. Our model achieves state-of-the-art 3D detection performance with remarkable gains of +3.6% on ScanNet V2 and +2.6% on SUN RGB-D in terms of mAP@0.25. Code will be available at https://github.com/Haiyang-W/CAGroup3D.
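To make the abstract's class-aware grouping idea concrete, the toy sketch below aggregates features around each proposal center using only nearby votes that share the center's predicted semantic class, in contrast to class-agnostic grouping that would pool every nearby point. The data, the radius, and the plain feature averaging are simplified stand-ins for the paper's fully sparse-convolutional implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scene: 3-D vote centers, per-point features, predicted semantic labels.
votes = rng.uniform(0.0, 4.0, size=(200, 3))
feats = rng.normal(size=(200, 8))
labels = rng.integers(0, 3, size=200)

def class_aware_group(votes, feats, labels, centers, center_labels, radius=0.5):
    """Average features over votes that are both near a proposal center and
    share its predicted class (semantic consistency)."""
    grouped = []
    for c, cls in zip(centers, center_labels):
        dist = np.linalg.norm(votes - c, axis=1)
        mask = (dist < radius) & (labels == cls)
        grouped.append(feats[mask].mean(axis=0) if mask.any()
                       else np.zeros(feats.shape[1]))
    return np.stack(grouped)

proposal_feats = class_aware_group(votes, feats, labels,
                                   centers=votes[:5], center_labels=labels[:5])
print(proposal_feats.shape)  # (5, 8)
```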
Accept
4 expert reviewers suggest acceptance, based mostly on a strong evaluation section that shows good improvements over previous methods. Novelty of the method is deemed sufficient and well ablated. Overall this seems like a good-quality paper; although it is a tiny bit on the incremental side, it is enough for recommending acceptance.
train
[ "t_WzSYj9ExD", "cmtW5D0LR6i", "uUvEAPVBcda", "OG4W5D5f1jz", "OEu0QBYtZi", "Q2dNQ0cdLP", "sWhvCjtssLG", "qZ0arGpAcvO" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank the reviewer for providing thoughtful review and positive feedback. Below are our responses to the questions and suggestions raised by the reviewer.\n\n**R4-Q1: Update the title/introduction.** \n**R4-A1:** Thanks. We agree that the title and introduction are somewhat misleading. Our model takes the raw point clouds as input and leverages a learnable voxel feature encoder to voxelize the point clouds into 3D volumes, which can be processed by more efficient sparse convolution. \nWe will follow the reviewer's suggestion and clarify this in the revised version. \n\n**R4-Q2: The non-trivial hyper parameter will need re-tuning on different datasets.** \n**R4-A2:** As for the concern about the hyper-parameter re-tuning, all the hyper parameters such as semantic threshold $\\tau$ and scale factor $\\alpha$ are shared across the different datasets without being re-tuned, which shows the great generalizability and robustness of our method. Although category average spatial dimension is dataset-specific, it is also easy to be obtained by making dataset statistics without being hand-designed.\n\n**R4-Q3: The weights in Eqn (8).** \n**R4-A3:** As mentioned in appendix, we simply set all the loss weights to 1.0 except for the bbox refinement $\\beta_{rebox}$, which is adjusted to 0.5 for balancing the value of Stage-I box loss $\\mathcal{L}_\\text{box}$ and Stage-II refinement loss $\\mathcal{L}_\\text{rebox}$. Our method is not sensitive to these two reasonable loss weights (*e.g.*, 1.0 and 0.5) and causes only minimal fluctuations (*e.g.* less than 0.2) as shown below. We will ablate more loss weights in the revised version.\n\n| $\\beta_{rebox}$ | [email protected] | [email protected] |\n|:---------------:|:---------:|:---------:|\n| 1.0 | 74.29 | 60.15 |\n| 0.5 | **74.50** | **60.31** |\n\n**R4-Q4: Any simpler alternatives to the re-voxelization step. Feeding semantic probabilities into aggregation module?** \n**R4-A4:** Thanks for your suggestions. We augment the backbone features with semantic probabilities and conduct ablation experiments on ScanNet V2. \n\n| Strategy | [email protected] | [email protected] |\n|:----------------------:|:---------:|:---------:|\n| w/o | 70.99 | 58.42 |\n| semantic probabilities | 71.55 | 58.93 |\n| Ours | **74.50** | **60.31** |\n\nAs shown in the above table, compared with the baseline that equips with all the components except for semantic predictions and diverse local grouping, simply adding voxel-wise semantic probabilities can also boost the performance. That indicates the semantic cues are really important for grouping based methods. However, our class-aware local grouping still outperforms this variant with a large gain. It can be explained that our model can not only explicitly leverage the strong semantic information to generate class-specific subsets for better guiding the network training, but also introduce the diverse locality and semantic consistency for solving the mis-grouping problem.\n\n**R4-Q5: IOU loss is gIOU loss? VoteNet baseline in Table 2 also trained with the IOU loss?** \n**R4-A5:** We use the naive IoU loss instead of gIoU loss by simply maximizing the IoU between the proposals and corresponding ground truths. 
To make a fair comparison, all the experiments in Table 2 are trained with the same loss objective, including the naive IoU loss adopted in $\mathcal{L}_\text{box}$ and $\mathcal{L}_\text{rebox}$.\n\n**R4-Q6: Are there particular classes for which this model is better than prior work?** \n**R4-A6:** Yes. Per-class results in the appendix show that our model performs significantly better than prior works on tiny classes (*e.g.*, picture: +10.80 and +13.48 better than the SOTA in terms of mAP@0.25 and mAP@0.50 on ScanNet V2), which demonstrates the effectiveness of our local grouping strategy.", " We sincerely thank the reviewer for providing a thoughtful review and positive feedback. Below are our responses to the questions and suggestions raised by the reviewer.\n\n**R3-Q1: Validity of RoI pooling.** \n**R3-A1:** Sorry for this confusion, we will add more clarification in the revised version. As for the loss of geometric details, our RoI-Conv module does not involve re-voxelization or pooling steps, since the RoI-Conv module directly operates on voxel features from the backbone. Note that the voxel size is small enough (0.04m) that it can well capture the fine-grained surface geometry and achieve impressive performance on tiny objects (*e.g.*, picture). Actually, our model only performs voxelization in two steps: 1) At the beginning of feeding the input into the 3D voxel backbone, the point clouds are first converted to regular 3D voxels with a voxel size of 0.02m, which is fine enough for detecting coarse-grained 3D bounding boxes. 2) In the diverse local grouping step, our model re-voxelizes the semantic subsets respectively by an average pooling operation to generate class-specific 3D voxels. Notably, this is class-aware, so enough geometric information can be adaptively retained for detection. The entire operation is also differentiable, and the obtained voxel features can be easily adapted to the original point features. \n\n**R3-Q2: The role of the diverse local grouping.** \n**R3-A2:** Locality is one of the two crucial inductive biases we introduce into the grouping algorithm; it is class-dependent and diverse among the different classes. However, previous grouping methods for 3D object detection usually operate in a class-agnostic manner, which leads to the mis-grouping problem, *e.g.*, partial coverage or over-coverage of the object surfaces. Our diverse local grouping operation tries to address this problem. We first re-voxelize the semantic subsets with class-specific voxel sizes (*w.r.t.* the average spatial dimension of each category) and then group them by sparse convolutions with the same kernel size individually. Thus the smaller classes are aggregated over smaller local regions with more detailed geometric representations, and the larger classes vice versa. It is novel and reasonable for the bottom-up 3D detection framework.\nTo ablate its effectiveness, we compare our model with the variant that directly operates the class-agnostic model on each class subset without re-voxelization. As shown in the 2$^{nd}$ and 3$^{rd}$ rows of Table 2, our diverse local grouping achieves much better performance, *i.e.*, 69.24 $\rightarrow$ 72.10 and 54.05 $\rightarrow$ 57.07 on mAP@0.25 and mAP@0.50, which demonstrates its superiority. 
We sincerely thank the reviewer again and will reinforce the other contributions, such as the other crucial inductive bias (semantic consistency), the strong 3D backbone, and the efficient fully convolutional aggregation / 3D pooling module, in the revised version. Please feel free to let us know if we misunderstood this question.\n\n**R3-Q3: Improve the completeness of the paper (there are a few grammatical errors).** \n**R3-A3:** Thanks. We will carefully polish our writing and fix the grammatical errors in the revised version. ", " We sincerely thank the reviewer for providing a thoughtful review and positive feedback. Below are our responses to the questions and suggestions raised by the reviewer.\n\n**R2-Q1: The idea of using semantic predictions and geometric shifts to get class-aware clusters is not novel, such as SoftGroup.** \n**R2-A1:** SoftGroup is a valuable work in the 3D instance segmentation community and will be cited in our revised version. However, our approach is very different from the previous voting-based semantic clustering methods in the following aspects: 1) Existing semantic clustering algorithms are usually adopted in 3D instance segmentation, which aims to explicitly assign each point to its corresponding instance accurately. But our model is designed for 3D object detection and focuses on efficiently introducing two crucial inductive biases, *i.e.*, semantic consistency and diverse locality, into the grouping step for implicitly aggregating high-quality object representations and generating reliable coarse-grained bounding boxes. The strong results in Table 2 verify the effectiveness of the proposed strategy. 2) Our grouping module is a fully sparse convolutional approach, which automatically aggregates the object surface points with pure sparse convolutions and avoids hand-crafted clustering operations. Thanks to the well-optimized sparse convolution, our aggregation model is 3$\times$ faster than the fastest semantic clustering algorithm (*i.e.*, SoftGroup).\n\n**R2-Q2: The effect of feature offsets.** \n**R2-A2:** To make a fair comparison, we follow the widely used feature shifting operation proposed in VoteNet [25]. Additional ablative studies of this step are provided below.\n\n| Feature Shifting | mAP@0.25 | mAP@0.50 |\n|:----------------:|:-----------:|:----------:|\n| | 74.18 | 60.17 |\n| $\checkmark$ | **74.50** | **60.31** |\n\nWe can observe that feature shifting is slightly better than the non-shifting variant. As discussed in VoteNet, to generate more reliable object representations, an MLP is used to transform the seeds’ features extracted from the backbone to the vote space, so that the grouped features can align with the voted points automatically.\n\n**R2-Q3: Compare the inference time between the proposed method and existing methods.** \n**R2-A3:** Thanks for the valuable suggestion. Table 5 will be updated with inference time. \n\n| RoI Method | mAP@0.25 | mAP@0.50 | memory | inference time |\n|:----------------:|:---------:|:---------:|:-----------:|:--------------:|\n| PointRCNN [28] | 73.65 | 57.83 | 8,054MB | 62.9ms |\n| Part-A$^{2}$ [30]| 74.01 | 58.89 | 6,540MB | 47.9ms |\n| Ours-SA | 73.89 | 58.14 | 11,508MB | 45.5ms |\n| Ours-SpConv | **74.50** | **60.31** | **2,468MB** | **34.5ms** |\n\nWe can find that the RoI-Conv pooling module is significantly more memory- and time-efficient than previous pooling operations. 
We hope it can facilitate the two-stage 3D object detection community.\n\n**R2-Q4: All possible combinations of important modules. Whether a better backbone can reduce the improvements of local grouping.** \n**R2-A4:** Totally agree. Due to limited space, we only focused on the most significant combinations. The results of all possible combinations will be added in the revised version.\n\n|Semantic Prediction|Diverse Local Group|BiResNet |RoI-Conv | mAP@0.25 | mAP@0.50|\n|:-----------------:|:-----------------:|:----------:|:----------:|:--------:|:-------:|\n| | | |$\checkmark$|69.10 |57.62 |\n|$\checkmark$ |$\checkmark$ | |$\checkmark$|73.14 |59.85 |\n| | |$\checkmark$|$\checkmark$|70.99 |58.42 |\n|$\checkmark$ |$\checkmark$ |$\checkmark$|$\checkmark$|**74.50** |**60.31**|\n\nAs for the concern about a better backbone, we list part of the results above and find that local grouping can still boost the performance, even with a better backbone (BiResNet). \n\n**R2-Q5: No qualitative analysis.** \n**R2-A5:** Thanks. We will move some qualitative analysis from the appendix to the main paper and add further analysis. For example, per-class results in the appendix show that our model is significantly better than prior works on tiny classes (*e.g.*, picture: +10.80 and +13.48 better than the SOTA in terms of mAP@0.25 and mAP@0.50 on ScanNet V2), which demonstrates the effectiveness of our local grouping strategy.", " We sincerely thank the reviewer for providing a thoughtful review and positive feedback. Below are our responses to the questions and suggestions raised by the reviewer. \n\n**R1-Q1: Don't feel reordering the steps of the voting-based method constitutes sufficient novelty, but maybe only barely together with the 2nd-stage re-voxelization idea.** \n**R1-A1:** We modify VoteNet [25] and adapt it to be compatible with voxel representations for the succeeding class-aware local grouping module. Our main motivation is to solve the mis-grouping problem caused by class-agnostic grouping in VoteNet, which is crucial for bottom-up 3D object detectors in cluttered indoor scenes where the objects of different classes are distributed closely. Our class-aware local grouping tries to address this limitation by introducing two reasonable inductive biases, *i.e.*, semantic consistency and diverse locality among different categories. The strong results demonstrate the novelty and effectiveness of our method. Combining the class-specific re-voxelization step with the voting approach is a simple yet effective way to achieve the above ideas (a rough illustrative sketch of this re-voxelization appears after the reviews below).\n\n**R1-Q2: A 2nd-stage detector followed by initial point segmentation is not really new either.** \n**R1-A2:** RSN only performs foreground point segmentation for detecting 3D objects more efficiently. Our approach and motivation are very different from it. We perform voxel-wise semantic predictions on all categories rather than only the foreground, which aims to assign the points to their respective semantic subsets and achieve class-aware local grouping more efficiently. \n\n**R1-Q3: What do the authors mean by diverse localities in the limitation discussion?** \n**R1-A3:** Sorry for this confusion, we will clarify it and add more descriptions in the revised version. Locality is one of the most important inductive biases of the grouping-based 3D object detection framework. This paper mainly focuses on the inter-category locality, which is class-specific and diverse among the different classes, but ignores the intra-category discrimination. 
Due to the incompleteness of the point cloud and the scale variance within classes, the object spatial dimension within the same class is also variable, which leads to diverse intra-category locality. Although our grouping algorithm can implicitly handle this problem to some degree through the learnable convolutional aggregation module, it is still an open problem and will be studied in the future. \n", " The paper addresses the problem of 3D object detection in point clouds for indoor scenes (ScanNet and SUN RGB-D). The idea is based on voting-based methods: each point predicts its center -> clustering -> classification. The method proposes to classify points first, before center prediction and clustering. They claim contributions in the following:\n\n- Novel \"class-aware\" 3D proposal strategy as discussed above.\n\n- A refinement stage that re-voxelizes according to each point's class. \n\n- Good results on ScanNet and SUN RGB-D. \n\nThe results do seem strong. And both claimed novelties contributed to the improvements. I'll recommend a weak accept for now. Might change my opinion based on peers' reviews. \n Strengths:\n\n- Strong results on both datasets as mentioned. Also good ablation in Table 2 to demonstrate the improvements from the claimed novelties. \n\n- Good writing. \n\nWeaknesses:\n\n- Only somewhat sufficient novelty: I don't feel that reordering the steps of a voting-based method constitutes sufficient novelty for this venue, but maybe only barely together with the 2nd-stage re-voxelization idea. A 2nd-stage detector following initial point segmentation is not really new either. See [RSN: Range Sparse Net for Efficient, Accurate LiDAR 3D Object Detection]\n\n N/A There is one sentence. What do the authors mean by diverse localities?", " Existing 3D indoor object detection usually abandons semantic consistency within the same group and ignores diverse locality among different categories. To tackle this issue, the authors propose a novel class-aware proposal generation strategy that considers class-specific local groups. Besides, the authors propose an RoI pooling module to revisit the missed surface voxel features due to semantic segmentation errors. Their approach outperforms state-of-the-art methods on two challenging indoor datasets. Strengths: \n+ The authors utilize voxel feature encoding with a class-specific 3D voxel size to obtain the predicted vote voxels and generate class-specific geometric features based on them.\n+ The authors propose RoI-Conv pooling, which effectively recovers the missed surface voxel features due to semantic segmentation errors.\n+ The proposed method achieves state-of-the-art results with remarkable gains on ScanNet V2 and SUN RGB-D.\n\nWeaknesses: \n- The idea of using semantic predictions and geometric shifts to get class-aware clusters is not novel; it has already been used in previous work such as SoftGroup.\n- I am not sure about the effect of feature offsets. What is the difference between features extracted from the backbone and features added by feature offsets? Please explain the design goal of feature offsets.\n- As the set of neighboring input voxels is dependent on RoI-specific points, sparse abstraction is a time-expensive process. Please compare the inference time between the proposed method and existing methods in the experiments section. \n- In Table 2, it is important for a reader to see all possible combinations of important modules (e.g. 
class-aware grouping and RoI-Conv) and see their potential influences. I wonder if a better backbone can reduce the improvements of local grouping. \n- There is no qualitative analysis in the main paper. It is hard for a reader to judge what the potential improvements are. Yes.\n", " The authors introduce a novel two-stage 3D point cloud framework for indoor environments. The authors first modify the class-agnostic local grouping scheme of the previous bottom-up framework, which may fail to cluster an object that is close to objects of different categories. Instead of grouping in a class-agnostic manner as in VoteNet, the proposed method groups voxels with semantic consistency using an additional semantic prediction branch. Furthermore, the authors propose an efficient RoI pooling module to compensate for mis-classified semantic predictions in the second stage. In the experimental section, the authors provide experiments on the ScanNet V2 and SUN RGB-D benchmarks to demonstrate that the proposed method succeeds in grouping the objects within the same group and shows better performance compared to existing class-agnostic methods. [+] First of all, the motivation of the paper seems to be meaningful and pragmatic for indoor 3D object detection, because different types of objects may be located at a short distance from each other in indoor environments, unlike outdoor environments.\n\nDespite the limited technical novelty, the authors have modified the existing class-agnostic model well to solve the target problem. The backbone model is selected to consider the geometric characteristics of the indoor object targets, and the pooling module is designed for sparse convolution to handle limitations such as computational overhead and geometric information loss.\n\nAs a detection paper, the overall framework is well described, and specifically the details for reproducing and understanding the contribution, such as target assignment, handling of unique sets, and comparisons of different kernel sizes, are well presented.\n\n[-] One concern is the proposed RoI pooling module. In sparse convolution, there is a hand-crafted parameter such as the voxel size, and depending on this factor, geometric information loss occurs, as in existing pooling methods. \n\nAnother concern is the role of the diverse local grouping. In Table 2, this factor has the greatest impact on performance improvement compared to other factors. Among all the contributions, this factor does not look much different from operating a class-agnostic model for each class efficiently. Therefore, I recommend that the authors provide arguments to reinforce the other contributions.\n Feedback on the weaknesses regarding the validity of RoI pooling and arguments to reinforce the other contributions. I mentioned all comments, including reasons and suggestions, in the above sections. I recommend that the authors address all the concerns and improve the completeness of the paper (there are a few grammatical errors).", " This paper presents an approach for 3D detection from voxels. The authors propose a two-stage detection pipeline that (1) first generates high-quality box proposals by considering the spatial features of the scene. Compared to prior work, this stage also uses semantic information associated with each voxel. (2) Refines the object proposals and aggregates per-object features via an RoI pooling module. While prior work uses Set Abstraction (proposed in PointNet++) or max-pooling, the authors propose a new Sparse Abstraction operator. 
The final model is evaluated on indoor 3D detection on the ScanNet and SUN RGB-D datasets. Strengths\n+ The authors report the results averaged over 25 trials in each table. They account for both training-time and inference-time stochastic behavior in 3D detection models. This is important for follow-up work as it (a) empirically validates the current paper’s claims; (b) helps understand the robustness of each proposed component.\n+ Strong empirical performance on indoor detection benchmarks. In particular, as Table 1 shows, the proposed model achieves a much higher mAP@0.50 than prior work.\n+ The class-aware proposal prediction is well motivated. In particular (L143), predicting a dense semantic map before creating the proposals is a good idea since it additionally supervises the feature backbone. I also liked that the authors directly use the 3D bounding boxes for this part rather than segmentation masks (L158).\n+ The 3D backbone inspired by HRNet is a good choice for trading off resolution and compute. It also leads to good gains in performance (Table 2).\n\nWeaknesses\n+ The title suggests that the authors use point clouds directly. However, their implementation first converts the point clouds into voxels (L102). The title is misleading. Similarly, the introduction talks about the challenges of using unordered point clouds, which do not exist once point clouds are converted to voxels, and thus are not challenges either faced by this work or addressed by it. Please update the title/introduction.\n+ The re-voxelization step introduced in Eqn (4) seems pretty involved - (a) it requires a threshold per class which is changed with a custom schedule during training. This seems like a non-trivial hyperparameter search. (b) It additionally requires re-voxelization using a separate parameter \alpha. (c) Finally, it requires aggregation using a kernel k_alpha. I worry that this step will need re-tuning on different datasets. + How are the weights in Eqn (8) tuned? How sensitive is the model to these weights?\n+ Did you try any simpler alternatives to the re-voxelization step? For example, rather than creating separate subsets, did you try feeding the semantic probabilities of the classes into the aggregation module (which could be class-specific)?\n+ Do you mean the gIoU loss when you mention the IoU loss in L237? Also, is the VoteNet baseline in Table 2 also trained with the IoU loss?\n+ Are there particular classes for which this model is better than prior work? Thin/small objects? Since the authors spend effort on accounting for each object class’s size while aggregating features, I expect this design decision may work particularly well for some classes. Yes." ]
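Several of the exchanges above (R3-A2, R2-A1, and the Eqn (4) discussion) revolve around re-voxelizing each semantic subset with a class-specific voxel size, so that a fixed sparse-convolution kernel covers a class-appropriate local region. The snippet below is a rough, self-contained sketch of that re-voxelization with average pooling; the per-class average dimensions, the base voxel size, and the scale factor alpha are made-up illustrative values, not the paper's configuration.

```python
import numpy as np

# Hypothetical per-class average spatial dimensions (metres) and scale factor;
# a class-scaled voxel lets one fixed kernel cover class-appropriate extents.
avg_dim = {"picture": 0.3, "chair": 0.6, "sofa": 1.6}
base_voxel, alpha = 0.04, 0.25  # illustrative values only

def revoxelize(points, feats, cls):
    """Average-pool features of points that fall into the same voxel, where
    the voxel size is scaled by the class's average spatial dimension."""
    voxel = base_voxel * max(alpha * avg_dim[cls] / base_voxel, 1.0)
    keys = np.floor(points / voxel).astype(int)
    pooled = {}
    for key, f in zip(map(tuple, keys), feats):
        acc, n = pooled.get(key, (np.zeros(feats.shape[1]), 0))
        pooled[key] = (acc + f, n + 1)
    return {key: acc / n for key, (acc, n) in pooled.items()}

rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 2.0, size=(100, 3))
fts = rng.normal(size=(100, 4))
# Smaller classes keep more (finer) voxels; larger classes keep fewer.
print(len(revoxelize(pts, fts, "picture")), len(revoxelize(pts, fts, "sofa")))
```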
[ -1, -1, -1, -1, 6, 5, 5, 7 ]
[ -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "qZ0arGpAcvO", "sWhvCjtssLG", "Q2dNQ0cdLP", "OEu0QBYtZi", "nips_2022_nLKkHwYP4Au", "nips_2022_nLKkHwYP4Au", "nips_2022_nLKkHwYP4Au", "nips_2022_nLKkHwYP4Au" ]
nips_2022_Fm7Dt3lC_s2
Adaptive Data Debiasing through Bounded Exploration
Biases in existing datasets used to train algorithmic decision rules can raise ethical and economic concerns due to the resulting disparate treatment of different groups. We propose an algorithm for sequentially debiasing such datasets through adaptive and bounded exploration in a classification problem with costly and censored feedback. Exploration in this context means that at times, and to a judiciously-chosen extent, the decision maker deviates from its (current) loss-minimizing rule, and instead accepts some individuals that would otherwise be rejected, so as to reduce statistical data biases. Our proposed algorithm includes parameters that can be used to balance between the ultimate goal of removing data biases -- which will in turn lead to more accurate and fair decisions, and the exploration risks incurred to achieve this goal. We analytically show that such exploration can help debias data in certain distributions. We further investigate how fairness criteria can work in conjunction with our data debiasing algorithm. We illustrate the performance of our algorithm using experiments on synthetic and real-world datasets.
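To illustrate the bounded-exploration rule sketched in the abstract, the snippet below shows one plausible reading of the per-arrival decision: accept above the current threshold as usual, occasionally accept within the bounded band below the threshold to observe otherwise-censored labels, and never accept below the lower bound. The exploration probability and the Gaussian scores are illustrative assumptions, not the paper's exact mechanism.

```python
import random

random.seed(0)

def decide(score, theta, lb, explore_prob=0.2):
    """Accept above the current threshold as usual; within the bounded band
    [lb, theta), accept occasionally to observe otherwise-censored labels;
    below lb, never explore (this bounds the exploration risk)."""
    if score >= theta:
        return True
    if lb <= score < theta:
        return random.random() < explore_prob
    return False

theta_t, lb_t = 0.6, 0.45  # hypothetical current threshold and lower bound
applicants = [random.gauss(0.5, 0.15) for _ in range(10)]
print([(round(s, 2), decide(s, theta_t, lb_t)) for s in applicants])
```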
Accept
This paper has seen a lot of discussion between reviewers and authors. Reviewers are fairly positive after the discussion/rebuttal phase, and there have been significant score revisions upwards. A few concerns that were highlighted during the rebuttal/discussion phase are: 1) Multiple reviewers have pointed out that, amongst two sources of bias - data bias and model bias - the authors focus on assembling a dataset to avoid the first type of bias. It has been pointed out that using terms like "social bias, unfairness" and "statistical bias" is very misleading. I strongly suggest the authors revise the paper according to the reviewer comments, using more precise terminology: data bias and/or model bias. Clarity has been a concern uniformly shared amongst all reviewers. 2) The authors principally reduce the data to a single dimension using dimension reduction techniques and use a thresholded classifier. The authors responded to this concern by saying that effective feature learning in general amounts to this and that there are optimal data dimension reduction techniques. Further, the authors also experimentally demonstrate that the loss in accuracy due to these techniques is small. In summary, concerns 1 and 2 are not severe (as acknowledged by reviewers raising their scores) but are important to keep in mind while preparing the camera-ready version.
train
[ "XhqomLc0bp", "f14Zq5O3akf", "4T9QOtl4qz1", "LQka7RnKDN", "1T5MEJu3_Y", "Mp7ykH4dBa_", "e24BQGMkbW6", "Z-aeoxkAz0Z", "Vl7BPryxrym", "96Gf2T50Ac_", "b8dy-A8m_wl", "vNO0mhMNEoH", "zbJMctwAQD", "-CHKx5guvYk", "OEgGmfg5xOk", "mbnPRwN00w", "IG3Te1dLQkh" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I will update the numeric score to a 6, with possible further change after discussion with other reviewers.\nI am happy with the response given by the authors. I believe that the series of clarification given in the responses regarding Assumption 1 and its implications are definitely needed. I also appreciate the additional clarity given to the type of fairness / bias explored.\n\n---\nAlso note: there is a missed typo in the revised version (and in the original version): Line 138 (of the revised), \"mistmatch\".", " Thank you for responding to the queries. The response addresses my queries adequately. Hence I have updated my score to 7, Accept. The paper explores interesting ideas on controlled data collection to correct biased estimates and the effect of putting fairness constraints during collection. There are limitations like the restriction to single dimensional data. Overall, the work makes a good contribution. \n\nI appreciate the thorough review of related work in the response. Please include the discussion into the main paper differentiating the problem setting and analysis from past work. Clarity of presentation needs to be improved as noted by the other reviewers including clarifying the problem and notation. I would suggest addressing the restriction to single-dimensional data by pointing to ways in which the insights can be transferred to the multi-dimensional case. It is unreasonable to expect that dimension reduction would be as lossless as shown in the Adult Income experiment (also, I could find the exact procedure used for dimension reduction, so the result is a bit surprising if it is completely unsupervised).", " Dear Reviewer e1EM,\n\nOnce again, we appreciate your time devoted to reviewing this paper. We have provided responses to your comments and an updated submission. Could you please check whether they properly addressed your concern? Your feedback would be appreciated. Please kindly let us know in case there are other concerns--we hope we will have the opportunity to respond to them. Thank you very much!", " Dear Reviewer ESAg,\n\nOnce again, we appreciate your time devoted to reviewing this paper. We have provided responses to your comments and an updated submission. Could you please check whether they properly addressed your concern? Your feedback would be appreciated. Please kindly let us know in case there are other concerns--we hope we will have the opportunity to respond to them. Thank you very much!", " Dear Reviewer Hsp8,\n\nOnce again, we appreciate your time devoted to reviewing this paper. We have provided responses to your comments and an updated submission. Could you please check whether they properly addressed your concern? Your feedback would be appreciated. Please kindly let us know in case there are other concerns--we hope we will have the opportunity to respond to them. Thank you very much!", " We thank all the reviewers for their comments and valuable feedback. We have made the following updates following the reviews to further improve our work.\n\n$\\bullet$ 1: We discuss the implications of our reduction from multi-dimensional data to a single-dimensional representation, in response to comments from reviewer Hsp8 and reviewer ESAg. In particular, in Appendix A, we have added discussion and results from an additional experiment showing only a small difference in a classifier's accuracy with and without performing dimension reduction on the Adult dataset. 
\n\n$\\bullet$ 2: We unpack our Definition 1 to give more explanation of how the lower bound is derived, following the suggestions from reviewer Hsp8 and reviewer ESAg. \n\n$\\bullet$ 3: We provide additional explanations of the unknown parameter $\hat{\omega}^y_t$ (the $\alpha$-th percentile of $\hat{f}^y_t$) in our algorithm throughout the paper, according to the comments from reviewer Hsp8 and reviewer e1EM. \n\n$\\bullet$ 4: We discuss the relation of our work to important papers referred to by reviewer Hsp8 in the fields of selective labeling, fair learning, and active learning, and the relation to the online mean estimation literature. \n\n$\\bullet$ 5: We emphasize the differences between (1) social bias/(prediction) model bias/unfairness and (2) statistical bias/data bias, according to the comments from reviewer ESAg and reviewer e1EM. We emphasize that our focus is on the latter data bias issue, which has direct implications for the former algorithmic unfairness issue (we have added some citations in this regard).\n\n$\\bullet$ 6: We discuss the reason why there is no ground-truth information in the main text, according to the comments from reviewer ESAg and reviewer e1EM. Due to the page limit, that important information was included primarily in the Appendix. \n \n$\\bullet$ 7: We add some citations regarding our Assumption 1 (single parameter unknown assumption), following the suggestions from reviewer ESAg.\n\n$\\bullet$ 8: We formalize the term speed of debiasing, according to the comment from reviewer Hsp8.\n\n$\\bullet$ 9: We add explanations and details for notation missing from Theorem 4, following the comment from reviewer e1EM. \n \nSeveral of the updates have been made in the revised draft (the location of these revisions has been noted in the individual responses). If our manuscript is accepted, the additional content introduced and summarized above (including discussions about dimensionality reduction, unpacking of Definition 1, and additional related work) will be merged into the main text given the extra page limit for the camera-ready version. ", " Thank you for your valuable questions, comments, and suggestions. \n\n**Major comment 1**: Thank you for your careful reading. The parameter $\hat{\omega}^y_t$ is updated using the \"new data\", but the classifier is trained on all collected samples over time. Sorry for any confusion; we have added more clarifications in the pseudo-code in the revised draft.\n\n**Major comment 2**: You are absolutely correct that, based on our Assumption 1 (single parameter unknown), Theorem 2 holds for any arbitrary statistic. Take a Gaussian distribution as an example; if we only assume the mean $\mu$ is unknown, then the distribution shape (how the distribution spreads out) is fixed. Based on the law of large numbers, as we collect i.i.d. samples from the distribution, we are able to find any arbitrary statistic (e.g., the 30\%, 50\%, 70\% percentiles). After that, we can add/subtract the distance from these percentiles to the median (50\%) to get the unknown parameter $\mu$, because the sd $\sigma$ is fixed, which means the distance from any such statistic to the median is a constant. We have added this discussion at the conclusion of the proof of the theorem.\n\n**Major comment 3**: Thank you for noting this. In line 180, we describe the general case where $\hat{\omega}^0_t$ is the $\alpha$-th percentile of $\hat{f}^0_t$. 
In line 181, we provide an example with $\alpha = 50$, which is the median of the distribution; also, in Theorems 1 and 2, we use the mean w.l.o.g. We have clarified this in Theorem 1 in the revised draft. We note that under Assumption 1, we can map one-to-one from any $\alpha$-percentile to another, and therefore the choice of percentile is without loss of generality. \n\n**Major comment 4**: Thank you for the detailed reading and for noting this. In the revised draft, we have added more explanations and details for each symbol in the theorem statement. \n\n**Major comment 5**: Thanks for your careful reading. Due to the limited space, adding the ground-truth values to the main-text Fig. 1 would make the graphs even smaller. Hence, we do not include a title in the main text, but we do include this ground-truth information in the appendix with larger figures. Sorry for any confusion; we have added some clarifications in the revised draft.\n\n**Major comment 6**: Thank you for your comments on the notion of bias. We would like to first note that our proposed active debiasing algorithm can also be viewed as one that is only used for data collection. That is, this data collection algorithm can be separate from the prediction model that eventually takes the collected data as input to generate useful predictions for the policy maker. \n\nWith this in mind, in terms of the relation to unfairness/\"bias\", as noted in our introduction and as classified by prior work (e.g., Mehrabi et al. in our references), there are two general sources of algorithmic bias: (1) social bias/(prediction) model bias/fairness, and (2) statistical bias/data bias; we focus on the latter, recognizing that it has implications for the former. Specifically, if we start with biased training data (statistical bias), the trained (prediction) model will be different from the desired/true model. This could also mean that the trained model is unfair, as studied and verified, e.g., by [1,2,3]. Therefore, our relation to the existing literature on social bias/unfairness is that unbiased/correct data is the first step towards fair AI. We also establish a connection (Proposition 1) between socially fair models and our algorithm for adaptively collecting statistically unbiased data. \n\n[1] Kallus, Nathan, and Angela Zhou. \"Residual unfairness in fair machine learning from prejudiced data.\" International Conference on Machine Learning. PMLR, 2018.\n\n[2] Wang, Jialu, Yang Liu, and Caleb Levy. \"Fair classification with group-dependent label noise.\" Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. 2021.\n\n[3] Zhu, Zhaowei, Tianyi Luo, and Yang Liu. \"The rich get richer: Disparate impact of semi-supervised learning.\" ICLR 2022.\n\nLastly, in our study of data bias, we have formalized data biases as the mismatch in the parameter of the estimated and true (joint) feature-label distributions $f^y(x)$ (Lines 138-139). We do not consider the specific source that has resulted in this mismatch; it could be any of the issues of data bias, such as underrepresentation of a group, data shifts over time, incorrect/selective labeling in the past, feature measurement errors, etc. Our model of bias allows us to propose an algorithm for adaptively collecting an unbiased dataset while remaining agnostic to the original source of data bias. \n\n**Minor comments**: Thank you for your careful reading and comments. We have fixed the typos and added the initialization for Algorithm 1 in the appendix in the revised draft. 
", " **Minor comment 1**: The single parameter unknown assumption is used, e.g., in the following works in the multi-armed bandit and fair learning literatures, which we will make sure to add to L131:\n\n[1] Slivkins, Aleksandrs. \"Introduction to multi-armed bandits.\" Foundations and Trends® in Machine Learning 12.1-2 (2019): 1-286.\n\n[2] Patil, Vishakha, et al. \"Achieving Fairness in the Stochastic Multi-Armed Bandit Problem.\" J. Mach. Learn. Res. 22 (2021): 174-1.\n\n[3] Lattimore, Tor, and Csaba Szepesvári. Bandit algorithms. Cambridge University Press, 2020.\n\n[4] Schumann, Candice, et al. \"Group fairness in bandit arm selection.\" arXiv preprint arXiv:1912.03802 (2019).\n\n[5] Raab, Reilly, and Yang Liu. \"Unintended selection: Persistent qualification rate disparities and interventions.\" Advances in Neural Information Processing Systems 34 (2021): 26053-26065.\n\n**Minor comment 2**: Thank you for noting this. In Definition 1, in more detail, we choose $LB_t$ such that $\\hat{F}^0_t(\\omega^0_t)-\\hat{F}^0_t(LB_t)=\\hat{F}^0_t(\\theta_t)-\\hat{F}^0_t(\\omega^0_t)$; that is, such that $\\omega^0_t$ is the median in the interval $(LB_t, \\theta_t)$ based on the current estimate of the distribution $\\hat{F}^0_t$ at the beginning of time $t$. Then, we update $\\omega^0_t$ to $\\omega^0_{t+1}$, the \\emph{realized} median of the distribution between $(LB_t, \\theta_t)$ based on the observed data during $[t, t+1)$. Once the underlying distribution is correctly estimated, (in expectation) we will observe the same number of samples between $(LB_t, \\omega^0_t)$ and between $(\\omega^0_t, \\theta_t)$, and hence $\\omega^0_t$ will no longer change. We will add this explanation after Definition 1 in the final draft (given the extra space). \n\n**Minor comment 3**: Thanks for noting this. In terms of the seemingly paradoxical statement: the first ``increase'' is stating the increase in the exploration \\emph{threshold} $LB_t$ (as opposed to increase in data exploration/collection). If the decision threshold $\\theta_t$ becomes smaller, by Definition 1, the corresponding lower bound $LB_t$ will become larger. As a result, our algorithm's exploration range $[LB_t, \\theta_t]$ becomes narrower overall, and that is why our algorithm becomes more conservative at (data) exploration. We will make sure to work on rephrasing that discussion for clarity in the final draft. \n\n**Typos, etc 1**: Thank you for your careful reading. You are right that there are true $\\theta_a, \\theta_b$ (they never change w.r.t. time $t$), and we are estimating via $\\theta_{a,t}, \\theta_{b,t}$. Here, $\\theta_{a,t}$ is the same as $\\hat{\\theta}_{a,t}$. We have modified our notations to make it consistent with other notations in the revised draft. \n\n**Remaining typos, etc**: Thank you for your careful reading and comments. We have fixed these typos in the revised draft. ", " **Major comment 2**: Thank you for you comment on the importance of applicability of our work to multi-dimensional data. We hope to provide support for this below. Our current analytical work in the paper indeed discusses one-dimensional feature data and threshold classifiers, and our experiments also consider the algorithm used on data with multi-dimensional features by performing a dimension reduction (e.g., in our Adult dataset experiment; the FICO credit scores did not require such reduction). 
\n\nIn terms of whether this dimension reduction will lead to limitations due to considerable information loss, we have run an additional set of experiments on the Adult dataset, with classifiers trained with and without performing dimension reduction. The experiments show only a minimal loss in accuracy (<1\\%), with results as follows: \n\n$\\bullet$ For the classifier trained through logistic regression without performing dimension reduction: the overall accuracy is 78.44\\%, and the accuracies for the advantaged (41762 samples) and disadvantaged (7080 samples) groups are 77.42\\% and 85.27\\%, respectively.\n\n$\\bullet$ For the classifier trained through logistic regression after performing dimension reduction: the overall accuracy is 77.79\\%, and the accuracies for the advantaged and disadvantaged groups are 76.73\\% and 84.04\\%, respectively.\n\nHence, such reduction and the focus on threshold classifiers might not be as restrictive as they sound: their optimality has been established in the literature by Corbett-Davies et al. [1, Thm 3.2] and Raab \\& Liu [2] as long as a multi-dimensional $X$ can be mapped to a properly defined scalar. The recent advances in deep learning have in fact helped enable this possibility: for instance, one can take the last-layer output of a deep neural network and use it as the single-dimensional representation. As an example, one can collect a variety of information about a person's finances and train a model to combine them into a single dimension $\\in [0, 850]$, i.e., a credit score. Then the classification question of whether someone's loan application should be approved or not reduces to finding a threshold of the risk score to determine the decision.\n\nWe have added the discussion above on single-dimensional data and threshold classifiers to our discussion section in Appendix A in our revised draft (lines 539-560). We will move a summary of this discussion to a conclusion section in our final draft as well (given the extra space). \n\n[1] Corbett-Davies, Sam, et al. \"Algorithmic decision making and the cost of fairness.\" Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2017.\n\n[2] Raab, Reilly, and Yang Liu. \"Unintended selection: Persistent qualification rate disparities and interventions.\" Advances in Neural Information Processing Systems 34 (2021): 26053-26065.\n\n**Major comment 3**: Thanks for your careful reading. Due to the limited space, adding the ground-truth values to the main-text Fig. 4 would make the graphs even smaller. Hence, we have not included these as plot titles in the main text, but we have included this ground-truth information in the appendix with larger figures (Fig. 10 caption). For the classification error or loss, we have included additional experiments in the appendix (Fig. 12) to compare the classification error with different depths of exploration, and a regret analysis for the loss. The ground-truth information for Fig. 12 is as follows: the initial biased distributions are Beta(2,3) and Beta(5,5) for labels 1 and 0 respectively, and the true distributions are Beta(5,3) and Beta(3,5) respectively. Similarly, the ground-truth information for the \\emph{Adult} and \\emph{FICO} datasets can be found in the Fig. 10 and Fig. 11 captions. We apologize for any confusion; we have added more clarifications in the revised draft. 
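A sketch of the kind of with/without-reduction comparison reported above. The responses do not state the exact reduction procedure, so PCA down to a single feature is a stand-in assumption, the OpenML "adult" loader is an assumed data source, and only numeric features are used for simplicity:

```python
# Train logistic regression on Adult with all numeric features vs. a single
# learned 1-D feature, and compare held-out accuracy.
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

adult = fetch_openml("adult", version=2, as_frame=True)
X = adult.data.select_dtypes("number")        # numeric features only, for simplicity
y = (adult.target == ">50K").astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
reduced = make_pipeline(StandardScaler(), PCA(n_components=1),
                        LogisticRegression(max_iter=1000))
for name, model in (("all numeric features", full), ("1-D reduction", reduced)):
    print(name, round(model.fit(X_tr, y_tr).score(X_te, y_te), 4))
```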
In terms of the impacts of dimensionality-reduction we have used in the Adult dataset, in the previous answer to \"Major comment 2\", we have provided an additional experiment results and discussions on why we believe the dimensionality reduction on the features and our focus on threshold classifiers is not too restrictive. We have also added this discussion in Appendix A in our revised draft. ", " Thank you for your valuable comments and suggestions. \n\n**Major comment 1**: Thank you for your comments on the notion of bias. \n\nWe would like to first note that our proposed active debiasing algorithm shown in line 125-129 is one that is used for data collection (as opposed to prediction): our algorithm is estimating the data distribution and using it as a guideline to adaptively collect new data, in a way that leads to a statistically unbiased dataset. As such, this data collection algorithm can be separate from the algorithm (prediction model) that eventually takes the collected data as input to generate useful predictions for the policy maker. In other words, the active debiasing algorithm we propose is one that takes the initially observed training data as input and tries to estimate the correct distribution $P(X,Y)$ ($f^y(x)$ in our paper notation) of the data; this is in contrast to a prediction model takes $X$ as an input and outputs predictions of $Y$. \n\nIn terms of relation to fairness, as noted in our introduction and as classified by prior work (e.g., Mehrabi et al. in our references), there are two general sources of algorithmic bias: (1) social bias/(prediction) model bias/unfairness, and (2) statistical bias/data bias. We focus on the latter, recognizing that it has implications on the former. Specifically, if we start with biased training data (statistical bias), the trained (prediction) model will be different from the desired/true model. This could also mean that trained model is unfair, as studied and verified, e.g, by [1,2,3]. Therefore, our relation to the existing literature on social bias/unfairness (as noted in our introduction and related work) is that unbiased/correct data is the first step towards fair AI. We also establish a connection (Proposition 1) between socially fair models and our algorithm for adaptively collecting statistically unbiased data. \n\n[1] Kallus, Nathan, and Angela Zhou. \"Residual unfairness in fair machine learning from prejudiced data.\" International Conference on Machine Learning. PMLR, 2018.\n\n[2] Wang, Jialu, Yang Liu, and Caleb Levy. \"Fair classification with group-dependent label noise.\" Proceedings of the 2021 ACM conference on fairness, accountability, and transparency. 2021.\n\n[3] Zhu, Zhaowei, Tianyi Luo, and Yang Liu. \"The rich get richer: Disparate impact of semi-supervised learning.\" ICLR 2022.\n\nLastly, in our study of data bias, we have formalized data biases as the mismatch in the parameter of the estimated and true (joint) feature-label distributions $f^y(x)$ (Lines 138-139). We do not consider the specific source that has resulted in this mismatch; it could be any of the issues of data bias such as underrepresentation from a group, data shifts over time, incorrect/selective labeling in the past, feature measurement errors, etc. Our model of bias allows us to propose an algorithm for adaptively collecting an unbiased dataset while remaining agnostic to the original source of data bias. \n", " **Major comment 5**: Thank you also for pointing to the relation to the literature on online mean estimation. 
We will conduct and add a review of this literature in our related work section in the final draft. In summary, compared to this literature, the main technical challenges of our proposed bounded exploration in online mean estimation is that it involves evaluating the behavior of statistical estimates based on data collected from a truncated distribution with \\emph{time-varying} truncation. More specifically, our data collection interval is bounded and truncated (which has been considered in some prior work on distribution/mean estimation as well, e.g., Lai and Ying [1991]) but our exploration interval $[LB_t, \\infty)$ is itself adaptive (which we believe is the main new aspect) and is what has motivated our analysis in a finite sample regime in Theorem 3. As you have noted, our focus on the interplay of fairness constraints with online estimation efforts (Proposition 1) is also new compared to this existing literature, and we will make sure to emphasize that as well. \n\n[1] Lai, Tze Leung, and Zhiliang Ying. \"Estimating a distribution function with truncated and censored data.\" The Annals of Statistics (1991): 417-442.\n\n**Minor comments**: Thank you for your careful reading and comments. We have added some clarifications and fixed the noted typos in the revised draft, and hope to add the additional clarifications and a conclusion section to our final draft. ", " **Major comment 4**: Thank you for pointing out these important papers! We will make sure to include them in our reference list and related work in the final draft. Below, we provide a discussion of each paper and how our work relates to them: \n\n$\\bullet$ From the selective labeling perspective (censored feedback in our paper): Lakkaraju et al. [1] address the problem of evaluating the performance of predictive model under the selective labeling problem. They propose a contraction technique to compare the performance of the predictive model and human judge while they are forced to have the same acceptance rate. Our work is similar in our focus on selective labeling, but we propose a bounded exploration technique to remove the difference between the estimated and true distributions, and avoid sample selection bias during data collection procedure. We also consider the cost of exploration and fairness consideration. In contrast, our work is more closely related to De-Arteaga et al. [2]. We both study the problems arising due to selective labeling. Similar to us, [2] proposes a data augmentation scheme by adding more samples that would be more likely rejected (we refer to this as exploration) to correct the sample selection bias. Their proposed data augmentation technique is similar to our bounded exploration, but it differs in its selection of samples in that it adds samples that would be more likely to be rejected. \n\n$\\bullet$ From the fair learning from imperfect data perspective: We have discussed the paper from Blum and Stangl in the Appendix B. The second referred paper, Kallus and Zhou [3], shows that residual unfairness remains even after the adjustment for fairness when policies are learned from a biased dataset. They propose a re-weighting technique (similarly, re-weighing ideas are explored in [Blum and Stangl] and [Jiang and Nachum]) to solve the residual unfairness issue while accounting for the censoring/adaptive sampling bias. In contrast, we use bounded exploration to remove the mismatch between an incorrectly estimated and the true data distributions through additional data collection over time. 
We further consider the effects of fairness interventions as orthogonal to our proposed data collection procedure. \n\n$\\bullet$ From the active learning perspective: We have some discussions about the active learning literature in Appendix B, and will augment them with the suggested papers. Abernethy et al. [4] propose an active sampling and re-weighting technique by sampling from the worst off group at each step. Their goal is to build a computationally efficient algorithm with strong convergence guarantees to improve the performance on the disadvantage (highest loss) group while satisfying the notion of min-max fairness. Noriega-Campero et al. [5] propose an adaptive fairness approach, which adaptively acquires additional information according to the needs of different groups or individuals given information budgets, to achieve fair classification. Similar to the approaches of these papers, we also compensate for adaptive sampling bias through exploration (by admitting individuals who would otherwise be rejected). In contrast, we start with a biased dataset, and we primarily focus on recovering the true distribution by bounded exploration, accounting for the cost of exploration, avoiding the adaptive sampling bias, and consider fairness issues as orthogonal to our data collection procedure (and as such, can apply our procedure to debiasing the estimates on a single group). \n\n[1] Lakkaraju, Himabindu, et al. \"The selective labels problem: Evaluating algorithmic predictions in the presence of unobservables.\" Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2017.\n\n[2] De-Arteaga, Maria, Artur Dubrawski, and Alexandra Chouldechova. \"Learning under selective labels in the presence of expert consistency.\" arXiv preprint arXiv:1807.00905 (2018).\n\n[3] Kallus, Nathan, and Angela Zhou. \"Residual unfairness in fair machine learning from prejudiced data.\" International Conference on Machine Learning. PMLR, 2018.\n\n[4] Abernethy, Jacob, et al. \"Active sampling for min-max fairness.\" arXiv preprint arXiv:2006.06879 (2020).\n\n[5] Noriega-Campero, Alejandro, et al. \"Active fairness in algorithmic decision making.\" Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. 2019.", " **Major comment 3**: Thank you for noting this. We have added the missing details to the beginning of the proof of the proposition, and will update the proposition statement in our final draft as well. We compare the speed of debiasing based on the error in the parameter estimate, $\\mathbb{E}[|\\hat{\\omega}^y_t-\\omega^y|]$. Given a fixed $t$, the algorithm for which this error is larger has a lower speed of debiasing. In words, the slower algorithm needs to wait for \\emph{more} arriving samples before it can reach the same parameter estimation error as a faster algorithm. Alternatively, the speed of debiasing comparison can be in terms of the number of arriving samples needed in order for the estimated parameter $\\hat{\\omega}^y$ to be within a given distance of the true parameter ${\\omega}^y$, in expectation. \n\nTo corroborate that Proposition 1 aligns with this definition, for instance, when a group is over-selected under a fairness constraint, the fairness-constrained threshold $\\theta^F_{g,t}$ will be lower than the unconstrained threshold $\\theta^U_{g,t}$. 
Therefore, the exploration range will be narrower, which means by adding a fairness constraint, the algorithm needs to wait and collect more samples (takes longer time) before it manages to collect sufficient data to accurately update the unknown distribution parameter, and hence, it has a slower debiasing speed.", " Thank you for your valuable comments and suggestions. \n\n**Major comment 1**: Thank you for noting these. We let $\\hat{\\omega}^y_t$ be the $\\alpha$-th percentile of $\\hat{f}^y_t$ in our algorithm. (This has now been clarified in lines 135-136 after Assumption 1 in the revised draft.) As a simple instance, when $\\alpha = 50$ and $\\hat{f}^y_t$ is Gaussian, this $\\hat{\\omega}^y_t$ is the median of $\\hat{f}^y_t$. Assumption 1 then states that the firms will update the estimated distribution $\\hat{f}^y_t$ by updating its mean/median $\\hat{\\omega}^y_t$ only (without updating its variance). We have stated the assumption more generally, as our choice of an $\\alpha$-percentile is wlog. For instance, the unknown parameter could be the rate parameter $\\lambda$ of an exponential distribution, or the $\\beta$ of a Beta distribution; such unknown parameters have a one-to-one mapping to the distribution's $\\alpha$-percentiles under Assumption 1, and therefore our algorithm can be used to update such unknown parameters as well by performing the appropriate mapping. \n\nIn Definition 1, in more detail, we choose $LB_t$ such that $\\hat{F}^0_t(\\hat{\\omega}^0_t)-\\hat{F}^0_t(LB_t)=\\hat{F}^0_t(\\theta_t)-\\hat{F}^0_t(\\hat{\\omega}^0_t)$; that is, such that $\\hat{\\omega}^0_t$ is the median in the interval $(LB_t, \\theta_t)$ based on the current estimate of the distribution $\\hat{F}^0_t$ at the beginning of time $t$. Then, we update $\\hat{\\omega}^0_t$ to $\\hat{\\omega}^0_{t+1}$, the \\emph{realized} median of the distribution between $(LB_t, \\theta_t)$ based on the observed data during $[t, t+1)$. Once the underlying distribution is correctly estimated, (in expectation) we will observe the same number of samples between $(LB_t, \\omega^0_t)$ and between $(\\omega^0_t, \\theta_t)$, and hence $\\omega^0_t$ will no longer change. Thanks for pointing this out; we will add this explanation after Definition 1 in the final draft (given the extra space). \n\n**Major comment 2**: Our current analytical work indeed focuses on one-dimensional feature data and threshold classifiers, and our experiments also consider the algorithm used on data with multi-dimensional features by performing a dimension reduction (e.g., in our Adult dataset experiment). \n\nIn terms of whether there is a considerable information loss due to such dimension reduction, we have run an additional set of experiments on the Adult dataset, with classifiers trained with and without performing dimension reduction. 
The experiments show only minimal (<1%) loss in accuracy, with results as follows: \n\n$\\bullet$ For the classifier trained through logistic regression without performing dimension reduction: the overall accuracy is 78.44\\%, and the accuracies for the advantaged (41762 samples) and disadvantaged (7080 samples) groups are 77.42\\% and 85.27\\%, respectively.\n\n$\\bullet$ For the classifier trained through logistic regression after performing dimension reduction: the overall accuracy is 77.79\\%, and the accuracies for the advantaged and disadvantaged groups are 76.73\\% and 84.04\\%, respectively.\n\nIn addition, the focus on a threshold classifier might not be as restrictive as it sounds: its optimality has been established in the literature by Corbett-Davies et al. [1, Thm 3.2] and Raab \\& Liu [2] as long as a multi-dimensional $X$ can be mapped to a properly defined scalar. The recent advances in deep learning have in fact helped enable this possibility: for instance, one can take the last-layer outputs of a deep neural network and use them as the single-dimensional representation. As another example, one can collect a variety of information about a person's finances and train a model to combine them into a single dimension $\\in [0, 850]$, i.e., a credit score. Then the classification question of whether someone's loan application should be approved or not reduces to finding a threshold of the risk score to determine the decision.\n\nWe have added the discussion above on single-dimensional data and threshold classifiers to our discussion section in Appendix A in our revised draft (lines 539-560). We will move a summary of this discussion to a conclusion section in our final draft, as you have also suggested (given the extra space). \n\n[1] Corbett-Davies, Sam, et al. \"Algorithmic decision making and the cost of fairness.\" Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2017.\n\n[2] Raab, Reilly, and Yang Liu. \"Unintended selection: Persistent qualification rate disparities and interventions.\" Advances in Neural Information Processing Systems 34 (2021): 26053-26065.", " The work studies the problem of collecting new data to learn the outcome distributions of sensitive groups. It analyses three schemes of data collection — (exploitation only) one in which decisions to give a positive decision to individuals are based solely on the current classifier, and (exploration only and active debiasing) two others where individuals who would not have been given a positive decision are given one randomly, with or without some constraints. Theoretically analysing the cases of unimodal and univariate distributions, the work shows that the latter two schemes estimate the outcome distributions correctly. It also proves bounded regret in making correct decisions for the active debiasing scheme. These observations are verified in experiments on two real datasets. 
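The three collection rules this summary describes can be sketched compactly; the notation is simplified to one dimension, and `eps` (the exploration rate) and `lb_t` are assumed hyper-parameters of the bounded variant:

```python
# Acceptance rules for the three data-collection schemes described above.
import random

def accept(x: float, theta_t: float, lb_t: float, scheme: str, eps: float = 0.1) -> bool:
    if scheme == "exploitation":        # accept only above the current threshold
        return x >= theta_t
    if scheme == "pure-exploration":    # occasionally accept anyone below it
        return x >= theta_t or random.random() < eps
    if scheme == "active-debiasing":    # bounded exploration: only within [lb_t, theta_t)
        return x >= theta_t or (lb_t <= x and random.random() < eps)
    raise ValueError(scheme)
```

Only the last rule restricts exploration to the bounded interval, which is what controls the cost of exploration.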
Strengths\n- interesting and overlooked problem of how new data collection impacts fairness guarantees\n- theoretical analysis poses reasonable questions on learnability in this setting and finds intuitive answers\n- generally well written paper\n\nWeaknesses\n- some terms, like bias or debiasing, are not made precise\n- the most relevant works are not discussed in the main paper or Appendix\n- limitations of the assumptions (like unimodal distributions) and of the method (like not looking at the features) are not discussed\n\nOverall I like the research direction of studying the interplay between data collection and fairness when starting from imperfect data. Proposition 1 is a good step towards understanding this interplay. \n\n\n---\n## After the response\n\nThe response addresses my queries adequately. Hence I have updated my score to 7, Accept. The paper explores interesting ideas on controlled data collection to correct biased estimates and the effect of putting fairness constraints during collection. There are limitations like the restriction to single-dimensional data. Overall, the work makes a good contribution.\n\nI appreciate the thorough review of related work in the response. Please include the discussion in the main paper, differentiating the problem setting and analysis from past work. Clarity of presentation needs to be improved as noted by the other reviewers, including clarifying the problem and notation. I would suggest addressing the restriction to single-dimensional data by pointing to ways in which the insights can be transferred to the multi-dimensional case. It is unreasonable to expect that dimension reduction would be as lossless as shown in the Adult Income experiment (also, I could not find the exact procedure used for dimension reduction, so the result is a bit surprising if it is completely unsupervised). Please address the following five points in the response.\n\n1. Assumption 1 is not clear, mainly because the parameter \\omega is not explained. Similarly, Definition 1 is introduced without giving the context for the formula in lines 159-161.\n\n2. Clarify whether the analysis is restricted to single-dimensional data and threshold classifiers. Extensions to multiple dimensions are not discussed.\n\n3. Clarify the term speed of debiasing in Proposition 1. The statement and the proof are not precise.\n\n4. Related work can be made more comprehensive by discussing important papers on the three relevant parts of the problem - selective labelling, fair learning from imperfect data, and active learning of fair models. For example, see Lakkaraju et al. 2017 (https://dl.acm.org/doi/10.1145/3097983.3098066) and De-Arteaga et al. 2018 (https://arxiv.org/abs/1807.00905) for selective labeling, and Blum and Stangl 2019 (https://arxiv.org/abs/1912.01094) and Kallus and Zhou 2018 (https://arxiv.org/abs/1806.02887) for fairness learning from imperfect data. For active debiasing please see Abernethy et al. 2020 (https://arxiv.org/abs/2006.06879) and Noriega-Campero et al. 2019 (https://dl.acm.org/doi/10.1145/3306618.3314277).\n\n5. For the first four results the problem seems to be more related to online mean estimation than fair learning — how to learn two parameters (which are means of some distribution) online, with the difference being the constraint of bounded exploration. The final result Proposition 1 is interesting in terms of the interplay between online data collection and putting fairness constraints. 
Because of this, I would suggest discussing differences from the related work on online mean estimation. Please motivate the unique aspects of the problem setting like how the bounded exploration makes the problem and the analysis harder.\n\n\nMinor (no response is requested for these)\n\nThe presentation of the problem can be improved in the Introduction. It is a bit stylistic preference, but I would suggest adding a discussion of what has been done in the past work in Introduction itself and pointing out the shortcomings and improvements by the current work.\nMore importantly, the idea of bias in data is not clear. Does this refer to the training data being censored i.e. labels are available only for individuals with positive predictions? Or, the idea is more general? Lines 47-49 can be made more precise. For example, clarify if the mismatch between estimated and true label distribution refer to estimation error i.e. mismatch due to finite samples or the estimate is biased.\n\nPlease clarify whether the analysis depends on the form of the loss-miniminizing fair algorithm in Eq. (1). Does it matter if it is solved when the fairness constraints are defined with some slack (i.e. the difference need not be exactly zero) as done in many fair learning algorithms? Can the constraints be defined for differences in positive predictions (instead of the differences in thresholds) between the groups?\n\nDefine or provide a reference for alpha-stable distributions\n\nunimodel -> unimodal\n\nAdd a conclusion section Limitations of the work are not discussed in the main paper. Please discuss the limitations of the analysis for unimodal distributions and of the method in ignoring the feature information in data collection. Please discuss related work and use it to motivate the uniqueness of the problem setup. Some of the discussion from Appendix A can be moved to the main paper. Potential negative impacts are discussed.", " The paper presents a method for \"data debiasing\" in a sequential-data setting with one feature (analysis extended to two in supplement). The algorithm conducts a form of bounded-random exploration in this one-dimensional setting, and can be used to perform learning under fairness constraints. The authors provide a theoretical analysis of the proposed algorithm, demonstrating convergence and analyzing the regret bound. The paper includes experiments on a simluated datasets and versions of two fairness datasets.\n\nOverall, the paper is mostly well-written and notation clear (with some exceptions, noted below). I have three main concerns about the paper. The first is about the practical significance of the problem: this seems to be of limited real-world significance, and the authors do not identify a meaningful application where such a model and the described setting hold. Second, I have some concerns with the presentation of the work as \"data debiasing\", as their work is really about model debiasing in an active-sampling setting. Third, I find the real-world experiments somewhat unconvincing (see below).\n\n## After response\n\nSee comments below. #### Major Comments\n\n* This work strikes me as inappropriately framed, in two ways. First, it seems to be about *model* debiasing, not data debiasing. For example, the authors' own definition of bias is a property of the model, not of the data (cf. L43-49; L125-129). Second, this definition of \"bias\" seems wholly unrelated to any fairness concerns and is effectively a measure of model error. 
The work would make sense as a fairness work if their measure of bias was, for example, related to disparities in this error over subgroups, but as-is, the proposed algorithm seems to simply be a one-dimensional active-learning method that could incorporate fairness (or any other) constraints.\n\n* The entirety of the main text focuses on the case of one-dimensional data (with a reference to 2-D case in supplement). If this is the case, the proposed method seems to be of limited usefulness in most real-world classification scenarios, where many features are typically available. The authors' own somewhat contrived experiments on \"real-world\" datasets (Adult, FICO) which reduce these datasets to a single feature in order to apply their proposed method, seem to demonstrate the limited real-world usefulness. I understand that this may be an interesting theoretical case for the bandit literature, but the authors' claim to solve a meaningful fairness problem is hard to take at face value without any clear examples of such a scenario emerging in the real world.\n\n* The real-world data experiments, in particular, are weak. The authors do not provide the ground-truth values in Fig. 4a/4b/4c/4d; only one panel compares the proposed method to baselines; and the synthetic data augmentation in 4(c) seems ad hoc and is not described. Most concerning, however, is the authors' distilling these richly-featured tabular datasets into a single feature for the purposes of their experiments, which amounts to discarding information in order to shoehorn in the proposed method. This is effectively ignoring hundreds of published results on these datasets in the fairness literature, which would perhaps be an appropriate baseline. The results are also missing any notion of classification error or loss, if I understand them correctly.\n\n#### Minor Comments\n\n* L131 \"This type of assumption is common in the literature\" - please provide relevant citations.\n\n* Definition 1 could use more unpacking; some of the properties described as \"intuitive\" or otherwise are not immediately clear to me. A clearer description of the properties of LB_t would clarify the work considerably.\n\n* The intuition in L304-312 is not clear; in particular the sentence in L306-L309 could use more unpacking. It currently reads, paradoxically, as \"an increase in exploration makes the model more conservative at exploration\".\n\n#### Typos etc. \n\n\n* I am confused why there is no \"hat\" on \\theta, as there is on \\omega. Isn't it the case that there exists a true \\theta_a, \\theta_b, which we are estimating via \\hat{\\theta}_a, \\hat{\\theta}_b, respectively?\n\n* L154: in my opinion it would be clearer to retain the g subscript in the following analysis.\n\n* \"lowerbound\" --> lower bound\n\n* L178: \"update the estimates to f...\" --> update f\n\n* L179: \\theta_t^y is not an \"unknown\" parameter if we are updating it.\n\n* L213: \"overestiamted\" See above. See above.", " The paper considers an algorithm to debias datasets through adaptive and bounded exploration for classification. In particular, the bias in the data is defined as inaccuracy in the estimation of a univariate distribution (assumed known to be fit). Their algorithm works by adaptively picking a lower bound corresponding to a function of the percentile of the estimated univariate distribution (with current parameters). 
Then given a decision threshold classification algorithm, a threshold is fitted to determine if samples / agents are accepted, where violation of the threshold can occur wrt to the lower bound calculated (with low probability). Data is then selected for: (1) retraining the classifier; and (2) updating the parameters of the distribution.The paper presents properties of a purely exploitation and purely exploitation algorithms, and further shows that their algorithm has a favorable properties in a mix. This leads to convergence of estimated parameters. Regret analysis is done on the classifiers learnt; and the analysis of how a fairness constraint can effect the speed of convergence is done. Experiments are presented over synthetic and real world datasets.\n\n---\n\n*Update*\n\nI have updated my score to a 7. Similar to other reviewers, I would also highlight that additional clarity in the paper would improve the paper greatly.\n\nOne specific point on this is to make the types of fairness less ambiguous. In the discussion and in the author responses, language has been used for different types of fairness which is confusing. There are two (broad) types of (un)fairness being highlighted: (1) from data; versus (2) from prediction. The authors are focusing on the former. However, the language used to describe both is confusing. For instance, when describing prediction fairness, terms like \"social bias/model bias/unfairness\" are used, but data fairness is also \"social\" and \"unfairness\" is too broad of a term. In my opinion, this should simply be stated as \"prediction fairness\" or something similar. On the other hand, data fairness is labelled \"statistical bias/data bias\". Here, the \"statistical bias\" term could refer to predictive fairness as constraints for this are \"statistical\" in nature. Simply put, the language here needs to be made specific, especially so in fairness where there are many \"flavors\". Strength\n - Experiments show recovery of shifted parameters.\n - Theoretic results provide convergence and regret analysis.\n\nWeaknesses\n - Writing can be improved. There are aspects of the paper which are very unclear. (enumerated below).\n - Figures in the main text are confusing / missing information.\n - Algorithm / Pseudo-code are not clear. Questions / Comments / Suggestions\n1. Algorithm 1 is not especially clear in the psuedo-code (/ main-text description). In particular, it is unclear that \"new data\" is used to retrain the classifier (which I believe is the case from what I have seen via code / appendix). Furthermore, even in the appendix, algorithm 1, it does not specify what data the retraining is being done with. Also it seems to be incorrect? The way it is currently written seems to suggest that it should be looping over $ t < T $ (alongside the currently loop(?) over samples / batches).\n2. In the main-text / proofs / definitions, \"wlog\" statements are given for what the parameters are specified as (mean, percentile). Following the logic of the paper it appears that if we can prove, \\ie, Theorem 2, for a single type of statistic(?) of the distributions, then it will work for any arbitrary statistic / parameter. If true, I assume this is from assumption 1? It would help to clarify this step.\n3. Furthermore, with this assumption, it appears that the exploration and exploitation baselines also depend on how the parameters are specified. This is not clear. In particular, Theorem 1 & 2 are wrt the distribution parameters. 
In Theorem 2 the mean is specified, but what about Theorem 1? Is it the median, as discussed on line 181?\n4. Theorem 4, the regret bound, has many undefined symbols in the main-text. \n5. The figures in the main-text are confusing. They state some algorithms \"successfully debiases data\". However, from the information given in the main-text, there is no way to verify the claim. For instance, for Fig 1, (a) + (b), how can we tell that the mean estimate is correct when we don't know the true value? The corresponding figures in the appendix do have this information in their titles.\n6. Statistical data bias being specified as the error in parameters is confusing. I initially thought this was a criterion such as the statistical parity or representation rate of the data.\n\nMinor\n - Line 213 \"overestiamted\"\n - Appendix Algorithm 1 does not initialize $t$ The primary limitations are stated." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2 ]
[ "e24BQGMkbW6", "OEgGmfg5xOk", "IG3Te1dLQkh", "mbnPRwN00w", "OEgGmfg5xOk", "nips_2022_Fm7Dt3lC_s2", "IG3Te1dLQkh", "Vl7BPryxrym", "96Gf2T50Ac_", "mbnPRwN00w", "vNO0mhMNEoH", "zbJMctwAQD", "-CHKx5guvYk", "OEgGmfg5xOk", "nips_2022_Fm7Dt3lC_s2", "nips_2022_Fm7Dt3lC_s2", "nips_2022_Fm7Dt3lC_s2" ]
nips_2022_bfz-jhJ8wn
Bridging the Gap Between Vision Transformers and Convolutional Neural Networks on Small Datasets
There still remains an extreme performance gap between Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs) when training from scratch on small datasets, which is attributed to the lack of inductive bias. In this paper, we further consider this problem and point out two weaknesses of ViTs in inductive biases, namely spatial relevance and diverse channel representation. First, on the spatial aspect, objects are locally compact and relevant; thus, fine-grained features need to be extracted from a token and its neighbors. However, the lack of data hinders ViTs from attending to the spatial relevance. Second, on the channel aspect, representation exhibits diversity across different channels. But scarce data does not enable ViTs to learn representations strong enough for accurate recognition. To this end, we propose the Dynamic Hybrid Vision Transformer (DHVT) as a solution to enhance the two inductive biases. On the spatial aspect, we adopt a hybrid structure in which convolution is integrated into the patch embedding and multi-layer perceptron modules, forcing the model to capture token features as well as their neighboring features. On the channel aspect, we introduce a dynamic feature aggregation module in the MLP and a brand new "head token" design in the multi-head self-attention module to help re-calibrate the channel representation and make different channel group representations interact with each other. The fusion of weak channel representations forms a representation strong enough for classification. With this design, we successfully eliminate the performance gap between CNNs and ViTs, and our DHVT achieves a series of state-of-the-art results with a lightweight model: 85.68% on CIFAR-100 with 22.8M parameters and 82.3% on ImageNet-1K with 24.0M parameters. Code is available at https://github.com/ArieSeirack/DHVT.
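The "head token" idea above can be sketched as follows; this is a minimal PyTorch illustration, and the pooling and projection choices are assumptions rather than the authors' exact design:

```python
# Each head token summarizes one channel group of all patch tokens and is
# appended to the sequence, so the groups can interact inside self-attention.
import torch
import torch.nn as nn

class HeadTokens(nn.Module):
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.proj = nn.Linear(self.head_dim, dim)  # lift each channel group back to full width

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, C)
        b, n, c = x.shape
        groups = x.mean(dim=1).reshape(b, self.num_heads, self.head_dim)  # pool, then split channels
        head_tokens = self.proj(groups)                   # (B, num_heads, C)
        return torch.cat([x, head_tokens], dim=1)         # append; drop them again after MHSA
```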
Accept
Authors introduce 3 modifications to ViT architecture to introduce additional inductive biases to improve performance in low-data scenarios: - SOPE: Sequential Overlapping Patch Embedding -- essentially convolutions before partitioning the image into patches. - DAFF: Dynamic Aggregation Feed Forward -- a DWCONV operation is applied to tokens after a FC layer increases the channel dimension. The new tokens are average pooled, input to additional FC layers, and then are used to scale the CLS token. - HI-MHSA: Head-Interacted Multi-Head Self-Attention -- this approach's name is confusing. This does not change the heads in MHSA. Rather a new mechanism is introduced prior to MHSA to introduce new tokens where each new token is derived from a different partition of the original channel dimensions. AC recommends authors use a different name. For example, "Intra-Channel Modeling (ICM) MHSA" or something would be more clear. Performance is evaluated on CIFAR-100, DomainNet subsets, and ImageNet 1K Pros: - [R/AC] The topic is important to the community. - [R/AC] The paper is well written and clear. - [R/AC] The authors present improved performance versus other recent SOTA hybrid model designs (during rebuttal phase, though missing from original work -- should be added to paper). Cons: - [R/AC] The evaluation could be significantly improved. For example, more training experiments on undersampled version of more datasets, with comparisons to other SOTA methods. - [R/AC] The design is complicated and the motivation isn't always clear. - [R] Novelty of the components implemented is low. - [R] Concerns over use of BN as opposed to LN. Authors have provided ablation experiments to demonstrate that BN improves performance of their model over LN. These ablations should be included in the manuscript. - [R/AC] Concerns over lack of comparison to other SOTA methods that mix convolutions with transformers, such as CvT. Authors have provided additional experiment tables that compare against CvT. These tables should be included in the manuscript in a consistent manner (showing number of parameters and FLOPS). - [R/AC] Authors do not include FLOPS in their experiment tables. Please ensure all tables report number of parameters and FLOPS for all models explored. There are python packages to help with computing this, such as "flopth". - [AC] Some spelling and grammatical mistakes. Please spell check the manuscript. Overall Recommendation: Reviews lean toward acceptance, but marginally so. Given that the authors have provided more comparisons against recent relevant SOTA methods, and that the reviewers (including expert in the field) lean toward accept, the AC opinion is that this manuscript can be published and provides some valuable knowledge to the community. There are ways in which the paper can still be improved before publication, such as inclusion of additional evaluation datasets. AC Rating: Borderline Accept
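The DAFF description above maps naturally to a short PyTorch sketch; layer sizes, the SE-style squeeze ratio, and the exact wiring are illustrative assumptions, not the authors' verified configuration:

```python
import torch
import torch.nn as nn

class DAFF(nn.Module):
    def __init__(self, dim: int, hidden: int, se_ratio: int = 4):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)          # FC layer that increases the channel dim
        self.dw = nn.Conv2d(hidden, hidden, 3, padding=1, groups=hidden)  # DWCONV
        self.bn = nn.BatchNorm2d(hidden)           # BN for the conv branch (cf. the BN/LN thread)
        self.fc2 = nn.Linear(hidden, dim)
        self.act = nn.GELU()
        self.se = nn.Sequential(                   # pooled tokens -> FC layers -> CLS scale
            nn.Linear(hidden, hidden // se_ratio), nn.GELU(),
            nn.Linear(hidden // se_ratio, dim), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, 1 + H*W, C), CLS first
        cls, patches = x[:, :1], x[:, 1:]
        b, n, _ = patches.shape
        h = int(n ** 0.5)                          # assumes a square token grid
        t = self.act(self.fc1(patches))
        grid = t.transpose(1, 2).reshape(b, -1, h, h)
        t = t + self.bn(self.dw(grid)).flatten(2).transpose(1, 2)  # shortcut alongside DWCONV
        cls = cls * self.se(t.mean(dim=1)).unsqueeze(1)            # re-scale the CLS token
        return torch.cat([cls, self.fc2(t)], dim=1)
```

A quick shape check: `DAFF(dim=192, hidden=768)(torch.randn(2, 65, 192))` returns a `(2, 65, 192)` tensor (64 patch tokens plus one CLS token).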
train
[ "PTkmGGoa0u_", "7wQj1CTbS8", "YT6q5kbjSZ_", "I9Do6wg2Rmq", "r3rGXFyGIxN", "XS3fjlnIxiF", "8D2Y2FvynkA", "C46X7i_YBN-", "54Ja5l-bqVU", "GZ0PHiVwk6g", "nRfto7_786O", "Sm1qKnZoEV7", "H2wb8kssSvO", "wRajdpybMhy", "dmjP5qTAkyC", "bbto0VIQNS6", "QZn-rCzAHf", "2UdiSFcHU8t", "h0LGMFzb4Wd", "99wi6Lnfb6", "claapWcVb-e", "bY4zzDmf_WJ", "7c0-so-dySO" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for your time, your detailed and insightful comments and kindess! It is a good trip of us these days and your suggestions greatly improve our work, making it more solid. Best wishes.", " Thanks for your time and comments again. Your suggestions and insights help us rethink our work and make it more solid. Here in this post-rebuttal response, we provide further explanations and we hope this time our responese is convincing enough.\n\n**Q1: Further on LN and BN.** Thanks for your further concern on BN and LN. As we have shown in the first-round response, using BN can reach higher accuracy while it is more sensitive to batch size. From our point of view, the vanilla vision transformer adopts LN before MHSA and MLP, aiming at regularizing each token on channel dimension. This is important because MHSA use dot-product operation, and LN helps control the value of query and key, avoiding extreme values. While in our work, LN is also adopted at the same place, before MHSA and MLP. **Further, we use depth-wise convolution operation, and its output should be regularized in terms of spatial dimension.** When we replace BN with LN, the feature will be reshape into sequence style, which may ignore the spatial relations. **So it is more suitable to use BN for depth-wise convolution for higher recognition accuracy**. If the computational resource is limited and researchers could not search for the best choice of batch size, we think some special BN variants such as Cross-Iteration Batch Normalization [1] can be adopted to reduce the influence brought by small batch size. And there could still hides room for improvement from BN with further computational resource.\n\nWe will release all the code, including training and the models, as soon as our work is accepted. So the fairness of all our experiments and the effectiveness of our method are ensured.\n\n[1] Yao Z, Cao Y, Zheng S, et al. Cross-iteration batch normalization[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 12331-12340.\n\n\n**Q2: Training on 224 resolution.** Following your post-response in this discussion period, we implement our method on 224 resolution on CIFAR-100, ClipArt, Painting and Sketch. Note that we keep the same training scheme of paper [54], where the training epoch is set to 100 and the patch size is 16. The baseline models are cited from [54], training with its proposed method. The results are as follows. \n\n|Method|Epoch|Resolution|Patch Size|CIFAR-100|ClipArt|Painting|Sketch|\n|:-------:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|\n|ResNet50| 100 |224|1|72.94|63.93|53.52|59.62|\n|SwinT| 100 |224|16|66.23|47.47|41.86|38.55|\n|CvT-13|100|224|16|74.51|60.64|55.26|57.56|\n|T2T-ViT|100|224|16|68.03|52.36|42.78|51.95|\n|DHVT-T(Ours)|100|224|16|74.78|58.94|52.64|56.66|\n|DHVT-S(Ours)|100|224|16|**78.64**|**64.75**|**56.42**|**61.35**|", " **Q2. Experiment Comparisons.** We are sorry that it is our fault that the results are not presented in a consistent way. We will make it more clear in the future version. The reasons are as follows.\n\n**First,** the page limitation does not enable us to present all the results in the same way, and have to show the detailed comparison in the Supplementary Materials. Because CIFAR-100 is the main target to evaluate our method, so we present the whole comparison in the main paper. 
\n\n**Second, when compared with baselines on CIFAR-100**, we do not show the computational complexity of the baselines, and we only put the measurement of our method in the Supplementary Materials. **The results from the CNN baselines also do not show their computational complexity**, as in Res2Net [2] Table 4, DenseNet [3] Table 2, and SKNet [4] Table 5, and their model variants are also not provided in the code. Res2NeXt-29, 6c×24w×6s-SE is more like a model variant designed specifically for CIFAR-100. **So, for consistency with the comparison format of the baseline works, we just show the number of parameters to measure the model complexity.** When compared with ViT baselines on CIFAR-100, we report the patch size, which is an important parameter in ViTs. The patch size choice also influences the performance of our method. When it comes to epochs, CCT [5] trains for different numbers of epochs. It reports results with training epochs of 300, 500 and 1500, and we have to show which one is chosen as our baseline. **And for fair comparison, the number of epochs in our work on CIFAR-100 is set to 300, which is the same as the other ViT and CNN baselines except for WideResNet with 200 epochs.** So, considering the factors above, in the table of CIFAR-100 results, we show the number of parameters, epochs and patch size.\n\n**Third, in the comparisons on DomainNet,** previous works except paper [1] do not implement their models on the DomainNet datasets. So we implement ResNet50 in the main paper and ResNeXt50-32x4d in the rebuttal period as the baselines on DomainNet. These two networks are common model variant choices. Res2NeXt-29, 6c×24w×6s-SE is not a common choice even in its own paper, so we do not implement it on the DomainNet datasets. And we report the number of parameters as the model complexity. Note that the patch size in our work and CvT, which is reported in our first-round response to *Reviewer 4fHM*, is set to 16, and the number of epochs for all models is 300. So we did not show the patch size and epochs in Table 3, and we show the experimental setup in detail in the text.\n\n**Fourth, the format of the comparison on ImageNet-1K is consistent with previous ViT works**, most of which report the number of parameters, FLOPs and training resolution. In this part we mainly compare with ViTs, so we just cite the results of the two latest CNNs, RegNet and ConvNeXt. \n\nThanks again for pointing out our shortcomings; we will follow your suggestion to update the comparison tables in the future version.\n\n[1] Liu Y, Sangineto E, Bi W, et al. Efficient training of visual transformers with small datasets[J]. Advances in Neural Information Processing Systems, 2021, 34: 23818-23830.\n\n[2] Gao S H, Cheng M M, Zhao K, et al. Res2net: A new multi-scale backbone architecture[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 43(2): 652-662.\n\n[3] Huang G, Liu Z, Van Der Maaten L, et al. Densely connected convolutional networks[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017: 4700-4708.\n\n[4] Li X, Wang W, Hu X, et al. Selective kernel networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 510-519.\n\n[5] Hassani A, Walton S, Shah N, et al. Escaping the big data paradigm with compact transformers[J]. arXiv preprint arXiv:2104.05704, 2021.", " Thank you for your responses and your time again. We aim to reply to you in detail and sincerely, and so we have to use multiple responses. 
We hope these two detailed post-responses can address your concerns. \n\n**Q1: Novelties.** As you pointed out, spatial relevance and diverse channel representation are two common directions for promoting Vision Transformers. However, previous works only achieve data-efficiency on the medium-scale dataset ImageNet-1K. **And not all the modifications made by previous works can truly bridge the performance gap between CNNs and ViTs on small datasets like CIFAR-100.** As you can see in Table 4 of paper [1], one of the papers we referred to, Swin Transformer, T2T-ViT and CvT still exhibit a performance gap or are merely comparable to ResNet-50. Though they successfully reach promising results on ImageNet-1K, they fail to compete with, or merely perform on par with, the most common CNN, ResNet-50. In our work, by contrast, the proposed DHVT is able to close the gap to common CNNs like ResNet and ResNeXt, and we can even defeat much stronger CNNs like SKNet, DenseNet and Res2Net. **Our method exhibits superior performance on a wide range of datasets**, CIFAR-100, DomainNet and ImageNet-1K, demonstrating the generalization of our method. **Our modification is more effective at solving the intrinsic problem of ViTs, so the good generalization capacity is not surprising.**\n\nAnd as we pointed out in the first-round response to you in Q2 and the first-round response to *Reviewer b3Vm* in Q5, the highlighted Head Token is a brand new design. It brings a new insight: **enhancing channel representation from the view of groups rather than independent channels.** Figure 7 in our Supplementary Materials reveals the feature interaction pattern in our work. We further raised the question of whether such a characteristic is general in other Vision Transformers. We hope this will inspire more future research.", " I have read through the authors' responses and read other reviewers' comments as well.\n\nI appreciate the detailed responses from the authors. They do explain some details, while some of my major concerns still remain:\n\n- As shared with other reviewers, the novelty is limited. Although the authors point out the detailed differences, the general ideas of spatial relevance and diverse channel representation are quite common. For the claimed three contributions, the first one mentions the overall solution containing the two components, and the 2nd and 3rd ones basically detail the two components' contributions. Although I agree there are some deltas or new tricks, I didn't see a significant one.\n\n- Although the authors provide more comparisons for Tables 3 and 4, the results are not presented in a consistent way. Table 2 includes Patch Size and Epochs columns with a lot of comparison methods, while Tables 3 and 4 only contain Params and Accuracy, and Table 4 also has GFLOPs information. It is hard to clearly see the accuracy-parameter-complexity tradeoff and the comparisons with SOTA across multiple datasets. Moreover, we can see that Res2NeXt-29, 6c×24w×6s-SE is the most competitive one for CIFAR-100, while its results are not reported in Tables 3 & 4.\n\nA minor issue: Although it is ok to use multiple responses at one time to address one reviewer's comments in detail, it might not be fair to others.", " Thank you again for your detailed comments; the questions you raised are full of insight and have helped us rethink and improve our work a lot.\n\nNevertheless, we are not sure whether our response addresses your concerns properly. 
Sincerely, if you have further questions, or if our response on some points is not precise, please let us know.\n\nWe are looking forward to your further reply, and thanks for your time and help. ", " I thank the authors for the very detailed rebuttal. I appreciated it. However, I am still not convinced by BN, which depends on the batch size. Moreover, even if [54] uses another resolution, I think it is crucial to compare the two approaches at both resolutions (original vs. 224).", " Thanks for the update. I increase my vote to 5.\n", " We thank you for your detailed comments; the suggestions you raised help us a lot to improve this work.\n\nYet, it is a bit of a pity that we could not further improve our standing in the response. Sincerely, could you please let us know whether there is any question we have not addressed properly? We humbly seek your further advice to improve this work and cherish the possible opportunity during this discussion period. \n\nThanks again for your time and help.", " I would like to thank the authors for their detailed responses. Most of my concerns are addressed, and I believe this is an interesting touch towards promoting ViT performance on small-scale datasets. In general, I believe the pros outweigh the cons, and I would keep my score unchanged.", " Thank you for your further comments! We add more results in this response, and we hope this time we can address all your concerns.\n\n**Q1. Updated Version.** We are sorry that we did not update the version before. Now the main paper is updated, following all the suggestions from all the reviewers. Thank you again for the suggestions.\n\n**Q2. The shortcut alongside.** As is also mentioned by Reviewer 4fHM, CMT [1] (which we missed before) and Shunted Transformer [2] (which we have cited) have adopted such a shortcut alongside. These two works were published at CVPR 2022 in March. Meanwhile, we have also conducted plenty of experiments about this shortcut, evaluating its effectiveness on small datasets. So, in the final structure of our method, we adopt the shortcut. The difference between our DAFF and the FFN in CMT and Shunted Transformer is that we leverage the class token and examine whether the shortcut alongside DWCONV can also help in this circumstance. Our DAFF thus shows the effectiveness of the shortcut alongside from another angle. **So, combining our work with CMT and Shunted Transformer, we can conclude that the shortcut alongside is a generally useful trick, both on small datasets like CIFAR-100 and larger datasets like ImageNet-1K, no matter whether the class token is adopted or not.**\n\n[1] Guo J, Han K, Wu H, et al. Cmt: Convolutional neural networks meet vision transformers[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 12175-12185.\n\n[2] Ren S, Zhou D, He S, et al. Shunted Self-Attention via Multi-Scale Token Aggregation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 10853-10862.\n\n**Q3. Compared with CoaT and DaViT.** Thank you for the suggestion. CoaT and DaViT investigate channel-wise representation by conducting self-attention on the channel dimension, while we enhance channel-wise representation differently: by dynamically aggregating features from patch tokens to enhance the class token channel-wise, and by involving channel group-wise head tokens in vanilla self-attention. We have updated the main paper, and the discussion is placed at L135-L138 now. \n\n**Q4. 
**Q4. Computational Complexity of baselines.** We measure the computational complexity on CIFAR-100 under the same conditions as in Section 2.2 of our Supplementary Materials. For a detailed comparison, we report two digits after the decimal point. Note that we use 4 heads and 8 heads in our DHVT-T and DHVT-S; for a fair comparison, the baselines ViT-Tiny (DeiT-Tiny) and ViT-S (DeiT-S) also adopt 4 and 8 heads respectively. We further report the results of the original ViT with different patch sizes on CIFAR-100. During the rebuttal period, we also implemented CvT-13 on CIFAR-100 with patch sizes of 2 and 4. All the results are summarized below. Compared with vanilla ViT and CvT, our method reaches much higher performance while the increase in computational complexity is reasonable. \n\nFrom Rows 1 and 3, it is not surprising that using a smaller patch size decreases the performance of vanilla ViT, since ViT struggles to learn spatial relevance with insufficient training data. A patch size of 4 means that the nearby 4x4 pixels are fused into one token, so each token retains part of the neighboring information. **Decreasing the patch size to 2 further intensifies the non-overlapping problem** and thus decreases the performance of the original ViT. **When scaling up**, from Row 1 to 2, the performance of vanilla ViT also drops, demonstrating that a larger number of parameters is hard to fit well with insufficient training data. Compared with ViT-S with patch size 2 in Row 4, ViT-T with patch size 4 in Row 1 is superior in number of parameters, computational complexity and performance.\n\nIn our DHVT, by contrast, Rows 7 to 10 show that a smaller patch size and scaling up consistently bring higher performance. A smaller patch size helps to better capture local patterns, so performance increases. The scalability of our DHVT is also promising, because the channel representation is enhanced and the parameters are easier to fit.\n\n\n|Row| Method |Img Size| Patch Size |#Params| GFLOPs | ACC |\n|:--:|:--:|:--:|:--:|:--:|:--:|:--:|\n|1| ViT-T | 32 | 4 |5.38M |0.36| 67.59|\n|2| ViT-S | 32 | 4 |21.38M| 1.42 | 62.05 |\n|3| ViT-T | 32 | 2 |5.41M| 1.68 | 65.86 |\n|4| ViT-S | 32 | 2 |21.44M| 6.08 | 66.04 |\n|5| CvT-13 | 32 | 4 |19.59M| 1.11 | 79.24|\n|6| CvT-13 | 32 | 2 |19.59M| 4.53 | 81.81 |\n|7| DHVT-T | 32 | 4 |6.01M| 0.39 | 80.93 |\n|8| DHVT-S | 32 | 4 |23.43M| 1.54 | 82.91 |\n|9| DHVT-T | 32 | 2 |5.84M| 1.74 | 83.54 |\n|10| DHVT-S | 32 | 2 |22.77M| 6.26 | 85.68 |", " Thanks for the clarification!\n\nMy concerns are partially addressed. I still need clarification on some points in order to adjust my final recommendation. The remaining questions are as follows:\n\n- "We add a shortcut alongside the DWCONV, resulting in a data-specific representation of the input data." and "Though previous works like CoaT and DaViT try to enhance spatial-wise and channel-wise representation simultaneously, we enhance channel-wise representation in a different way."\n\n - As far as I know, the shortcut alongside the DWCONV is also proposed by previous works, like CoaT and DaViT. I highly suggest you state the differences between the two works and yours more clearly in your manuscript.\n\n- "To make it clearer, we will add a more detailed description in the caption and main part of the figure in the modified version."\n\n - There seems to be no update in the captions so far.\n\n- A new question. 
\n - What is the computational complexity or the inference time compared with baselines, e.g., the original ViT? \n", " Owing to the character limit of a single response and our wish to answer your questions in detail, we have to add another reply!\n\n**Q3. Experiment Settings and More Results.** Thank you for your great suggestion! We implement **ResNeXt50-32x4d**, which has **23.7M** parameters on DomainNet and is comparable with our proposed DHVT-S. As also suggested by Reviewer u2nu, we implement **CvT-13** as well. However, **because of the limited computation resources and tight rebuttal schedule, we could only train them on some of the datasets**. The results are as follows and show that, even compared with ResNeXt, our method is still superior, demonstrating its effectiveness. The comparison of ResNet50, ResNeXt50-32x4d, CvT-13 and our DHVT is summarized in the following table.\n\n|Method| #params |ClipArt|Painting|Sketch|Infograph|Real|Quickdraw|\n|:-------:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|\n|ResNet50| 24.2M |71.90|64.36|67.45|32.40|81.51|74.19|\n|ResNeXt50-32x4d| 23.7M |72.93|64.37|68.52|-|-|-|\n|CvT-13|19.7M|69.77|61.57|66.16|-|-|-|\n|DHVT-T|6.1M|71.73|63.34|66.60|-|-|-|\n|DHVT-S|23.8M|**73.89**|**66.08**|**68.72**|**35.11**|**83.64**|**74.38**|\n\n\nRegarding the comparison with CNNs on CIFAR-100, the models are not trained under the same data augmentation. Some of the augmentations are suitable for ViTs but not for CNNs, which may hinder the performance of CNNs, so we cite the highest results reported in the corresponding papers. We also tried to re-implement ResNeXt50-32x4d on CIFAR-100 under the same augmentation and training method, but it only achieves 60.20 accuracy. We hypothesize that some data augmentations are not suitable for CNNs with such low-resolution 32x32 input. \n\nHere we trained 5 more runs with different random seeds. The following are the results of all 10 runs for our DHVT-S with patch size of 4 on CIFAR-100: {82.91, 82.77, 82.82, 82.86, 82.88, 82.94, 82.75, 83.02, 82.79, 83.00}, and the average result is 82.87±0.08.\n\nWe are trying our best to fill in the comparison and promise to report it in the final version.\n\n**Q4. Incorrect writings.** We are sorry that we made a mistake on the training epochs of WideResNet! We will fix it in the future version, and we will also correct the other writing mistakes you pointed out.\n\n**Q5. FLOPs-Accuracy Trade-off.** We admit that this is a limitation of our proposed method: it performs extensive computation with a small number of parameters, and the dynamic operations are responsible for the high computational burden. However, this can be simplified. Note that Head Tokens receive more attention in the input, middle and final layers rather than in the shallow layers, as can be seen from Fig. 7 in the Supplementary Material. We hypothesize that Head Tokens are more useful when processing high-level semantic features, so applying Head Tokens only in the deeper layers of the backbone may not hinder performance too much. Under this design, the computational cost can be reduced.", " Thank you for your comments! We hope our comprehensive answers can address your concerns.\n\n**Q1. Influence of inductive biases for small datasets.** Thank you for your suggestion. We can provide a qualitative result for spatial relevance. The baseline DeiT-T result is 67.59 on CIFAR-100, and after splitting the class token in the FFN, as we pointed out in our reply to _Reviewer SHR5_ in Q1, the baseline comes to **68.76**.
Now, under this setting, if we simply introduce an average pooling layer with stride 1 and a 3x3 window, which has no parameters, the performance rises to **70.48 (+1.72).** Simply aggregating neighboring features helps, which means that paying more attention to modeling better spatial relevance is of great importance. Due to their special structure and flexibility, ViTs struggle to learn even simple averaging over neighbors on their own. We need to impose some spatial relevance to help them train from scratch on small datasets.\n\nNote that **the lack of inductive biases is the intrinsic problem of ViTs, and it is amplified on small datasets.** The amount of training data determines whether ViTs can develop good feature extraction and representation ability by themselves. We start from the problem of ViTs themselves, provide our modification, and evaluate its effectiveness on datasets of various sizes. Previous works have investigated this deeply on larger datasets like ImageNet-1K, while in our paper we focus on much smaller datasets. The final goal of our paper is to improve ViTs so that they perform competitively with, or even better than, CNNs. That's why the title is “Bridging the Gap between CNNs and ViTs”.\n\nWhen training from scratch on a large enough dataset such as ImageNet-22K, the training data suffices for ViTs to learn spatial relevance and good channel representation. When it comes to ImageNet-1K, the intrinsic problem of ViTs gets amplified: the spatial relevance cannot be learned well automatically, and the channel representation may not be good enough. On the much smaller CIFAR-100, it is quite a challenge to develop accurate recognition capability on such scarce data: the spatial relevance is missing, even though the input resolution is small, and the channel representation is worse because of insufficient training data. In conclusion, as the training dataset becomes smaller, the intrinsic problem of lacking inductive biases in ViTs becomes more apparent, so methods that impose strict inductive biases on ViTs are quite essential for training from scratch on small datasets.\n\n**Q2. Comparison with previous works.** We propose three modules that introduce the inductive biases of spatial relevance and diverse channel representation. As you pointed out, convolutional patch embedding is widely used in current work, and we also adopt it as a simple operation to model spatial relevance; more complex operations, like the local window attention in VOLO, are left for future research, as you noted. We further introduce two affine transformations into it, enabling more stable training. You can also refer to our reply to _Reviewer SHR5_ in Q4.\n\nThe DAFF is a combination of an FFN integrated with depth-wise convolution and a dynamic aggregation module for the class token. Splitting the class token away from the patch tokens and passing it identically through the FFN is useful and meaningful. You can refer to our reply to _Reviewer SHR5_ in Q1 for the explanation and experimental results. The dynamic aggregation module collects features from the patch tokens and re-calibrates the class token channel-wise in an SE style. Thanks for your comments: we had missed the work CMT, and we will add it to the references in the future version. CMT also uses DWCONV with a shortcut inside; however, it does not use a class token, whereas here we additionally need to consider re-calibrating the class token.
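To make this concrete, here is a minimal PyTorch-style sketch of the SE-style class-token re-calibration we keep referring to. This is a simplified illustration for the discussion only: the module and variable names are ours, and details such as the reduction ratio are assumptions rather than the exact implementation in the paper.

```python
import torch
import torch.nn as nn

class ClassTokenRecalibration(nn.Module):
    """SE-style gating of the class token, driven by pooled patch-token features."""
    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(dim, dim // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(dim // reduction, dim),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 1 + N, C) -- class token followed by N patch tokens
        cls_tok, patches = x[:, :1], x[:, 1:]
        pooled = patches.mean(dim=1, keepdim=True)   # aggregate patch-token features: (B, 1, C)
        cls_tok = cls_tok * self.gate(pooled)        # channel-wise gating of the class token only
        return torch.cat([cls_tok, patches], dim=1)
```

The point the sketch makes explicit is that only the class token is gated while the patch tokens pass through unchanged, which is exactly the design choice supported by the ablation discussed in the next reply (gating all tokens hurts performance).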
Also note that, during this rebuttal, we provide a comprehensive ablation study on the choice of BatchNorm vs. LayerNorm and their relation to batch size. You can refer to our reply to _Reviewer b3Vm_ in Q6.\n\nVOLO uses a fine-grained patch embedding module with attention in a local window, but it is sophisticated and brings a huge computational burden; we apply a convolutional patch embedding instead, for simplicity. T2T-ViT, which applies SE right after multi-head self-attention, is quite different from our method, although its function is somewhat similar to our Head-Interacted Multi-head Self-Attention, which enables different heads to interact with each other for better channel representation. The SE in T2T-ViT is a channel-wise post-processing that re-calibrates each channel independently. In contrast, **our Head Token is group-wise, dividing channels into several groups for simultaneous processing, which is more compatible with the multi-head mechanism**.", " Thank you for your comments! We hope our comprehensive answers can address your concerns.\n\n**Q1. Re-calibrating only the class token.** This is a good point. The feed-forward network (FFN) is responsible for non-linear projection and for re-calibrating the tokens themselves. We introduce depth-wise convolution (DWCONV) into the FFN to gather neighboring features inside the FFN. Note that DWCONV can only operate on the patch tokens, so the class token is passed through unchanged. Since we also want to re-calibrate the class token using information from the patch tokens, we gather the features of all patch tokens after their projection and re-calibrate the class token in an SE style. We also find that **splitting the class token away from the patch tokens is helpful**, and we list the ablation study as follows.\n\n(1) The **baseline result** of ViT trained from scratch on CIFAR-100 is **67.59**.\n\n(2) On the original ViT, if we split the class token away from the patch tokens and pass it identically through the FFN, the result becomes **68.76 (+1.17)**.\n\n(3) Under (2), if we re-calibrate the class token using the method in our paper, the result becomes **69.34 (+1.75).**\n\nFrom (1)-(3), we conclude that **the class token should NOT be projected and re-calibrated in the same way as the patch tokens**. We argue that **the class token is responsible for collecting information, and its representation differs from that of the patch tokens**. If the class token and the patch tokens are processed together by the FFN, the model is confused, because the two kinds of tokens are intrinsically quite different.\n\nAs you mentioned, SE is also applied in LocalViT, so we provide our discussion here. In LocalViT, the SE operation is used right after h-swish, in between the two linear layers and right after DWCONV, so we argue that it serves to some extent as part of the activation function. In our paper, by contrast, we use an SE operation to gather information from the patch tokens and re-calibrate the class token; the intuition is quite different. We also conducted an experiment showing that **if we apply SE to re-calibrate all the tokens instead of only the class token, the performance drops from 80.98 to 78.34**, a large decrease.\n\n**Q2. Head Token vs. Simple MLP.** There are two benefits. **First,** compared with a simple MLP, which stays fixed after training, using Head Tokens to enhance interaction among channels is more dynamic and data-specific; a sketch of the head-token computation is given below.
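The sketch below shows one way to realize the head-token computation we describe; it is our illustration only. In particular, the linear layer that lifts a d-dimensional group summary back to a full C-dimensional token is an assumption, not necessarily the exact choice in the paper.

```python
import torch
import torch.nn as nn

class HeadTokens(nn.Module):
    """Summarize each channel group (head) of a token sequence into one extra token."""
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.proj = nn.Linear(dim // num_heads, dim)  # assumed lifting of a group summary to token width

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        B, N, C = tokens.shape
        d = C // self.num_heads
        # (B, N, H, d) -> mean over the sequence: one d-dim summary per channel group
        groups = tokens.reshape(B, N, self.num_heads, d).mean(dim=1)   # (B, H, d)
        head_tokens = self.proj(groups)                                # (B, H, C)
        # vanilla MHSA is then run on the longer sequence, so heads can interact
        return torch.cat([tokens, head_tokens], dim=1)
```

Because the head tokens join the sequence that self-attention runs over, the interaction between channel groups is computed from the current input, which is what we mean by "dynamic and data-specific" in contrast to a fixed MLP.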
Owing to the scarce training data of small datasets, a more comprehensive and data-specific method can make fuller use of the data. **Second,** a simple MLP is point-wise on each channel: it only projects each channel separately, rather than considering a group of channels as a whole. In our design, the Head Token is compatible with the multi-head mechanism, which computes relationships using a group of channels from the input data. Previous works did not consider channels from the group perspective, and we are the first to provide this kind of idea.\n\n**Q3. Insufficient Baselines.** Previous CNN works did not train their models on DomainNet, so in our main paper we implement ResNet50 as the baseline. For ImageNet-1K, the page count is limited, so we had to put the detailed comparison in the supplementary materials. In the rebuttal period, we implement **ResNeXt50-32x4d** and **CvT-13** as additional baselines on DomainNet; please refer to our reply to *Reviewer 4fHM* in Q3 for comprehensive results. For the comparisons on ImageNet-1K, please refer to Table 4 in the Supplementary Materials, where we provide ample comparison.\n\n**Q4. Spatial Relevance.** Our SOPE and DAFF provide inductive biases on the spatial aspect. They are modified from previous works. As you point out, we follow [a] to conduct patch embedding in a sequential style, and it is indeed effective. Our modification is to introduce two affine transformations before and after the sequential convolution layers: the pre-affine transforms the original input, and the post-affine adjusts the feature after the convolution sequence. This method yields stable training results on CIFAR-100. If we remove this operation, the **average performance drops from 80.90 to 80.72**. On the vanilla ViT with SOPE, the performance is **73.68**, and removing the two affine transformations drops the accuracy to **73.42**.\n\nFor DAFF, we introduce a shortcut alongside the DWCONV. **If we remove the shortcut, the result drops from 80.98 to 80.14**, demonstrating the effectiveness of this dynamic operation. BatchNorm is also adopted after the convolutions. Thanks to the suggestions by _Reviewer b3Vm_ and _Reviewer u2nu_, we conducted comprehensive experiments showing that choosing BatchNorm instead of LayerNorm brings higher performance while being somewhat sensitive to batch size. You can refer to our reply to _Reviewer b3Vm_ in Q6 for the results.", " Owing to the character limit of a single response and our wish to answer your questions in detail, we have to add another reply!\n\n**Q6. DomainNet and baselines.** DomainNet has 6 different datasets, and we show results on 3 of them. The Quickdraw and Real datasets have training sizes of 120750 and 120906, and the number of classes is still 345, so the average number of images per class comes to about 350, which is much larger than in ClipArt, Painting and Sketch. This does not mean we did not evaluate our methods on these datasets; we already have a comparable target dataset, CIFAR-100, whose low resolution makes training even harder, so the results on Quickdraw and Real are not shown in the main paper.\n\nAs for the Infograph dataset, the images and the corresponding labels are quite odd. If you investigate the dataset in detail you will find, for example, many very long images containing only a tiny peanut that are labeled as Peanut! However, there are many other fruits and vegetables in those images, which greatly hinders model training.
**So we think this dataset has an intrinsic mismatch between images and labels, making it a very noisy dataset**. As you can see from paper [54], they also reported very poor results on the Infograph dataset, so we give up on this dataset.\n\n**To show that we did NOT purposely pick the good results**, in the rebuttal period we evaluate our method and ResNet50 on the remaining datasets Real, Infograph and Quickdraw. Our **DHVT-S** achieves **83.64, 35.11 and 74.38** on Real, Infograph and Quickdraw respectively, while **ResNet50** only achieves **81.51, 32.40 and 74.19**. This strongly supports our method. The whole comparison is summarized in our reply to *Reviewer 4fHM* in Q3.\n\n|Method|Infograph|Real|Quickdraw|\n|:-------:|:--:|:--:|:--:|\n|ResNet50| 32.40|81.51|74.19|\n|DHVT-S|**35.11**|**83.64**|**74.38**|\n\nRegarding the baseline changes across tables, the reason is as follows. We need enough previous work as reference, and the **main dataset** for evaluating our method is CIFAR-100. Fortunately, many previous CNN and ViT works conduct experiments on CIFAR-100, so we have enough baselines there. However, apart from paper [54], we did not find other works that train from scratch on the DomainNet datasets, and, as we said above, the different experimental settings do not allow us to directly cite the results from [54]. So in our paper we re-implement ResNet50 as the baseline, and we provide results of ResNeXt50-32x4d in the rebuttal period; you can refer to our reply to _Reviewer 4fHM_ in Q3. ImageNet-1K is widely investigated, so we have many baseline methods there and simply cite their results.\n\n**Q7. Solving things on the spatial dimension.** This is a good point. We think the spatial dimension is the basis of good performance: a good method on the spatial dimension means the model can correctly choose which positions to focus on. Only under correct spatial focus is the re-calibration of the channel representation meaningful and helpful. So the problem on the spatial dimension must be solved.\n\n**Q8. Non-hierarchical.** A non-hierarchical structure means that every encoder block shares the same parameter setting and processes the same shape of features, as in vanilla ViT, CeiT and LocalViT; such models do not down-sample and increase the channel dimension as the layers go deeper. A hierarchical structure, such as PVT, Swin Transformer and PiT, is similar in style to CNNs, with smaller resolution and more channels in deeper layers.\n\n**Q9. The convergence of CNNs vs. ours.** Here we show the convergence curve of max accuracy **every 20 epochs over the total 300 epochs**. The models are DHVT-S and ResNet50, trained from scratch on the ClipArt dataset. The trend that our method converges faster than ResNet is consistent across datasets.\n\nDHVT-S: {19.06, 44.27, 56.65, 62.74, 66.48, 68.10, 69.94, 71.17, 71.95, 72.58, 73.23, 73.56, 73.73, 73.85, 73.89}\n\nResNet-50: {8.46, 20.58, 35.89, 48.49, 56.80, 60.92, 63.99, 66.37, 68.36, 69.37, 70.53, 71.37, 71.62, 71.75, 71.90}\n\n**Q10. Incorrect writings.** Thanks for your detailed comments! We will fix them in the future version.\n\n**Q11. Why are Head Tokens beneficial?** This is a good question! Please refer to our reply to _Reviewer b3Vm_ in Q3, where we provide a comprehensive explanation.\n\n**Q12. On big datasets.** Please refer to our reply to _Reviewer b3Vm_ in Q4 and to *Reviewer 4fHM* in Q1.\n\n**Q13. Downstream tasks.** Thanks for the suggestion. However, due to the limited computation resources and tight rebuttal time, we could not provide results on downstream tasks; this is left for future work.
We have provided fine-tuning results in the Appendix; please refer to Tables 1 and 3 there.", " Thank you for your comments! We hope our comprehensive answers can address your concerns.\n\n**Q1. BatchNorm/LayerNorm/The influence of batch size?** Please refer to our reply to _Reviewer b3Vm_ in Q6; we provide very detailed experimental results there.\n\n**Q2. Comparison with CvT.** Following your suggestion, we re-implement CvT-13, which has 19.9M parameters. Owing to the limited computation resources and tight rebuttal time, we could only train CvT-13 on CIFAR-100, ClipArt, Painting and Sketch under the same experimental setup as ours. The results are as follows. Note that CvT-13 is a hierarchical structure with 3 stages. We keep the patch embedding stride at 1 in the 2nd and 3rd stages, which means these stages do not downsample; the patch size for CvT is therefore determined by the patch embedding stride of the 1st stage. If we keep the original patch size of 16 on CvT-13, where the patch embedding strides for the 3 stages are 4, 2, 2, training from scratch on CIFAR-100 only reaches 67.56, showing that a large patch size is not suitable for low-resolution input data.\n| Method | Resolution |Patch size |CIFAR-100 |\n|:--:|:--:|:--:|:--:|\n| CvT-13 | 32 |4 | 79.24 |\n| DHVT-T (Ours) |32 | 4 | 80.93 |\n| DHVT-S (Ours) |32| 4 | 82.91 |\n| CvT-13 |32| 2 | 81.81 |\n| DHVT-T (Ours) |32| 2 | 83.54 |\n| DHVT-S (Ours) |32| 2 | 85.68 |\n\nAnd for the **DomainNet datasets**, the results are:\n\n| Method | Resolution |Patch size | ClipArt | Painting | Sketch |\n|:--:|:--:|:--:|:--:|:--:|:--:|\n| CvT-13 | 224 |16 | 69.77 | 61.57 | 66.16 |\n| DHVT-T (Ours) | 224 |16 | 71.73 | 63.34 | 66.60 |\n| DHVT-S (Ours) | 224 |16 | 73.89 | 66.08 | 68.72 |\n\n\n**Q3. Compared with [54] / Change of resolution.** Though both works aim at training ViTs from scratch, **the experimental settings are quite different**. Note that the original resolution of the CIFAR-100 dataset is 32x32. Paper [54] trains for 100 epochs and **resizes CIFAR-100 from the original 32x32 resolution to 224x224**, while our setting trains for 300 epochs and **keeps the original 32x32 resolution**. There are two reasons. **First,** training for more epochs is closer to performance convergence, which is also reflected in [54] reporting 300-epoch results rather than just 100. **Second,** keeping the original resolution is consistent with the experimental setups of previous CNN and ViT works; our experimental setup is essentially the same as NesT [23].\n\nWhen comparing with CNNs on CIFAR-100, we simply cite the results reported in their respective papers, because CNNs may not work well with the strong data augmentation used for ViTs (as previous work has also pointed out). We re-implemented ResNeXt50-32x4d with the same training setting as ours on CIFAR-100, and its performance is only 60.20%! This also shows that CNN methods have their own suitable data augmentation; we cannot simply put them under the same training setting. When training on the DomainNet datasets, however, where the input resolution is 224x224, both our method and ResNet-50 use the same training setting, and we also re-implemented ResNeXt50-32x4d on DomainNet; you can refer to our reply to *Reviewer 4fHM* in Q3.\n\n**Q4. Why does [54] report higher results?** Note that **the 75.22 on ClipArt and 66.58 on Painting you pointed out are the fine-tuning results, not the train-from-scratch results!** We report train-from-scratch results in Table 3 of the main paper.
Since paper [54] only trains ResNet-50 from scratch for 100 epochs on DomainNet, we cannot simply cite their results. Under the same setting, we re-implement ResNet50 and train it from scratch for 300 epochs. **We also re-implemented ResNeXt50-32x4d on DomainNet during the rebuttal period**; you can refer to our reply to *Reviewer 4fHM* Q3 for detailed results. The results again show the superiority of our method when training from scratch. **If you are interested in fine-tuning results, you can refer to Table 3 in the Appendix.** Those experiments are conducted under the same fine-tuning setting as paper [54], and you can see that our performance is much better than the results reported in [54].\n\n**Q5. The number of heads.** It is well known that DeiT-Tiny and DeiT-Small have 3 and 6 heads respectively. However, this parameter was chosen based on experiments on ImageNet. Similarly, we set the number of heads to 4 and 8 for our DHVT-T and DHVT-S based on experiments on CIFAR-100. For **vanilla ViT** on CIFAR-100, using **3 and 4 heads yields 66.50 and 67.59** respectively, and **DHVT-T with 3 and 4 heads** achieves **80.92 and 80.98** respectively. Choosing **4, 6, and 8 heads in DHVT-S gives 82.27, 82.59 and 82.82, so 8 heads is best for DHVT-S and 4 for DHVT-T**. For consistent scalability, we adopt 4 heads in DHVT-T and 8 heads in DHVT-S. We hypothesize this is because each attribute of an object in CIFAR does not need many channels for its representation, so we keep the number of channels per head smaller than usual.", " Owing to the character limit of a single response and our wish to answer your questions in detail, we have to add another reply!\n\n**Q5. Explanation of HI-MHSA.** The design of the Head Token together with Head-Interacted MHSA is the core contribution of our paper. It enables different attributes of the object to interact with each other, resulting in a general representation. A comprehensive explanation follows.\n\nFirstly, HI-MHSA is inspired by, and tallies with, the human visual principle. Humans usually find and focus on several discriminative parts for recognition, and a convolutional feature channel often corresponds to a certain type of visual pattern, so it is reasonable to divide channels into several groups for re-calibration. In the visualizations in our main paper and appendix, such as Figs. 3 to 5 in the Appendix, the attention maps of Head Tokens demonstrate their attention to different parts of the object: some Head Tokens focus on the head of the object, and others on the main body. Via HI-MHSA, these separate representations of the object can be fused into a general one.\n\nSecondly, HI-MHSA is targeted at, and helpful for, the Transformer model. As is known, one attribute of vanilla MHSA is that it is data-specific. However, this mechanism only models relationships spatial-wise, and the multi-head design manually splits the linearly projected tokens into multiple segments, i.e. heads, and conducts self-attention within each head. We argue that this kind of design is necessary but has its own disadvantage. The softmax function is an essential component of self-attention: it non-linearly generates attention weights from the dot product of queries and keys, and its characteristic is **choosing only one**, meaning that only a few positions receive large weights. So the multi-head mechanism splits tokens into multiple segments, conducts self-attention in each head, and chooses different positions in each head.
**Here we can consider that different segments of the tokens represent different attributes of the object. To stay compatible with the multi-head mechanism and enable the different attributes of the object to be fused into a general representation, we extract a Head Token from each segment of the tokens**. The Head Tokens are concatenated with the other tokens sequentially and processed by vanilla MHSA. With our proposed Head Token design, different heads can now interact with each other.\n\n**A very interesting discovery** is the visualization of the attention maps. As shown in Fig. 7 in the Appendix, in the input layer all the tokens focus on themselves and the head tokens, and in the shallow layers the patch tokens focus on their neighbors. Further, in the middle layers all the tokens focus more on the Head Tokens for better representation, and the deep layers draw more attention to prominent patch tokens. This visualizes the whole feature extraction process of our method. It raises a good question: **is such a characteristic general in other vision transformer architectures?** Previous works have demonstrated the feature extraction and information exchange process on the spatial aspect, but failed to delve into feature exchange on the channel aspect. In our paper, we move a step forward into feature exchange and integration on the channel aspect. Our work may be the first to exhibit a potential feature extraction pattern in general vision transformers, and we hope it will inspire more future research.\n\n**Q6. BatchNorm/LayerNorm?** In our work, BatchNorm is adopted at two positions: SOPE and DAFF. In the following experiments, we use **(A-B)** to denote the normalization choice, where **A is the norm operation in SOPE and B is in DAFF.** This ablation study is conducted on DHVT-T, trained from scratch on CIFAR-100 under the same setting as in the main paper. We evaluate the influence of batch sizes of 128, 256 and 512. From the following table, we can see that using BN is indeed sensitive to the batch size, while its performance is always superior to LN. We use a series of convolution operations, so BN is more compatible; to use LN instead, one has to reshape the feature and reshape it back for the next convolution.\n| Method |b128| b256 |b512|\n|:--:|:--:|:--:|:--:|\n| BN-BN| 79.69 |80.31|**80.98**|\n| BN-LN| 78.69 |79.55|79.25|\n| LN-BN| 79.24 |80.02|80.26|\n| LN-LN| 78.46 |79.09|78.90|\n", " Thank you for your comments! We hope our comprehensive answers can address your concerns.\n\n**Q1. The limited novelties.** In this paper, we provide 3 components, SOPE, DAFF and HI-MHSA, which aim to overcome the inductive bias problems of ViTs with spatial relevance and diverse channel representation. Some minor operations in our proposed method have also been evaluated by other works, such as depth-wise convolution (DWCONV). As also pointed out by _Reviewer 4fHM_, using DWCONV is a common trick in current feed-forward network (FFN) design; it is becoming a design paradigm. Here, standing on the shoulders of giants, we also adopt this hybrid-architecture paradigm and evaluate its effectiveness on small datasets. Nevertheless, we provide our own modification, which comprehensively leverages the scarce training data through our proposed dynamic modules.\n\nOur DAFF is dynamic in two respects. **First**, it is dynamic spatial-wise. Once integrated with DWCONV, the FFN can be seen as a series of convolution layers (see the sketch below).
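A hedged sketch of this convolutional view of the FFN follows; it already includes the parallel shortcut around the DWCONV that we introduce next. The 1x1 convolutions play the role of the two linear layers. Names, the exact BN/activation placement, and the class-token handling (omitted here) are our simplifications, not the paper's exact code.

```python
import torch.nn as nn

class ConvFFN(nn.Module):
    """FFN over patch tokens viewed as a (B, C, H, W) grid."""
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.fc1 = nn.Conv2d(dim, hidden_dim, kernel_size=1)        # first "linear" layer
        self.dw = nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3,
                            padding=1, groups=hidden_dim)           # depth-wise convolution
        self.bn = nn.BatchNorm2d(hidden_dim)
        self.act = nn.GELU()
        self.fc2 = nn.Conv2d(hidden_dim, dim, kernel_size=1)        # second "linear" layer

    def forward(self, x):
        x = self.act(self.fc1(x))
        x = self.act(self.bn(self.dw(x)) + x)  # shortcut alongside the DWCONV
        return self.fc2(x)
```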
We add a shortcut alongside the DWCONV, resulting in a data-specific representation of the input data; without this shortcut, the performance of DHVT-T drops from 80.98 to 78.34. **Second**, it is dynamic channel-wise. After the feature representation is processed by the convolution-based FFN, we add a dynamic aggregation module specifically for the class token: DWCONV is only applied to the patch tokens, so the class token is passed identically through the FFN. **We aim at aggregating patch-token features to re-calibrate the class token, enabling the class token to adjust itself**. You can refer to our reply to *Reviewer SHR5* in Q1 for detailed experimental results.\n\nThough previous works like CoaT and DaViT try to enhance spatial-wise and channel-wise representation simultaneously, we enhance channel-wise representation in a different way: they achieve channel-wise representation by conducting self-attention on transposed tokens, while we introduce Head Tokens together with the normal patch tokens and CLS token to efficiently model relationships both spatial-wise and channel-wise. **The computation of Head Tokens is lightweight and does not require reformulating the original spatial-wise MHSA**. \n\n**Q2. Downstream Tasks.** Thanks for the suggestion. Our main goal is the image classification task, and we provide fine-tuning results in the Supplementary Materials in Tables 1 and 3. Due to the limited computation resources and tight rebuttal time, we could not provide results on COCO and ADE20K. We are confident about our method, and those experiments are left for the future version.\n\n**Q3. Unclear Captions.** Thank you for your suggestion! To make it clearer, we will add a more detailed description in the caption and main part of the figure in the modified version. \n\n**Q4. Why suitable, and why ImageNet.** Our final goal is to bridge the performance gap between CNNs and ViTs. Our modifications improve the whole architecture of vision transformers, **solving the intrinsic problem of the lack of inductive biases in ViTs**. The chosen datasets are the tools to evaluate the effectiveness of our method. Previous works have proposed enough methods for training from scratch on ImageNet but failed to pay attention to much smaller datasets. Our method succeeds on small datasets and also works well on the larger ImageNet dataset, **showing the effectiveness and generalization of our method on datasets of various sizes**; it indeed solves the intrinsic problem of the architecture. We also provide fine-tuning results in Tables 1 and 3 in the Appendix, further showing the effectiveness of our method.", " The paper proposes Dynamic Hybrid Vision Transformer (DHVT) for visual recognition. The authors argue that spatial relevance and diverse channel representation are two important inductive biases for visual recognition, and they address the issue from two perspectives.\n1. On the spatial aspect, they adopt a hybrid structure: they integrate convolution into the patch embedding and multi-layer perceptron (MLP) modules, forcing the model to capture the token features together with their neighbouring features. \n2. On the channel aspect, they introduce a dynamic feature aggregation module in the MLP and head tokens in the multi-head self-attention module to help re-calibrate the channel representation and make different channel-group representations interact with each other.\nThe experiments are conducted on CIFAR-100, ClipArt, Sketch, Painting, and ImageNet. All of these datasets are for classification.
Pros:\n- Well organized and easy to read.\n- Motivation is clear from both the spatial and channel sides.\n- Good results on many small- and medium-size datasets.\n- A nice ablation study is shown in Tab. 5.\n- Limitation is stated.\n\nCons:\n- Some unclear captions should be more concrete, e.g., DAFF in Figure 1.\n- Limited novelties. The depth-wise convolution is well investigated in previous works, like CvT, CPVT, PVT v2, etc., and the use of both channel-wise and spatial-wise representation is explored in DaViT, CoaT, etc.\n- What is the intuitive explanation of the HI-MHSA? The component is a bit complicated.\n- Why is the work suitable for small datasets? Why do the authors put the ImageNet experiment in the Appendix? Which part is specifically designed for small datasets?\n- No downstream tasks were evaluated. It would be better if the authors could provide some results on ADE20K or COCO.\n- The use of batchnorm: how does it influence the performance? ViT-based architectures typically use layernorm so that the batch size will not hinder the performance. See the Strengths And Weaknesses part.", " The paper proposes a new Vision Transformer architecture that merges CNNs and ViT to have the best of the two worlds. The paper focuses on small datasets and Vision Transformers, which have to be somehow regularised or helped to learn the inductive biases that are naturally enforced with CNNs.\n\nThe paper proposes a new architecture that modifies the FFN with a Dynamic Aggregation Feed Forward, and proposes a "Head Token" so that each head of the MHA can attend to all others somehow. This supposedly helps learning better features.\n\nThe authors experimented with the proposals on various small datasets and observed improvements in all tests. PROS:\n- the problem is very important: Vision Transformers focus a lot on big datasets but small datasets are nowadays almost forgotten\n- the paper is clear\n\nCONS:\n- lack of comparison with some state of the art\n- lack of clarity on the motivations for some choices\n\nDETAILS:\nTo me some parts of the paper are unclear. E.g.:\n- the choice of batch norm\n- why you did not compare with CvT, a very famous architecture joining CNNs with Transformers\n- why you did not compare with [54], published one year ago in the same conference\n- why is the Head Token beneficial? I mean, multi-head attention was created to have somehow the same approach CNNs have with multiple filters: learning different aspects of the same input. I am surprised your modification helps\n\nI also did not like the lack of clarity of the experiments. Why is the resolution of CIFAR-10 changed? What happens if you do not do the changes explained in L244? Are all baselines trained with the original augmentations, params, resolutions?\n\nThen, [54] reports 75.22 on ResNet-50 on ClipArt, 66.58 on Paintings, etc. Your results are lower both for the baseline and your method. Why?\n\nL288 why 4 heads? It is not standard at all.\n\nL265 DomainNet is much bigger. Why did you select only a few datasets from DomainNet? Unfortunately, this raises the question for me whether you cherry-picked the results. Moreover, depending on the table (2, 3, 4) the baselines change. The reason is a bit unclear.\n\n\nOTHER:\n- L144: needs a citation, e.g. [54].
However, [54] shows that the networks are competitive with ResNet-50.\n- L55-79: it is not clear why it is important to solve everything in the spatial dimension.\n- L79: SE is not defined.\n- L150: non-hierarchical here is not clear. Try to explain it better.\n- L180: this is a result, I suppose, not part of the methods. If you want to state this, show the results.\n- L181: "more solution" is not English.\n- The Transformer is a good architecture also because it does not depend too much on the batch size. However, your architecture now has Batch Normalization. Why? Couldn't you use Layer Norm? If not, why?\n\n- Why didn't you compare with [54]?\n\n- What is the impact of the batch size on your results?\n\n- What if I use this architecture with a big dataset? Is this an architecture that is general or only for small datasets? What is your opinion here?\n\n- What about downstream tasks? Does your architecture help there?\n\n- Please answer the other questions raised in the previous section! No limitations", " This paper proposes a new Vision Transformer structure named DHVT, aiming to improve the performance of ViTs on the image classification task with small datasets. Specifically, DHVT is designed to improve the spatial relevance among the patches with a convolutional inductive bias and to enhance the channel representation by re-calibrating the channel representations. To validate the effectiveness of the design choices, the authors conducted experiments on small datasets represented by CIFAR-100 and on ImageNet-1k. The CIFAR-100 results look good, and the proposed method outperforms the other baseline methods by large margins. Strengths:\n- It's meaningful to study why and how ViTs are inferior to CNNs when training on small datasets from scratch.\n- The quantitative results in Table 2 look good and surpass the other methods. \n\nWeakness:\n- It's hard to understand the motivation for the proposed detailed design choices and why they are better than other simple designs. For instance, why re-calibrate only the class token with the SE operation [25], considering LocalViT [18] proposed re-scaling channels for all tokens in FFN blocks? A proper discussion with the SE module in LocalViT is needed. Also, the head tokens are proposed to enhance the channel groups for each head by modeling head correlations, which could easily be implemented by an MLP layer. Can you explain the benefit of modeling the head correlations with MSA instead of a simple MLP layer?\n\n- The experiments are insufficient to support the authors' claim. Tables 3 and 4 have very few baseline methods, and it's hard to see the gain of DHVT over the SOTA methods.\n\n- The novelty of modeling spatial relevance is limited, since introducing early convolutions has been shown effective in previous works, e.g. [a]. \n\n[a] Xiao, Tete, et al. "Early convolutions help transformers see better." Advances in Neural Information Processing Systems 34 (2021): 30392-30400.\n \nPlease address the issues pointed out in the weaknesses. In general, yes.", " This paper focuses on generalizing vision transformers to the domain of small datasets through injecting spatial-wise and channel-wise inductive biases into the model architecture design. Specifically, the authors point out that it is hard for vanilla vision transformers to model spatial relevance and diverse channel representation under data-constrained scenarios. These weaknesses of vision transformers compared to their CNN counterparts finally lead to worse performance on small datasets, *e.g.,* CIFAR.
To address these drawbacks, the authors introduce two augmented modules, SOPE and DAFF, corresponding to the patch embedding and feedforward operators. By using additional convolution operations to gather local features, spatial relevance within patches is emphasized, and the model is forced to learn better relations even with limited data. On the other hand, *head tokens* are adopted on top of the widely used classification token and serve as the global token for each group of channels (*i.e.,* a head) in the multi-head self-attention module. In this way, the interaction across different heads is facilitated and shows better representation capability under a low data regime. Regarding the experiments, the authors validate the effectiveness of their modules on various datasets, including small datasets like CIFAR-100 and DomainNet, and larger datasets like ImageNet-1K. A good trade-off between model parameters and test accuracy is shown compared to several state-of-the-art baselines, and ablation studies are also conducted to show the contribution of each part. + Strengths\n + This paper is organized in an easy-to-follow pattern. Related works on data-efficient vision transformers are thoroughly investigated and properly introduced. Each component of the proposed model is described in detail. The experiments are substantial, and some results and experiment details in the supplementary material also help in understanding the paper.\n + The practical value of this paper is promising. Although vision transformer models show better performance and scalability on large-scale datasets, their potential application in data-constrained scenarios is still under-explored. The gap between vision transformers and CNNs on small datasets is a topic worth deep investigation. This paper attempts to address this limitation from the spatial and channel perspectives, and the experiments show improvements on widely used benchmarks, *e.g.,* CIFAR-100. I think this shows a new direction for practically using vision transformer models on datasets with low-scale / low-resolution data.\n + The experiments are sufficient in this paper. The parameter-accuracy trade-off on three datasets, ablative studies on each proposed component, visualizations of attention maps, and the flops-accuracy trade-off in the supplementary material are described in detail.\n + Finally, I really appreciate that the authors are willing to share the code of each module. This really helps in understanding the implementation details.\n\n+ Weaknesses\n + The motivation of this paper could be presented more clearly. The authors point out that spatial relevance and diverse channel representation are two essential aspects that influence the performance of vision transformers under a low data regime. Nevertheless, I am still concerned about the practical connection between the **lack of two inductive biases** and the **low data regime**. To my knowledge, the lack of inductive biases is also a question for large-scale datasets like ImageNet-1k. Previous research [1] has already explored strengthening the spatial relationship between patches, and the diversity of features among different heads has also been investigated [2]. As a result, the fact that the lack of the two inductive biases leads to inferior performance is not so surprising.
As the authors are motivated from these two perspectives to address the limitations of the vision transformer, I think it would be better if they could **emphasize the unique influence of the inductive biases for small datasets**, and explain how the situation differs from large-scale datasets.\n\n + Technical contribution. It is an interesting idea to improve the performance of the vision transformer from the spatial and channel perspectives. However, it must be pointed out that the proposed modules share some resemblance with several previous works. The convolutional patch embedding is widely adopted in several works [1][3]. Using depthwise convolution in the feedforward network is also a common trick [4]. The dynamic aggregation feedforward module also seems like a trivial implementation of SENet, and a similar insight has been explored in [5]. It would be better if the authors could focus on addressing the differences with previous works.\n\n + I also have concerns about some experimental settings. For the main results on CIFAR-100 shown in Table 2, I wonder if the comparisons are fair. According to the authors, DeiT augmentations are adopted for the proposed models. Do the previous CNN models use the same augmentation? Also, the timm implementation can practically improve the performance of CNN models. Have the authors reimplemented the baselines on timm and checked the differences with the original papers? I understand it is impractical to ask the authors to re-implement all the baseline approaches; I think a simple comparison with a widely used model like ResNeXt would also do the trick and make the results more convincing. In Section 4.2, the authors state that the results are the best out of 5 runs. This is clearly different from the routine in previous works, where an average of 10 runs is usually used for the CIFAR datasets. Considering the large variance on CIFAR-100, I think some part of the gain may come from randomness.\n\n\n\n[1] Xiao, Tete, et al. "Early convolutions help transformers see better." Advances in Neural Information Processing Systems 34 (2021): 30392-30400.\n\n[2] Raghu, Maithra, et al. "Do vision transformers see like convolutional neural networks?." Advances in Neural Information Processing Systems 34 (2021): 12116-12128.\n\n[3] Yuan, Li, et al. "Volo: Vision outlooker for visual recognition." arXiv preprint arXiv:2106.13112 (2021).\n\n[4] Guo, Jianyuan, et al. "Cmt: Convolutional neural networks meet vision transformers." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[5] Yuan, Li, et al. "Tokens-to-token vit: Training vision transformers from scratch on imagenet." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. Most of my concerns are addressed in the **Strengths And Weaknesses** section. The following are my detailed questions regarding the weaknesses.\n\n+ I understand that the authors have stated several times in the paper, e.g. in the abstract, '... the lack of data hinders ViTs to attend the spatial relevance ...' or '... the scarce data can not enable ViTs to learn strong enough representation ...'; nevertheless, from my perspective, these statements are more like intuitive claims that are short of technical support. For example, it would be better if some qualitative results could show that spatial relevance is more critical for small datasets. The previous work [1] actually shows that lower attention layers do not learn to attend locally with less training data.
Nevertheless, the *less training data* in [1] refers to ImageNet-1k without ImageNet-22k pretraining, which still belongs to large-scale data in the scope of this paper. I wonder if this claim is also correct for small datasets like CIFAR.\n\n+ Are the setups in Table 2 correct? I have checked the wide-resnet paper and found that WRN28-10 is trained for 200 epochs instead of 300 epochs. Also, as I mentioned before, reporting the best results of 5 runs may benefit from randomness, especially on small datasets.\n\n+ A comparison of the flops-accuracy trade-off for CIFAR-100 would be better. According to the supplementary material, the DHVT-S model that shows the highest performance (85.68) has 6.3G flops, which requires high computation for a small image size. The trade-off in computation cost would be more valuable than the number of training parameters.\n\n+ A typo in line 181: require -> required.\n\n[1] Raghu, Maithra, et al. "Do vision transformers see like convolutional neural networks?." Advances in Neural Information Processing Systems 34 (2021): 12116-12128. The authors have addressed the limitations. I feel that inference speed is also a question that is worth further investigation. Although the authors provide the model throughput in the supplementary material, a detailed comparison with baseline models and efficient CNNs is still lacking." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "C46X7i_YBN-", "8D2Y2FvynkA", "I9Do6wg2Rmq", "r3rGXFyGIxN", "dmjP5qTAkyC", "dmjP5qTAkyC", "bbto0VIQNS6", "nRfto7_786O", "GZ0PHiVwk6g", "H2wb8kssSvO", "Sm1qKnZoEV7", "2UdiSFcHU8t", "wRajdpybMhy", "7c0-so-dySO", "bY4zzDmf_WJ", "QZn-rCzAHf", "claapWcVb-e", "h0LGMFzb4Wd", "99wi6Lnfb6", "nips_2022_bfz-jhJ8wn", "nips_2022_bfz-jhJ8wn", "nips_2022_bfz-jhJ8wn", "nips_2022_bfz-jhJ8wn" ]
nips_2022_Bq2-WN5csW
Loss Landscape Dependent Self-Adjusting Learning Rates in Decentralized Stochastic Gradient Descent
Distributed Deep Learning (DDL) is essential for large-scale Deep Learning (DL) training. Synchronous Stochastic Gradient Descent (SSGD) is the de facto DDL optimization method. Using a sufficiently large batch size is critical to achieving DDL runtime speedup. In a large batch setting, the learning rate must be increased to compensate for the reduced number of parameter updates. However, a large learning rate may harm convergence in SSGD and training could easily diverge. Recently, Decentralized Parallel SGD (DPSGD) has been proposed to improve distributed training speed. In this paper, we find that DPSGD not only has a system-wise runtime benefit but also a significant convergence benefit over SSGD in the large batch setting. Based on a detailed analysis of the DPSGD learning dynamics, we find that DPSGD introduces additional landscape-dependent noise that automatically adjusts the effective learning rate to improve convergence. In addition, we theoretically show that this noise smoothes the loss landscape, hence allowing a larger learning rate. This result also implies that DPSGD can make learning rate tuning much easier for tasks that require careful learning rate warmup (e.g., Attention-Based Language Modeling). We conduct extensive studies over 18 state-of-the-art DL models/tasks and demonstrate that DPSGD often converges in cases where SSGD diverges when training is sensitive to large learning rates. Our findings are consistent across three different application domains: Computer Vision (CIFAR10 and ImageNet-1K), Automatic Speech Recognition (SWB300 and SWB2000) and Natural Language Processing (Wikitext-103); three different types of neural network models: Convolutional Neural Networks, Long Short-Term Memory Recurrent Neural Networks and Attention-based Transformer Models; and two optimizers: SGD and Adam.
Reject
This paper compares all-reduce SGD (SSGD) with decentralized SGD (DPSGD) and argues that the latter can tolerate larger stepsizes due to a smoothing effect induced by noise in DPSGD. The reviewers found that the theoretical contribution is overclaimed. Because of the strong assumptions needed in the theory section (such as assuming Gaussian updates), the analysis becomes somewhat disconnected from the experiments, and, in addition, reviewers found several typos and issues in Section 2 of the original submission. Even though the numerical evaluation was judged more positively by all reviewers (and championed by one), we came to the consensus that the paper should be rejected in its current form. (Minor comments:) In the discussion, we also found that the term “self-adjusting” might be a bit misleading (as learning rates are kept fixed and are not self-adjusting), and that the paper would benefit from a brief discussion of related works that study the beneficial effect of smoothing in large-batch training (such as https://arxiv.org/abs/1805.07898 or https://arxiv.org/abs/1906.10822, etc.).
train
[ "X-h0on63s6M", "Y7CuitXO5VB", "hx92YrCFJ0T", "ZXHDKvJXjP", "L6IFOQRsDS", "MjRD7Lhb7Oz", "9Dqe8IK8oqU", "2niSD2KW9Y", "uTUwq7tt-tr", "yhRi2oXh7fo", "x29T33nOgcD", "IoBUT5n_qd6", "q5TwjNH5KR", "coP-CKeBNiq", "y4joeCH_EIL", "ODy9dpaI4_Q" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed responses. After reading the rebuttals, I tend to keep my score.", " I acknowledge the authors' response.\nI maintain my rating, since there is no convincing theoretical argument in favor of the proposed method. This is a major problem, since the authors claim to provide such arguments.\nEither the authors give up the theoretical analysis, either they make it sound. Anyway, it cannot be achieved in a minor revision of the paper.\n", " I thank the authors for their reply, which satisfies some of my concerns, but unfortunately not the major ones. \nI believe the theoretical results of the paper needs to be revised, to establish a much clearer link between theory and proposed algorithm. Furthermore the authors have not answered satisfactorily to Q2. \nOverall the algorithm presented and the intuition behind it seem to be rather interesting, however I do not think that the paper in its current form is ready for publication. I will keep my original score.\n", " Weakness\n\nTheory\n\n(1) The hypothesis: ``$\\frac{1}{n} \\sum_{i = 1}^n X_i - \\frac{1}{n-1} \\sum_{i = 1}^{n-1} X_i \\leq \\epsilon$ almost surely'' is not reasonable when the $X_i$ i.i.d. and without compact support. So, we cannot consider the simple case where the $X_i$ = $\\nabla L^{\\mu_i(t)}$\n($\\vec{w}_i(t)$) are i.i.d. Gaussian random variables. \n\nThank you for pointing it out. We can relax the requirement of “almost surely” to be high probability. This is true when $n$ is sufficiently large as mentioned in the statement of Theorem 1. The reason is that the variance gets reduced when $n$ is large and both $\\frac{1}{n} \\sum_\\{i = 1}^n X_i$ and $\\frac{1}{n-1} \\sum_{i = 1}^{n-1} X_i$ concentrate around the same mean with very small deviation with high probability. Then the whole theorem holds with high probability. We will make it more clear in the revision.\n\n(2) In the proof: it is true that, conditionally to $\\mathcal{F}\\_{t-1}$ , the $\\vec{w}\\_i(t)$ are independent, but it is false for the $\\delta w_i(t)$. \nLet us recall that: $\\delta \\vec{w}\\_{i}(t) $ = $\\vec{w}_a(t) - \\vec{w}_i(t)$ = $\\frac{1}{n}$ $\\sum\\_{j=1}^{n}$ $\\vec{w}\\_{j}(t) - \\vec{w}\\_{i}(t)$. Thus, all $\\delta \\vec{w}\\_{i}(t)$ are linear combinations of the same set of variables, with only non-zero coefficients. It is thus impossible that two $\\vec{w}\\_{i}(t)$ are independent. Besides, independence has nothing to do with the fact the $\\sum\\_{i=1}^n \\delta \\vec{w}\\_{i}(t) = 0$. On the contrary, this equation indicates a strong dependence between the terms. \n\nThank you for your comments. We agree with this. The reason for assuming the i.i.d. $\\delta w\\_i(t)$ is that it provides a cleanest possible approach for supporting our argument: DPSGD is optimizing a smoother function. We can add a remark on this assumption in the revision and acknowledge that we cannot prove it rigorously. In addition, we can add a paragraph illustrating the intuitive reason why it smoothen the landscape by assuming i.i.d. Gaussian $\\delta w\\_i(t)$.\n\nExperiments\n\n(1) Error margins: there is no mention of any variance or error margin in the results. So, we cannot evaluate the significance of the presented results: if a reported result seems to be better than another, it may be luck... \n\nFirst, please note that In large-scale NLP, ASR experiments like ours, it is customary that no variance is reported. See Transformer[50], BERT[6], GPT[42,43], and various ASR papers [15,57]. 
These experiments are expensive enough that the common practice is not to do multiple runs.\n\nSecond, for all the numbers we reported in the experimental results section of the paper, we keep two decimal points. For the SSGD vs DPSGD numbers reported in Tables 1 and 2 and the 5 large-scale runs (CV, ASR, NLP), we ran each experiment at least 3 times. The variance of the heldout loss for the ASR and NLP tasks is effectively 0.00 (two decimal points). The variance of test accuracy for the CV tasks is consistently smaller than 0.10. The difference between SSGD and DPSGD is several orders of magnitude higher (e.g., a heldout loss of 10.37 vs. 1.47 in ASR tasks). We can make a remark about variance in our final version.\n\nThird, we grid search over learning rates and Gaussian noise strengths as demonstrated in Section 4.3 to show that DPSGD consistently outperforms SSGD in the large batch setting.\n\nFinally, showing an advantage for DPSGD across three different tasks using different architectures is a strong indication that we aren’t just getting lucky with one or two experiments.\n\n(2) The main drawback of the experimental setup is its complexity. In particular, a huge preliminary phase of fine-tuning seems to be necessary, without proposing a reproducible procedure to recover the proposed hyperparameters (which are very numerous). The authors only provide citations of preceding works, where the fine-tuning has already been done. \n\nFirst, we didn’t claim DPSGD can automatically pick the optimal hyper-parameter setup; we claim that DPSGD gives practitioners more freedom in choosing hyper-parameters (i.e., a larger range of learning rates).\n\nSecond, all the hyper-parameter setups we use are from the research literature that uses SSGD, and therefore such hyper-parameter setups are optimized for SSGD. We use them “as is” for DPSGD. In other words, those hyper-parameters are “fine-tuned” for SSGD, but not for DPSGD. \n\n", " \nExperiments \n\n(3) It is clear that the authors claim to prove the reliability of DPSGD as the batch size increases; they do not claim to beat the state of the art. So, there is no point in testing architectures trained with highly fine-tuned optimization procedures. A unified and simplified optimization process should have been considered.\n\nOne key challenge of distributed training is how to tune hyper-parameters when the batch size increases -- there is no known general solution. Some common practices include learning rate warmup and linear scaling. We show that, in such practice, DPSGD can work with a larger range of hyper-parameters than SSGD, which is the de-facto distributed training algorithm. \n\nIt is a common practice to use publicly known best hyper-parameters from the research literature when comparing different training algorithms. It is common knowledge among practitioners that different training tasks require different hyper-parameter setups (learning rate, batch size, etc.). We are unaware of any work that, when evaluating across different domains, adopts the same set of hyper-parameters or a “unified and simplified optimization process”.\n\nFinally, it is hard to publish work that only proves the reliability of DPSGD without showing competitive performance compared to SOTA. Almost all the existing high-performance deep learning models are the result of heavy hyperparameter optimization depending on the task of interest and the architecture under investigation.
In order to compare with published results and at least achieve competitive results, it is a reasonable strategy to start DPSGD with a similar optimization process to the published ones. \n\nClarity\n\n(1) The \"related works\" section must not be put in the appendix. This section is fundamental to draw a line between existing works and actual contributions, and to evaluate fairly the significance of the paper. \n\nThe related works are in Appendix G of the full paper (i.e., supplementary materials). We will address this in the final version of the paper.\n\n(2) Using $\\vec{w}\\_j$, where $j$ is an index standing for an integer, and $\\vec{w}\\_a$, where $a$ is a simple letter (standing for \"average\"), is confusing. Why not use $\\hat{w}$ instead of $\\vec{w}\\_a$? Same issue with $\\vec{w}_{s,j}$. Besides, arrows can easily be removed, or replaced by bold text: $\\mathbf{w}\\_j$ instead of $\\vec{w}\\_j$. \n\nThanks for the suggestion. To avoid confusion, we will change the notation to use $\\hat{w}$ for the average weights over all learners and $\\hat{w}\\_j$ for the averaged weights for learner-j in the final version of the paper. \n\n(3) If the authors do not use the general version of DPSGD presented in [33], then the general notation $\\vec{w}\\_{s, j}$ can be removed. \n\n$\\vec{w}\\_{s,j}(t)$ represents the locally averaged weight (at iteration t) over a subset of learners that a given learner-j picks according to the mixing matrix. Learner-j uses $\\vec{w}\\_{s,j}(t)$ as its starting weight for its weight update for the next iteration (t+1), as shown in Eq. 2 (a toy sketch of both update rules is given after this review block below). \n\nFor the large-scale experiments conducted in this paper, as mentioned in the first paragraph of Section 4, each DPSGD learner picks a random neighbor in each iteration as done in [59]. The theoretical analysis in this paper assumes a general version of the mixing matrix presented in [33].\n\n", " Questions\n\n(1) It is not very clear to me why only Gaussian noise has been considered; it appears like the noise simulation could have been designed such that it is more similar to the noise introduced by DPSGD. \n\nIt is true that the noise introduced by DPSGD is not Gaussian, and we don’t have an easy way to simulate DPSGD noise. The point of the SSGD* experiment is to show that one cannot simply use Gaussian noise (even with fine-tuned noise strength) to get the effect of DPSGD noise. The good news is that one can get DPSGD noise for free in a DPSGD system. \n\nLimitations\n\n(1) The authors did not really discuss the limitations of their evaluation, which is unfortunate. As mentioned earlier, especially the simulated noise injection has a lot of room for further considerations. It again appears that this discussion has been omitted due to the page limit. An outlook on potential societal impact is also not provided. Since the important results of this paper mostly benefit groups with very large training resources, some discussion on this topic would have been appropriate.\n\nWe fine-tune SSGD* with different Gaussian noise strengths. In future work, we can consider different types of noise distributions to see if it is possible to get some noise closer to DPSGD noise.\n\nOne comment on societal impact: DPSGD can converge under a larger range of hyperparameters, so it can reduce the number of very expensive training trials, thus reducing energy cost.\n", " Weakness\n\n(1) The writing of this work has a big problem.
The authors put important context, including Related Works and the DPSGD and SSGD Runtime Comparison, in the appendix. Related work is almost an indispensable chapter of an article. Besides, whether DPSGD has an advantage in runtime compared to SSGD is also important. Since the authors claim that DPSGD can have an advantage in terms of convergence, one would expect to see whether this advantage can bring about a saving in runtime. Although the conference has body content restrictions, putting very important content in the appendix is a discouraged behavior. \n\nRelated works are in Appendix G in the full paper (supplementary materials) and runtime results are in Appendix F in the full paper (supplementary materials). In the final version, we will add the related works and runtime results to the main paper.\n\n(2) Although this article emphasizes that DPSGD has great benefits, from the perspective of actual performance, the results are not very satisfactory when the learning rate is very large. Therefore, whether a large learning rate brings a great improvement in efficiency will be a very important question, and I don't think the authors provide a detailed analysis in the text.\n\nFirst, there is not a known general solution to maintain model accuracy (within the *same* number of training epochs) when increasing the batch size [12,24,18,54,57,61]. One key research area of distributed training is how to reduce the model accuracy gap between the small-batch baseline and the large-batch distributed training setting. However, please note that DPSGD is able to recover the baseline accuracy in the large batch setting (or with a shorter warmup window) for the ASR and NLP tasks. \n\nIn the CV cases, as we discussed at the beginning of Section 4.1: on ImageNet-1K we test 6 CNN models – AlexNet, VGG11, VGG11-BN, ResNet-50, ResNext-50 and DenseNet-161. Among them, AlexNet and VGG have rougher loss landscapes and can only work with smaller learning rates, while VGG11-BN, ResNet-50, ResNext-50, and DenseNet-161 have smoother loss landscapes thanks to the use of BatchNorm or Residual Connections, and thus can work with larger learning rates. Table 1 in the main paper lists the most difficult large-batch CV tasks. However, we do show that DPSGD outperforms SSGD significantly in these tasks. Please note that for easier large-batch CV tasks (e.g., ResNet, ResNext and DenseNet), DPSGD does recover the baseline accuracy in the large batch setting, as reported in Appendix E.6 (Figure 9 and Table 10) in the supplementary materials.\n\nSecond, since SSGD is the de facto distributed training approach, it is a common practice to increase the learning rate to compensate for the reduced number of parameter updates in SSGD. The goal of this paper is to compare SSGD and DPSGD in such settings. The general tradeoff between training time benefits and the model quality degradation caused by distributed training (SSGD, DPSGD or otherwise) is beyond the scope of this paper.\n", " Questions\n\n(1) The orthogonal decomposition of noise in Equation (4) is based on the expectation over users. In other words, it only holds when the number of users is very large. However, the experiment shown in Figure 2 is only performed with 5 learners. In this case, are the results of the experiment consistent with the theoretical analysis? \n\nThe decomposition of noise in Eq. 4 by itself does not depend on the number of users.
However, whether or not we can approximate the distribution of the gradients from a finite number of learners by their mean and variance is a valid question. To this end, we tried with fewer and more learners (3 or 7) on MNIST, and we did not find any difference in terms of the ability of DPSGD to help convergence; the general trend of a reduced effective learning rate at the beginning of training (shown in Fig. 2b) was unchanged. In the final version of the paper, we will mention these experiments. The intuitive explanation for this is that, besides averaging over different learners, the overall learning dynamics also depends on integration over time (iterations), which provides another averaging process that makes the Gaussian approximation a good one. \n\n(2) In Table 1, the best performance is achieved when lr=1x, and the performance drops significantly when lr is very large. What are the advantages of a large learning rate? \n\nTable 1 is measured for the *same* number of epochs. With a larger batch size, one can use more GPUs. For example, training with batch size 256 takes ~16x more time to finish than with batch size 4096, because the latter can run on 16x more GPUs. Please note that for more “modern” architectures in the CV tasks (e.g., ResNet, ResNext and DenseNet), DPSGD does recover the baseline accuracy in the large batch setting, as reported in Appendix E.6 (Figure 9 and Table 10) in the supplementary materials. For the reasons why the listed CV tasks (AlexNet, VGG, VGG-BN) in Table 1 are difficult to train in large-batch settings, please also see the response to Weakness (2). \n\nLimitations\n\n(1) Even though the authors claim that they describe the limitations of their work, I cannot find an explicit description. I would like to ask the authors if they can show me where the limitations are discussed during the rebuttal. \n\nOne limitation is that we need to introduce the assumption that $\\delta\\_i w|\\mathcal{F}\\_{t-1}$ is Gaussian, which is a bit strong. However, it gives us the cleanest possible way to illustrate that DPSGD is optimizing a smoother landscape (we show the smoothness constant in the DPSGD case is smaller compared with SSGD; a toy numerical illustration of this smoothing effect is given after this review block below). Without assuming this, the challenge is that we cannot exactly calculate the smoothness constant since there is no closed form. \n
$g_j=\\nabla L^{\\mu_j}(w_j)$ represents the minibatch gradient for learner-j at weight $w_j$. Since we are interested in the learning dynamics of the whole system including all learners, we focused on the statistical properties of $g_j$ over all learners, in particular the average gradient $g_a$ over all learners and their variance ($\\Delta_{DP}$, $\\Delta_S$, $\\Delta^{(2)}$), rather than gradients from individual learners. The total loss for a given learner is the average of its minibatch losses, which is taken to be the cross-entropy loss as described in line 93 of our paper. \n\n(4) The layout of the paper should be revised: the abstract is too long and the related work section cannot be relegated to the appendix. Space can be saved by avoiding repetitions, focusing the discussion and removing unnecessary \"summary\" sentences at the end of each experimental section. \n\nThank you for the suggestions! The related works are in Appendix G of the full paper (i.e., supplementary materials). In the final version of the paper, we will include the related works section.\n\n(5) The claim that DPSGD needs very little to no tuning is exaggerated and not backed up by the experiments. In particular, note that in the NLP experiment DPSGD still needs the learning rate warm-up stage, although shorter. \n\nWe didn’t claim DPSGD can automatically pick the optimal hyper-parameter setup (we are unaware of any work that claims to do that); we claim that DPSGD allows more freedom in choosing hyper-parameters (i.e., a larger range of learning rates) than SSGD. Please note that DPSGD doesn’t introduce any new hyper-parameter; it simply uses whatever hyper-parameters practitioners might use in SSGD, which is the de-facto distributed training algorithm. \n\n(6) The paper would benefit from a more complete (also formally) introduction of the two algorithms. For instance it is not clear what happens at the end of the optimization with DPSGD. Are the weights averaged over all the workers?\n\nWe presented the introduction to these two algorithms at the beginning of Section 2.\n\nFor the large-scale experiments, as mentioned in Section 4, the implementation is based on [33], with a randomized mixing matrix as in [59]. By the end of optimization, the weights are averaged across learners -- in practice, the model quality is no different (to any significant degree) than picking one model from any learner. \n
However, it gives us the cleanest possible way to illustrate that DPSGD is optimizing a smoother landscape (we show the smoothness constant in the DPSGD case is smaller compared with SSGD). Even though Gaussian noise is a simplified assumption, it is a useful model to interpret the optimization dynamics.\n\n(2) One might hypothesize that the advantage of DPSGD is that the noise is adaptive (but still Gaussian). To empirically verify this, the authors could run an experiment where the Gaussian noise added to SSGD is scheduled after what the authors find by running DPSGD (e.g. figure 2b) for the MNIST setting. If the resulting method performs on par with DPSGD, then I could be more willing to accept the assumption, although questions would remain. Otherwise, I would argue that Th 1. is not useful to understand the advantages of DPSGD. \n\nOur results are consistent with the reviewer’s hypothesis that DPSGD effectively introduces an adaptive learning rate that depends on the loss landscape – the rougher the landscape, the smaller the effective learning rate, which allows the algorithm to converge even with large “bare” learning rates. As shown in Fig. 4 in the Appendix, in the initial stage of learning, the strength of the DPSGD-specific noise $\\Delta^{(2)}$ is much larger (by 1-2 orders of magnitude) than the SSGD noise. As the system learns and the landscape becomes smoother, $\\Delta^{(2)}$ decreases; it reaches the same level as the SSGD noise after the system passes the initial learning stage and stays constant (with large fluctuations) afterwards. Therefore, adding Gaussian noise with a constant strength after the system passes through the initial rough landscape should have similar performance to DPSGD noise, which is only critical for convergence in the initial stage of learning. \n\nOthers:\n\n(1) From Lemma 2 of Random Gradient-Free Minimization of Convex Functions [39], it would follow that the smoothness coefficient should be $\\frac{\\sqrt{n}}{\\sigma}G$, where $n$ is the dimension of the parameter vector. From where do you obtain the $\\frac{2}{\\sigma} G$ that you report in the paper? \n\nSorry, we missed a $\\sqrt{n}$. We will add it in the revision.\n\n(2) I do not quite understand how the theoretical result cannot involve the mixing matrix. As the authors note, for a complete mixing matrix DPSGD is equivalent to SSGD; this should be reflected in the statement and formulation of the theorem. \n\nThe theoretical result requires that every machine has a different weight. If the difference between the weight and the averaged weight satisfies the i.i.d. Gaussian assumption, then we can show that DPSGD is doing optimization on a smoother landscape. We will add more details in the revision.\n\n(3) In eq. (2) is it intentional that there are 2 sets of weights $w_j$ and $w_{s,j}$? Isn’t it that the loss is also computed at $w_{s,j}$? Or does each worker maintain 2 sets of weights? \n\nNo, gradients are calculated w.r.t. $w_j$. Each worker maintains 2 sets of weights so that gradient calculation and weight communication can happen concurrently.\n\nMinor:\n\n(1) Why only limiting to the cross-entropy loss? \n\nIt is the most commonly used loss function in these CV, NLP and ASR tasks.\n\n(2) I do not understand the reason behind the \"self\" of self-adjusting. Why \"self\"?
Usually one speaks about an adaptive method.\n\nThe noise strength changes automatically according to the loss landscape (no manual tuning is needed) in DPSGD. DPSGD doesn’t introduce any new hyper-parameter to tune; the loss-landscape-dependent noise effect comes for free (i.e., inherent system noise). \n\n(3) Typos: allow [for]; tasks -> models in line 29; inconsistencies between figures, tables and text (e.g. Fig 1 SSGD-noise vs SSGD* in the text). \n\nThanks! We will fix these.\n\n", " Limitations:\n\n(1) Limitations of the proposed analysis are not discussed, and should be added, e.g. in relation to the assumptions of the theorem. \n\nWe need to assume that the noise in the weights is independent Gaussian noise, which may not be realistic. However, assuming it can help us understand the landscape-smoothing effect.\n\n(2) I do not think it is proper to have one paragraph sketching out very informally a possible result suggesting that other \"people\" (line 144) then can fill the gaps. One might suggest this as potentially interesting future work (briefly and in the conclusions). Or spell out the theorem and include the proof. As it stands, the passage starting at 144 should be removed. \n\nThank you for your suggestion. We will put it in the conclusion section to suggest future works.\n", " Questions\n\n(1) There is a beginning of theoretical analysis of the proposed method through the Hessian of the loss. SSGD and DPSGD provide information about the loss surface, and, intuitively, DPSGD provides richer information. Is it possible to formulate SSGD and DPSGD as second-order methods, where the approximation of the preconditioning matrix (usually the Hessian) is computed in a distributed way? Is this the optimal second-order method with acceleration through distributed computing? Etc.\n\nNote: Fig. 2b shows an evolution of the learning rate, which also appears when using second-order methods. \nThe reviewer is correct that we are interested in theoretical analysis (not in the form of theorem proving but rather a physics-style understanding) of both algorithms (SSGD and DPSGD) in terms of the loss landscape characterized in part by the Hessian of the loss. As shown in previous work (e.g., ref. 8), all SGD-based algorithms introduce a landscape-dependent (Hessian-dependent) noise that has a similar effect to a second-order method without computing the Hessian explicitly, which is computationally expensive. \nAs we explained in the paper (lines 171-179), even though SSGD has a Hessian-dependent noise, its amplitude is proportional to 1/(nB), which is small given a large number of learners (n). For DPSGD, in addition to the SSGD noise, there is another noise term $\\Delta^{(2)}$ caused by the asynchrony of the learners. As shown in Eq. 5, this additional noise term also depends on the Hessian. As we stated in our paper (lines 177-179), “A main finding of our paper is that the additional landscape-dependent noise $\\Delta^{(2)}$ in DPSGD can make up for the small SSGD noise when nB is large and help enhance convergence in the large batch setting.” \nSince the additional landscape-dependent noise $\\Delta^{(2)}$ originates from the difference in weights among the different learners (C in Eq.
5 is the covariance matrix of weights from different learners), as the reviewer may have guessed, DPSGD (but not SSGD) represents a distributed way of computing landscape information (e.g., the Hessian).\nFinally, we thank the reviewer for pointing out the similarity between the evolution of the effective learning rate (shown in Fig. 2b) and that from second-order methods, which we will mention in the final version of the paper. \n\nLimitations\n\n(1) no error margins in the results \n\nPlease see the response to Experiments (1).\n\n(2) lack of reproducibility: the reader lacks information to obtain the same hyperparameters \n\nIn Sections 4.1 (Computer Vision), 4.2 (Automatic Speech Recognition), and 4.5 (Natural Language Processing), we clearly stated the exact hyper-parameter setup (batch size, learning rate, warmup schedule) one needs to reproduce the results.\n\n", " The authors propose a comparison between two methods of parallelization of SGD, namely SSGD and DPSGD. They prove theoretically that the decentralized one (DPSGD) adds a smoothing effect to the naive SSGD, and show that DPSGD is better than SSGD as the batch size increases. ## Strengths\n\nThis paper makes a tiny link between some distributed methods and second-order methods. It helps to understand their dynamics. Furthermore, the computation of an \"effective learning rate\" for DPSGD is useful to illustrate the implicit mechanisms of this method.\n\nThe experiments on large datasets, such as ImageNet, are valuable.\n\nAccording to the experiments, DPSGD seems to perform way better than the alternative choices when the batch size increases.\n\n### Control experiments\n\nIt is noticeable that the authors have tested a modification of SSGD, called ``SSGD*'', in order to test SSGD fairly against DPSGD. \n\n## Weaknesses\n\n### Theory\n\nTheorem 1 contains at least two major issues:\n * The hypothesis: ``$\\| \\frac{1}{n} \\sum\\_{i = 1}^n X\\_i - \\frac{1}{n-1} \\sum\\_{i = 1}^{n - 1} X\\_i \\| \\leq \\epsilon$ almost surely'' is not reasonable when the $X\\_i$ are i.i.d. and without compact support. So, we cannot consider the simple case where the $X\\_i = \\nabla L^{\\mu\\_i(t)}(\\vec{w}\\_i(t))$ are i.i.d. Gaussian random variables.\n * In the proof: it is true that, conditionally to $\\mathcal{F}\\_{t-1}$, the $\\vec{w}\\_{i}(t)$ are independent, but it is false for the $\\delta w\\_i(t)$. \nLet us recall that: $\\delta \\vec{w}\\_{i}(t) = \\vec{w}\\_{a}(t) - \\vec{w}\\_{i}(t) = \\frac{1}{n} \\sum\\_{j = 1}^n \\vec{w}\\_{j}(t) - \\vec{w}\\_{i}(t)$. Thus, all $\\delta \\vec{w}\\_{i}(t)$ are linear combinations of the same set of variables, with only non-zero coefficients. It is thus impossible that two $\\delta \\vec{w}\\_{i}(t)$ are independent. Besides, independence has nothing to do with the fact that $\\sum_{i=1}^n \\delta \\vec{w}\\_{i}(t) = 0$... on the contrary, this equation indicates a strong dependence between the terms.\n\nHowever, it is probable that this theorem can be made useful and rigorous. But, anyway, it does not seem to be useful in the presented work. \n\n### Experiments\n\nError margins: there is no mention of any variance or error margin in the results. So, we cannot evaluate the significance of the presented results: if a reported result seems to be better than another, it may be luck... \n\nThe main drawback of the experimental setup is its complexity.
In particular, a huge preliminary phase of fine-tuning seems to be necessary, without proposing a reproducible procedure to recover the proposed hyperparameters (which are very numerous). The authors only provide citations of preceding works, where the fine-tuning has already been done. \n\nIt is clear that the authors claim to prove the reliability of DPSGD as the batch size increases; they do not claim to beat the state of the art. So, there is no point in testing architectures trained with highly fine-tuned optimization procedures. A unified and simplified optimization process should have been considered.\n\n### Clarity\n\nThe paper suffers from several writing issues.\n * The \"related works\" section must not be put in the appendix. This section is fundamental to draw a line between existing works and actual contributions, and to evaluate fairly the significance of the paper.\n * Using $\\vec{w}_j$, where $j$ is an index standing for an integer, and $\\vec{w}_a$, where $a$ is a simple letter (standing for \"average\"), is confusing. Why not use $\\hat{w}$ instead of $\\vec{w}_a$? Same issue with $\\vec{w}_{s, j}$. Besides, arrows can easily be removed, or replaced by bold text: $\\mathbf{w}_j$ instead of $\\vec{w}_j$.\n * If the authors do not use the general version of DPSGD presented in [33], then the general notation $\\vec{w}_{s, j}$ can be removed.\n There is a beginning of theoretical analysis of the proposed method through the Hessian of the loss. SSGD and DPSGD provide information about the loss surface, and, intuitively, DPSGD provides richer information. Is it possible to formulate SSGD and DPSGD as second-order methods, where the approximation of the preconditioning matrix (usually the Hessian) is computed in a distributed way? Is this the optimal second-order method with acceleration through distributed computing? Etc.\n\nNote: Fig. 2b shows an evolution of the learning rate, which also appears when using second-order methods. The authors have tested an improved version of SSGD against DPSGD, in order to compare them fairly.\n\nThe main unchecked limitations are (as mentioned above):\n * no error margins in the results;\n * lack of reproducibility: the reader lacks information to obtain the same hyperparameters.", " The paper studies two distributed learning algorithms for training large-scale deep learning models.\nThe authors argue that decentralized parallel stochastic gradient descent (DPSGD) has a series of advantages over the more commonly employed Synchronous SGD (SSGD). They argue that the optimization dynamics of DPSGD injects noise which smooths the optimization landscape and allows for larger learning rates. This is especially beneficial in the large-batch-size setting.\nThe paper contains one main theoretical result concerning the smoothness of the objective function that DPSGD implicitly minimizes. \nSeveral experiments with various tasks and architectures are conducted.
**Strengths:**\n- The experimental section contains several diverse tasks and architectures, therefore having large coverage\n- The intuitive and empirical arguments in favor of DPSGD are interesting, somewhat helping build up confidence in this algorithm, which has other advantages w.r.t. SSGD\n\n**Weaknesses:**\n- I am not quite convinced of the utility of the theoretical results, especially regarding the Gaussian hypothesis (see below in questions)\n- Even factoring this concern out, I believe the theoretical contribution to be very limited as it is basically the application of a known Lemma\n- The presentation is unclear in various passages, including the proof of the theorem; notation is too often overloaded (see eq (3)), making it difficult to understand what is what. This is especially bad in section 2.2: e.g. in equation (4) one might expect the authors to discuss the DPSGD algorithm, but there is no mention of the gradient $g_j=\\nabla L^{\\mu_j}(w_j)$. Other important quantities such as the total loss are never explicitly defined, and the reader is left with the need to self-interpret the paper at various points.\n- The layout of the paper should be revised: the abstract is too long and the related work section cannot be relegated to the appendix. Space can be saved by avoiding repetitions, focusing the discussion and removing unnecessary \"summary\" sentences at the end of each experimental section.\n- The claim that DPSGD needs very little to no tuning is exaggerated and not backed up by the experiments. In particular, note that in the NLP experiment DPSGD still needs the learning rate warm-up stage, although shorter. \n- The paper would benefit from a more complete (also formally) introduction of the two algorithms. For instance it is not clear what happens at the end of the optimization with DPSGD. Are the weights averaged over all the workers?\n *Major*\n- I do not understand why the assumption that $\\delta_i w |\\mathcal{F}_{t-1}$ should be Gaussian with zero mean and especially equal (scalar) variance. Why should this make sense? In the appendix the authors call for the central limit theorem, but 1) I do not understand to what it should be applied 2) I think that at least the multidimensional version of the CLT should be considered here. On the contrary, the empirical results with SSGD + Gaussian noise (which show poor performances) seem to suggest that there is a fundamental difference between the noise induced by DPSGD and Gaussian noise. \n- One might hypothesize that the advantage of DPSGD is that the noise is adaptive (but still Gaussian). To empirically verify this, the authors could run an experiment where the Gaussian noise added to SSGD is scheduled after what the authors find by running DPSGD (e.g. figure 2b) for the MNIST setting. If the resulting method performs on par with DPSGD, then I could be more willing to accept the assumption, although questions would remain. \n- Otherwise, I would argue that Th 1. is not useful to understand the advantages of DPSGD. \n\n*Others*\n- From Lemma 2 of Random Gradient-Free Minimization of Convex Functions [39] it would follow that the smoothness coefficient should be $\\frac{\\sqrt{n}}{\\sigma} G$, where $n$ is the dimension of the parameter vector. From where do you obtain the $\\frac{2}{\\sigma} G$ that you report in the paper?\n- I do not quite understand how the theoretical result cannot involve the mixing matrix.
As the authors note, for a complete mixing matrix DPSGD is equivalent to SSGD; this should be reflected in the statement and formulation of the theorem.\n- In eq. (2) is it intentional that there are 2 sets of weights $w_j$ and $w_{s,j}$? Isn't it that the loss is also computed at $w_{s,j}$? Or does each worker maintain 2 sets of weights?\n\n*Minor*\n- Why only limiting to the cross-entropy loss? \n- I do not understand the reason behind the \"self\" of self-adjusting. Why \"self\"? Usually one speaks about an adaptive method.\n- Typos: allow [for]; tasks -> models in line 29; inconsistencies between figures, tables and text (e.g. Fig 1 SSGD-noise vs SSGD* in the text).\n - Negative societal impacts are not discussed, but I do not see any visible one, so I think missing that discussion is fine for this paper. \n- Limitations of the proposed analysis are not discussed, and should be added, e.g. in relation to the assumptions of the theorem.\n- I do not think it is proper to have one paragraph sketching out very informally a possible result suggesting that other \"people\" (line 144) then can fill the gaps. One might suggest this as potentially interesting future work (briefly and in the conclusions). Or spell out the theorem and include the proof. As it stands, the passage starting at 144 should be removed.", " This work compares two stochastic gradient descent methods for Distributed Deep Learning (DDL): one is Synchronous Stochastic Gradient Descent (SSGD), and the other is Decentralized Parallel SGD (DPSGD), in a large batch setting. The authors empirically find that DPSGD not only has a runtime benefit but also a significant convergence benefit over SSGD in the large batch setting, which is due to additional landscape-dependent noise brought by DPSGD.\n\nTo understand this phenomenon, the authors perform both theoretical analysis and experimental investigation. They claim that the introduced noise smooths the loss landscape. They also conduct extensive studies over 18 state-of-the-art DL models/tasks and demonstrate that DPSGD often converges in cases where SSGD diverges when training is sensitive to large learning rates. Strengths\n\n(1) The authors provide a theoretical analysis to compare the smoothness of the two methods.\n\n(2) This work conducts experiments on an adequate number of tasks and datasets.\n\nWeaknesses\n\n(1) The writing of this work has a big problem. The authors put important context including Related Works and the DPSGD and SSGD Runtime Comparison in the appendix. \n\nRelated work is almost an indispensable chapter of an article. Besides, whether DPSGD has an advantage in runtime compared to SSGD is also important. Since the authors claim that DPSGD can have an advantage in terms of convergence, one would expect to see whether this advantage can bring about a saving in runtime. Although the conference has body content restrictions, putting very important content in the appendix is a discouraged behavior.\n\n(2) Although this article emphasizes that DPSGD has great benefits, from the perspective of actual performance, the results are not very satisfactory when the learning rate is very large. Therefore, whether a large learning rate brings a great improvement in efficiency will be a very important question, and I don't think the authors provide a detailed analysis in the text. (1) The orthogonal decomposition of noise in Equation (4) is based on the expectation over users. In other words, it only holds when the number of users is very large.
However, the experiment shown in Figure 2 is only performed with 5 learners. In this case, are the results of the experiment consistent with the theoretical analysis?\n\n(2) In Table 1, the best performance is achieved when lr=1x, and the performance drops significantly when lr is very large. What are the advantages of a large learning rate? Even though the authors claim that they describe the limitations of their work, I cannot find an explicit description. I would like to ask the authors if they can show me where the limitations are discussed during the rebuttal.", " The paper compares the convergence behavior of two common training schemes for distributed stochastic gradient descent optimization on large batch sizes. In particular, Synchronous Stochastic Gradient Descent (SSGD), where weight updates are computed by synchronizing all distributed learners globally, is being evaluated against Decentralized Parallel Stochastic Gradient Descent (DPSGD), where only a random selection of neighboring distributed learners is used to compute the weight update. The authors observe a much better convergence behavior with DPSGD and explore the hypothesis that this is due to an inherent quality of this training scheme to introduce stabilizing noise in the weight update rule, which automatically scales the learning rate depending on the landscape of the loss function. The paper continues to give a theoretical explanation for this observation and finally provides a very extensive evaluation on various machine learning tasks in a large distributed training environment. The results demonstrate a clear advantage of DPSGD for large batch sizes and a more robust convergence behavior that doesn't require reducing the learning rates drastically, as is the case for SSGD. The claim of this paper is very adequately examined. The biggest strength of this paper is clearly the very deep evaluation, taking large CV, NLP and ASR tasks into account. The authors hereby build on top of previously published modern state-of-the-art training recipes with warm-up periods and adaptive gradient optimizers in order to ensure optimal performance. The thorough evaluation and the conclusions drawn from the experiments are convincing and of high significance to the field.\n\nThe scope of this work seems to be a bit too large for this conference. At various points in the paper it is noticeable that the authors were struggling with the page limit. It is particularly unfortunate that the description of related work had to be put into the appendix. One way to reduce the scope of this paper would be to leave out the evaluation of noise injection, which appears rather limited by sticking to Gaussian noise injection at various points in the training process. While these results somewhat support the authors' claim that the advantages of DPSGD are related to its inherent noise introduction, it is far from being proven to be the case, given that the manual noise injection experiments with SSGD contain many heuristics and best-effort attempts to simulate the effect of DPSGD. In my opinion this could have been studied in a separate publication. It is not very clear to me why only Gaussian noise has been considered; it appears like the noise simulation could have been designed such that it is more similar to the noise introduced by DPSGD.\n\nline 213: numbers up to twelve (12) should be written as words\n The authors did not really discuss the limitations of their evaluation, which is unfortunate.
As mentioned earlier, the simulated noise injection in particular has a lot of room for further consideration. It again appears that this discussion has been omitted due to the page limit. An outlook on potential societal impact is also not provided. Since the important results of this paper mostly benefit groups with very large training resources, some discussion on this topic would have been appropriate. " ]
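To make the SSGD/DPSGD update rules debated throughout this exchange concrete, here is a minimal, self-contained sketch. It is an illustration only, not the authors' implementation: the quadratic toy loss, the learner count, and the random-neighbor pairing are all assumptions, and the `noise` argument mimics the SSGD* variant (Gaussian noise added to the averaged gradient) discussed in the rebuttal. Note that, as the authors state, DPSGD computes the gradient at the learner's current weight $w_j$ but applies the update starting from the locally averaged weight $w_{s,j}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_learners, dim, lr = 8, 10, 0.1

def local_grad(w):
    # Toy minibatch gradient: quadratic loss 0.5*||w||^2 plus sampling noise.
    return w + 0.1 * rng.standard_normal(dim)

def ssgd_step(w, noise=0.0):
    # SSGD: one global model; average the n local gradients, then update.
    # noise > 0 turns this into the SSGD* control experiment.
    g = np.mean([local_grad(w) for _ in range(n_learners)], axis=0)
    return w - lr * (g + noise * rng.standard_normal(dim))

def dpsgd_step(ws):
    # DPSGD: each learner mixes weights with one random neighbor (cf. [59]),
    # then applies its own local gradient, computed at its current weight w_j.
    perm = rng.permutation(n_learners)
    new_ws = []
    for j in range(n_learners):
        w_mix = 0.5 * (ws[j] + ws[perm[j]])  # locally averaged weight w_{s,j}
        new_ws.append(w_mix - lr * local_grad(ws[j]))
    return new_ws

w = rng.standard_normal(dim)
ws = [w.copy() for _ in range(n_learners)]
for t in range(100):
    w = ssgd_step(w)
    ws = dpsgd_step(ws)
print("SSGD  |w|  :", np.linalg.norm(w))
print("DPSGD |w_a|:", np.linalg.norm(np.mean(ws, axis=0)))  # consensus average
```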
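The rebuttal's central theoretical claim — that i.i.d. Gaussian weight noise effectively makes the optimized landscape smoother (cf. the discussion of Lemma 2 of [39] and the $\frac{\sqrt{n}}{\sigma}G$ coefficient) — can also be illustrated numerically. The sketch below is an assumption-laden toy, not the paper's proof: it Monte Carlo-estimates the Gaussian-smoothed function $\hat{L}(w)=\mathbb{E}_{\delta\sim\mathcal{N}(0,\sigma^2)}[L(w+\delta)]$ of a deliberately rough 1-D landscape and probes how its local curvature shrinks as $\sigma$ grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def rough_loss(w):
    # A deliberately rough 1-D landscape: convex bowl plus fast ripples.
    return 0.5 * w ** 2 + 0.3 * np.sin(25.0 * w)

def make_smoothed_loss(sigma, n_samples=200_000):
    # Freeze the noise draws (common random numbers) so finite differences
    # of the Monte Carlo estimate of E_{d ~ N(0, sigma^2)}[L(w + d)] are stable.
    delta = sigma * rng.standard_normal(n_samples)
    return lambda w: rough_loss(w + delta).mean()

def curvature(f, w, h=1e-2):
    # Finite-difference second derivative as a crude smoothness probe.
    return (f(w + h) - 2.0 * f(w) + f(w - h)) / h ** 2

w0 = 0.7
print("raw curvature:", curvature(rough_loss, w0))
for sigma in (0.05, 0.1, 0.2):
    print(f"smoothed curvature (sigma={sigma}):",
          curvature(make_smoothed_loss(sigma), w0))
```

Running this shows the ripple-induced curvature collapsing toward the underlying bowl's curvature of 1 as $\sigma$ increases, which is the qualitative effect the authors attribute to DPSGD's weight-difference noise.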
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2, 5 ]
[ "2niSD2KW9Y", "IoBUT5n_qd6", "x29T33nOgcD", "q5TwjNH5KR", "q5TwjNH5KR", "ODy9dpaI4_Q", "y4joeCH_EIL", "y4joeCH_EIL", "coP-CKeBNiq", "coP-CKeBNiq", "coP-CKeBNiq", "q5TwjNH5KR", "nips_2022_Bq2-WN5csW", "nips_2022_Bq2-WN5csW", "nips_2022_Bq2-WN5csW", "nips_2022_Bq2-WN5csW" ]
nips_2022_hcVlMF3Nvxg
MultiGuard: Provably Robust Multi-label Classification against Adversarial Examples
Multi-label classification, which predicts a set of labels for an input, has many applications. However, multiple recent studies showed that multi-label classification is vulnerable to adversarial examples. In particular, an attacker can manipulate the labels predicted by a multi-label classifier for an input via adding carefully crafted, human-imperceptible perturbation to it. Existing provable defenses for multi-class classification achieve sub-optimal provable robustness guarantees when generalized to multi-label classification. In this work, we propose MultiGuard, the first provably robust defense against adversarial examples to multi-label classification. Our MultiGuard leverages randomized smoothing, which is the state-of-the-art technique to build provably robust classifiers. Specifically, given an arbitrary multi-label classifier, our MultiGuard builds a smoothed multi-label classifier via adding random noise to the input. We consider isotropic Gaussian noise in this work. Our major theoretical contribution is that we show a certain number of ground truth labels of an input are provably in the set of labels predicted by our MultiGuard when the $\ell_2$-norm of the adversarial perturbation added to the input is bounded. Moreover, we design an algorithm to compute our provable robustness guarantees. Empirically, we evaluate our MultiGuard on VOC 2007, MS-COCO, and NUS-WIDE benchmark datasets. Our code is available at: https://github.com/quwenjie/MultiGuard
Accept
This paper studies adversarial examples for varieties of randomized smoothing, namely, ways to improve the robustness of a classifier by adding noise and averaging over inputs. The main contribution is MultiGuard, which is a provably robust defense for multi-label classification. Moreover, the method works for a variety of classifiers, and the authors also provide theoretical and empirical results to back up their method. The reviewers generally find the technical contribution to be significant (although perhaps elementary) as well as finding the problem domain to be important and interesting. The reviewers also found the mathematical tools to be intuitive and appropriate (e.g., a variant of the Neyman-Pearson lemma as well as the law of contraposition to extend the provable guarantees of randomized-smoothing multi-class classification to those of multi-label). In addition to acknowledging the theoretical results, the reviewers also felt that the empirical studies were sufficient to verify the authors’ main findings. On the negative side, there are some concerns about the clarity and rigor of the results. I would encourage the authors to improve the exposition and the preliminaries to increase the readability of the work. Similarly, there are some questions about comparison to prior work and similar ideas that should be addressed. Overall, I recommend acceptance. The positives outweigh the negatives, and the author-reviewer discussion seemed to address many of the main questions.
train
[ "M6ivW3SAT8", "9c1ulXvveW", "YgiWL3qsncL", "-IJfBGgFrya", "EHuY3Xgaq6w", "GPQFsLtlrIe", "RM1cYOpbcX", "kHbMXT3UQEw", "op1YWPY6v9W", "HDC0AXEJY3", "9zmQlCb5_m", "LhZHkk5UJ0E", "E0g0EV0g6so", "0dDa639s_KP", "Q5T60HS986g", "TqhDKd5_5ik" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your time. We really appreciate the suggestion.", " Thanks the authors for providing the code. My concerns are addressed.", " Many thanks for the comment! We really appreciate the constructive feedback, which significantly improves the paper. We will definitively integrate our clarifications into the paper. Thanks for updating the rating score!", " Thanks for the clarification. I think it addresses my concern well. I will change my rating accordingly. In parallel, we'd encourage the author to integrate the clarification into the paper. This will help readers understand better the value of the contribution. ", " Thanks a lot for the comment! It is our great pleasure to answer the questions. We will definitively add those illustrations in the next version. Thanks for updating the confidence score!", " Thanks for your timely response. My question is addressed. Please make sure to incorporate these illustrations in the updated version of the manuscript. I increased the confidence score of my evaluation accordingly.", " Thank you so much for your time and insightful feedback. We will definitively incorporate them into our paper.\n\nFor the max operator, the second term in Eqn. (8) reduces to the first term under those conditions. Moreover, it is equivalent to the term on the left-hand side of Eqn. (3) in Jia et al.. For the min operator, the second term becomes $\\min_{v=1}^{s}\\frac{1}{v} \\cdot\\Phi(\\Phi^{-1}(\\overline{p}\\_{B\\_{v}})+\\frac{R}{\\sigma})$ under those conditions. When $v=1$, $\\Phi(\\Phi^{-1}(\\overline{p}\\_{B\\_{v}})+\\frac{R}{\\sigma})$ is equivalent to the first term. In other words, the first term is already included by the second term due to its min operation. Moreover, the second term is equivalent to the term on the right-hand side of Eqn. (3) in Jia et al.. Our certification reduces to Jia et al. as both sides are equivalent under those conditions.\n", " I appreciate the authors for your timely and detailed response, and I hope the authors can include these nice clarifications into their paper revision. Most of my concerns are resolved except one minor question:\n\nWhen $k'=1, k\\ge 1$ and $|L(x)|=1$, the author mentioned that their certification reduces to Jia et al. Is it because the second terms in Eqn. (8) $\\max$ and $\\min$ operators become equivalent to the first terms respectively?\n\n", " Thanks for the constructive comments!\n\nWhen leveraging our extended Neyman-Pearson Lemma to derive the certified intersection size, we jointly consider all ground truth labels. As a result, the computation of certified intersection size involves the label probabilities of all ground truth labels as shown in Equation (8). In contrast, Jia et al. consider each ground truth label independently, which achieves a sub-optimal certified robustness guarantee as demonstrated in our experiments.\n\nThe standard deviation $\\sigma$ of isotropic Gaussian noise (added to a testing input when building our smoothed classifier) controls a tradeoff between robustness and accuracy. In particular, a smaller $\\sigma$ can achieve better classification accuracy but is less robust against adversarial examples as shown in our experimental results (fourth row in Figure 1 and Figure 5 in Appendix).\n", " Thanks for the constructive comments!\n\nThanks for the suggestion. Our code can be found in this anonymized address: https://github.com/RandomizedOS/MultiGuard. We are still cleaning code for other experimental results. 
We will add more.\n", " Thanks for the constructive comments!\n\nWe only compare with Jia et al. because it is the only certified defense that considers top-$k$ predictions against $\\ell_2$ adversarial perturbations. We will clarify.\n\nSuppose $p_i$ is the probability that a base multi-label classifier predicts label $i$ ($i=1,2,\\cdots,c$) for a training input. Moreover, we let $y_i$ be 1 (or 0) if the label $i$ is (or is not) a ground truth label of the training input. The loss of ASL [2] is as follows: $L_{ASL}=\\sum_{i=1}^{c} -y_{i} L_{i+}-(1-y_{i}) L_{i-}$, where $L_{i+}=(1-p_{i})^{\\gamma_{+}} \\log (p_{i})$ and $L_{i-}=\\left(\\max(p_{i}-m,0)\\right)^{\\gamma_{-}} \\log \\left(1-\\max(p_{i}-m,0)\\right)$. Note that $\\gamma_{+}$, $\\gamma_{-}$, and $m$ are hyperparameters. Following [2], we set $\\gamma_{+}=0, \\gamma_{-}=4, m=0.05$. We train a base multi-label classifier using the Adam optimizer, where the learning rate is $10^{-3}$ and the batch size is 32. We adopt the official implementation of ASL (https://github.com/Alibaba-MIIL/ASL) in our experiments. We will add those details to our paper as suggested (a hedged code sketch of this loss is given after this review list below).\n\nWe note that $k'$ achieves a tradeoff between the certified top-$k$ precision@R (or certified top-$k$ recall@R or certified top-$k$ f1-score@R) without attacks and robustness. In practice, we can set $k'=1$ if robustness is desired.\n", " Thanks for the constructive comments!\n\nThanks a lot for pointing it out. $\\underline{p_{A_u}}$ is a lower bound of the label probability. Thus, we have $\\underline{p_{A_u}} \\leq p_{A_u} \\leq k'$, where the last inequality is based on the fact that $p_{A_u} \\leq \\sum_{i=1}^{d} p_{a_i} \\leq \\sum_{i=1}^{c} p_i =k'$. We note that $\\overline{p}\\_{B\\_{v}}$ is an upper bound of $p\\_{B\\_{v}}$, which is the summation of the label probabilities of a subset of labels among $\\{1,2,\\cdots,c\\}$. Based on the fact that $\\sum_{i=1}^{c} p_i =k'$, $k'$ can be viewed as an upper bound of $p\\_{B\\_{v}}$. In Section 3.3, our estimated $\\overline{p}_{B\\_{v}}$ will always be no larger than $k'$ (see Line 293), and thus we can apply our Theorem 1. We will clarify.\n\nWe perform experiments under our default setting to validate the effectiveness of our bagging terms (i.e., the second terms in Eqn. (8)). Our results are as follows: with and without the second terms, MultiGuard respectively achieves 31.3% and 23.6% certified top-k precision@R, 66.4% and 48.8% certified top-k recall@R, as well as 42.6% and 31.8% certified top-k f1-score@R, where the perturbation size is $R = 0.5$ and the dataset is VOC 2007. As the result shows, our second terms can significantly improve the certified intersection size. We will add more technical details as suggested. \n\nThe reason why MultiGuard is better than Jia et al. is as follows. Given a perturbation size, Jia et al. can verify whether a ground truth label is among the top-$k$ labels predicted by the smoothed classifier by leveraging the standard Neyman-Pearson Lemma. However, each ground truth label is independently considered by Jia et al., which is sub-optimal. For instance, suppose we have two ground truth labels: it is very likely that both of them are not in the top-$k$ predicted labels when considered independently, but at least one of them will be among the top-$k$ predicted labels when jointly considered.
The intuition is that it is easier for an attacker to find an adversarial perturbation such that a certain label is not in the top-$k$ predicted labels, but it is more challenging for an attacker to find an adversarial perturbation such that both of the two labels are not in the top-$k$ predicted labels. Our MultiGuard can jointly consider all ground truth labels when deriving the certified intersection size and thus is better than Jia et al. We will add the discussion to our paper.", " This paper extends traditional randomized smoothing, which only supports single-label classifiers, to support multi-label classifiers. The extension is based on a generalization of the Neyman-Pearson lemma: the Neyman-Pearson lemma still holds for a normalized sum of functions as long as the normalized sum is within [0, 1] universally. The proposed method is evaluated on a few classical multi-label datasets including VOC 2007, MS-COCO, and NUS-WIDE, and achieves superior performance compared to a naive extension from top-k certification in terms of multi-label precision, recall, and F1 score. Strengths:\n- Non-trivial technical contributions: a generalized version of the Neyman-Pearson lemma and the resulting certification protocol based on the extended randomized smoothing for multi-label certification.\n- Handles an important problem: multi-label certification is a common learning task formulation, e.g., in computer vision.\n- Significant experimental results: consistently beating an adapted baseline (from top-k randomized smoothing certification).\n- Writing quality is good.\n\nWeaknesses:\n- Some mathematical illustrations may not be rigorous enough. See \"Questions\" for more details.\n- The underlying technical idea is a bit straightforward, or not explained clearly enough in terms of challenges. See \"Limitations\" for more detail.\n\nNote: I am not familiar with the multi-label classification literature. Therefore, I didn't evaluate the corresponding experimental protocols, e.g., the selection of evaluation datasets. I will take other reviewers' comments into account for this part. - In Eqn. (8), where do we constrain the validity of the input to $\\Phi^{-1}$? In other words, $\\dfrac{\\underline{p_{A_u}}}{k'}$ or $\\dfrac{\\overline{p}_{B_v}}{k'}$ seems to be able to grow larger than 1; is that the case? If so, we may need to fix Eqn. (8) and the corresponding proofs, especially in Eqns. (72) and (78). - After reading the submission, I feel the main technical idea is to use the (e+1)-th ground-truth label's lower probability bound to compare with the (e+1)-th adversarial label's upper probability bound to derive the certification. This idea seems to be a bit straightforward. On the other hand, the technique of bagging multiple labels together for certification (second terms in the Eqn. (8) constraints) sounds interesting to me. I am wondering whether this bagging technique contributes much to the final certification tightness, or is just incremental. Maybe an ablation study can be conducted to quantify this. If the technique contributes much, I would recommend the authors squeeze some space in Section 3.3 (it seems most sampling techniques are already in the literature) and expand the discussion of this technique more to highlight the novelty and technical contributions.\n\n- Following the above comments, the experimental section seems to mainly convey \"the proposed method is superior to the baseline\" but lacks the discussion on **why** it surpasses the baseline.
The discussion between Line 360 and 362 looks a bit thin to me.", " This paper proposes MultiGuard, where multi-label classification with provable guarantees against adversarial perturbations is studied. The method is based on randomized smoothing, where randomization with Gaussian noise is utilized to provide a smoothed classifier with provable guarantees, and this work generalizes that to multi-label classification, with adjusted claims to suit multi-label classification. This work uses simple yet intuitive tools and techniques, namely a variant of the Neyman-Pearson lemma as well as the law of contraposition to extend the provable guarantees of randomized-smoothing multi-class classification to those of multi-label. Even though the technical novelty is somewhat limited, the problem is of interest and, together with the provable performance, the community would benefit from the publication of the work.\n\nFurthermore, the paper is written well, is easy to follow and has good structure. Numerical results are well presented and parameter sensitivity is studied; however, comparisons with other methods are somewhat limited, since the method is only compared with Jia et al., and the work could thus benefit from more extensive comparisons.\n In order to make the paper self-contained, I’d recommend adding a section on the training procedure of the proposed method. Even though a reference to ASL for training the base multi-label classifier is given, the paper is not self-contained, as no further information, i.e., the definition of the loss, the training procedure, and/or the hyperparameters, is given. The bounds drop quickly as the multi-label parameter k goes from 1 to higher values (2 or 3), which poses the question of whether these bounds are non-trivial or useful in practice. ", " This paper discusses the adversarial robustness of the multi-label classification task, extends provably robust defense methods to the multi-label case, and derives a way to estimate the certified intersection size, which is the least number of true labels in the set of labels predicted by the certified classifier. Specifically, the authors extend works on randomized smoothing to the multi-label task by leveraging the law of contraposition and a variant of the Neyman-Pearson lemma to derive the conditions of robust certification, and use a Monte Carlo algorithm to help estimate the certified intersection size.\n\nExperiments on three datasets (VOC 2007, MS-COCO, NUS-WIDE) show that the proposed method performs better than directly extending randomized smoothing (Jia et al.) to multi-label classification. They also do experiments to study the effects of important hyper-parameters.\n Strengths:\n\n- The work is novel as it is the first one to extend provably robust defenses to multi-label classification, and the extension is not trivial as multi-label classification is different from multi-class classification. Besides, experiments show that directly extending the method proposed for multi-class classification to multi-label does not work well. The authors also show that randomized smoothing methods proposed for multi-class classification can be viewed as special cases of their framework.\n\n- The paper is clearly written and easy to follow. The method is straightforward and not hard to implement.
The proposed method works well on three benchmark datasets.\n\nWeaknesses:\n\n- Though pseudocode and some implementation details are provided, the actual code is not. It would be better if the code were released for reproducing the results.\n NA Though the method performs better than directly extending randomized smoothing for multi-class classification to the multi-label case, the performance is far from perfect. There is still room for improvement; the authors point out some directions for improvement at the end of the work.", " In this work, a multi-label variant of the randomized smoothing technique is designed by extending the Neyman-Pearson Lemma from the single-output model to the multi-output model. The theoretical study shows that the proposed multi-label randomized smoothing method can be considered a more general framework for provably robust learning. It includes the single-label randomized smoothing work published in [12] as a special case. Furthermore, when only one of the multiple labels is considered as the target protected by the injected smoothing noise, the proposed method also reduces to [22]. Strong points: \n\n1. How to design the smoothing noise for multi-output classification algorithms is important and remains an open problem. It is good to see research efforts devoted to this topic. Extending the Neyman-Pearson Lemma is a decent idea along this track. \n\n2. The theoretical study is solid and clearly points out the relation between the proposed method and previous randomized smoothing techniques. \n\nWeak points: \n\nThe core idea of multi-label classification is to make use of the correlation between multiple labels to boost the learning performance. Highly correlated labels are likely to have similar classification accuracy levels (they tend to be correctly classified / misclassified at the same time). It is unclear how the proposed method considers the label correlation in the design of the smoothing noise. For example, the provable robustness radius / bound should be similar for highly correlated labels, as the same smoothing perturbation tends to bring similar effects to the correlated label pairs. Therefore, considering untargeted multi-label attacks, the design of the smoothing noise should be able to benefit from the label correlation: we can minimise the variance of the smoothing noise while providing a provable defense to as many correlated labels as possible. \n\n The major question is that it is unclear whether the proposed theoretical study based on the extended Neyman-Pearson Lemma considers the label correlation explicitly in protecting multi-label systems against adversarial attacks. Furthermore, an injected smoothing noise may protect some labels while severely harming the classification performance of the other labels. Could the proposed theoretical analysis provide some insights on how to control these unexpected classification performance drops in multi-label systems? This is also a problem unique to robustifying multi-label learning tasks, in contrast to single-label tasks. \n\n######\n\nWe are satisfied with the clarification provided by the author. We think this concern has been well addressed. Please find our concerns in the listed weak points. " ]
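To make the first review's question about the validity of the inputs to $\Phi^{-1}$ concrete, here is a minimal sketch of the standard Gaussian randomized-smoothing certification pipeline: Monte Carlo estimation of the top label's probability with a Clopper-Pearson lower bound, followed by a certified radius. This is the classical single-label bound in the style of Cohen et al., not the paper's multi-label MultiGuard bound; the function name and hyperparameter choices are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from statsmodels.stats.proportion import proportion_confint

def certify_top_label(f, x, sigma=0.25, n=1000, alpha=0.001):
    """Single-label randomized-smoothing certificate (Cohen-et-al style).

    f maps a batch of inputs to integer label predictions; the smoothed
    classifier predicts the label that wins most often under Gaussian noise.
    """
    noise = sigma * np.random.randn(n, *x.shape)
    preds = f(x[None, :] + noise)              # hard predictions under noise
    counts = np.bincount(preds)
    top = counts.argmax()
    # One-sided Clopper-Pearson lower bound on p_A from n Monte Carlo draws.
    p_a = proportion_confint(counts[top], n, alpha=2 * alpha, method="beta")[0]
    # Clip so Phi^{-1} never sees a value outside (0, 1): this is the kind of
    # validity constraint on the inputs to Phi^{-1} the reviewer asks about.
    p_a = float(np.clip(p_a, 1e-12, 1.0 - 1e-12))
    if p_a <= 0.5:
        return top, 0.0                        # abstain: no certificate
    return top, sigma * norm.ppf(p_a)          # certified L2 radius
```

In the multi-label setting reviewed above, the analogous requirement is that every probability-type quantity fed into $\Phi^{-1}$ in Eqn. (8) stay in $[0, 1]$, which is exactly what the reviewer is asking the authors to make explicit.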
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 2, 3 ]
[ "9c1ulXvveW", "HDC0AXEJY3", "-IJfBGgFrya", "op1YWPY6v9W", "GPQFsLtlrIe", "RM1cYOpbcX", "kHbMXT3UQEw", "LhZHkk5UJ0E", "TqhDKd5_5ik", "Q5T60HS986g", "0dDa639s_KP", "E0g0EV0g6so", "nips_2022_hcVlMF3Nvxg", "nips_2022_hcVlMF3Nvxg", "nips_2022_hcVlMF3Nvxg", "nips_2022_hcVlMF3Nvxg" ]
nips_2022_Yopob26XjmL
Natural gradient enables fast sampling in spiking neural networks
For animals to navigate an uncertain world, their brains need to estimate uncertainty at the timescales of sensations and actions. Sampling-based algorithms afford a theoretically-grounded framework for probabilistic inference in neural circuits, but it remains unknown how one can implement fast sampling algorithms in biologically-plausible spiking networks. Here, we propose to leverage the population geometry, controlled by the neural code and the neural dynamics, to implement fast samplers in spiking neural networks. We first show that two classes of spiking samplers---efficient balanced spiking networks that simulate Langevin sampling, and networks with probabilistic spike rules that implement Metropolis-Hastings sampling---can be unified within a common framework. We then show that careful choice of population geometry, corresponding to the natural space of parameters, enables rapid inference of parameters drawn from strongly-correlated high-dimensional distributions in both networks. Our results suggest design principles for algorithms for sampling-based probabilistic inference in spiking neural networks, yielding potential inspiration for neuromorphic computing and testable predictions for neurobiology.
Accept
Although some reviewers have reservations about strong modelling assumptions, the main contribution of the paper is clearly presented and technically sound.
train
[ "sy3xiIaJ65n", "W9Z7EBfbPEQ", "GX8SaKE2dVE", "nCM8ioaqSAL", "u-dI4LRdw-s", "_0TFZ7A-dn3", "Wjop1PzPCBGa", "Jx0T--rWOm", "tsBLEEu5C80", "6YWnjVX4PCo", "_tl0S2QLnk4", "mPqaUuLB-Ym", "iTW4wG4x0PV", "BQgYadFC95", "S10Em2oXyz6", "07yj8ul6c10", "21-EllRMobg", "UX_KZjwRJgq", "RFMKJfIsiWQ", "BlvE5lszsL", "wPyhVXai08s", "uP5wb03A8J", "7J0U3mg6Zbq" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors' reply addressed some of my confusion, e.g., the influence of the D matrix in Eq. 14 on the sampling dynamics, the denominator in Eq. 4, and the natural geometry has a smaller discretization error than naive sampling.", " We thank the reviewer for helping us improve the clarity and presentation of our paper. ", " I appreciate the authors addressing my concerns and suggestions in earnest. The response and changes have improved the high-level clarity of the paper, and helped me understand the context of the existing literature, and the authors' contributions. I still have doubts about some assumptions and how they can ultimately be realized in the biological brain, but given the references the authors have provided, I assume these are standard for the field, and should not negatively impact ratings of the current work. I have updated my scores accordingly, but remain unconfident in my assessment and defer to the other reviewers & AC.", " The reviewer is right that the efficient balanced network limit of our implementation has similarities to the one proposed by Savin & Denève. However the addition of dynamics in the natural space of parameters is specifically what allows the network to perform for high-dimensional distributions. Although high-d simulation can be implemented in the naïve Langevin framework of Savin & Denève, in practice the accuracy of the sampling degrades rapidly with the entropy of the posterior distribution (either due to increased covariance or increased dimensionality as shown in Figure 2 of our manuscript) and sampling using the natural geometry is needed to get reasonable accuracy. The parallelization they introduce does accelerate the sampling as it performs inference in parallel chains but it will not improve the accuracy and our simulations show that the naïve approach fails in high-dimensions. Furthermore, the parallel approach requires $K$ times as many neurons where $K$ is the number of parallel chains. \n\nWe propose to rewrite the paragraph introducing these previous works: \n\n>Previous works have proposed several approaches to accelerate sampling in biologically-inspired algorithms. For example, Hennequin et al. [9] showed that adding non-reversible dynamics to rate networks can reduce the sample autocorrelation time. Savin and Denève [20] used a distributed code to parallelize sampling in spiking networks based on efficient balanced networks, however they only considered two-dimensional distributions and while this approach accelerates sampling its accuracy will degrade in high-dimensions. Beyond this initial work, a general framework for sampling in biologically-inspired spiking neural networks incorporating recent advances from the machine learning literature remains to be developed.\n\nWe believe our contribution is a significant advance from previous work due to both the general framework we introduce as well as the dynamics within the complete recipe framework. We hope the reworded paragraph should focus the reader on our key contributions as highlighted in lines 53-62:\n- The general framework linking sampling in Metropolis-Hastings based networks and Efficient Balanced networks\n- Introducing the importance of the population geometry in controlling the speed and accuracy of convergence. \n\nWe were also wondering if the reviewer had comments on the presentation of our new derivations providing intuition for the speedup in the natural geometry setting.", " Please find below our detailed responses to the reviewer's clarification points. 
\n\n*I did not doubt that the brain has the ability of probabilistic computation. My concern here is what's the relationship between the brain's probabilistic computation and the framework of SNN sampling in the current paper? There is a gap between this two part. To me, it is not trivial given the references.*\n\nIt is well known that there is variability in the firing of neurons, both at steady state and across trials. As we highlight in the introduction, the hypothesis that this variability in neural firing is a signature of sampling from a posterior in an MCMC framework gained traction with the work of Hoyer & Hyvarinen (NeurIPS, 2002). As we point out in the introduction, the advantage of the sampling hypothesis is that it relies on a body of work in machine learning and statistics showing that such algorithms can in principle perform computations on high-dimensional posteriors. In contrast, while other proposed schemes perform well on simple toy models (e.g. cue combination between two sensory input streams), there is little work scaling them to useful models in machine learning applications. Since then, many papers have explored the sampling hypothesis, but few have explored the ability of such schemes to perform probabilistic computations at a speed consistent with perception (~$50ms$) and for high-dimensional posterior distributions. As the brain is a spiking neural network, it is natural to explore potential sampling implementations in SNNs rather than rate networks.\n\nWe would also like to point out further experimental evidence, such as the paper by Berkes and colleagues (Science, 2011, <https://www.science.org/doi/full/10.1126/science.1195870>), which shows in recordings from ferret visual cortex that the space explored by spontaneous activity in the absence of sensory stimulus is consistent with exploration of the prior in sampling algorithms. \n\nWe would be happy to extend the paragraph in the introduction to better introduce this work linking probabilistic computations in the brain to sampling in the final version of the paper with the additional content page.\n\n\n*If it is true, what is the time length T to achieve sufficient accurate samples?*\n\nFigures 2 and 4 show that for both our implementations, the inference at $t=50ms$ after stimulus onset is much improved when using the natural geometry in comparison to the naive geometry. In our revisions, we have also added Appendix Figure E.1 (<https://figshare.com/s/15ccd407eb0ba614d23d>), in which we specifically show the dynamics of the estimated mean, variance and Wasserstein-2 distance to the target distribution as a function of time.\n\n*What's the energy cost compared to using traditional sampling approaches? Can the authors provide any quantitative results to support the superiority of using SNN here?*\n\nIn biological networks, computing with spikes reduces energy expenditure by 10-100 fold compared to sending continuous signals (Laughlin et al, 1998, Nature Neuroscience <https://www.nature.com/articles/nn0598_36>). Indeed, in most biological networks, non-spiking neurons are usually only located in the earliest stages of sensory processing where signal transduction occurs. Spiking artificial neural networks with abilities similar to those of ANNs, implemented on neuromorphic chips, are a long-term goal of the field, as it is recognized that they could offer orders of magnitude improvements in energy efficiency (e.g. 
Roy et al., Nature, 2019 <https://www.nature.com/articles/s41586-019-1677-2> or Markovic et al, Nature reviews Physics, 2020 <https://www.nature.com/articles/s42254-020-0208-2>). \n\nOur model is a proposed computational framework and is not tied to a specific hardware implementation. The resulting energy expenditure will be dependent on many factors such as whether the network is simulated on a standard CPU/GPU or directly implemented in a neuromorphic chip. \n\nWe would be happy to include an extended discussion of these points in the final version of the paper with the additional content page.", " Thanks for the clarifications. I continue to like the ambition of the paper but still feel that the spelling out of the additional benefits of this work relative to recent fast sampling procedures could be sharper and avoid some semantics type overstatement (e.g. just because large simulations of high d were not included in the previously mentioned papers there was nothing fundamental preventing them from doing it, and they are formally quite similar to this solution).", " The author is right that I mainly raised some high-level motivations and reasonability about this paper as I believe that this is more important for a new question. Based on the reviewer's response, I still have some puzzles. \n\n1. I did not doubt that the brain has the ability of probabilistic computation. My concern here is what's the relationship between the brain's probabilistic computation and the framework of SNN sampling in the current paper? There is a gap between this two part. To me, it is not trivial given the references. \n\n2. In terms of the network infrastructure, do you mean you used a recurrent setup as shown in Figure 1? If it is true, what is the time length T to achieve sufficient accurate samples? What's the energy cost compared to using traditional sampling approaches? Can the authors provide any quantitative results to support the superiority of using SNN here? ", " We thank the reviewer and appreciate the reviewer's opinion on our paper. We noted that most of the reviewer's questions are about the high level motivations behind our work, which we clarify below. We therefore ask the reviewer to kindly reconsider their assessment of our work.\n\n*[...]*\n\n#### Questions:\n\n- *What's the connection between the brain's capacity for probabilistic computation and the ability of SNN for sampling?*\n\n As we describe in Lines 18-23 of the Introduction, there is ample evidence that the brain performs probabilistic computations:\n\n > Neural circuits perform probabilistic computations at the sensory, motor and cognitive levels (Knill and Pouget 2004, Körding and Wolpert 2004, Fiser et al. 2010, Ott, Masset, and Kepecs 2018). From abstract representations of decision confidence (Masset et al. 2020) to estimates of sensory uncertainty in visual cortex (van Bergen et al. 2015, Festa et al. 2021), evidence of probabilistic representations can be found at all levels of the cortical processing hierarchy (Pouget, Drugowitsch, and Kepecs 2016).\n\n As sampling is one of the most promising implementation of probabilistic computation at scale in the ML literature, it is natural to explore how biologically neural networks could efficiently implement sampling algorithms.\n\n\n- *What's the evidence for the brain to realize its probabilistic computation through the same sampling approach as we do in the Bayesian inference?*\n\n Thank you for this question. 
Recent work by Echeveste et al. (Nature Neuroscience, 2020) has shown that sampling codes predict specific properties of neurons in visual cortex. We previously mentioned this paper briefly in the Introduction; we now describe it in more detail. The relevant sentence reads as follows: \n\n > Moreover, they predict specific properties of neural responses in visual cortex, including changes in Fano factor and frequency of oscillations with tuning and stimulus intensity (Echeveste et al., 2020).\n\n- *The implementation needs to accept the spikes probabilistically. How to realize this mechanism in a spiking neural network?*\n\n Thank you for this question. Including this stochasticity actually improves the biological plausibility of our proposed network. In biological neurons, the stochasticity of the spike generation and synaptic release mechanisms is a well-established phenomenon, demonstrated through decades of experimental work. The probabilistic acceptance of spike proposals in our network may be realized in biology by these mechanisms. In the submitted manuscript, we mention this point in Lines 307-315 of the Discussion: \n \n > In biological spiking networks, probabilistic spike emission and probabilistic synaptic release are natural sources of stochasticity (Del Castillo and Katz, 1954; de Ruyter van Steveninck et al., 1997; Mongillo, Barak, and Tsodyks, 2008; Rusakov, Savtchenko, and Latham, 2020). \n \n We are happy to extend our discussion of this point if the reviewer believes it is necessary.\n\n\n- *Why is it called fast sampling? How is it faster when compared to other methods?*\n\n We use the term \"fast sampling\" as the networks we propose are able to sample high-dimensional posterior distributions within 50 ms given biologically plausible constraints on the timescales of single neurons' membrane potentials (see Lines 237-241). To further illustrate how using natural gradients enables faster sampling than geometry-naïve samplers, we added Appendix Figure E.1 (\url{https://figshare.com/s/15ccd407eb0ba614d23d}), in which we show the dynamics of the estimated mean, estimated variance and Wasserstein-2 distance to the target distribution as a function of time. These simulations highlight how leveraging the geometry improves both the speed and the accuracy of the inference.\n\n\n- *What's the infrastructure for the SNN used for sampling?*\n\n We apologize for our confusion, but unfortunately we could not understand the referee's question. We introduce the SNNs used for sampling in Section 2, with detailed derivations in Appendices B and C. Note that the networks here are simulated rather than implemented in neuromorphic hardware, as we are interested in the computational principles of these SNNs. We might be misinterpreting the reviewer's question; please ask if more information is needed.\n\n\n- *If the probabilistic spiking mechanism is allowed, why do we need such a spiking neural network rather than using a regular artificial neural network for the computation in the sampling procedure?*\n\n The probabilistic spiking mechanism only applies to the generation of the spike. The spikes themselves are still binary events, and neurons only communicate with each other through these discrete spikes. Our network therefore strongly differs from regular neural networks and carries all the associated challenges and benefits of SNNs. 
We might be misinterpreting the reviewer's question; please ask if more information is needed.\n\n*[...]*", " #### Limitations\n\n*The limitation of the current framework is the analog-digital conversion (DAC) in sampling a continuous distribution in discrete spiking networks, as mentioned in line 238. This makes people feel that the spiking network is not designed by implementing sampling although the error of DAC can be reduced by considering population geometry or in large number limit. Similar feelings were mentioned by the authors in the paper. I hope to see more discussion of alternative solutions to the DAC error. One possible solution is utilizing the math techniques in probabilistic population code to sample continuous distribution in discrete spikes, where the representation of a continuous distribution in spiking networks comes from the continuously smooth tuning curves of neurons.*\n\nWe agree that digital-to-analogue conversion is a fundamental conceptual issue in trying to approximately sample from a continuous distribution using a spiking network, as we noted in the manuscript. Here, and in prior work on EBNs and other spiking networks, our approach to this problem is quite simple: the approximate samples are low-pass filtered spike trains. This does not eliminate DAC error, but, as we show empirically, yields a reasonable approximation for moderately large networks. We also note that estimating moments of the distribution from samples through temporal averaging introduces an additional smoothing step.\n\nAs described in the Introduction of the submitted manuscript, probabilistic population codes afford an entirely distinct strategy for representing probability distributions using spiking neural networks. As noted there, our focus in this manuscript is on sampling-based codes because of their better potential scalability to higher dimensions, as noted in previous work (Fiser et al. 2010, Beck et al. 2011). To clarify this point, we have revised the corresponding paragraph of the Introduction (Lines 24-34), which now reads as follows:\n\n> Several neural architectures for probabilistic computation have been proposed, including: probabilistic population codes (Ma et al. 2006), which in certain cases allow a direct readout of uncertainty; direct encoding of metacognitive variables, such as decision confidence [...]; doubly distributional codes [...] which distinguish uncertainty from multiplicity; and sampling-based codes [...], where the variability in neural dynamics corresponds to a signature of exploration of the posterior probability. Most experiments quantifying uncertainty representations in single biological neurons have only varied parameters along one or two dimensions, such as in Bayesian cue combination [...]. In these conditions, many algorithms can perform adequately. However, probabilistic inference becomes more challenging as the entropy of the posterior distribution---which often scales with dimensionality---increases [...]. Some algorithms that work well in low dimensions, such as probabilistic population codes, may scale poorly to high-dimensional settings (Fiser et al. 2010). \n\n*[...]*", " #### Writing (Part 2 of 2)\n\n- *It is unclear how Eq. 6 and 10 are related to the undefined generative model which was mentioned in line 71. Is there an implicit assumption of the linear Gaussian generative model because the terms in Eq. 10 are the derivative of quadratic terms. 
Moreover, it is not clear how $\\psi$ is defined at this moment until later I read the text in line 160. Also, in Eq. 6 the input to the neural dynamics is not the observation x defined in the generative model (line 71), but the latent variable directly. More explanations are needed here.*\n\n Thank you for these suggestions, and we apologize for the confusion. As noted in Lines 77-78 (where $\\Psi$ was first defined) of the originally submitted manuscript, and again in Line 160, the goal of Section 2.1 is to build a network that (approximately) samples from a general Gaussian distribution with mean ${\\theta}$ and covariance ${\\Psi}$. It is to this distribution that the equations you mention refer; we now state this point more explicitly in Lines 77-78. We included the discussion on lines 68-75 in the hope of providing a more concrete point of reference, and regret that its link to the abstract discussion in the remainder of Section 2 was unclear. We have revised this prose as follows: \n\n > In this work, we will keep our discussion quite general, and state our results for sampling from a generic Gaussian distribution. However, the problem we aim to solve can be given a concrete interpretation in a neuroscience context, and could also be extended to non-Gaussian distributions. The goal of a neural network performing probabilistic inference is to estimate a posterior distribution $P({\\theta} | \\mathbf{x})$ over $n_p$ latent variables (or parameters) ${\\theta}$ given an input $\\mathbf{x}$ (Figure 1). The input could correspond to the activity of sensory neurons in early sensory processing (e.g. input onto ganglion cells in the retina or onto mitral cells in the olfactory bulb) or inputs into a cortical column that linearly sense features in the environment through an affinity matrix. We provide a detailed discussion of this linear Gaussian model in Appendix C. In the rest of the paper, we will usually abbreviate the distribution from which we want to sample as $P({\\theta})$, rather than writing $P({\\theta} | \\mathbf{x})$. \n\n- *Line 195: it is not clear what the matrix G is until I read line 232. Please define symbols when they first appear.*\n\n Thank you for this comment, and we apologize for the confusion. The sentence to which you refer should read \"Samplers based on Riemannian geometry can be designed by choosing $\\mathbf{D}$ to be the inverse of the Fisher information matrix $\\mathbf{G}$ (or an approximation thereof), yielding a preconditioned gradient ${\\nabla}_{\\textrm{nat}} U = \\mathbf{G}^{-1} {\\nabla} U$;\" the initial usage of $\\mathbf{G}$ was ommitted due to a typo in our submitted manuscript.\n\n*[...]*", " #### Writing\n\n*[...]*\n\n\n- *It is not clear why the problem from discrete and sign-constrained spikes can be solved by imposing a fine-tuned balancing condition on the readout weights (lines 93-98). More explanations are needed. Also, it is not clear the motivation for choosing a neuron uniformly at random (line 104).*\n\n Thank you for this comment. The balancing condition makes the proposal distribution symmetric, but does not resolve the issue of discretization. 
We have added the following sentence to Lines 99-101 of the updated manuscript to clarify this point:\n \n > To obtain a symmetric proposal distribution with sign-constrained spikes, we assume that the network is divided into two equally-sized populations with equal and opposite readout weights.\n\n Please see also our response to Reviewer Pjbs' question on the asynchronous update rule for further discussion of this point, which we quote below::\n\n > Thank you for these comments. With regards to the membrane time constant, we note that we are assuming that the decay constant in discrete time $\\eta$ is small. In terms of the membrane time constant $\\tau_{m}$ and timestep between spike proposals $\\Delta$, we have $\\eta = \\Delta/\\tau_{m}$, hence what we assume is that the membrane time constant is long relative to the interval between updates, not that it is long in an absolute sense. Indeed, in Section 2.2, we take the continuum limit $\\Delta \\downarrow 0$ in which spike proposals are made infinitely often. Therefore, our model does not require membrane time constants $\\tau_{m}$ that are longer than is plausible biologically. \n The fact that only one neuron is allowed to spike at a time is not a severe constraint relative to prior works, as it is standard in studies of efficient balanced networks (EBNs). As mentioned in Lines 280-288 of the Discussion, the constraint that only one neuron is allowed to spike at each timestep is present in previous work on efficient balanced networks (Boerlin et al 2013, Savin and Denève 2014), and some mechanism for constraining firing rate is present in all works on the subject (Rullán Buxó and Pillow 2021, Calaim et al 2022). As noted there, relaxing these constraints will be an important objective for future work.\n\n\n- *In Eq. 1, should I interpret the estimate $\\hat{\\theta}\\_{t}$ as a sample at time t? Is the uncertainty of the sampling distribution represented by the fluctuation of $\\hat{\\theta}\\_{t}$ over time? It is better to explicitly mention this in the beginning.*\n\n We are sorry for the confusing notation. You are correct in interpreting $\\hat{\\theta}\\_{t}$ as a sample at time $t$. To make this more transparent (and avoid possible confusion between samples from the distribution and an estimate of its mean), we now denote samples by $z\\_{t}$ rather than $\\hat{\\theta}\\_{t}$.\n\n\n- *Eqs. 3 and 4: the argument in the distribution in the denominator of Eq. 4, i.e., $(1-\\eta) \\Gamma r\\_{t-1}$ is not consistent with $\\hat{\\theta}\\_{t-1}$ in the denominator of Eq. 3.*\n\n Thank you for this comment. You are correct in noting that the argument of the denominator of the acceptance ratio of the model with decay ($(1-\\eta) \\mathbf{z}\\_{t-1}$) does not match that for the perfect integrator model ($\\mathbf{z}\\_{t}$). However, as noted in the main text (Lines 115-118 of the originally submitted manuscript) and in Appendix B, this is an intentional design choice, not a mistake. 
We have updated the discussion around these equations to further clarify this point, which now reads as follows:\n\n > [B]y comparing the likelihood of the proposal, $P[(1-\\eta) \\mathbf{z}\\_{t-1} + {\\Gamma} \\mathbf{e}\\_{j}]$, to the likelihood of the next state without the proposed spike but with the decay $P[(1-\\eta) \\mathbf{z}\\_{t-1}]$ (instead of the likelihood of the current state $P[\\mathbf{z}\\_{t-1}]$ as in the Metropolis-Hastings algorithm), this choice implements a sort of look-ahead step that should allow the algorithm to partially compensate for the decay in the rate.\n\n We hope that this clarification helps address your concern. ", " *[...]*\n\n#### Major\n\n*[...]*\n\n- *A strong requirement of neural sampling is that the neural circuit dynamics with fixed parameters is able to sample posteriors with different uncertainties. It is not clear whether this can be achieved in the current framework. The proposed network model seems not able to achieve this if I understood correctly, in that the network parameters in Eq. 12 depend on the mean and the variance (explained by the text in line 160). This implies that to sample distribution with different parameters we need to adjust the network connections.*\n\n Thank you for this comment. Please see our response to Reviewer TEk8's question regarding input dependence, which we copy here:\n > In our construction, the input to the network appears through the time-varying mean ${\\theta}(t)$. In the linear Gaussian model we discuss in Lines 71-80 of Section 2 and detail in Appendix C.2-3, the network infers the expected value of the parameters/latent variables given a sensory input $\\textbf{x}$, of which the target mean ${\\theta}$ is a linear function (see also Hennequin et al. and Savin and Denève, who use this model). In this simple setting, the covariance structure does not change with the input. The new supplementary Figure E.1, anonymously archived at \\url{https://figshare.com/s/15ccd407eb0ba614d23d}, highlights this point. The estimate of the variance is perturbed by stimulus onset (Figure E.1b) but relaxes to its steady state value. Note that this relaxation is both faster and more accurate in the natural geometry condition. For non-Gaussian posteriors, the variance would be changing with the input as well. \n > Note that the relative timescale of changes in the posterior is the important point. Our result show that given a change in input, the network implementing the inference in the natural space of parameters samples the new posterior within 50 ms for a large range of parameters. As long as the changes in the posterior occur at this speed (the speed of perception), it will be adequately sampled. \n > If the reviewer deems it necessary, we are happy to include an extended discussion of these points in the Introduction and discussion of the final version of the manuscript; we have not done so thus far because of space constraints. \n\n\n- *As mentioned in Eq. 14 that the D matrix only modifies the temporal dynamics but leaves the sampling distribution unchanged. That means sampling in the Euclidean space and the natural space (including the inverse Fisher information matrix, line 195) should not change the sampling distribution. However, from Fig. 2 and Fig. 3 the two sampling trajectories (blue vs. red) differ a lot and have different variances. I am wondering whether the sampling is correct or not.*\n\n Thank you for this question. The discrepancies you observe are expected if one considers the following factors. 
As noted in Lines 240-244, the overestimation of variance observed in Figure 2 is expected due to discretization error even for non-spiking Langevin samplers without accept-reject steps; the further discretization due to spiking exacerbates this effect (Neal, 1993; Savin and Denéve 2014). We find empirically that this overestimation is less for natural geometry. In regards to Figures 3 and 4, we show analytically in Appendix B.5 that one does not expect networks with naïvely chosen weights to correctly sample strongly correlated Gaussian distributions (see Lines 267-271). In brief, the firing rate should tend to zero as the correlation goes to one, as all proposed spikes will be rejected. Therefore, in these figures, the naïve spiking samplers are failing to sample from the correct distribution. Using natural geometry eliminates this problem. With these considerations in mind, the noted discrepancies demonstrate the advantages of using natural gradients. \n\n We remark also that the work of Ma et al (2015) shows only that different choices of $\\mathbf{D}$ yield the same stationary distribution. Therefore, one does not expect the sampling distribution at a short, finite time (as studied in Figures 2, 3, and 4) to be unchanged by modifying $\\mathbf{D}$ (see also Figure E.1). Indeed, accelerating the rate of convergence to the stationary distribution is precisely the goal of the complete recipe; our new analysis of linear rate networks illustrates this point. This equivalence of stationary distributions holds for the continuous-time dynamics, and is not guaranteed to hold after discretization. ", " ### Questions (Part 2 of 2)\n\n- *is there a way to make more formally precise statements about sampling speed in EBNs for the different variants or is this mainly relying on the ma work for generic speedup arguments?* \n\n Thank you for this question. We very much agree with the referee that it would be desirable to make precise statements about sampling speed in EBNs, but it is not entirely clear how such an analysis would proceed. We rely on the Ma et al. work for a generic argument for speedup of the sampling process approximated by the EBN (see also our new analysis of speedup in linear rate networks), but this of course does not transfer to the EBN because of the discretization inherent in spiking. Thus, as in most prior work on EBNs, we rely on numerical evidence for the claim that natural gradient yields a speedup. Recent work by Calaim, Dehmelt, Gonçalves, and Machens (eLife 2022) demonstrates a simple method to show that EBN approximation error of deterministic processes is bounded under certain constraints, but their analysis employs a different spiking rule (instead of constraining only one neuron to spike at a time, they enforce a refractory period during which a neuron is not allowed to spike). We will add a discussion paragraph to the final version of our manuscript to mention the possibility of formally analyzing sampling speed in EBNs; we are currently unable to do so due to space constraints. We note that we can provide some formal analysis of how sampling from highly correlated distributions fails in geometry-blind samplers using the probabilistic spiking rule; we argue in Appendix B.5 that the spike acceptance probability should tend to zero. \n\n*[...]*", " ### Questions\n\n- *decoding requires low pass filtering of spikes, which means that the filter width puts a hard limit on the autocorrelation function of the outputs, same as in distributed sampling a la savin et al. 
Why is then this faster? is it because the time constant is not tied to the membrane time constant the way it needs to be there?* \n\n The speedup in the Savin-Denève network was due to the parallelization of the inference. By having more neurons than latent variables/parameters the network can have effectively faster sampling. However, this is fundamentally limited by the dynamics of the underlying Langevin process approximated by the network and fails in high-dimensions for naïve Langevin dynamics. By introducing the natural geometry, our network approximates a sampling process with much more favorable dynamics as shown in Figure 2, 4, and E.1. We gave a new analysis of sampling in linear rate networks in Appendix D that makes the effect of natural geometry apparent; please see our discussion of these results below. \n\n- *explicit rejection step in MH seems to require fundamentally global knowledge which affects classic criteria for biological plausibility, is the discretization of time that assumes asynchronous updates making this local again?* \n\n Thank you for your question. Once we write the acceptance ratio as a function of membrane potentials and thresholds (equations 5-6) we only require local knowledge to compute the acceptance ratio for each neuron. The issue of instantaneous communication between neurons does remain, and is a shortcoming of EBN networks which recent work has worked to overcome (Rullán Buxó and Pillow 2020; see also our answer to your last question).\n\n- *hard to intuitively get the source of the speedup, especially given the earlier proofs of hennequin that langevin is the slowest possible random walk with a given gaussian stationary distribution, how does the structure of the covariance play with the sample autocorrelation function?* \n\n Thank you for this question. We first recall that Hennequin et al. (2014)'s analysis is restricted to samplers with isotropic noise covariance, and strictly speaking shows only that naïve Langevin sampling lies at a stationary point of their slowness cost. To show how the structure of the covariance affects the sample autocorrelation and rate of convergence in setting of Hennequin et al. (2014), we have added a new analysis of sampling from Gaussian distributions in linear rate networks to Section 3.1, with details in a new Appendix D. This shows how large eigenvalues of the target covariance matrix introduce slow timescales in naïve Langevin dynamics, to which sampling in the natural space are insensitive.\n\n Concretely, consider sampling from a Gaussian with mean zero and covariance matrix ${\\Sigma}$. Using the naïve Langevin dynamics\n \n $$d\\mathbf{z}(t) = - {\\Sigma}^{-1} \\mathbf{z}\\,dt + \\sqrt{2} d\\mathbf{W}(t),$$\n\n the stationary autocovariance of the samples is\n\n $$\\mathbf{C}\\_{s}(t-t') = \\mathbb{E}\\_{s} \\mathbf{z}(t) \\mathbf{z}(t')^{\\top} = e^{-{\\Sigma}^{-1} |t-t'|} {\\Sigma}$$\n\n and the 2-Wasserstein distance between the ensemble distribution of samples at time $t$ for initial condition $\\mathbf{z}(0) = \\mathbf{0}$ and the target distribution is\n\n $$W_{2}(t) = \\sqrt{ \\sum_{i=1}^{n_p} \\sigma_{i} \\left[ 1 - (1 - e^{-2 t / \\sigma_{i}})^{1/2} \\right]^{2} } ,$$\n\n where $\\sigma_{i}$ are the eigenvalues of ${\\Sigma}$. 
In contrast, for a network that samples in the natural space,\n \n $$d\\mathbf{z}(t) = - \\mathbf{z}\\, dt + \\sqrt{2 {\\Sigma}} d\\mathbf{W}(t),$$\n\n the stationary covariance is\n\n $$\\mathbf{C}\\_{s}(t-t') = e^{-|t-t'|} {\\Sigma}$$\n\n and the $W_{2}$ distance is\n\n $$W_{2}(t) = \\sqrt{\\textrm{tr}\\, {\\Sigma}} [1 - (1-e^{-2t})^{1/2} ] .$$\n\n Thus, it is easy to see that large covariance eigenvalues introduce long timescales in the stationary autocorrelation and in the ensemble $W_{2}$ distance for naïve Langevin sampling, while the timescale of sampling in the natural space is insensitive to these eigenvalues. The details of this analysis are given in Appendix D of our updated manuscript. ", " #### Weaknesses (Part 2 of 2)\n\n- *very precise/artificial architectural constraints on the solution (pair of readout pools tied weights etc)*\n\n We agree that our construction assumes strong constraints on the network architecture. As noted by the referee, these constraints are useful because they allow a relatively straightforward construction. As noted in Lines 300-320 of the Discussion, more careful investigation of how asymmetry of weights affects the performance of the probabilistic spiking network will be an interesting subject for future work, particularly in the context of biologically-relevant constraints (e.g., Dale's law). We also note that the EBN-based sampler does not rely on having fine-tuned balancing of readout weights. \n\n- *in the neurally relevant regime the dynamics are not guaranteed to have the target posterior statistics. unclear why the series of approximations should be expected to be constrained rather than lead to accumulation of errors and big deviations from the target*\n\n Thank you for this comment. We agree entirely that it is not \\emph{a priori} clear that our approximations should yield constrained rather than accumulating error, and that we rely on numerical evidence to support this claim. We note that this is similar to previous work on sampling using EBNs; Savin and Denève do not provide analytical guarantees their procedure yields the desired stationary distribution. See also the point below on the difficulty of formal analysis of convergence in EBNs.\n\n### Minor:\n\n- *strictly speaking it is not the dimensionality of the latent space per se but rather posterior entropy that limits sampling speed, although admittedly the two tend to go together in simple cases*\n\n Thank you for this comment; we have updated the wording of Lines 31-34 of the Introduction to more precisely mention posterior entropy. The relevant sentence now reads as follows: \"However, probabilistic inference becomes more challenging as the entropy of the posterior distribution---which often scales with dimensionality---increases.\"\n\n- *this is semantics i would say that typically the goal of bayesian perception is stated as computing a posterior over latent variables, not parameters*\n\n We thank the reviewer for their comment. 
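To make the comparison above easy to reproduce, here is a minimal Euler-Maruyama sketch of the two diffusions. This is an illustrative simulation of the continuous-time analysis only, not the paper's spiking implementation, and the constants are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, steps, chains = 1e-3, 2000, 5000           # integrate to t = 2
Sigma = np.array([[5.0, 4.95], [4.95, 5.0]])   # eigenvalues ~9.95 and ~0.05
Sigma_inv = np.linalg.inv(Sigma)
L = np.linalg.cholesky(Sigma)                  # L @ L.T = Sigma

z_naive = np.zeros((chains, 2))                # dz = -Sigma^{-1} z dt + sqrt(2) dW
z_nat = np.zeros((chains, 2))                  # dz = -z dt + sqrt(2 Sigma) dW
for _ in range(steps):
    dW = np.sqrt(dt) * rng.standard_normal((chains, 2))
    z_naive += -(z_naive @ Sigma_inv) * dt + np.sqrt(2.0) * dW
    z_nat += -z_nat * dt + np.sqrt(2.0) * (dW @ L.T)

# Both chains target N(0, Sigma), but at time t the naive sampler has only
# explored a fraction 1 - exp(-2t/sigma_i) of eigendirection i, while the
# natural sampler's convergence rate is eigenvalue-independent.
print("naive:  ", np.cov(z_naive.T))
print("natural:", np.cov(z_nat.T))
```

With these parameters the naive ensemble covariance along the slow eigendirection reaches only about a third of its target value at $t=2$, while the natural sampler is essentially converged, matching the $W_{2}$ expressions above.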
We have reworded the introduction in Section 2 to make this link between the terms used in the statistics and machine learning literature and those from the Bayesian perception literature more explicit; the relevant sentence now reads as follows: \"The goal of a neural network performing probabilistic inference is to estimate a posterior distribution $P({\\theta} | \\mathbf{x})$ over $n_p$ latent variables (or parameters) ${\\theta}$ given an input $\\mathbf{x}$.\"\n\n- *potentially relevant refs: radford neal tech report on speeding up sampling by dropping the detailed balance requirement, and the old lars buesing sampling paper on a biological realization of that idea (refractory period of sampling).*\n\n Thank you for these suggestions; we now cite both of these references in our updated manuscript (we previously cited only Buesing et al). We have added a comment referencing these works when we introduce Metropolis-Hastings sampling on Lines 98-99 of Section 2.1: \"We note that previous work has shown that violations of detailed balance can accelerate sampling (Neal 2004, Buesing et al. 2011, Pecevski, Buesing, and Maass 2011), but we will not carefully explore this possibility in the present work.\" We also now mention in the Discussion that \"Further investigation of how violations of detailed balance through these mechanisms and the matrix $\\mathbf{S}$ in the 'complete recipe' framework could enable faster sampling will be a particularly interesting objective.\" As the referee appreciates, analysis of Markov chains that violate detailed balance remains challenging.", " *[...]*\n\n### Strengths And Weaknesses:\n\n#### Strengths:\n\n*[...]*\n\nWe thank the referee for their careful and favorable assessment of our work. \n\n#### Weaknesses\n\n- *restricted to multivariate gaussian posteriors, although the time varying mean makes it somewhat more unusual/interesting as a setup*\n\n We thank the reviewer for raising this important point. As the reviewer points out, the implementation with time varying mean would allow non-Gaussian posteriors to be estimated through sampling. Indeed the dynamics encoded in the network could correspond to Langevin sampling within the complete recipe framework of any probability distribution in the exponential family. We chose here to restrict ourselves to the multivariate Gaussian as they are better understood in the statistics and machine learning literature, and can be implemented in linear integrate-and-fire networks. More complex distributions will require non-linearities and therefore more complex (but still biologically plausible) neurons. We did not add a sentence to discuss the point due to the length constraint, but would be happy to do so in the final manuscript.\n\n- *dependence of input (the actual inference part) missing in the construction [...]*\n\n In our construction, the input to the network appears through the time-varying mean ${\\theta}(t)$. In the linear Gaussian model we discuss in Lines 71-80 of Section 2 and detail in Appendix C.2-3, the network infers the expected value of the parameters/latent variables given a sensory input $\\textbf{x}$, of which the target mean ${\\theta}$ is a linear function (see also Hennequin et al. and Savin and Denève, who use this model). In this simple setting, the covariance structure does not change with the input. The new Figure E.1 (https://figshare.com/s/15ccd407eb0ba614d23d) highlights this point. 
The estimate of the variance is perturbed by stimulus onset (Figure E.1b) but relaxes to its steady state value. Note that this relaxation is both faster and more accurate in the natural geometry condition. For non-Gaussian posteriors, the variance would be changing with the input as well.\n\n Note that the relative timescale of changes in the posterior is the important point. Our result show that given a change in input, the network implementing the inference in the natural space of parameters samples the new posterior within 50 ms for a large range of parameters. As long as the changes in the posterior occur at this speed (the speed of perception), it will be adequately sampled.\n\n If the reviewer deems it necessary, we are happy to include an extended discussion of these points in the Introduction and discussion of the final version of the manuscript; we have not done so thus far because of space constraints. \n\n- *emphasis on the practically relevant scale of sampling being unexplored is an overstatement of fact, both the hennequin and savin type of dynamics can be used to extract moments of interest at time scale of hundreds of milliseconds [...]*\n\n We agree with the reviewer that the works of Hennequin et al. and Savin and Denève have made an important contribution towards sampling at the speed of perception in biological neural networks, but we believe our contribution goes beyond these papers in important ways. Hennequin et al. (2014) use a rate network and only assess the auto-correlation of the samples provided by the network. They do not quantify whether these samples yield good estimates of statistics of the target probability distribution. Savin and Denève (2014) also makes a great contribution by implementing parallelized Langevin samplers within the efficient balanced network (EBN) framework. However, they do not show that their approach scales to high-dimensional (or high posterior entropy) parameter spaces as they only quantify performance in two dimensions. We have slightly reworded our presentation of these papers in Lines 35-45 of the Introduction to clarify our contributions: \n\n > Of the proposed approaches to probabilistic computation in neural networks, sampling-based codes are grounded in the strongest theoretical framework [...], and have been used to perform inference at scale [...]. Previous works have proposed several approaches to accelerate sampling in biologically-inspired algorithms. [Hennequin et al.] showed that adding non-reversible dynamics to rate networks can reduce the sample autocorrelation time. However, they did not study convergence of the sampling distribution, and did not consider the biologically-relevant setting of spiking networks. [Savin and Denève] used a distributed code to parallelize sampling in spiking networks, but only considered two-dimensional distributions. Therefore, it remains unclear how accurate sampling from high-dimensional distributions at behaviorally-relevant timescales can be achieved using spiking networks. ", " ### Questions\n\n*I'm somewhat confused about the central point of the paper, that population geometry improves sampling efficiency. 
If I understand correctly, population geometry, in the most important sense, means the way in which the sampling space can be projected (e.g., to the principal components, or some other non-Euclidean manifold) to essentially decorrelate the dimensions of the posterior (in the case of the correlated Gaussian).*\n\n*If this is correct, then I would suggest for the authors to more clearly and explicitly mention this much earlier in the paper (apologies if I I had missed it). Figure 1C is helpful in this regard, but I'm still uncertain if I really got what \"population geometry\" means, since typically, in the neuroscience context, population geometry refers to a e.g., a lower dimensional manifold on which population activity resides, and I'm not sure if this idea is invoked at all in this paper.*\n\nWe thank the reviewer for their comment. It is indeed the geometry of the \"lower dimensional manifold on which population activity resides\" that will govern the convergence dynamics of the sampling process. To clarify this point, we propose to change the title of our paper to \"Natural gradient enables fast sampling in spiking neural networks.\" Moreover, we have reworded the paragraph in the Introduction discussing population geometry (Lines 46-52). We highlight that the geometry imposed by the population should ideally correspond to the geometry of the natural space of parameters as defined in the information geometry literature:\n\n> In this paper, we show how the choice of the geometry of neural representations at the population level [36, 37], set by the neural code and the neural dynamics, can accelerate sampling-based inference in spiking neural networks. Ideas from information geometry allow us to perform inference in the natural space of parameters, which is a manifold with distances measured by the Fisher-Rao metric (Figure 1.c) [38–41]. Concretely, we leverage recently-proposed methods for accelerating sampling from the machine learning literature [35,41–43] to design novel efficient samplers in spiking neural networks.\n\n\n*Furthermore, and this is probably due to my naivety still, I'm a bit confused as to how the network should have access to the natural geometry of the problem (in the form of the projection B, or D)? Similarly, it's a bit unclear how the weights of the network should be learned, and whether it is dependent on the problem. From the perspective of a naive reader unfamiliar with this subfield of comp neuro, it would be very helpful if the authors took a more descriptive approach to provide intuition for such basic high-level ideas.*\n\nThank you for this question. In this work we focus on inference, given prior expectations and sensory input, of the probabilities of the values of the parameters/latent variables according to the Gaussian linear model presented in Appendix C. Learning the statistics of the world model (priors and affinity matrix $\\mathbf{A}$) can be done by learning at a slower timescale, as we mention in Lines 321-331 of the Discussion. We remark that this could extend to evolutionary timescales. Note that the expression of the complete recipe decouples the connectivity due to the gradient of the energy function ($\\nabla U_{\\Theta}$) and the matrices controlling the geometry ($\\mathbf{D}$ and $\\mathbf{B}$). As shown in the work by Gong et al. (ICLR, 2019) that we cite in the discussion, these parameters could be learned independently via meta-learning. 
Expanding these approaches for spiking neural networks would be an interesting avenue for future work. Space permitting, we would be happy to extend our discussion of these issues in the final version of the submission.\n\n\n*Overall, I believe my lack of expertise hinders my ability to understand this paper's contribution. However, I think I am not alone in that, unless one works exactly in the field of spiking network samplers, it is a bit difficult to understand the paper even at a high level.*\n\nThank you for this comment. We hope that our revisions have enhanced the readability of the manuscript, and appreciate your suggestions for how it might be further improved.\n\n### Limitations\n\n*Extensive discussion of the paper's limitations and assumptions, and no discussion of negative societal impact (though I don't foresee any immediate impact).*\n\nThank you for your comments. As we noted in the Checklist, \"Our work is purely theoretical, and we do not anticipate that it will have negative societal impacts as outlined in the ethics guidelines.\" We would be happy to include a sentence to that effect in the finalized version of the manuscript.\n\n*[...]*", " *[...]*\n\n### Strengths And Weaknesses\n\n*I'm quite naive to the field of spiking network as samplers, so I cannot really evaluate the originality, significance, or the technical quality of the work. Broadly speaking, I believe the work is of interest to the computational neuroscience community, since \"the Bayesian brain\" is an active field of research, and therefore, a central question is how a biologically realistic network (i.e., spiking network) could implement such sampling dynamics. In addition, at a superficial level, the work appears to be carefully done and cites relevant literature, as well as carefully discussing its relations to previous work, and its limitations. Based on the figures, it seems that the central claim of the paper is supported, that access to the natural geometry of the sampling space enables more efficient and accurate sampling.*\n\nWe thank the reviewer for carefully reading our paper and for their comments. We hope we address their concerns below.\n\n*There are several assumptions that seem biologically implausible to me, many of which the authors touch on in the discussion, and I understand certain assumptions are required for such theoretical work and cannot comment on whether they are standard for the field. However, on the issue of long membrane time constant ($\\eta \\ll 1$), it would be nice to see if/when the results break down, with a shorter integration window. Similarly, the fact that only one neuron is allowed to spike at a time significantly dampens my enthusiasm for the work, since at some point the actual model become so far away from the (very nicely stated) high level motivation.*\n\nThank you for these comments. With regards to the membrane time constant, we note that we are assuming that the decay constant in discrete time $\\eta$ is small. In terms of the membrane time constant $\\tau_{m}$ and timestep between spike proposals $\\Delta$, we have $\\eta = \\Delta/\\tau_{m}$, hence what we assume is that the membrane time constant is long relative to the interval between updates, not that it is long in an absolute sense. Indeed, in Section 2.2, we take the continuum limit $\\Delta \\downarrow 0$ in which spike proposals are made infinitely often. 
Therefore, our model does not require membrane time constants $\\tau_{m}$ that are longer than is plausible biologically.\n\nThe fact that only one neuron is allowed to spike at a time is not a severe constraint relative to prior works, as it is standard in studies of efficient balanced networks (EBNs). As mentioned in Lines 280-288 of the Discussion, the constraint that only one neuron is allowed to spike at each timestep is present in previous work on efficient balanced networks (Boerlin et al 2013, Savin and Denève 2014), and some mechanism for constraining firing rate is present in all works on the subject (Rullán Buxó and Pillow 2021, Calaim et al 2022). As noted there, relaxing these constraints will be an important objective for future work.\n\n*There are some low-level editing mistakes that can improve the quality of the work, e.g., Figure 4 panels a and b are labeled as b and c.*\n\nThank you for noticing this typo, which has been fixed. We have edited the manuscript for correctness throughout.", " We thank the referees for their thoughtful reviews of our paper. We have uploaded a revised manuscript that addresses their concerns and strengthens our opinion that this paper should be presented at NeurIPS. This comment describes major additions to our manuscript; we reply in detail to specific referee comments individually. \n\n- In response to **Reviewer Pjbs**' questions regarding the meaning of `population geometry' we propose to change the title of our manuscript to ``Natural gradient enables fast sampling in spiking neural networks.'' We think this modification will help resolve possible confusion, and appreciate the referees' feedback on this point. \n \n- The most substantial addition to the content of our manuscript is a new analysis of how natural gradient enables fast sampling in linear rate networks. This addresses a question raised by **Reviewer TEk8** regarding an intuition about the source of the speedup in sampling, with particular reference to the prior work of Hennequin et al. (NeurIPS 2014). We introduce these results at the end of a reworked section 3.1 and provide a detailed derivation in Appendix D (Note that the numerical details and supplementary figures are now in Appendix E). \n \n- We have added a supplemental figure (Figure E.1; see anonymous upload at https://figshare.com/s/15ccd407eb0ba614d23d) showing the timestep-by-timestep timecourses of distribution estimates for the EBN sampler. This figure allows the reader to better put into context the parameter sweeps of the distribution estimates at $t=50$ ms and $t=1.5$ s after stimulus onset shown in Figures 2 and E.3 \n \n- We have added another supplemental figure (Figure B.1) showing how the scale of the readout matrix affects the probabilistic spiking rule. This figure provides further characterization of the behavior of the novel Metropolis-Hastings based sampler we propose.\n \n- We added a number of clarifications as highlighted in the responses to individual reviewers. 
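To spell out the timescale argument above (a restatement of the stated relation $\eta = \Delta/\tau_{m}$, with the input term left schematic):

$$ \mathbf{z}_{t} = (1 - \eta)\,\mathbf{z}_{t-1} + \text{(input)}, \qquad \eta = \frac{\Delta}{\tau_{m}} \;\Longrightarrow\; \tau_{m}\,\frac{d\mathbf{z}}{dt} \approx -\mathbf{z}(t) + \text{(input rate)}, $$

so $\eta \ll 1$ constrains only the ratio of the proposal interval $\Delta$ to the membrane time constant $\tau_{m}$, and the limit $\Delta \downarrow 0$ recovers the continuous-time dynamics referenced in Section 2.2.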
Note that to improve readability, we now denote samples by $z_t$, rather than $\\hat{\\theta}_t$ as in our submitted manuscript, to avoid confusion with the target $\\theta_t$.", " In this paper, the authors:\n- construct a spiking neural network that performs Metropolis-Hastings sampling of a target posterior distribution (for, e.g., estimating external features given noisy sensory input), and extend it to continuous time dynamics\n- demonstrate its relationship to efficient balanced networks\n- show how the network can be augmented (by imposing population geometry) to implement the \"generalized\" stochastic gradient MCMC\n- and demonstrate that geometry-aware networks are superior to geometry-naive networks, especially with increasing correlation and posterior dimension\n I'm quite naive to the field of spiking network as samplers, so I cannot really evaluate the originality, significance, or the technical quality of the work. Broadly speaking, I believe the work is of interest to the computational neuroscience community, since \"the Bayesian brain\" is an active field of research, and therefore, a central question is how a biologically realistic network (i.e., spiking network) could implement such sampling dynamics. In addition, at a superficial level, the work appears to be carefully done and cites relevant literature, as well as carefully discussing its relations to previous work, and its limitations. Based on the figures, it seems that the central claim of the paper is supported, that access to the natural geometry of the sampling space enables more efficient and accurate sampling.\n\nThere are several assumptions that seem biologically implausible to me, many of which the authors touch on in the discussion, and I understand certain assumptions are required for such theoretical work and cannot comment on whether they are standard for the field. However, on the issue of long membrane time constant (nu<<1), it would be nice to see if/when the results break down, with a shorter integration window. Similarly, the fact that only one neuron is allowed to spike at a time significantly dampens my enthusiasm for the work, since at some point the actual model become so far away from the (very nicely stated) high level motivation.\n\nThere are some low-level editing mistakes that can improve the quality of the work, e.g., Figure 4 panels a and b are labeled as b and c. I'm somewhat confused about the central point of the paper, that population geometry improves sampling efficiency. If I understand correctly, population geometry, in the most important sense, means the way in which the sampling space can be projected (e.g., to the principal components, or some other non-Euclidean manifold) to essentially decorrelate the dimensions of the posterior (in the case of the correlated Gaussian). \n\nIf this is correct, then I would suggest for the authors to more clearly and explicitly mention this much earlier in the paper (apologies if I I had missed it). Figure 1C is helpful in this regard, but I'm still uncertain if I really got what \"population geometry\" means, since typically, in the neuroscience context, population geometry refers to a e.g., a lower dimensional manifold on which population activity resides, and I'm not sure if this idea is invoked at all in this paper.\n\nFurthermore, and this is probably due to my naivety still, I'm a bit confused as to how the network should have access to the natural geometry of the problem (in the form of the projection B, or D)? 
Similarly, it's a bit unclear how the weights of the network should be learned, and whether it is dependent on the problem. From the perspective of a naive reader unfamiliar with this subfield of comp neuro, it would be very helpful if the authors took a more descriptive approach to provide intuition for such basic high-level ideas.\n\nOverall, I believe my lack of expertise hinders my ability to understand this paper's contribution. However, I think I am not alone in that, unless one works exactly in the field of spiking network samplers, it is a bit difficult to understand the paper even at a high level. Extensive discussion of the paper's limitations and assumptions, and no discussion of negative societal impact (though I don't foresee any immediate impact).", " New framework for constructing fast sampling dynamics in spiking neural networks, encompasses many of the previously proposed solutions as limit cases but also allows for a unified treatment and interesting variations. Strengths:\n- clear biological and computational motivation \n- interesting knowledge transfer between machine learning and computational neuroscience\n- interesting mechanics: deterministic dynamics, stochastic spike generation process (usually stochastic dynamics, deterministic theshold)\n\nWeaknesses:\n- restricted to multivariate gaussian posteriors, although the time varying mean makes it somewhat more unusual/interesting as a setup\n- dependence of input (the actual inference part) missing in the construction (for simplicity i would agree this is fine for static posteriors but seems awkward once there is a time constant in the posterior changes themselves)\n- emphasis on the practically relevant scale of sampling being unexplored is an overstatement of fact, both the hennequin and savin type of dynamics can be used to extract moments of interest at time scale of hundreds of milliseconds (explicitly with spiking neurons at least for the second case), which many would argue is perceptually relevant enough, especially given the inherent tradeoff between precision and time for all sampling based codes; that is not to say that there is no room for alternative models of fast sampling but it's not nearly as bad as the introduction would make you believe\n- very precise/artificial architectural constraints on the solution (pair of readout pools tied weights etc)\n- in the neurally relevant regime the dynamics are not guaranteed to have the target posterior statistics. unclear why the series of approximatons should be expected to be constrained rather than lead to accumulation of errors and big deviations from the target\n\nMinor:\n- strictly speaking it is not the dimensionality of the latent space per se but rather posterior entropy that limits sampling speed, although admittedly the two tend to go together in simple cases\n- this is semantics i would say that typically the goal of bayesian perception is stated as computing a posterior over latent variables, not parameters\n- potentially relevant refs: radford neal tech report on speeding up sampling by dropping the detailed balance requirement, and the old lars buesing sampling paper on a biological realization of that idea (refractory period of sampling).\n - decoding requires low pass filtering of spikes, which means that the filter width puts a hard limit on the autocorrelation function of the outputs, same as in distributed sampling a la savin et al. Why is then this faster? 
is it because the time constant is not tied to the membrane time constant the way it needs to be there?\n\n- explicit rejection step in MH seems to require fundamentally global knowledge which affects classic criteria for biological plausibility, is the discretization of time that assumes asynchronous updates making this local again?\n\n- hard to intuitively get the source of the speedup, especially given the earlier proofs of hennequin that langevin is the slowest possible random walk with a given gaussian stationary distribution, how does the structure of the covariance play with the sample autocorrelation function?\n\n- is there a way to make more formally precise statements about sampling speed in EBNs for the different variants or is this mainly relying on the ma work for generic speedup arguments? no issues", " The present paper studies sampling in spiking network models and tries to unify the Langevin sampling in the efficient balanced networks and Metropolis-Hastings sampling in networks with probabilistic spike rules. And it also studies how the geometric structure in the neural code speeds up the sampling. The strength of this paper is its unifying nature. And its proposal of a probabilistic spike rule implementing Metropolis-Hastings sampling is novel. The present study did some theoretical derivations in linking the sampling dynamics with neural dynamics, but some of the presentations is not very clear.\n\n### Major\n\nI have a couple of major concerns about this present paper.\n- A strong requirement of neural sampling is that the neural circuit dynamics with fixed parameters is able to sample posteriors with different uncertainties. It is not clear whether this can be achieved in the current framework. The proposed network model seems not able to achieve this if I understood correctly, in that the network parameters in Eq. 12 depend on the mean and the variance (explained by the text in line 160). This implies that to sample distribution with different parameters we need to adjust the network connections.\n\n\n- As mentioned in Eq. 14 that the D matrix only modifies the temporal dynamics but leaves the sampling distribution unchanged. That means sampling in the Euclidean space and the natural space (including the inverse Fisher information matrix, line 195) should not change the sampling distribution. However, from Fig. 2 and Fig. 3 the two sampling trajectories (blue vs. red) differ a lot and have different variances. I am wondering whether the sampling is correct or not.\n\n### Writing \nThe writing in sec 2.1 is not clear and I feel difficulty sometimes in following the flow. A lot of questions about the presentation remain.\n\n- It is not clear why the problem from discrete and sign-constrained spikes can be solved by imposing a fine-tuned balancing condition on the readout weights (lines 93-98). More explanations are needed. Also, it is not clear the motivation for choosing a neuron uniformly at random (line 104).\n\n- In Eq. 1, should I interpret the estimate $\\hat{\\theta}_t$ as a sample at time t? Is the uncertainty of the sampling distribution represented by the fluctuation of $\\hat{\\theta}_t$ over time? It is better to explicitly mention this in the beginning.\n\n- Eqs. 3 and 4: the argument in the distribution in the denominator of Eq. 4, i.e., $(1-\\eta)\\Gamma r_{t-1}$ is not consistent with $\\hat{\\theta}_{t-1}$ in the denominator of Eq. 3.\n\n- It is unclear how Eq. 6 and 10 are related to the undefined generative model which was mentioned in line 71. 
Is there an implicit assumption of the linear Gaussian generative model because the terms in Eq. 10 are the derivative of quadratic terms. Moreover, it is not clear how \\psi is defined at this moment until later I read the text in line 160. Also, in Eq. 6 the input to the neural dynamics is not the observation x defined in the generative model (line 71), but the latent variable $\\theta$ directly. More explanations are needed here.\n\n- Line 195: it is not clear what the matrix G is until I read line 232. Please define symbols when they first appear.\n Please refer to my comments in the strengths and weaknesses. The limitation of the current framework is the analog-digital conversion (DAC) in sampling a continuous distribution in discrete spiking networks, as mentioned in line 238. This makes people feel that the spiking network is not designed by implementing sampling although the error of DAC can be reduced by considering population geometry or in large number limit. Similar feelings were mentioned by the authors in the paper. I hope to see more discussion of alternative solutions to the DAC error. One possible solution is utilizing the math techniques in probabilistic population code to sample continuous distribution in discrete spikes, where the representation of a continuous distribution in spiking networks comes from the continuously smooth tuning curves of neurons. ", " In this work, the authors demonstrate the possibility of implementing sampling algorithms through spiking neural networks. They provide details on how to realize the metropolis-hastings sampler using the probabilistic spike rules. Overall the work is interesting and extends the potential application of spiking neural networks. Strengths:\n1. The application is relatively new for SNN. \n2. The writing is clear. \nWeakness:\n1. The motivation for applying SNN for sampling is not convincing enough. \n2. The comparison to traditional sampling is not enough. 1. What's the connection between the brain's capacity for probabilistic computation and the ability of SNN for sampling? What's the evidence for the brain to realize its probabilistic computation through the same sampling approach as we do in the Bayesian inference?\n\n2. The implementation needs to accept the spikes probabilistically. How to realize this mechanism in a spiking neural network? \n\n3. Why is it called fast sampling? How is it faster when compared to other methods? \n\n4. What's the infrastructure for the SNN used for sampling? \n\n5. If the probabilistic spiking mechanism is allowed, why do we need such a spiking neural network rather than using a regular artificial neural network for the computation in the sampling procedure? NA. " ]
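To make the mechanics debated in the reviews above concrete (how a probabilistic spike rule can realize a Metropolis-Hastings accept/reject step, and why a small discrete-time decay $\eta = \Delta/\tau_m$ does not demand an implausibly long membrane time constant), here is a minimal, hypothetical Python sketch. The Gaussian target, the readout matrix `Gamma`, and all parameter values are assumptions chosen for illustration; this is not the authors' implementation, and the proposal is treated as symmetric for simplicity, which is precisely one of the subtleties the reviewers question.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-D Gaussian target; NOT the posterior used in the paper.
mu = np.array([1.0, -0.5])
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

def log_p(z):
    d = z - mu
    return -0.5 * d @ Sigma_inv @ d

N, D = 20, 2
Gamma = rng.normal(scale=0.3, size=(D, N))  # assumed fixed readout weights

tau_m = 20e-3     # 20 ms membrane time constant (biologically plausible)
dt = 0.1e-3       # 0.1 ms between spike proposals
eta = dt / tau_m  # eta = Delta / tau_m << 1 without a long tau_m

r = np.zeros(N)   # filtered spike trains
samples = []
for _ in range(50_000):
    r_leak = (1.0 - eta) * r   # passive decay of the filtered rates
    j = rng.integers(N)        # one neuron, chosen uniformly, proposes a spike
    r_prop = r_leak.copy()
    r_prop[j] += 1.0
    # Accept the spike with Metropolis probability (symmetric proposal assumed).
    if np.log(rng.uniform()) < log_p(Gamma @ r_prop) - log_p(Gamma @ r_leak):
        r = r_prop             # spike emitted
    else:
        r = r_leak             # spike rejected; only the leak is applied
    samples.append(Gamma @ r)

samples = np.array(samples[10_000:])            # discard burn-in
print("empirical mean:", samples.mean(axis=0))  # rough diagnostic only
print("empirical cov:\n", np.cov(samples.T))    # this simplified rule is approximate
```

Because the leak makes the effective proposal state-dependent, this naive acceptance rule only approximately preserves the target, which is in line with the reviewers' concern about accumulated discretization errors.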
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 5, 4, 4 ]
[ "uP5wb03A8J", "GX8SaKE2dVE", "21-EllRMobg", "_0TFZ7A-dn3", "Wjop1PzPCBGa", "iTW4wG4x0PV", "Jx0T--rWOm", "7J0U3mg6Zbq", "6YWnjVX4PCo", "_tl0S2QLnk4", "mPqaUuLB-Ym", "uP5wb03A8J", "BQgYadFC95", "S10Em2oXyz6", "07yj8ul6c10", "wPyhVXai08s", "UX_KZjwRJgq", "BlvE5lszsL", "nips_2022_Yopob26XjmL", "nips_2022_Yopob26XjmL", "nips_2022_Yopob26XjmL", "nips_2022_Yopob26XjmL", "nips_2022_Yopob26XjmL" ]
nips_2022_3AbigH4s-ml
CEBaB: Estimating the Causal Effects of Real-World Concepts on NLP Model Behavior
The increasing size and complexity of modern ML systems has improved their predictive capabilities but made their behavior harder to explain. Many techniques for model explanation have been developed in response, but we lack clear criteria for assessing these techniques. In this paper, we cast model explanation as the causal inference problem of estimating causal effects of real-world concepts on the output behavior of ML models given actual input data. We introduce CEBaB, a new benchmark dataset for assessing concept-based explanation methods in Natural Language Processing (NLP). CEBaB consists of short restaurant reviews with human-generated counterfactual reviews in which an aspect (food, noise, ambiance, service) of the dining experience was modified. Original and counterfactual reviews are annotated with multiply-validated sentiment ratings at the aspect-level and review-level. The rich structure of CEBaB allows us to go beyond input features to study the effects of abstract, real-world concepts on model behavior. We use CEBaB to compare the quality of a range of concept-based explanation methods covering different assumptions and conceptions of the problem, and we seek to establish natural metrics for comparative assessments of these methods.
Accept
The paper presents a new benchmark dataset for assessing explanation methods in NLP, in the sentiment analysis domain. The dataset is unique in that it focuses on the causal effects of modifying specific aspects, providing minimal pairs where only one of the aspects differs. After constructing the benchmark, the paper uses a causality-based metric (Section 2) to evaluate existing explanation methods. The reviewers (NcbX / dBGa) agree that the paper is well motivated and can provide useful resources for the community. The experimental results show that a simple baseline the authors propose performs on par with existing explanation methods, which is already an interesting finding for the community. While the reviewers spotted some flaws (missing important details, positioning of the work, weak discussion of related work, etc.), most seem fixable by the camera-ready. I’d recommend acceptance.
train
[ "rdNHGumYev", "vnC6OtTvnuou", "eNgkajIs_vb", "bnb_MBxovbD", "jSSrg4xCQV", "LwF36bA9vUW", "N24LdB18BxS", "xNIDfohPyQj", "Bsm-RSRRNWE", "aoobWQdsZGoY", "xC9w5Yih2M2", "gw6Ktpkq9yR", "kt22EulxD1M", "upAhuVrwWFQ", "2yMaQsx_TZ", "g5AYqy3g8Sc" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Also an interesting conceptual question. The scenario we've thought about the most concerns controlling for confounds. Some methods are explicitly motivated by their ability to do this. The default exclusive train set for CEBaB might not have rich enough confounds to bring this out. So someone advocating for a confound-controlling method might feel it is misleading if they tie with a method that cannot do this.\n\nHowever, CEBaB's inclusive train set is so rich in counterfactual texts that one can create datasets with lots of confounds in them, just by sampling using the labels and other metadata. We thought about constructing such confounding datasets for the paper, but we ran out of room. However, if people want to create such versions of CEBaB, that seems great, especially if they release the split information so that others can compare.\n\nThe above isn't quite a scenario where a simple method does _better_ but isn't better in general. If a sophisticated method can handle confounded datasets but not simple scenarios, it seems like an ambiguous win for that method, especially since we won't know how hard our problems are in general or how confounded they are in general. \n\nIdeally, CEBaB will end up being used to test many scenarios and the best methods will be able to handle all of them.", " Do you anticipate potential scenarios where a method works well on this benchmark but is not actually better generally (say on Question Answering or NLI or hate speech detection)?", " Thanks for the comments! You've raised great points that we feel we can be responsive to:\n\n1. We think that having a benchmark with dense, human-created counterfactual texts with lots of labels is just what is needed to get the field out of its current rut. To date, people developing explanation methods have had to rely on general intuition, synthetic benchmarks, and custom tasks that don't support apples-to-apples comparisons. With CEBaB, methods can be evaluated with fixed criteria. In our experience, if you build high-quality benchmarks, they attract people to work on the area, and we predict CEBaB can be such a nudge for people. Conversely, it is not surprising that causal explainer work has been somewhat at sea to date given the lack of clear evaluation criteria before CEBaB.\n\n2. The assumptions behind CEBaB seem reasonable to us. The aspect-level categories are originally from OpenTable. It's unusual to have our density of counterfactual texts with labels, but that's the special value of the benchmark -- that it provides a measurement tool that would not normally be available in the wild. \n\n3. In terms of the general framework: we sketched out an extension to spam/ham and would be happy to go further. We are still thinking about where this would fit into the paper. We've released all our MTurk templates, and minor tweaks to those will support a wide range of classification tasks. One just needs to think creatively about the causal graph that one wants to be able to explore.", " \"We find that explanation performance on CEBaB is already lackluster, indicating that this will also be the case for more complicated tasks. 
Thus, we believe that CEBaB will serve as a call to action for the NLP model explainability community and that our resources (data and methodology) will help the development and validation of new explanation techniques, which will subsequently have an impact outside the scope of English sentiment analysis.\"\n\nI don't see how or why CEBaB will serve as a call to action for the NLP model explainability community, given that most of the current methods are meaningless and there's no reason to expect them to explain anything absent unrealistic assumptions. That said, the dataset is definitely a positive step towards analyzing these methods. Finally, you point to a case study in the response to establish how this methodology could be extended, but I don't see it anywhere in the paper. I'm inclined to maintain my score as is.", " Thank you for the clear answers and the added verification experiments. I am raising my score significantly, from 4 to 6.\n\nWhile I do feel that 1) the qualitative analysis to be added could be valuable, and 2) the scope of this dataset is somewhat limited (sentiment analysis is a somewhat easy task), the new version has explained many missing details and provided more solid evidence of the quality of the dataset, compared to the previous one.\n\n", " > **Q2**: “The benchmark is only for sentiment analysis, which limits its use while broader claims are made about NLP explainability. While that's still a good first step, it is arguably easier to \"explain\" a model on a sentiment analysis task versus, say, NLI or other more complicated NLP tasks using CEBaB, and there is no discussion offered in the paper on how this would extend to those tasks. I would like a greater discussion of how the method could be extended to other more complicated tasks. And if there are challenges associated with that, what are those?”\n\n**A2**: While CEBaB focuses on English sentiment analysis, a relatively easy task, its potential impact as a human-validated natural model explainability benchmark is not limited to this task. The reviewer observes that some tasks are arguably more difficult to explain. We find that explanation performance on CEBaB is already lackluster, indicating that this will also be the case for more complicated tasks. Thus, we believe that CEBaB will serve as a call to action for the NLP model explainability community and that our resources (data and methodology) will help the development and validation of new explanation techniques, which will subsequently have an impact outside the scope of English sentiment analysis.\n\nOnce satisfactory results are achieved on CEBaB, we hope the community will use our resources as a starting point towards considering more complicated tasks, as the reviewer suggested. Our evaluation methodology and proposed metrics are directly applicable to other tasks such as NLI. The main challenge will be sourcing a new set of human-generated counterfactuals for the specific task, where deciding which concepts to annotate is an important task-specific decision.\n\nLastly, to respond to the question about extensibility, we also provide a case study showing how one could build a CEBaB version of the classic spam classification problem in ML:\n\n- **Task**: Spam or Ham Classification. 
The input to this task is an email with a label indicating whether the email is spam or not.\n\n- **Concepts**: The email contains rich high-level concepts, which can be divided into two categories: **(1) linguistic concepts**: grammaticality, fluency, vocabulary richness, punctuation usage, etc.; **(2) semantic concepts**: sentiment, category, ambiguity, tone, stance, sarcasm, intent, etc. High-level concepts are usually latent in the input (i.e., they cannot be expressed in one word or one token directly). Each concept can be associated with a label, for instance, a binary or a discrete variable indicating the grammaticality of the email.\n\nAs we can see, one just needs to follow CEBaB’s data collection process to create another concept-based explanation benchmark, this time for spam filtering models.\n\n***\n\n> **Q3**: “I don't understand what Table 4 is. The caption is very confusing so if you could share that, that'll be great.”\n\n**A3**: We updated the caption of Table 4 in the rebuttal revision, and we hope this clears things up. As we are now focusing on the 5-way problem, we also updated Table 4 accordingly so that readers can easily parse our results. Additionally, below we put CaCE in the context of other metrics, which might also shed more light on what it is.\n\n- **ATE** is a metric that only depends on the data (no model, no explanation method). It measures the causal effect of changing a concept in a given text by measuring the resulting change in the label.\n\n- **CaCE** is a metric that depends on the data and the model (not on the explanation method). It measures how much a concept affects the output of a model. For example, what happens if we change food from positive to negative, i.e., how would the output probabilities of the model change? We can think of this as a ground-truth effect that our explainers will try to estimate. The **ICaCE** (Individual CaCE) is the change in model output for a single intervention.\n\n- **ICaCE-error** is a metric that depends on the data, the model, and the explanation method. Given a specific instance of a model, it measures the average distance between the estimated effect for a single intervention (output by the explanation method) and the actual effect of that single intervention (the ICaCE).\n", " We greatly appreciate the reviewer’s in-depth feedback on our paper, which encouraged us to consider the way we are characterizing our goals and findings. Below we provide a point-to-point response, which complements our response to the shared reviewer themes above and the revised version of the paper submitted alongside this response.\n\n***\n\n> **Q1**: “This work could be much better positioned in prior work on benchmarks for explainability and counterfactual analysis for model explanations (such as the Language Interpretability Toolkit).”\n\n**A1**: Agreed. In addition to the revisions we made to **Section 2 - Prior Work** following this comment, we would like to further elaborate on the differences between our work and its predecessors.\n\n- [11, CausaLM] estimates the causal effect of high-level concepts on the model, using synthetic data, by considering the aggregated CaCE values. One crucial advantage of CEBaB is that it evaluates explanation methods with naturalistic data rather than synthetic data. This allows us to compare different explanation methods against a human-validated ground-truth. 
Additionally, we define our final score on the level of individual examples (ICaCE), where CausaLM defines scores at the aggregated CaCE level. This allows us to better study nuanced local explanations of the model behavior, compared to aggregated claims. Furthermore, we consider a range of different metrics (cosine, normdiff, L2), which allows us to e.g. detangle the explanation method's ability to estimate directions of changes and magnitudes of changes.\n\n- [52, Language Interpretability Toolkit (LIT)] is a new reference added per a suggestion by R4. LIT is “an open-source platform for visualization and understanding of NLP models”. It is an explanation toolkit rather than an explanation benchmark. The closest feature to causal concept-based explanations provided by LIT is a family of counterfactual analyses for NLP models. While LIT only provides rule-based tools to generate counterfactuals from an existing dataset (e.g., “HotFlip”), our benchmark contains human-written counterfactuals. Lastly, while LIT only allows for a qualitative analysis, CEBaB offers quantitative analyses in the form of ICaCE scores using different metrics.\n\n- [57 (formerly 56), ConceptSHAP] provides mostly qualitative comparisons by asking human judges to score different concepts that the explanation method discovered. In our work, we propose to use the ICaCE score which measures concept-based explanation quantitatively by calculating the distance between the true and estimated causal effects. Although [57, ConceptSHAP] provides a quantitative evaluation as well, it uses a synthetic image dataset without rigorous control of the dataset generation process, and uses only non-causal, accuracy-like metrics to benchmark the explanation methods.\n\nThe above three points are reflected in the revised paper in the following quote: *“Other works that do compare to some ground-truth either employ a non-causal evaluation scheme [26], use causal evaluation metrics which do not capture performance on individual examples [52], evaluate on synthetic counterfactuals and rule-based augmentations [11 , 52], or are tailored for a specific explanation method and are hard to generalize [57].”*\n\n", " We greatly appreciate the reviewer’s in-depth feedback on our paper, which encouraged us to consider the way we are characterizing our goals and findings. Below we provide a point-to-point response, which adds to our above response to the shared reviewer themes and the revised version of the paper submitted alongside our response.\n\n***\n\n> **Q1**: “The dataset evaluates the model explanation methods based on aspects (food, noise, ambiance, service). This dataset can be used for sentiment analysis. I am not sure if this dataset can help in other classification problems for evaluating causal effects.”\n\n> **Q2**: “How does this paper help developing evaluation techniques for other problems apart from sentiment analysis?”\n\n**A1 & A2**: While the data collected is not directly applicable to tasks other than sentiment analysis, the main data collection process, methodology and conclusions of this work are highly relevant for the general model explainability field. Primarily, we show that existing explanation methods fail our relatively simple task (English sentiment analysis). We hope that this serves as a call to action for the NLP explainability community, which can now leverage our dataset to develop more principled explainability approaches. 
Additionally, the evaluation and data creation framework proposed in this work are extendible to other tasks and languages. Therefore, we believe that our contributions will generalize beyond the scope of English sentiment analysis.", " We greatly appreciate the reviewer’s in-depth feedback on our paper, which encouraged us to make significant additions to the paper and better articulate its scope. Below we provide a point-to-point response, which adds to our above response to the shared reviewer themes and the revised version of the paper submitted alongside our response.\n\n***\n\n> **Q1**: “Unfortunately, the dataset is only in English, as most of the pretrained models are now multi-lingual it could have been interesting to consider this point.”\n\n**A1**: Extending CEBaB to a multi-lingual setting would be an interesting endeavor. Regardless of the language, we want to highlight that concept-based explanation methods are performing subpar even on the relatively simple task of English sentiment analysis. Before investing resources in extending CEBaB to multiple languages, we hope that the introduction of CEBaB will prompt the model explainability community to develop and validate methods that actually work on our current benchmark. Once the field is ready to consider explanation techniques for different languages and tasks, the data collection and method validation framework provided by this work can be applied there.\n\n***\n\n> **Q2**: “The notion of concept is not clear in the context of the reviews, I assume it is the tokens. Can you please detail the definition of the concept in the dataset? This point is central to all evaluated explanation methods.”\n\n **A2**: Concept is not equivalent to tokens. We consider concepts as high-level abstract constituents that are latent in the input but can be extracted[57]. In the context of CEBaB, which consists of restaurant reviews, the concepts are different restaurant aspects (e.g., food, ambiance, service, noise). Each of these concepts can be expressed across multiple tokens and even sentences, so there is no clear 1-to-1 mapping of latent concepts and tokens. Following the reviewer’s comment, we revised the text to better reflect this definition.\n\n***\n\n> **Q3**: “Did you consider augmenting the dataset through paraphrasing, do you think it would cause an additional challenge?”\n\n**A3**: This is an interesting proposal. In this work, we use counterfactual examples to validate explanation methods. While it would be possible to augment counterfactual examples to build a bigger validation set, this would go against our goal of building a high-quality human-created validation dataset. In follow-up work, where we plan to leverage CEBaB’s interventional data to explicitly train better explanation methods, augmentation strategies could be highly valuable to build more data-efficient explainers.\n\n", " > **Q3**: “Are the final aggregated review and model score sensitive to different edits of a particular aspect-level goal? For each original review and a specific aspect-level goal, the paper collects one edited text. Ideally, it’s better to have multiple samples (as stated in Eq (2)) for estimating the ICaCE and ATE. It might be worthwhile to collect multiple edits for a relatively small set of examples and investigate the sensitivity.”\n\n**A3**: This is a valuable observation. We conducted further analyses to address this concern. 
CEBaB includes 176 examples that have a paired edit (i.e., an extra edit with the same goal and type on the same original sentence, performed by a different worker). The difference in average review score assigned by the workers across these 176 pairs is on average 0.78 stars. This result suggests that most of the paired edits have a high agreement in the final review score, indicating a limited sensitivity. We report this and supplementary analysis in Appendix B8.\n\n***\n\n> **Q4**: “The paper benchmarks the performance of different explanation techniques and presents some interesting findings. E.g. TACV and ConceptSHAP perform worse (or on par) with the random baseline, even on the straightforward food aspect. It might be worth elaborating or providing qualitative analysis of the failures of these methods.”\n\n**A4**: We agree and plan to provide more qualitative analyses of the failures of these methods in the next revision. To provide more insights into our results, we already updated our main result section and figure to include more results for different versions of our metric, and we now focus on the 5-way problem, which brings out a lot more differences between methods and is overall more robust (as discussed in the shared response). Additionally, we’ve updated Appendix D - Additional Results to include more insights about the methods across models, classification settings, and evaluation metrics.\n\nAn initial analysis we plan to consider for the next revision would break down the ICaCE-scores across different aspects and intervention directions. This would allow us to study if some methods are failing on specific aspects or even specific aspect directions, as opposed to the current global aggregation of the ICaCE-scores that only indicates how methods are performing across the board.\n", " We greatly appreciate the reviewer’s in-depth feedback on our paper, which encouraged us to consider the way we are characterizing our goals and findings. Below we provide a point-to-point response, which adds to our above response to the shared reviewer themes and the revised version of the paper submitted alongside our response.\n***\n> **Q1**: “The paper does not clearly discuss the relationship with other work on evaluating concept-based explanation methods. While the central contribution of this paper is benchmarking. The paper mainly discusses different explanation techniques. It might be good if there could be more discussion on the evaluation side, especially metrics and experimental findings. E.g., whether other work [11,56] uses the same or mostly similar evaluation metrics but evaluates on synthetic tasks.”\n\n**A1**: Agreed. In addition to the revisions we made to **Section 2 - Prior Work** following this comment, we would like to further elaborate on the differences between our work and other predecessors.\n\n- [11, CausalLM] estimates the causal effect of high-level concepts on the model, using synthetic data, by considering the aggregated CaCE values. One crucial advantage of CEBaB is that it evaluates explanation methods with naturalistic data rather than synthetic data. This allows us to compare different explanation methods against a human-validated ground-truth. Additionally, we define our final score on the level of individual examples (ICaCE), where CausaLM defines scores at the aggregated CaCE level. This allows us to better study nuanced local explanations of the model behavior, compared to aggregated claims. 
Furthermore, we consider a range of different metrics (cosine, normdiff, L2), which allows us to e.g. detangle the explanation method's ability to estimate directions of changes and magnitudes of changes.\n\n- [52, Language Interpretability Toolkit (LIT)] is a new reference added per a suggestion by R4. LIT is “an open-source platform for visualization and understanding of NLP models”. It is an explanation toolkit rather than an explanation benchmark. The closest feature to causal concept-based explanations provided by LIT is a family of counterfactual analyses for NLP models. While LIT only provides rule-based tools to generate counterfactuals from an existing dataset (e.g., “HotFlip”), our benchmark contains human-written counterfactuals. Lastly, while LIT only allows for a qualitative analysis, CEBaB offers quantitative analyses in the form of ICaCE scores using different metrics.\n\n- [57 (formerly 56), ConceptSHAP] provides mostly qualitative comparisons by asking human judges to score different concepts that the explanation method discovered. In our work, we propose to use the ICaCE score which measures concept-based explanation quantitatively by calculating the distance between the true and estimated causal effects. Although [57, ConceptSHAP] provides a quantitative evaluation as well, it uses a synthetic image dataset without rigorous control of the dataset generation process, and uses only non-causal, accuracy-like metrics to benchmark the explanation methods.\n\nThe above three points are reflected in the revised paper in the following quote: *“Other works that do compare to some ground-truth either employ a non-causal evaluation scheme [26], use causal evaluation metrics which do not capture performance on individual examples [52], evaluate on synthetic counterfactuals and rule-based augmentations [11 , 52], or are tailored for a specific explanation method and are hard to generalize [57].”*\n\n***\n\n> **Q2**: “Have the authors verified whether edits affect other aspects? The paper verifies that the edits successfully achieve the aspect-level goal (altering a particular aspect towards a specific direction), but there are no descriptions on whether other aspects are constant, which can be done with the help of crowdworkers on a small number of examples in the benchmark.”\n\n**A2**: This is an excellent suggestion. As a preliminary step toward such an assessment, we sampled 3K training examples and re-validated the concept-level ratings for non-target concepts using our standard validation pipeline using MTurk. We treat each reviewer label as a one-hot vector over the three aspect-level classes and calculate the distance between the centroids of these vectors across our original and re-validated labels. This leads to an average distance of 0.32 (where the minimum is 0 and the maximum is sqrt(3)), suggesting that editing does not disrupt non-target concepts. Indeed, we suspect that this 0.32 score primarily reflects individual differences in sentiment ratings in general and is not especially tied to the editing process. To further explore this, we sampled a separate set of 200 examples and manually evaluated whether editing had affected a non-target concept. From the 200 examples we inspected, we found that 89.5% were unaffected by editing.", " We thank the reviewers for their efforts and valuable comments, which have let us make major revisions to the paper, particularly by collecting new crowdsourced data and performing new quantitative analyses. 
These are already reflected in the rebuttal revision and will continue shaping the way we revise the work over the next weeks. \n\n**We would first like to discuss the shared themes of the reviews**.\n\nFirst, there seems to be a consensus that the CEBaB dataset is useful for evaluating a large spectrum of explanation methods, under the conceptual framing of the problem as a causal estimation challenge. Like the reviewers, we too believe that a large, human-written counterfactual-based resource could be useful to the DL/ML community, especially in the important area of model explanations, and CEBaB is the only such resource at present as far as we know\n\nSecond, while we agree that CEBaB should be extended beyond English sentiment analysis, our results show that current explanation methods struggle even in this relatively simple domain. In fact, we have **a simple baseline method that outperforms all other explanation methods**. This finding suggests that fundamental research still needs to be done before we can trust explanation methods in more complex domains. However, when appropriate, the methodology developed in this paper (both in terms of data collection and evaluation methodology for explainers) can be directly extended to more complex domains and different languages. \n\nPut another way, while there are high-quality algorithms for English sentiment analysis, concept-based explanation in this domain is very much an open problem. In our view, CEBaB is just the first step towards encouraging our community to create interventional benchmarks for the evaluation of concept-based explanation methods. \n\n**Overview of major changes (now appearing in the rebuttal revision)**:\n\n- **Section 2 - Previous Work** now better positions CEBaB within prior work on explainability benchmarks.\n- **Section 6 - Experiments and Results.** The reviewers asked for more details on the results and what they mean. While we are still highly space-constrained, we believe we have made major improvements here. The overall takeaways remain the same: explanation methods have a long way to go before they can explain causal effects even in simple domains like CEBaB.\n - We introduced a new baseline that approximates true counterfactuals by sampling from training data. **This baseline is the new best-performing method**. The fact that a simple baseline outperforms popular explanation methods highlights the need for a benchmark like CEBaB. We believe that this finding has **implications beyond English sentiment analysis**. \n - We revised the section to include more results for different versions of our metric, and we now focus on the 5-way problem, which brings out a lot more differences between methods and is overall more robust. \n - We reworked **Appendix D - Additional Results** to include more discussion about the results across all models, evaluation settings, and metrics.\n- **New Appendix B8**. R1 asked about sensitivity to different edits from different authors. CEBaB embeds a subset of examples where the same example and goal were given to two editors, so we can assess this important question. Overall, we find that different edits differ by at most one star in the final ratings. \n- **We ran an additional crowdsourcing effort** to validate dataset consistency, verifying that editing for a concept C has negligible to no impact on the ratings for other concepts C’. 
Details on this study are provided in our response to R1, and we will expand on this study for the next version of the paper.\n\nWe believe that the above addresses the major concerns and omissions identified by reviewers, but we are happy to continue engaging with the reviewers, as this process has already helped us make our core findings more robust and accessible.\n", " This paper introduces a new benchmark called CEBaB for evaluating concept-based explanation methods. This benchmark dataset consists of 15,089 restaurant reviews obtained by editing 2,299 original reviews from Opentable. Crowdworkers are asked to modify an original in order to alter a specific aspect (food, ambiance, service, or noise) in the review while holding all other aspects constant. This process creates pairs of examples where only one aspect of them is different, which allows estimating the causal effect of changing the particular aspect. The paper evaluates 6 different explanation methods by measuring the distance of the concept-based explanations w.r.t. the estimated causal effect.\n Strengths:\n\nThe paper collects a natural language dataset with pairs of human-written counterfactuals, which allows building.\n\nThe experiments cover an array of popular explanation techniques with varying requirements of access to the models.\n\nThe writing is clear and easy to follow.\n\nWeakness:\n\nWhile the collected benchmark and the experiments of evaluating different concept-based explanation methods are a central contribution, the paper does not provide some important details .\n\nThe paper does not provide thorough discussion of results.\n\nThe paper does not clearly discuss the relationship with other work on evaluating concept-based explanation methods.\n *Benchmarks:*\n\nHave the authors verified whether edits affect other aspects? The paper verifies that the edits successfully achieve the aspect-level goal (altering a particular aspect towards a specific direction), but there are no descriptions on whether other aspects are constant, which can be done with the help of crowdworkers on a small number of examples in the benchmark\n\nAre the final aggregated review and model score sensitive to different edits of a particular aspect-level goal? For each original review and a specific aspect-level goal, the paper collects one edited text. Ideally, it’s better to have multiple samples (as stated in Eq (2)) for estimating the ICaCE and ATE. It might be worthwhile to collect multiple edits for a relatively small set of examples and investigate the sensitivity.\n\n*Experiments:*\n\nThe paper benchmarks the performance of different explanation techniques and presents some interesting findings. E.g. TACV and ConceptShap performs worse (or on par) with the random baseline, even on the straightforward food aspect. It might be worth elaborating or providing qualitative analysis of the failures of these methods.\n\n*Related work*\n\nWhile the central contribution of this paper is benchmarking. The paper mainly discusses different explanation techniques. It might be good if there could be more discussion on the evaluation side, especially metrics and experimental findings. E.g., whether other work [11,56] uses the same or mostly similar evaluation metrics but evaluates on synthetic tasks.\n\n See questions.\n", " The paper introduces CEBaB, a new benchmark dataset for assessing concept-based explanation methods in the domain of Natural Language Processing (NLP). 
\nCEBaB consists of short restaurant reviews with human-generated counterfactual reviews in which an aspect has been modified. \nCEBaB is leveraged to compare various concept-based explanation methods.\nThe purpose of the paper is to establish natural metrics for comparative assessments of these methods.\nMore precisely, five leading concept-based explanation methods: CONEXP, TCAV, ConceptSHAP, INLP, and CausaLM.\nThe dataset of reviews is composed of four aspects (food, service, ambiance, noise), associated with binary evaluation and an overall evaluation estimated from 1 to 5 stars.\nThe dataset is crowdsourced from an initial set of OpenTable reviews.\nAs a first step, the crowdsourcers have to expand the reviews to focus on one specific aspect.\nIn a second step, the crowdsourcers have labeled each aspect of the edited reviews.\nThe notion of the concept associated with the reviews is not clear, I presume it will be words.\nThe authors have used it as a classifier to explain a fine-tuned pretrained language model to predict the overall binary sentiment of all restaurant reviews Strengths.\n * The paper addressed a pertinent question of the explainability of large neural models\n * The paper is clear and properly federate in a unified description method a relatively large spectrum of explanation methods\n * The dataset is large and addressed the problem of explanation in NLP which is particularly known as tedious\n * The dataset has been crowdsourced in two separated steps which is a reasonable indicator of the limited noise in the data.\n * The paper already compares a large set of explanation methods with the resulting dataset\n\nWeakness\n * Unfortunately, the dataset is only in English, as most of the pretrained models are now multi-lingual it could have been interesting to consider this point.\n * The notion of concept is not clear in the context of the reviews, I assume it is the tokens.\n Can you please detail the definition of the concept in the dataset? This point is central to all evaluated explanation methods.\nDid you consider augmenting the dataset through paraphrasing, do you think it would cause an additional challenge? I do not see a strong limitation in regard to the purpose of the paper and the associated contribution and results.", " The paper introduced CEBaB, a new benchmark dataset for assessing concept-based explanation methods in Natural Language Processing (NLP). CEBaB consists of short restaurant reviews with human-generated counterfactual reviews in which an aspect (food, noise, ambiance, service) of the dining experience was modified. Strengths:\nEvaluating explanation methodology for causal effect is important for the research community. The dataset would be useful for that purpose. \n\nWeakness:\nThe dataset evaluates the model explanation methods based on aspects (food, noise, ambiance, service). This dataset can be used for sentiment analysis. I am not sure if this dataset can help in other classification problems for evaluating causal effects. \n How this paper help developing evaluation techniques for other problems apart from sentiment analysis? They have described the limitations. ", " This paper introduces CEBaB, a new benchmark dataset to assess model explanations for NLP models. The key idea here is to edit short restaurant reviews with human generated counterfactual reviews in which one aspect of the review is modified. 
With aspect-level and review-level sentiment annotations on CEBaB, the authors are able to approximate how a particular model might weigh some abstract concepts more than others when performing sentiment classification. The authors evaluate six (existing and proposed) explanation methods on this benchmark and find that there are significant differences between them. I like that this is a simple and intuitive idea that appears to work well in practice. However, there is a disconnect between the general claims the paper makes and the benchmark. The claims are about NLP model behavior in general but should be constrained to sentiment analysis. I have updated my score following the author response. ### Strengths:\n- This paper addresses an important problem of evaluating explainability methods. The paper is well motivated and well written.\n- The conceptual framing offered by this paper as well as the resource contribution could be useful to future researchers.\n- The authors have also provided an extensive experimental analysis of how well multiple model explanation methods are able to capture the true concept effect.\n- I like the focus on the direction of an intervention's effect on model behavior rather than the scale of effects, as it simplifies the problem to knowing what causes what rather than the scale of a cause's contribution to the effect.\n\n\nWeaknesses:\n- This work could be much better positioned in prior work on benchmarks for explainability and counterfactual analysis for model explanations (such as the Language Interpretability Toolkit).\n- The benchmark is only for sentiment analysis, which limits its use while broader claims are made about NLP explainability. While that's still a good first step, it is arguably easier to \"explain\" a model on a sentiment analysis task versus, say, NLI or other more complicated NLP tasks using CEBaB, and there is no discussion offered in the paper on how this would extend to those tasks.\n\n\n - I don't understand what Table 4 is. The caption is very confusing, so if you could share that, that'll be great. I would like a greater discussion of how the method could be extended to other more complicated tasks. And if there are challenges associated with that, what are those?" ]
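To make the ATE / CaCE / ICaCE terminology used in the responses above concrete, here is a minimal, hypothetical Python sketch of how an ICaCE and an ICaCE-error could be computed for a single human-written counterfactual pair. The toy model, the feature encoding, and the stand-in explainer are all assumptions made for this illustration; none of this is the CEBaB reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 5))  # hypothetical weights: 4 aspect features -> 5-way sentiment

def model(x):
    """Stand-in classifier returning a distribution over 5 sentiment classes."""
    logits = x @ W
    e = np.exp(logits - logits.max())
    return e / e.sum()

# One counterfactual pair: x and x_cf differ only in the 'food' concept (neg -> pos).
x    = np.array([0.0, 1.0, 1.0, 0.0])  # hypothetical encoding of the original review
x_cf = np.array([1.0, 1.0, 1.0, 0.0])  # same review with food edited to positive

# ICaCE: the model's actual output change under this single intervention.
icace = model(x_cf) - model(x)

# A stand-in explainer's estimate of that effect (in practice: CONEXP, TCAV, INLP, ...).
estimated_effect = rng.normal(scale=0.05, size=5)

# ICaCE-error under the L2 distance (cosine and norm-difference are the alternatives).
icace_error = np.linalg.norm(estimated_effect - icace)
print("ICaCE:", icace)
print("ICaCE-error (L2):", icace_error)
```

Averaging `icace` over many pairs sharing the same intervention yields the aggregate CaCE, while averaging `icace_error` over a test set gives the benchmark score the responses above refer to.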
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "vnC6OtTvnuou", "eNgkajIs_vb", "bnb_MBxovbD", "N24LdB18BxS", "kt22EulxD1M", "g5AYqy3g8Sc", "g5AYqy3g8Sc", "2yMaQsx_TZ", "upAhuVrwWFQ", "kt22EulxD1M", "kt22EulxD1M", "nips_2022_3AbigH4s-ml", "nips_2022_3AbigH4s-ml", "nips_2022_3AbigH4s-ml", "nips_2022_3AbigH4s-ml", "nips_2022_3AbigH4s-ml" ]
nips_2022_aLNWp0pn1Ij
GAR: Generalized Autoregression for Multi-Fidelity Fusion
In many scientific research and engineering applications where repeated simulations of complex systems are conducted, a surrogate is commonly adopted to quickly estimate the whole system. To reduce the expensive cost of generating training examples, it has become a promising approach to combine the results of low-fidelity (fast but inaccurate) and high-fidelity (slow but accurate) simulations. Despite the fast development of multi-fidelity fusion techniques, most existing methods require particular data structures and do not scale well to high-dimensional outputs. To resolve these issues, we generalize the classic autoregression (AR), which is widely used due to its simplicity, robustness, accuracy, and tractability, and propose generalized autoregression (GAR) using tensor formulation and latent features. GAR can deal with arbitrary-dimensional outputs and arbitrary multi-fidelity data structures to satisfy the demands of multi-fidelity fusion for complex problems; it admits a fully tractable likelihood and posterior, requires no approximate inference, and scales well to high-dimensional problems. Furthermore, we prove the autokrigeability theorem based on GAR in the multi-fidelity case and develop CIGAR, a simplified GAR with the same predictive mean accuracy but significantly less computation. In experiments on canonical PDEs and scientific computational examples, the proposed method consistently outperforms the SOTA methods by a large margin (up to a 6x improvement in RMSE) with only a few high-fidelity training samples.
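For readers unfamiliar with the autoregression scheme the abstract builds on, the classic two-fidelity AR model is commonly written as below. This is a standard textbook rendering (in the spirit of Kennedy and O'Hagan), not notation taken from the paper itself.

```latex
% Classic two-fidelity autoregression: the high-fidelity response is a
% scaled low-fidelity response plus an independent residual Gaussian process.
\begin{aligned}
  f_h(\mathbf{x}) &= \rho \, f_l(\mathbf{x}) + \delta(\mathbf{x}),\\
  f_l(\mathbf{x}) &\sim \mathcal{GP}\!\big(0,\, k_l(\mathbf{x}, \mathbf{x}')\big),\qquad
  \delta(\mathbf{x}) \sim \mathcal{GP}\!\big(0,\, k_\delta(\mathbf{x}, \mathbf{x}')\big).
\end{aligned}
```

As the abstract describes, GAR replaces the scalar $\rho$ with a tensor/latent-feature construction so that the outputs can be arbitrarily high-dimensional and the low- and high-fidelity training inputs need not be nested (the non-subset case).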
Accept
This paper considers the problem of multi-fidelity fusion using generalized autoregression. The authors especially take on challenges such as high-dimensional outputs and non-subset data structures with this approach. The reviewers agree that the paper is well written and makes a significant contribution to multi-fidelity fusion. I recommend acceptance and strongly encourage the authors to take the reviewer comments into account in preparing the final manuscript.
train
[ "8DV0f66ECnl", "2bs-6GXrmN9", "EU99BPbQU6R", "kcuQndRZaCI", "TnvA2uSe6Aq", "JcKSMnuDeLg", "xJ5gWEG1WFb" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for answering my questions. I've improved my score based on the responses.", " Thank you for your valuable suggestions for our work.\n\nC1: But there's existing non-AR work [1] that is able to handle both non-structured high-dimensional outputs and non-subset multi-fidelity data [1] besides MF-BNN.\n\nR1: Thank you for suggesting this great work. We have cited this work in our revision. As it is presented in the introduction (and is also pointed out by the suggested work [1]), AR is a dominant technique for multi-fidelity fusion. Our work aims to generalize AR to resolve its critical limitation for high-dimension and non-subset challenges. This way, we can derive a simple yet powerful model to deal with most multi-fidelity problems. Unlike many existing deep learning-based methods with thousands of model parameters (such as MF-BNN), the proposed method has only a few model parameters (less than 10 hyperparameters for CIGAR) and thus can naturally deal with a small dataset, which is essentially important for multi-fidelity fusion in the emulation of physics simulations. The superiority of the proposed method is well supported by our experiments, where GAR/CIGAR outperforms the SOTA methods in most cases. We will also compare our results with the latest great work [1] in the future. \n\nC2: Besides, there's no big improvement between the proposed method and baselines (MF-BNN, ResGP) for some of the experiments. Only one metric RMSE is evaluated.\n\nR2: We believe that the proposed method has significantly improved the SOTA methods, including MF-BNN and ResGP. There are indeed situations (non-subset data for Poisson’s, Burger’s, and heat equations with 4 training data points) where the proposed method is no better than MF-BNN. However, as the number of training data points increases to 8 and more, our method outperforms other models with a significant margin. We can confidently conclude that our method shows improved performance for 95% of all cases we have tested (including 6 different datasets), which itself sufficiently justifies the superiority. \n\nThe significance of our work is also partially recognized by the recommended reference [1], where AR is recognized as a predominant method but lacks the ability to deal with high-dimension and non-subset situations, the exact issues we resolve in this work.\n\nC3: The paper mentioned uncertainty quantification. But why there's no evaluation of uncertainty quantification for experiments? It will be great if the author could include other metrics for accuracy (negative log-likelihood) and uncertainty quantification (Continuous Ranked Probability Score [1] or mean interval score [1,2]).\n\nR3: We agree with the reviewer. Unfortunately, due to the limited timeframe, we are not able to conduct a comprehensive investigation with all suggested metrics. We instead add the most commonly used log-likelihood (also used by MF-HNP [1]) as a fair comparison for the real-world datasets. The detailed results and discussions have been supplemented in Appendix E.7. They are consistent with the previous conclusions. Note that we cannot add the results for MF-BNN, because the open-source code does not provide a clear way to generate the predictive variance, and the published codes are difficult to modify without discussions with their authors. We will contact its authors and hopefully resolve this problem in the future.\n\nWe would like to bring up that most multi-fidelity literature (even GP-based method) still uses RMSE as the main metric (e.g. 
in MF-BNN, ResGP, DC, NAR) to compare different methods. Being able to outperform other SOTA methods under RMSE consistently is a significant contribution. \n\nAlso, we hope the reviewer can see the significance of our work and not underrate it, given that it generalizes the predominant multi-fidelity fusion technique, AR, for the modern high-dimension and non-subset challenges. The elegant and tractable framework of AR allows GAR to be further enhanced and modified for different challenging problems, e.g., non-Gaussian likelihoods.\n\nWe genuinely believe that we are submitting a solid work that is fundamental and novel to the multi-fidelity fusion community. We will also open source our code to benefit potential users. Please let us know if you find anything unclear or require our further efforts to improve this work (and your rating :) ). Thank you!\n", " Thank you for your valuable suggestions for our work.\n\nC1: Sparse variational inference might further improve the inference efficiency here.\n\nR1: We agree with the reviewer. Indeed, this is our plan for the next step to improve this method for a very large number of data points.\n\nC2: How do you decide the size of the tensor? \n\nR2: For this work, we keep the original tensor format because each spatial-temporal dataset admits a natural tensor structure. For instance, for a spatial-temporal field with input $x_i$ in a 2d-space domain, we use a 4-mode tensor $(\xi, x_1, x_2, t)$ to index the tensor, where $x$ indicates the space and $t$ indicates the time. \n\nFor data without an explicit tensor format, we follow the HOGP work [1] to organize it into a random tensor. This will not create issues because the index is then learned by the model. However, the indexes are not likely to have the same meaning as the original indexes.\n\nWe genuinely believe that we are submitting a solid work that is fundamental and novel to the multi-fidelity fusion community. We will also open source our code to benefit potential users. Please let us know if you find anything unclear or require our further efforts to improve this work (and its rating :) ). Thank you!\n", " Thank you very much for your valuable feedback on our work.\n\nC1: seems to only work for regression problems for now - if non-Gaussian likelihood is needed, not sure how straightforward it is to translate the whole tensor algebra business to that. Worth mentioning in the paper a bit as well I guess? Nonetheless, focusing on regression data is a worthwhile contribution already in my opinion. \n\nR1: Yes, currently, our model deals with regression problems only. This is mainly due to the fact that almost all previous works are designed for regression problems. It is kind of rare to see real applications with multi-fidelity non-Gaussian likelihood data. Thus, we follow the literature to focus on the regression problem.\nWe think the reviewer has proposed an exciting direction for the multi-fidelity problem. There can be a non-Gaussian likelihood for regression, e.g., if we apply a Box-Cox transformation. Nevertheless, there are already many existing tensor-based models for non-Gaussian likelihoods for us to follow [1,2]. We will certainly extend our work to non-Gaussian likelihoods in the future. Thank you for your very insightful advice.\n\n[1] Xu, Zenglin, and Feng Yan. \"Infinite Tucker decomposition: Nonparametric Bayesian models for multiway data analysis.\" arXiv preprint arXiv:1108.6296 (2011).\n\n[2] Xing, Wei, et al. 
\"Infinite ShapeOdds: Nonparametric Bayesian Models for Shape Representations.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 04. 2020.\n\nC2: The idea of using imaginary variables to match \"missing\" data seems very similar to the use of inducing variables in sparse GPs, can you comment a bit on that? \n\nR2: Yes. If we leave the tensor part aside, they are indeed sparse GPs. When we introduce the tensor, each output corresponds to two parts: one is the known input parameters, which form a normal GP, and the other part corresponds to the “inducing variables” that form a sparse GP (with inducing points).\n\n\nIn summary, we appreciate the positive feedback from the reviewer. We genuinely believe that we are submitting a solid work that is fundamental and novel to the multi-fidelity fusion community. We will also open source our code to benefit potential users. Please let us know if you find any things unclear or require our further efforts to improve this work. Thank you!\n", " This paper designs a generalized autoregression method (GAR) and its efficient implementation CIGAR to deal with non-structured high-dimensional outputs and non-subset multi-fidelity data. The result shows their method outperforms the baselines among several benchmarks at the highest fidelity level. Strengths:\nWell-written paper in general.\nInclude 5 experiments. 2 with real-world data.\n\nWeaknesses: \nMy main concern about the paper is its novelty. Although the author didn't overclaim their contributions \"Generalization of the AR for arbitrary non-structured high-dimensional outputs\" and \"Generalization to non-subset multi-fidelity data for the AR\", which are all for the classic autoregression model specifically. But there's existing non-AR work [1] that is able to handle both non-structured high-dimensional outputs and non-subset multi-fidelity data [1] besides MF-BNN.\n\n[1] Wu, Dongxia, et al. \"Multi-fidelity Hierarchical Neural Processes.\" arXiv preprint arXiv:2206.04872 (2022).\n\nBesides, there's no big improvement between the proposed method and baselines (MF-BNN, ResGP) for some of the experiments. Only one metric RMSE is evaluated. The paper mentioned uncertainty quantification. But why there's no evaluation of uncertainty quantification for experiments? It will be great if the author could include other metrics for accuracy (negative log-likelihood) and uncertainty quantification (Continuous Ranked Probability Score [1] or mean interval score [1,2]).\n\n[1] Gneiting, Tilmann, and Adrian E. Raftery. \"Strictly proper scoring rules, prediction, and estimation.\" Journal of the American statistical Association 102.477 (2007): 359-378.\n[2] Wu, Dongxia, et al. \"Quantifying uncertainty in deep spatiotemporal forecasting.\" arXiv preprint arXiv:2105.11982 (2021).\n No potential negative societal impact I can see.", " The authors here present a generalized autoregressive model to conduct multi-fidelity prediction on data with high-dimensional outputs via tensor decomposition. A different non-subset multi-fidelity data setting is explored. Based on autokrigeability, a more efficient implementation is proposed. Strength:\n1)\tProblems are clearly stated and addressed. 
\n2)\tThe non-subset data scenario is addressed here for the first time.\n3)\tTensor GP is applied here to reflect possible underlying structures of the data.\nWeaknesses:\n1)\tThe notations are a bit confusing, like Z, F and their subscripts.\n2)\tSparse variational inference might further improve the inference efficiency here.\n 1. How do you decide the size of the tensor? N/A", " The authors proposed Generalized Autoregression (GAR) to handle high-dimensional outputs for multi-fidelity fusion problems. GAR is a GP-based approach that employs tensor algebra to efficiently model covariances across the high dimensions. To handle situations where high-fidelity data is not a subset of low-fidelity data, they propose a latent variable approach, and with a simple trick they propose CIGAR, a computationally more efficient version of GAR.\n\nExperiments with GAR and CIGAR showed promising results compared to SOTAs. Strengths:\n- clear and thorough comparison of GAR with existing methods in Sec. 4 \n- exact solutions are obtainable (when subset data exists)\n- although tensor algebra papers tend to be tedious to read, the authors defined the notations clearly and communicated effectively between equations.\n- the problem is clearly defined, and it is a worthwhile contribution to handle the high-dimensional-output multi-fidelity fusion problem. \n\nWeaknesses:\n- there are some typos around, e.g. line 38 there is a 1. after citation [9], or in line 55 the word \"Problems\" is capitalised, but these do not affect the reading experience at all.\n- seems to only work for regression problems for now - if non-Gaussian likelihood is needed, not sure how straightforward it is to translate the whole tensor algebra business to that. Worth mentioning in the paper a bit as well I guess? Nonetheless, focusing on regression data is a worthwhile contribution already in my opinion.\n The idea of using imaginary variables to match \"missing\" data seems very similar to the use of inducing variables in sparse GPs; can you comment a bit on that? Currently only works for Gaussian likelihood. Not sure how straightforward or not it might be to generalise to arbitrary likelihoods in high-dimensional output space." ]
[ -1, -1, -1, -1, 5, 6, 8 ]
[ -1, -1, -1, -1, 4, 3, 3 ]
[ "2bs-6GXrmN9", "TnvA2uSe6Aq", "JcKSMnuDeLg", "xJ5gWEG1WFb", "nips_2022_aLNWp0pn1Ij", "nips_2022_aLNWp0pn1Ij", "nips_2022_aLNWp0pn1Ij" ]
nips_2022_nrksGSRT7kX
RAMBO-RL: Robust Adversarial Model-Based Offline Reinforcement Learning
Offline reinforcement learning (RL) aims to find performant policies from logged data without further environment interaction. Model-based algorithms, which learn a model of the environment from the dataset and perform conservative policy optimisation within that model, have emerged as a promising approach to this problem. In this work, we present Robust Adversarial Model-Based Offline RL (RAMBO), a novel approach to model-based offline RL. We formulate the problem as a two-player zero-sum game against an adversarial environment model. The model is trained to minimise the value function while still accurately predicting the transitions in the dataset, forcing the policy to act conservatively in areas not covered by the dataset. To approximately solve the two-player game, we alternate between optimising the policy and adversarially optimising the model. The problem formulation that we address is theoretically grounded, resulting in a probably approximately correct (PAC) performance guarantee and a pessimistic value function which lower bounds the value function in the true environment. We evaluate our approach on widely studied offline RL benchmarks, and demonstrate that it outperforms existing state-of-the-art baselines.
Accept
This paper introduces the idea of Robust Adversarial RL for offline model-based RL, which could have a high impact. It is well organized and the writing is very comprehensive; the authors manage to convey their idea in concise but informative language. The proposed RAMBO approach performs reasonably well in the presented experiments, although it was pointed out that the paper would benefit from more scenarios showing the necessity of RAMBO compared with the current baseline (COMBO). Questions and issues related to the theory that were raised during the reviewing process have been addressed in the rebuttal.
train
[ "Mp0hvD_P1_Q", "vMJDTvlI6s", "-cJADR6Ke11", "Syez9iD7B3O", "0BBR5HQd9CQ", "CWr7IrvC4R", "NRD-r7WzZj1", "bxl3ecwh3-T", "DbKEwR89opc", "bg0JiDq_fnV", "phP4TeUGEfE", "NRKRcGLDsix", "1URRL-_p9pG", "SPCsO5J0_cI", "W-MHJD7KT-7", "RmrGhq9kxC1", "bFDbYxLQNqP", "BahsH0d7H4D", "io2yKENun2P" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks a lot for getting back to us despite being on holiday. \n\n \n\nWe have modified the additional experiment so that we now choose the value of the regularisation parameter for COMBO by sweeping over $\\beta \\in$ {$0.1, 0.25, 0.5, 5.0$}, and selecting the best performance. The best performance for COMBO is 1.39$\\pm$0.13 which is obtained using $\\beta = 0.25$. For this experiment we used 10 seeds for each configuration (compared to 5 in our main experiments). RAMBO outperforms COMBO for this problem with a statistical significance of $p$=0.02. \n\n \nYou say that it is a disadvantage that our approach does not require uncertainty estimation, because if a reliable uncertainty estimation approach is developed in the future, our approach will be less relevant. We think that it is unreasonable to judge the significance of our work based on a hypothetical situation in which an unsolved problem is solved. \n\n \n\nBoth the COMBO and CQL papers are clear that they use \"conservative'' to mean that the algorithms optimise a lower bound on the value function. To quote directly from the CQL paper 'we analyze CQL to show that the policy updates derived in this way are indeed “conservative”, in the sense that each successive policy iterate is optimized against a $\\textbf{lower bound on its value.}$' We also address a formulation that optimises a lower bound, so we use the term conservative in the same manner as these previous papers. \n\nFor Equation 3, as we said in our original response $T_{MLE}$, is only considered at points in the dataset. Therefore, it is unclear to us what role the generalisation of $T_{MLE}$ plays in Equation 3. \n\n ", " Thanks for this suggestion.\n\nWe have changed the result in the paper (and in Appendix C.1) so that we now select the regularisation hyperparameter for COMBO by sweeping over $\\beta \\in$ {$0.1, 0.25, 0.5, 5.0$} and choosing the best performance. This includes values lower than those used in the original COMBO paper (i.e. < 0.5).\n\nThe best performance obtained by COMBO was 1.39$\\pm$0.13, which was obtained using $\\beta = 0.25$. This is worse than the performance obtained by RAMBO for this problem (with signficance of $p$ = 0.02).", " Thank you for the response. I have understood how RAMBO deals with the terminal states. I will increase my score. ", " Unfortunately, the author-reviewer discussion is in the middle of my holidays, sorry by advance for my telegraphic response.\n\nThe authors seemingly give what I asked at minimal effort, but things do not really add up. For instance, the experiment does not answer my questions as it has been performed with a single hyperparameter. Simpler experiments are used for deeper analysis with more statistical significance and with hyperparameter sweeps. There are still inconsistencies between the theory and the implementation that have been barely discussed. In general, I feel that the flow of the story of the paper is not well-rounded yet. I'd prefer it to be rejected now to be a good paper at the next conference, rather than a mediocre paper at NeurIPS. I keep my original recommendation: borderline reject.\n\n*Furthermore, the reward penalization approaches require uncertainty estimation, whereas our approach does not.* I see that as a bug rather than a feature. 
Your approach implicitly does uncertainty estimation, meaning that if tomorrow (finally) a reliable uncertainty estimation were to be developed, you would not be able to integrate it out of the box into your algorithm.\n\n*Use of the term conservatism*: CQL (and COMBO, which is basically the same algorithm) has been analyzed as an estimate of the value that is penalized according to the deviation of the target policy wrt the behavioral policy, which, in spirit, is conservatism.\n\nEq 3: I mean that in continuous space, generalization comes into play in what $T_{MLE}$ is going to be.\n", " Could you please acknowledge that you have read the response to your review? Also, please reply to the authors to indicate how they managed to answer the points raised in your review and how this impacts your score. Finally, make sure that you update your score accordingly.", " I have read the analysis in Appendix C.1 and I think the results do take a step further in revealing the mechanism of adversarial model training. I have one more concern about the Appendix C.1 experiments, because the degree of the pessimism on Q values is also affected by certain hyper-parameters. I noticed that you select the lowest value of \beta from the original paper, but it would be more persuasive to include results where \beta is smaller (thus reducing the degree of pessimism) while COMBO still achieves comparable performance on this toy environment. ", " Thanks for your response. I will increase my score.", " Thanks for getting back to us. We are glad that you appreciate the additional results we added in Figures 2 and 3. \n\n \n\nWe think it is difficult to definitively conclude what causes the differences between the algorithms on the more complex domains. However, upon reviewing the results for the medium-expert datasets in more detail, we observe that for Walker2D medium-expert there is initially some instability in the Q-values during training on some runs, which we do not observe for other datasets. This likely happens because, as illustrated in Appendix C.1, regularisation is introduced by RAMBO gradually, so there is a tendency for values to be over-estimated early in training. This unstable training for some runs results in a less performant policy on these runs. This is reflected by the fact that the variation in the performance between seeds is very high for Walker2D medium-expert compared to the other datasets. We are unsure why we observe this issue on Walker2D medium-expert specifically. We did not observe this phenomenon for the Hopper and Halfcheetah medium-expert datasets, but RAMBO still performs reasonably well on these problems. \n\n \n\nFor Antmaze, we did not observe the same instability during training, so this does not appear to explain the performance for Antmaze. One possible explanation is that because the Antmaze domain is higher-dimensional, it requires more training to appropriately train the adversarial model. However, we did try running the Antmaze datasets for more iterations, and this did not seem to improve performance. We do wish to point out that RAMBO does perform better than COMBO for some of the Antmaze datasets, and achieves scores greater than 0 across most of the datasets, unlike COMBO. \n\n \n\nWhile generating the results in Appendix C.1, we did test some other slight variations of the one-dimensional \"Single Transition\" problem. The overall tendency was that the policy produced by COMBO was more likely to become stuck in a local minimum. 
However, we did find that for some variations of the problem (particularly those in which there was one action that was distinctly better than the rest), both algorithms had no trouble finding the globally optimal action.", " Thanks for your response. I have read the results in Appendix C.1. I think the results are interesting and the differences in Fig. 2 and Fig. 3 strongly demonstrate the difference between the two regularizer mechanisms. \n\nNow I have a further question about the problem. The authors state that \"RAMBO consistently performs better than COMBO for this problem\" (lines 784-785), but in Medium-Expert (MuJoCo) and AntMaze, RAMBO is worse than COMBO in general. Can the authors give a further explanation of this?\n\nOn the other hand, did you conduct the experiments in Appendix C.1 on different tasks? Does the difference generally exist?", " Thank you for responding to us so quickly.\n\nOur apologies, we misunderstood your original question regarding terminal states. Thank you for clarifying your concern. We hope that we have addressed the issues you have raised in our response below, but please do let us know if you have further questions. \n\n\n\n$V^\pi_\phi$:\n\n$V^\pi_\phi$ is the value function learned by the actor-critic RL algorithm used to optimise the policy. It is a separate network, defined by different parameters ($\theta$, let's say). In the paper, we use the subscript of the value function, $V^\pi_M$, to indicate that the value function is defined for MDP $M$. So, in the case of $V^\pi_\phi$, the subscript $\phi$ indicates that this is the value function trained in the MDP model defined by parameters $\phi$ (denoted by $\widehat{T}_{\phi}$ in the paper). To reiterate, $V^\pi_\phi$ itself is a separate network defined by different parameters, $\theta$.\n\n\n$\textbf{Terminal states}$:\n\nWe originally misunderstood your question concerning terminal states. We clarify here how terminal states are handled in our implementation.\n\nIn our implementation, we assume that we have access to the termination function of the MDP, which we refer to here as $terminal(s)$. This is a standard assumption in model-based RL. In our code, we compute $r + \gamma V^\pi_\phi(s')$ as required by Equation 5 as: $r + \gamma \cdot (\textnormal{not}\ terminal(s')) \cdot V^\pi_\phi(s')$. This is the same method used to compute the Bellman error in most RL implementations, and is implemented in line 890 of rambo/algorithms/rambo.py of our code in the supplementary material.\n\nThus, if $s'$ is terminal, our implementation treats $s'$ as if it has zero value due to the use of the termination function. Therefore, it does not matter if the estimate of $V^\pi_\phi(s')$ is inaccurate at terminal states, as the value utilised at terminal successor states is always set to zero.\n\nBecause terminal states are always treated as having zero value, we do not think that the handling of terminal states leads to instability in our algorithm. \n\n\n$\textbf{Sensitivity to hyperparameters}$: \n\nAs you have pointed out, the performance of our algorithm does vary significantly with untuned hyperparameters, specifically the rollout length. \nHowever, we think that including these results is actually a strong point of our submission, as it provides transparency as to how well our algorithm performs for a variety of hyperparameters. 
\nExisting commonly cited model-based offline RL papers (COMBO, MOPO, MOReL) only provide results for tuned values of the rollout length.\nTherefore, it is unclear whether our algorithm is more sensitive to the rollout length than other existing algorithms.\n\nWe do not think that the inclusion of these additional results for untuned hyperparameters should be considered a negative aspect of our submission.\nPenalising a submission based on additional results for untuned hyperparameters encourages authors to only present cherry-picked results in their papers. \nWe think that including a wider range of results, including for situations in which the algorithm does not perform as well, makes it easier for researchers to assess the potential limitations of a given approach.\n\n\n$\textbf{Offline hyperparameter tuning}$:\n\nIn Appendix C.3, we provide results where the hyperparameters were selected using a heuristic that is evaluated offline. To generate the results in Appendix C.3, we rerun the algorithm after selecting the hyperparameters offline. These results demonstrate that we can still achieve solid performance with RAMBO using only offline hyperparameter tuning.\n\nThe results presented in the main body of the paper use online hyperparameter tuning by comparing three possible configurations of the hyperparameters (details in Appendix B.4). ", " I thank the authors for the response. I am still confused about Equation (5).\n1. According to the authors' response, the gradient is computed by $V_\phi^\pi(s^\prime)$. Is $V_\phi^\pi(s^\prime)$ an additional network?\n2. I wonder whether the terminal condition makes $V_\phi^\pi(s^\prime)$ discontinuous and thus leads to a high variance of the gradient. According to Table 1 and Table 6, the performance of RAMBO is relatively poor in Hopper and Walker, and the value function even diverges in these two environments without tuning the hyperparameters. The authors may want to provide more discussion/results to demonstrate the effectiveness of RAMBO in dealing with terminal states.\n3. Moreover, I wonder whether the data $(s, a, r, s^\prime)$ is included in the training set if $s$ is a terminal state. In popular implementations, this data is not included in the training set. However, in this paper, we need an accurate estimate of $V^\pi(s^\prime)$, even if $s^\prime$ is a terminal state.\n\nFinally, I wonder whether the hyperparameters of RAMBO are tuned with online evaluation. The authors may want to provide a method that tunes these hyperparameters offline, as online evaluation may be risky in some real-world applications.
Please let us know if you feel that this issue has not been resolved, and if so, please explain the issue in more depth.\n\nTheorem 1: Thank you for pointing out this typo; it has been corrected.\n\nProposition 2: Thanks for noting this issue with the notation. In the preliminaries section, we have defined $V^\pi_\phi$ (with no $s$) to be the expected value under the initial state distribution. We now use this notation on the left-hand side of Proposition 2 and in the proof to avoid the issue of $s$ on both sides of the equation.\n\nEmpirical validation: We have added additional results on a simple example, specifically to analyse the differences between RAMBO and COMBO, the most similar existing algorithm, in Table 3 and Appendix C.1. These results illustrate that because of the adversarial training used by RAMBO, the value function produced by RAMBO is initially optimistic. Pessimism is introduced into the value function gradually as the model is trained to produce more pessimistic transitions. These new results indicate that this makes policy optimisation less likely to become stuck in local maxima compared to COMBO, which introduces pessimism into the value function at the outset.\n\nEquation 9: Using the TV loss per Equation 8 would have required first training one network to learn $\hat{T}_{MLE}$, and then a second network trained using the TV loss with respect to the first network. Additionally, optimising the TV loss would be more complicated than the MLE. We wished to make our practical implementation as simple as possible, which is why we used the standard MLE loss to train a single model. It is unclear to us that there would have been any practical advantage to using the TV loss approach with two models.\n\nRemark 1: We agree that our approach has a similar ultimate effect to adding reward penalties of a specific magnitude. However, we think it is insightful that appropriate regularisation can be achieved by modifying the MDP model itself, rather than adding penalties. By proposing our approach of adversarially training the dynamics model to minimise the value function, we make a connection between this formulation of offline RL and robust adversarial RL (RARL), as emphasised in Remark 2. Furthermore, the reward penalisation approaches require uncertainty estimation, whereas our approach does not. \n\nMany existing papers which have been impactful share the same approach of penalising the value of out-of-distribution state-action pairs, but achieve this regularisation in a different manner. Therefore, we believe our new approach to regularisation can still be a valuable idea for the research community.\n\nLines 1 & 6: These minor points have been resolved.\n\nUse of the term conservatism: Many existing papers use the term conservatism to refer to approaches which do not constrain the policy to remain close to the behaviour policy, and instead regularise the value function. For example, it is used in the title of the COMBO [2] and CQL [3] papers. Therefore, our usage is consistent with how this term is used in the research community.\n\n\n[1] Uehara, Masatoshi, and Wen Sun. \"Pessimistic model-based offline reinforcement learning under partial coverage.\" ICLR (2021).\n[2] Yu, Tianhe, et al. \"COMBO: Conservative offline model-based policy optimization.\" Advances in Neural Information Processing Systems 34 (2021): 28954-28967.\n[3] Kumar, Aviral, et al. 
\"Conservative Q-learning for offline reinforcement learning.\" Advances in Neural Information Processing Systems 33 (2020): 1179-1191.\n", " Thank you for your feedback on our work.\n\n1).\nIn Equation 5, if the reward plus the value at the successor state, $r + \\gamma V^\\pi_\\phi(s’)$, are lower than what was expected compared to $Q^\\pi_\\phi(s, a)$ then the parameters of the model, $\\phi$, will be updated to make receiving reward $r$ and transitioning to $s’$ more likely after executing $(s, a)$. Conversely, if the reward plus value at the successor state are higher than expected the model parameters will be updated to make receiving that reward and successor state less likely. This is analogous to policy gradient algorithms, except that we modify the distribution over rewards and successor states, rather than over actions.\n\nIf $s’$ were to be a terminal state, then $V^\\pi_\\phi(s’)$ would be low as no future reward can be gained from $s’$. Because the value of reaching $s’$ is low, the adversarial update according to Equation 5 will make transitions to $s’$ more likely.\n\n2). \nCompared to other offline model-based RL algorithms, RAMBO does not appear to be particularly sensitive to hyperparameters. For example, in the appendix of the COMBO paper, significant variation in performance is also reported for different hyperparameter values.\n\nWe also do not think that RAMBO is particularly difficult to tune. In our experiments, we only modified two hyperparameters across a total of three different configurations. For COMBO, six different hyperparameters were modified between datasets (rollout length, learning rates, conservative coefficient, regularisation distribution, rollout policy, ratio of real data). \n\n3). \nIn the experiments section, we report a training time of 24-30 hours for RAMBO. COMBO reports a computation time of one day. While RAMBO has to additionally adversarially train the transition model, COMBO requires optimising the additional regularisation term in the value function update. It appears the result is comparable computation time.\n\n4). \nThank you for this suggestion. However, unfortunately we did not have time to run this analysis during the rebuttal period.\n", " Thank you for your feedback on our work.\n\nBecause the original COMBO paper used the -v0 datasets, we used the results for COMBO on the -v2 datasets reported by [1]. We are not sure why these results differ from the original COMBO paper for only some of the datasets, and have contacted the authors of [1] to seek clarification as to how these results were generated.\n\nWe have added further analysis of RAMBO, specifically in comparison to COMBO to the experiments section (Table 3) and Appendix C.1. These results illustrate that the value function for RAMBO is initially optimistic. Because of the adversarial training, pessimism is introduced into the value function gradually by RAMBO as the model is trained to generate more pessimistic transitions. These results indicate that this makes the policy optimisation less likely to become stuck in local maxima compared to COMBO (and other existing algorithms), which introduce pessimism into the value function at the outset. 
We believe that this demonstrates how RAMBO differs from existing approaches, and that gradually increasing the level of pessimism can be a useful modification to existing offline RL algorithms.\n\nFor the baselines in the experiments section, we opted to include algorithms that are commonly compared against and that most readers will be familiar with. We also chose to compare against algorithms for which results on AntMaze-v0 were available. We have added references to the papers you have mentioned to the related work section.\n\n[1] Jianhao Wang, Wenzhe Li, Haozhe Jiang, Guangxiang Zhu, Siyuan Li, and Chongjie Zhang. Offline reinforcement learning with reverse model-based imagination. Advances in Neural Information Processing Systems, 34, 2021.\n", " Thank you for your feedback on our work.\n\nWe have added additional results comparing RAMBO and COMBO in the experiments section (Table 3) and Appendix C.1. These results illustrate a key difference between RAMBO and COMBO. RAMBO introduces pessimism gradually as the adversary is trained to generate more pessimistic transitions. For COMBO, pessimism is present in the value function throughout training as it is part of the value function update. \n\nIn the example we have added, there are several distinct regions of the action space which are covered by the dataset. Because the value function produced by RAMBO is initially optimistic, and pessimism is introduced gradually, there is less chance of the policy learned by RAMBO becoming stuck in poor local maxima in this example. This is a natural characteristic of RAMBO. For COMBO, we observe that the value function contains local maxima throughout training, and it is more likely that the final policy finds a local maximum. We believe that this illustrates a crucial difference between RAMBO and COMBO, and that gradually increasing the level of pessimism can be a useful modification to existing offline RL algorithms.\n", " The authors propose RAMBO for offline RL, which imposes conservatism by adversarially modifying the transition dynamics of a learned model. RAMBO is built on a formulation for which PAC bounds are guaranteed. The experiments are conducted in the D4RL benchmark and several SOTA baselines are taken for comparison. \nStrengths\n1. The article is well-written and easy to follow.\n2. The solution is simple to implement and sounds reasonable.\n\n\nWeaknesses\n\n1. RAMBO is similar to COMBO: COMBO learns a conservative Q through regularization based on the policy, while RAMBO reduces unreasonable V through regularization (adversarial training) based on the model. The results in Table 1 also show the similarity in performance. It is OK to give a novel regularization from other perspectives. But I think further comparison should be considered to clarify the advantages/disadvantages/scopes of the two regularizations. The article is well-written and sound to me. I have no further questions. \nAs mentioned above, I think further comparison should be considered to clarify the advantages/disadvantages/scopes of the two regularizations. If it is done and we can find more insights into the two regularizations after that, I will consider increasing the score. ", " To remedy the extrapolation error in the offline setting, this paper formulates offline RL as an adversarial process between the agent and the environment. The environment is modeled as a neural network and trained with both an MLE objective and a conservative objective which essentially minimizes the state value of the agent. 
Previous research [1] proved that this formulation enjoys a PAC performance guarantee, and the experiments do demonstrate some performance improvement on certain D4RL tasks. \n\n[1] Masatoshi Uehara and Wen Sun. Pessimistic Model-Based Offline Reinforcement Learning under Partial Coverage. ICLR 2022. Overall, I enjoyed reading this paper. \n+ This paper is well organized and comprehensive in presentation. The authors manage to convey their idea in concise but informative language.\n+ The main argument that adversarial training can relieve value overestimation and model defects is supported by an illustrative example in the appendix. This is a persuasive argument to justify the effectiveness of the proposed method. \n+ The authors provide the details about their hyper-parameter configuration and strategy in the appendix. I think the strategy of deciding on RAMBO's hyper-parameters is acceptable, as no excessive online fine-tuning is involved in the training stage. \n\nSome weaknesses:\n+ The baselines, whether model-based or model-free, seem to be old at this time. For model-free ones, there are [2] and [3]; for model-based algorithms, there is [4] as far as I can recall. \n+ The analysis of RAMBO is too brief. From Table 2 I can tell that RAMBO does better on medium-replay datasets while (comparatively) worse on medium-expert datasets, but the authors did not provide an in-depth analysis of this phenomenon. More discussion, such as whether this phenomenon is related to adversarial training and how, is expected. \n\n[2] Gaon An, Seungyong Moon, Jang-Hyun Kim, Hyun Oh Song. Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble. NeurIPS 2021. \n\n[3] Ilya Kostrikov, Rob Fergus, Jonathan Tompson, Ofir Nachum. Offline Reinforcement Learning with Fisher Divergence Critic Regularization. ICML 2021.\n\n[4] Yijun Yang, Jing Jiang, Tianyi Zhou, Jie Ma, Yuhui Shi. Pareto Policy Pool for Model-based Offline Reinforcement Learning. ICLR 2022. \n\n Just one question about the COMBO baseline. In the original paper of COMBO, it was not clarified which version of the dataset (v0 or v2) was used in the experiment, so I suppose COMBO used v0 datasets as CQL did. So did you re-implement COMBO and test it with the v2 dataset? \n\nAlso, it is interesting that the performance of COMBO reported in this paper is precisely identical to that in the original paper, except for walker2d-medium-expert, hopper-medium-replay, hopper-medium and walker2d-medium. So would you elaborate more on how you obtained the result of COMBO? \n\nPlease correct me if there is any misunderstanding. \n The authors do mention some limitations of their work in the conclusion section, such as the computational cost. However, I don't think this is a major concern. \n
The authors provide a PAC bound that guarantees the accuracy of the learned value function.\n2. The authors prove that the learned value function is a lower bound of the value function in the real environment.\n\nNevertheless, the authors may want to provide more details on the algorithm and experiment parts.\n1. I cannot understand how the gradient is computed according to (5), especially if $s^\prime$ is a terminal state. \n2. The appendix shows that RAMBO is sensitive to hyperparameters, especially in Hopper and Walker2D. I wonder whether the gradient in (5) has high variance if $s^\prime$ is a terminal state, and then the optimization of models becomes unstable.\n3. I wonder whether the model training phase in RAMBO leads to high computational cost. The authors may want to compare RAMBO and COMBO in wall-clock time. \n4. I wonder whether RAMBO learns a lower bound of the true value function in the authors' empirical studies. If the authors can provide analyses similar to those in TD3 [1] and REDQ [2], the authors' claims would be better supported.\n\nClarity:\n\nThis paper is well written and easy to follow.\n\nSignificance:\n\nThis paper provides a new idea for learning a pessimistic value function. However, the performance improvement is insignificant, as COMBO outperforms RAMBO^OFF, as shown in the appendix. \n\n[1] Fujimoto S, Hoof H, Meger D. Addressing function approximation error in actor-critic methods[C]//International Conference on Machine Learning. PMLR, 2018: 1587-1596.\n\n[2] Chen X, Wang C, Zhou Z, et al. Randomized Ensembled Double Q-Learning: Learning Fast Without a Model[C]//International Conference on Learning Representations. 2020. Please refer to the \"Strengths And Weaknesses\" part. The authors have addressed parts of the limitations of RAMBO. Nevertheless, I still have concerns about the sensitivity of RAMBO to the hyperparameters.
I understand that the authors, once again, try to make a connection with the literature, but this is again confusing. I would have expected to see here the TV with respect to the dataset instead, which we learn much later to be what the practical algorithm is doing (Section 5.2).\n * Theorem 1: $(1-\gamma)^2$ should be $(1-\gamma)^{-2}$.\n * Proposition 2 is wrongly formalized: $s$ is used both in $V_\phi^\pi(s)$ and in the sampling in the expectation. I have looked into the proof in Appendix A and found the same imprecision there. It should actually be instead $z\sim d_\phi^\pi(s)$ where $d_\phi^\pi(s)$ is the state distribution starting from $s$.\n* The empirical validation is limited:\n * Some simpler problems (multi-armed bandits, gridworlds with finite state-action space) might be useful for a better analysis of what it does better than other model-based Offline RL algorithms.\n * Some harder problems with visual input (Atari for instance) would be useful too, as that is where model-based approaches typically fail.\n\n[Foster2021] Offline reinforcement learning: Fundamental barriers for value function approximation, Foster, Dylan J and Krishnamurthy, Akshay and Simchi-Levi, David and Xu, Yunzong (NeurIPS, Offline RL workshop 2021)\n\n[Xiao2022] The Curse of Passive Data Collection in Batch Reinforcement Learning, Xiao, Chenjun and Lee, Ilbin and Dai, Bo and Schuurmans, Dale and Szepesvari, Csaba (AISTATS 2022). In addition to addressing the weakness points above, I would appreciate it if the authors answered the following questions:\n* Equation 9: why use the *standard* MLE loss instead of the TV loss as the theory prescribes?\n* Remark 1: I think it is important to make this point stronger. I fail to see why Problem 1 does not reduce to pessimistic reward modification. My intuition is that the worst possible transition outcome consists in transitioning to a state with minimal value, which is equivalent to applying a pessimistic reward modification of this amplitude.\n\nMinor points:\n* Line 1: find near-optimal => near-optimality is intractable in general in Offline RL, so this is not the right objective for Offline RL.\n* Line 6: achieve conservatism => enforce conservatism.\n* I don't like the use of conservatism to refer to any kind of regularisation with respect to the dataset, including pessimism. Generally, conservatism is the regularization of the policy to remain close to that of the behavioural policy, which is generally made in opposition to pessimism, which intends to find a lower bound to the value function and does not guarantee remaining close to the behavioural policy.\n As I have said before, I am worried that over-stating theoretical findings may lead to improper use of them in the future. Other than that, I do not have any concern with respect to the societal impact of this submission." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "Syez9iD7B3O", "CWr7IrvC4R", "bg0JiDq_fnV", "NRKRcGLDsix", "io2yKENun2P", "SPCsO5J0_cI", "bxl3ecwh3-T", "DbKEwR89opc", "W-MHJD7KT-7", "phP4TeUGEfE", "1URRL-_p9pG", "io2yKENun2P", "BahsH0d7H4D", "bFDbYxLQNqP", "RmrGhq9kxC1", "nips_2022_nrksGSRT7kX", "nips_2022_nrksGSRT7kX", "nips_2022_nrksGSRT7kX", "nips_2022_nrksGSRT7kX" ]
nips_2022_k713e8vXzwR
Large-Scale Differentiable Causal Discovery of Factor Graphs
A common theme in causal inference is learning causal relationships between observed variables, also known as causal discovery. This is usually a daunting task, given the large number of candidate causal graphs and the combinatorial nature of the search space. Perhaps for this reason, most research has so far focused on relatively small causal graphs, with up to hundreds of nodes. However, recent advances in fields like biology enable generating experimental data sets with thousands of interventions followed by rich profiling of thousands of variables, raising the opportunity and urgent need for large causal graph models. Here, we introduce the notion of factor directed acyclic graphs ($f$-DAGs) as a way to restrict the search space to non-linear low-rank causal interaction models. Combining this novel structural assumption with recent advances that bridge the gap between causal discovery and continuous optimization, we achieve causal discovery on thousands of variables. Additionally, as a model for the impact of statistical noise on this estimation procedure, we study a model of edge perturbations of the $f$-DAG skeleton based on random graphs and quantify the effect of such perturbations on the $f$-DAG rank. This theoretical analysis suggests that the set of candidate $f$-DAGs is much smaller than the whole DAG space and thus may be more suitable as a search space in the high-dimensional regime where the underlying skeleton is hard to assess. We propose Differentiable Causal Discovery of Factor Graphs (DCD-FG), a scalable implementation of $f$-DAG constrained causal discovery for high-dimensional interventional data. DCD-FG uses a Gaussian non-linear low-rank structural equation model and shows significant improvements compared to state-of-the-art methods in both simulations as well as a recent large-scale single-cell RNA sequencing data set with hundreds of genetic interventions.
Accept
In this paper, the authors propose a new DAG constraint for low-rank adjacency matrices, which can scale to larger graphs. All the reviewers consider this paper to be sound, and the experiments are well designed. However, one question about the case of different graph spaces raised by another reviewer should be addressed in the final version.
train
[ "gnR_M-z-WJ3", "ZkffK-ETA67", "mD460VI6TB", "4AnXiiSKrJ3", "XSScXHJJhfE", "TaJM35EYnhP", "HlyZOfI-VkC", "sHsky6D8LVE", "HpKqNNESSU1", "KcnTKmHAzm0", "szuEDVlVVyL", "zj_swDhfZd2", "q_sN4pbIeW3", "4FaJo88pvT" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear reviewers and area chairs, \n\nWe noticed that the server of anonymous4openscience was down, so we have took the liberty to upload our files to a separate Google Drive so that the remaining reviewer(s) can access the results of our supplementary experiments. \n\nhttps://drive.google.com/file/d/1PlocBals72tAhSeZ-ShFdU6yRooT8GsJ/view?usp=sharing\n\nThanks for your attention,\nThe Authors,", " Although we leave the development of thorough benchmarking studies in the context of observational data as future work, we are happy to report that we managed to run a limited set of experiments to accommodate the reviewer's request. \n\nMore precisely, we generated data in two experimental settings (linear and non-linear), matching the parameters we use in the paper (d=100, m=10), but with only observational data. Because of time constraints, we simulated data for only five sampled DAGs per setting instead of ten. We report results for NOTEARS, NOTEARS-LR, as well as DCD-FG (the top three performing methods throughout the paper).\n\nThe results of these experiments (https://drive.google.com/file/d/1PlocBals72tAhSeZ-ShFdU6yRooT8GsJ/view?usp=sharing; files nlin-no-interv.pdf and lin-no-interv.pdf)** show that performance is overall lower on observational data compared to interventional data. This is expected as each intervention provides richer information about the causal structure, and similar results are reported in the appendices of the DCDI paper. However, DCD-FG still outperforms the other competing methods. Especially, it is interesting that DCD-FG outperforms NOTEARS-LR by a large margin, even on the linear dataset. \n\nWe will add these results to the final version of the paper. However, even though the results suggest that the method is largely applicable on observational data, we will still carefully discuss that the scope of the paper is only interventional data.\n\n** Unfortunately anon4openscience is currently down at the time of writing this post, so we have uploaded the results onto another public hosting service (Google Drive). We will attempt tomorrow posting them to anon4openscience as well.", " We thank the reviewer for acknowledging that in the context of our paper, the DAG constraints we consider are both necessary and sufficient.\n\nThe reviewer raises the point that the subset of baselines derived from NOTEARS might not be adequate in this benchmarking because they have been designed for inference on observational data. First, we would like to highlight that IGSP is specifically designed to handle interventional data, and has been part of our comparisons throughout all of our experimental settings. Second, while we agree with the reviewer that NOTEARS has originally been proposed for observational data, we believe that our adaptation of NOTEARS and NOTEARS-LR to the (perfect) interventional setting is a sensible point of comparison (full description in Appendix F.3). Indeed, this extension simply consists in masking of likelihood for the intervened nodes as proposed in the DCDI manuscript [1]. In fact, similar adaptations of NOTEARS have already been considered in the context of learning from temporal data [2], so we would consider them fairly standard by now. Additionally, we would like to note that the NOTEARS-LR extension performs extremely well on the linear Gaussian SEM experiments (Figure 4). \n\nThen, the reviewer proposes that we apply the method in the purely observational setting for fair comparison with NOTEARS. 
Overall, we consider this investigation to be out of scope for this manuscript for the following reasons:\n1. The observed beneficial performance of NOTEARS on observational data might be due to specific properties of the simulated data [3, 4] exploited at inference time. Indeed, the authors of those papers showed that z-scoring the data before feeding it to NOTEARS caused it to perform as poorly as random guessing. However, in this work and in the DCDI [1] paper, we present experiments with interventional data in which gradient-based methods lead to state-of-the-art results *after data normalization*. \n2. The signal-to-noise ratio of observational data, especially in the biological applications that we highlight, is too low for any reasonable inference, as has been highlighted in the field of single-cell RNA-seq [5].\n3. Given recent substantial advances in experimental biology, interventional data is rapidly growing, both in lab models, and in human data (with natural genetic variants as interventions). Our work offers an important – and currently underserved – use case. \nWe will add these discussions to the camera-ready version of the paper.\n\nWe will also consider adding some benchmarks in the observational setting to the final version of the paper, but given the current time constraints (~20 hours left until the end of the author-reviewer discussion period), it is unlikely that we will be able to accommodate this additional request for benchmarks during the present discussion period.\n\n[1] Brouillard, Philippe, et al. \"Differentiable causal discovery from interventional data.\" Advances in Neural Information Processing Systems 33 (2020): 21865-21877.\n[2] Gao, Tian, et al. \"IDYNO: Learning Nonparametric DAGs from Interventional Dynamic Data.\" International Conference on Machine Learning. PMLR, 2022.\n[3] Reisach, Alexander Gilbert, et al. \"Beware of the Simulated DAG! Causal Discovery Benchmarks May Be Easy to Game.\" Advances in Neural Information Processing Systems 34 (2021).\n[4] Kaiser, Marcus, et al. \"Unsuitability of NOTEARS for Causal Graph Discovery.\" arXiv (2021).\n[5] Pratapa, Aditya, et al. \"Benchmarking algorithms for gene regulatory network inference from single-cell transcriptomic data.\" Nature Methods 17, 147–154 (2020).", " I agree with the authors that if we decompose the graph structure and the parameters, then the DAG constraint becomes sufficient and necessary.\n\nI just noticed that the paper works on interventional data. Meanwhile, the baselines, NOTEARS and NOTEARS-lowrank, are developed for observational data. Thus it would be good if there were some results on observational data.", " **Theoretical results supporting robustness to noise:** We agree with the reviewer that a fully-fledged analysis of our estimator would be interesting, but we consider this out of scope for the paper in its present form. We partially address this point in Theorem 1 by considering random corruptions of the underlying skeleton of the causal graph as a proxy for statistical noise and showing that such corruptions are likely to increase the Boolean rank of the graph. Moreover, we consider the favorable generalization performance of DCD-FG in Section 5 as empirical evidence toward the claim of increased noise tolerance.\n\nTo address this question, we will correct the relevant sentence in the abstract to be more precise, as the current version is an overstatement. 
\n\n**Runtime comparisons:** A preliminary comparison of runtime for specific steps of the algorithm can be found in Figure 3 of the paper. To further clarify the superior runtime of DCD-FG, we will add information on total runtime to the paper for both the simulated data and the biological case study (https://anonymous.4open.science/r/rebuttal-neurips-D93C/runtime_total.pdf). We now briefly describe the results.\n\nOn our synthetic datasets, with a relatively small number of nodes (d=100), DCD-FG has reasonable runtime (50 min) in comparison to other algorithms. In order to provide a fair comparison with another neural-network-based method, we ran DCDI on one of the datasets and noticed that DCD-FG is indeed significantly faster (50 vs 150 min). Additionally, we wish to note that on this small dataset, the runtime of vanilla NOTEARS can appear to be competitive with low-rank approaches (NOTEARS-LR & DCD-FG). However, this is mostly caused by NOTEARS not learning any meaningful graph. In those cases, the augmented Lagrangian method will terminate faster since the algorithm finds a very sparse adjacency matrix, resulting in the very poor statistical performance observed in Section 5.1. \n\nOn our large-scale biological dataset, with thousands of features, we were hard-pressed to find _any_ algorithms that (1) admit non-linear functional relationships between nodes and (2) terminate in under 24 hours (IGSP requires clustering of features into at most 20 groups to terminate within a day; DCDI yields memory errors). By contrast, DCD-FG terminates in about 2 hours in most cases.\n\n[1] Dixit, Atray, et al. \"Perturb-Seq: dissecting molecular circuits with scalable single-cell RNA profiling of pooled genetic screens.\" Cell 167.7 (2016): 1853-1866.\n\n[2] Frangieh et al. \"Multimodal pooled Perturb-CITE-seq screens in patient models define mechanisms of cancer immune evasion.\" Nature Genetics (2021).\n\n[3] Norman et al. \"Exploring genetic interaction manifolds constructed from rich single-cell phenotypes.\" Science (2019).\n\n[4] Adamson et al. \"A Multiplexed Single-Cell CRISPR Screening Platform Enables Systematic Dissection of the Unfolded Protein Response.\" Cell (2016).\n\n[5] Heimberg et al. \"Low Dimensionality in Gene Expression Data Enables the Accurate Extraction of Transcriptional Programs from Shallow Sequencing.\" Cell Systems (2016).\n\n[6] Cleary et al. \"Efficient Generation of Transcriptomic Profiles by Random Composite Measurements.\" Cell (2017).", " We thank the reviewer for their review and are glad they approve of our experimental validation. Note that due to the character limit in OpenReview comments, our response is broken into multiple parts.\n\n**Confusing introduction:** We acknowledge that lines 41-44 were confusing and will revise them in the final version.\n\n**Presentation of theoretical properties:** We apologize for any confusion caused by the presentation of Proposition 1 and clarify below the relationship of the Boolean low-rank models we consider with general low-rank models.\n\nWe argue that one could consider a variety of low-rank graph models. 
In particular, the following three seem natural to us:\n\n\\begin{align*}\n\\mathcal{G}\\_{\\mathrm{lin}}^m &= \\\\{ G : G \\text{ admits weighted adjacency matrix}~ W \\in \\mathbb{R}^{d \\times d} \\\\\\\\\n& \\quad \\quad \\text{ with } W=UV \\text{ for } U \\in \\mathbb{R}^{d \\times m}, V \\in \\mathbb{R}^{m \\times d}\\\\}, \\text{ the set of linear rank $\\leq m$ graphs}\n\\end{align*}\n\n\\begin{align*}\n\\mathcal{G}\\_{\\mathrm{lin,nonneg}}^m &= \\\\{ G : G \\text{ admits weighted adjacency matrix } W \\in \\mathbb{R}\\_{\\geq 0}^{d \\times d} \\\\\\\\\n& \\quad \\quad \\text{ with } W=UV \\text{ for } U \\in \\mathbb{R}\\_{\\geq 0}^{d \\times m}, V \\in \\mathbb{R}\\_{\\geq 0}^{m \\times d}\\\\}, \\text{ the set of linear non-negative rank $\\leq m$ graphs}\n\\end{align*}\n\n\\begin{align*}\n\\mathcal{G}\\_{\\mathrm{bool}}^m &= \\\\{ G : G \\text{ admits adjacency matrix } A \\in \\\\{0,1\\\\}^{d \\times d} \\\\\\\\\n& \\quad \\quad \\text{ with } A=U \\diamond V \\text{ for } U \\in \\\\{0,1\\\\}^{d \\times m}, V \\in \\\\{0,1\\\\}^{m \\times d}\\\\}, \\text{ the set of Boolean rank $\\leq m$ graphs}\n\\end{align*}\n\nProposition 1 shows that the factor graphs we consider give rise to $\\mathcal{G}^m_{\\mathrm{bool}}$. One can show that $\\mathcal{G}^m_{\\mathrm{bool}} = \\mathcal{G}^m_{\\mathrm{lin,nonneg}} \\subsetneq \\mathcal{G}^m_{\\mathrm{lin}}$. We think the reviewer is focusing on this last strict inclusion.\n\nWe respectfully disagree with the reviewer's assessment that the set of Boolean low rank graphs $\\mathcal{G}^m_{\\mathrm{bool}}$ constitutes a \"very special case\" of low-rank models. The only other class that comes to mind is $\\mathcal{G}^m_{\\mathrm{lin}}$, which might seem more natural at first glance when considering linear structural equation models. However, the only graphs in $\\mathcal{G}^m_{\\mathrm{lin}} \\setminus \\mathcal{G}^m_{\\mathrm{bool}}$ that are not captured by DCD-FG are those where the contributions of several factors cancel out precisely to produce zeros not expected by the sparsity pattern of $U$ and $V$, leading to a violation of faithfulness in the associated factor graph. We argue that those graphs are fairly unintuitive and we do not see a straightforward way of extending the factor semantics of our nonlinear models to these models.\n\nWe will clarify this point in the Supplementary Materials of the final version of the paper.\n\n**Real life examples for low-rank assumption:** Our model is in particular motivated by biological applications. One hallmark of transcriptional regulation of gene expression is the presence of \"regulons\", where one or a set of proteins, called transcription factors (TFs), all co-regulate one set of target genes. Moreover, those TFs can themselves be regulated at the transcriptional level as part of earlier regulons or by shared upstream proteins that activate them, and those upstream proteins are often co-regulated at the transcriptional level as regulons themselves. Concordantly, a low-rank structure has been observed in the total transcriptomic effects of genetic perturbations, for example, see Figure 4A in [1]. Such patterns have been observed across a multitude of systems [1, 2, 3, 4, 5, 6].\n\nMoreover, we consider the low-rank assumption a reasonable approximation to restrict the model class, even if it might be violated in practice. 
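\n\nTo make the strict inclusion above tangible, here is a small numeric illustration of ours (not from the paper) of the sign cancellations that characterize graphs in $\\mathcal{G}^m_{\\mathrm{lin}} \\setminus \\mathcal{G}^m_{\\mathrm{bool}}$:\n\n
```python
import numpy as np

# Signed rank-2 factors whose contributions cancel on the (0, 2) entry.
U = np.array([[1.0,  1.0],
              [0.0,  0.0],
              [0.0,  0.0]])
V = np.array([[0.0, 1.0,  1.0],
              [0.0, 1.0, -1.0]])

W = U @ V                              # edge 0 -> 2 vanishes: 1*1 + 1*(-1) = 0
support = (np.abs(U) @ np.abs(V)) > 0  # yet the unsigned factor pattern predicts it

print(W[0])        # [0. 2. 0.]
print(support[0])  # [False  True  True]
```
\n\nThe zero at entry (0, 2) exists only because two signed factor contributions cancel exactly, which is precisely the kind of unfaithful edge case excluded from $\\mathcal{G}^m_{\\mathrm{bool}}$. In practice, we treat the low-rank restriction itself as a reasonable approximation even when such idealizations are mildly violated.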
The favorable results on the Perturb-Seq data in Section 5.2 corroborate this.\n\nTo clarify this point, we will add more information on the appropriateness of the low-rank assumption to the discussion section of the paper.", " We thank the reviewer for their review and are glad they appreciated the simplicity of our proposed algorithm.\n\n**Choice of number of factors:** We agree with the reviewer that choosing the number of factors is an important consideration. However, having at least an upper bound on the number of factors is crucial to guarantee a reasonable runtime of DCD-FG. Importantly, as long as the number of factors needed for good statistical performance is not too large, a good number can be easily found by optimizing the negative log-likelihood on a validation dataset, which is how we performed the reported experiments. Such a procedure is already necessary for searching for the strength of sparsity regularization in DCDI / NOTEARS, and therefore introduces minimal overhead. \n\nMore specifically, we investigated the performance of DCD-FG for different values of the number of factors ($m$) in the simulated data from the linear causal mechanism. For several values of $m$, we selected the best performing model based on cross-validation across other hyperparameters (e.g., sparsity), but only for that specific $m$ (DCD-FG [$m$=X]). We reported the performance of this model (F1-score), and compared it to the best performing model across the whole hyperparameter grid (DCD-FG). We observe that performance is better for $m=15$, but does not drastically change for values of $m$ close to the ground truth. Second, we verified that the performance in the hyperparameter grid (here denoted by F1 score) was well correlated with the validation likelihood. All in all, those results suggest that $m$ can be effectively chosen via cross-validation. We will add those results in a new figure in the final version of the paper (https://anonymous.4open.science/r/rebuttal-neurips-D93C/rank-score.pdf). \n\nThere may be opportunities for extensions with an adaptive / non-parametric number of factors, but we leave this for future work.\n\n\n**Other interventional settings:** We envision that our framework should be compatible with extensions to imperfect interventions or interventions with unknown targets, similar to those incorporated into DCDI [1]. However, the factor semantics might slightly complicate these extensions, and we consider them out of the scope of our current work. They are definitely exciting avenues to be explored in future work, as we will note in the revised discussion.\n\nOur framework technically does apply to observational data, but we would argue that the signal-to-noise ratio of such data, especially in the biological applications that we highlight, is too low for any reasonable inference, so we would consider it out of scope for the current publication. Moreover, given recent substantial advances in experimental biology, interventional data is rapidly growing, both in lab models and in human data (with natural genetic variants as interventions). This offers an important -- and currently underserved -- use case. We will clarify this in the camera-ready version of the paper.\n\n**Robustness to misspecification:** As mentioned above, we actually chose the rank in our experiments adaptively, and the best performing rank was usually slightly higher than the ground truth rank.
Thus, we concluded that DCD-FG is not sensitive to the exact ground truth rank.\n\nTo address robustness to misspecification of the noise model, we will add new experiments to the Supplementary Materials. For these experiments, we repeated the experiments from Figure 4 of the paper, but changed the Gaussian noise to uniform noise, with matching variance. We report the interventional likelihood (I-NLL), as well as the F1 score for the top three performing methods, NOTEARS, NOTEARS-LR, and DCD-FG. The results of these experiments can be found here: https://anonymous.4open.science/r/rebuttal-neurips-D93C/linearuniform.pdf & https://anonymous.4open.science/r/rebuttal-neurips-D93C/nnuniform.pdf. Note that the F1 scores are slightly lower than for the Gaussian simulations, presumably because of the effects of model misspecification. However, DCD-FG clearly outperforms NOTEARS and NOTEARS-LR as in our previous experiments.\n\n[1] Brouillard, Philippe, et al. \"Differentiable causal discovery from interventional data.\" Advances in Neural Information Processing Systems 33 (2020): 21865-21877.", " **Triviality of Proposition 2:** We thank the reviewer for pointing out this algebraic argument for proving part of Proposition 2. We would like to make a few additional comments on this point:\n1. We think that the elementary nature of the proof we provide for Proposition 2 has its merits. Indeed, it is based on an elementary graphical argument (the definition of a cycle, and a path) and does not require any knowledge of linear algebra. Moreover, we do not consider it significantly longer or more complicated than the proof provided by the reviewer.\n2. While we admit it is a simple result and the reviewer considers it obvious, the paper cited by the reviewer, [1], did _not_ exploit this result in a similar context, leading to a less efficient algorithm.\n3. We do not consider the proof of Proposition 2 the main result of our paper but rather part of a larger, novel framework of low-rank assumptions for causal discovery, together with further implementation details such as the functional form of the structural equations. In terms of our theoretical contributions, we also highlight Theorem 1 as one of the main results.\n\nBased on the relative simplicity of the proof, we will change the designation of Proposition 2 to be a Lemma in the revised manuscript. We will also add the elegant, alternative algebraic proof provided by the reviewer to the relevant section of the paper.\n\n**Expressivity of low-rank factor graphs:** We agree with the reviewer that general DAGs cannot be represented as the half-square of a factor graph with a low number of factors. The chain structure is one instance, but in general, Erdos-Renyi graphs may also have high (matrix and likely Boolean) rank, as explained in [1]. However, our work is very motivated by biological applications, in which many genes are affected by similar other genes, forming a very dense interconnected network with several core sets of gene regulators. In this setting, using factor DAGs is an extremely appealing modeling technique. We will clarify this motivation in the revised manuscript.\n\nMoreover, it is evident from our experiments that the space of all DAGs together with arbitrary functional relationships might simply be too large to be learned efficiently when the number of nodes is high, necessitating _some_ form of statistical regularization. Simple sparsity regularization, as provided by NOTEARS, did not seem sufficient. 
Thus, we argue that factor graphs provide one possible route of extracting some meaningful causal relationships, even without exhaustively covering the space of all DAGs.\n\nWe will also clarify the limited expressivity of factor graphs in the discussion section, focusing on the rationale for using them in the context of biological data.\n\n**Theorem 1 (rank increase) in identifiability section, and consequences on faithfulness:** We apologize for any confusion regarding the relationship of Theorem 1 with the identifiability of the underlying graph. Let us rephrase the key results of Section 3.2:\n\nWhen investigating the identifiability of the underlying DAG in the infinite data limit, one can appeal to standard results for causal discovery, such as those employed in [2], to show that the skeleton of the graph can be recovered _if_ the underlying structural equation model is faithful. Note that faithfulness here is not simply a property of the graph, but of the full probabilistic model. Since the classes of graphs we consider are highly connected, for random graphs, we show in Appendix B (see Lemma 2 and Proposition 6) that recovering the skeleton in turn is sufficient to identify the whole graph.\n\nNext, we argue that identifiability is a very low bar to clear, given how far we can be in practice from the \"infinite data\" regime. Thus, we set out to investigate other notions of statistical sensitivity of the problem in question. We assumed that the lack of data could easily lead to errors in the recovery of the skeleton of the ground truth graph. Thus, we proceeded to show in Theorem 1 that (random) corruptions in the graph lead to a (Boolean) rank increase with high probability. This provides evidence for the beneficial statistical effect of restricting the (Boolean) rank of the graphs in our search space.\n\nIn order to clarify our reasoning to the reader, we will edit the relevant section of the paper, in particular changing its title from \"Identifiability of Random Causal Factor Graphs\" to \"Statistical Properties of Random Causal Factor Graphs\".\n\n\n[1] Fang, Zhuangyan, et al. \"Low rank directed acyclic graphs and causal structure learning.\" arXiv preprint arXiv:2006.05691 (2020)\n[2] Brouillard, Philippe, et al. \"Differentiable causal discovery from interventional data.\" Advances in Neural Information Processing Systems 33 (2020): 21865-21877.", " We believe the reviewer is concerned that our method does not search the entire space of linear low-rank graphs, $\\mathcal{G}\\_{\\mathrm{lin}}^m$. Indeed, by the strict inclusion $\\mathcal{G}\\_{\\mathrm{bool}}^m \\subsetneq \\mathcal{G}\\_{\\mathrm{lin}}^m$, finding a decomposition of the weighted adjacency matrix $W=UV$ with non-negative $U \\in \\mathbb{R}\\_{\\geq 0}^{d \\times m}, V \\in \\mathbb{R}\\_{\\geq 0}^{m \\times d}$ is only sufficient for $G \\in \\mathcal{G}\\_{\\mathrm{lin}}^m$ to be a DAG, but not necessary, since we could be missing graphs in $\\mathcal{G}\\_{\\mathrm{lin}}^m \\setminus \\mathcal{G}\\_{\\mathrm{bool}}^m$. Similarly, assume we start with an arbitrary linear decomposition with no sign constraints on $W$, $U$, and $V$, i.e., $W = UV$, $U \\in \\mathbb{R}^{d \\times m}, V \\in \\mathbb{R}^{m \\times d}$. In order to check whether $W$ corresponds to a DAG, we need to apply a transformation to $W$ that turns its negative entries into positive ones, for example taking the element-wise absolute value $\\mathrm{abs}(W)$. 
This precludes us from applying the trick in Proposition 7, namely, $h_d(\\mathrm{abs}(UV)) \\neq h_m(\\mathrm{abs}(VU))$.\n\nHowever, this is only a concern if there is reason to believe that precisely searching through DAGs in $\\mathcal{G}\\_{\\mathrm{lin}}^m$ for our dependency masks is of any particular practical interest. Besides being more immediate when starting from a linear structural equation model, we see no particular reason for favoring $\\mathcal{G}\\_{\\mathrm{lin}}^m$ over $\\mathcal{G}\\_{\\mathrm{bool}}^m$ in light of the practical benefits outlined in our paper. Moreover, the only linear low-rank models not captured in $\\mathcal{G}\\_{\\mathrm{bool}}^m$ are those in which the contributions of multiple factors cancel out to produce more zeros than expected from the sparsity pattern of the factors $U, V$, corresponding to a lack of faithfulness of the factor graph. We consider these models edge cases that could safely be excluded from the search space.\n\nTo address these issues in the revised manuscript, we will add comments clarifying the relationship between these different sets of graphs to the supplementary material of the paper and refer to them from the main text.\n\nIf we misunderstood the reviewer's concerns, we would appreciate some further clarification from the reviewer to help us address the points raised.\n\n**Similarities to Fang, Zhuangyan, et al.:** The reviewer raises concerns about similarity of our results with [1], and we would like to summarize the key similarities and differences between the two works here. Both [1] and our work employ low-rank assumptions in the context of continuous optimization-based DAG learning algorithms. However, our work differs from [1] in several key aspects:\n\n1. In the above notation, [1] considers the class of **linear** low-rank graphs $\\mathcal{G}\\_{\\mathrm{lin}}^m$ (potentially given by the coefficients of a linear Gaussian SEM) while we focus on Boolean low-rank $\\mathcal{G}\\_{\\mathrm{bool}}^m$ graphs. Our work is more general, separates the inference of the graph and of the likelihood model, and makes a distinction between the matrix rank and the Boolean rank. In particular, this allows us to not only exploit the rank constraint for the purpose of the penalty evaluation, but also to consider a novel class of functional relationships with beneficial effects on statistical and computational performance.\n2. [1] barely exploits the low-rank assumption for computational gains and provides only a very cursory treatment of the runtime issue. In fact, as explained above, to the best of our understanding, all algorithms suggested in [1] explicitly expand the full (weighted) adjacency matrix before applying the trace exponential penalty, therefore incurring an asymptotic complexity of $O(d^3)$ per penalty evaluation. Only in the linear case do they exploit an explicit low-rank parametrization that lowers the time complexity for the fitting term evaluation. Indeed, we could find a table that recapitulates walltime to run their algorithms with different parameters on simulated data, which shows that NOTEARS-LR may be twice as slow as NOTEARS for 300 nodes and a rank of 30. More importantly, the low-rank version of GraN-DAG has similar or higher runtime complexity (due to the calculation of the nuclear norm) compared to GraN-DAG. Because GraN-DAG requires one neural network per node (as does DCDI), this regularization approach is not applicable to thousands of variables.
By contrast, we discuss time and space complexity extensively in the paper, and empirically compare runtime per iteration. In all cases, we show significant improvement compared to DCDI / NOTEARS through **cleanly exploiting the low-rank constraint** on the graph for both the penalty and the fitting term.\n\nWe will add these comments as an additional discussion in the supplementary materials of the paper. ", " We thank the reviewer for their assessment, in particular for acknowledging the beneficial scaling of our proposed method. First, we clarify that we consider the scope of our paper to be more comprehensive than just introducing new DAG constraints. We introduce novel, nested complexity classes of structural equation models that encompass both the DAG structure and the functional relationships between variables. Both are crucial to enable scaling of the causal discovery method to thousands of variables.\n\nIn the following, we address specific points raised by the reviewer.\n\n**Necessary vs sufficient DAG conditions** We argue that the criticism raised by the reviewer applies neither to our work nor to the previous low-rank work.\n\nTo demonstrate this, we would like to start by distinguishing between several classes of low-rank graphs on $d$ nodes:\n\n\\begin{align*}\n\\mathcal{G}_{\\mathrm{lin}}^m &= \\\\{ G : G \\text{ admits weighted adjacency matrix}~ W \\in \\mathbb{R}^{d \\times d} \\\\\\\\\n& \\quad \\quad \\text{ with } W=UV \\text{ for } U \\in \\mathbb{R}^{d \\times m}, V \\in \\mathbb{R}^{m \\times d}\\\\}, \\text{ the set of linear rank $\\leq m$ graphs}\n\\end{align*}\n\n\\begin{align*}\n\\mathcal{G}_{\\mathrm{lin,nonneg}}^m &= \\\\{ G : G \\text{ admits weighted adjacency matrix } W \\in \\mathbb{R}\\_{\\geq 0}^{d \\times d} \\\\\\\\\n& \\quad \\quad \\text{ with } W=UV \\text{ for } U \\in \\mathbb{R}\\_{\\geq 0}^{d \\times m}, V \\in \\mathbb{R}\\_{\\geq 0}^{m \\times d}\\\\}, \\text{ the set of linear non-negative rank $\\leq m$ graphs}\n\\end{align*}\n\n\\begin{align*}\n\\mathcal{G}_{\\mathrm{bool}}^m &= \\\\{ G : G \\text{ has adjacency matrix } A \\in \\\\{0,1\\\\}^{d \\times d} \\\\\\\\\n& \\quad \\quad \\text{ with } A=U \\diamond V \\text{ for } U \\in \\\\{0,1\\\\}^{d \\times m}, V \\in \\\\{0,1\\\\}^{m \\times d}\\\\}, \\text{ the set of Boolean rank $\\leq m$ graphs}\n\\end{align*}\n\n\nNote that in the definition of $\\mathcal{G}\\_{\\mathrm{lin}}^m$, we allow for $W$ to encode the presence of an edge in $G$ with any non-zero entry, positive or negative. One can easily show that $\\mathcal{G}\\_{\\mathrm{bool}}^m = \\mathcal{G}\\_{\\mathrm{lin,nonneg}}^m \\subsetneq \\mathcal{G}_{\\mathrm{lin}}^m \\subsetneq \\mathcal{G}$ for $m < d$. \n\nWe point out a subtlety here: for a given matrix, its non-negative rank and Boolean rank do not necessarily coincide, but $\\mathcal{G}\\_{\\mathrm{bool}}^m = \\mathcal{G}\\_{\\mathrm{lin,nonneg}}^m$ since we allow for arbitrary weighted adjacency matrices in the definition of $\\mathcal{G}\\_{\\mathrm{lin,nonneg}}^m$.\n\nThe paper mentioned by the reviewer [1] in fact deals with $\\mathcal{G}\\_{\\mathrm{lin}}^m$ and considers two strategies for enforcing this low-rank constraint (explicit parameterization and nuclear norm penalization). In both cases, the DAG constraint is simply enforced by first expanding the full adjacency matrix $W$ and then enforcing $h_d(\\mathrm{abs}(W))=0$.
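\n\nFor concreteness, the saving our approach obtains, and that full-matrix expansion forgoes, can be checked numerically. The following is a hypothetical sketch of ours, using the trace-exponential penalty $h_k(M) = \\mathrm{tr}(e^M) - k$ on non-negative factors, so no absolute value is needed:\n\n
```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d, m = 50, 5
U = 0.1 * rng.random((d, m))  # non-negative factors
V = 0.1 * rng.random((m, d))

h_d = np.trace(expm(U @ V)) - d  # O(d^3): penalty on the full d x d matrix
h_m = np.trace(expm(V @ U)) - m  # O(m^3): same value on the m x m matrix

print(np.isclose(h_d, h_m))  # True, since tr((UV)^k) = tr((VU)^k) for all k >= 1
```
\n\nWith non-negative factors the two penalties agree exactly, so acyclicity can be evaluated on the small $m \\times m$ matrix $VU$; the full-matrix route described above cannot exploit this once absolute values enter.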
Consequently, the methodology of [1] does not require applying the absolute value to each of the matrices $U$ and $V$, and seems not to have the flaw that the reviewer pointed out: their strategy searches through all DAGs in $\\mathcal{G}_{\\mathrm{lin}}^m$. However, this design also precludes [1] from harnessing any benefit in asymptotic algorithmic complexity.\n\nBy contrast, we do _not_ consider $\\mathcal{G}\\_{\\mathrm{lin}}^m$, but instead choose to exclusively work with $\\mathcal{G}\\_{\\mathrm{bool}}^m$. We enforce this constraint by searching over non-negative decompositions of the weighted adjacency matrix (obtained by the Gumbel-sigmoid parametrization). In fact, we make use of the following slight extension of Proposition 7 in our Appendix: If $G \\in \\mathcal{G}^m\\_{\\mathrm{bool}}$, then\n\n$$\n\\begin{alignat*}{2}\nG \\text{ is a DAG} \\Leftrightarrow & \\exists U \\in \\mathbb{R}\\_{\\geq 0}^{d \\times m}, V \\in \\mathbb{R}\\_{\\geq 0}^{m \\times d}: {}&& W = UV \\text{ is a weighted adjacency matrix for } G \\\\\\\\\n&&& \\text{ with } h_d(UV) = 0\\\\\\\\\n\\Leftrightarrow & \\exists U \\in \\mathbb{R}\\_{\\geq 0}^{d \\times m}, V \\in \\mathbb{R}\\_{\\geq 0}^{m \\times d}: && {} W = UV \\text{ is a weighted adjacency matrix for } G\\\\\\\\\n&&&\\text{ with } h_m(VU) = 0\n\\end{alignat*}\n$$\n\nIn effect, in our paper, we search over all DAGs contained in $\\mathcal{G}^m_{\\mathrm{bool}}$ since the continuous relaxation we employ is indeed a necessary and sufficient condition for $G$ to be a DAG. As we show in our paper, considering $\\mathcal{G}^m_{\\mathrm{bool}}$ instead of $\\mathcal{G}^m_{\\mathrm{lin}}$ gives further rise to an intuitive way of restricting the nonlinear _functional_ relationships on top of the graphical structure while maintaining low asymptotic computational complexity. This aspect is completely absent from [1].", " We thank the reviewers for their valuable feedback. Reviewer AHb3 concisely conveyed the intuition of our contribution: “the authors [...] introduce factor directed acyclic graphs (f-DAGs) which have many promising properties by incorporating the structural assumption to restrict the search space and reduce the optimization difficulty”. All reviewers point out the scalability of our approach: \"Compared to original DAG constraints, the new constraint can scale to larger graphs\" (R ADLi). Two out of three reviewers explicitly highlight the soundness of the method, and the strength of our empirical evaluation. For example, the reviewers mention that \"The approach is a nice and relatively simple solution to a difficult problem. [...] Empirical results show strong performance with respect to prior art.\" (R AHb3) and \"Theoretical analyses and experimental results demonstrate the superiority of f-DAGs and DCD-FG. [...] The experiments are well-designed.\" (R WAgt). \n\nReviewer ADLi has questions about the potential restrictiveness of the class of low-rank graphs we consider for DCD-FG (Boolean low-rank), and why we supposedly discard the treatment of more general scenarios (matrix low-rank). We address these questions in this response, and will include them in the camera-ready version.
In a nutshell, we argue that matrix low-rank makes sense for the special case of linear SEMs, while our class of Boolean low-rank graphs is particularly meaningful for reasoning about non-linear causal models, which is the focus of this paper, and is especially important in some domains, such as biological genetic networks.\n\nWe address the concerns of each reviewer in the point-by-point response below. In some of the answers, we include results from novel experiments we performed in response to the reviewers' questions and suggestions.", " The paper proposes a new DAG constraint for low-rank adjacency matrices. Compared to the original DAG constraints, the new constraint can scale to larger graphs. On binary graphs, the proposed DAG constraint is a necessary and sufficient condition for an adjacency matrix to form a DAG. However, for a weighted adjacency matrix, it seems that without further work, the proposed DAG constraint is a sufficient but not necessary condition. In that case the proposed method is actually optimising over a space that is smaller than the true DAG space. The idea of the paper is very similar to [1]; [1] was rejected by ICLR 2020 and is now only a preprint. Unfortunately, the paper shares the same fatal problem as [1]. Let us consider an adjacency matrix W = UV, where we enforce that all entries of U and V are non-negative. When applying such low-rank DAG constraints, it is easy to derive a necessary and sufficient condition on binary adjacency matrices. If W is a weighted matrix, then we have to first make the entries of W positive to apply the NOTEARS-type DAG constraints. In this procedure, in order to use the low-rank property, one has to apply an absolute value or a square to the entries of U and V. In that case the DAG constraint becomes sufficient only.\n\nThe main result (Proposition 2) of the paper is actually trivial. The proof in the supplementary is overcomplicated. In fact the proof of this trivial result is very straightforward. For a matrix W \in R_{>=0}^{n\times n} = UV, where all entries of U and V are also non-negative, W is nilpotent if and only if W is a DAG. Furthermore, VU is nilpotent if and only if UV is nilpotent, and thus VU being a DAG is equivalent to UV being a DAG.\n\nThere is also a minor problem. The rank of DAG adjacency matrices can be very large. For example, if we consider a chain-structured DAG with n nodes, the rank of its adjacency matrix is n - 1. Thus the proposed method only suits low-rank graph structures. \n\nFinally, in the identifiability section, the authors provide a theorem about the rank of the graph. How is the theorem related to identifiability? Will this enforce the faithfulness of the discovered graph?\n[1] Fang, Zhuangyan, et al. \"Low rank directed acyclic graphs and causal structure learning.\" arXiv preprint arXiv:2006.05691 (2020). See my comments above. 1. The main result is trivial and it is only a sufficient condition for a DAG.\n2. There may be no real identifiability result.", " This paper proposes an approach to causal discovery in high-dimensional settings where interventional data is available. The authors use a low-rank assumption on a factor graph, and incorporate the model into a differentiable causal discovery algorithm by utilizing a likelihood model which assumes perfect interventions. Empirical results show strong performance with respect to prior art.
Strengths:\n\n* The introduction of a method that allows for a low-rank factor assumption is very nice, and provides a valuable tool in causal discovery where sparsity can be a difficult assumption to enforce.\n* The approach is a nice and relatively simple solution to a difficult problem.\n* The parameterization of the model with the Gumbel-softmax prior is quite nice.\n\nWeaknesses:\n\n* Unclear how the number of factors should be chosen. \n* Experimental evidence which shows the robustness to misspecification (both in terms of rank and the Gaussian assumption) would be helpful.\n* (Minor) It would be good if the authors more clearly delineated that this paper deals solely with interventional data. * Is there a parameterization that would allow for the rank to be chosen as well? Perhaps I am missing something along these lines.\n* The work assumes perfect interventions in the likelihood. Would this line of work extend to imperfect interventions? Purely observational settings? It would be nice if the authors were able to clearly delineate within the text where the interventions are strictly required.\n* Is it possible to have a set of experiments, as detailed above, that show robustness to misspecification? The authors do a nice job of describing their work within the parameters of necessary assumptions and constraints. I don't foresee clear societal impact issues here.", " This paper aims to solve the large-scale causal discovery problem with hundreds of nodes. To this end, the authors first introduce factor directed acyclic graphs (f-DAGs), which have many promising properties, by incorporating a structural assumption to restrict the search space and reduce the optimization difficulty. Then, they propose Differentiable Causal Discovery of Factor Graphs (DCD-FG) to perform large-scale causal discovery. Theoretical analyses and experimental results demonstrate the superiority of f-DAGs and DCD-FG. Strengths:\n1. This paper effectively reduces the search space in the large-scale causal discovery problem by introducing factor directed acyclic graphs. Based on this, they propose Differentiable Causal Discovery of Factor Graphs to realize scalable causal discovery.\n\n2. The experiments are well designed. This paper provides runtime experiments, simulation studies, and a case study on single-cell RNA sequencing data with hundreds of genetic interventions to demonstrate the efficacy of the proposed method.\n\nWeaknesses:\n1. Lines 41-44 seem quite confusing. In the first sentence, it says “sparsity [17] as well as low-rank assumptions [18] are often exploited in algorithms”. However, in the next sentence, it points out “the use of low-rank constraints remains largely under-explored for this purpose”.\n\n2. The theoretical properties of this work are not well presented. Proposition 1 is only a very special case of low-rank models. It would be better to justify the method in wider scenarios.\n 1. How can one justify that the low-rank assumption is commonly met in real scenarios?\n\n2. Although intuitively making sense, is there a theoretical result for the claim that \"more statistically robust in the high-dimensional regime where the underlying skeleton is hard to assess\"?\n\n3. It seems most experimental results surround the SHD and F1 score. Since your claim is fast searching, can you also show some computational efficiency results? There are no limitations and potential negative societal impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "nips_2022_k713e8vXzwR", "mD460VI6TB", "4AnXiiSKrJ3", "sHsky6D8LVE", "TaJM35EYnhP", "4FaJo88pvT", "q_sN4pbIeW3", "HpKqNNESSU1", "KcnTKmHAzm0", "zj_swDhfZd2", "nips_2022_k713e8vXzwR", "nips_2022_k713e8vXzwR", "nips_2022_k713e8vXzwR", "nips_2022_k713e8vXzwR" ]
nips_2022_VnAwNNJiwDb
Generating Long Videos of Dynamic Scenes
We present a video generation model that accurately reproduces object motion, changes in camera viewpoint, and new content that arises over time. Existing video generation methods often fail to produce new content as a function of time while maintaining consistencies expected in real environments, such as plausible dynamics and object persistence. A common failure case is for content to never change due to over-reliance on inductive bias to provide temporal consistency, such as a single latent code that dictates content for the entire video. On the other extreme, without long-term consistency, generated videos may morph unrealistically between different scenes. To address these limitations, we prioritize the time axis by redesigning the temporal latent representation and learning long-term consistency from data by training on longer videos. We leverage a two-phase training strategy, where we separately train using longer videos at a low resolution and shorter videos at a high resolution. To evaluate the capabilities of our model, we introduce two new benchmark datasets with explicit focus on long-term temporal dynamics.
Accept
All four reviewers enjoyed this paper and were particularly impressed by the videos provided in the supplementary material. The results are very impressive indeed. The reviewers also agreed that using a multi-stage approach was interesting and effective. The two new datasets were deemed useful to the generation community, and the proposed metrics and human evaluations were appreciated by the reviewers. A few smaller concerns included a missing failure analysis and some clarification questions, which were addressed in the rebuttal. Given the above, I recommend acceptance.
train
[ "GYbF_2ZsuyZ", "XVfypRsO2m", "iThPDgezB7C", "o2VTGEGG8Qf", "IRgZ6I08Dg9", "lDFytyh7Du1", "5mskrduqx0D", "u0jiGSW8zJ" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your review and insightful feedback. We are encouraged that you agree long-term dynamics is understudied and that we selected the correct challenge in video generation to address.\n\n**“Since we are not showing long videos to the discriminator, the generated videos do not have enough dynamics or look repetitive. Although I agree with this argument, IMO, the main challenge is modeling the temporal dynamics. We can still (wisely) select shorter videos but with enough frames change. But we need better structures to model the temporal dynamics.”**\n\nWe agree that other approaches may be successful at modeling temporal dynamics in future work. We identified that long-term dynamics is an understudied aspect of video generation and introduced an approach using much longer videos at low resolution to improve these dynamics. As you suggest, it may be possible to instead capture long-term dynamics using fewer frames that are intelligently selected, and we will happily add discussion of this to our paper. We hope our work encourages more research on long-term dynamics and welcome other approaches.\n\n**“I appreciate the honest selection of the generated video samples in the supplementary materials. It is clear that in some scenarios, like horse riding, we see novel objects enter the scene, and the motion is not repetitive as the baselines. However, in some cases like ACID: acid_grid.mp4, second row, we see a lot of distortions in the generated video. Although the method works better for high dynamics, it may not handle low dynamics.”**\n\nYes, there are certainly visible distortions in some results such as the video you point out, and we will mention this limitation more clearly. However, note that even for the ACID dataset with relatively low dynamics, there are unrealistic repetitions in videos generated by StyleGAN-V that our model does not suffer from. The results of our user study found that humans prefer videos generated by our model over 80% of the time on each dataset, including ACID.\n\n**“Fig. 5 is a little hard to read due to the mix of green and blue colors. However, it seems that although the color average between the proposed method is the closest to the real data, the variance is much lower. What does that imply?”**\n\nFurther investigation would be needed to draw significant meaning from the standard deviation on these plots, but in general it is preferred to match the dataset. Too low of a standard deviation may indicate there is too little variation in the amount of change over time across different generated videos.\n\n**“I am wondering if the color similarity based on the frame separation is a good indicator/metric for better scene dynamics. Why did the authors select this metric?”**\n\nThe color similarity plots are a simple illustration of how quickly videos change over time. They are not intended as a standalone metric of scene dynamics, but as a probe into the biases of videos generated with different models. They show when videos change far too quickly or slowly over time. We experimented with feature distances rather than color similarity in appendix A.2 and found similar results.\n\n**“I found this statement confusing 'time offset of the generated video can be controlled by shifting and reshaping the temporal noise, respectively.' How shifting the noise, can decide the time offset?”**\n\nWe apologize for the confusing statement. 
What we meant is that because our architecture is fully-convolutional across time, the output video is translation equivariant with respect to the input temporal noise. Therefore, by translating/shifting the input temporal noise, we can translate the output video forward or backward in time. We will clarify this in the text.\n\n\n**“I am wondering if a pre-trained super-resolution network (perhaps on image datasets) can perform nearly as well as what authors trained on the video examples.”**\n\nGood question. We have not tried pretrained super-resolution networks. We did try training our super-resolution network on individual images in an early experiment, and found that it led to temporal flickering. We prioritized the low-resolution generation of long temporal dynamics, and believe replacing or improving the super-resolution module is a valuable topic for future work.\n\n**“What about the number of generated shots? Applying a shot detector on the real vs generated data, how much the distribution of the number of shots per clip is similar per class? What about the distribution of consecutive shots similarity? Do authors think it will be a complementary indication to the color examination?”**\n\nThank you for this suggestion. We agree that comparing the distributions of detected shots per clip or similarity between shots makes sense as a potential metric. We believe determining the right evaluation for long-term video dynamics is an important topic that deserves a targeted investigation of its own.", " Thank you for the thoughtful feedback. We are glad that you find that our work offers interesting insights and useful practices for video generation.\n\n**“An in-depth failure case analysis is absent.”**\n\nWe are happy to add this in the final paper. We discuss the “swirly” artifacts below in more detail and will include that discussion. Another failure case we observed is difficulty preserving 3D consistency for scenes with very little motion, such as in the ACID dataset. In cases where there is little motion, one may consider using an explicit 3D representation to improve results in future work. A third failure case is in the long-term consistency of small details (e.g., distant jumps in generated horseback riding videos) that begin to appear before quickly fading out. We believe issues with long-term small details are due to limitations of our super-resolution network, and that improving the super-resolution network in future work will address these artifacts.\n\n**“The proposed architecture seems to heavily depend on the correct data augmentation.”**\n\nIndeed, we found that using the proper strong data augmentation is especially important since we train on much longer videos that are subject to overfitting.\n\n**“According to the color similarity plots, the standard deviations of Dataset are quite high, and often higher than that of models. Does it tell anything?”**\n\nThe color similarity plots illustrate bias in the rate of change over time for different models, such as if videos change far too quickly or slowly. Further investigation would be needed to draw significant meaning from the standard deviation on these plots, but in general it is preferred to match the dataset.
Too low of a standard deviation may indicate there is too little variation in the amount of change over time across different generated videos.\n\n**“The “swirly” artifacts visible in some of the videos might be attributed to the multi-resolution strategy … Could you further explain this in detail?”**\n\nWe observed that the “swirly” artifacts are most prominent in the super-resolution output and not in the low-resolution output. We believe the artifacts may be at least partially related to the domain gap between real and generated low-resolution videos passed as input to the super-resolution network. We mitigate this concern using low-resolution conditioning augmentation to improve generalization (see appendix D.1), but acknowledge it is still a clear area for improvement. Further investigation will be required to clarify the cause and address these artifacts in future work. We will rephrase the discussion of this limitation on line 305.", " Thank you for your review and suggestions – we are encouraged that you find our idea elegant and the new datasets a useful contribution.\n\n**“For the user study, the authors should have provided \"equally same\" as an option”**\n\nFor our human evaluation, we followed the forced-choice format used in established computer vision papers such as [1], and will keep your suggestion to include an \"equally same\" option in mind for future user studies.\n\n**“I am wondering if a better evaluation for long-term consistency would be conditional generation”**\n\nWe appreciate this idea to evaluate long-term consistency using conditional generation tasks and believe determining the right evaluation for long-term video dynamics is an important topic that deserves a targeted investigation of its own.\n\n\n**“The idea of using low and high resolution for long and small temporal segments reminds me of SlowFast networks”**\n\nThank you for this suggestion, we will happily add discussion of the related SlowFast video recognition paper.\n\n\n[1] Zhang et al., The Unreasonable Effectiveness of Deep Features as a Perceptual Metric\n", " We appreciate your review and feedback, and are glad you find our treatment of temporal noise interesting and the results on our new datasets impressive.\n\n**“Subsampling in space and time has been proposed in the previous literature. It has been shown that training at high res both in space and time is prohibitively expensive.”**\n\nThank you for drawing our attention to the work of Saito et al. (TGAN-V2). We will certainly add discussion of this work and elaborate discussion of DVD-GAN. Similar to our model, these works overcame computational limits during training by using multiple resolutions and fewer frames at high resolution. TGAN-V2 and DVD-GAN used end-to-end training, and decreased framerate at higher resolutions during training in the generator and/or discriminator. Instead, we train an entirely separate video super-resolution network. This both overcame computational limits and isolated our task of generating long videos to the low resolution, enabling us to prioritize the design of a new low-resolution architecture and temporal latent space.\n\n**“The paper fails to report UCF, FaceForensics and others from stylegan-v & tats, arguing that these datasets contain less new content and camera movements. 
While it might be the case, the method only reports better scores on their own proposed datasets.”**\n\nThe focus of our work is on enabling generation of long videos (e.g., 10s) with a moving camera and new objects/scenery over time, and we believe our two new datasets were necessary to study this. We do not evaluate on the common UCF101 or FaceForensics datasets with videos that do not meet these criteria. Please note that we tuned the StyleGAN-V baseline on our datasets to be more competitive (Table 1), that we evaluate on the common SkyTimelapse dataset against multiple baselines and produce competitive FVD scores, and that we outperform StyleGAN-V on all datasets including the existing SkyTimelapse and ACID datasets in terms of human preference in our user study.\n\n**“Other resolutions. Stylegan-v and mocogan-hd show training on much higher resolutions”**\n\nIndeed, we do not train at higher resolutions than 256. While a number of video generation papers have focused on high resolution output, we believe the long-term dynamics of videos with new content over time has been an understudied topic. We therefore focus primarily on the temporal aspect of video generation in this work.\n", " The paper discusses a method for generating long videos in which new content is introduced as the camera moves forwards. Training on such videos, especially in high resolution, is prohibitively expensive. One of the possible ways to mitigate the issue is to train a two-stage architecture, in which dynamics is trained on low-res, followed by super-resolving the low-res long video with the second stage. To show the advantages of their approach, the authors collect two new datasets in which the camera moves forward. According to the results, the method outperforms current works on these two datasets, and finishes second on previously available datasets. Strengths. The paper has some:\n\nS1. The paper shows that the treatment of temporal noise is important. The idea of filtering it with a low-pass filter is interesting. This way only low-frequency, longer events are captured by the model.\nS2. Results are quite impressive on the two new datasets. Very impressive!\n\nThere are also weaknesses, unfortunately:\n\nW1. Subsampling in space and time has been proposed in the previous literature. It has been shown that training at high res both in space and time is prohibitively expensive. In [TGAN-V2] for example, they generate longer videos at low resolution and as they increase resolution with more generators they drop the frame rate. This way they achieved efficient training of 256^2 resolution--highest resolution of this paper--in 2018. Another similar idea was used in the discriminator of DVD-GAN [9]. They downsampled the resolution for the video discriminator, and reduced the frame rate for the image discriminator. [TGAN-V2] is not cited; DVD-GAN is cited but the similarities/differences are not discussed. \n\nW2. The proposed framework is reasonable and seems to be working well on the proposed datasets. The key new interesting idea is temporal filtering; the rest of the framework, at a high level, is known (see W1). I acknowledge the amount of effort the authors put into making it work, of course. \n\nW3. Other video datasets. The paper fails to report UCF, FaceForensics and others from stylegan-v & tats, arguing that these datasets contain less new content and camera movements.
The problem with this is that it takes a lot of effort to tune each method to each dataset, making it not very clear if the proper and sufficient tuning for stylegan-v was made. If one compares to existing numbers on existing datasets, such as UCF, on which TATS and stylegan-v show reasonable performance, one can guarantee that the method is evaluated against numbers in which proper time was invested. Otherwise it's always possible to select a dataset on which the method scores best.\n\nW4. Other resolutions. Stylegan-v and mocogan-hd show training on much higher resolutions, as high as 1024. While this method focuses on the temporal part of videos, it's not clear what the resolution upper bound is. This is important to understand even for this work, since scaling GANs is a non-trivial task both in terms of computational resources and quality. \n\n\n[TGAN-V2] Saito, Masaki, et al. \"Train sparsely, generate densely: Memory-efficient unsupervised training of high-resolution temporal gan.\" International Journal of Computer Vision 128.10 (2020): 2586-2606. Please see above. Please see above", " Brief Summary: The paper addresses the problem of video generation with a focus on longer-time-horizon videos, which require consistency. To this end, the authors introduce two new benchmark datasets on horse riding and mountain biking. The key observation is that the main components for temporal consistency are preserved at lower spatial resolution. The authors therefore propose using a hierarchical architecture to first create a long low-resolution video, followed by sliding windows to create higher-resolution videos at shorter time-steps.\n\nThe authors have further performed a human evaluation on Mechanical Turk, finding that their method was preferred over 80% of the time. Pros:\n\n1. The idea is simple yet elegant. Implementation-wise it is quite interesting to find that a straightforward extension of stylegan to videos where input images are simply concatenated leads to promising results. \n\n2. New dataset contributions are always welcome. The two contributed datasets on mountain biking and horseback riding could be useful for future research, especially to evaluate temporal consistency.\n\n3. The authors have done a human eval, which is very important in video generation, and found their method was preferred 80% of the time. \n\n4. The authors also show color change as a heuristic for temporal consistency, which clearly shows the benefits of hierarchical training.\n\n5. The visualizations provided in the supplementary are very cool!\n\nCons:\n1. For the user study, the authors should have provided \"equally same\" as an option, and used a sanity check that \"equally same\" is picked when both videos are obtained from the same model. I am not sure if this could have created any issues in the assessment (my guess is the effect could be mild, but not negligible).\n\n2. I am wondering if a better evaluation for long-term consistency would be conditional generation, where the first few and last few seconds of a real video are provided. For instance, if a tree is seen at the last frame (fig-1, 10s), one could have a soft metric that the tree be seen (albeit at lower resolution) at some intermediate frame. This could make the human evaluation task easier as well, given that they would be comparing more similar videos. Q1.
The idea of using low and high resolution for long and small temporal segments reminds me of SlowFast networks [Ref1], where the slow path samples sparsely with more channel dimension, while fast path samples densely with less channel dimension. Obviously, it is not 1-1 correspondence, but some kind of discussion could be useful (maybe in supplementary).\n\n[Ref1]: Feichtenhofer, Christoph, Haoqi Fan, Jitendra Malik, and Kaiming He. \"Slowfast networks for video recognition.\" In Proceedings of the IEEE/CVF international conference on computer vision, pp. 6202-6211. 2019. I think authors have done a good job of highlighting failure cases as well as pointing out where FVD metric is not very predictive of human eval performance.", " The paper presents a video generation model that is capable of producing new content (e.g., new object or scenery), object motion, and changes in camera viewpoint over time for longer videos than prior works. In order to achieve this, temporal modeling is emphasized in the proposed architecture, unlike existing works in which frame quality is often prioritized. As their main contribution, the authors redesign the temporal latent representation and train the carefully-designed model, which has the capability to operate over long time scales with a vast temporal receptive field, on longer videos at a low resolution and shorter videos at a high resolution. Two new benchmark datasets are introduced to best evaluate the proposed model since there are no existing datasets with long enough videos. Evaluated on 4 datasets with different characteristics, the proposed model outperforms prior methods especially qualitatively on aspects including generating plausible dynamics and object persistence, producing new content while maintaining consistencies over time.\n Strength:\n+ The paper has suggested several interesting insights and many useful practices for video generation. For example, a multi-resolution two-stage strategy might be a good solution for training and deployment for models to handle long videos; the low-resolution generator should be fully convolutional over time to learn long-term temporal correlations whereas the super-resolution generator can operate in a frame-by-frame basis. Videos with longer sequence length tend to exacerbate the issue of overfitting and therefore some strong augmentations might be required. \n\n+ The paper has provided a deep investigation on the metrics of video generation. Particularly, they propose to analyze color change over time as a simple way to diagnose potential bias captured by the different models. In addition, existing commonly-used metrics such as FVD and LPIPS are also discussed, and the authors have found these metrics to agree less with the qualitative results or user study results.\n\n+ The qualitative results are convincing. It was shown that the proposed model is capable of generating videos with rich motion and scenery changes. Existing methods are incapable of generating realistic long videos, and explanations and analysis presented in the paper are reasonable.\n\n+ Two new datasets are proposed which might be beneficial for future researchers. \n\n\n\nWeaknesses:\n- An in-depth failure case analysis is absent, which might be interesting to have for readers.\n\n- The proposed architecture seems to heavily depend on the correct data augmentation in use.\n\n 1. According to the color similarity plots, the standard deviations of Dataset are quite high, and often higher than that of models. Does it tell anything? 
\n\n2. The “swirly” artifacts visible in some of the videos might be attributed to the multi-resolution strategy, and it was referred to as “RGB bottleneck” (line 305). Could you further explain this in detail?\n The authors have discussed the limitations and potential negative societal impact. From the qualitative results, it seems that the model also struggles when objects in the scene interact with each other.\n", " This paper takes the initiative to improve the problem of video generation in terms of the dynamics of the generated videos. The main strategy proposed by the author is to increase the real video lengths seen by the discriminator by reducing the resolution. The reduced resolution is compensated separately with a super-resolution network.\n\nThe authors provide some analysis of the color dynamics of the real dataset and the generated videos using different methods.\n Strengths:\n\n1- The literature has overlooked video dynamics and temporal modeling for video generation in generative models. This research has selected the correct challenge in video generation to address.\n\n2- Great writing and supplementary materials.\n\n3- Decoupling the temporal dynamics from frame resolution has some novelties. However, it has been remotely mentioned by previous research but has never been explicitly modeled with two networks in two different stages.\n\nWeaknesses:\n\n1- Although I respect the authors' braveness and choice of problem, I believe the paper conveys such a message that the main reason for low dynamics in video generation models is data. Since we are not showing long videos to the discriminator, the generated videos do not have enough dynamics or look repetitive. Although I agree with this argument, IMO, the main challenge is modeling the temporal dynamics. `We can still (wisely) select shorter videos but with enough frames change.` But we need better structures to model the temporal dynamics. \n\n2- I appreciate the honest selection of the generated video samples in the supplementary materials. It is clear that in some scenarios, like horse riding, we see novel objects enter the scene, and the motion is not repetitive as the baselines. However, in some cases like `ACID: acid_grid.mp4, second row`, we see a lot of distortions in the generated video. Although the method works better for high dynamics, it may not handle low dynamics. \n\n 1 - Fig. 5 is a little hard to read due to the mix of green and blue colors. However, it seems that although the color average between the proposed method is the closest to the real data, the variance is much lower. What does that imply?\n\n2 - I am wondering if the color similarity based on the frame separation is a good indicator/metric for better scene dynamics. Why did the authors select this metric?\n\n3 - I found this statement confusing `time offset of the generated video can be controlled by shifting and reshaping the temporal noise, respectively.` How shifting the noise, can decide the time offset?\n\n4 - I am wondering if a pre-trained super-resolution network (perhaps on image datasets) can perform nearly as well as what authors trained on the video examples. \n\n5 - What about the number of generated shots? Applying a shot detector on the real vs generated data, how much the distribution of the number of shots per clip is similar per class? What about the distribution of consecutive shots similarity? Do authors think it will be a complementary indication to the color examination? 
\n I believe the authors are honest about the negative societal impacts. I have no more points to add here." ]
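The color-similarity diagnostic debated in the reviews above can be made concrete with a small sketch. This is an assumed form of the metric (per-frame mean color compared across temporal offsets); the function name and averaging choices are illustrative, not necessarily the statistic the paper actually plots.

```python
import numpy as np

def color_similarity_curve(videos, max_offset):
    """Mean-color drift as a function of frame separation.

    videos: array of shape (num_clips, num_frames, H, W, 3), values in [0, 1].
    Returns, for each offset k = 1..max_offset, the average L2 distance
    between the per-frame mean colors of frames t and t + k
    (lower = more similar, i.e., less color dynamics at that time scale).
    """
    mean_color = videos.mean(axis=(2, 3))  # (num_clips, num_frames, 3)
    curve = []
    for k in range(1, max_offset + 1):
        # Compare every frame with its k-steps-later counterpart.
        diff = mean_color[:, k:] - mean_color[:, :-k]
        curve.append(np.linalg.norm(diff, axis=-1).mean())
    return np.asarray(curve)
```

Plotting this curve for real clips against generated clips, with a standard-deviation band over clips, is one way to read the variance question raised above: a high dataset standard deviation would simply reflect that real clips differ widely in how quickly their color statistics drift.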
[ -1, -1, -1, -1, 5, 7, 7, 6 ]
[ -1, -1, -1, -1, 5, 4, 3, 5 ]
[ "u0jiGSW8zJ", "5mskrduqx0D", "lDFytyh7Du1", "IRgZ6I08Dg9", "nips_2022_VnAwNNJiwDb", "nips_2022_VnAwNNJiwDb", "nips_2022_VnAwNNJiwDb", "nips_2022_VnAwNNJiwDb" ]
nips_2022_-e2SBzFDE8x
Adaptively Exploiting d-Separators with Causal Bandits
Multi-armed bandit problems provide a framework to identify the optimal intervention over a sequence of repeated experiments. Without additional assumptions, minimax optimal performance (measured by cumulative regret) is well-understood. With access to additional observed variables that d-separate the intervention from the outcome (i.e., they are a d-separator), recent "causal bandit" algorithms provably incur less regret. However, in practice it is desirable to be agnostic to whether observed variables are a d-separator. Ideally, an algorithm should be adaptive; that is, perform nearly as well as an algorithm with oracle knowledge of the presence or absence of a d-separator. In this work, we formalize and study this notion of adaptivity, and provide a novel algorithm that simultaneously achieves (a) optimal regret when a d-separator is observed, improving on classical minimax algorithms, and (b) significantly smaller regret than recent causal bandit algorithms when the observed variables are not a d-separator. Crucially, our algorithm does not require any oracle knowledge of whether a d-separator is observed. We also generalize this adaptivity to other conditions, such as the front-door criterion.
Accept
This paper exploits the causal structure in the multi-armed bandits setting and gives a set of novel and strong results, including (1) the conditional benign property -- a nice and simple generalization of prior assumptions; (2) an impossibility result for the previous algorithm C-UCB; and (3) a new algorithm gives sublinear regret in any cases and optimal regret when there actually is a d-separator. The paper is well-organized and nicely written. The reviewers are unanimously positive about this paper.
train
[ "Od5F0y1V32b", "qpTmNUFXYLP", "NjFwin3IrZM", "OiEkWpLHZw", "2vat4PAthL3", "VKSS6EZ_qxE", "U7jp9aTVRLG", "paygbYkJN9y", "GjpZpJCykqr", "mYum_crvzL" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the insightful example. I think including something like it in the paper will contribute greatly.", " I appreciate the authors' response, and some of my concerns have been addressed. This paper studies a novel problem that concerns the trade-off between exploiting (possibly misspecified) graphical structure in specific cases while maintaining a reasonable performance in general cases. The analysis of adaptive minimax optimality in Theorem 6.2 is particularly interesting. Indeed, the regret bound of HAC-UCB might not be optimal and could be improved. However, no one should expect a novel problem to start and end with a single paper. This paper provides an initial baseline for an important online learning problem with uncertain structural assumptions. Due to these reasons, I maintain my assessment and vote for acceptance.", " Thank you for your detailed review. In regards to your question, the experiments were repeated 300 times for each experiment, and the average value is plotted. Since we know the distribution, we can compare to the exact mean reward in the definition of regret, so the only variability to average over is the algorithms’ randomness.", " Thank you for your detailed review. We now address your main concern.\n\n## Interpretation of Theorem 6.2.\n\n**First, we clarify that there is no typo in the quantification of constants, and yet our conclusion that adaptive minimax optimality is impossible is still correct.** To see this, note that for the conditionally benign setting, the RHS of Eqn (6.2) depends on |Z| rather than |A|, and hence since C’ in Theorem 6.2 must be chosen independently of both of these, no algorithm can be adaptively minimax optimal (even if C’ is arbitrarily smaller than the C in the first equation of Theorem 6.2).\n\nSecond, you rightfully point out that the dependence on log factors is not fully resolved. Specifically, we do not know if it is possible to obtain “near optimal adaptivity” with respect to log factors in the causal setting, and agree that this is a very interesting open problem to pursue. We will be sure to update the remarks following Theorem 6.2 to make this absolutely clear. \n\nWe make no claims that HAC-UCB cannot be improved upon, only that the optimal notion of adaptivity is impossible and hence the HAC-UCB regret cannot be fully improved to the optimal rates respectively (admittedly, we used confusing wording in the sentence you highlighted). In the worst-case, there need not be any dependence on log factors (see, e.g., the Bandit Algorithms book for how to shave these off), and hence our conclusion that adaptive minimax optimality is impossible still holds. We would like to note that it was not even known conclusively until Neurips 2021 (https://proceedings.neurips.cc/paper/2021/hash/49ef08ad6e7f26d7f200e1b2b9e6e4ac-Abstract.html) that the log factors are necessary for UCB, and hence a lot remains to be done in the study of log factors for bandit algorithms, and we believe it is reasonable to defer establishing tight log factors to future work.", " Thank you for your detailed review. We address the three comments that you raise.\n\n## Real world applicability \n(the following two paragraphs are repeated in our response to reviewer KRsJ)\n\nWe agree that connecting our theoretical results to real world applications is important. 
Due to their recent development (the theoretical interest only began in 2016), causal bandit algorithms have not yet been widely deployed in real world applications as far as we know, and we think that this is an exciting line of inquiry to pursue in future work. First, note that once interventions are possible (which is the case for randomized control trials, and necessary to avoid unverifiable assumptions), the causal setting is _exactly_ the same as the bandit setting. We now motivate a potential setting of interest where our algorithm could be used, and are happy to add this motivation in the camera-ready discussion.\n\nSettings where HAC-UCB will be useful are those where the intervention space is very large and additional information is available. One such setting—which we are admittedly *not* domain experts in—is learning the causal effect of genes on disease phenotypes (e.g., [1]). Here, scientists have the ability to actually intervene via “perturbations” [2], yet choosing where and how to perturb often results in a combinatorially large number of interventions. As a high-level example, in [3] the authors note that the total number of variations (“single nucleotide polymorphisms”, or SNPs) is far too large to exhaustively test, but propose clustering genes into “modules”. These modules (observed via “gene expression probes”) can act as post-action contexts, but the authors note that many causal diagrams are possible (see their Figure 1), with the modules being actual d-separators, only mediators, or potentially causally unrelated. Hence, **using HAC-UCB could allow one to learn causal effects without depending on the infeasible number of SNPs, and yet also not rely on potentially incorrect assumptions about the causal graph.** Obviously, to state this more precisely and actually deploy our algorithm would require collaboration with domain experts, but we are optimistic that our approach can have concrete benefits in such settings.\n\n[1] https://www.ahajournals.org/doi/10.1161/circresaha.114.302904\n\n[2] https://www.broadinstitute.org/genetic-perturbation-platform\n\n[3] https://bmcproc.biomedcentral.com/articles/10.1186/s12919-016-0009-x\n\n## Algorithm novelty\n\nContrary to the full-information setting, adaptivity is poorly understood for bandit problems in general. Further, we are the first to consider adaptivity with respect to causal structure. **We are also not aware of existing work that uses hypothesis tests to obtain adaptivity in this way, particularly for bandit feedback problems. Hence, we believe that our algorithm is novel in this regard.** Indeed, we believe that our hypothesis testing approach may be fruitful for adapting in other settings, even when existing aggregation approaches fail (as we have shown experimentally that Corral fails to adapt in our setting). \n\n## Algorithm optimality\n\nAs noted, we have not demonstrated optimality of HAC-UCB, and it is natural to ask whether the T^{3/4} can be improved. In light of (a) our positive results demonstrating that HAC-UCB dominates existing causal bandit algorithms and (b) our negative results demonstrating that optimal adaptivity is impossible, we believe it is reasonable to defer demonstrating full optimality to future work. We are actively pursuing such results, but have not yet solved this problem.\n", " Thank you for your detailed review. 
One small part we want to clarify: while we do introduce the notion of adaptive Pareto optimality for this problem, we do not show HAC-UCB achieves this, and instead leave it as an open problem. The rest of your summary, including that we show (a) adaptive optimality is impossible, (b) C-UCB gets linear regret in the worst case, and (c) HAC-UCB recovers optimal regret in the conditionally benign setting and beats C-UCB in the worst case, is accurate. We now address your comment about real world applications. \n\n## Real world applicability \n(the following two paragraphs are repeated in our response to reviewer tTqa)\n\nWe agree that connecting our theoretical results to real world applications is important. Due to their recent development (the theoretical interest only began in 2016), causal bandit algorithms have not yet been widely deployed in real world applications as far as we know, and we think that this is an exciting line of inquiry to pursue in future work. First, note that once interventions are possible (which is the case for randomized control trials, and necessary to avoid unverifiable assumptions), the causal setting is *exactly* the same as the bandit setting. We now motivate a potential setting of interest where our algorithm could be used, and are happy to add this motivation in the camera-ready discussion.\n\nSettings where HAC-UCB will be useful are those where the intervention space is very large and additional information is available. One such setting—which we are admittedly *not* domain experts in—is learning the causal effect of genes on disease phenotypes (e.g., [1]). Here, scientists have the ability to actually intervene via “perturbations” [2], yet choosing where and how to perturb often results in a combinatorially large number of interventions. As a high-level example, in [3] the authors note that the total number of variations (“single nucleotide polymorphisms”, or SNPs) is far too large to exhaustively test, but propose clustering genes into “modules”. These modules (observed via “gene expression probes”) can act as post-action contexts, but the authors note that many causal diagrams are possible (see their Figure 1), with the modules being actual d-separators, only mediators, or potentially causally unrelated. Hence, **using HAC-UCB could allow one to learn causal effects without depending on the infeasible number of SNPs, and yet also not rely on potentially incorrect assumptions about the causal graph.** Obviously, to state this more precisely and actually deploy our algorithm would require collaboration with domain experts, but we are optimistic that our approach can have concrete benefits in such settings.\n\n[1] https://www.ahajournals.org/doi/10.1161/circresaha.114.302904\n\n[2] https://www.broadinstitute.org/genetic-perturbation-platform\n\n[3] https://bmcproc.biomedcentral.com/articles/10.1186/s12919-016-0009-x\n\n## Limitations\n\nWe briefly remark on two settings where there is no benefit to using HAC-UCB: (a) when |A| is already very small or otherwise |Z| is approximately the same size as |A|, and (b) when it seems very unlikely that Z is a d-separator. In both cases, there are basically no benefits to using any causal bandit algorithm, and hence one should just use a worst-case optimal algorithm like UCB. Note that the guarantees for HAC-UCB still apply in these settings, and still prescribe much better performance than C-UCB (especially in case (b)), just that these settings are basically what UCB was designed to be optimal for. 
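For reference, the classic UCB1 rule referred to above as the worst-case optimal fallback can be sketched in a few lines. This is textbook UCB1, not the paper's HAC-UCB; the confidence-bonus constant is the standard illustrative choice.

```python
import math

def ucb1(num_arms, horizon, pull):
    """Classic UCB1; pull(a) returns a reward in [0, 1] for arm a."""
    counts = [0] * num_arms
    sums = [0.0] * num_arms
    for t in range(1, horizon + 1):
        if t <= num_arms:
            a = t - 1  # play each arm once to initialize
        else:
            # Optimism in the face of uncertainty: empirical mean plus a
            # confidence bonus that shrinks as an arm is pulled more often.
            a = max(range(num_arms),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2.0 * math.log(t) / counts[i]))
        r = pull(a)
        counts[a] += 1
        sums[a] += r
    return counts, sums
```

Its regret grows with the number of arms |A|; the point of the discussion above is that when the post-action context Z really is a d-separator with |Z| much smaller than |A|, this dependence can be improved, and when it is not, UCB remains the safe default.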
\n", " The paper addresses the multi-armed bandit problem where post-action contexts are observed and may or may not be a d-separator. When there is no d-separator, many algorithms, including the UCB algorithm, achieve the optimal regret. On the other hand, when there is information on which variable is a d-separator, UCB-style algorithms (such as C-UCB) have been proposed that achieve a regret bound that depends on the cardinality of the d-separator variable instead of the number of actions. \nHowever, there is no prior algorithm that addresses the case where the learner does not know whether a post-action variable is a d-separator or not. A safe way would be to apply the original UCB algorithm, but this will achieve suboptimal regret in case there actually is a d-separator. On the other hand, the authors prove that using the C-UCB algorithm can result in linear regret in case there is no d-separator. \nAn adaptive algorithm is required to identify the d-separator variables and decide whether to implement the original UCB or C-UCB for action selection. The authors propose a new algorithm based on consecutive hypothesis tests that identify whether a variable is a d-separator. The algorithm achieves optimal regret when there actually is a d-separator, and sublinear (O(T^{3/4})) regret when there is no d-separator. The authors also prove that their bound cannot be improved by more than a factor of T^{1/4}.\n Strengths: The authors define the conditionally benign property under which the proposed algorithm achieves optimal regret. This property is a weaker assumption than in previous works. Also, the proposed algorithm does not require knowledge of the exact marginal distribution of the d-separator variables. The optimal regret is achieved with approximate distributions as well.\n Were the experiments performed repeatedly, or was it just one run?\n I did not find any discussion on the societal impact.", " This paper proposes a novel algorithm for causal bandits that achieves a “best of both worlds” guarantee: comparable performance to the benchmark C-UCB algorithm in conditionally benign environments while achieving much better performance in the worst-case environment. In the causal bandit setting, the player observes some other post-action contexts after they play some action, rather than observing all contexts before they play an action. Under this setting, they introduce the conditionally benign property, which is closely tied to d-separability. They show that the benchmark C-UCB algorithm suffers linear regret when the environment is not conditionally benign, and that it is impossible to achieve worst-case optimal regret and optimal regret in the conditionally benign environment simultaneously. Then, they introduce the notion of Pareto optimality based on the Pareto frontier and show that their new algorithm, HAC-UCB, is Pareto optimal. \n In terms of originality, the paper provides several interesting contributions. Firstly, they introduce the idea of a \"post-action\" context. Secondly, they provide a novel algorithm, HAC-UCB, to solve the post-action context problem with a specific application to causal bandits when observing a d-separator. Perhaps most interesting are the new impossibility results for previous algorithms (C-UCB) which achieve optimality under the assumption of a conditionally benign environment, and the interesting lower bounds. \n\nOverall, the paper is well written and easy to understand. The impossibility results are very surprising, including those about adaptive minimax adaptivity. 
As far as the experiments go, they are not extensive; however, they effectively demonstrate the advantage of HAC-UCB over C-UCB in benign settings. \n\nMy only comment is that although this paper is a direct extension of the existing literature (Lu et al.), it needs more justification to show its significance. For example, I would love to see some real applications of their algorithm.\n\nOverall, I would love to accept the paper based on the amount of work and technical contributions. I think this paper has interesting ideas and the paper is well written. \n None in addition to the above. The authors could discuss the limitations of their proposed algorithm in more detail, e.g., in which situations it works well and in which situations it works poorly, in theory and in simulation. \n", " The authors study adaptivity in the setting of causal bandits. They consider a bandit setting where after an action a is taken, a post-action context Z_a is observed. They propose a new property, a conditionally benign bandit environment, that subsumes Z being a d-separator and other assumptions previously studied in the literature. They develop an algorithm that is adaptive to whether the bandit environment is conditionally benign. This allows the algorithm to obtain a regret bound matching C-UCB, a SOTA algorithm for causal bandits, when the bandit environment is conditionally benign, while having sublinear regret otherwise. The authors also show that it is impossible to be minimax optimal at once for environments satisfying the conditionally benign property and environments violating the conditionally benign property. Strengths:\n\nThe paper initiates the study of adaptivity in causal bandits, a very important problem. \n\nThe authors give a set of strong and novel results. In particular, their impossibility result on strict adaptivity is interesting.\n\nThe proposed notion of conditionally benign environments seems like a nice and simple generalization of prior assumptions in the literature. \n\nThe authors give a very strong empirical validation of the theory. It is nice to see that existing algorithms do not adapt to the distinct settings, while the proposed algorithm does. \n\nWeaknesses:\n\nThe authors do not seem to give real-world examples of the bandit protocol, which may make it seem more like a mathematically convenient setting to study adaptivity in causal bandits. In what applications would a post-action context be observed?\n\nWhile the problem itself is important and nice theoretical progress has been made, it is unclear to me that the proposed algorithm would lead to practical gains on real-world problems. In particular, the initial burn-in time of playing each arm $O(\\sqrt{T}/|A|)$ times seems costly and the hypothesis test seems potentially loose. It would be nice to see the empirical performance of the algorithm on real-world problems or larger-scale simulations.\n\nThe algorithm uses fairly standard ideas in the literature. It uses a hypothesis test to determine whether the environment is conditionally benign. Therefore, the algorithmic novelty seems limited.\n\nAs the authors indicate, the rate $T^{3/4}$ may be suboptimal. I'm curious as to whether this result is tight for the algorithm and whether there may be an algorithm with an improved rate. What are real-world examples of the protocol?\n\nDo the authors believe the algorithm could make improvements on real-world problems? It would be nice to see the empirical performance of the algorithm on real-world problems or larger-scale simulations. 
Yes.", " This paper studies the problem of online learning in causal bandits, i.e., multi-armed bandit models compatible with a certain set of structural constraints encoded in a causal graph. Existing methods assume that the correct causal graph is provided. This paper studies an important generalization of this problem setting where the causal graph could be misspecified. The authors propose an algorithm that achieves a sub-linear regret in the most general cases where the graph is incorrect. It is also able to exploit the existence of independence constraints to improve the learning performance when the graph is accurate. The authors further provide a worst-case analysis showing that any online learning algorithm is unable to achieve the optimal regret bound with and without the causal graph simultaneously when graph misspecification is possible. This paper is clearly written and well-organized. It attempts to address a challenging problem: online learning combined with hypothesis testing on pre-specified domain knowledge, which could possibly be misspecified. Existing methods in the causal bandit literature mostly assume the correct causal graph is provided. Therefore, I think this paper could have an impact on both reinforcement learning and causal inference. The proposed algorithm, HAC-UCB, appears sound and demonstrates some desirable properties. That is, the algorithm is able to achieve a sub-linear regret in the most general bandit model while improving the regret bound by exploiting underlying independence constraints when the assumptions in the graph are accurate, represented in the form of a set of d-separators. \n\nWhile it is sub-linear, the regret bound of HAC-UCB is still worse than that of UCB in the general case. To understand this observation, the authors also provide a worst-case analysis of this learning setting. The result is interesting. It suggests that there exists no online learning algorithm that could achieve the optimal regret bound with and without knowing the correct causal graph simultaneously. This suggests that the performance gap between HAC-UCB and UCB might be expected.\n\nAs for the weakness, I think the lower bound result in Theorem 6.2 does not suggest that the regret of HAC-UCB could not be improved. Theorem 6.2 states that one could not design an online algorithm achieving regret $\\mathcal{O}(\\sqrt{|A|T})$ (without graph) and $\\mathcal{O}(\\sqrt{|Z|T})$ (with graph) simultaneously. However, this does not mean that it is infeasible to design a robust online algorithm that tolerates graph misspecification while achieving the same regret as UCB, which is $\\mathcal{O}(\\sqrt{|A|T\\log(T)})$. The main reason is that the regret bound of UCB is not tight, containing a log factor $\\log(T)$. Therefore, the authors' comment, \"In Theorem 6.2, we will show that, up to logarithmic factors, it is impossible to improve the $T^{3/4}$ in this bound to $\\sqrt{T}$ while still achieving the improved regret on conditionally benign environments\", is somewhat misleading, and should be removed. In Definition 6.1, the constant C in Eqs 6.1 and 6.2 appears to be the same. However, in Theorem 6.2, the negative result utilizes two different constants C and C'. Is this a typo? Otherwise, Theorem 6.2 does not seem to show that \"it is impossible to be adaptively minimax optimal with respect to the conditionally benign property.\" The authors have clearly stated the assumptions behind the proposed methods. 
This work is mainly theoretical, and its long-term societal impact is unclear." ]
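Reading across the reviews and rebuttals for this paper, the adaptive structure they describe (a burn-in phase, a hypothesis test on the post-action context, then a branch between a context-exploiting policy and plain UCB) can be summarized schematically. Everything below is a paraphrase of that description with hypothetical placeholder functions; the actual test statistic, thresholds, and estimators of HAC-UCB are not reproduced here.

```python
def adaptive_causal_bandit(horizon, num_arms, pull_with_context,
                           passes_d_separator_test, context_policy, ucb_policy):
    """Schematic control flow only; all arguments are hypothetical placeholders.

    pull_with_context(a) -> (z, y): post-action context and reward for arm a.
    passes_d_separator_test(history) -> bool: stands in for the hypothesis
    test; the paper reportedly uses consecutive tests, whereas this sketch
    tests once after the burn-in for simplicity.
    """
    burn_in = max(num_arms, int(round(horizon ** 0.5)))  # ~O(sqrt(T)) pulls
    history = []
    for t in range(burn_in):
        a = t % num_arms  # round-robin exploration during burn-in
        z, y = pull_with_context(a)
        history.append((a, z, y))
    # Branch: exploit the context if the test passes, else fall back to UCB.
    policy = context_policy if passes_d_separator_test(history) else ucb_policy
    for t in range(burn_in, horizon):
        a = policy.choose(t, history)
        z, y = pull_with_context(a)
        history.append((a, z, y))
    return history
```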
[ -1, -1, -1, -1, -1, -1, 7, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "VKSS6EZ_qxE", "OiEkWpLHZw", "U7jp9aTVRLG", "mYum_crvzL", "GjpZpJCykqr", "paygbYkJN9y", "nips_2022_-e2SBzFDE8x", "nips_2022_-e2SBzFDE8x", "nips_2022_-e2SBzFDE8x", "nips_2022_-e2SBzFDE8x" ]
nips_2022_dMK7EwoTYp
MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction
In recent years, neural implicit surface reconstruction methods have become popular for multi-view 3D reconstruction. In contrast to traditional multi-view stereo methods, these approaches tend to produce smoother and more complete reconstructions due to the inductive smoothness bias of neural networks. State-of-the-art neural implicit methods allow for high-quality reconstructions of simple scenes from many input views. Yet, their performance drops significantly for larger and more complex scenes and scenes captured from sparse viewpoints. This is caused primarily by the inherent ambiguity in the RGB reconstruction loss that does not provide enough constraints, in particular in less-observed and textureless areas. Motivated by recent advances in the area of monocular geometry prediction, we systematically explore the utility these cues provide for improving neural implicit surface reconstruction. We demonstrate that depth and normal cues, predicted by general-purpose monocular estimators, significantly improve reconstruction quality and optimization time. Further, we analyse and investigate multiple design choices for representing neural implicit surfaces, ranging from monolithic MLP models over single-grid to multi-resolution grid representations. We observe that geometric monocular priors improve performance both for small-scale single-object as well as large-scale multi-object scenes, independent of the choice of representation.
Accept
There was a range of reactions to this paper from borderline reject to strong accept. Although several of the reviewers highlighted that the contribution could be viewed as incremental, it is clearly described, and robust across different types of scenes, and I concur with the three reviewers that give positive ratings. Therefore I am accepting this paper.
test
[ "fTk3TUMTV10", "5iPaUY8rnSl", "iDj-BDIe1ZJ", "KdAUG5_D8Ik", "Mj2ixa9Np1A", "IMSMcpsrjmP", "rimXmVYRTI3", "BzLmkdMxcM", "dPbQ-LMWlgr", "urLzkGW8FLy", "lW-dc6VCnbh", "VGBxfy7T4_", "zwxCSSbVh5p", "wbkQ-A2VCq0", "-pCaRQhf8Oq", "XHCz1fUCWuR", "Ew-2SFxbUOK", "FQ4os9jA0Xv", "DJ59g-xISgf", "wKiXsoR-kvd", "oA4y5lgkgUg", "LrFE1FwM1Sn", "13Sqj0UV-O1", "96CJLZ9sfm", "mE7zik7Q1K5" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your reply and for increasing your score. We are happy to change the title if the reviewers and AC recommend this.", " Thanks to the authors for addressing my concerns. After reading the rebuttal, I still find the experiments on architectural choices distracting from the main contribution of this paper. In essence, the results showed that different architectures respond differently to adding monocular depth and normal cues. If the objective is to maximize the accuracy *with monocular cues*, then the experimentation should lean towards that scenario, rather than simply comparing the four architectures for the purpose of constructing a baseline. After all, the architecture is another topic.\n\nI'm willing to increase my score as my other concerns have been addressed. My rating is borderline accept, as I think the contribution/improvement is largely due to leveraging the pre-trained Omnidata model for monocular depth and normals. The paper also has a gap between the title and the content. The title says it's exploring \"monocular geometric cues\" while the paper is actually resorting to external models for monocular depth and normals, which is more like a prior and not what I was expecting after reading the title.", " Thanks for the kind response.", " Dear Reviewer D9of,\n\nThank you very much for your reference and for increasing your score.\n\nFirst, please note that the referenced paper was submitted to arXiv after the NeurIPS deadline and should be considered concurrent work. Second, compared to this paper, our approach is not restricted to indoor scenes, and we did experiments on the challenging 3-view setting in the DTU dataset. Our approach is the first method that yields reasonable reconstruction on the large-scale Tanks and Temples dataset. Further, we demonstrated the effectiveness of monocular cues on various neural scene representations, ranging from MLPs to multi-res. feature grids.\n\nRegarding comparison to other cues, we provided a comparison with Manhattan-SDF, which utilises semantic and multi-view stereo depth maps, and other baselines in our response (1). Our monocular cues significantly improve the reconstruction quality.\n\nWe believe our approach is well qualified and brings value to the NeurIPS community, and we would be grateful if you could increase your score to acceptance.\n\nThank you very much for your time.", " Thanks for the response. The details you provided have partially resolved my questions, and I am willing to increase my score.\n\nHowever, I am still not too convinced by the proposed novelty. Whether the proposed method is incremental is not a matter of checking whether anyone has worked on exactly the same idea; otherwise, anyone could find one point that has definitely not been investigated by previous papers. I understood that the paper wanted to show the importance of \"monocular\" cues compared to cues in general. Then it should include more experiments to show why this is critical.\n\nTherefore, I still tend to keep my negative rating.\n\nAlso, the following paper also seems to use normal supervision from monocular prediction.\nhttps://arxiv.org/pdf/2206.13597.pdf", " Dear Reviewer D9of,\n\nThanks for your reply. First, using depth maps from multi-view stereo as additional supervision fails in indoor scenes with lots of textureless regions; see the table in our response (1). Because multi-view stereo methods use photometric cues to establish correspondences, they struggle in texture-less scenes and scenarios with sparse views. 
In contrast, monocular depth and normal estimators are trained on large-scale datasets in a supervised manner, and their outputs can serve as a strong prior for optimizing neural implicit models.\n\nSecond, to the best of our knowledge, our approach is the first method that applies monocular depth and normal cues to neural implicit surface models and has demonstrated significant improvements over baseline methods on several challenging datasets. We are not aware of prior work that studied monocular cues in the context of neural implicit surface reconstruction and would be grateful if you could provide more details regarding prior work that investigated this setting.\n\nThank you very much for your time.", " Hi, thanks so much for the very long response to my questions.\nHowever, I still think the contribution is relatively incremental and weak, and I disagree with your response.\nI don't think there is a fundamental difference between using additional \"cues\" either from multi-view reconstruction or from monocular predictions. Both have pros and cons, and there are several papers that have already done that.", " \nDear Reviewer grib,\n\nThank you again for your review. We hope that our rebuttal could address your questions. As there are only 4 hours left in the discussion phase, we wondered if you might still have any concerns we could address.\n\nThank you for your time.", " \nDear Reviewer Ak1a,\n\nThank you again for your review. We hope that our rebuttal could address your questions and concerns. As there are only 4 hours left in the discussion phase, we would be grateful to hear your feedback and wondered if you might still have any concerns we could address.\n\nThank you for your time.", " Dear Reviewer D9of,\n\nThank you again for your review. We hope that our rebuttal could address your questions and concerns. As there are only 4 hours left in the discussion phase, we would be grateful to hear your feedback and wondered if you might still have any concerns we could address.\n\nThank you for your time.", " Thanks for sharing these additional results.\n\nIt is pretty clear now that the proposed network outperforms previous ones even with a similar number of parameters (I am comparing the 12-layer MLP vs. multi-res 2^15). The visual results of rendered RGB also look reasonable, and it would be nice to include more discussion about this in the revision.\n\nI don't have any further concerns and will keep my rating as acceptance.\n", " \nDear Reviewer grib,\n\nThank you again for your review. We hope that our rebuttal could address your questions and concerns. As the discussion phase is nearing its end, we would be grateful to hear your feedback and wondered if you might still have any concerns we could address.\n\nThank you for your time.", " Dear Reviewer c8RS,\n\nThank you again for your review. We hope that our rebuttal could address your questions and concerns. As the discussion phase is nearing its end, we would be grateful to hear your feedback and wondered if you might still have any concerns we could address.\n\nThank you for your time.", " Dear Reviewer Ak1a,\n\nThank you again for your review. We hope that our rebuttal could address your questions and concerns. As the discussion phase is nearing its end, we would be grateful to hear your feedback and wondered if you might still have any concerns we could address.\n\nThank you for your time.", " Dear Reviewer D9of,\n\nThank you again for your review. We hope that our rebuttal could address your questions and concerns. 
As the discussion phase is nearing its end, we would be grateful to hear your feedback and wondered if you might still have any concerns we could address.\n\nThank you for your time.", " Thank you for your recognition and your time. We are very glad that you consider this paper a technically strong paper with novel ideas and excellent impacts, evaluation, resources. We address your remaining comments below.\n\n**Ablation on the number of input images and monocular geometric cues.**\n\nWe ran the experiments with a different number of input images and monocular geometric cues. Please refer to https://imgur.com/a/jqkYC1w for a comparison. We found that adding the monocular geometric cues leads to consistent improvements. We will add these results to our revised paper.\n", " Thank you for your insightful comments. We appreciate that you find that our approach is simple and effective, our experimental results are thorough, and our paper is well written. We now address the remaining comments in the following.\n\n**Number of parameters of each network.**\n\nWe list the number of parameters of each network used in the following table::\n\n|Method|MLP| Dense SDF Grid | Single-res. Feature Grid | Multi-res. Feature Grid|\n|-|-|-|-|-|\n|Num. Params | 0.7M | 6.4M | 33.8M | 12.8M|\n\nAs reviewer D9of asked for a more detailed comparison of different architectures, we ran additional experiments with different architecture configurations. More specifically, we consider MLPs with varying numbers of layers, and Multi-res. Feature Grids with varying sizes of hash tables, to evaluate the performance under different model capacities. \n\nWe list the number of learnable parameters under different architecture configurations in the table below and also show their performance over the optimization process in https://imgur.com/a/btDDxEC. \n\n|Model configuration | Num. Params|\n|-|-|\n|MLP (2 layers) | 0.15M |\n|MLP (4 layers) | 0.26M |\n|MLP (8 layers) | 0.63M |\n|MLP (12 layers) | 0.8M |\n|Multi-res. Feature Grids (hash table size $2^{13}$)| 0.41M |\n|Multi-res. Feature Grids (hash table size $2^{15}$)| 1.1M |\n|Multi-res. Feature Grids (hash table size $2^{17}$)| 3.67M |\n|Multi-res. Feature Grids (hash table size $2^{19}$)| 12.67M |\n\nOur experiments show that using monocular geometric cues improves reconstruction quality and convergence speed independent of the network configuration, which supports our claims in the paper. We will add this experiment and the results to the revised version of the paper. \n\n**Visualizing the texture of the surface.**\n\nPlease refer to https://imgur.com/a/1dYdvXE for comparing novel view synthesis results on the DTU dataset with three input views. We further a provide quantitative comparison as follows:\n\n|Method|MLP| MLP w/ cues|\n|-|-|-|\n|PSNR | 17.65 | **23.64** |\n\nUsing monocular geometric cues improves novel view synthesis results significantly. We will add both qualitative and quantitative results to our revised paper.\n\n**Showing actual image views or ground truth rendered images.**\n\nThanks for the great suggestion. We will add the ground truth images to our revised paper. We also provide an updated comparison (https://imgur.com/a/19oXNTJ and https://imgur.com/a/LLtMc4E) which includes the ground truth mesh for reference.\n", " Thank you for your valuable comments and constructive feedback. We address your concerns in the following. \n\n**The exploration of the \"monocular geometric cues\" is relatively weak. 
It is curious to see how different depth estimators may affect the improvement when incorporating the additional prior.**\n\nThanks for the question. We ran additional experiments to compare different depth estimators, yielding the following results:\n\n|Method|MLP| w/ MiDaS depth [1] | w/ LeReS depth [2]| w/ Omnidata depth|\n|-|-|-|-|-|\n|F-score | 64.2 | 68.6 | 72.6 | 86.7|\n\nAs shown in the table, adding monocular depth cues improves our performance over a single MLP, independently from the depth predictors we use. Unsurprisingly, better depth predictors lead to better performance, with the state-of-the-art Omnidata model giving the best results. We thus believe that the development of better depth cues will further improve the performance of our approach. \n\nMoreover, we tested our model with different monocular normal predictors:\n\n|Method|MLP| w/ Tilted normal [3] | w/ Omnidata normal|\n|-|-|-|-|\n|F-score | 64.2 | 78.3 | 92.2 |\n\n\nWe find that both monocular normal predictors improve the results, and similarly to our observations above, using normals predicted by the state-of-the-art Omnidata model leads to the best performance. We will add these results to the final paper.\n\n**It is also peculiar that this paper chose depth and normal as the only two monocular cues to experiment. The Omnidata is able to generate high quality ground truth for 19 more more tasks.**\n\nWe use depth and normal cues as they are directly related to geometry and naturally integrate into the volume rendering formulation (see L166-L178 in the main paper). Our experiments show that using these two cues already significantly improves performance, even compared to Manhattan-SDF, which uses depth, planarity, and semantic cues. We agree that exploring other monocular cues such as occlusion edges and curvature is an interesting future direction.\n\n**Appearance aspect of the reconstruction.** \n\nThanks for the suggestion! Please refer to https://imgur.com/a/1dYdvXE for comparing novel view synthesis results on the DTU dataset with three input views. We further provide quantitative results: \n\n\n|Method|MLP| MLP w/ cues|\n|-|-|-|\n|PSNR | 17.65 | **23.64** |\n\nUsing monocular geometric cues improves novel view synthesis results significantly. We will add both qualitative and quantitative results to our revised paper.\n\n\n**Experiments for the architectures are confusing.**\n\nWe first evaluate the different architectures without the cues to establish baseline results in order to measure the improvements obtained with the cues. Without using monocular cues, we find that Multi-res. Feature Grids perform better than MLPs, while after using monocular geometric cues, both architectures improve and achieve similar results on the Replica dataset. We agree that this is an interesting finding. We hypothesize that for the MLP, the model capacity is better allocated around surface regions of the 3D geometry if additional prior information is used during optimization, while the Multi-res. Feature Grids have enough model capacity for both cases, optimization with and without prior information. Further, we find that MLPs perform better than Multi-res. Feature Grids in the datasets with noisy observations (e.g., motion blur in image) and noisy camera poses such as ScanNet. Generally, a single MLP is robust to noises but tends to yield smooth surfaces, while Multi-Res. Feature Grids can capture details and converge fast but are less robust to noise and ambiguities in the input images. 
We will add this discussion to the paper.\n\n**I'm wondering what is the performance comparison between the proposed monocular priors vs self-supervised depth+normal estimation?**\n\nWe ran an experiment using a monocular depth map from a pretrained state-of-the-art self-supervised indoor depth estimator [4] in our framework: \n\n|Method|MLP| w/ self-supervised depth [4] | w/ Omnidata depth|\n|-|-|-|-|\n|F-score | 64.2 | 45.6 | 86.7 |\n\nThe self-supervised depth estimator degrades performance. We hypothesize that this is due to the weaker performance of the self-supervised model which is also trained with an RGB loss and hence suffers from the under-constrained problem of recovering geometry from multi-view images. We will add these results to the paper.\n\n**References**\n\n[1] Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer, T-PAMI 2022\n\n[2] Yin et al., Learning to Recover 3D Scene Shape from a Single Image, CVPR 2021\n\n[3] Do et al., Surface Normal Estimation of Tilted Images via Spatial Rectifier, ECCV 2020\n\n[4] Li et al., StructDepth: Leveraging the structural regularities for self-supervised indoor depth estimation, ICCV 2021", " **How robust is the monocular cue supervision to the model, e.g., when there are noise or large errors in the monocular predictions**\n\nThanks for the question. We did not observe any significant errors in the monocular predictions in the real-world datasets (ranging from object to large-scale indoor scenes). Even for ScanNet dataset where motion blurs or noises are present in the RGB images, the predicted monocular depths and normals are still of high quality. However, we do observe that the model predicts infinite depth in some region of the Replica dataset (e.g. windows or doors with completely black colors). We filter out these images if the maximum depth is ten times larger than the minimum depth in the image. \n\nTo further analyze the robustness of our approach to monocular geometric cues of different levels of quality, we further tested our model with different depth predictors: \n\n|Method|MLP| w/ MiDaS depth [6] | w/ LeReS depth [7]| w/ Omnidata depth|\n|-|-|-|-|-|\n|F-score | 64.2 | 68.6 | 72.6 | 86.7|\n\nFor all three methods, adding monocular depth improves performance over a single MLP without cues. Unsurprisingly, better depth predictors lead to better performance, with the state-of-the-art Omnidata model giving the best results. We thus believe that the development of better depth cues will further improve the performance of our approach.\n\nWe also tested our model with different monocular normal predictors and obtain the following results:\n\n\n|Method|MLP| w/ Tilted normal [8]| w/ Omnidata normal|\n|-|-|-|-|\n|F-score | 64.2 | 78.3 | 92.2 |\n\nWe find that both monocular normal predictors improve the results, and similarly to our observations above, using normals predicted by the state-of-the-art Omnidata model leads to the best performance. We will add this discussion and both experiments to our revised paper.\n\n**How are $w$ and $q$ estimated?**\n\n$w$ and $q$ are estimated per image as each depth map is defined up to an unknown scale and shift. In our implementation, we sample one image randomly in each iteration and then sample a batch of rays within the image. Then we estimate $w$ and $q$ using this batch of rays with a least-squares criterion which has closed-form solution, see L26-L36 in the supplementary document. 
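For concreteness, the per-batch closed-form solve described just above can be written in a few lines. This is a generic least-squares sketch of fitting a scale w and shift q; variable names are illustrative, and the exact formulation in the supplementary (L26-L36) may differ, e.g., in which depth is aligned to which.

```python
import numpy as np

def solve_scale_shift(d_rendered, d_mono):
    """Closed-form least squares for min over (w, q) of
    sum_i (w * d_rendered_i + q - d_mono_i)^2 on one batch of rays.

    d_rendered: depths rendered from the implicit surface.
    d_mono: monocular depth cues for the same rays (up to scale and shift).
    """
    # A column of ones turns the 2-parameter fit into ordinary linear least
    # squares, solvable via a 2x2 normal-equation system.
    A = np.stack([d_rendered, np.ones_like(d_rendered)], axis=-1)
    (w, q), *_ = np.linalg.lstsq(A, d_mono, rcond=None)
    return w, q

# The batch depth-consistency term could then read, for example:
# loss_depth = np.mean((w * d_rendered + q - d_mono) ** 2)
```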
We will make this clear in our revised paper.\n\n[6] Ranftl et al., Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer, T-PAMI 2022\n\n[7] Yin et al., Learning to Recover 3D Scene Shape from a Single Image, CVPR 2021\n\n[8] Do et al., Surface Normal Estimation of Tilted Images via Spatial Rectifier, ECCV 2020", " **However, the methodology did not make any changes (auxiliary loss terms) compared to the case of learning standard NeRF. I would expect different choices or additional analysis due to the task of surface reconstruction instead of simply applying the loss.**\n\nThere surely are more complex ways to utilize monocular cues, e.g., to reduce the number of samples by avoiding sampling in empty space. The advantage of using auxiliary loss terms compared to a deeper integration is flexibility, allowing us to use monocular cues independently of the used scene representation. At the same time, this \"simple\" way of integrating monocular cues already leads to significant and consistent improvements in challenging scenes. We consider the combination of simple and flexible integration and strong results a strength rather than a weakness of our approach.\n\n**For example, compared to NVS, will normal cues be more important?**\n\nWe find that both depth and normal cues are complementary and improve reconstruction results. We achieve the best results with both cues, see Table 2 in the main paper. Nevertheless, we indeed found that normal cues lead to relatively larger improvements, especially when using an MLP instead of a multi-res. grid representation.\n\n**For part (2), it is a bit disconnected from the main story.**\n\nWe make the general statement that incorporating monocular cues significantly improves performance. In order to verify this statement, we need to show that the cues improve performance independently of the chosen scene representation. \nThus, exploring the impact of the cues on various commonly used representations is a central part of our study. Moreover, a surprising conclusion from our experiments is that otherwise inferior MLP representations are able to attain performance on par with more recent multi-resolution feature grids when exploiting monocular cues, see Table 2 in the main paper. We believe that these results are interesting and worth sharing with the research community.\n\n\n**The findings from the paper about MLP vs. explicit grids are not surprising.**\n\nIn our experiments on ScanNet, Multi-res. Feature Grids lead to noisier reconstructions compared to MLPs due to the dataset's low image quality (e.g., motion blur) and noisy camera poses. We argue that this was not obvious before our experiments, and our results, for the first time, reveal that the MLP architecture is more robust to noisy inputs compared to Multi-res. feature grids. Generally, a single MLP is robust to noise but tends to yield smooth surfaces, while Multi-Res. Feature Grids are able to capture details and converge fast but are less robust to noise and ambiguities in the input images. Using monocular cues, however, we surprisingly found that a simple MLP architecture performs best overall, demonstrating that MLPs, in principle, can represent complex scenes while converging more slowly compared to grid-based representations. We believe our findings will provide valuable insights for future work. \n\n**More comparison of different architecture configurations.**\n\nThanks for the great suggestion! 
We agree that exploring different configurations is interesting and thus performed experiments with different architectural configurations. In order to evaluate the performance with different model capacities, we consider MLPs with a different number of layers and multi-resolution feature grids with different sizes of the hash table.\n\nWe list the number of learnable parameters using different architecture configurations in the table below, and also show their performance over the optimization processes in https://imgur.com/a/btDDxEC. \nOur experiments show that using monocular geometric cues improves reconstruction quality and convergence speed independent of the network configuration, which is consistent with our findings in the paper. We will add this experiment to the revised version of the paper.\n\n\n|Model configuration | Num. Params|\n|-|-|\n|MLP (2 layers) | 0.15M |\n|MLP (4 layers) | 0.26M |\n|MLP (8 layers) | 0.63M |\n|MLP (12 layers) | 0.8M |\n|Multi-res. Feature Grids (hash table size $2^{13}$)| 0.41M |\n|Multi-res. Feature Grids (hash table size $2^{15}$)| 1.1M |\n|Multi-res. Feature Grids (hash table size $2^{17}$)| 3.67M |\n|Multi-res. Feature Grids (hash table size $2^{19}$)| 12.67M |\n", " Thank you very much for the constructive feedback. We particularly appreciate that you find that our approach is \"quite clean and easy to apply to any neural implicit method\". We believe that the combination of simplicity (in terms of formulation and implementation), flexibility (in terms of not being tied to a specific scene representation), and state-of-the-art results is a particular strength of our approach. Below, we address the concerns raised in the review.\n\n**The contribution is weak and relatively incremental. The entire paper can be seen as a combination of two different parts: (1) adding monocular cues as additional supervision to improve MVS; (2) exploring different architecture choices for neural implicit representations.**\n\nWe respectfully disagree that our contribution \"is weak and relatively incremental\". To the best of our knowledge, existing approaches that use depth priors, such as DS-NeRF [1], Dense Depth Priors [2], and Manhattan-SDF [3], obtain these priors from multi-view reconstruction (either sparse point clouds from Structure-from-Motion or dense point clouds from Multi-View Stereo). However, multi-view reconstruction approaches often struggle in texture-less scenes and in scenarios with sparse views, see the table in the next answer block. \n\nIn contrast, we use monocular cues which are versatile (i.e., can be extracted from a single image using a feedforward network) and exploit the recognition ability of state-of-the-art deep neural networks as opposed to multi-view reconstruction approaches which utilize photoconsistency cues similar to the NeRF objective itself. Our insight is that photometric consistency cues used by surface reconstruction methods (such as VolSDF) and the recognition cues provided by monocular geometric networks are complementary, see L52-L59 in the paper. We show that such readily available monocular cues can be easily used to significantly increase 3D reconstruction quality, especially in challenging settings such as 3-view DTU and Tanks \\& Temples. 
We are not aware of any prior work based on neural implicit scene representations which yields good results for the advanced split of the Tanks \\& Temples dataset.\n\nFinally, using monocular rather than multi-view cues comes with its own challenges, most notably that depth is only defined up to an unknown scaling factor and an unknown shift per depth map and that the cues can be rather inaccurate. As such, the losses used by our approach differ significantly from the losses used by methods which integrate multi-view depth constraints (eg, DS-NeRF), by taking scale-invariance into account and modeling normal consistency. \n\n**For (1), although the idea is clean and easy to understand (as stated in strengths), it is not new in the scenario of learning neural fields. For example, several papers have already explored using predicted depth and other semantic features in learning NeRF. This work applies a very similar idea to surface reconstruction.**\n\nAs argued above, our approach uses a fundamentally different source of depth cues (monocular predictions) compared to existing work (multi-view predictions). \nGeometrically, our cues are somewhat weaker as each depth map is defined up to unknown scale and shift values (see also below), thus introducing additional parameters that need to be estimated. Our paper demonstrates that estimating these parameters during optimization is possible, leading to consistently improved 3D reconstruction results when integrating weak monocular cues.\n\nThe table below compares our approach, based on geometrically weak monocular cues, to Manhattan-SDF (which uses multi-view depth and semantic cues, as well as normals obtained from manhattan-world assumption) and multiple variants of VolSDF (used as baselines in [3]) for the ScanNet dataset. As can be seen, our monocular cues significantly improve performance compared to using multi-view cues. Please see [3] for a detailed explanation of the baselines.\n\n|Method|Chamfer-L1|F-score|\n|-|-|-|\n|VolSDF|0.267|0.364|\n|VolSDF + Colmap Depth [4]|0.164|0.431|\n|VolSDF + Colmap Depth [4] + Semantic [5]|0.104|0.474|\n|Manhattan-SDF [3] |0.070|0.602|\n|Ours|**0.042**|**0.733**|\n\n\n[1] Deng et al., Depth-supervised NeRF: Fewer Views and Faster Training for Free, CVPR 2022\n\n[2] Roessle et al., Dense Depth Priors for Neural Radiance Fields from Sparse Input Views, CVPR 2022\n\n[3] Guo et al., Neural 3D Scene Reconstruction with the Manhattan-world Assumption, CVPR 2022\n\n[4] Schönberger et al., Pixelwise View Selection for Unstructured Multi-View Stereo, ECCV 2016\n\n[5] Chen et al., Encoder-decoder with atrous separable convolution for semantic image segmentation, ECCV2018\n", " Existing neural fields-based methods fail to reconstruct high-quality surfaces for larger and complex scenes with sparse viewpoints. In this work, the authors inspect the issue as inherent ambiguity in RGB loss which provides insufficient constraints. Inspired by the area of monocular geometry prediction, this paper proposes MonoSDF, which explores the utility of depth and normal cues predicted by general-purpose monocular estimators. Experiments demonstrate the geometric monocular priors significantly improve the performance both for single and multi-object scenes. Strengths:\n\n-\tThe proposed method, which incorporates monocular predictions to ease geometry learning, is quite clean and easy to apply to any neural implicit method.\n\n\n\nWeaknesses:\n\n-\tThe contribution is weak and relatively incremental. 
The entire paper can be seen as a combination of two different parts: (1) adding monocular cues as additional supervision to improve MVS; (2) exploring different architecture choices for neural implicit representations.\n\n-\tFor (1), although the idea is clean and easy to understand (as stated in the strengths), it is not new in the scenario of learning neural fields. For example, several papers have already explored using predicted depth and other semantic features in learning NeRF. This work applies a very similar idea to surface reconstruction. However, the methodology did not make any changes (auxiliary loss terms) compared to the case of learning standard NeRF. I would expect different choices or additional analysis due to the task of surface reconstruction instead of simply applying the loss. For example, compared to NVS, will normal cues be more important? How does the noise of the prediction affect the geometry quality?\n\n-\tFor part (2), it is a bit disconnected from the main story. It is always a nice contribution to conduct a systematic exploration of the best architecture for surface reconstruction. Although the findings from the paper about MLP vs. explicit grids are not surprising, the comparison itself can be a good topic. However, such experiments can be done in any setting with different loss functions.\n When conducting the comparison between MLPs and dense grids, how is it possible to make the comparison fair? For example, MLP-based representations are known to be computationally heavy while having low storage requirements, while dense grids can consume very large amounts of memory. It is also hard to compare the “capacity” of the MLP-based model and the grid (including multi-resolution grids). Simply comparing the number of learnable parameters might not be enough. Therefore, to conduct a complete exploration of different architecture choices, simple tables (like Table 2) might not be enough. For example, for each choice, a figure is required to see the performance vs. model capacity vs. trade-offs (spatial, convergence speed).\n\nHow robust is the monocular cue supervision to the model? For example, how do the results change if there are large errors in the depth or normal predictions?\n\nIn Eq 13, are w and q estimated for each image separately? Line 189 mentions “per batch”. Does one batch contain multiple images?\n The paper discussed the limitations and social impacts.", " This paper proposed an approach to address 3D reconstruction from multi-view images. The paper is built upon several milestone papers/techniques, and the presented results are comparatively better. It starts from the signed distance function (SDF) and volume rendering of implicit surfaces. The proposal is the incorporation of two losses, i.e., depth and normal consistency estimated from individual images. The estimation, or \"ground truth\" for supervision, is from a pre-trained Omnidata model [14]. The paper also explored several architectures. *Strengths*\n1. The paper is easy to read and the main idea is clearly delivered. In essence, the paper took an off-the-shelf depth estimator to serve as a strong prior. \n2. The proposed approach is robust across different numbers of images. The method can not only be applied to single objects, but also to large-scale scenes.\n\n*Weaknesses*\n1. The novelty of this paper is insignificant. It pieces together several techniques (SDF, VolSDF, Omnidata, etc.), and many of them are well established. 
I think the paper can potentially compensate this weakness by addressing the points below:\n1.1 The exploration of the \"monocular geometric cues\" is also relatively weak. It is curious to see how different depth estimators may affect the improvement when incorporating the additional prior. \n1.2 It is also peculiar that this paper chose depth and normal as the only two monocular cues to experiment. The Omnidata is able to generate high quality ground truth for 19 more more tasks. \n\n2. The color aspect of the reconstruction is never demonstrated or explained. The results are mostly focused on the geometric reconstruction but not the color appearance or rendered 2D images.\n\n3. The experiments for the architectures are confusing. The paper first showed results for comparing different architectures without monocular cues and arrived at the conclusion that the best model is Multi-Res. Fea. Grids. Then after adding the proposed monocular cues, the paper concluded that MLP is the best model. This behavior is not well explained. 1. I'm wondering what is the performance comparison between the proposed monocular priors vs self-supervised depth+normal estimation? The paper has addressed the limitations of the existing model it used for depth estimation. However, it does not address whether Omnidata is the best prior to use compared to other models. It also does not address other monocular cues to explore.", " In this work, the authors proposed a novel and powerful geometric representation using neural implicit function. Previous neural implicit functions are trained purely on RGB reconstruction loss and have difficulty in representing more complicated geometry. In this work, the authors try to address this problem from two directions: first, they propose a novel depth and normal cues that significantly improves the quality of the reconstruction. Secondly, they explored different representation functions, including dense SDF grid, simple MLP, feature grid + MLP and multi-resolution feature grid + MLP. Both of these changes significantly improve the quality of reconstructed geometry. This is a high quality work. The idea of using depth and normal cues are simple and effective. The proposed multi-resolution feature grid + MLP representation is also novel and effectively improves the reconstruction quality over MLP solution. The experimental results are thorough and the paper is well written and easy to follow.\n\nI don’t find any particular negative point. I only have a few minor questions for the authors:\n\nIn Table 1, in addition to network type, it is also useful to report the number of parameters of each network. Normally, in neural representation, larger network parameters often lead to a better representation power, so this information is important to understand whether the improvement comes from network architecture itself, or simply a larger set of parameters. Moreover, it would be better to compare networks with similar numbers of parameters in Table 1.\n\nThe proposed neural representation also recovered the texture of the surface (color). I think it is also important to visualize them to understand the accuracy of texture recovery, even though they are not the main task of this work. 
It is hard for me to tell whether the network can also represent texture, or the color prediction network is simply to help to train the SDF.\n\nI also have one minor suggestion to this work:\n\nIn Figure 3, 4, it would be better to show actual image view or ground truth rendered image, as it is hard to tell whether some small reconstructed geometry are correct or not.\n\n No as far as I know.", " This paper presents a framework to utilize monocular geometric cues to improve multi-view 3D reconstruction quality, efficiency, and scalability for neural implicit surface models. A systematic comparison and detailed analysis of design choices of neural implicit surface representations including vanilla MLP and grid-based approaches has been presented. Among these representations, a simple MLP architecture performs quite well, which demonstrates that MLPs are able to represent complex scenes. I really like the idea proposed in this paper. To improve the reconstruction quality with sparse input, shape priors should be added. There are several ways to construct the priors. One solution is to construct the parametric model for some special types like face and body. This paper explores another way with the help of depth estimation from single image. Although the estimated depth and normal may contains noises or with wrong scales, the proposed method well handles these issues. One ablation study I want to see in the paper is the number of input multi-view images and monocular geometric cues. Specifically, the monocular geometric cues could help the reconstruction when the input images are sparse. If the input images are dense, the monocular geometric cues might influence the reconstruction quality due to the error contained in the monocular cues. I wonder where is the balance for the input number of images? For a reconstruction problem with different number of input images, how should I know whether the monocular geometric cues could help or not? Except the question listed in the above, I don't have other concerns to this paper." ]
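The per-image scale $w$ and shift $q$ raised in the questions above (the paper's Eq. 13) can be recovered in closed form by least squares before comparing rendered depth against the monocular cue. Below is a minimal sketch of that alignment step, assuming flattened depth tensors and a mean-squared residual; these are illustrative choices, not necessarily the paper's exact formulation.

```python
import torch

def scale_shift_invariant_depth_loss(pred: torch.Tensor, mono: torch.Tensor) -> torch.Tensor:
    """Align rendered depth `pred` to a monocular cue `mono` that is only
    defined up to an unknown scale w and shift q, then penalize the residual.
    Solves (w, q) = argmin ||w * pred + q - mono||^2 in closed form.
    """
    d = pred.reshape(-1)
    t = mono.reshape(-1)
    A = torch.stack([d, torch.ones_like(d)], dim=-1)      # [N, 2] design matrix
    wq = torch.linalg.lstsq(A, t.unsqueeze(-1)).solution  # [2, 1] least-squares fit
    w, q = wq[0, 0], wq[1, 0]
    return ((w * d + q - t) ** 2).mean()
```

Whether the fit is computed per image or per batch — exactly the question raised above — only changes which pixels enter the least-squares system; the sketch itself is agnostic to that choice.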
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 4 ]
[ "5iPaUY8rnSl", "FQ4os9jA0Xv", "KdAUG5_D8Ik", "Mj2ixa9Np1A", "IMSMcpsrjmP", "rimXmVYRTI3", "urLzkGW8FLy", "mE7zik7Q1K5", "13Sqj0UV-O1", "LrFE1FwM1Sn", "Ew-2SFxbUOK", "mE7zik7Q1K5", "96CJLZ9sfm", "13Sqj0UV-O1", "LrFE1FwM1Sn", "mE7zik7Q1K5", "96CJLZ9sfm", "13Sqj0UV-O1", "wKiXsoR-kvd", "oA4y5lgkgUg", "LrFE1FwM1Sn", "nips_2022_dMK7EwoTYp", "nips_2022_dMK7EwoTYp", "nips_2022_dMK7EwoTYp", "nips_2022_dMK7EwoTYp" ]
nips_2022_Euv1nXN98P3
TarGF: Learning Target Gradient Field for Object Rearrangement
Object rearrangement is the task of moving objects from an initial state to a goal state. Here, we focus on a more practical setting of object rearrangement, i.e., rearranging objects from shuffled layouts into a normative target distribution without explicit goal specification. This setting remains challenging for AI agents, as it is hard to describe the target distribution (goal specification) for reward engineering or to collect expert trajectories as demonstrations; hence, it is infeasible to directly employ reinforcement learning or imitation learning algorithms to address the task. This paper aims to search for a policy using only a set of examples from a target distribution instead of a handcrafted reward function. We employ the score-matching objective to train a Target Gradient Field (TarGF) that indicates, for each object, a direction which increases the likelihood of the target distribution. The TarGF can be used for object rearrangement in two ways: 1) for model-based planning, we can cast the target gradient into a reference control and output actions with a distributed path planner; 2) for model-free reinforcement learning, the TarGF is not only used for estimating the likelihood change as a reward but also provides suggested actions in residual policy learning. Experimental results on ball and room rearrangement demonstrate that our method significantly outperforms state-of-the-art methods in the quality of the terminal state, the efficiency of the control process, and scalability.
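One way to read the model-free use of the TarGF described in the abstract is as a first-order estimate of the likelihood change between consecutive states. The sketch below illustrates that reading; `score_net` and the flat state tensors are assumed interfaces, not the released implementation.

```python
import torch

def likelihood_change_reward(score_net, state: torch.Tensor, next_state: torch.Tensor) -> torch.Tensor:
    """First-order estimate of the change in target log-likelihood:
        log p_tar(s') - log p_tar(s) ~= <s_theta(s), s' - s>,
    where s_theta(s) ~= grad_s log p_tar(s) is the learned target gradient field.
    """
    with torch.no_grad():
        grad = score_net(state)            # estimated gradient of log p_tar at s
    return ((next_state - state) * grad).sum(dim=-1)
```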
Accept
After a strong rebuttal from the authors and an extensive discussion among the reviewers, I believe this work will be a valuable contribution to NeurIPS. I recommend it for acceptance and encourage the authors to address the reviewers' comments in the camera-ready version of the paper, especially the point about the simplistic evaluation of the method: please consider a more realistic evaluation scenario.
train
[ "YgtMCuHE1nh", "ei6ZeVvqp0H", "2bhf0XbgkY", "q0n1RDldKxx", "-JXAzTDMF1_", "7UozQ3OS0-j", "OAEbFhwbtA", "PU_oJZoC-Xx", "I2oFuW0KalU", "Ny-8tE-DTr", "oz-TjrksYtA", "iz9N3OK2xn", "34FhNqZRDGI", "Eb4SmRvIn8s", "6nl8YB8RXXy", "LXCjKWQiZaH", "IHl1-4GwYZQ", "jlgfVD7Hhq", "9hBcUj-YNF", "c7-lJcUukr3", "3CZzxXMp7vG" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for raising your rating to 7. We are so glad that our responses help address your concerns. Thanks again for all your valuable feedback!", " I thank the authors' detailed clarification. Most of my concerns are addressed for example the multi-model distribution and experiment task setups. Based on that, I'll increase my score from 6 to 7.", " As the author-reviewer discussion is ending, we would like to ask whether our replies have addressed your concerns. Please let us know whether you have further questions. **We are sincerely waiting for your response.**", " Thanks for raising your rating to 6. We are glad that our responses help alleviate your concerns. Thanks again for all your valuable feedback!", " We would like to ask whether we have addressed your concerns for **updating your rating**? Please let us know whether you have further questions. We are sincerely waiting for discussion with you", " > I thank the authors for their response. The planner results in particular are really quite compelling as it both shows the efficacy of the learned reward in a longer horizon setting and shows that the reward is stable when only a single object is being moved at a time. The updated limitation section is also appreciated.\n\nMany thanks for your response. We are glad that our responses help alleviate your concerns.\n\n> The only point I will push back on is room rearrangement vs object arrangement. …..\n\nFor this only point, we appreciate you for reminding us of these concurrent works. As described in Sec 2.1 of our main paper, there are also a number of previous works naming similar tasks as \"Object arrangement\", particularly in computer graphics and robotics. Nevertheless, we acknowledge that the two papers in ECCV 2022 (The acceptance is announced on Jul. 3) also name the setting of \"manipulating objects without explicit specifying goal\" as \"rearrangement\". **So, we have renamed our \"arrangement\" task to \"rearrangement\", as requested.**\n\nThese concurrent works also demonstrate that the problem we study is important and challenging to the community. We are also glad to see these newly released datasets, which can help us further extend our method and conduct experiments on the embodied AI scenarios. **We would like to extend our framework to these concurrent benchmarks in the future.**\n\nWe would like to emphasise that we are \"rearranging objects without explicit goal specification\" rather than achieving a specific goal. The concurrent works you mentioned [2, 3] exploit the commonse knowledge from Large Language Model (LLM) or memex graph to infer rearrangements goals. In this paper, we focus on a more fundamental problem in rearrangement: *how to estimate the similarity (i.e., the target likelihood) between the current state and the example sets and manipulate objects to maximise it*. We only need a set of positive state examples to learn the gradient field, rather than the prior knowledge about specific scenarios. **We revised our main paper accordingly and described the differences between our work and the others** to ensure our work can be properly appreciated by the community working on a similar task.\n \nMany thanks again for your response! We hope to hear back if you have further questions!\n", " I thank the authors for their response. 
The planner results in particular are really quite compelling as it both shows the efficacy of the learned reward in a longer horizon setting and shows that the reward is stable when only a single object is being moved at a time. The updated limitation section is also appreciated.\n\nThe only point I will push back on is room rearrangement vs object arrangement. While Batra et al didn't describe the author's object arrangement exactly as one of their examples and the present challenge of object arrangement and room rearrangement are different (IRL and RL, respectively), they are very similar tasks as the end of the day; the goal specification is just different. [1], [2], [3] all study similar tasks and refer to their task as rearrangement (note that these are concurrent works). My encouragement of using rearrangement isn't to detract from the novelty of this work, but to make sure this work can be properly appreciated by community working on a very similar task and so that it's easier for readers to understand how this work relates to others.\n\n\n[1] Netanyahu, Aviv, et al. \"Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning.\" ICML 2022.\n[2] Yash Kant et al, \"Housekeep: Tidying Virtual Households using Commonsense Reasoning\", ECCV 2022\n[3] Gabriel Sarch et al, \"TIDEE: Tidying Up Novel Rooms using Visuo-Semantic Commonsense Priors\", ECCV 2022.", " Thanks for raising your rating to 6. We are glad that our responses help alleviate your concerns. Thanks again for all your valuable feedback!\n\n", " The **major revisions to the initial submission** are summarized as follows:\n\n- Fix the typos.\n- Update the problem formulation in Sec 3 with an additional assumption.\n- Add eward details in Sec 4.4.\n- Replace the variance bar with confidence intervals in Figure 5.\n- Update details of the generalisation experiment in Sec 6.2.\n- Update the limitation and future work in Sec 7.\n- Rename \"Arrangement\" task to \"Rearrangement\".\n\nBesides, we have **demonstrated the additional analyses** on an [anonymous site](https://sites.google.com/view/neurips2022-paper2108-rebuttal) and **updated the supplementary** with the experiment details of the additional analyses (See Sec 5.6~5.9 in supplementary).", " Thank you for the response! I don't have any further questions at the moment. Given the response, I have updated my rating to 6.", " > **Q1**: The evaluation is very simple ... (one object moves at a time). \n\n**A1**: Sorry for making you confuse our object arrangement with the existing room rearrangement. We argue that the rearrangement and arrangement are two quite different tasks, even though their names are similar.\n\nFor a clear comparison, we compare the two tasks in the following table in different aspects:\n\n| | Room Rearrangement | Object Arrangement |\n|--------------------|--------------------|-------------|\n| Deterministic Goal \t| Yes | No | \n|Pre-defined Reward \t| Yes | No |\n| Observation \t\t | Partial\t\t| Global | \n| Learning Paradigm | Reinforcement Learning | Inverse Reinforcement Learning | \n\n**We highly encourage you to read the [*common responses*](https://openreview.net/forum?id=Euv1nXN98P3&noteId=jlgfVD7Hhq) for a better understanding of the difference between the two tasks.**\n\nWe agree that evaluating the methods in a realistic 3D environment can further improve our paper. 
But such 3D environments will bring additional distractors (partial observation, dynamic noises) that make the results too complex to analyze, particularly at the current stage. Note that a concurrent work in ICML 2022 [4] in object arrangement, published after our submission, also uses 2D environments for evaluation.\n\n**Object arrangement is not limited to the room scenario.** For instance, in multi-agent formation control, the UAV/UGV are required to move together to form a pattern in the shortest path. Our ball arrangement tasks are exactly inline with this real-world scenario.\n\nThe reason why we further evaluate our method in a room scenario is that we would like to show our method can handle more observational variables, e.g., orientation, object size, and category. Inspired by the multi-chair arrangement example in our demo video, we enable all objects to be moveable to simplify the dynamic, so as to emphasize the key difficulty of the arrangement task.\n\n**Object arrangement task is an underexplored problem.** To emphasize the key difficulty of this problem, we evaluate our method in controlled environments with fewer variants. Note that a concurrent work for reward learning in ICML [4] (released after our submission), adopts a similar research paradigm. Compared with this work, our experiment setting considers significantly more objects with diverse attributes (e.g. categories, bounding box). \n\nWe have refined our paper to make it more clear.\n\n\n>**Q2**: In the second the furniture in rooms must be rearranged back to its initial state.\n\n**A2**: Actually, we do not require rearranging the furniture back to its initial state. \nIn Figure 4, Ours(SAC) arrangement results look similar to the 'ground truth' rooms because we demonstrate **the nearest configuration** w.r.t. its 'ground truth'. \nThank you for pointing this out. We revised the caption of Figure 4 of our main paper accordingly to make this clear.\n\n> **Q3**: ... likely doesn't outperform GAIL + SAC ... Can the authors report confidence intervals instead of variance?\n\n**A3**: Thank you for pointing this out, we have replaced the variance intervals on paper with confidence intervals. It is clear that our method outperform GAIL+SAC with a large margin in room arrangement (Ours: 0.040 +- 0.002 vs. GAIL: 0.173 +- 0.007).\n\n\n", " > **Q1**: The paper is somewhat limited in scope. It is only applied to a very specific robotics problem (that of object rearrangement)...\n\n**A1**: Object arrangement is not limited to the room scenario. For instance, in multi-agent formation control, the UAV/UGV are required to move together to form a pattern in the shortest path. Our ball arrangement tasks are exactly inline with this real-world scenario.\n\nThe reason why we further evaluate our method in a room scenario is that we would like to show our method can handle more observational variables, e.g., orientation, object size, and category. Inspired by the multi-chair arrangement example in our demo video, we enable all objects to be moveable to simplify the dynamics so as to emphasize the key difficulty of the arrangement task.\n\n**Object arrangement task is an underexplored problem.** To emphasize the key difficulty of this problem, we evaluate our method in controlled environments with fewer variants. This research paradigm is common in machine learning communities. Also, a concurrent work for arrangement study in ICML [1] (released after our submission) adopts a similar research paradigm. 
Compared with this work, our experiment setting considers significantly more objects with diverse attributes (e.g. categories, bounding box).\n\n\n>**Q2**: Here, some major simplifying assumptions had to be made (such as the fact you can directly control the velocity of any object ...\n\n\n**A2**: All comparisons with the baseline use the same action space and information, so this point would not affect our conclusion in terms of system design. For the force-based action, we can easily extend the velocity-oriented policy with a two-level hierarchy controller, i.e., the high-level controller outputs the expected velocity, and the low-level controller (a PID-like controller) outputs the force to adjust the velocity accordingly. \n\nTo evaluate the effectiveness of this hierarchy controller, we further conduct experiments on the clustering + circle environment. We compare this extended bi-level approach with Ours(SAC) in our paper. The results in Sec. 4 in [**our site**](https://sites.google.com/view/neurips2022-paper2108-rebuttal/) show that *velocity-oriented gradients remain meaningful to force-based action space* but will face some cost in time steps due to the control error. \n\n\n>**Q3**: The paper only shows results in low-dimensional domains (small graphs). Since score-based generative modelling also works in high-dimensional domains (such as images), it would be interesting to see if the method can be used for reward learning from scene images, for example.\n\n\n**A3**: Thank you for this question. To show our method can be used for reward learning from raw-pixel images, we further analyse our framework by training a target score network that takes the image as input and then use the trained target score network to train our policies.\n\nWe compare this image-based gradient field (denoted as Ours(Image)) with 1) the state-based target score network, denoted as *Ours(State)*; and 2) a goal-conditioned baseline, denoted as *Goal(State)*. \nNote that *Ours(State)* and Goals(State) represent *Ours(SAC)* and *Goal(SAC)* in the main paper, respectively.\n\nResults in [**our site**](https://sites.google.com/view/neurips2022-paper2108-rebuttal/) Sec. 5 show that *Ours(Image)* achieves comparable results with Goal(State), demonstrating our framework still has good performance even though the target score network uses the raw-pixel images as input.\nThese results also indicate that raw-pixel observation is a distractor compared with our key focus. Hence, we choose to conduct our experiments in a state-based setting in our main paper.\n\n\n>**Q4**: While the paper is nicely structured, it contains a large number of typos and grammatical errors. \n\n\n**A4**: Thanks for pointing this out. We have fixed most typos and grammatical errors in the revision.\n\n\n>**Q5**: “learning a score function to qualify a state” → what does “qualify” mean here? \n\n**A5**: Thanks for pointing this out. Here we mean learning a score function to quantify the arrangement likelihood of a state. We have revised the paper accordingly.", " >**Q6**: How do you ensure collision avoidance when using model-free RL? \n\n\n**A6**: As mentioned in L202 in the main paper, the RL agent will receive a centralised reward (likelihood) from the score network and a decentralised reward (collision penalty $c_t^i $) from the environment. \n\nHence, the reward for the i-th agent at timestep t can be written as $r_t^i = r_{likelihood} - \\lambda*c_t^i$. As described in supplementary Sec. 
2.4(and 3.1), $\\lambda > 0$ is a hyper-parameter to balance the immediate reward and the collision penalty, the collision penalty counts the total number of collisions to agent i $c_t^i = \\sum_{j\\neq i} col_{i, j}$, where $col_{i, j}$ equals to 1 when i-th and j-th agent collide with each other and 0 elsewhere.\n\nThank you for pointing this out. We revised the Sec. 4.4 of our main paper accordingly to make this clear.\n\n>**Q7**: I don't think the authors have done a good job of addressing the limitation of the work...\n\n**A7**: Thanks for the suggestions. We have updated the limitation section of our paper.\n\nWe sincerely hope that our response above makes things clearer to you and addresses your concerns well. Otherwise, please do not hesitate to ask us more, and we are very happy to discuss further. Thank you again for the useful comments and questions!\n\n[1]Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning. Aviv Netanyahu*, Tianmin Shu*, Joshua B. Tenenbaum, and Pulkit Agrawal. ICML 2022\n", " > **Q4**: Is (1) a well-defined optimization problem? For any non-trivial starting state, won't p_tar(s_0) always be negative infinity, so the discounted sum will always be negative infinity.\n\n**A4**: Thank you for this valuable question! We can additionally assume that the $p_{tar}(s_0)$ always be positive. \nIf there are some states that $p_{tar}(s_0) =0$, we can slightly perturb all the original target examples using a small Gaussian noise (e.g., N(0, 0.0001)). Then we can replace the original target distribution with this perturbed one at almost no cost. The above trick used in our implementation was previously used in [5] to tackle the manifold hypothesis issue. We will add the details in the revision.\n\n>**Q5**: This reduces the time horizon, isn't representative of how the objects would be moved by a single agent (one object moves at a time).\n\n**A5**: As described in the previous question, we argue that multiple objects moving together is also a practical setting. We agree there are cases where objects should be moved one by one, but our framework still has the potential to meet this setting. To this end, we design a bi-level approach (denoted as *Ours + Planner*) for object arrangement: The high-level policy determines which object to move according to the trained target score network (e.g. choosing the object with the largest gradient component). The low-level policy leverages the target score network and ORCA planner to output the action. \n\nWe compare this approach with another heuristic-based bi-level planner (denoted as *Goal + Planner*): The high-level planner first generates goals for each object and chooses the object with the farthest distance to the goal to move. The low-level planner is the same as *Ours + Planner*.\n\nAs shown in Figure. 3 in [**our site**](https://sites.google.com/view/neurips2022-paper2108-rebuttal/), *Ours + Planner* is better than *Goal + Planner* in efficiency. This shows the effectiveness of our methods in handling the scenario where the agent can move one object at a time.\n\n> **Q6**: However, this residual policy requires an oracle entity-based representation of the state ...I am concerned by the assumption that such a representation is available as the task complexity increases.\n\n**A6**: Thank you for pointing this out. Currently, we focus on the setting where the state is fully observable to the agent. Hence, the oracle states of the objects are available to the agent. 
Considering a more complex setting, e.g., vision-based control, there are three ways to realize a vision-based object arrangement: \n\n- 1) employ the computer vision models to extract the explicit state from the image, e.g., using rotated box detection[1] to provide the bounding box and category of objects.\n- 2) Learn an object-centric world model as an implicit representation of the oracle state [2].\n- 3) Learning the vision-based policy by cloning the state-based behaviour, such as [3].\n\n> **Q7**: It is unclear if the baselines receive the same information ... not position + category + bounding box, in the paper.\n\n**A7**: Thank you for pointing this out. All the baselines and our methods receive the same state representation in the same task, yet the representations differ in different tasks.\n\n- For ball arrangement, the detailed state representations are mentioned in Supp 1.1.\n- For room arrangement, the state is defined as the concatenation of 2-D position, 1-D orientation, 2-D bounding box, and a category label. We added the details in the supplementary Sec. 1.2.\n\n\n> **Q8**: Rearrangement should be used instead of arrangement to be inline with what the community is calling this task. See Batra et. al. 2020.\n\n**A8**: The problem setting we studied is different from *Batra et. al. 2020* in motivation, formulation, and challenges. As described above, the room rearrangement focus on the planning and exploration problems in embodied AI, where the goal is given and deterministic. Instead, the core challenge of object arrangement is **learning to control with examples and without reward**. So we do not agree that rearrangement should be used in this work.\n\nWe sincerely hope that our response above makes things clearer to you and addresses your concerns well. Otherwise, please do not hesitate to ask us more, and we are very happy to discuss further. Thank you again for the useful comments and questions!\n\n**Reference:**\n[1] https://github.com/open-mmlab/mmrotate\n\n[2] Kipf, Thomas, Elise van der Pol, and Max Welling. \"Contrastive learning of structured world models.\" ICLR 2020.\n\n[3] Zhong, Fangwei, et al. \"Towards distraction-robust active visual tracking.\" International Conference on Machine Learning. ICML 2021.\n\n[4] Netanyahu, Aviv, et al. \"Discovering Generalizable Spatial Goal Representations via Graph-based Active Reward Learning.\" ICML 2022.\n\n[5] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. NeurIPS 2019 (Oral).", " We also notice that there are some questions and suggestions for our work. We further conduct additional experiments and analyses to demonstrate:\n- 1) Our method can tackle the multi-modal distribution and would not arrange objects into a 'mean pattern' \n- 2) Our proposed score-based reward learning method can work on image-based observation \n- 3) Our method can handle the cases when only one object can move at a time or the agent can only apply force on objects. \n\nWe show the vivid results at a [anonymous site](https://sites.google.com/view/neurips2022-paper2108-rebuttal). \n\nWe sincerely hope our work contributes to the ML+Robotic research field. Below we reply to reviewers’ questions point-by-point. Thanks again for your valuable comments and suggestions.\n", " > **Q4**: I would suggest trying with bi-level planning which first plans ... to avoid the collision\n\n**A4**: Thanks for your insightful suggestion! 
\nIn fact, the Goal (ORCA) baseline in ball arrangement can be regarded as an implementation of bi-level control. We do not apply it in the room arrangement as the rectangle-shape furniture objects do not satisfy the circle-shape assumption made by the decentralised planner ORCA. We also tried to implement a centralised planning algorithm (e.g. RRT [1]). However, it is costly in time to search for a reasonable path, i.e., taking almost 1 min for 3x3 balls. If we increase the number of objects (i.e., from 3x3 to 3x7 balls), we find that it failed to search a motion path in a limited time (i.e. 10 minutes). \n\nWe notice that such a solution is of two main limitations: \n1. Open-loop planning: The proposed goal and the initial state may not be reachable or far away from each other.\n2. Accessibility of the generated goal: The goal proposer ignores the environment dynamics. So the generated goal may be physically inaccessible (e.g. objects overlap with each other), as shown in Fig.4 in the supplementary.\n\nAs demonstrated in the ball arrangement experiments, the above two limits lead to weak performance : \n1. In Fig. 5 of the main paper, the likelihood curves of goal-based methods are significantly below ours. \u000bIn Table 1 of the supplementary, the average length of trajectories of goal-based methods is significantly below ours (e.g., In Circling + Clustering, the averaged state change of Ours(SAC) 48.93 +- 4.68 achieves less than half of Goal(SAC) 122.72 +- 5.93).\n2. Besides, in the room arrangement, the rectangle-shape furniture objects do not satisfy the circle-shape assumption made by the decentralised planner ORCA. We also tried to implement a centralised planning algorithm for ball arrangement (i.e. RRT). However, though this algorithm can find a feasible solution in one minute when the object number is less than 3*3, it failed to search a motion plan in ten minutes when the number of objects got higher. This is also the well-known curse of the dimensionality problem of the centralised planner.\n\nThese aspects cause unsatisfied performance for the goal-based approach. Thanks for bringing up this baseline. We are willing to add it to the revised paper if requested.\n\n> **Q5**: Did you test the generalisation of the proposed method? e.g. Testing in an unseen room with novel furniture.\n\n**A5**: Yes, we tested the generalisation. \nIn the ball arrangement, we test the generalisation of the unseen initial state across various numbers of balls. To be specific, the gradient fields and policy are trained in the environment with 3x7 balls. Then, we test the learned policy in the environment with 3x8, 3x9, and 3x10 balls. The results are shown in Fig.6 of the main paper.\n\nIn the room arrangement, we emphasise evaluating the generalisation of the target score network. The target score network is trained under 756 room examples and tested on 83 unseen environments. In this case, we train the RL-based policy in 83 testing environments with the pre-trained target score network, which provides the reward and gradient-based action for policy learning. \n\nTo further evaluate the generalisation of the learned policy, we conduct an additional experiment, where we also train the policy in 756 rooms and evaluate the policy in 83 unseen environments. 
The results of the above setting are reported as below:\n\n| Setting | Coverage Score | Collision Num |\n|----------------------------------|----------------|----------------|\n| Gradient(Unseen), Policy(Seen) | 0.038 +- 0.001 | 0.152 +- 0.007 |\n| Gradient(Unseen), Policy(Unseen) | 0.041 +- 0.002 | 0.145 +- 0.002 |\n\nWe can see that the policy trained in the different settings are of comparable performance, showing the good generalisation of our method in room arrangement. \n\nWe sincerely hope that our response above makes things clearer to you and addresses your concerns well. Otherwise, please do not hesitate to ask us more, and we are very happy to discuss further. Thank you again for the useful comments and questions!\n\n[1] Rapidly-exploring random trees: A new tool for path planning, LaValle, Steven M and others, 1998", " >**Q1**: How does the RL residual policy learn to avoid collision? \n\n**A1**: As mentioned in L202 in the main paper, the RL agent will receive a centralised reward (likelihood) from the score network and a decentralised reward (collision penalty $c_t^i$) from the environment. Hence, the reward for the i-th agent at timestep t can be written as $r_t^i = r_{likelihood} - \\lambda*c_t^i$. As described in supplementary Sec. 2.4(and 3.1), $\\lambda > 0$ is a hyper-parameter to balance the immediate reward and the collision penalty. The collision penalty counts the total number of collisions to agent i $c_t^i = \\sum_{j\\neq i} col_{i, j}$, where $col_{i, j}$ equals to 1 when i-th and j-th agent collide with each other and 0 elsewhere.\nThank you for pointing this out. We revised the Sec. 4.4 of our main paper accordingly to make this clear.\n\n\n>**Q2**: The gradient field might struggle to handle multi-modal distributions.\n\n**A2**: The learned gradient field can handle the multi-modal goal distributions, leading the state to the closest mode. Empirical results on ball arrangement demonstrated supported this argument: The target examples for clustering were sampled from a bi-modal Gaussian distribution. The two modes were of different colour order, i.e., the centres of different clusters ordered in R-G-B or R-B-G. As shown in Fig. 3 of the main paper, images of two middle rows on the rightest column demonstrate these two modes. \nMoreover, we also conduct additional analysis to further validate the effectiveness of our method in cases of multi-modal goal distribution. Specifically, we extend the target distribution of the clustering task to a multi-modal Gaussian and evaluate our method on this task, where the six modes are:\n\n| | Mode1 | Mode2 | Mode3 | Mode4 | Mode5 | Mode6 |\n|-------|-------|-------|-------|-------|-------|-------|\n| Area1 | R | R | B | B | G | G |\n| Area2 | G | B | G | R | B | R |\n| Area3 | B | G | R | G | R | B |\n\nWe also visualise the typical examples of each mode in [**site**](https://sites.google.com/view/neurips2022-paper2108-rebuttal/). Qualitative results demonstrate the learned gradient can successfully guide both planning-based and learning-based planners to reach one mode (instead of a `mean' mode) according to the initial state. \t\nBesides, we evaluate the latent distribution (i.e., the probability of a state belonging to one specific mode) of the arrangement results (states). In Table. 5, the average entropies of the latent distributions of our methods are lower than GT (target examples). This shows the arrangement results of our methods are closer to the mode centres. 
The averaged latent distribution (overall arrangement results) of our methods achieve comparable orders of magnitude in different modes. This shows the arrangement of our methods can cover all the mode centres.\n\n>**Q3**: Do we consider the 2d rotation of the furniture during the movements? Basically, what's the action space, and why is it hard compared to the ball arrangement\n\n**A3**: Yes, we do consider the 2d rotation of the furniture during the movements. (See video 3:25-3:35 )\nIn the room arrangement, the action space of each object(agent) is a three-dimensional continous vector $(v_x, v_y, v_{yaw})$, where $v_x $ and $v_y$ are the velocity in x-axis and y-axis respectively, $v_{yaw}$ is the angular veloicty. \nWe added the details of the state and action spaces of room arrangement in supplementary Sec. 1.2. \n\n", " We thank all reviewers for appreciating our ideas and experiments. \"The paper is well-written with a clear introduction of the implementation details.\"(eUMp), \"the learned gradient field could be used to tackle three major challenges of object arrangement, which is easy to follow and makes a lot of sense.\" (eUMp), \"The velocity-based action space of the designed task showcases that the proposed method could be used in complex continuous control scenarios.\" (eUMp), \"The proposed method is able to learn solely from examples and does not require any additional data.\" (ZnF9), \"Learning reward functions is an interesting application of score-based generative modeling, and has not been explored before.\" (NBX4).\n\nHowever, we notice that some reviewers may misunderstand our problem setting, and confuse our object arrangement with the room rearrangement. We summarise the difference between these two tasks in the following table:\n| | Room Rearrangement | Object Arrangement |\n|--------------------|--------------------|-------------|\n| Deterministic Goal \t| Yes | No | \n|Pre-defined Reward \t| Yes | No |\n| Observation \t\t | Partial\t\t| Global | \n| Learning Paradigm | Reinforcement Learning | **Inverse** Reinforcement Learning | \n\n- In the room rearrangement setting, the agent aims to rearrange a room into a specific target configuration. If the global state is given, this task is trivial for goal-conditioned reinforcement learning. Hence, in the room rearrangement setting, only partial (visual) observation instead of a global state is provided. This reveals the key difficulties of the room rearrangement: *How to effectively explore the room to infer the rearrangement goal while recognizing and manipulating the objects from visual observation*.\n- In our object arrangement setting, the agent **has not specified a goal**. Instead, the agent is *given a set of target examples to infer the arrangement pattern and then moves the objects to increase the target likelihood as efficiently as possible*. \n- Hence, they focus on **different research areas**. The room rearrangement focus on the embodied ai, particularly in **3D scene understanding and exploration**. Differently, the object arrangement focuses on **goal distribution inference**, i.e., finding the most efficient path in the physical world to reach a goal distribution. In other words, in room rearrangement, the task is to \"make the room the same as a specific reference room\". However, in room arrangement, the task is to make the room of similar features to the examples. 
\n- For example, in the case of \"tidy up a room\", the rearrangement agent requires **a tidied-up room as a reference**, which should be of the same objects (numbers, category, and shape) as the current state and only some objects are of different poses. Hence, we have to prepare a reference room for the rearrangement agent, and the agent has to explore the difference between the two rooms first. Instead, the arrangement agent only needs **some examples of the tidied-up room during training**. Note that the examples can be collected from other scenes. Hence, there would be diverse states that are expected in object arrangement, but only one expected state is in room rearrangement.\n- Moreover, the learning paradigms for these two tasks are also different. Rearrangement policy can be trained via **reinforcement learning (RL)**. That is because we can easily define a reward function for learning as the oracle goal state is specific and known in room rearrangement. However, RL is infeasible in arrangement, since it is hard to define the reward. Hence, the arrangement agent should be trained via **inverse reinforcement learning (IRL)**, i.e., learning a reward from the examples for policy learning, which remains challenging to the community.\n\nObject arrangement is not limited to the room/ desktop scenario. For instance, in multi-agent formation control, the mobile agents (UAV/ UGV) are required to move together to form a pattern in the shortest path. Our ball arrangement tasks are exactly inline with this real-world scenario.\n\nThe reason why we further evaluate our method in a room scenario is that we would like to show our method can handle more observational variables, e.g., orientation, object size, and category. Inspired by the multi-chair arrangement example in our demo video, we enable all objects to be moveable to simplify the dynamic, so as to emphasize the key difficulty of the arrangement task.\n\n", " This work introduces a novel approach to tackling object rearrangement. The main idea is to learn a scoring function in the form of a gradient field. The learned gradient field could be used by a model-based planning algorithm to find collision-free arrangement plans or as a reward function to train RL agents. Experiment results show promising results of the proposed method for rearranging a room and sample efficient policy learning. Strength:\n1. The paper is well-written with a clear introduction of the implementation details.\n2. I really enjoy reading the paper following the motivation of how the learned gradient field could be used to tackle three major challenges of object arrangement, which is easy to follow and makes a lot of sense.\n3. The velocity-based action space of the designed task showcases that the proposed method could be used in complex continuous control scenarios. The discussion of collision avoidance planning with ORCA further strengthens the practicality of the proposed approach in real-world tasks.\n4. It is impressive to see such a gradient-based scoring function could be used as a reward function for training the RL agent. Also, the gradient-based action could further alleviate the burden of the action generation network so that only a small residual policy network needs to be trained, which largely improves the sample efficiency of the policy learning process. \n\nConcerns:\n1. How does the RL residual policy learn to avoid collision? 
As mentioned in the model-based planning method, the gradient field does not have environment information so the gradient cannot avoid collision. Since the residual policy is based on gradient-based action, does it need an additional collision penalty reward during training? In the paper, you mentioned the centralized and decentralized reward but more details here would be better.\n2. The gradient field might struggle to handle multi-model distributions. What if there are multiple arrangement goals for the same environment. Will the gradient become a mean of different distributions which will mislead the planner or the policy learning? I feel a probabilistic model would make more sense for this situation.\n3. The authors mentioned that the room-arrangement task is too complex for planning-based methods. I would suggest trying with bi-level planning which first plans high-level arrangements plans with the learned gradient and then use a collision-avoidance trajectory planner to optimize the path to avoid the collision.\n 1. When talking about collision-free room arrangement, do we consider the 2d rotation of the furniture during the movements? Basically, what’s the action space, and why it is hard compared to the ball arrangement?\n2. Did you test the generalization of the proposed method? e.g. Testing in an unseen room with novel furniture. 1. Might fail to handle multi-model goal distributions.\n2. Missing some details of the experimental setups in the main paper.", " ## Post Rebuttal\n\nI thank the authors for their response. I have increased my score.\n\n## Pre Rebuttal\n\nThis paper attempts to tackle the problem of learning to rearrange a set of objects to a \"sensible\" state without a pre-defined reward function. Towards this end, they propose Target Gradient Fields (TarGF). TarGF estimates the gradient of the likelihood that the current environment state is a goal (or \"sensible\") state for the objects.\n\nTarGF is learned by randomly permuting correct arrangements.\n\nThe output of TarGF is used to estimate the best action to take at every timestep and/or used to estimate the reward.\n\nThe method evaluated in two scenarios. In the first a set of balls must be rearranged into either a circle, clusters by color, or a combination. In the second the furniture in rooms must be rearranged back to its initial states. # Strengths\nThe proposed method is able to learn solely from examples and does not require any additional data other than correct examples (negatives are generated by adding noise to the correct examples).\n\nIn the circle scenario, the proposed method outperforms various baselines across a series of metrics -- namely, collision count and success.\n\n# Weaknesses\n\nThe evaluation is very simple. There isn't an agent that rearranges these objects, instead the objects all rearrange themselves. This is a good initial evaluation that shows the method provides a reasonable reward signal, but it leaves a lot of questions since this reduces the time-horizon, isn't representative of how the objects would be moved by a single agent (one object moves at a time), and removes the initial exploration to find the objects. An evaluation that is closer to a real scenario would be helpful.\n\nIdeally, the authors would evaluate on something like HAB (Szot et al 2021), or AI2Thor rearrangement.\n\nThe residual policy derived from the gradient-based action seems quite important to overall performance of the model free approach. 
However, this residual policy requires an oracle entity-based representation of the state -- requires all object positions, object categories, and bounding boxes. While it is fine to assume this at training time, this is also required at evaluation time. I am concerned by the assumption that such a representation is available as the task complexity increases.\n\nOn this topic, it is unclear if the baselines receive the same information. The supplement says they operate on state, but state is defined just as object position, not position + category + bounding box, in the paper.\n\nFinally, in the most realistic scenario, Room Arrangement, the proposed method likely doesn't outperform GAIL + SAC by a statistically significant margin. Can the authors report confidence intervals instead of variance? Is (1) a well-defined optimization problem? For any non-trivial starting state, won't p_tar(s_0) always be negative infinity, so the discounted sum will always be negative infinity\n\nComments:\nRearrangement should be used instead of arrangement to be inline with what the community is calling this task. See Batra et. al. 2020. Limitations are addressed.", " This paper learns a reward function / score function for objective rearrangement tasks, making use of recent advancements in score-based generative modeling. The authors term this score function a “target gradient field”, and show that it can be used with either a path planner or a reinforcement learning algorithm to solve object rearrangement tasks in a toy setting (i.e. where you can directly control object velocities). The authors compare against a number of baselines and competing methods in these simulated object rearrangement domains, and show that their method performs well across several different metrics. \n Strengths\n- Learning reward functions is an interesting application of score-based generative modeling, and has not been explored before, as far as I know. \n- Proposed approach performs well against other reward learning approaches on the object-rearrangement task. \n\nWeaknesses\n- The paper is somewhat limited in scope -- it is only applied to a very specific robotics problem (that of object rearrangement), and even here some major simplifying assumptions had to be made (such as the fact you can directly control the velocity of any object. \n- The paper only shows results in low-dimensional domains (small graphs). Since score-based generative modeling also works in high-dimensional domains (such as images), it would be interesting to see if the method can be used for reward learning from scene images, for example. \n- While the paper is nicely structured, it contains a large number of typos and grammatical errors. These errors must be fixed before the paper is ready for publication. I will provide a non-exhaustive list here, but I would also suggest making useful of professional proof-reading services if possible. \n\nTypos and grammatical errors:\n- “We do also tries to find a example-based control method”\n- “Differently, our proposed approach focus on learning the target gradient fields from the examples.”\n- “Besides, both RL method and traditional planner can be supported by our target gradient fields in the object arrangement task.”\n- “even if we are accessible to the target distribution”\n- “In address these problems”\n- \"reducing the risk of objects being collided”\n Some questions\n\n- “learning a score function to qualify a state” → what does “qualify” mean here? 
Can this sentence be re-written to make it less ambiguous? \n- How do you ensure collision avoidance when using model-free RL? Or is collision avoidance not pursued in this setting? \n I don't think the authors have done a good job of addressing the limitation of the work. In the limitations section, the only limitation that the authors mentioned is the fact that their work only considered planar settings, and not 3D settings. There is only a very brief mention of the fact that future work could use real robots. I think the authors should provide a more detailed limitations section (by reducing space used elsewhere). Some ideas for limitations can be seen in the weaknesses section above. A somewhat more comprehensive plan on how the method could be applied to more realistic settings would also be useful. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "ei6ZeVvqp0H", "2bhf0XbgkY", "9hBcUj-YNF", "c7-lJcUukr3", "OAEbFhwbtA", "OAEbFhwbtA", "Eb4SmRvIn8s", "Ny-8tE-DTr", "nips_2022_Euv1nXN98P3", "3CZzxXMp7vG", "c7-lJcUukr3", "3CZzxXMp7vG", "3CZzxXMp7vG", "c7-lJcUukr3", "jlgfVD7Hhq", "9hBcUj-YNF", "9hBcUj-YNF", "nips_2022_Euv1nXN98P3", "nips_2022_Euv1nXN98P3", "nips_2022_Euv1nXN98P3", "nips_2022_Euv1nXN98P3" ]
nips_2022_rnJzy8JnaX
Rethinking Resolution in the Context of Efficient Video Recognition
In this paper, we empirically study how to make the most of low-resolution frames for efficient video recognition. Existing methods mainly focus on developing compact networks or alleviating temporal redundancy of video inputs to increase efficiency, whereas compressing frame resolution has rarely been considered a promising solution. A major concern is the poor recognition accuracy on low-resolution frames. We thus start by analyzing the underlying causes of performance degradation on low-resolution frames. Our key finding is that the major cause of degradation is not information loss in the down-sampling process, but rather the mismatch between network architecture and input scale. Motivated by the success of knowledge distillation (KD), we propose to bridge the gap between network and input size via cross-resolution KD (ResKD). Our work shows that ResKD is a simple but effective method to boost recognition accuracy on low-resolution frames. Without bells and whistles, ResKD considerably surpasses all competitive methods in terms of efficiency and accuracy on four large-scale benchmark datasets, i.e., ActivityNet, FCVID, Mini-Kinetics, Something-Something V2. In addition, we extensively demonstrate its effectiveness over state-of-the-art architectures, i.e., 3D-CNNs and Video Transformers, and scalability towards super low-resolution frames. The results suggest ResKD can serve as a general inference acceleration method for state-of-the-art video recognition. Our code will be available at https://github.com/CVMI-Lab/ResKD.
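To make the cross-resolution recipe concrete, here is a minimal sketch of a distillation objective in the spirit of the abstract: the teacher runs on high-resolution frames while the student sees their down-sampled counterparts. The temperature, mixing weight, and function names are illustrative assumptions rather than the paper's exact ResKD formulation.

```python
import torch
import torch.nn.functional as F

def cross_resolution_kd_loss(teacher, student, frames_hi, frames_lo, labels,
                             T: float = 4.0, alpha: float = 0.5) -> torch.Tensor:
    """Distill a teacher fed high-resolution clips into a student fed
    low-resolution clips: soft-label KL plus the usual classification loss."""
    with torch.no_grad():
        t_logits = teacher(frames_hi)          # e.g. 224p input
    s_logits = student(frames_lo)              # e.g. 112p input
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                  F.softmax(t_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    ce = F.cross_entropy(s_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```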
Accept
After the rebuttal and discussion, two reviewers recommend acceptance and one a borderline rejection. Most of the concerns raised in the borderline review were addressed in sufficient detail in the rebuttal. The AC sees no reason to reject this paper.
val
[ "PFYjHk21vWf", "DDDcCTV0AJ", "IKCRXHV61y0", "FOdFdNH8UF3", "737lMsA2xZs", "X_R_-THJlQZ", "f5TYKNVuDkH", "hmfqalCDc6-", "LTnpGFs5WDM", "Dft_Jwsxu_", "NAU4rqFVbBJ", "q2JKtNmCYgL" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer THu4,\n\nThank you for your feedback. As we are approaching the end of the discussion period, we would like to ask whether there are any remaining concerns regarding our paper or our response? We are happy to answer any further questions.\n\nWe sincerely thank you for your efforts in reviewing our paper and your suggestions for strengthening the experimental part. We will add the ablation study mentioned above into the latter version of the paper. If you find our responses have addressed all of your concerns, would you like to kindly raise the rating?", " The authors have addressed my concern. I have no further question.", " Dear Reviewer uP8D,\n\nThanks for pointing out your confusion. We want to clarify some misunderstandings that caused some of your concerns.\n\n1. Performance without deep teacher network\n- We emphasize that, as we are studying efficient video recognition, **both accuracy and efficiency should be taken into consideration** to evaluate a method. You mentioned AdaFocus V2 achieved higher accuracy. However, this is achieved at the cost of efficiency (e.g., the number of FLOPs required by AdaFocus V2 doubles compared with ResKD). Actually, from Figure 3 in this paper, it can be shown that **at the same computational budget (17.4 GFLOPs)**, 76.2% is a SOTA result (e.g., well above OCSampler [26]).\n- Due to the time limit of rebuttal period, we only trained the student network for 50 epochs. The training time is much shorter than our default settings, i.e., 200 epochs. From our experience, longer training will further benefit the student, e.g., the accuracy of student R50 increases by 1.5% when increasing the training epochs from 50 to 200, with R152 as a teacher.\n\n2. Novelty\n- This paper does not aim to be exactly a new-method paper because **the goal of this paper is to reveal the great potential of low-resolution frames in efficient settings and provide a strong but simple baseline to fulfill the potential**. We believe that is where the value of empirical study papers lies, i.e., challenging common beliefs and identifying surprising results of the method.\n- **From the empirical perspective**, this paper investigates the underlying causes of performance degradation on low-resolution frames, revealing their great potential in efficient settings. **From the technical perspective**, the proposed cross-**Res**olution **KD** (ResKD) overcomes some major flaws of previous KD methods to significantly boost the effectiveness of cross-resolution KD for the first time. For these reasons, we feel it is a bit unfair to simply summarize the contribution of this paper as \"utilizing the typical KD framework but changing the teacher/student networks\".", " I would like to thank the authors for their rebuttal and additional evaluations, as well as further discussion of the novelty. The rebuttal addressed part of my concerns. However, the performance of ResNetKD without deep teacher network (76.2%) is lower than AdaFocus V2 and OCSampler as shown in Table 1. It is hard to say “it still achieves SOTA performance.” Moreover, the rebuttal says “our work does not aim to be exactly a new-method paper”, which further confuses me about the novelty of this paper. Utilizing the typical KD framework but changing the teacher/student networks lacks enough novelty as a NeurIPS paper to me. Given these points, I maintain my original rating of Borderline Reject.", " Dear Reviewer uP8D,\n\nThank you for your feedback. 
We feel we have addressed all of your questions from the initial reviews in our response. As we are approaching the end of the discussion period, we would like to ask whether there are any remaining concerns regarding our paper or our response. We are happy to answer any further questions.", " Dear Reviewer THu4,\n\nWe really appreciate your comments and your support for our work. We hope our response can address your concerns.\n\n1. \"The section 3 of this paper seems to be trivial and not closely related to ResKD.\"\n- **Clarification of the controlled experiments in Section 3**. It seems your concern regarding the trivial aspect of Section 3 is mostly related to the experimental designs and results in Figure 1, so we make the following clarification of the insights of these experiments:\n - The first experiment (Figure 1(a)) is not merely to show that spatially down-sampling the frame resolution causes drastic performance degradation. More importantly, we reveal that information loss, e.g., the loss of high-frequency signals during the down-sampling process, is not the main cause of the degradation. This conclusion is drawn from the observation that re-up-sampling low-resolution frames can close most of the gap with the original high-resolution input.\n - In the second experiment (Figure 1(b)), R50-S (the one with a removed early down-sampling layer) outperforms R50-H on low-resolution inputs. However, the advantage of R50-S over R50-H is not simply attributable to the extra computation in R50-S, as R50-H outperforms R50-S at high resolutions (e.g., 224p) with less computation. Instead, the result suggests that the architecture of R50-S is more compatible with low-resolution input. Considering that most existing networks are designed with high-resolution input in mind, we argue that the mismatch between network architecture and input size can be an important factor in underperformance on low-resolution frames.\n- **The relation between Section 3 and ResKD**. Section 3 provides the underlying motivation for applying ResKD, which consists of two key ideas:\n - Low-resolution frames (e.g., 112p) still contain sufficient information for accurate recognition, which serves as the basis for ResKD. (The quality of low-resolution frames poses an upper bound on ResKD. Note that the rationale of ResKD is to guide the learning of the student on low-resolution input. But if low-resolution frames do not contain sufficient visual cues, there is not much ResKD can do.)\n - The mismatch between network architecture and input size is a main cause of performance degradation. This is the direct motivation for ResKD, which is proposed to minimize the gap between network and input size. Section 4.6 provides more explanation of how ResKD helps close the gap.\n\n2. \"Lack of novelty, as the paper simply adopts the typical knowledge distillation framework.\"\n- The design of ResKD is simple but not trivial. We have identified and overcome some major flaws in previous methods (e.g., [19], [34]) applying cross-resolution KD to videos, which brings up to 5.2% gains in mAP on ActivityNet v2 (Table 5 of the ablation study).\n- As an empirical study, our work does not aim to be exactly a “new-method paper”. 
In contrast, we try to keep the design of the framework as simple as possible to make the idea clear -- compressing the spatial resolution of videos can be a practical and powerful way to boost video recognition efficiency.\n- As the name suggests, the contribution of this paper largely lies in the empirical findings, which have not been presented in the literature on efficient video recognition:\n - Resolution is an important source of redundancy. Low-resolution frames have great potential for efficient video recognition.\n - The main cause of performance degradation on low-resolution frames is not information loss in the down-sampling process, but the mismatch between network architecture and input size.\n - Cross-resolution KD can serve as a strong baseline to fulfill the potential of low-resolution frames.\n - Keeping spatial and temporal hints from the teacher is critical for the success of cross-resolution KD in videos.\n\n3. **Ablation on intermediate-layer KD.**\nThis is quite an interesting question. Actually, we have experimented with both KD at early stages and multi-layer KD. However, we empirically found that adding features from early stages for knowledge distillation brings no benefit and can even slightly decrease performance. The detailed results can be found in the table below. \nNumbers in the \"Stages\" row indicate that the output features from the corresponding stages of ResNet-50/152 are used for ResKD. The results are reported on ActivityNet v2. We set the weight of the KD loss to 100 for all stages.\n\n\n>| Stages | -- | [1] | [2] | [3] | [4] | [1, 2, 3, 4] |\n>|:------:|:-----:|:-----:|:-----:|:-----:|:-----:|:------------:|\n>| mAP | 71.8% | 71.0% | 71.5% | 71.7% | 78.5% | 78.2% |\n\n4. **Results of Swin_S (224) -> Swin_S (112).**\nWithout using a large-scale teacher (Swin_B), the gain from cross-resolution KD is still considerable.\n\n>| Teacher | Student | mAP |\n>|:------------:|:------------:| ----- |\n>| -- | Swin_S (112) | 76.3% |\n>| **Swin_S (224)** | **Swin_S (112)** | **78.8%** |\n>| Swin_B (224) | Swin_S (112) | 80.0% |", " Dear Reviewer uP8D,\n\nWe really appreciate your comments. We hope our point-by-point response can address your concerns and clarify our contribution.\n\n1. \"The basic idea of this paper is to distill the knowledge from a large-scale teacher model.\"\n\n- Firstly, we clarify that the basic idea of ResKD is **not to distill knowledge from a large-scale teacher model but from high-resolution frames**. We have discussed in detail the motivations for adopting cross-resolution KD in Section 3. Moreover, the ablation studies in Table 6 validate that cross-resolution KD contributes the most to the performance gains. As shown in the table below, without using a large-scale network as the teacher, ResKD still achieves SOTA performance on ActivityNet v2 (better mAP and fewer GFLOPs).\n- The key point of this paper is not limited to studying cross-resolution KD in efficient video recognition. **More importantly**, we provide a comprehensive and in-depth review of resolution and efficiency in the context of video recognition. Our empirical study does reveal some **counter-intuitive findings**, e.g., that the main cause of performance degradation on low-resolution frames is not information loss in the down-sampling process, but the mismatch between network architecture and input size. Based on this observation, we propose ResKD. 
The strong performance of ResKD not only validates the effectiveness of cross-resolution KD in efficient video recognition, but also **demonstrates the great potential** of low-resolution frames in the accuracy-efficiency trade-off, which is largely overlooked by the current literature.\n\n>| Method | Teacher Backbone | Backbone | mAP | GFLOPs |\n>|:--------:|:----------------:|:---------:|:-----:|:------:|\n>| AdaFocus | NA | ResNet-50 | 75.0% | 26.6 |\n>| ResKD | ResNet-50 | ResNet-50 | 76.2% | 17.4 |\n\n\n\n2. \"It is not surprising that KD from a powerful model could benefit efficient models.\" \n\n- As discussed in response #1, a \"powerful\" teacher is not a prerequisite for the high performance.\n- Our work is the first to show that cross-resolution KD can **massively boost** the performance of efficient video recognition. Previous methods [19, 34] simply adopt logit KD for cross-resolution KD on videos, which seems to be a natural choice given that the feature maps from the teacher and the student have different spatial sizes. However, as we point out in Section 5, **traditional logit KD compromises hints in the temporal and spatial dimensions** which are crucial for cross-resolution knowledge transfer in videos. As a result, early work [19] only achieved **minor improvements** with KD (**0.2% mAP** on ActivityNet v2, in Table 5 of [19]). Similar cases can be observed in [34].\n- In contrast, we propose pixel-level feature KD. Albeit simple, it performs surprisingly well, as shown in the ablation studies in Table 5 (switching from clip-level logit KD to pixel-level feature KD results in a **5.2% mAP** gain on ActivityNet v2). The large performance gain and the simplicity of the design reveal that keeping spatial and temporal hints from the teacher is one key to cross-resolution KD in videos. Otherwise, only minor improvements can be obtained by vanilla KD.\n\n>| Method | KD design | Temporal hint | Spatial hint |\n>|:------------------:|:----------:|:-------------:|:------------:|\n>| Didik et al. [34] | Logit KD | N | N |\n>| Dynamic-STE [19] | Logit KD | Y | N |\n>| ResKD | Feature KD | Y | Y |\n\n\n3. \"There exists work applying cross-resolution KD in video recognition.\" \n\n- As discussed in response #1, the focus and contribution of this paper go beyond studying cross-resolution KD in video recognition. Instead, we are addressing **a more general question** -- is down-sampling the spatial resolution of videos a promising solution for boosting efficiency? Our in-depth analysis of this question reveals some novel findings challenging the common belief. Besides, we use extensive experiments and strong results to give a \"yes\" answer to the question, which **has not been addressed in earlier works**. \n- As discussed in response #2, the design of ResKD is different from previous works applying cross-resolution KD, which makes a fundamental difference in performance. From this perspective, we think the fact that \"there exists work applying cross-resolution KD in video recognition\" does not undermine the contribution of this paper. Instead, it shows that the technical contribution of this paper is non-trivial, as no existing work conducts in-depth studies on effective strategies for cross-resolution KD in videos. The ablation studies on the KD design, the analysis, and the high performance further strengthen our contributions.", " 4. 
\"This paper is more like a technical report that validates the existing KD technique on efficient video recognition.\" \n\n- We hope responses 1&2&3 can address most of the concerns regarding novelty of the paper. \n- Besides, we emphasize that, as an exploratory work, the goal of this paper is to reveal the great potential of low-resolution frames in efficient settings and provide a strong but simple baseline to fulfill the potential. Looking into current literature [42, 26, 43, 47, 30, 36], there seems to be a common belief that low-resolution frames alone cannot support efficient video recognition. For instance, low-resolution frames are mostly used to provide side information of a video (e.g., help identify salient frames), while accurate recognition still highly relies on high-resolution frames, which consumes most of the computational budget. In this sense, our work challenges this common belief and encourages future works to rethink the usage of low-resolution frames.\n\n5. Lack insights on video data\n- Firstly, we point out that **spatial redundancy itself is an important topic in efficient video recognition (e.g., [42], [43])**. The performance gains further confirm the significance of our study on spatial resolution in videos. In recent years, huge progress has been made to alleviate temporal redundancy in videos. For example, OCSampler [26] is able to compress a short video of thousands of frames into 6 frames, without sacrificing much accuracy. However, further compressing the temporal dimension will risk a much higher chance of missing key frames, leading to rapidly deteriorating trade-off between efficiency and accuracy. For this reason, we turn to exploring spatial redundancy of video data to further boost efficiency.\n- We do not propose further complexities in the framework to deal with temporal redundancy, as it would interfere with the message we try to convey - compressing spatial resolution of videos can be a practical and powerful way to boost video recognition efficiency.\n- Moreover, our developed ResKD closely correlates with video data and demonstrates temporal dimension information should be maintained during KD. One distinction between early work (e.g., [34]) applying cross-resolution KD on videos and ResKD is that **KD in ResKD is frame-based**. As shown in Table 5, switching from clip-level KD to frame-level KD brings more than 3% increment in mAP on ActivityNet v2. A possible explanation to this result is that high-frequency temporal supervision provides more hints because frames are noisy in nature. This is an important new finding that we believe is instrumental to future research.\n- Although ResKD only explores spatial redundancy, potentially it can be combined with existing methods working on temporal redundancy to explore benefits of both sides (e.g., use OCSampler [26] to sample frames and ResKD to compress spatial resolution of sampled frames).\n\n6. **Table 1.**\nThank you for making these suggestions to help us increase clarity of Table 1. We have revised Table 1 following your advice in the new version of the paper. BTW, the reason we originally highlighted ResKD as the best result on FCVID is in consideration of both accuracy and efficiency (ResKD achieves very similar performance, but only uses half of the computation as AdaFocus V2).\n \n7. **Throughput comparison.**\nThank you for helping us strengthen the experiments. As many baseline methods are not open-source, we only find AdaFocus and AdaFocus V2 for comparison. 
We report results on ActivityNet v2. Throughput (the number of videos processed per second) is measured on a single Tesla V100 SXM2 GPU with a batch size of 64.\n\n\n>| Method | mAP | GFLOPs | Throughput (videos/s) |\n>|:-----------:|:-----:|:------:|:---------------------:|\n>| AdaFocus | 75.0% | 26.6 | 72 |\n>| AdaFocus V2 | 78.9% | 34.1 | 100 |\n>| ResKD | 80.0% | 17.4 | 263 |\n>\n> Notably, ResKD demonstrates even stronger efficiency when evaluated by throughput rather than GFLOPs. This is because most adaptive methods like AdaFocus are not fully parallel in computation, i.e., they formulate the frame/patch selection policy as a sequential decision task, as pointed out in [26]. ResKD, in contrast, does not suffer from such problems thanks to its simple design.", " Dear Reviewer AzGj,\n\nWe really appreciate your comments on our work and your support for acceptance. We hope our response can address your concerns.\n\n**The training settings in Section 4.5:** The training of the student backbone in Section 4.5 for each different low resolution is almost the same as the default settings stated in Section 4.2 (e.g., the same teacher and hyper-parameters). The only difference is the resolution of the student's input.", " This paper proposes to improve efficient video recognition on low-resolution frames by cross-resolution knowledge distillation (ResKD). Specifically, a student network with a shallower architecture and lower-resolution frames is guided by a teacher network with a deeper architecture and higher-resolution inputs. By distilling the knowledge learnt from the teacher network, ResKD narrows the performance gap between the efficient model and the large model. The experiments are conducted on several public video recognition datasets. The results validate the good performance-efficiency trade-off achieved by ResKD. Strengths:\n\n+ The paper is mostly clear and easy to follow.\n+ The good performance on several public datasets validates the effectiveness of ResKD.\n\nWeaknesses:\n\n- Novelty. The basic idea of this paper is to distill the knowledge from a large-scale teacher model. However, utilizing the popular knowledge distillation (KD) for video recognition is not new (e.g., [34], [35]). It is not surprising that KD from a powerful model could benefit efficient models. In particular, KD between networks with different input frame resolutions for video recognition has been proposed in [34]. This paper is more like a technical report that validates the existing KD technique on efficient video recognition. Given these points, the technical contribution of this paper is limited.\n- Insights on video data. Considering this paper mainly focuses on video recognition, some insights on video data should be given. However, the proposed ResKD ignores the temporal redundancy in videos and only studies the spatial redundancy. ResKD is more like a general low-resolution image classification framework and has no special design for video data.\n- Table 1. In Table 1, the depth of ResNet (in the Backbones column) should be listed. The performance of ResKD is lower than AdaFocus V2 on FCVID; however, ResKD is marked as the best result, which may mislead readers.\n- Throughput comparison. This paper only lists the throughput in Table 4. Considering that GFLOPs cannot fairly measure the efficiency of a network, throughput comparisons between ResKD and the baseline methods should be given.\n My main concerns are novelty and insights. Please see the details in the Strengths and Weaknesses part. 
The limitations and potential negative societal impact are well discussed in the paper.", " This paper analyzed the underlying causes of performance degradation on low-resolution frames, and proposed cross-resolution knowledge distillation (ResKD) to bridge the gap between network and input size. This work is interesting. The problems are clearly stated, and the manuscript is well written.\nSome experimental conditions are not clearly described. In Section 4.5, it is not clear how the student backbone is trained for each different low resolution. NA.", " This paper proposes resolution distillation for efficient video recognition (ResKD).\nResKD adopts the typical knowledge distillation framework, which consists of a teacher network (with high-resolution input) and a student network (with low-resolution input). \nResKD calculates an MSE loss between the teacher network's features and the upsampled student network's features for knowledge distillation.\nThe authors conducted extensive experiments to validate the efficacy of this framework. ## Strengths\n1. ResKD is a simple and effective idea for efficient video understanding. \n2. Extensive experiments have been conducted to validate the effectiveness of this framework. \n\n## Weaknesses\n1. This paper simply adopts the typical knowledge distillation framework, which lacks novelty to some extent.\n2. Section 3 of this paper seems to be trivial and not closely related to ResKD. \n 1. If the backbone is unchanged and the spatial resolution is reduced, the recognition performance may suffer severely from overly early down-sampling and the drastically reduced amount of computation (a 4x reduction for 224 -> 112, etc.).\n 2. However, if you remove a down-sampling layer in the early stage, the amount of computation would be close to that of the original network at high resolution, so that is not a good idea for efficient video understanding. \n\nIn summary, I think this paper is a good empirical study, but lacks novelty. 1. Ablation study: In this paper, you present the results of using the last-layer feature for ResKD; have you tried features from other layers, or using features from multiple stages, for ResKD?\n2. What's the recognition performance for this setting: Dataset: Kinetics400, Teacher: Swin_S (224), Student: Swin_S (112)? Yes." ]
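Since the central technical point of this record is the pixel-level, cross-resolution feature KD loss (an MSE between the teacher's feature map and the upsampled student feature map), a brief sketch may help make it concrete. This is a minimal PyTorch illustration under stated assumptions (last-stage features, a frozen teacher, frames flattened into the batch dimension, and the KD loss weight of 100 mentioned in the responses); it is not the authors' code, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def cross_resolution_feature_kd_loss(student_feat, teacher_feat, kd_weight=100.0):
    """Pixel-level feature KD between a low-resolution student and a
    high-resolution teacher, applied frame by frame.

    student_feat: (B*T, C, h, w) features from low-resolution frames.
    teacher_feat: (B*T, C, H, W) features from high-resolution frames
                  (H >= h, W >= w); the teacher is frozen.
    """
    # Upsample the student's spatially smaller feature map to the teacher's
    # resolution, preserving the spatial and temporal hints that clip-level
    # logit KD would discard.
    student_up = F.interpolate(
        student_feat, size=teacher_feat.shape[-2:],
        mode="bilinear", align_corners=False)
    return kd_weight * F.mse_loss(student_up, teacher_feat.detach())
```

In a full training loop, this term would be added to the usual classification loss on the student's predictions.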
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "q2JKtNmCYgL", "LTnpGFs5WDM", "FOdFdNH8UF3", "hmfqalCDc6-", "Dft_Jwsxu_", "q2JKtNmCYgL", "Dft_Jwsxu_", "Dft_Jwsxu_", "NAU4rqFVbBJ", "nips_2022_rnJzy8JnaX", "nips_2022_rnJzy8JnaX", "nips_2022_rnJzy8JnaX" ]
nips_2022_hBaI5MY0CBz
Feature-Proxy Transformer for Few-Shot Segmentation
Few-shot segmentation (FSS) aims at performing semantic segmentation on novel classes given a few annotated support samples. With a rethink of recent advances, we find that the current FSS framework has deviated far from the supervised segmentation framework: Given the deep features, FSS methods typically use an intricate decoder to perform sophisticated pixel-wise matching, while the supervised segmentation methods use a simple linear classification head. Due to the intricacy of the decoder and its matching pipeline, it is not easy to follow such an FSS framework. This paper revives the straightforward framework of "feature extractor + linear classification head" and proposes a novel Feature-Proxy Transformer (FPTrans) method, in which the "proxy" is the vector representing a semantic class in the linear classification head. FPTrans has two keypoints for learning discriminative features and representative proxies: 1) To better utilize the limited support samples, the feature extractor makes the query interact with the support features from bottom to top layers using a novel prompting strategy. 2) FPTrans uses multiple local background proxies (instead of a single one) because the background is not homogeneous and may contain some novel foreground regions. These two keypoints are easily integrated into the vision transformer backbone with the prompting mechanism in the transformer. Given the learned features and proxies, FPTrans directly compares their cosine similarity for segmentation. Although the framework is straightforward, we show that FPTrans achieves competitive FSS accuracy on par with state-of-the-art decoder-based methods.
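To make the abstract's "feature extractor + linear classification head" framing concrete, here is a tiny sketch of cosine-similarity classification against class proxies with multiple local background proxies. This is an illustration only: the temperature `tau` and the max-over-background-proxies aggregation are assumptions for the sketch, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def proxy_classification(pixel_feats, fg_proxy, bg_proxies, tau=0.1):
    """Classify query pixels by cosine similarity to class proxies,
    using several local background proxies instead of a single one.

    pixel_feats: (N, C) query pixel features from the backbone.
    fg_proxy:    (C,)   proxy vector for the foreground class.
    bg_proxies:  (M, C) multiple local background proxies.
    """
    feats = F.normalize(pixel_feats, dim=-1)
    fg = F.normalize(fg_proxy, dim=-1)
    bg = F.normalize(bg_proxies, dim=-1)
    fg_score = feats @ fg                             # (N,)
    bg_score = (feats @ bg.t()).max(dim=-1).values    # best local bg proxy
    logits = torch.stack([bg_score, fg_score], dim=-1) / tau
    return logits.argmax(dim=-1)                      # 1 = foreground
```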
Accept
This paper studies the plain segmentation framework (feature extractor + linear classification) for few-shot segmentation. It introduces a prompt-based query-support interaction method to enable this framework to work well. All the reviewers recognize that the proposed method is novel and the performance is good. Though they have some concerns about the computational cost and the fairness of the experimental comparison (e.g., whether the same backbone is used), the authors address these concerns well in their response. All the reviewers agree on accepting this submission. Although their ratings are not strongly supportive, the AC agrees this submission brings value to the community. It inspires new thinking about FSS framework design. The overall framework is still heavy; hopefully, in future follow-up work, the framework can be further simplified.
train
[ "72qhJ8g8wr", "K-X8sYKcYbs", "rPDUYifTyOO", "JM4mce0HAx", "b4OA8GPxjf", "S8VSMgIlvYb", "Bxb7krUJkcQ", "4evA427VNxj", "_KvzerFQR4A" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The rebuttal solves my concerns well, so I raise my rating to 6.", " Thanks for the further comments. \n\n---\n**Q4:** The CNN backbones are typically fixed to alleviate overfitting. In contrast, the ViT backbone is much larger and yet shows resistance against overfitting. Even the baseline with a plain vision transformer can achieve SOTA results. Why is that?\n\n**A4:** We did not claim that the transformer backbone does not encounter the overfitting problem. Instead, we believe that the transformer backbone also has an overfitting risk. In the manuscript, we wrote this sentence \"previous works typically fixed the pretrained backbone parameters to alleviate overfitting\" according to PFENet[42]. Given your concern, we feel this statement might be inaccurate because we are not sure whether the overfitting problem is the only reason for current CNN-based methods to fix the backbone. Moreover, we think it is possible that CNN-based methods will benefit from fine-tuning the backbones with some still-unknown solutions. Therefore, we will remove the statement \"to alleviate overfitting\". \n\nAs for the phenomenon that the transformer backbone seems stronger than the CNN backbone, It is possible that the transformer does has better generalization ability (in spite of the larger model size) under the few-shot learning task, compared with the CNN model. It is because the model size is not the only factor that determines the generalization ability (or overfitting risk). One evidence to support this argument is: a recent work[r2] observes that the transformers \"are strong few-shot learners\" (compared with the CNN models). \n\n---\n**Q5:** The relative computation overhead the proposed prompts strategy brings, and the inference time and flops comparison between the model \"without prompting\" and \"with prompting\". It would be better if relative performance gains were also included in this comparison.\n\n**A5:** Thanks for these good suggestions. In Table r4, we compare using prompt (#2) and no prompt (#1) w.r.t. the relative improvements, parameters, FLOPs, and throughput. To better understand the prompt generation process, we employ a light prompt generator using the first 3 blocks in ViT (#3). This light prompt generator significantly reduces the computation overhead and still achieves similar improvement. In conclusion, the prompt generation process has the following characteristics:\n* It does require extra computation costs for representing class-aware information in each episode, as discussed in the limitation part of the manuscript.\n* However, we find that using a much smaller prompt generator can maintain similar improvement and effectively reduce the computation overhead.\n* Moreover, the prompt generator is used in an off-the-shelf manner. As generating prompts only need support samples, the prompts can be offline precomputed to further accelerate the online inference process.\n\nTable r4. Ablation studies on the prompt generator. ('ep/sec' indicates episodes per second.)\n\n| # | Model | Prompt Generator | PASCAL-5i | $\\Delta$ | COCO-20i | $\\Delta$ | Params(M) | GFLOPs | Throughput (ep/sec) |\n| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |\n| 1 | No Prompt | - | 64.0 | | 40.3 | | 72.8 | 156.0 | 24 |\n| 2 | Large Prompt Generator | 10-block ViT | 64.7 | +0.7 | 42.0 | +1.7 | 145 | 247.2 | 15 |\n| 3 | Light Prompt Generator | 3-block ViT | 64.6 | +0.6 | 41.8 | +1.5 | 95.4 | 193.7 | 19 |\n\n", " Thank the authors for their response. 
\n\nI further have one question after reading the rebuttal. As stated by the authors in A1-Point1 and the paper, \"previous works typically fixed the pretrained backbone parameters to alleviate overfitting\". Therefore, the authors claim that the proposed method can revive the plain \"feature extractor + classifier\" framework without fixing the backbone network. \nI am curious why the ViT-based backbone does not encounter the overfitting problem even though it contains many more parameters. For instance, in Table r2, 159M compared with the 31~71M ResNet-based methods. In the paper, even the baseline with a plain vision transformer can achieve SOTA results (56.8 vs. 51.1 on COCO 5-shot). The Feature-Proxy and Prompts seem to have nothing to do with this surprising and counterintuitive result. \n\nBesides, about Q2, the authors do not directly answer my question. I still think the authors should provide the **relative** computation overhead the proposed prompting strategy brings rather than the overall FLOPs of FPTrans. Could the authors provide the **inference time and FLOPs** comparison between the model \"without Prompting\" (Line 1 in Table r3) and \"with Prompting\" (Line 4 in Table r3)? It would be better if relative performance gains were also included in this comparison. \n\n", " Thanks for your valuable comments. We will explain your concerns point by point.\n\n---\n**Q1**: Is there a reason that you chose not to use a decoder? In my experience, using a simple feature + decoder can largely increase segmentation accuracy. I do not agree that refraining from upsampling is a simplification.\n\n**A1:** As explained in the introduction, our overall motivation for not using the decoder-based framework is that we observe that in recent SOTA methods, the decoder becomes more and more complicated (much more complicated than just upsampling, as revisited in Sec. 2.1 in the manuscript). Against this background, we think this pipeline is difficult to follow/improve and might soon reach a plateau. We would like to explain this viewpoint in detail below:\n\nSpecifically, while these decoders try to exploit the matching capacity of the already-given features, they make little effort to improve the features themselves. In fact, most decoder-based methods use fixed pretrained CNN models as the feature extractor. We think directly using the pretrained features is likely to restrict the upper bound of the FSS accuracy. Interestingly, although transformers have the potential to provide higher discriminative ability, our implementation of \"transformer + decoder\" does not achieve improvements. Therefore, we revive the plain framework and focus on learning discriminative features and representative classification proxies (in FPTrans, the transformer backbone is fine-tuned rather than being fixed). We think that based on this plain framework, future research might find it relatively easier to achieve further improvements. We agree with you that the \"transformer + decoder\" framework also has the potential to bring improvements (maybe with some necessary modifications and hyper-parameter optimization). 
We are open to this alternative strategy and look forward to the corresponding progress from the research community.\n\n---\n**Q2:** It is not easy to judge whether the improvements come from the proposed method, as the backbones have been changed.\n\n**A2:** We understand your concern is not about our improvement over the transformer baseline, but about the comparison between FPTrans and SOTA CNN-based methods, i.e., why FPTrans has higher performance. We think two factors are important to the superiority of FPTrans, i.e., the strong transformer backbone and our improvement over the transformer baseline. On the one hand, the transformer is indeed a strong backbone for learning discriminative features; e.g., the DeiT baseline achieves 64.9% mIoU on PASCAL, which is already comparable with some SOTA methods. On the other hand, based on the strong transformer baseline, FPTrans further brings considerable improvement (e.g., +3.7% mIoU on PASCAL), consequently setting up a new state of the art. \n\nSince the transformer baseline already maintains relatively high accuracy, achieving further improvement is not easy: we tried our best to adapt existing decoder-based methods to transformer backbones and have not achieved improvements. Moreover, during the rebuttal period, we found that a concurrent work, CATrans [r1], shares a similar observation. Specifically, CATrans investigates both ResNet-101 and Swin Transformer as the backbone, and finds that the superiority of using Swin Transformer is only marginal (e.g., +0.6% on PASCAL). Therefore, it is valuable for FPTrans to improve the transformer baseline with a relatively simple framework.\n\n---\n**Q3:** The architecture seems quite involved, in particular the need for a proxy generation step. \n\n**A3:** We agree that using a prompt generation step increases the complexity over the transformer baseline (as discussed in the limitation part of the manuscript). Given the observation in the supplementary material that the prompt generation is robust to some extent, we think this step has the potential to be simplified in future work. Moreover, despite the extra generation step, FPTrans is still quite efficient compared with SOTA methods. For example, FPTrans with DeiT-S/16 achieves competitive results with 80.7 GFLOPs, only 1/5 of BAM on ResNet-101, as shown in Table r2 in the response to Reviewer d8mB. \n\n---\n**Q4:** Code not provided, however, promised to be released.\n\n**A4:** The code and pretrained models will be openly accessible.\n\n---\n**Q5:** Performance gains relative to recent work are not significant, in particular, compared to BAM. \n\n**A5:** Compared with the strongest competitor, BAM, although the performance gain under the 1-shot setting is relatively small (e.g., +1.0% and +0.8% on PASCAL-5i and COCO-20i, respectively), the performance gain becomes much larger when there are more support samples. For instance, in the 5-shot setting, FPTrans surpasses BAM by +7.1% and +7.7% on PASCAL-5i and COCO-20i, respectively. Moreover, FPTrans has a lower computation cost, as explained in the response to Q3. \n\n---\n[r1] S. Zhang et al., CATrans: Context and Affinity Transformer for Few-Shot Segmentation, 2022, arXiv:2204.12817", " Thanks for your valuable comments. We will explain your concerns point by point.\n\n---\n**Q1:** Discussion about the main contribution.\n\n**A1:** Our main contributions lie in three aspects. \n\n1. 
With a rethink of prior state-of-the-art FSS methods, we find that they all fix the backbone for feature extraction and design more and more complicated decoders (Section 2.1 and Figure 1 in the manuscript). Fixing the backbone restricts the discriminative ability of the features and makes it difficult to achieve further improvement by designing more sophisticated decoders. Therefore, we revive the plain \"feature extractor + classifier\" framework and focus on learning discriminative features and representative proxies. \n\n2. We integrate two keypoints, i.e., query-support interaction and multiple local background proxies, into the proposed FPTrans. Although using multiple local background proxies is not very novel (we did not claim this as a novel point either), these two keypoints both rely on a novel prompting strategy. All the reviewers positively recognize this prompting strategy and the query-support interaction. \n\n3. We show that this plain and relatively simple framework achieves competitive FSS accuracy compared with the decoder-based methods. More specifically, FPTrans achieves comparable (sometimes even better) performance with relatively fewer FLOPs (see the answer to Q2) compared with SOTA methods. Moreover, FPTrans can better handle the domain shift problem (+5.6% and +9.0% in the 1-shot and 5-shot settings on COCO->PASCAL, compared with HSNet).\n\n---\n**Q2:** The prompt generation process is too heavy. The large computational cost makes the absolute gain seem relatively small.\n\n**A2:** The computational cost of the proposed FPTrans is actually relatively small, although it has a prompt generation process. The quantitative analysis is summarized in the second table for R1-Q1. For example, FPTrans with the DeiT-S/16 backbone requires only 80.7 GFLOPs and is faster than all the competing CNN methods (while maintaining competitive accuracy). Moreover, FPTrans with the DeiT-B/16 backbone is superior to the SOTA method BAM w.r.t. both accuracy (mIoU) and speed (FLOPs).\n\nThe reason for our relatively low computational cost is that FPTrans uses much smaller feature maps. Specifically, the CNN-based methods use dilated convolutions (in ResBlock3 and ResBlock4) to obtain large $60\times 60$ feature maps, which significantly increases the computational cost of the original CNN backbone by about $4\sim 5\times$. In contrast, FPTrans on the DeiT-B/16 backbone requires relatively small feature maps ($30\times 30$) and achieves competitive accuracy with high computational efficiency. \n\n---\n**Q3:** The ablation of only using the learnable prompts and the analysis of the prompts.\n\n**A3:** The ablation studies of prompts are listed in Table r3. From the results in Table r3, we have two observations. \n\n1. #2 (using only the learnable prompts) and #3 (using only the extracted prompts) actually decrease and increase the accuracy over #1 (baseline), respectively. \n\n2. Comparing #4 against #3, we observe that adding the learnable prompts brings further improvement. \n\nTherefore, in our manuscript, we consider the learnable prompts as a prompt augmentation approach (which should not be used alone). \n\nTable r3. Ablation studies of the prompting strategy. \"Extracted prompts\" indicate the foreground and background mean vectors extracted in the prompt generation step. 
\"Learned prompts\" indicate the extra added learnable vectors.\n\n| # | Extracted prompts | Learnable prompts | PASCAL-5i | COCO-20i |\n|:---:|:---:|:---:|:---:|:---:|\n| 1 | | | 64.0 | 40.3 |\n| 2 | | ✓ | 63.5 | 39.2 |\n| 3 | ✓ | | 64.3 | 41.4 |\n| 4 | ✓ | ✓ | 64.7 | 42.0 |", " Thanks for your valuable comments. We will explain your concerns point by point.\n\n---\n**Q1:** Using the DeiT backbone for PFENet, CyCTR, and two SOTA methods BAM and HSNet for better comparison. Comparisons on the parameter size and FLOPs are also recommended. \n\n**A1:** Thanks for these good suggestions. \n\n1. We first add the comparison against these SOTA methods on PASCAL-5i, using both the ViT and DeiT backbones in Table r1. It is observed that: **i)** The proposed FPTrans surpasses all the compared methods on both the ViT and DeiT backbones. **ii)** These SOTA methods undergo considerable decreases when they change the backbone from CNN to transformer (ViT, DeiT), which is consistent with the observation in the manuscript. \n\nTable r1. Comparison of FPTrans with SOTA methods using ResNet and transformer backbones. The * indicates that the results are not officially reported but are achieved by running their source code on the ResNet-101 backbone. \n\n| Method | ResNet-50 | ResNet-101 | ViT-B/16 | DeiT-B/16 |\n|:---:|:---:|:---:|:---:|:---:|\n| PFENet | 60.8 | 60.1 | 58.7 | 57.7 |\n| CyCTR | 64.2 | 64.3 | 60.1 | 61.0 |\n| HSNet | 64.0 | 66.2 | 53.6 | 61.8 |\n| BAM | 67.8 | 67.5* | 59.3 | 50.1 |\n| Baseline | - | - | 61.8 | 64.9 |\n| FPTrans | - | - | 64.7 | 68.8 |\n\n\n2. We further add the comparison of the parameter size and FLOPs in Table r2. For a fair comparison, we fix the input size as $480\\times 480$. We observe that the proposed **FPTrans is actually relatively efficient**, considering its superiority in FSS accuracy. For example, FPTrans with the DeiT-S/16 backbone has 41 Mb parameters and only 80.7 GFLOPs. It is faster than all the competing CNN methods and yet achieves competitive accuracy. Moreover, FPTrans with DeiT-B/16 backbone is superior to the SOTA method BAM w.r.t. both the accuracy (mIoU) and speed (FLOPs).\n\nTable r2. Comparison of FPTrans with SOTA methods on the number of parameters and computation cost.\n\n| Backbone | Method | Params (M) | GFLOPs | Mean-IoU (%) |\n|:---:|:---:|:---:|:---:|:---:|\n| ResNet-50 | PFENet | 34 | 231.2 | 60.8 |\n| | CyCTR | 37 | 244.7 | 64.2 |\n| | HSNet | 28 | 95.9 | 64.0 |\n| | BAM | 52 | 302.2 | 67.8 |\n| ResNet-101 | PFENet | 53 | 367.9 | 60.1 |\n| | CyCTR | 59 | 381.1 | 64.3 |\n| | HSNet | 47 | 145.0 | 66.2 |\n| | BAM* | 71 | 438.9 | 67.5 |\n| ViT-B/16 | FPTrans | 145 | 247.2 | 64.7 |\n| DeiT-T/16 | FPTrans | 11 | 26.7 | 59.7 |\n| DeiT-S/16 | FPTrans | 41 | 80.7 | 65.3 |\n| DeiT-B/16 | FPTrans | 159 | 271.8 | 68.8 |\n\n\n---\n**Q2:** FPTrans with ViT backbone shows inferior 1-shot performance than BAM. \n\n**A2:** We note that the superiority of BAM partially comes from the model ensemble (a base-learner + a meta-learner). Since model ensemble generally brings extra improvement, we think our FPTrans achieving comparable accuracy without model ensemble is also valuable. 
Moreover, when using DeiT as the backbone, FPTrans maintains higher (68.8%) mIoU and lower computational cost (271.8 GFLOPs), compared with BAM on ResNet-50 (67.8% mIoU and 302.2 GFLOPs) or BAM on ResNet-101 (67.5% mIoU and 438.9 GFLOPs).\n\n---\n**Q3:** The performance of BAM with ResNet-101.\n\n**A3:** BAM-ResNet101 achieves 67.5% mean IoU on PASCAL-5i, which is slightly lower than BAM-ResNet50 (67.8%). Similar observation (i.e., using the smaller CNN backbone is slightly better) can be observed on PFENet, as well. ", " Unlike mainstream methods that involve intricate decoder structure, this paper proposes to revive the plain framework of feature extractor + linear classification head for few-shot segmentation (FSS). Concretely, this paper designs a framework called Feature-proxy Transformer (FPTrans), where it utilizes proxy, a vector to represent the foreground as well as the background classes. It makes queries interact with support features and use multiple background proxies through a novel prompting strategy. Extensive experiment results verify that the proposed method achieves competitive performance, which shows that it can serve as a simple yet strong baseline for FSS. ##### Strengths #####\n1) This paper is well-organized and easy to read. Figure 2 is very clear to illustrate the whole framework. \n\n2) This paper incorporates prompt learning to few-shot segmentation in a clever way. The framework provides dedicated modifications to make it more suitable for FSS, for instance, multiple background prompts. \n\n3) The visualization results in the paper are good. \n\n##### Weaknesses #####\nI have some questions about the experiment results. \n1) Network backbone. The comparison is conducted with different network backbones, i.e., the previous methods take ResNet-50 / 101, while the proposed method takes ViT / DeiT. I notice that the author compares PFENet and CyCTR on ViT. However, the experiments on DeiT are not included. And the comparsions on more SOTA methods, e.g., BAM, HSNet are also omitted. In addition, I think it is better to add the parameter size and FLOPS of different backbones for a clearer comparison. \n2) 1-shot performance. I notice that the proposed FPTrans shows inferior performance than BAM when taking ViT as the backbone. Can the authors have more discussions or explanations about this phenomenon? In addition, I am curious about the performance of BAM with ResNet-101, which is also an important baseline. \n\nMinor issues:\nThe text in the figures can be larger to make it clearer. \n My questions and suggestions are listed in weaknesses. My main concern lies in the experiment results. N/A", " This paper proposes the Feature-Proxy Transformer for few-shot segmentation, which utilizes the ViT as the backbone and interacts the query features with support features through prompts. Besides, this paper introduces to obtain multiple background proxies via Voronoi-based method. Experiments are conducted on Pascal-$5^i$ and COCO-$20^i$ datasets. \n**Strengths**\n1. This paper is easy to follow and may be easy to implement.\n2. This paper demonstrates that the framework with ViT backbone could achieve decent performance on both benchmarks.\n3. The proposed operations are effective according to the experiments.\n4. Interacting query and support features through prompts is interesting.\n5. It shows an excellent transferable performance (from COCO to Pascal).\n\n**Weaknesses** \nIt is hard to tell the main contribution of this work. 
\nFirst, the idea of extracting multiple proxies (prototypes) from support features is not novel since it has been discussed in [30,52]. The only difference is that this paper applies the Voronoi-based method instead of the EM algorithm [30,52]. \n\nBesides, although this paper claims that the FPTrans is a plain framework, the prompt generation process is too heavy. It requires to use the entire ViT backbone to produce the prompts, which largely increases computational cost. Specifically, this additionally adds an approximate 50% computational overhead compared with the baseline, which may be unacceptable since the ViT-Base backbone is already a much larger model compared with Resnet-50/101, but the overall framework can only obtain comparable results with previous approaches with ResNet-50 backbone. Besides, this paper misses the ablation of only using the learnable prompts.\n In the rebuttal, I would like to see more discussion about the main contribution of this work and the analysis of the prompts. From my perspective, the Pair Loss and Multiple Proxies are minor contributions that are effective but lack novelty. The Prompts for FSS is novel, but the absolute gain (considering the computational cost) is relatively small. It would be better to include the mentioned ablation study, and also the quantitative computational cost comparison (*eg.*, inference time, flops et. al.) in the rebuttal. Limitations are disscussed in the paper. ", " The paper proposes a different approach to FSS, whereby instead of independently processing and matching support and query samples, the support samples are processed to generate prompts, which are then concatenated with both query and support tokens. Importantly these prompts act as message passers between the support and query feature extractors. Finally, proxies are generated and the query tokens classified. The authors claim that this approach has multiple benefits,\n1) Architecture is simplified, compared with decoder architectures\n2) Performance is on par with or greater than previous approaches ## Major Strengths\n* Novelty, specifically the prompting approach and message passing for conditioning has to my knowledge not previously been done in FSS. I think the approach is interesting and may yield further insights.\n\n## Minor Strengths\n* Paper is clearly written and easy to follow.\n* Extensive experiments, in particular ablations.\n\n## Major Weaknesses\n* It is not easy to judge whether the improvements come from the proposed method, as the backbones have been changed. The authors have provided reimplementations of previous methods using the ViT backbone, however one has to assume that such a change may require finetuning (e.g. learning rates) for those previous methods and hence the comparison is not simple to make. Judging from vision transformers success in other related tasks, such as segmentation and detection, I believe there is a large risk that most (if not all) gains can be attributed to the change of backbone.\n\n## Minor Weaknesses\n* While the authors argue that their approach is simpler than previous decoder-based methods, I do not personally agree. 
To me, the architecture seems quite involved, in particular the need for a first proxy generation step, separate from the rest.\n* Code not provided, however, promised to be released.\n* Performance gains relative to recent work are not significant, in particular, compared to BAM.\n\n## Conclusions\nWhile there are potential issues with where the performance gains come from, the authors also do not claim this as their main contribution. Instead, it is to be viewed as an alternate way to perform FSS. I think the paper is interesting enough to warrant acceptance, despite my concerns. * Is there a particular reason that you chose not to use a decoder? In my experience, using a simple feature + conditioning based decoder on the predictions can increase segmentation accuracy by quite a large amount. I do not agree that refraining from any upsampling is a simplification of the method, but I'm open to hear your reasoning.\n The authors addressed the issue of having to use a separate network for the prompt generation." ]
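The prompting mechanism described in the reviews above (support features pooled into prompts that are concatenated with both query and support tokens, acting as message passers) can be sketched schematically. This is a simplified illustration assuming single-layer prompt injection and masked average pooling for the foreground/background mean vectors; the actual FPTrans prompt generator is itself a ViT, and the details differ from this sketch.

```python
import torch

def masked_mean(feats, mask):
    # feats: (L, C) support patch features; mask: (L,) {0,1} values on the
    # patch grid. Returns the mean feature over the masked patches.
    w = mask.float().unsqueeze(-1)
    return (feats * w).sum(0) / w.sum().clamp(min=1.0)

def build_token_sequences(q_tokens, s_tokens, s_mask):
    """Prepend support-derived prompts to both the query and support token
    sequences, so that self-attention in a shared transformer lets the two
    branches exchange information through the prompts.

    q_tokens: (Lq, C) query patch tokens; s_tokens: (Ls, C) support patch
    tokens; s_mask: (Ls,) binary foreground mask on the patch grid.
    """
    s_mask = s_mask.float()
    fg_prompt = masked_mean(s_tokens, s_mask)        # foreground mean vector
    bg_prompt = masked_mean(s_tokens, 1.0 - s_mask)  # background mean vector
    prompts = torch.stack([fg_prompt, bg_prompt])    # (2, C)
    q_seq = torch.cat([prompts, q_tokens], dim=0)    # same prompts shared by
    s_seq = torch.cat([prompts, s_tokens], dim=0)    # both branches
    return q_seq, s_seq
```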
[ -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "S8VSMgIlvYb", "rPDUYifTyOO", "b4OA8GPxjf", "_KvzerFQR4A", "4evA427VNxj", "Bxb7krUJkcQ", "nips_2022_hBaI5MY0CBz", "nips_2022_hBaI5MY0CBz", "nips_2022_hBaI5MY0CBz" ]
nips_2022_5JdyRvTrK0q
Private Synthetic Data for Multitask Learning and Marginal Queries
We provide a differentially private algorithm for producing synthetic data simultaneously useful for multiple tasks: marginal queries and multitask machine learning (ML). A key innovation in our algorithm is the ability to directly handle numerical features, in contrast to a number of related prior approaches which require numerical features to be first converted into high cardinality categorical features via a binning strategy. Higher binning granularity is required for better accuracy, but this negatively impacts scalability. Eliminating the need for binning allows us to produce synthetic data preserving large numbers of statistical queries such as marginals on numerical features, and class conditional linear threshold queries. Preserving the latter means that the fraction of points of each class label above a particular half-space is roughly the same in both the real and synthetic data. This is the property that is needed to train a linear classifier in a multitask setting. Our algorithm also allows us to produce high quality synthetic data for mixed marginal queries that combine both categorical and numerical features. Our method consistently runs 2-5x faster than the best comparable techniques, and provides significant accuracy improvements in both marginal queries and linear prediction tasks for mixed-type datasets.
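The class conditional linear threshold statistic that the abstract says must be preserved can be written in a few lines. The sketch below is one plausible reading (whether the fraction is normalized over all points or only over points of the given class is a convention the abstract leaves open; this version averages the conjunction over all points, and all names are illustrative):

```python
import numpy as np

def class_conditional_threshold_query(X, y, w, b, label):
    """Fraction of points that have class `label` AND lie above the
    halfspace {x : <w, x> > b}. Matching these statistics between real
    and synthetic data is what supports training linear classifiers.

    X: (n, d) numeric features; y: (n,) class labels; w: (d,); b: scalar.
    """
    above = X @ w > b
    return float(np.mean(above & (y == label)))
```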
Accept
This paper provides a method for generating synthetic differentially-private datasets for use in answering statistical queries, including Mixed Marginal Queries, Class Conditional Linear Threshold Queries, and "Querying the Error." This is an improvement over previous work. A solid paper that all reviewers are positive about.
train
[ "Mqky4FqVTf_", "hIYRWGBafUZ", "TqWzAPzSIYT", "PH2WhtoFni", "UJTex8YlgfM", "OEY6t6sQavt", "2fCBAKi5cdt", "X_kAfS-x3wY", "rAbqvCtEvCX", "uc7_CAFay4W" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Great --- and thank you again for the time you spent reviewing our paper, which has been valuable to us. ", " Terrific --- and thank you for the time you spent reviewing our paper! Your feedback has been valuable.", " Thanks you for the response!\nI am happy with the responses given. I think that the clarification given in your response regarding the relation of RAP and RAP++ will help readers not entrenched in the literature have a clear picture of the lineage of your approach. I also believe that the discussion on accuracy in Reviewer vqTp's response and the response above would also strengthen the paper.\n\nI'll lightly edit my review above to indicate that the limitation I listed has been addressed.\nI won't change the review numerically, but will do so shortly after discussion with other reviewers.", " Thank you for the detailed response! After reading I believe my concerns were are well-addressed by the response. I will read the discussions with the other reviewers and then reconsider my review.", " Thank your for your review! Here are our answers to your questions:\n\n**RAP vs. RAP++ on Categorical Marginal queries:** In most experiments, we used MWEM+PGM as our primary baseline because it was the most performant of the various approaches we tried (MWEM+PGM, DPMERF, CTGAN, and RAP). The difference between RAP and RAP++ is in how it treats numeric valued features and threshold based queries. In particular, for categorical marginal queries, RAP and RAP++ are essentially the same algorithm, and so it does not make sense to compare RAP and RAP++ in this setting. We'll update the caption to our figure and add discussion to reflect this. \n\n**Figure 1:** We intended figure 1 to communicate a qualitative sense for how our annealing approach helps, but we agree that it might be confusing at the point that it comes in the paper --- we will add exposition to describe more details of the experiment it comes from (that at the moment would only make sense to a reader who had read further into the paper, since we have not yet described out datasets at the point that figure 1 appears). Thanks for pointing this out!\n\n**Accuracy guarantees from RAP:** The accuracy analysis for RAP assumes that the optimization step is solved perfectly, and then only uses the fact that the queries to be answered are of bounded cardinality, and that the queries themselves are Lipschitz continuous. As such, our method, which is an instantiation of RAP with new query classes, inherits RAP's accuracy guarantees (again, under the assumption that the optimization problem can be solved). Threshold queries are not Lipschitz continuous (indeed, all threshold queries cannot be answered accurately subject to differential privacy in the worst case because they have unbounded Littlestone dimension --- this is a result of [ALMM19] which we cite). But our Sigmoid approximations to them are Lipschitz, with Lipschitz parameters determined by the choice of sigmoid temperature. We will add a discussion of RAP's accuracy guarantees and the sense in which we inherit them to the paper. \n\n**Initialization of the synthetic dataset:** We always initialize the synthetic dataset uniformly at random over the dataset domain. We will update our pseudocode to specify this --- thanks for catching the oversight!\n\n\n**Temperature scaling and CM queries:** You are right that the temperature scaling is only relevant to MM queries and that for workloads consisting only of CM queries, RAP++ reduces to RAP. 
We will clarify this in our algorithm and discussion.\n\n**Beyond Linear Models:** In our experiments, more complicated models trained on the synthetic data generally resulted in performance that was comparable to or slightly worse than the performance of linear models trained on the synthetic data, even when the more complex models outperformed linear models on the original data. (So it's not so much that they perform poorly, but that they fail to realize performance gains beyond those achievable by linear models.) This is why we say that producing synthetic data for downstream tasks beyond linear classification remains open. We will add detail and elaborate on our preliminary (non)findings in the non-linear case. \n\n\n", " Thanks for your review! \n\n**Accuracy Guarantees:** \nYou are right that our algorithm does not come with a worst-case accuracy guarantee, and this is for fundamental reasons --- in general, no computationally efficient algorithm with provable privacy guarantees can have provable accuracy guarantees that exceed those of the simple Gaussian mechanism. This is a result of Ullman '16 (Ullman, Jonathan. \"Answering $n^2+o(1)$ counting queries with differential privacy is hard.\" SIAM Journal on Computing 45.2 (2016)), which we can cite and discuss in the paper. Because of this fundamental barrier, all computationally efficient algorithms for generating synthetic data that have provable privacy guarantees can offer only heuristic accuracy guarantees. There are some ways to partially circumvent this: for example, RAP has accuracy guarantees for the continuous-valued queries it answers under the assumption that the optimization is in fact able to solve the \"projection\" problem. These were proven in [ABK+21] and [NTZ13]. Our algorithm is an application of RAP (with a new query class and optimization procedure), and so inherits these same accuracy guarantees (once again, conditional on the success of the optimization). We can add a discussion of this in the paper as well. The main novelty in our paper (and, we emphasize again, necessarily *any* paper giving provably private and computationally efficient algorithms for synthetic data generation) is therefore solving a number of design challenges that allow us to get provable privacy guarantees (using standard analyses) with new, state-of-the-art empirical accuracy findings. \n\n**Technical Innovation:** The primary technical innovation in our paper is our use of sigmoid approximations to threshold functions, which allows us to apply continuous optimization machinery. Although this is a natural idea, for reasons that we discuss in the paper, it does not work well on its own, because for any setting of the sigmoid parameter, either the differentiable queries described via the sigmoid approximation are poor approximations of the original query, or they are hard to optimize. We are able to make this idea work through our dynamic scaling of the sigmoid parameter (which we call the sigmoid temperature). This idea, of dynamically changing the queries we are optimizing for over the course of the optimization, has not appeared before and turned out to be the key to making our approach work. Another benefit of this approach is that we do not have to tune the sigmoid parameter of our approximations, eliminating what would otherwise be a hyper-parameter.\n\n", " Thank you for your careful review!\n\n**Choice of Comparison Algorithms:** We chose MWEM+PGM as the exemplar algorithm to compare to for the category of \"moment matching\" methods. 
Many competitive methods are variants of PGM (including MST). McKenna et al. (https://arxiv.org/pdf/2201.12677.pdf) summarize in their Table 1 the properties of different instantiations of PGM. We selected MWEM+PGM because it is memory efficient for high-dimensional datasets (i.e., it is \"workload aware\"). MST is not memory efficient and could not be run on the datasets used in our experiments. Other variants of PGM (e.g. AIM) differ via heuristics that are added in a modular fashion and could also be applied to other algorithms like RAP. For example, AIM differs from MWEM+PGM in that it uses a dynamic strategy to allocate the privacy budget at each round. We think of this as an optimization (that may or may not help for different tasks) that can be added on to any iterative algorithm, and so to perform the most direct head-to-head comparison we use the same simple (non-adaptive) budget allocation scheme across all of the iterative algorithms. \n\n**Runtime Comparisons:** We compare RAP++ to MWEM+PGM on runtime because MWEM+PGM was the most competitive of our comparison algorithms in terms of accuracy. Amongst the algorithms we compare to, DP-MERF runs more quickly, but its accuracy is not competitive, and so we did not do a detailed runtime comparison. We can add a discussion of this to the paper. \n\n", " The paper considers the problem of privately generating synthetic datasets based on a private dataset for which each data point is a mix of numerical and categorical variables. The best-performing past approach in this area is moment matching, which tries to generate a synthetic dataset that minimizes the error of marginal queries (i.e., queries that take 0-1 variables x_i, and ask for the expectation of $\prod_{i \in S} x_i$) made on the synthetic database. The authors build on an algorithm called RAP, which repeatedly (privately) chooses new marginal queries for which the current synthetic database has high error, and then tries to minimize the error on all queries chosen so far. Past approaches made numerical variables suitable for marginal queries by placing them into bins (thus making them categorical); the authors argue the binning heuristic is impractical. They instead propose two new classes of queries. The first applies a threshold to the numerical variables to turn them into 0-1 variables. The second applies a linear classifier to a subset of the numerical variables to generate a 0-1 variable. \n\nThe most challenging step in the RAP algorithm is to, after choosing a set of queries, find a synthetic dataset that minimizes the error on these queries. RAP does this by defining a fractional relaxation of synthetic datasets and an error function over fractional datasets, which allows one to use continuous optimization techniques. The authors propose RAP++, which is RAP, but also with the option to choose queries from their two proposed classes. To make the error function differentiable, the authors approximate the threshold/linear classifier in the queries with a sigmoid function. However, a sigmoid function has to either be highly non-smooth (which makes it hard to optimize over) or poorly approximate threshold functions. The authors therefore propose, rather than doing a one-shot optimization over the error function defined using sigmoids, an annealing-style algorithm that repeatedly solves the optimization problem, each time decreasing the temperature of the sigmoids. 
\n\nIn experiments, the authors compare RAP++ to several other synthetic data generation methods in the literature. The authors show that RAP++ produces synthetic datasets with better accuracy/error than the other methods in most settings. However, since RAP++ places less emphasis on marginal queries over purely categorical data during training, it is outperformed by some other methods on tasks where the numerical data is not useful. The authors also show that the runtime of RAP++ is much less than the runtime of the next-best method in their experiments.\n\nOriginality: To the best of my knowledge, while the algorithm in this paper builds upon a pre-existing algorithm (RAP), the two improvements made to that algorithm (new types of queries, and an annealing method for optimization) are both fairly original. Furthermore, the problem is fairly well studied, so a new approach to the problem is more novel than, say, an approach appearing in the second or third paper on a problem.\n\nQuality: The methods the authors propose appear to be very natural despite their originality/novelty. The privacy guarantees of the algorithm are sound and easy to see. The experiments appear to be fairly extensive and support the authors' claims. I do have some questions/concerns about the experiments, however, which I have placed in the questions section of the review.\n\nClarity: I thought that the paper was very clean from an exposition standpoint. The introduction does a good job explaining the issue with past approaches, and when explaining the algorithm I felt the design decisions the authors made were well motivated by their explanation of the problems those decisions addressed. There are some clarity issues with the experimental design/results related to my questions/concerns from the previous bullet.\n\nSignificance: I think the paper is fairly impactful; synthetic data generation is a well-studied problem that many practitioners are trying to solve in production. In turn, algorithms like the RAP++ algorithm proposed in this paper have a reasonable chance to be deployed in practice at wide scale. My two main questions/concerns are regarding the experiments:\n\n- How were the algorithms the authors compared against chosen? In particular, it seems like in [TMH+21] https://arxiv.org/pdf/2112.09238.pdf (which the authors cite), they state that an algorithm called MST frequently performs well in a series of benchmarks; in particular, it seems to outperform 3 of the 4 benchmark algorithms tried in this paper. However, the authors did not compare to MST. Was there a reason MST wasn't chosen?\n\n- As far as I can tell, even after looking in the appendix, the only benchmark to which the authors made a runtime comparison is PGM. This seems to be motivated by the fact that PGM is the best performing of the benchmarks. However, the results presented in the paper don't seem to preclude the possibility of another algorithm having better runtime/scalability than RAP++ at the cost of also having a higher error.\n\nEDIT: After reading the authors' rebuttal, I feel both questions have been addressed and are not of concern. Aside from the concerns in \"questions\", I don't believe there are any major limitations or negative social impacts not addressed by the authors.", "This paper provides a method for generating synthetic differentially-private datasets for use in answering statistical queries without the need for binning. 
Specific query types that were analyzed include Mixed Marginal Queries, Class Conditional Linear Threshold Queries, and \"Querying the Error.\" This is an improvement over previous work for certain classes of statistical queries. The main improvement in this paper is for the case of mixed-type queries, which contain a mixture of categorical and numerical features. The technique in the paper is an extension of the \"relaxed projection mechanism\" that considers the k hardest queries and also adds a simulated annealing step. The experimental results are also impressive. Overall, this would be a worthwhile contribution.\n\nHowever, I do have some concerns. One question I had is about the lack of theorems in the paper. Epsilon-delta privacy follows straightforwardly from the addition of the noise; how about other components, e.g., the added error? It would also help if the authors explained where the main difficulty of applying these known techniques lies. Can you address what bounds are given for this algorithm, e.g., on the added error?\nWhere do the main technical innovations of this paper lie? Yes, the limitations have been addressed.", "The paper iterates upon the \"Relaxed Adaptive Projection (RAP)\" framework from [ABK+21], a \"moment matching\" approach for generating private synthetic data. The primary improvement with respect to the prior work is the ability to handle a mixture of both categorical and numerical features --- where RAP requires a discretization of the numeric domain via binning. This is achieved in two steps: (1) numeric-based queries are introduced (which are different types of threshold queries); and (2) a differentiable approximation to these numeric queries is introduced via a tempered sigmoid annealing query. The latter is key in the optimization of the private synthetic data. With these tools to handle numerical queries, differential privacy mechanisms are used to ensure that the synthetic data is private.\n\nIn summary, the approach can be broken down as follows:\n1. The K worst query functions are selected with respect to error using \"Report Noisy Top-K\", a DP mechanism;\n2. The values of these queries are calculated with the Gaussian DP mechanism;\n3. The query functions in (1) are converted to their differentiable approximations;\n4. A projection step occurs using the results of (2) and (3) to find the best data with respect to error.\n\nThese steps are carried out in the paper's Algorithm 2, as sketched below. The proof of $(\\epsilon, \\delta)$-DP follows from standard composition and post-processing properties of zero-concentrated DP (zCDP).
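A schematic of this loop as I understand it (illustrative Python of my own; function names are assumptions, and the privacy calibration and projection internals are omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def rap_pp_round(queries, true_answers, d_hat, selected, measurements, k, noise):
    # queries: functions mapping a relaxed dataset d_hat to an answer in [0, 1].
    # 1. Report Noisy Top-K: privately pick the k worst-answered queries.
    errors = np.array([abs(a - q(d_hat)) for q, a in zip(queries, true_answers)])
    top_k = np.argsort(errors + rng.gumbel(scale=noise, size=errors.size))[-k:]
    for i in top_k:
        # 2. Measure each selected query with the Gaussian mechanism.
        measurements.append(true_answers[i] + rng.normal(scale=noise))
        # 3. Record the query; during projection it is answered through its
        #    differentiable (tempered-sigmoid) surrogate.
        selected.append(queries[i])
    # 4. Projection: optimize d_hat so the annealed surrogates of the selected
    #    queries match the noisy measurements (see the sketch above).
    return d_hat
```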
\n\nExperimentally, the paper uses this approach to generate synthetic data for multitask learning (multiple columns of labels to target in classification). With this in mind, the paper evaluates the approach over many variants of the ACS dataset. The results presented appear promising for both mixed-marginal query evaluation and (linear) classification.\n\nStrengths\n - The paper presents an iterative improvement over RAP. This is particularly shown in the experimental section, where RAP++ is competitive with or superior to other prior work.\n - The explanation and motivation for the tempered sigmoid, the subsequent queries, and the annealing approach were good.\n\nWeaknesses\n - Although there is a privacy guarantee and promising experimental results, there is no theoretical error analysis. From briefly looking at [ABK+21], RAP has a bound on query error.\n - There are a few unclear aspects of the paper.\n - Figs. 2 & 5 cite the Appendix for a comparison against other approaches, but this does not appear in the Appendix.\n\nPoints which could be rephrased as questions/comments are reiterated below.\n\nComments / Suggestions / Questions\n1. The comparison of Figs. 2 & 5 for all approaches is not present in the Appendix. This seems to be key in evaluating RAP++, especially examining the difference between RAP and RAP++ for CM to see if RAP++ is strictly better.\n2. Fig. 1 seems a bit odd without specifying which experiment it comes from.\n3. Which, if any, of the bounds on query error for RAP [ABK+21] can be \"inherited\" by RAP++? In particular for the numerical queries.\n4. The initialization of the synthetic dataset doesn't seem to be discussed anywhere (Algorithm 2, Line 3).\n5. Algorithm 1/2 is slightly confusing as it appears to suggest that temperature scaling is utilized even for CM queries (which, from digging into the code, is not the case). Clarification on the specifics of the loss functions for different queries would be helpful. Furthermore, stating the case in which RAP++ reduces to RAP explicitly would be helpful for the reader (which I believe occurs when all queries are of CM type).\n6. The limitation section states that \"Producing synthetic data useful for downstream learning beyond linear classification largely remains open\". Do non-linear classifiers perform poorly on RAP++? Plots of failure cases would be interesting to see.\n\nMinor typos\n - Algorithm 1. Line 5: \"Stating\" -> \"Starting\"\n - Algorithm 1. Line 5: Should \"$D_{k}$\" be \"$D_{j}$\"?\n - Algorithm 1. Line 5: Should \"[...] descent on $\\hat{D}$\" be \"[...] descent on $L_{j}(\\hat{D})$\"?\n - Line 227: \"Eq. 2\" is inconsistent with the equation citation style in previous parts of the paper. The authors state the limitation for downstream ML tasks as being restricted to linear classification. It would be useful to see explicit experiments on non-linear classifiers trained on RAP++ synthetic datasets.\n\n---\n\nEdit: Addressed in the reviewer discussion. " ]
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "PH2WhtoFni", "TqWzAPzSIYT", "UJTex8YlgfM", "2fCBAKi5cdt", "uc7_CAFay4W", "rAbqvCtEvCX", "X_kAfS-x3wY", "nips_2022_5JdyRvTrK0q", "nips_2022_5JdyRvTrK0q", "nips_2022_5JdyRvTrK0q" ]
nips_2022_fXq93VpCIy
Sauron U-Net: Simple automated redundancy elimination in medical image segmentation via filter pruning
We present Sauron, a filter pruning method that eliminates redundant feature maps by discarding the corresponding filters with automatically-adjusted layer-specific thresholds. Furthermore, Sauron minimizes a regularization term that, as we show with various metrics, promotes the formation of feature map clusters. In contrast to most filter pruning methods, Sauron is single-phase, similarly to typical neural network optimization, requiring fewer hyperparameters and design decisions. Additionally, unlike other cluster-based approaches, our method does not require pre-selecting the number of clusters, which is non-trivial to determine and varies across layers. We evaluated Sauron and three state-of-the-art filter pruning methods on three medical image segmentation tasks. This is an area where filter pruning has received little attention and where it can help build efficient models for medical-grade computers that cannot use cloud services due to privacy considerations. Sauron achieved models with higher performance and pruning rate than the competing pruning methods. Additionally, since Sauron removes filters during training, its optimization accelerated over time. Finally, we show that the feature maps of a Sauron-pruned model were highly interpretable. The Sauron code is publicly available at https://github.com/blindedrepository.
Reject
The paper proposes a method for pruning filters in image segmentation networks by removing, during training, filters that are closely clustered. Unlike prior works, the approach is described as single-phase, meaning it prunes during normal training. To obtain smaller networks, a term which promotes feature map clustering is added to the loss. The experiments use nnUNet as a baseline network and three other recent network pruning methods for comparison. The results show good performance (Dice/HD95) with heavily pruned networks on three medical segmentation datasets (with 3D volumes), and largely reduced FLOPs. Deep neural network pruning for medical image segmentation is an important problem due to high dimensionality and long training times. The paper is well written, and the method and experimental setup are clearly explained. The contribution was nevertheless found to be limited within a very fast-growing literature. The core contribution could be better explained. The authors mostly propose a general filter pruning method without any optimization specific to the U-Net family. It is thus necessary to compare Sauron against more existing filter pruning methods. The contribution is fair, but it remains overall a bit limited for NeurIPS, in the sense that it does not offer strong insights. Besides, the effectiveness of Sauron is also questionable on 3D U-Net, where the pruned model fails to provide satisfactory performance.
train
[ "mGdA-hxhAy6", "ceI52SzKcaF", "1y6eRZUzBzv", "kdy4XjUuvdE", "QGib6ga6JoW", "H5GJ7JmK46A", "C1hkn_gwAG", "Y1YvqkiVFh0", "jeUl0hKoUm6", "k8Aim3YcdUs", "Xpj9J-RO5fJ", "mxgTz5X4rOO", "PhQCgu1Xj6I", "q9guf_9EAjG" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the time and feedback provided.\n\n> I am still not convinced about \"the proposed regularization term further promotes such cluster formation\" by enforcing the similarity of features against the first one/channel, though the final results improved.\n\nAs the reviewer indicated, we showed that Dice coefficient, HD95, and FLOPs reduction improved with the proposed $\\delta_{opt}$ regularization (Section 4.1). Additionally, we showed that feature maps' clusterability also increased with $\\delta_{opt}$ regularization (Section 4.2 and Figure 1). This is because minimizing $\\delta_{opt}$ equates to minimizing an upper bound of the distances between the feature maps of the same cluster (lines 128-132). Let us illustrate this with an example.\n\nConsider a convolutional layer with five feature maps, $O_i : i \\in $ {$1 , 2, 3, 4, 5 $}. These feature maps are grouped in two clusters: cluster A comprises feature maps $O_1, O_2, O_3$, and cluster B comprises feature maps $O_4, O_5$. By minimizing $\\sum^{N}_{r=2} ||O_1 - O_r||_2$ (a simplified version of Eq. 2), where $N=5$, feature maps 1, 2 and 3 (cluster A) will become closer to each other (i.e., more similar) because the distance between feature maps $O_1$ and $O_2$, and feature maps $O_1$ and $O_3$ is reduced. Furthermore, because $||O_1 - O_4||_2 + ||O_1 - O_5||_2$ is an upper bound of $||O_4 - O_5||_2$ (lines 131-132), minimizing such upper bound reduces $||O_4 - O_5||_2$. In consequence, feature maps in cluster B also become closer to each other. We show this effect in Section 4.2 in different convolutional layers and with different $\\lambda$ values.\n\n> Is there any way to analyze the clusters further?\n\nCluster analysis, such as determining whether a dataset has clusters or finding the number of clusters, is challenging and it is a research topic on its own (Adolfsson et al. (2019)). In our work, we showed clusterability with three different metrics; one of them, the dip-test (Kalogeratos and Likas (2012)), was published in this venue.\n\n> Maybe see what the clusters really represent?\n\nWe argue that each cluster can be thought of as a distinct operation and that the feature maps within each cluster can be viewed as noisy outputs of such operation (line 281). This view is supported by Figure 2 in Section 4.3, where we can see that, after pruning with Sauron, each feature map represents a different operation (e.g., the top-right feature map of Figure 2 corresponds to the operation \"extract lesion from rat brain image\"). In contrast, without pruning, we can see that there are several feature maps that perform the same operation (e.g., Appendix E, Figure S3, feature maps 2, 8, 16, 17 and 18 also correspond to the operation \"extract lesion from rat brain image\"). The goal of Sauron is to make those feature maps that correspond to the same operation closer to each other (i.e., more similar) by increasing feature maps clusterability via $\\delta_{opt}$ regularization to, subsequently, eliminate all feature maps except one.\n\nWe hope to have clarified Reviewer Vhi7's concern, and we are happy to discuss further if there remain any questions.\n\nArgyris Kalogeratos and Aristidis Likas. Dip-means: an incremental clustering method for estimating the number of clusters. Advances in neural information processing systems, 25:2393–2401, 2012.\n\nAndreas Adolfsson, Margareta Ackerman, and Naomi C Brownstein. 
\n\n> Is there any way to analyze the clusters further?\n\nCluster analysis, such as determining whether a dataset has clusters or finding the number of clusters, is challenging and is a research topic in its own right (Adolfsson et al. (2019)). In our work, we showed clusterability with three different metrics; one of them, the dip-test (Kalogeratos and Likas (2012)), was published in this venue.\n\n> Maybe see what the clusters really represent?\n\nWe argue that each cluster can be thought of as a distinct operation and that the feature maps within each cluster can be viewed as noisy outputs of that operation (line 281). This view is supported by Figure 2 in Section 4.3, where we can see that, after pruning with Sauron, each feature map represents a different operation (e.g., the top-right feature map of Figure 2 corresponds to the operation \"extract lesion from rat brain image\"). In contrast, without pruning, we can see that several feature maps perform the same operation (e.g., in Appendix E, Figure S3, feature maps 2, 8, 16, 17, and 18 also correspond to the operation \"extract lesion from rat brain image\"). The goal of Sauron is to make those feature maps that correspond to the same operation closer to each other (i.e., more similar) by increasing feature map clusterability via $\\delta_{opt}$ regularization and to, subsequently, eliminate all feature maps except one.\n\nWe hope to have clarified Reviewer Vhi7's concern, and we are happy to discuss further if there remain any questions.\n\nArgyris Kalogeratos and Aristidis Likas. Dip-means: an incremental clustering method for estimating the number of clusters. Advances in neural information processing systems, 25:2393–2401, 2012.\n\nAndreas Adolfsson, Margareta Ackerman, and Naomi C Brownstein. To cluster, or not to cluster: An analysis of clusterability methods. Pattern Recognition, 88:13–26, 2019.\n", " I am still not convinced about \"the proposed regularization term further promotes such cluster formation\" by enforcing the similarity of features against the first one/channel, though the final results improved. Is there any way to analyze the clusters further? Maybe see what the clusters really represent?", " We thank the reviewer for reading and answering our response, and we are glad that the reviewer was satisfied with our reply to most of the comments. We hope to clarify our previous answer regarding Sauron's hyperparameters.\n\n> * \"it facilitates the implementation of our method and the tuning of Sauron's hyperparameters\", how and why?\n> * \"Thus, filter pruning methods that have no hyperparameters related to the general strategy because they resemble typical CNN optimization, such as Sauron, are more beneficial\", for what? I don't think this is sufficiently clear.\n\nThe lack of hyperparameters related to the design decisions of the general pruning strategy (e.g., \"number of epochs before pruning starts\", \"pruning iterations\") makes Sauron resemble typical CNN optimization (lines 96-98). Due to this resemblance, the implementation of Sauron is easy; in our implementation, Sauron is a plug-and-play module incorporated into nnUNet (Section 3.5). By contrast, consider other pruning approaches that have multiple pruning phases and require hyperparameters related to their specific optimization and pruning flow. Such approaches require much more effort to implement and deploy in existing applications because of their difference with respect to standard CNN optimization, whereas utilizing Sauron is as simple as copy-pasting our code provided in the supplementary material. In summary, having no hyperparameters related to the general pruning strategy, as in Sauron, is beneficial for implementing and deploying pruning strategies.", " We thank you for reading and answering our response, and we hope to address the two remaining concerns.\n\n1- The significance of $\\delta_{opt}$ regularization lies in the greater reduction of FLOPs without harming performance. This is possible because $\\delta_{opt}$ regularization further promotes feature map clustering, as we demonstrate in Section 4.2. In other words, pruning without $\\delta_{opt}$ regularization ($\\lambda=0$) may either not eliminate enough feature maps because the feature maps are less clustered, or it may accidentally remove feature maps that are not redundant (line 231).\n\nSpecifically, Tables 1-3 (second vs. third row) show that our $\\delta_{opt}$ regularization did not deteriorate the Dice coefficient and HD95, and Table 4 (second vs. third row) shows that $\\delta_{opt}$ regularization resulted in a higher reduction in FLOPs on the ACDC and KiTS datasets. On the Rats dataset, however, given the lower performance without $\\delta_{opt}$ regularization (Table 1, second vs. third row), the marginally higher reduction in FLOPs could indicate the accidental elimination of non-redundant feature maps, as mentioned above (line 231).\n\n2- We agree that more experiments on 3D networks would be beneficial in further illustrating the effectiveness of Sauron. 
We employed 2D and 3D CNNs on three distinct datasets (Section 4.1), and, in our experiments with 3D CNNs (KiTS dataset), Sauron outperformed the competing methods by a large margin, achieving higher Dice coefficients (Table 3) and a greater FLOPs decrease (Table 4, last column). Given these results, and since Sauron focuses on increasing the redundancy across feature maps regardless of whether the convolutional filters are 2D or 3D kernels, we expect that Sauron will perform favorably on other 3D CNN architectures.\n\nWe think that reducing the FLOPs of 2D UNets is also important. Certain 2D medical and biomedical images, such as those from cancer pathology sections and those obtained in electron microscopy, comprise millions of pixels (e.g., A. B. Hamida et al. (2021) employed the AiCOLO dataset, in which each image encompasses around 4 to 5 billion pixels, and S. Akers et al. (2021) utilized 3042x3044-pixel images). Thus, achieving small 2D UNets enables efficient whole-image segmentation in such cases. Additionally, pruning 2D UNets during the optimization (as Sauron does) gradually decreases GPU memory usage, permitting an increase in the batch size, which, in medical image segmentation, is typically much smaller than in natural image segmentation. Furthermore, due to patient privacy issues, it is not always possible to run analyses on servers. We think that our experiments with both 2D and 3D networks (Section 4.1) strengthen our work, as they showed that Sauron can prune different types of networks (2D and 3D) and architectures (Tables 1-3, Appendix C).\n\nAkers, Sarah, Elizabeth Kautz, Andrea Trevino-Gavito, Matthew Olszta, Bethany E. Matthews, Le Wang, Yingge Du, and Steven R. Spurgeon. \"Rapid and flexible segmentation of electron microscopy data using few-shot machine learning.\" npj Computational Materials 7, no. 1 (2021): 1-9.\nHamida, A. B., Devanne, M., Weber, J., Truntzer, C., Derangère, V., Ghiringhelli, F., Forestier, G. & Wemmert, C. (2021). Deep learning for colon cancer histopathological images analysis. Computers in Biology and Medicine, 136, 104730.", " I am satisfied with the reply to most of my comments and I understand the difficulty of illustrating the interpretability of feature maps in a condensed form, but it is not completely satisfying if this is the end of it.\n\nI do not understand the authors' reply regarding hyperparameters and the relevance of separating these two types of hyperparameters as described.\n- \"it facilitates the implementation of our method and the tuning of Sauron's hyperparameters\", how and why?\n- \"Thus, filter pruning methods that have no hyperparameters related to the general strategy because they resemble typical CNN optimization, such as Sauron, are more beneficial\", for what? I don't think this is sufficiently clear.", " 1. $\\delta_{opt}$ is listed among the top-2 contributions in the submission. However, from Tables 1-4, I did not get the significance of $\\delta_{opt}$ in either segmentation or FLOPs. Then, what is the exact meaning of $\\delta_{opt}$?\n\n2. More experiments using 3D UNet/nnUNet are necessary. In my view, reducing the FLOPs of 2D UNet is not that important because in most cases, we do not need to run medical diagnosis on a mobile device (we can run it on a server). ", " We really appreciate the feedback given and thank you for your time and useful comments.\n\n1 - The $\\delta_{opt}$ regularization term was designed to increase redundancy in the feature maps to facilitate the subsequent filter pruning. 
In other words, minimizing $\\delta_{opt}$ was not intended to increase performance directly, unlike perhaps other types of regularization such as L2 regularization. To this end, we showed that minimizing $\\delta_{opt}$ did increase the clusterability of feature maps (Section 4.2) and that the pruned models became smaller than without $\\delta_{opt}$ regularization (Table 4). Furthermore, in our experiments, we observed a slight increase in performance when minimizing $\\delta_{opt}$.\n\n2 - We employed 2D and 3D UNets as both types are widely used in medical image segmentation. In particular, 2D UNets are often the preferred choice for 3D anisotropic data (see, e.g., De Feo et al., 2021), such as the Rats and ACDC datasets, or for 2D medical images.\n\nThe performance on the KiTS test set was obtained by training the architecture that won the KiTS19 competition (nnUNet) following a very similar configuration (reported in the supplementary material). We agree with the reviewer that a higher performance would be more satisfactory, but our focus was not on surpassing a particular metric on the KiTS19 test set, which would have required a very careful choice of the preprocessing, data augmentation, and postprocessing steps. Instead, our focus was on achieving a higher pruning rate while not decreasing performance compared to other filter pruning methods.\n\nDe Feo R, Shatillo A, Sierra A, Valverde JM, Gröhn O, Giove F, Tohka J. “Automated joint skull-stripping and segmentation with Multi-Task U-Net in large mouse brain MRI databases”. NeuroImage. 2021 Apr 1;229:117734.\n\n3 - We thank the reviewer for this suggestion, and we will include an appropriate citation. In agreement with Huang and Wang (2018), we believe that fewer hyperparameters and fewer design decisions (as in single-phase pruning methods) are positive and desirable properties of pruning approaches.\n\nHuang Z, Wang N. “Data-driven sparse structure selection for deep neural networks”. In Proceedings of the European conference on computer vision (ECCV) 2018 (pp. 304-320).\n\n4 - We think that the compared methods (cSGD, FPGM, Autopruner) are still state-of-the-art; they were published within two years of when we started our work, are widely used, and they have been compared with filter pruning methods published this year (e.g., Wang and Li (2022), Li et al. (2022), Joo et al. (2022), Yu et al. (2022), Hou et al. (2022)). These compared methods were chosen based on the availability of code and their similarity to our approach.\n\nWang Z, Li C. “Channel Pruning via Lookahead Search Guided Reinforcement Learning”. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision 2022 (pp. 2029-2040).\n\nLi Y, Adamczewski K, Li W, Gu S, Timofte R, Van Gool L. “Revisiting Random Channel Pruning for Neural Network Compression”. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022 (pp. 191-201).\n\nJoo D, Kim D, Yi E, Kim J. “Linear Combination Approximation of Feature for Channel Pruning”. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition 2022 (pp. 2772-2781).\n\nYu S, Mazaheri A, Jannesari A. “Topology-Aware Network Pruning using Multi-stage Graph Embedding and Reinforcement Learning”. In International Conference on Machine Learning 2022 Jun 28 (pp. 25656-25667). PMLR.\n\nHou Z, Kung SY. “Multi-dimensional dynamic model compression for efficient image super-resolution”. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision 2022 (pp. 
633-643).\n\n**Questions**\n\n1 - We agree with the reviewer. We will cite the work of Reyes et al. (2020), which connects models' large number of parameters with their interpretability in the context of medical imaging.\n\nReyes M, Meier R, Pereira S, Silva CA, Dahlweid FM, von Tengg-Kobligk H, Summers RM, Wiest R. “On the interpretability of artificial intelligence in radiology: challenges and opportunities”. Radiology: Artificial Intelligence. 2020 May;2(3).\n\n2 - We referred to Table 4 in line 221.\n\n3 - The provided GitHub link was a placeholder to be replaced by the official GitHub repository link upon acceptance. We included the content of that GitHub repository as the Supplementary Material (including Sauron's code and a README file that describes how to run all our experiments).", " We thank you for your time and the feedback provided.\n\n> * I find significance is hard to assess, as the results and the novel contributions of the present work are not put and discussed in the context of prior work to a high enough degree.\n\nWe thank the reviewer for this comment on the Previous work section. We divided previous works based on the type of pruning method in order to separate the approaches more related to our work (Redundancy elimination methods).\n\n> * The section on Feature maps interpretation could benefit greatly from having the comparison to the non-pruned nnUNet in the paper and not in the appendix. As it is, it is not possible to assess the claims that the pruning makes the models easier to interpret.\n\nWe agree with the reviewer that it would be beneficial to have those figures in the paper rather than in the appendix. Those figures are quite big, as they contain several feature maps. Since we wanted to show them in high resolution and due to space limitations, we placed those figures in the appendix.\n\n> * It is unclear to me if the clustering experiments (Figure 1) were done with multiple training runs. A single experiment where clustering is improved is perhaps not so interesting.\n\nThe clustering experiments in Figure 1 were done on a single run. Due to the stochasticity of CNN optimization, there is no exact correspondence between feature maps in the same layer across different runs (i.e., feature map 1 in layer $l$ might belong to different clusters in different runs). Thus, plotting \"average\" tSNE plots (Fig. 1-a,b,c) would not have been possible. Hence, to understand the evolution of feature map clustering during and after the optimization, we also computed Fig. 1-d,e,f,g (details can be found in Section 4.2). We agree with the reviewer that a single experiment where clustering is improved is not that interesting. In line with this, Fig. 1-h offers the results of this experiment with different $\\lambda$ values to understand the effect of Sauron's $\\lambda$ on clustering. The trend depicted in Fig. 1-h (each $\\lambda$ is a different run), the rest of the analysis in Section 4.2, and the FLOPs reduction in the trained models (Table 4) show that clustering was improved via our proposed $\\delta_{opt}$ regularization.\n\n> * It seems like the proposed method does have a number of additional hyperparameters. See for instance lambda (l114), the layer-specific thresholds (l137), the patience hyperparameter (l149), the number of pruning steps (l198), etc. 
Considering that more hyperparameters are mentioned as a weakness of the prior works in contrast to the proposed method, the authors should maybe clarify why and whether they consider their approach superior in this respect?\n\nSauron has no hyperparameters that alter the typical flow of CNN optimization, which, in practice, is advantageous because it facilitates the implementation of our method and the tuning of Sauron's hyperparameters. Specifically, we can divide the hyperparameters of filter pruning methods into two groups: 1) hyperparameters that affect the way in which pruning is performed (e.g., Sauron's hyperparameters, line 1 in Algorithm 1), and 2) hyperparameters related to the design decisions of the general pruning strategy, such as \"number of epochs before pruning starts\" and \"pruning iterations\". These two types of hyperparameters are highly coupled. For instance, the hyperparameter \"number of epochs before pruning starts\" can easily depend on how challenging a segmentation task is, and, based on this hyperparameter's value, pruning hyperparameters such as Sauron's $\\tau_{max}$ (maximum threshold) or $\\rho$ (patience) would differ. Thus, filter pruning methods that have no hyperparameters related to the general strategy because they resemble typical CNN optimization, such as Sauron, are more beneficial. As suggested by the reviewer, we will clarify this in the paper.", " Thank you for your time and the positive assessment of our work.\n\n> * The motivation for including the regularization term is not clear to me. For instance, I am not sure why the similarity of features is always enforced between the feature of the first channel and the rest. I did not see how the statement \"... makes those feature maps near the feature map in the first channel O_1 even closer. At the same time, those features that are dissimilar to O_1 become more similar to other feature maps...\" is achieved, especially how are the other clusters formed as in the second part? \n\nThe motivation for minimizing the proposed regularization term is to reduce the distance between feature maps in the clusters formed during CNN optimization. We would like to emphasize that clusters of feature maps are formed regardless of any regularization term or pruning strategy (line 125). In consequence, this cluster-formation phenomenon has motivated the development of filter pruning methods that eliminate redundant feature maps (see \"Redundancy elimination\" in the Previous Work section). Our experiments also corroborated the formation of clusters of feature maps in vanilla CNN optimization (Fig. 1-b: feature maps are clustered after optimizing a CNN without the regularization term; Fig. 1-e,f,g \"Increase\": some convolutional layers have higher clusterability after the optimization).\n\nIn practice, it does not matter which feature map is chosen as a reference to minimize the distances in $\\delta_{opt}$ (Eq. 2). We chose the first feature map as it was easy to implement; we will include this justification in the paper to improve its clarity.\n\nWith the first feature map chosen as a reference (Eq. 2) and considering that the formation of clusters occurs regardless of any regularization term, minimizing $\\delta_{opt}$ during the optimization will reduce (even more) the distance between the first feature map and all the other feature maps from the same cluster. Hence, we wrote that \"$\\delta_{opt}$ regularization makes those feature maps near the feature map in the first channel O_1 even closer\". 
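For illustration, a minimal PyTorch sketch of such a first-channel distance term (our naming here is illustrative, and this simplified form omits the exact normalization of Eq. 2):

```python
import torch

def delta_opt(feats: torch.Tensor) -> torch.Tensor:
    # feats: feature maps of one convolutional layer, shape (B, N, H, W).
    # Sum over r >= 2 of ||O_1 - O_r||_2, averaged over the batch.
    diff = feats[:, 1:] - feats[:, :1]                   # broadcast against channel 1
    return diff.flatten(2).norm(dim=2).sum(dim=1).mean()

# Usage sketch: loss = seg_loss + lam * sum(delta_opt(f) for f in layer_outputs)
```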
Regarding the feature maps in other clusters, their within-cluster distance is also reduced because minimizing $\\delta_{opt}$ equates to minimizing an upper bound of such distances. Hence, we wrote that \"At the same time, those features that are dissimilar to O_1 become more similar to other feature maps from the same cluster\".\n\nIn summary, the formation of clusters of feature maps is inherent to the optimization of convolutional neural networks, and minimizing the proposed regularization term further promotes this cluster formation, which we ultimately exploit for filter pruning.\n\n> * The proposed regularization term is rather straightforward, which is not technically innovative.\n\nWe agree that the proposed regularization term is straightforward; this is similar to most influential regularization terms, such as L2. We think that complexity is not a necessary component of technical innovativeness, and, given the superior performance and higher pruning rate achieved by our method, the straightforwardness of the proposed regularization term is a positive property that makes our method easy to follow, easy to implement, and overall attractive.\n\n> * Are the computed FLOPs computed in training or testing? It will be helpful to show the number of filters removed in the model and the exact number of parameters for the adjusted models.\n\nWe computed the FLOPs in testing. We thank the reviewer for the suggestion, and we will include the number of filters removed and the exact number of parameters after pruning.", " Thank you very much for your positive feedback and useful suggestion. We agree, and we will add to our Previous work section those filter pruning methods applied to medical/biomedical image segmentation tasks, since they share the same focus as our paper.", "This paper tackles filter pruning on deep convolutional networks for medical image segmentation. The training loss is augmented with a regularization term that penalizes distances between feature maps and forms clusters; thus, pruning does not introduce extra training parameters or phases such as pre-training. Feature maps whose distances are below a threshold are then dropped, where the thresholds adapt per layer and epoch. This strategy is used for training the state-of-the-art nnUNet on 3 benchmark medical image segmentation datasets. The resulting pruned models maintain the segmentation performance of unpruned models and outperform other pruning methods in terms of segmentation performance and inference time. \n\nStrengths: Pruning deep neural networks for medical image segmentation is an important problem due to high dimensionality and long training times. The paper is well written, and the method and experimental setup are clearly explained. \n\nWeaknesses: The following recent, similar works are not mentioned in the related works/experiments, hindering the clarity of novelty. In particular, Zhou et al. (2020), Chen et al. (2020), and Dinsdale et al. (2022) focus on filter pruning for medical image segmentation; it is suggested that the authors compare/contrast their approach with these works. \n\nZhou et al. “Evolutionary Compression of Deep Neural Networks for Biomedical Image Segmentation” (2020)\n\nChen et al. “α-UNet++: A Data-Driven Neural Network Architecture for Medical Image Segmentation” (2020)\n\nDinsdale et al. “STAMP: Simultaneous Training and Model Pruning for Low Data Regimes in Medical Image Segmentation” (2022)\n\nHe et al. 
“CAP: Context-Aware Pruning for Semantic Segmentation” (2021)\n\nDitschuneit et al. “AUTO-COMPRESSING SUBSET PRUNING FOR SEMANTIC IMAGE SEGMENTATION” (2022)\n\nChen et al. “MTP: MULTI-TASK PRUNING FOR EFFICIENT SEMANTIC SEGMENTATION NETWORKS” (2022)\n\nSabih et al. “DyFiP: Explainable AI-based Dynamic Filter Pruning of Convolutional Neural Networks” (2022)\n\nN/A. Limitations and potential impacts are discussed.", " The authors proposed a regularization term in the loss for filter pruning in medical image segmentation tasks. The regularization term minimizes the difference between the feature maps of each convolutional layer, enforcing the similarity of all features against the feature map of the first channel. Filter pruning is conducted by removing the filters producing similar features. Three medical segmentation datasets (with 3D volumes) are employed for the experiments, and superior results of the proposed method are reported in comparison to previous filter pruning methods. I have several concerns about the presented work, mainly regarding the experimental results. \n\n+ The proposed pruning method can achieve better results than prior art, showing that the add-on regularization term effectively promotes feature clustering.\n+ The manuscript is overall easy to follow.\n\nWeakness:\n- The motivation for including the regularization term is not clear to me. For instance, I am not sure why the similarity of features is always enforced between the feature of the first channel and the rest. I did not see how the statement \"... makes those feature maps near the feature map in the first channel O_1 even closer. At the same time, those features that are dissimilar to O_1 become more similar to other feature maps...\" is achieved, especially how are the other clusters formed as in the second part? \n- The proposed regularization term is rather straightforward, which is not technically innovative.\n- Are the computed FLOPs computed in training or testing? It will be helpful to show the number of filters removed in the model and the exact number of parameters for the adjusted models. It will be helpful to see the authors' response to my abovementioned concerns. Good for the presented application.", "The paper proposed a method for pruning filters in image segmentation networks by removing filters during training that are closely clustered. Unlike prior works, the approach is described as single-phase, meaning it prunes during normal training. To promote smaller networks, a term which promotes feature map clustering is added to the loss. The experiments use nnUNet as a baseline network and 3 other recent network pruning methods as comparison. The results show better performance (Dice/HD95) with more heavily pruned networks. \n\n+ Well motivated.\n\n+ Clearly written and easy to follow.\n\n- I find significance hard to assess, as the results and the novel contributions of the present work are not put and discussed in the context of prior work to a high enough degree.\n\n- Having a list of contributions at the end of the introduction, I feel, is a good way to summarize the reasons why a particular manuscript is relevant to read. In this case, however, it repeats too much for my taste. I would have preferred a stronger focus on novelty and what separates this work from prior works specifically.\n\n- The section on Previous work is hard to follow. I would have preferred more detail and more relevance given the context of the submitted work. 
As it is, it just reads as a listing of prior approaches, with too little detail to be informative.\n\n- The section on Feature maps interpretation could benefit greatly from having the comparison to the non-pruned nnUNet in the paper and not in the appendix. As it is, it is not possible to assess the claims that the pruning makes the models easier to interpret.\n\n- It is unclear to me if the clustering experiments (Figure 1) were done with multiple training runs. A single experiment where clustering is improved is perhaps not so interesting.\n\n- It seems like the proposed method does have a number of additional hyperparameters. See for instance lambda (l114), the layer-specific thresholds (l137), the patience hyperparameter (l149), the number of pruning steps (l198), etc. Considering that more hyperparameters are mentioned as a weakness of the prior works in contrast to the proposed method, the authors should maybe clarify why and whether they consider their approach superior in this respect?\n\n- Aside from the above limitations, I think the work addresses limitations as expected.\n", " The authors proposed Sauron, a filter pruning methodology designed for U-Net-like medical image segmentation networks. In comparison to most filter pruning frameworks, which consist of more than one distinct phase, Sauron applies filter pruning during optimization in a single phase, making it require fewer hyperparameters and design decisions. To achieve this goal, Sauron facilitates and promotes the formation of feature map clusters by optimizing a regularization term, and does not enforce the number of these clusters. In experiments, Sauron achieves comparable segmentation performance and largely reduced FLOPs on nnUNet. \n\nStrengths:\n\n1. The proposed clustering-based filter pruning method sounds interesting and reasonable. The regularization term $\\delta_{opt}$ in Sauron makes those feature maps near the feature map in the first channel even closer. Meanwhile, based on the upper bound in lines 129-130, $\\delta_{opt}$ forces those feature maps that are dissimilar to the feature map in the first channel to be more similar to other feature maps from the same cluster. \n\n2. Sauron performs well on 2D nnUNet, providing comparable performance on the Rats and ACDC datasets with largely reduced FLOPs.\n\nWeaknesses:\n\n1. In Tables 1-3, it seems that the regularization term $\\delta_{opt}$ does not have a significant impact on the segmentation performance. For example, Sauron with $\\delta_{opt}=0$ performs comparably with Sauron on Rats, ACDC, and KiTS (Kidney). In my opinion, these results cannot fully reflect the significance of the introduced regularization term (the major contribution of Sauron).\n\n2. The results of 3D nnUNet on KiTS are not satisfactory, especially on the Tumor class. Actually, I recommend the authors lay more emphasis on 3D UNets, which often require many more computational resources than 2D UNets. However, most of the experiments were conducted on 2D UNets, which are not deep and maintain high inference speed (not really in need of filter pruning).\n\n3. The authors stated that Sauron is a single-phase pruning method, which is more advantageous since it requires fewer hyperparameters and design decisions, including the number of epochs for training and fine-tuning, pruning iterations, etc. The authors should support their statements with either experimental results or appropriate citations.\n\n4. 
The pruning baselines, i.e., cSGD, FPGM, and Autopruner, are somewhat outdated, having been published about three years ago. The authors should consider including more recent (published within two years) filter pruning approaches for fair comparisons. \n\n1. Lines 29-30: The statement \"models with a few filters can be easier to interpret than large models, which is crucial not only in clinical applications but also in research.\" needs citations.\n\n2. Table 4 is never referred to in the main text.\n\n3. The provided GitHub link does not exist. \n\nThe authors adequately addressed the limitations and potential negative societal impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "ceI52SzKcaF", "jeUl0hKoUm6", "QGib6ga6JoW", "H5GJ7JmK46A", "Y1YvqkiVFh0", "C1hkn_gwAG", "q9guf_9EAjG", "PhQCgu1Xj6I", "mxgTz5X4rOO", "Xpj9J-RO5fJ", "nips_2022_fXq93VpCIy", "nips_2022_fXq93VpCIy", "nips_2022_fXq93VpCIy", "nips_2022_fXq93VpCIy" ]
nips_2022_TrsAkAbC96
Implicit Warping for Animation with Image Sets
We present a new implicit warping framework for image animation using sets of source images through the transfer of motion of a driving video. A single cross-modal attention layer is used to find correspondences between the source images and the driving image, choose the most appropriate features from different source images, and warp the selected features. This is in contrast to the existing methods that use explicit flow-based warping, which is designed for animation using a single source and does not extend well to multiple sources. The pick-and-choose capability of our framework helps it achieve state-of-the-art results on multiple datasets for image animation using both single and multiple source images.
Accept
Consistent reviews, both in content and in score. The cross-identity motion transfer is a good test of the paper's capability -- it would improve the paper to provide more such examples, which are clearly more challenging than the same-identity case. The concerns about the limited diversity of example subjects mentioned by R1 are indeed relevant. The video examples are all male, with quite light skin tone. Please include examples with female subjects, darker (Fitzpatrick 6+) skin tone, and other ethnicities. To be clear: the rebuttal's current response \"we will emphasize that training on a diverse dataset is a must\" does not go far enough. It is very important that qualitative examples are shown; this is even more important than the test datasets being diverse. If the results are less good, efforts should be made before NeurIPS to improve them (e.g., by retraining), and if improvement is not possible, this should be very clearly stated in the limitations of the final copy and the NeurIPS presentation/poster.
test
[ "hdmeJpXgef6", "uHpFik0Fi4U", "JxHhZ-D2Vaw", "Xhu_ICy04aa", "5tchBBeI_JpU", "XY8bArHRtY_", "z7U1Bh286uI", "LCCFpbT_bTa" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their detailed response and my main concerns are addressed. I would encourage the authors to make the updates they describe and am happy to upgrade my review to Accept. ", " **Extra key-and-values implementation:**\nIn our implementation, the extra keys and values are learned by the network during training. We initialize new parameters in the decoder, corresponding to the extra keys and values, as shown in the Python PyTorch code snippets below: \n```python\n# Initialization.\n​​self.extra_k = nn.Parameter(torch.randn(1, num_extra_kv, dim_qk)) \nself.extra_v = nn.Parameter(torch.randn(1, num_extra_kv, dim_v)) \nself.norm_extra_k = nn.LayerNorm(dim_qk) \nself.norm_extra_v = nn.LayerNorm(dim_v)\n```\n\n\nDuring the forward pass, these extra keys and values are appended to the keys and values obtained from the input source image(s):\n```python\n# Forward pass.\n# k, v are from the source image(s).\nBk, Nk, Ck = k.shape \nBv, Nv, Cv = v.shape\nk = torch.cat((k, self.norm_extra_k(self.extra_k).expand(Bk, *self.extra_k.shape[1:])), dim=1) \nv = torch.cat((v, self.norm_extra_v(self.extra_v).expand(Bv, *self.extra_v.shape[1:])), dim=1)\n```\n\nAfter training, the above extra keys and values are kept fixed. \nThese extra keys and values are not conditioned on the input source or driving images. They are static learned vectors that are specific to the dataset and task, similar to the weights of the rest of the network. We can also interpret them as a learned dictionary mapping.\n", " **Error bars and multiple runs:** Our networks on the TalkingHead-1KH dataset are trained in two steps, as detailed in Section D of the supplementary material — first at 256x256 resolution, followed by finetuning at 512x512. The first stage takes 3.5 days (7 days) on an NVIDIA DGX server with 8 A100 40GB GPUs (8 V100 32GB GPUs). The second stage takes about a day on the DGX A100, or 2+ days on the DGX V100. Due to the limited computational resources, combined with experiments on multiple datasets and settings, we chose to perform ablations only at the 256x256 resolution. Each ablation experiment was run 3 times. We then chose the best networks and finetuned them at 512x512 for faces. The best settings found were also used for the voxceleb2 and TED-Talk datasets. \n\nThe ablations in Table 5 do demonstrate high variance, but we would like to note that the best metrics over all runs were obtained by the setting in the last column of Table 5. Below, we provide the best metrics obtained over 3 runs for the columns of Table 5:\n| Metric | No residual or extra key-value | Residual connection only | Residual connection + Extra key-value |\n| :--- | :----: | :----: | :----: |\n| FID | 19.10 | 18.32 | **17.53** |\n| PSNR | 23.49 | 23.63 | **23.82** |\n| LPIPS | 0.117 | 0.112 | **0.109** |\n| AKD (MTCNN) | 1.953 | 1.903 | **1.827** |\n\nBased on this observation, we chose to use the last column setting of residual connection + extra keys and values, for our main experiments. We will provide detailed metrics for each of the 3 runs in the supplementary material. \n\n**Additional keys become very application specific:** Yes, this is true. The trained model, including the learned keys and values are not transferable across very different datasets, for e.g. from faces to half-body videos. 
Details about the implementation are available in the common comment section.\n\n**Mismatch in Table 1 and Table 2:** Table 1 measures metrics for image reconstruction using a single source image, while Table 2 measures metrics for image reconstruction using multiple source images. As mentioned in Line 231, for Table 2, we use at most 180 frames per video during evaluation. For Table 1, we used the entire videos, which have up to 1024 frames (Line 477 in the supplementary). Hence the difference. We will clarify this in the updated draft.\n\n**Application-level evaluation:** Our proposed method can be used both for motion retargeting and for video reconstruction for conferencing purposes. In the case of video conferencing, the sender's side (encoder) only has the keypoint detector network shown in Fig. 8. The receiver's side (decoder) contains the query, key, and value networks as shown in Fig. 9.\nNote that we do not have to transmit the q x d sized attention matrix A — we only have to transmit the keypoint locations and associated scalar keypoint strengths. After training, all networks are fixed and the attention matrix can be obtained on the decoder side using the keypoint location and strength information as inputs to the query, key, and value networks. \n\n**Limitations of keypoint detector:** This is an excellent observation! If the keypoint detector is trained on a biased dataset, it might not perform well for all skin colors, lighting conditions, etc. We will emphasize that training on a diverse dataset is a must, to ensure wide coverage of the keypoint detector as well as the decoder stack.", " **Ablation at reduced resolution:** Our networks on the TalkingHead-1KH dataset are trained in two steps, as detailed in Section D of the supplementary material — first at 256x256 resolution, followed by finetuning at 512x512. The first stage takes 3.5 days (7 days) on an NVIDIA DGX server with 8 A100 40GB GPUs (8 V100 32GB GPUs). The second stage takes about a day on the DGX A100, or 2+ days on the DGX V100. Due to the limited computational resources, combined with experiments on multiple datasets and settings, we chose to perform ablations at the 256x256 resolution. Each ablation experiment was run 3 times. We then chose the best networks and finetuned them at 512x512 for faces, and trained them at 384x384 from scratch for the upper-body TED-Talk dataset.\n\n**Extra key-and-values implementation:** Please see the common reply comment for a detailed answer.\n\n", " **Resolution of cross-modal attention:** As mentioned in Lines 149-150, the keys, queries, and values are produced at 1/4 the resolution of the input images, i.e., 64x64 for 256x256 and 128x128 for 512x512 input images. The corresponding attention maps and warped features are also at 64x64 and 128x128 resolutions, respectively. Our decoder network has 2 upsampling layers that produce an output image at 4x the resolution of the warped features, similar to FOMM and face-vid2vid, as shown in Fig. 9 in the supplementary material. For faces, we first train at 256x256, and then finetune at 512x512, without any changes to the network architecture. We will make this clear in the main text.\n\n**Keypoint representation and its usage:** Similar to FOMM, we use a keypoint predictor network to predict keypoint locations by applying a spatial softmax on its outputs. Additionally, we also predict a scalar per keypoint in the range [0, 1]. 
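As a rough illustration of this prediction step (our own simplified sketch, not our exact code; the strength head in particular is an assumption):

```python
import torch
import torch.nn.functional as F

def keypoints_and_strengths(heatmaps: torch.Tensor):
    # heatmaps: raw keypoint-detector outputs, shape (B, K, H, W).
    b, k, h, w = heatmaps.shape
    probs = F.softmax(heatmaps.flatten(2), dim=2).view(b, k, h, w)  # spatial softmax
    ys = torch.linspace(-1.0, 1.0, h, device=heatmaps.device)
    xs = torch.linspace(-1.0, 1.0, w, device=heatmaps.device)
    y = (probs.sum(dim=3) * ys).sum(dim=2)   # expected row coordinate, (B, K)
    x = (probs.sum(dim=2) * xs).sum(dim=2)   # expected column coordinate, (B, K)
    strength = torch.sigmoid(heatmaps.amax(dim=(2, 3)))  # one scalar in [0, 1] per keypoint
    return torch.stack((x, y), dim=2), strength
```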
Similar to FOMM, in the decoder, we place a Gaussian of fixed variance at the predicted keypoint location. We further modulate these spatial keypoint representations by multiplying them with the associated scalar. These modulated features of *K* channels (where *K* is the number of keypoints) are fed as input to the query and key networks, as shown in Fig. 9.\n\n**Effect of number of keypoints:** We used 20 keypoints in all comparisons, following the prior work face-vid2vid. We found that increasing the number of keypoints from 20 to 30 on the TalkingHead-1KH dataset improved the quality of the results — increasing PSNR from 23.32 to 24.26, reducing AKD (MTCNN) from 3.48 to 3.18, and reducing L1 from 12.86 to 11.92, showing improved reconstruction and keypoint fidelity.\n\n**Newer baselines:** We actually do use the newer work \"Motion representations for articulated animation.\" Proc. CVPR. 2021, in our comparisons on the upper-body TED-Talk dataset in Table 1 and Fig. 5. Note that this method is referred to as AA-PCA [28] (Articulated Animation using PCA), as also noted in Lines 208-209. For faces, we compare with the state-of-the-art face-vid2vid. We will make this naming convention clear.\n\n**Method efficiency comparison:** When using a single source image at 512x512 output resolution, the FPS of FOMM, face-vid2vid, and implicit warping (ours) are 13.3, 7.1, and 9.6, respectively. When using 3 source images, the corresponding FPS are 9.5, 3.6, and 5.1. Our method is slower than FOMM but faster than face-vid2vid. These framerates were obtained on an NVIDIA RTX A6000 GPU. \n\n**Extra key-and-values implementation:** Please see the common reply comment for a detailed answer.", " This paper concerns the task of animating a set of one or more source images driven by a target (or driving) video; applications include efficient video compression (e.g. for video calls) or retargeting a video.\n\nOut of a four-stage pipeline, the authors note that their work focuses on the warping of image features from the source image(s) to the output. Previous work provides feature representation extraction from the source and driving images, as well as decoding from the warped feature representation to the final output image.\n\nThe main contribution is the direct application of a dot-product attention based transformer to perform implicit warping between the driving and source image(s); this is an elegant solution that requires no special treatment for different numbers of source images. This is contrasted against previous approaches that only consider a single source image, without an obvious method to extend to multiple source images that avoids undesirable image operations (e.g. averaging over output images or hard transitions between source images).\n\nWhilst a direct application of a transformer, the resulting approach is an elegant solution that prioritises simplicity, for which I commend the authors. I found the presentation of the method to be straightforward and clear - I believe the use of the different components is logical and well justified. \n\nThe proposed method offers a number of advantageous properties over previous approaches whilst removing complexity. I believe it is fair to say that the empirical evaluation should be of critical consideration in determining the merits of the approach, as it determines whether or not the hand-specified architectural and modelling decisions (or implicit priors) in previous approaches can be relaxed in favour of a data-driven approach. 
For this reason, this is the main focus of the review.\n\nIn terms of the quality of the results, I have a number of questions regarding the quantitative evaluation. This also speaks to the significance of the results in terms of how well we expect the method to generalise and how well it would perform at downstream tasks. There are a number of points that I currently see to be weaknesses in the evaluation; please see the questions below for my comments on this, which include specific questions for the authors to clarify.\n\nCaveat: This is not my main area of research; whilst I am familiar with the methods and concepts being brought to bear, I am not familiar with the literature in the application area and cannot speak to the novelty or otherwise of the application; I cannot guarantee that there is not related work that I have missed. Additional keys: The use of the additional keys seems relatively critical to the method (particularly some of the outputs where occlusion means there are no suitable source regions to use). The ablation study for this is relatively simple (as a binary consideration) whereas there are a number of parameters for the additional keys. At the same time, the ablation study results are not very conclusive (Table 5 in the supplement); the improvements for the majority of the metrics are well within the associated standard error values provided. It would seem that a further investigation of this area is warranted - as the authors point out, it should be vital to the success but this doesn't seem to follow through to the ablation study metrics - please could the authors comment on this? Is there not a need for further study here, and how were the parameters (e.g. the number of additional keys) derived? It would also seem that these additional keys become very application specific (e.g. synthesising eyes); does this limit the general applicability of the results across datasets?\n\nError bars/Variance: Given the contrast between Table 4 and Table 5 (where std errors are provided) it seems necessary to provide the std errors for the other results tables - it is difficult to judge the significance of the improvement, and we would suspect, from the Table 5 std dev, that these improvements may not be particularly significant given the dataset (e.g. Tables 1 and 2). Please can these values be provided and the authors comment on this - if this is not important, please could the authors indicate why it should be neglected in the analysis of the quantitative results?\n\nMismatch in the results: Please could the authors explain why the first columns of Table 2 don't match the corresponding columns in Table 1 (i.e. the results for single source images)? Sorry if I have missed something.\n\nApplication Level Evaluation: Please could the authors comment on the intended application for such work? For example, is the goal retargeting, or also efficiency for (e.g.) video calls? It seems the information required per frame (the A matrix) exceeds the original image dramatically; therefore, there would not be a reduction in bandwidth if the result is to be sent over a communications channel. How would this compare with the need to send the warping field (which might be readily compressible)? The authors specifically cite this in lines 257-258 as an application and it seems to me that no data compression has been achieved? Or is it the case that we don't need to send q x d and that the driving keypoint network is fixed so only the keypoint input data can be sent? 
The authors flag that operating under high levels of occlusion will be very challenging (this makes sense, and the arguments that multiple images help with this are reasonable - perhaps a consideration of how the source images should be chosen would be helpful). There is also discussion that computational/memory complexity could be improved (under the observation that there is perhaps significant sparsity that has not been exploited).\n\nComments that might want to be considered by the authors but I don't believe warrant ethical review:\n\nIn consideration of societal factors, the authors point out that the final video is generated from the source image and therefore they would not expect racial bias in the results; whilst this makes sense for one part, I think this misses the consideration that the method relies on the keypoint and representation networks and feature decoders to operate well, and these are all trained on data - if these datasets contain significant bias, would we not expect the performance to vary as a result?\n\nThe authors included the evaluation protocol and payment for the human evaluation study; I noted that the estimated hourly rate provided ($5/hour) is demonstrably lower than the US minimum wage (around $7.5/hour).\n", " This paper presents a novel method for transferring the motion of a driving video to a subject via an image set of the subject. By using a cross-modal attention layer for directly generating warped features, the proposed method differs from most previous methods, which find explicit correspondences and warp images accordingly. By using the proposed mechanism, the proposed method can benefit from multiple source images without resorting to ad hoc approaches and suffering from sub-optimal results such as flicker or blur. Overall, the paper appears to be very promising. The paper describes a simple yet effective method of generating an animation from a set of images. In the case of multiple images, the proposed method clearly outperforms other methods. Though the proposed attention layer is commonly used in many applications, its use in this problem setting and the overall architecture design are novel.\n\n### Strengths ###\n* The proposed method is both simple and effective. The core of the algorithm is the cross-modal attention layer. Although the idea has been used extensively in many applications, its use in this setting is novel to my knowledge.\n\n* The results of the experiments are promising and convincing. The proposed method was compared with a couple of SOTA methods using multiple datasets. A major advantage of the proposed method is its ability to utilize multiple images simultaneously and effectively.\n\n* The paper is well written and enjoyable to read. The supplementary material, in particular the video, is very well done.\n\n### Weaknesses ###\n* The cross-modal attention layer does not present much technical innovation. However, the overall flow and architecture design are novel.\n\n* It is not entirely clear how to add additional keys and values. It would be helpful if a more detailed description was provided.\n\n* It is not clear why the ablation study was conducted at a reduced resolution.\n Could you please provide more details regarding the procedure for adding additional keys and values?\n\nIs there a reason why the ablation study was conducted at a reduced resolution?\n The paper discusses the societal impacts adequately. The method may fail if it is required to fill in a large amount of missing information. 
Additionally, it may encounter problems when dealing with extreme expressions not present in the training data.", " The paper presents a method for warping-based image animation using multiple source images (i.e., images of the scene to be animated) and a driving video (i.e., image frames from the same or similar scene used to drive the animation). Since previous methods have focused mostly on animating scenes with a single source image, including multiple source images in these frameworks is non-trivial. Yet, it makes sense that multiple source images could improve performance since they may provide textures from occluded regions that can be used in the animation. The key technical contribution is to combine existing keypoint-detection methods with an attention-based warping. This allows the model to attend to image features across all input frames in order to produce a dense grid of warped output features. The method is evaluated for face and body animation and demonstrates state-of-the-art results for both single and multiple input source images.\n *Strengths*\n\n- The paper is well-written and the technical descriptions are clear and fairly easy to follow.\n\n- This method seems to elegantly and sensibly address the challenge of warping multiple input frames into a single output using information from the driving video. The implicit warping formulation using the attention mechanism helps to solve some limitations of previous methods (which could only support explicit, linear warping). I could see this approach being relevant to other problems in image-based rendering or sensor fusion.\n\n- The method clearly outperforms the other baselines and the additional results (included in the videos and webpage) are compelling.\n\n*Weaknesses*\n\n- While the paper is generally easy to follow, there are some technical details missing from the main paper (see questions below).\n\n- Some aspects of the related work section could be improved. For example, the "Image Animation" section notes that "none of [these frameworks] is designed to take advantage of complementary information available in different input images." Yet, there are a number of papers that use multiple input images, e.g., for facial animation [30-33]. In these works, multiple input images can be used to synthesize a dense texture map of the face, which can then be animated using a 3DMM-style model. Some rewording here could help make this section more accurate.\n\n- One other limitation of the method seems to be scalability. While there are results demonstrated on 512x512 images, achieving higher resolution seems difficult since the attention-based mechanism happens at a relatively low resolution (the 64x64 resolution is mentioned in the paper). The features predicted after the attention module are then upsampled to produce a higher-resolution output, but I suspect there's a limit to how much upsampling can be done while retaining good quality.\n\nOverall I think these are relatively minor weaknesses; the method appears sufficiently novel and original, and the results seem quite strong. - The main paper doesn't seem to describe how the outputs of the cross-modal attention are upsampled to the final output resolution. While this is clear from the supplement, I think a sentence should be added for completeness.\n\n- I wasn't entirely sure what the keypoint representation is. 
In the FOMM paper, the keypoint predictor produces a heatmap that is passed through a softmax to produce a probability distribution whose mean is used to calculate a pixel location. Here, the keypoints seem to be passed directly into a convolutional layer so are the heatmaps (pre- or post-softmax) used for the keypoint representation?\n\n- How much does the number of keypoints matter in this case? I see that 20 keypoints are used for all experiments, but I'm curious how much this hyperparameter matters.\n\n- How are the extra/added key-value pairs implemented? Are these simply learnable parameter vectors of dimension d? Or are they conditioned somehow on the input source images?\n\n- I was curious why the FOMM model was selected for baseline comparisons rather than the more recent work from the same authors which addresses some of the shortcomings of the original model (Siarohin et al., ref. below). If I look at the quantitative results from that paper, they seem close to the proposed method, e.g., for AKD on the TED Talk dataset. Is there a reason this baseline wasn't included?\n\n - Siarohin, Aliaksandr, et al. \"Motion representations for articulated animation.\" Proc. CVPR. 2021.\n - The paper briefly touches on computational efficiency, but I think this limitation could be clarified a bit more. How does the proposed attention-based model compare to the baseline methods in terms of efficiency? I would guess that FOMM or FV2V have a much higher framerate, and the proposed model achieves better quality, but with some penalty to efficiency. Additional clarification on this point would be helpful.\n\n- There are obvious ethical concerns with the proposed method, especially with respect to potential misuse for deep fakes. Still, I think the authors address this point reasonably well in the societal impact section while highlighting positive benefits of this class of technique.\n\nTypos\nFig. 3: \"Coss-model\" -> \"Cross-modal\"\n" ]
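To make the mechanism debated throughout this thread concrete, here is a minimal, self-contained sketch of the two pieces the replies describe: converting predicted keypoints and scalar strengths into fixed-variance Gaussian heatmaps, and single-head cross-modal attention from driving-frame queries to keys/values pooled over all source images, with optional learned extra key-value pairs for occluded regions. All function and tensor names, sizes, and the single-head simplification are illustrative assumptions, not the authors' implementation (the actual query/key/value networks and resolutions are described in the paper's Fig. 9).

```python
import torch

def keypoints_to_heatmaps(kp_xy, strength, size=64, sigma=0.1):
    # Fixed-variance Gaussian centered at each predicted keypoint,
    # scaled by the per-keypoint strength in [0, 1].
    # kp_xy: (B, K, 2) in [-1, 1]; strength: (B, K).
    coords = torch.linspace(-1.0, 1.0, size)
    gy, gx = torch.meshgrid(coords, coords, indexing="ij")
    grid = torch.stack([gx, gy], dim=-1)                       # (H, W, 2)
    diff = grid[None, None] - kp_xy[:, :, None, None, :]       # (B, K, H, W, 2)
    heat = torch.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))  # (B, K, H, W)
    return heat * strength[..., None, None]                    # modulated heatmaps

def implicit_warp(q, k, v, extra_kv=None):
    # Single-head dot-product attention: queries from the driving frame,
    # keys/values pooled over all source images.  extra_kv: (n, 2, d)
    # learned pairs that can be attended to when no source region fits.
    if extra_kv is not None:
        b = q.size(0)
        k = torch.cat([k, extra_kv[:, 0].unsqueeze(0).expand(b, -1, -1)], dim=1)
        v = torch.cat([v, extra_kv[:, 1].unsqueeze(0).expand(b, -1, -1)], dim=1)
    attn = torch.softmax(q @ k.transpose(1, 2) / q.size(-1) ** 0.5, dim=-1)
    return attn @ v                                            # (B, N_q, d) warped features

B, K, d, size = 2, 20, 128, 64
heat = keypoints_to_heatmaps(torch.rand(B, K, 2) * 2 - 1, torch.rand(B, K), size)
q = torch.randn(B, size * size, d)        # from driving-keypoint heatmaps
k = torch.randn(B, 3 * size * size, d)    # key features from 3 source images
v = torch.randn(B, 3 * size * size, d)
out = implicit_warp(q, k, v, extra_kv=torch.randn(4, 2, d))
print(heat.shape, out.shape)              # (2, 20, 64, 64) and (2, 4096, 128)
```

Note how this structure is consistent with the bandwidth answer above: only the keypoint locations and strengths need to be transmitted, since the attention matrix can be recomputed on the receiver's side from fixed networks.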
[ -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "JxHhZ-D2Vaw", "nips_2022_TrsAkAbC96", "XY8bArHRtY_", "z7U1Bh286uI", "LCCFpbT_bTa", "nips_2022_TrsAkAbC96", "nips_2022_TrsAkAbC96", "nips_2022_TrsAkAbC96" ]
nips_2022_cA8Zor8wFr5
AttCAT: Explaining Transformers via Attentive Class Activation Tokens
Transformers have improved the state-of-the-art in various natural language processing and computer vision tasks. However, the success of the Transformer model has not yet been duly explained. Current explanation techniques, which dissect either the self-attention mechanism or gradient-based attribution, do not necessarily provide a faithful explanation of the inner workings of Transformers due to the following reasons: first, attention weights alone, without considering the magnitudes of feature values, are not adequate to reveal the self-attention mechanism; second, whereas most Transformer explanation techniques utilize the self-attention module, the skip-connection module, contributing a significant portion of information flows in Transformers, has not yet been sufficiently exploited in explanation; third, the gradient-based attribution of individual features does not incorporate interactions among features in explaining the model's output. In order to tackle the above problems, we propose a novel Transformer explanation technique via attentive class activation tokens, aka, AttCAT, leveraging encoded features, their gradients, and their attention weights to generate a faithful and confident explanation for the Transformer's output. Extensive experiments are conducted to demonstrate the superior performance of AttCAT over the baseline methods, and its ability to generalize well across different Transformer architectures, evaluation metrics, datasets, and tasks. Our code is available at: https://github.com/qiangyao1988/AttCAT.
Accept
This is an interesting paper with a good contribution to the field. Most reviews are positive.
val
[ "9B-iOsuV51i", "KS2lquXMQRz", "ALGE96Bj9n-", "lWrdau-eGFo", "iwontCcncYY", "lL7aBJlwhLR", "wGaHuNdRTaP", "JH-U7HbHo2T", "1k1wgU9-3bT", "9Y6ZagbQTq", "DOTQceLuB2", "VHTq9w1v9Gv" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors addressed my concerns, so I would like to keep my rating.", " Thank you for taking time to read our response! We appreciate your suggestion on articulating the possible extension of our AttCAT to other domains, and we will carefully incorporate our response in the final version of this manuscript. \n\nHere we would like to provide more information on the point you raised regarding ablation studies of skip connections. \n\nSeveral existing works [1][2][3] including ours (as shown in Figure 1 and Eq. 2) demonstrate that the skip connections are the crucial component of Transformer. Generating explanations without the skip connections may not duly demonstrate the inner working of Transformers due to the significant information loss. It is worth mentioning that Lu et al. NeurIPS 2021 [1] have done the ablation study on skip connections (Table 1 in their paper) and concluded that the ablation only produces random ablated accuracies on various NLP tasks, including Subject-Verb Agreement (SVA), Reflexive Anaphora (RA), and Sentiment Analysis (SA). This ablation study further substantiates the pivotal role of skip connections for Transformers to function correctly. \n\nWe elaborate on the details of related works below: \n\nLu et al. NeurIPS 2021 [1] demonstrate that a significant portion of information flow in BERT goes through the skip connections instead of the attention heads. Furthermore, the authors discuss that the important information is simply “copied” to the next layer through the skip connections. Thus the skip connections are traversed much more often than attention heads. This is consistent with our analysis of disentangling information flows in Transformer in Section 4.1. Importantly, their ablation study on skip connections corroborates that skip connections relay important information directly and cannot be replaced or dropped out. \n\nBrunner et al. ICLR 2020 [2] first point out that there is a strong influence passing through skip connections, which retains the identity information of the input tokens. The identity information refers to the ability of a model to learn stable representations, which is a desirable property affecting the replicability and interpretability of the Transformer’s predictions. Thus, the ablation study on the skip connections may cause identity information loss, leading to poor interpretability.\n\nSimilarly, Dong et al. ICML 2021 [3] show the importance of skip connections by decomposing and analyzing forward-pass computations in self-attention modules.\n\nReferences:\n\n[1] Kaiji Lu, Zifan Wang, Piotr Mardziel, and Anupam Datta. Influence patterns for explaining information flow in bert. Advances in Neural Information Processing Systems, 34, 2021\n\n[2] Brunner, Gino, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Wattenhofer. \"On identifiability in transformers.\" ICLR (2020).\n\n[3] Dong, Yihe, Jean-Baptiste Cordonnier, and Andreas Loukas. \"Attention is not all you need: Pure attention loses rank doubly exponentially with depth.\" In International Conference on Machine Learning, pp. 2793-2803. PMLR, 2021.\n", " I appreciate the efforts made by the authors for the detailed response.\n\nRegarding the skip connections, I feel it is more about self-contained. 
It is important to check whether your conclusion shows the same trend as previous works have shown.\n\nRegarding the task in other domains, if you mainly tested the proposed method in NLP tasks, it is probably good to articulate this in the paper as you have explained it to me here.", " We appreciate the reviewer's comment on examining performance change against the corruption rate. We have performed a similar analysis in the original manuscript. As we stated in Section 5.3 (lines 245 and 246), "To avoid choosing an arbitrary **k**, we remove 0, 10, 20, · · · , 100% of the tokens in order of decreasing saliency, thus arriving at…". We report the average AOPC and LOdds scores over varying values of **k**. We have added some results on the Amazon dataset in the following table (columns give the corruption rate **k** in %). Our AttCAT achieves the highest AOPC scores with a small corruption rate **k** (i.e., 10, 20, 30), further demonstrating that AttCAT has detected the most important words for model predictions.\n\n| Method | 10% | 20% | 30% | 40% | 50% | 60% | 70% | 80% | 90% |\n|------------|-----------|-----------|-----------|-------|-------|-------|-------|-------|-------|\n| RawAtt | 0.041 | 0.140 | 0.209 | 0.291 | 0.392 | 0.395 | 0.473 | 0.485 | 0.442 |\n| Rollout | 0.039 | 0.080 | 0.117 | 0.114 | 0.157 | 0.291 | 0.321 | 0.361 | 0.449 |\n| Grads | 0.055 | 0.101 | 0.147 | 0.186 | 0.227 | 0.236 | 0.357 | 0.366 | 0.422 |\n| AttGrads | 0.069 | 0.126 | 0.187 | 0.196 | 0.271 | 0.336 | 0.379 | 0.389 | 0.419 |\n| PartialLRP | 0.050 | 0.180 | 0.117 | 0.114 | 0.157 | 0.291 | 0.321 | 0.361 | 0.449 |\n| TransAtt | 0.084 | 0.145 | 0.222 | 0.371 | 0.371 | 0.467 | 0.402 | 0.464 | 0.423 |\n| CAT | 0.121 | 0.175 | 0.211 | 0.316 | 0.324 | 0.355 | 0.411 | 0.408 | 0.436 |\n| AttCAT | **0.158** | **0.284** | **0.392** | 0.402 | 0.404 | 0.397 | 0.466 | 0.418 | 0.442 |\n\nWe also agree with the reviewer that it is useful to see how the two metrics change when we remove the **k**% lowest-scored words. We have added the results in the table below. Since we are removing the **k**% lowest-scored words, which are the less informative tokens, lower AOPC and higher LOdds scores are better. Our AttCAT achieves the best performance on the Amazon and Yelp datasets, as shown in the table. \n\n| Method | Amazon | Amazon | Yelp | Yelp |\n|------------|--------|--------|-------|--------|\n| | AOPC | LOdds | AOPC | LOdds |\n| RawAtt | 0.118 | -0.262 | 0.142 | -0.697 |\n| Rollout | 0.157 | -0.358 | 0.152 | -0.613 |\n| Grads | 0.119 | -0.214 | 0.126 | -0.624 |\n| AttGrads | 0.123 | -0.206 | 0.118 | -0.538 |\n| PartialLRP | 0.151 | -0.290 | 0.146 | -0.711 |\n| TransAtt | 0.115 | -0.192 | 0.111 | -0.359 |\n| CAT | 0.117 | -0.201 | 0.124 | -0.526 |\n| AttCAT | **0.108** | **-0.039** | **0.098** | **0.025** |\n\nWe will add the other results in our final version.\n\n**Q1 “Instead of simply averaging the AttCAT over multiple heads and multiple layers, are there better methods for doing the aggregation of AttCAT here?”**\n\nThank you for giving this thoughtful suggestion. There is an array of works discussing how we can deal with the multiple heads and multiple layers. They either prune [1] (or ablate [2]) several heads or probe a single head or layer [3]. Since we are explaining the performance of pre-trained Transformers, it might not be a good idea to prune or ablate attention heads or layers due to the information loss. In addition, probing a single attention head or layer may be insufficient to explain the inner working mechanism of Transformers. 
Existing works, e.g., [4][5], typically average over the multiple heads and/or apply rollout over the multiple layers. Yet the rollout also triggers information-loss issues in some cases, as we demonstrated in Figure 4, which motivated us to utilize a summation operation over the multiple layers. Although the results are promising, we will keep investigating better methods for aggregating over multiple heads in future work.\n\n**Q2 Datasets Statistics**\n\n| Datasets | #Test Samples | # Classes |\n|----------|---------------|-----------|\n| SST2 | 1,821 | 2 |\n| QQP | 2,000 | 2 |\n| MNLI | 9,815 | 2 |\n| Amazon | 2,000 | 2 |\n| Yelp | 2,000 | 2 |\n| IMDB | 2,000 | 2 |\n\nReferences:\n\n[1] Voita, Elena, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. "Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned." arXiv preprint arXiv:1905.09418 (2019).\n\n[2] Michel, Paul, Omer Levy, and Graham Neubig. "Are sixteen heads really better than one?" Advances in Neural Information Processing Systems 32 (2019).\n\n[3] Clark, Kevin, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. "What does BERT look at? An analysis of BERT's attention." arXiv preprint arXiv:1906.04341 (2019).\n\n[4] Abnar, Samira, and Willem Zuidema. "Quantifying attention flow in transformers." arXiv preprint arXiv:2005.00928 (2020).\n\n[5] Chefer, Hila, Shir Gur, and Lior Wolf. "Transformer interpretability beyond attention visualization." CVPR (2021).", " **"The paper currently fused the introduction and related works into one section… the reviewer did not clearly understand the advantages of the proposed method and the key point leading to the good performance. "**\n\nThank you for your comment. The reason why we fuse these two sections is that we want to seamlessly survey the literature on explaining Transformers and describe the current research gaps, collectively illustrated in Figure 1. We also highlight the formulation differences between the baseline methods and our proposed method in Figure 2, as pointed out by the reviewer. In Figure 2, we clearly observe that our methods AttCAT and CAT (ablation version) leverage the features h and their gradients, demonstrating the point that our method exploits the magnitudes of the features directly. Additionally, since the features h are directly aggregated from two parts, as shown in Figure 1 and Eq. 2, our method utilizes the skip connection directly to generate the explanations.\n\n**Ablation studies on (1) the magnitudes of the features and (2) the skip connections**\n\nThanks for the valuable suggestion. In the original manuscript, we have conducted one ablation study, which compares our AttCAT and CAT (ablation version) to see whether attention weights help generate better explanations. In addition, we may consider other baseline methods as ablation studies on the magnitude of features (the first ablation study suggested by the reviewer). Compared to our methods, RawAtt and Rollout only exploit the attention weights without features and their gradients. Similarly, Grads and AttGrads only exploit the attention weights and their gradients without considering the features and their gradients. PartialLRP and TransAtt only exploit the attention weights' gradients and layer-wise relevance propagation. 
For another ablation study suggested by the reviewer, on studying the effect of the skip connections, it seems unnecessary to ablate the skip connections separately, since we do not want to explain a Transformer with a major information loss; this is because the skip-connection component is a core part of feature aggregation, as shown in Eq. 2. Moreover, as mentioned in Lu et al. [1], a significant portion of information flow in BERT goes through the skip connections instead of the attention heads (i.e., three times more often than attention on average). \n\n**Tasks in other domains**\n\nWe thank the reviewer for pointing out the application of our method in computer vision tasks. In this paper, we mainly develop our method to explain Transformers (i.e., the BERT base model and variants such as DistillBERT and RoBERTa) on various NLP tasks, such as sentiment analysis (SST2, IMDB, Yelp, and Amazon), natural language inference (MNLI), paraphrase detection (QQP), and question answering (SQuADv1 and SQuADv2). We also present some details of these tasks in Section B of the Appendix. Since there are various vision Transformer architectures, such as ViT [2] and Swin Transformer [3], which differ substantially from the Transformers used in NLP tasks, it is valuable to explain these architectures, and extending our method to them is left as future work.\n\n**References:**\n\n[1] Kaiji Lu, Zifan Wang, Piotr Mardziel, and Anupam Datta. Influence patterns for explaining information flow in BERT. Advances in Neural Information Processing Systems, 34, 2021.\n\n[2] Dosovitskiy, Alexey, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani et al. "An image is worth 16x16 words: Transformers for image recognition at scale." arXiv preprint arXiv:2010.11929 (2020).\n\n[3] Liu, Ze, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. "Swin transformer: Hierarchical vision transformer using shifted windows." In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10012-10022. 2021.\n", " We thank Reviewer 1Khh for summarizing the strengths and novelty of our work from both the qualitative and quantitative sides, as well as the structure of our presentation and organization. In particular, we appreciate your insight on the alignment of our work with previous ones and your highlighting of our original contributions. As mentioned in your comment, we quantitatively show our sum operation is better than the rollout operation in Figure 4 of Section 6.2. ", " **"The proposed method seems to have limited novel differences against the former method, CAT."**\n\nThanks for bringing up this important question, but we want to reemphasize that CAT is also a new method that we proposed in this work, which can be considered as an ablation version of AttCAT without incorporating attention weights. In Figure 2 of the original manuscript, the purple box clearly shows the differences between CAT and AttCAT in the formulation.\n\n**"The work needs more qualitative and quantitative evaluation methods to prove why AttCAT is better than former methods in helping people understand Transformers instead of just showing performance numbers."**\n\nThanks for your comment. 
As we presented in Section 1 (lines 78-88) and Section 4 (lines 191-197), our method demonstrates better performance than the baseline methods since we leverage not only the magnitude of the features and the skip connections but also the attention weights to interpret the inner working mechanism of Transformers. The effectiveness of our method was shown via both qualitative (i.e., Tables 1&2 and Figures 3&4) and quantitative (i.e., Figure 5) evaluations. More evaluations are presented in our Appendix, including 1) precision with varying k values and 2) visualization of impact scores over various methods and tasks.\n\n**Questions 1&2 "In Table 1, when evaluating on QQP, TransAtt seems to have a better evaluation performance than the proposed method, AttCAT. Is there a reason why this happens" and "Similar pattern can be observed in Table 2, AttGrads also has a better performance than the proposed method, AttCAT. Does this mean the evaluation performance heavily depends on the dataset and may not be consistent? Therefore, is the improvement of AttCAT also conditional?"**\n\nThank you for your comment. We understand where you are coming from, but we would like to emphasize that our AttCAT method outperforms the baseline methods on the vast majority of different datasets and tasks. We note that the questions in the QQP dataset are typically very short, with only a few words. In other words, it is easy to capture the most important words in the QQP task within short sentences. As a result, all the compared methods achieve a good performance. On the contrary, other datasets have longer sentences with more complex structures, which demonstrates that our method is superior to the baseline methods in more complex datasets and tasks, as shown in Tables 1 and 2.\n\n**Questions 3 "In the qualitative comparison, the author does not compare AttCAT against CAT. I think this is the most important comparison, especially when the difference between AttCAT and CAT is very limited."**\n\nAs we reemphasized before, CAT, as one of this work's contributions, is an ablation version of AttCAT without incorporating attention weights. We observe that its performance drops compared with AttCAT in the quantitative evaluations, so we only select AttCAT as our final version to qualitatively compare with other baseline methods.", " Thank you all for the time you took to review our paper. We hope that our responses have fully addressed your concerns and remain committed to clarifying any further questions that may arise during the discussion period.", " The paper proposes a Transformer explanation technique via attentive class activation tokens, aka, AttCAT, leveraging encoded features, their gradients, and their attention weights to generate a faithful and confident explanation for the Transformer's output. The work clearly defines the problem and presents a detailed walk-through, with formulas, of the proposed method.\n\nThe work also presents experiments with obvious gains to demonstrate the effectiveness of the proposed method.\n\nThe proposed method seems to have limited novel differences against the former method, CAT.\nThe work needs more qualitative and quantitative evaluation methods to prove why AttCAT is better than former methods in helping people understand Transformers instead of just showing performance numbers.\n\n In Table 1, when evaluating on QQP, TransAtt seems to have a better evaluation performance than the proposed method, AttCAT. 
Is there a reason why this happens?\n\nA similar pattern can be observed in Table 2: AttGrads also has a better performance than the proposed method, AttCAT. Does this mean the evaluation performance heavily depends on the dataset and may not be consistent? Therefore, is the improvement of AttCAT also conditional?\n\nIn the qualitative comparison, the author does not compare AttCAT against CAT. I think this is the most important comparison, especially when the difference between AttCAT and CAT is very limited.\n The work does not talk about its limitations.\n\nEspecially with reference to the above, it would be important to know in which datasets AttCAT is better than former methods and in which datasets, similar to QQP, AttCAT may not perform better.", " The paper proposes a new method, termed AttCAT, for evaluating the importance of each input token in Transformers. The proposed method considers two perspectives for token importance, including the attention perspective and the gradient perspective. The authors introduce Class Activation Tokens, which are inspired by GradCAM [26]. The Class Activation Tokens are then combined with the Transformer attention to output AttCAT. The experimental results show that the resulting metric for token importance is superior to some other methods. ## Strengths\n- The method is well motivated and clearly stated. It makes sense to combine the attention scores and the gradient weights to obtain a new metric for token importance in Transformers.\n- The experimental results clearly demonstrate the effectiveness of the proposed metric.\n\n## Weaknesses\n- The experiments could be extended. It would be helpful to see how the evaluation metrics, namely AOPC and LOdds, change against the corruption rate $k$ (removing the $k$% top-scored words). Also, it is interesting to see how the two metrics change when we remove the $k$% lowest-scored words. This is different from removing the $k$% top-scored words because the inputs to the model are different in the two cases. When removing the $k$% lowest-scored words, we care about the order of the less informative tokens, while in removing the $k$% top-scored words, we care about the order of the most informative tokens.\n - Instead of simply averaging the AttCAT over multiple heads and multiple layers, are there better methods for doing the aggregation of AttCAT here?\n- Could the author provide the statistics of the datasets, e.g., number of classes and dataset size? The authors mention that they would extend the AttCAT method to explain generative and vision Transformer architectures as future work, but more discussion of the limitations of this work would be useful.", " The paper proposes "Attentive Class Activation Tokens" (AttCAT), a post-hoc method for the explanation of Transformer models addressing NLP tasks.\n\nDifferent from existing related methods, which only focus on specific components of the models being explained (leading to reduced faithfulness) and/or heuristics, the proposed method stresses the use of a) the features encoded by the model, b) their gradients, and c) their associated attention weights as a complete package to address the weaknesses in existing explanation methods. \nThis enables not only the explanation of the parts of the input (tokens) that have a high impact on the prediction made by the model, but also whether this impact positively or negatively contributed to the prediction (directionality). = Strengths\n\n+ The manuscript had a very good structure and organization. 
This led to clear content with a good flow. Overall I enjoyed reading this paper. I applaud the authors for the effort put into the presentation of this manuscript.\n\n+ The proposed method is relatively simple. The inclusion of each of the components (the features encoded by the model, their gradients, and their associated attention weights) is well motivated.\n\n+ On the qualitative side, to the best of my knowledge, the proposed method is novel and complementary to what is out there. On the quantitative side, it obtained state-of-the-art results w.r.t. existing methods.\n\n+ Empirical validation of the method was conducted considering a rich set of well-known components, including several transformer architectures, several NLP-related datasets, and several model explanation baselines from the literature.\n\n+ Observations made via the proposed method align with observations from previous efforts, e.g. [11] in Sec. 6.2. This already hints at the added value of the proposed method. N.A. N.A.", " This paper proposes two methods to understand how transformers work: CAT and AttCAT. They are motivated by the need to take care of the magnitudes of the features, the gradients, and the skip connection to examine (1) which tokens mostly influence the model's output and (2) whether a token makes a positive or negative contribution to the output. Both CAT and AttCAT are developed based on GradCAM. The paper also studies a lot of previous approaches and mainly compares weight-based methods, gradient-based methods, and methods based on layer-wise relevance propagation. Results on diverse common benchmarks, including SST2, QQP, MNLI, Amazon, Yelp, and IMDB, show the strength of the proposed method. Some qualitative results are included to aid understanding. The reviewer believes the studied direction is quite important for related communities, where transformers have gradually become the most common technique used in various tasks. The reviewer also agrees with the authors that we should consider the magnitude of the features and the skip connection, which play big roles in transformers. \n\nThey carefully studied a lot of literature and mainly compared with three categories: (1) attention-weights-related works, like RawAtt and Rollout; (2) gradient-related works, like Grads and AttGrads; (3) layer-wise relevance propagation methods, like PartialLRP and TransAtt. Figure 2 clearly tells the difference between the proposed method and all the previous approaches. \n\nExtensive quantitative experiments are conducted over various datasets including SST2, QQP, MNLI, Amazon, Yelp, IMDB, and SQuAD v1 and v2. Results show the proposed method generally achieves better performance compared with the previous approaches. The reviewer especially likes the red notation in the visualization of the proposed method, which indicates the negative effect of the input texts.\n\n\nThe paper currently fuses the introduction and related works into one section, which makes the introduction section pretty lengthy and a bit hard to read. Although the authors described the differences between the proposed method and previous methods in many places (intro, method, exps), the reviewer did not clearly understand the advantages of the proposed method and the key point leading to the good performance.\n\nAlso, the reviewer did not see ablation studies related to the two major claims that we should exploit (1) the magnitudes of the features and (2) the skip connections. 
There is a notable omission here: the authors could exclude the skip connections when aggregating the information, to check whether considering skip connections really helps the understanding.\n\nBesides, the abstract suggests the proposed method addresses various different tasks, while it mainly focuses on NLP tasks and currently has no discussion of tasks in other domains. The reviewer is mainly concerned with the missing ablation studies and will adjust scores if this gets addressed properly. The reviewer currently did not see sufficient ablation studies to support the claims. Also, the method currently does not provide evidence to support understanding of how transformers work in vision tasks.
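The rebuttals in this thread describe AttCAT's aggregation only verbally (gradient-feature products weighted by attention, averaged over heads and summed — rather than rolled out — over layers). Below is a schematic sketch of that aggregation. It is illustrative only: the exact reduction of the attention tensor to a per-token factor is an assumption here, and all names are hypothetical rather than taken from the authors' released code.

```python
import torch

def attcat_style_scores(hidden_states, grads, attentions):
    # hidden_states / grads: lists of L tensors of shape (B, T, d)
    # attentions: list of L tensors of shape (B, H, T, T)
    total = torch.zeros(hidden_states[0].shape[:2])
    for h, g, a in zip(hidden_states, grads, attentions):
        cat = (g * h).sum(dim=-1)          # gradient-feature product per token (CAT-style term)
        alpha = a.mean(dim=1).sum(dim=1)   # average over heads, pool over query positions
        total = total + alpha * cat        # sum (not rollout) across layers
    return total                           # (B, T) signed per-token impact scores

B, T, d, H, L = 1, 8, 16, 4, 2
hs = [torch.randn(B, T, d) for _ in range(L)]
gs = [torch.randn(B, T, d) for _ in range(L)]
atts = [torch.softmax(torch.randn(B, H, T, T), dim=-1) for _ in range(L)]
print(attcat_style_scores(hs, gs, atts).shape)   # torch.Size([1, 8])
```

The sign of the resulting score is what carries the directionality (positive or negative contribution) that the reviews above highlight.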
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "lWrdau-eGFo", "ALGE96Bj9n-", "iwontCcncYY", "9Y6ZagbQTq", "VHTq9w1v9Gv", "DOTQceLuB2", "1k1wgU9-3bT", "nips_2022_cA8Zor8wFr5", "nips_2022_cA8Zor8wFr5", "nips_2022_cA8Zor8wFr5", "nips_2022_cA8Zor8wFr5", "nips_2022_cA8Zor8wFr5" ]
nips_2022_O5arhQvBdH
Trading off Utility, Informativeness, and Complexity in Emergent Communication
Emergent communication (EC) research often focuses on optimizing task-specific utility as a driver for communication. However, there is increasing evidence that human languages are shaped by task-general communicative constraints and evolve under pressure to optimize the Information Bottleneck (IB) tradeoff between the informativeness and complexity of the lexicon. Here, we integrate these two approaches by trading off utility, informativeness, and complexity in EC. To this end, we propose Vector-Quantized Variational Information Bottleneck (VQ-VIB), a method for training neural agents to encode inputs into discrete signals embedded in a continuous space. We evaluate our approach in multi-agent reinforcement learning settings and in color reference games and show that: (1) VQ-VIB agents can continuously adapt to changing communicative needs and, in the color domain, align with human languages; (2) the emergent VQ-VIB embedding spaces are semantically meaningful and perceptually grounded; and (3) encouraging informativeness leads to faster convergence rates and improved utility, both in VQ-VIB and in prior neural architectures for symbolic EC, with VQ-VIB achieving higher utility for any given complexity. This work offers a new framework for EC that is grounded in information-theoretic principles that are believed to characterize human language evolution and that may facilitate human-agent interaction.
Accept
From the ratings alone this paper appears borderline, leaning towards acceptance; however, I want to highlight to the authors that in discussion with reviewers and my own reading of the paper there are aspects that shifted this even closer to the decision boundary. In the end, my own conflicted views of the work and the lack of further discussion from the more negative reviewer led to a recommendation of accept. I'll briefly review some of the strengths and weaknesses in the latest revision as I see them. + The work truly studies the effect of controlling multiple objectives of communication in the emergent communication setting and how these affect trade-offs in complexity, informativeness, and utility. This scientific approach to the experimental work is in my view a clear strength of the work. + There is a clear motivation for the choice of objectives based upon existing work in related fields. Although I have reservations about the realization, building on Zaslavsky et al.'s work within emergent communication is something I think will benefit the community. + The writing itself is clear and easy to read. Figures in the main text and appendix were informative and quite interesting. - Important details, especially around the math and experimental setup, are lacking. Some examples that bothered me were: ambiguity in the definition of terms in the objective (all three terms could be stated more precisely, but for U(X, Y) we are not told what Y is, and for I(X, C) it is not clear if this is coming from 3.2.2 or 3.2.4); lacking clarity around gradient flow (passing gradients back to the sender in this type of setup should be stated very clearly and upfront). - Connection between I(X, C) and complexity. As I was reading I interpreted this to be (as in 3.2.2) "the KL divergence of μ(x) and σ(x) from a unit Gaussian", and found this to be a very strange choice (not as an objective but as a measure of the complexity of the language); a worked form of this KL term is sketched after this review. This is, I believe, a different concern than what was raised by one of the reviewers. The issue I saw there was that the codebook need not be uniformly distributed in the continuous space, and therefore the implications on message complexity of a particular variance in the continuous latent space may be inconsistent. For some regions of latent space a unit variance could imply a single message with high probability, while other areas of the latent space may be more closely clustered, causing the same unit variance to be nearly uniform over several different messages. Moving on to 3.2.4, we see (perhaps) evidence that the authors encountered the repercussions of this choice: "simply penalizing this term in training was insufficient to train VQ-VIB agents to use fewer unique discrete embedding". The motivation is great, but I strongly suspect that there are better ways of realizing it and that this particular choice is not capturing what the author(s) intended. 
- Limited novelty of the method. The combination of VIB and VQ-VAE seemed like something that would have already been published, since regularizing the prior of the latent distribution is such a well-used (maybe even well-understood) method, but I looked and must grant that this does seem to be a novel combination. That said, it is not so novel that it would be able to stand on its own without the specific setting itself and the connection with models of complexity in human languages to support it. 
- (Minor) The author(s) made several additional changes to address reviewer concerns around prior work and putting their design choices into context, but there remain some issues in this space. I found the discussion of related work to be fairly shallow and at times even dismissive. It reads as though the work was undertaken entirely devoid of consideration for related work except for that which directly motivated the approach, and that the related-work discussion was then added defensively, without making real connections between this work and others. I mark this as minor because it is unfortunately somewhat common and because it is a more subjective evaluation. I hope this and the other reviews will help the author(s) to understand how the work may be experienced by readers and potentially make further refinements. Overall, despite limitations, I do believe this work will be of interest to researchers in emergent communication and potentially more broadly due to common underlying questions around trading off complexity and other primary learning objectives.
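For readers weighing the KL concern raised in the meta-review: assuming the complexity term is the standard VIB-style bound — the KL divergence of the encoder's diagonal Gaussian from a unit Gaussian, which is an assumption about the paper's exact formulation — it has the well-known closed form

$$
D_{\mathrm{KL}}\big(\mathcal{N}(\mu(x), \operatorname{diag}(\sigma^2(x))) \,\|\, \mathcal{N}(0, I)\big) = \frac{1}{2} \sum_{j=1}^{d} \big( \mu_j(x)^2 + \sigma_j(x)^2 - \log \sigma_j(x)^2 - 1 \big).
$$

This penalizes the continuous encoding irrespective of how the codebook prototypes are spaced, which is exactly why, as the meta-review observes, a unit variance can cover a single message in sparse regions of the latent space but several messages in densely clustered ones.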
test
[ "D1JUsiRfcf", "anbwjByLyWK", "9vNkbmedKO", "ym2vD8qkwiQ", "J-1L-6-Eean", "7Xja8kj6YKL", "oMZywra4oJQ", "nISzgvlHb7", "9KrVr5sTbbK", "5hzjLuaTtYu", "vczQ21aa36m", "_FPaSb-VyTn", "cNrEvu77PGx", "zsCYJ475JDc", "fQnyiTL0XmB", "VU4xEVWk_dC", "Qo2z7733w3H", "Vs7PdVXHiP", "Ji2xlhDOY8s", "Z8TxVmz4Ss9", "25-x7i1Yhg", "ukq0svW0Nd_", "bmTe0woNaQW", "j4BFA5u-o0b", "duHS_HFVeIR", "ZwQWhZPbha1", "5qsHUVQDqg", "MNspLE303uT", "jYqe6kZEV40" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the helpful follow-up comments. We’re happy that the reviewer found our work overall interesting, and we hope that our response below will address all of the reviewer’s concerns. Given that the main concerns appear related to situating our work within the emergent communication literature, which we were able to address in the paper with a few clarifications in the introduction and related work sections, we hope that the reviewer will consider to recommend acceptance. \n\nIn our last response we addressed the reviewer’s concern about using Chaabouni et al. 2019 as a motivation for our work. That paper is indeed not central to our motivation and we have adjusted the paper to clarify that. Below we address the other concerns raised in the reviewer’s response.\n\n\n> ... but it is not clear to me how the problem they study align with the problem studied in this paper. Perhaps they don't, and this is fine, but in this case the authors should make a stronger point to motivate why addressing the issue of complexity in color-naming systems has any bearing on emergent communication (or machine learning, or computational linguistics, but the current framing of the paper is very much focused on emergent communication).\n\nWe are not aware of any prior work that directly leverages the empirical evidence that human languages evolve under pressure to optimize the IB complexity-accuracy tradeoff, in order to inform EC in artificial agents. This empirical evidence is a central part of our motivation, as explained in the second paragraph of the introduction. The color naming domain is a key example in this context, which has been the focus of many studies in machine learning and computational linguistics (e.g., Steels & Belpaeme 2005; Monroe et al., 2017; Chaabouni et al. 2021; all cited in our paper). Therefore, we believe it is of interest to the NeurIPS community. To clarify, we are not “addressing the issue of complexity”, but rather proposing to view EC through the lens of a complexity-informativeness-utility tradeoff.\n\n> I would be a bit wary of using the term \"lexical semantics\" as it generally refers to the study of the meaning of individual words in natural languages)\n\nWe have not used that term in the paper, but rather “human-like lexical communication systems”. For further clarity, we have changed that to “human-like semantic systems”, because semantic systems (short for systems of semantic categories) is a standard term that has previously been used in the literature.\n\n> Regarding the distinction between $m$ and $\\hat{m}$\n\nIt is important for implementation as well as for consistency with prior work on the IB framework for semantic systems.\n\n> Regarding the term “semantic communication” \n\nwe have changed this term to “semantic systems”, which is a more standard term.\n", " Thank you so much for confirming your initial score arguing for acceptance!\n\nAs for your final minor comments, we have changed the phrasing in our latest revision to clarify this sentence. It now states that Chaabouni et al. “... found that training with REINFORCE typically resulted in lower complexity communication than when training with messages generated via a gumbel softmax layer…”\n", " We are glad that the reviewer found our clarifications and comparison to Lin et al.’s failure mode useful. We have expanded upon the complexity measure of onehot in Appendix A. Lastly, we already discuss the complexity of VQ-VIB vs. 
onehot in Appendix D, and our latest revision notes how it is an interesting direction for future research.", " 1. “2.1 this isn't a great argument when you are arbitrarily setting the vocabulary size. how did you decide what the \"correct\" limit for a vocabulary is?”\n\nThe idea in our approach is to set the vocabulary size large enough so that agents could in principle use it to cover the entire input space, and then let the complexity loss determine the effective vocab size (the distinction between the codebook size and the effective codebook size in IB has been studied before, e.g., in Zaslavsky & Tishby 2019). For finite domains, like the 9-points and color domains, having a unique token for each element in the domain is enough to be able to represent a maximally accurate communication system, thus vocab sizes of 9 or 330 respectively are in principle sufficient (indeed, our results show that larger vocabularies don’t change much). In the uniform domain we considered vocab sizes that were shown to work well in prior work (Tucker et al., 2021). As you can see in Figs 5 and 12, varying the complexity penalty leads to systems with varying effective vocab sizes, i.e., agents use fewer distinct tokens even though the hard limit on the number of tokens they can use has not changed. Intuitively, you can think of our complexity term as a soft constraint on the bandwidth agents allocate for communication, which can be gradually adapted to changing communicative needs by annealing $\\lambda_C$.\n\n“2.2, 2.3, 2.4. these results are very interesting and I believe they strengthen your case that VQ-VIB is an interesting algorithm whose inductive bias can outperform REINFORCE/GS for your particular metrics” \n\nthank you so much for noting this!\n\n\n2. Our method is based on deterministic annealing, a standard technique in non-convex optimization. We use the first 3,000 steps to train agents to converge to a high-complexity (low $\\lambda_C$) and high-utility communication. Then, by gradually increasing $\\lambda_C$, we aim to track the local optima along the Pareto frontier with varying complexities. We only report results after these 3,000 steps and in 1,000 intervals, which is why we argue that the communication systems we report are stable solutions (as in deterministic annealing). However, each system is a stable solution for a different value of $\\lambda_C$, which is why they have different complexities and categorize the space in different ways. Indeed, introducing deterministic annealing into cooperative EC is a novel idea that we present in this paper, motivated by the work of Zaslavsky et al. which linked annealing with language evolution.\n", " > I do find your work to be complimentary to those but would like to point out the novelty claims should acknowledge that previous works have also used losses similar to yours with similar underlying principles (though explicitly using IB is novel).\n\nWe agree that our work is complementary to prior work, and we aimed to acknowledge the relation between our losses and prior work in Section 2 (e.g., we’ve written that “Numerous works simplify communication… these methods all correspond to limiting complexity,” “Wang et al. explicitly limit complexity,” “for example, Lin et al. use an autoencoding loss… tightly related to notions of informativeness”, and “... emergent communication literature has begun to rediscover the importance of complexity and informativeness in communication...”). 
Following this reviewer’s comment, we have now adjusted Section 2 to acknowledge this point more clearly.\n\n> From my perspective, Chaabouni et al use fewer manually-tuned hyperparameters than your work (if I am not mistaken) and thus is less handcrafted. In contrast, your work uses explicit losses that directly reflect the metrics you care about and you manually tune those hyperparameters. For this reason, the WCS results are not as exciting to me.\n\nWe do not manually tune the $\\lambda$ hyperparameters, but rather gradually anneal them in order to explore the space of communication systems spanned by these (soft) constraints, much like annealing a temperature parameter. It has been argued that this annealing process may drive language evolution (Zaslavsky et al., 2018), allowing populations to gradually adjust their communication systems to changing communicative pressures. Thus, our motivation for varying these parameters is to simulate agent adaptation and language change, rather than to fit to the WCS dataset as in Chaabouni et al. 2021. Our comparison with the WCS data suggests that this annealing process, in addition to capturing how actual color naming systems evolve, may also guide the evolution of human-like communication systems in artificial agents. \n\nIn addition, we believe that our approach is more theoretically justified because, based on extensive existing research on human languages, humans are more likely to be optimizing for a complexity-informativeness tradeoff than for constraints like Chaabouni et al.’s discriminative need (intuitively, it is unlikely that speakers of Ifugao, a language with 1.5 bits of complexity, never need to discriminate between two similar colors).\n\n> If you set $\\lambda_I = 0$ and repeated the WCS experiments, that would be equivalent to Chaabouni et al and would indeed be a measure of REINFORCE vs VQ-VIB inductive biases. This isn't an experiment I would demand so late in the rebuttal period, but just a perspective for future reference.\n\nWe have actually done this experiment already and refer to results from running REINFORCE with $\\lambda_I = 0$ in Appendix D.2; we found that training with onehot communication was unstable and often collapsed to no meaningful communication (complexity = 0). We omitted explicit discussion of VQ-VIB with $\\lambda_I = 0$ for brevity besides a general note that “VQ-VIB agents typically converged to higher informativeness (and higher complexity) communication than onehot when complexity was not penalized.” Given the reviewer’s request for more specific data, we analyzed our saved data from trials with $\\lambda_I = 0$ and found that complexity for VQ-VIB converged between 1.4 to 1.9 bits over 5 random trials. This exactly demonstrates the differences in inductive biases between VQ-VIB and onehot.\n", " Thank you for the encouraging response! We’re very happy that our earlier responses helped clarify the framing and contributions of this work.\n\n> Indeed you could likely do the same two extra losses (complexity, informativeness) for a standard REINFORCE gradient estimator. Perhaps, there is a reason why VQ-VIB is the only method that can perfectly work with these two extra losses in ways that others can't? If so, it could be useful to spell out explicitly. 
But I think figure 9 demonstrates at least that informativeness loss is a general idea that can be applied to many algorithms (this is a good thing!)\n\nIn contrast to other architectures, VQ-VIB is more similar to VIB in that it explicitly learns an encoder which is formulated as a Gaussian distribution over a latent space. But in contrast to VIB, VQ-VIB also learns an underlying symbolic structure by simultaneously learning a discretization of the latent space into prototypes. In that sense, VQ-VIB enjoys both worlds: it has a symbolic structure, like Proto, and to some extent like onehot (although onehot doesn’t have prototype embeddings), but it is also better suited for learning an IB encoder. Thank you so much for pushing on this point. We agree that this clarification is important. We also agree that the fact that we’ve shown that other architectures may also benefit from our informativeness loss is noteworthy on its own, as a secondary contribution of our work. We have added these important insights to the paper in Section 6.\n", " thank you for clarifying your contribution in the context of previous work. I believe the reference to the failure-mode of Lin et al makes your contribution more logical in that context.\n\nclarifications\n- thank you, that wasn't clear to me in the original work\n- that makes sense, it may be worth mentioning in an appendix but is not necessary\n- I see, this is an interesting point that I believe was unexplored. I would add a mention somewhere that VQ-VIB learns a more complex protocol which is better when used with the complexity loss", " 1. thank you\n2.1 this isn't a great argument when you are arbitrarily setting the vocabulary size. how did you decide what the \"correct\" limit for a vocabulary is?\n2.2, 2.3, 2.4. these results are very interesting and I believe they strengthen your case that VQ-VIB is an interesting algorithm whose inductive bias can outperform REINFORCE/GS for your particular metrics\n\n3. this is something that is very confusing and I would appreciate a response if time allows. \nby \"converged\", I mean that results have reached some sort of local minima and they do not change from there one. In most cooperative EC work, the protocol is relatively stable after convergence. Since you are training and plotting after convergence, does that mean that your methods protocol changes after \"convergence\". If so, do you have an explanation why this is? \n\n4. that is fine\n\n5. thank you for the zoomed-in plots in the appendix, they are much clearer to understand. as well, I believe figure 6 is a good representation of the point you're trying to make", " Mordatch and Abbeel (2017) as well as many other works have used entropy regularization and complexity regularization (M+A penalize large vocabulary sizes). This is not directly related to IB but very directly related to the auxiliary losses you propose. Demonstrating informativeness leads to improved convergence rates has also been explored by AEComm. I do find your work to be complimentary to those but would like to point out the novelty claims should acknowledge that previous works have also used losses similar to yours with similar underlying principles (though explicitly using IB is novel).\n\nFrom my perspective, Chaabouni et al use fewer manually-tuned hyperparameters than your work (if I am not mistaken) and thus is less handcrafted. In contrast, your work uses explicit losses that directly reflect the metrics you care about and you manually tune those hyperparameters. 
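To make the distinction drawn above concrete, here is a minimal sketch of a VQ-VIB-style encoder under standard VIB/VQ-VAE assumptions: a Gaussian encoder over a continuous latent space whose samples are snapped to the nearest learned prototype with a straight-through gradient. This is an illustration, not the authors' implementation; all names are hypothetical, and it omits the usual VQ commitment/codebook losses as well as the utility and informativeness terms.

```python
import torch
import torch.nn as nn

class VQVIBEncoderSketch(nn.Module):
    """Gaussian encoder over a continuous latent space (VIB-style) whose
    samples are snapped to the nearest learned prototype (VQ-style)."""
    def __init__(self, in_dim, latent_dim, n_prototypes):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent_dim)
        self.log_var = nn.Linear(in_dim, latent_dim)
        self.codebook = nn.Embedding(n_prototypes, latent_dim)

    def forward(self, x):
        mu, log_var = self.mu(x), self.log_var(x)
        z = mu + torch.randn_like(mu) * (0.5 * log_var).exp()  # reparameterized sample
        dists = torch.cdist(z, self.codebook.weight)           # (B, n_prototypes)
        idx = dists.argmin(dim=-1)                             # discrete token index
        z_q = self.codebook(idx)                               # prototype embedding
        z_q = z + (z_q - z).detach()                           # straight-through gradient
        kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).sum(-1).mean()
        return z_q, idx, kl                                    # kl: complexity-style penalty

enc = VQVIBEncoderSketch(in_dim=10, latent_dim=4, n_prototypes=16)
tokens, ids, kl = enc(torch.randn(8, 10))
print(tokens.shape, ids.shape, float(kl))
```

The returned KL term is the kind of quantity a complexity weight could scale, while the discrete prototype structure is what gives the symbolic protocol discussed above.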
 " Thank you for clarifying your contribution in the context of previous work. I believe the reference to the failure-mode of Lin et al. makes your contribution more logical in that context.\n\nClarifications:\n- Thank you, that wasn't clear to me in the original work.\n- That makes sense; it may be worth mentioning in an appendix but is not necessary.\n- I see, this is an interesting point that I believe was unexplored. I would add a mention somewhere that VQ-VIB learns a more complex protocol, which is better when used with the complexity loss.", " 1. Thank you.\n2.1 This isn't a great argument when you are arbitrarily setting the vocabulary size. How did you decide what the \"correct\" limit for a vocabulary is?\n2.2, 2.3, 2.4. These results are very interesting and I believe they strengthen your case that VQ-VIB is an interesting algorithm whose inductive bias can outperform REINFORCE/GS for your particular metrics.\n\n3. This is something that is very confusing and I would appreciate a response if time allows. \nBy \"converged\", I mean that results have reached some sort of local minimum and they do not change from there on. In most cooperative EC work, the protocol is relatively stable after convergence. Since you are training and plotting after convergence, does that mean that your method's protocol changes after \"convergence\"? If so, do you have an explanation why this is? \n\n4. That is fine.\n\n5. Thank you for the zoomed-in plots in the appendix; they are much clearer to understand. As well, I believe figure 6 is a good representation of the point you're trying to make.", " Mordatch and Abbeel (2017), as well as many other works, have used entropy regularization and complexity regularization (M+A penalize large vocabulary sizes). This is not directly related to IB but very directly related to the auxiliary losses you propose. Demonstrating that informativeness leads to improved convergence rates has also been explored by AEComm. I do find your work to be complementary to those, but would like to point out that the novelty claims should acknowledge that previous works have also used losses similar to yours with similar underlying principles (though explicitly using IB is novel).\n\nFrom my perspective, Chaabouni et al. use fewer manually-tuned hyperparameters than your work (if I am not mistaken) and thus their work is less handcrafted. In contrast, your work uses explicit losses that directly reflect the metrics you care about and you manually tune those hyperparameters. For this reason, the WCS results are not as exciting to me. If you set $\lambda_I = 0$ and repeated the WCS experiments, that would be equivalent to Chaabouni et al. and would indeed be a measure of REINFORCE vs. VQ-VIB inductive biases. This isn't an experiment I would demand so late in the rebuttal period, but just a perspective for future reference. \n\nThank you for moving VQ-VIB After to the discussion; it makes the paper more streamlined and I believe it does a better job of focusing on your most impressive results.", " Thank you for clarifying the framing. I still find the two ideas you propose (the utility/complexity/informativeness losses and VQ-VIB) to be two mostly-separate approaches to the idea of incorporating ideas from IB into emergent communication. Indeed you could likely use the same two extra losses (complexity, informativeness) with a standard REINFORCE gradient estimator. Perhaps there is a reason why VQ-VIB is the only method that can perfectly work with these two extra losses in ways that others can't? If so, it could be useful to spell out explicitly. But I think figure 9 demonstrates at least that informativeness loss is a general idea that can be applied to many algorithms (this is a good thing!)\n\nThank you for the additional experiments (I will address them in later comments). I acknowledge that your work is not aiming to beat some sort of benchmark, but emergent communication is an inherently empirical field. You don't necessarily need to demonstrate better results, but something novel compared to previous approaches. Based on your response, I will take the idea of greater utility for a similar complexity to be that novel contribution (you could also argue it being a more human-like inductive bias, but then I would prefer to see more experiments related to that, though WCS is a good start).\n\n\n", " We thank the reviewer for their response; reading it has clarified some of our questions about their initial review, and our new revision better situates our contributions (Introduction lines 24-27, 39-40; Related Work lines 76-78, 92-96; Experiment Preliminaries lines 246-249).\n\nThe key change in our revision is highlighting a different Chaabouni et al. paper than the one noted by the reviewer, which perhaps led to some of our confusion in discussions. We now highlight [\"Communicating artificial neural networks develop efficient color-naming systems\"](https://www.pnas.org/doi/full/10.1073/pnas.2016569118) by Chaabouni et al. (PNAS 2021) instead of \"Anti-efficient encoding in emergent communication\" by Chaabouni et al. 2019.\n\nIn \"Communicating artificial neural networks...\" (henceforth [1]), Chaabouni et al. study the complexity and informativeness of emergent communication in a color reference game, as in our work. Thus, [1] is perhaps the best motivation and comparison point for our paper, and we have sought to compare our work more explicitly to theirs in our revision. In particular, we highlight that in our framework, we generated a greater diversity of communication complexity than [1], and we did so without having to change the training environment. Conversely, Chaabouni et al. [1] varied how distractor images were selected to induce different levels of complexity, while we feel that optimizing for complexity directly is a much more natural and principled method.\n\nMore generally, we note that studying the complexity and informativeness of emergent communication is well-motivated and already considered important in prior literature, as evidenced by [1] and subsequent citations.
We seek to fit within that area of research and thank the reviewer for helping us clarify this point.", " I thank the authors for running additional experiments and for performing and including the new protocol analysis.\n\nI understand some of the other reviewers' concerns, and I especially think the novelty might be limited for people outside the emergent communication community, but, nonetheless, I confirm the score that I assigned in my initial review.\n\n--- \nMinor:\nLine 726: Chaabouni et al. [35] had found that training with REINFORCE typically resulted in lower-complexity communication than with onehot, and that training was less stable. --> I understand that the definition of onehot makes sense within the context of gumbel-softmax experiments, but I found it confusing when comparing with REINFORCE. I would maybe say something along the lines of \"lower-complexity communication than with gumbel-softmax training with a fully connected layer generating message representations\"?", " ## References:\n\n- [1]: Chaabouni, Rahma, et al. \"Compositionality and Generalization In Emergent Languages.\" Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020.\n- [2]: Bouchacourt, Diane, and Marco Baroni. \"How agents see things: On visual representations in an emergent language game.\" Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. 2018.\n- [3]: Nowak, Martin A., Joshua B. Plotkin, and Vincent A. A. Jansen. \"The evolution of syntactic communication.\" Nature 404.6777 (2000): 495-498.", " I thank the authors for their response. Overall, let me restate that I think the paper touches on interesting issues, but some of my concerns remain with respect to its situation in the emergent communication literature.\n\n> Complexity\n\nI appreciate that the authors added a more extensive discussion of the concept of complexity in the related work. I do think that the emphasis on \"lexical communication systems\"$^1$ and the associated information-theoretic notion of complexity could be made clearer earlier on in the paper (abstract/introduction). As it stands, I think the opening is a bit too broad, and the connection to the current emergent communication literature is not very clear.\n\nAn example: in the introduction it is stated that \"[emergent communication] may lead to communication systems that are too complex for humans to understand [7]\" with a reference to Chaabouni et al. 2019, \"Anti-efficient encoding in emergent communication\". However, that paper does not really consider complexity as studied in this submission: they look at the distribution of message lengths rather than the size of the vocabulary. Furthermore, they specifically only study settings where the channel capacity (~ $\text{vocab. size}^\text{message length}$) is large enough for every input $x$ to be assigned a single message. This is problematic since this reference to Chaabouni et al. 2019 is used to motivate the rest of the paper.\n\nMore generally, I can think of a number of works studying idiosyncratic properties of emergent communication systems (e.g. non-compositionality [1], perceptual shortcuts [2]), but it is not clear to me how the problems they study align with the problem studied in this paper.
Perhaps they don't, and this is fine, but in this case the authors should make a stronger point to motivate why addressing the issue of complexity in color-naming systems has any bearing on emergent communication (or machine learning, or computational linguistics; the current framing of the paper is very much focused on emergent communication).\n\nOverall, I think more work is needed in the introduction to properly situate this paper's contribution in the context of current issues in the emergent communication literature.\n\n$^1$I would be a bit wary of using the term \"lexical semantics\", as it generally refers to the study of the meaning of individual words in natural languages.\n\n## Further comments\n\n> VQ-VIB After moved to appendix\n\nThanks, I think the paper is much clearer this way.\n\n> The reviewer suggests that it \"would have been interesting to also vary along this dimension\" (informativeness) or focus on informativeness vs. complexity. Indeed, our results already examine the tradeoffs between these dimensions by varying either $\lambda_I$ or $\lambda_C$ (Figs 4b, 5, 9, 10, 11a, 13, and 15a).\n\nI meant that it would have been interesting to study the informativeness-utility trade-off further. I am aware that the paper performs multiple experiments on informativeness vs. complexity: my point was that this could have been the core focus of the paper (i.e. the title could have been \"Trading off Informativeness and Complexity in Emergent Communication\"). This is a relatively minor point though.\n\n> $\hat m$ / $m$ vs $x$, $\hat x$\n\nThanks for clarifying, I had missed this in my first reading. That being said, is this distinction really useful in the context of the paper? $m$ and $\hat m$ are not used later on in the paper, and their introduction might be more confusing than anything else.\n\nIf the distinction is critical, it might be worth reusing the notation elsewhere in the paper where appropriate (perhaps in figure 1?). Otherwise, I think this notation can be removed.\n\n> Semantic communication\n\nTo clarify my point: I think \"semantic communication\" is a pleonasm. You are right that other areas of linguistics are concerned with other aspects of **language**, such as syntax, morphology, discourse, etc. But syntax, for example, is not integral to communication: the communication system studied in this paper is not syntactic. In this regard, it makes sense to refer to \"syntactic communication\" to denote systems of communication that exhibit syntax (see e.g. [3]). A similar reasoning would hold for morphology.\n\nHowever, I am hard pressed to find examples of communication systems that do not exhibit semantics, i.e. communication systems that do not convey meaning. As far as I know, \"semantic communication\" is not a standard term in linguistics (it is certainly not used by Zaslavsky and colleagues).\n\nHow about just using \"communication\"? Or perhaps, if the emphasis is to be on semantics, why not use \"semantic categorization (systems)\"?\n", " Based upon the reviewer's suggestions, we have included a new qualitative analysis of particle-world communication in the revised paper (page 20, Figure 12). Our new analysis shows how tokens referred to regions of the 2D space in the Uniform environment, and that lower-complexity communication created larger regions, or coarser discretizations. Such analysis also highlights deep parallels with the color reference game, in which agents discretized a continuous color space.
The visualizations we now include make this connection more obvious; we thank the reviewer for this insightful suggestion.", " We thank the reviewer for the clarification on qualitative analysis; we will try to develop such tools in the coming days.\n\nWe have updated the paper to reflect findings from our REINFORCE experiments, which have now concluded. Findings are included in Appendix D and are most visible in Figure 15. We largely reproduced findings from earlier in our paper. In particular, comparing VQ-VIB and onehot showed similar communicative efficiency, but VQ-VIB achieved greater utility. We also found that training with an informativeness loss overcame many of the issues identified by Chaabouni et al. That is, with $\lambda_I = 50$, agents consistently learned non-degenerate communication, whereas Chaabouni et al. used 180 random seeds to overcome training failures. Thus, we believe that these results provide still further evidence for the advantages of our framework and neural method.", " As noted in our earlier responses, we had been unable to finish the suggested REINFORCE experiments in time for the initial rebuttal, but our experiments have now concluded. Results from using REINFORCE in the color reference game are included in Appendix D and highlighted in Figure 15.\n\nThe same trends already present in earlier findings held true in these experiments. Onehot and VQ-VIB learned efficient communication, and VQ-VIB achieved greater utility, for the same complexity, than onehot. The reviewer correctly noted that Chaabouni et al. had found that REINFORCE typically learned lower-complexity communication than gumbel softmax. We overcame this challenge via our informativeness and complexity losses, which allowed us to directly control these measures. Thus, we believe that these findings demonstrate that our framework (for utility, complexity, and informativeness) and technique (VQ-VIB) generalize to even more settings than we had first presented.", " Thanks for your reply. Here are some comments:\n\n1 - I find that figure interesting, but not a deep qualitative analysis of the agents' protocol. If anything, I think it corroborates the complexity result already presented, but I don't think it properly addresses questions like the one I mentioned: \"are there specific messages that are systematically mapped to actions of the environment, e.g. go right, the target is in the top right corner, go there, etc.\"\n\n4 - Chaabouni et al. had experiments with REINFORCE; if you are already running experiments with it, then at least with the onehot setup it should be feasible to report observed performance. This would at least confirm the transfer of your new losses to a different optimization scheme.", " We thank all the reviewers for their helpful and thoughtful comments. We have uploaded a revised submission to address the reviewers' suggestions, and in our detailed responses to each reviewer we address all the concerns they raised. Below is a summary of the main changes in the revised submission.\n\n### Summary of revisions:\n\n1. Following reviewer kucm's comments, we (i) included additional experiments to extend the range of lexicon sizes (added 200 and 500 tokens in the uniform env, 1024 tokens in the color reference game); (ii) added Fig 6 to highlight the advantage of VQ-VIB over onehot; and (iii) added a zoomed-in version of the color reference game results (Appendix D, Fig 14). We also conducted additional experiments with REINFORCE, as suggested by the reviewer (Appendix D, Fig 15).
**All new experiment results requested by the reviewer corroborate trends identified in the initial submission.**\n2. Both reviewers kucm and 3YrT note that VQ-VIB After is not as interesting as VQ-VIB Before, and reviewer kucm also notes that many results are discussed too briefly in the main text. We have therefore moved VQ-VIB After to an appendix, and instead focused more thoroughly on the key results in the main paper.\n3. Reviewer 3YrT has raised concerns about our definition of complexity and pointed to the debate in the literature on the notion of linguistic complexity. In our detailed response to the reviewer, and in the paper, we explain the theoretical and empirical justification for our particular choice of complexity, especially in the context of **lexical semantics**. In addition, following the reviewer's comments, we have added a note to Section 2 about this debate in the literature, including the relevant references suggested by the reviewer.\n4. Following reviewer 1FYj, we extended our description of the Proto architecture in Section 4.2.\n", " ### 3. Informativeness vs. utility:\n\nWhile informativeness and utility are indeed related, they differ in important ways. Generally, utility focuses on a specific task, whereas informativeness is task-agnostic. For example, Chaabouni et al., in their reference game experiments, only trained agents with a utility loss, so they could only affect communication by varying the environment. Conversely, our approach allows leaving the training environment (and therefore utility pressures) unchanged while varying informativeness and complexity to get a wider range of communication systems. Our results show the benefits of this approach and, in particular, the importance of considering informativeness in addition to utility alone. Specifically, we found that training with informativeness, in addition to utility, can yield higher overall utility and faster convergence (Figs 4a, 9, and 10), and that controlling complexity was necessary for learning communication schemes that reflected the varied complexities in the world's languages (Fig 5, Table 1). In the Common Ground experiments, we explicitly tested agents in settings where utility and informativeness were not aligned. Informativeness would be maximized by the speaker communicating about both colors; utility could be maximized by the speaker communicating about only the target. Nevertheless, training with informativeness improved agent utility yet again in this setting. We have emphasized these differences in our revised paper, in Sections 5.2.1 and 5.2.2.\n\nThe reviewer suggests that it \"would have been interesting to also vary along this dimension\" (informativeness) or focus on informativeness vs. complexity. Indeed, our results already examine the tradeoffs between these dimensions by varying either $\lambda_I$ or $\lambda_C$ (Figs 4b, 5, 9, 10, 11a, 13, and 15a).\n\n### **Specific questions**\n\ni. What is \"non-semantic communication\"? Research in other areas of linguistics, like syntax and phonology, considers aspects of communication that are related to form rather than meaning. We focus on semantic communication in this work, i.e., the way signals partition the environment into semantic categories.\n\nii. Reviewer kucm also questioned the point of VQ-VIB After. We appreciate this feedback and have moved this formulation from the main paper to an appendix in the revised submission.
We initially developed VQ-VIB After as a baseline for comparison, and we agree that it is less interesting than VQ-VIB Before.\n\niii. \"What are $m$ and $\hat{m}$?\" These variables correspond to probability distributions (belief states) over the feature space, as indicated on line 110. This notation borrows from Zaslavsky et al.'s variables for \"meanings.\" Intuitively, we think of there being some true state, $x$, and an agent's belief over that state, $m$, which may be shaped by perceptual noise.\n\n", " We begin by addressing the points that are considered by the reviewer as weaknesses, and then we respond to the reviewer's specific questions.\n\n### 1. Notions of complexity:\n\nFirst, we would like to clarify the motivation for our complexity term. From a theoretical perspective, the use of mutual information for measuring complexity is grounded in information theory (e.g., [Shannon 1959](https://www.gwern.net/docs/cs/algorithm/1959-shannon.pdf); [Tishby et al., 1999](https://www.cs.huji.ac.il/labs/learning/Papers/allerton.pdf); [Gilad-Bachrach et al. 2003](https://link.springer.com/chapter/10.1007/978-3-540-45167-9_43)) and has been widely studied in machine learning (e.g. [Bialek et al., 2001](https://www.princeton.edu/~wbialek/our_papers/bnt_01a.pdf)), neuroscience (e.g. [Tkacik & Bialek, 2016](https://www.annualreviews.org/doi/abs/10.1146/annurev-conmatphys-031214-014803)), and cognitive science (e.g. [Sims 2018](https://www.science.org/doi/abs/10.1126/science.aaq1118)). In the context of linguistics, this definition has recently been shown to capture the **semantic complexity of the lexicon**, i.e., the complexity of how words partition the environment into categories. This idea was introduced by Zaslavsky et al. (PNAS, 2018), and has been gaining empirical support across hundreds of languages and multiple domains ([Zaslavsky et al. 2018](https://www.pnas.org/doi/full/10.1073/pnas.1800521115), [2019](https://arxiv.org/pdf/1905.04562.pdf), [2021](https://escholarship.org/uc/item/2sj4t8m3), [2022](https://academic.oup.com/jole/advance-article-abstract/doi/10.1093/jole/lzac001/6566271?redirectedFrom=fulltext&login=false); [Mollica et al. 2021](https://www.pnas.org/doi/full/10.1073/pnas.2025993118)).\n\nWe agree that there are various notions of complexity in linguistics and that it is not always clear how to define complexity. While this is an important question, it is not the one this paper aims to tackle. The goal of this paper is to test whether the framework of Zaslavsky et al. for lexical semantics, which predicts a specific complexity measure, can inform emergent communication (EC) in AI. Having said that, we have extended Section 2 to acknowledge the debate about complexity and included the references suggested by the reviewer.\n\n**Intuition for our complexity measure**:\n\n1.1. Complexity and ambiguity: indeed, unambiguous languages are maximally complex. However, in order to achieve this in our setup, agents must assign a unique signal to every possible referent, which may require a huge lexicon. Thus, if agents have limited resources (as humans do), they must compress inputs into signals in order to reduce the complexity of their lexicon. Our complexity term measures the degree of compression (Tishby et al., 1999). In our setting, only one signal can be transmitted in each round and all signals have the same length; thus, other notions of complexity that are related to compositionality or word-forms are not applicable in this case.
Note that we focus primarily on **lexical semantics** rather than on other aspects of language such as syntax or morphology.\n\n1.2. Complexity vs. entropy: while our complexity term is related to entropy and lexicon size, it is also fundamentally different. For example, Zaslavsky et al. (2018) showed that using lexicon size as a complexity measure yields qualitatively wrong predictions for human color naming, in contrast to the use of mutual information, which currently gives the SOTA model in this context. Entropy only considers the marginal distribution of signals, which ignores the fact that languages often use non-deterministic signaling patterns.\n\n### 2. Unclear takeaways:\n\nAs summarized in Section 6, our key contributions are: (a) finding that combining informativeness and complexity in training EC led to faster convergence and human-like communication systems; and (b) our new neural architecture, VQ-VIB, which outperformed existing discrete communication methods. \n\nWe are certainly not the first to advocate for applying notions from cognitive science to EC, but we are (to the best of our knowledge) the first to integrate utility, informativeness, and complexity during training. This approach is motivated by the recent body of literature on the Information Bottleneck (IB) framework for semantic systems (Zaslavsky et al. 2018), which has not previously been incorporated directly into training EC, as we do here.\n", " ### **Citation suggestions**\nWe already cited Lin et al.'s autoencoder work, and we have clarified in the revision the distinction between our findings and approaches (Sections 2 and 5.1). We have also added the reference to Maddison et al. 2016.\n\n### **Clarifications**\n\n- \"Is the entropy regularization done in place of or on top of that loss?\" -- We assume the reviewer is referring to the complexity equivalence classes loss from Section 3.2.4. This loss is \"on top of\" the complexity loss: the normal complexity loss is the main training signal, and the entropy regularization is only intended to break ties between complexity equivalence classes.\n\n- \"Is the entropy regularization in one-hot equivalent to regular entropy regularization done in previous RL works (e.g. Mnih et al. 2016, Lazaridou et al. 2018) or is it the opposite? Are you penalizing entropy or encouraging entropy?\" -- In contrast to prior work, we are considering the mutual information between inputs and signals as a measure of complexity, which agents aim to minimize. Since mutual information is the difference between the unconditional and conditional entropy, minimizing it amounts to decreasing the unconditional entropy while increasing the conditional one (see the worked decomposition below). Several earlier works, including those the reviewer cited, encourage the unconditional entropy of signals to prevent premature convergence. In our framework, this appears not to be necessary, as the informativeness pressure already encourages meaningful communication.\n\n- \"VQ-VIB learns more complex communication\" -- This is what we intended to state. Even with no informativeness loss, VQ-VIB agents learned more complex and informative communication than onehot agents, which enabled us to create a greater range of communication systems when annealing the complexity loss.\n",
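*Editorial note.* The direction of the two entropy terms in the answer above follows from the standard decomposition below (textbook material, restated here for clarity rather than quoted from the paper):

```latex
% Mutual-information complexity splits into two entropy terms:
\begin{equation}
  I(X; C) \;=\; H(C) - H(C \mid X).
\end{equation}
% Driving I(X;C) down therefore trades off a smaller unconditional entropy
% H(C) (a compact signal distribution) against a larger conditional entropy
% H(C|X) (softer, non-deterministic naming). This is the opposite direction
% of the usual RL-style bonus, which simply rewards large H(C).
```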
 " ### **Specific Questions**\n\n1. VQ-VIB After: Thanks to the reviewer's feedback, we have moved VQ-VIB After to an appendix in the revised version, and now focus on VQ-VIB Before in the main paper.\n\n2. Particle world results (Section 5.1): We have shown that VQ-VIB outperforms onehot in the sense that it achieves >= utility for the same level of complexity (Figs 4b, 9). Following the reviewer's request, we conducted additional experiments in the uniform environment with 200 and 500 tokens, for VQ-VIB Before and onehot, and noted those results in Section 5.1. Our results are unchanged: neither method improved significantly further when using more than 100 tokens. That is, VQ-VIB continued to outperform onehot. Furthermore, the computational cost of onehot communication increased with dimensionality (e.g., a single run of onehot using 500 tokens for 10,000 episodes took 10 hours), illustrating a further advantage of VQ-VIB. Because of the duration of such trials, we were unable to perform similar hyperparameter sweeps in the 9-points environment in time for the rebuttal, but we will do so for the camera-ready submission, should this paper be accepted.\n\n - 2.1. \"Why not increase vocab size more?\" Performance in our framework is not measured by utility alone, but rather by the agent's ability to maximize utility **with bounded resources**, which is more similar to what humans do. Therefore, we are interested in the utility attained with limited vocabularies rather than unbounded vocabulary sizes.\n - 2.2. \"If one-hot can reach the same performance/complexity at a larger vocab size then you can't say it completely outperforms it\" -- to clarify, our evaluation compares performance given the same level of complexity, even if it is attained with different vocabulary sizes. The reason is that the vocab size is a hard limit which is often not realized for lower complexities (as shown in Figs 5, 9).\n - 2.3. \"Neither agent is reaching optimal performance\" -- VQ-VIB attains the best performance among the **discrete** communication methods. While the continuous model attains higher utility for any given complexity, it is not ecologically valid and does not correspond to human-like communication.\n - 2.4. \"If you can show that VQ-VIB outperforms one-hot accuracy/complexity for any one-hot hyperparameters, then your method becomes much more interesting and your results are stronger\" -- in our initial submission, we varied the number of communication tokens; based on the reviewer's suggestions, we conducted further experiments with 200 and 500 tokens. These new trials corroborated existing findings. If the reviewer has other particular hyperparameters in mind that are worth considering, we welcome their suggestions.\n\n3. \"Why does 4b show unconverged protocols?\" We believe communication has converged in those results. As shown in Figure 4a, communication typically converged after 3,000 episodes, and 4b shows results generated from episodes 3,000 to 10,000. Perhaps we have misunderstood the reviewer, and we would appreciate any clarification.\n\n4. Section 5.2 and REINFORCE: Following the reviewer's suggestion, we are conducting REINFORCE experiments in the color reference game. However, REINFORCE is known to have slow convergence rates, and thus we were unable to complete this experiment within the one-week rebuttal period. We will include it in the final version, if accepted, and we are confident that this will not affect our main results and conclusions because (a) with Gumbel-Softmax (GS) we already achieve near-optimal communicative efficiency, and (b) given our direct method for controlling complexity, we covered a greater span in EC complexity than what Chaabouni et al.
observed when training with either REINFORCE or GS. In addition, we would like to highlight that our particle-world environment results are generated using **MADDPG**, a policy-gradient algorithm like traditional REINFORCE.\n\n5. \"Figure 5a is suspicious\": To address this concern, we have extended our analysis to 1024 tokens, in addition to 330 tokens. A zoomed-in version of the results (with 1024 tokens) is now included in Appendix D, Fig 14, and we verified that similar trends exist in the zoomed-in version of Fig 5a (corresponding to 330 tokens). Increasing the number of tokens did not change results, which is expected given that there are 330 WCS color chips (see Gilad-Bachrach et al. 2003 for a mathematical analysis of the effect of vocabulary size). In addition, we note that, while both onehot and VQ-VIB learn efficient communication, VQ-VIB achieves greater utility for the same complexity than onehot (as illustrated in Fig 6, which we have added to the revised version of the paper to better highlight this result).\n", " ### III. Novelty\nThe reviewer is concerned about the novelty of our proposed objective function and results. First, to our knowledge, this is the first study that integrates utility maximization with the IB principle in the context of EC, in contrast to treating these components separately. In addition, as far as we know, Mordatch and Abbeel (2017) did not consider the IB complexity measure that we are using. Second, we do not simply show that \"a complexity loss can reduce complexity\" or that \"these losses work as intended.\" Rather, we controlled EC according to principled and cognitively-motivated metrics and showed tangible benefits beyond those metrics alone: training for informativeness improved **convergence rates** (Figs 4a, 9, 10), and penalizing complexity induced more **human-like communication** (Table 1, Fig 5).\n\nAs for Chaabouni et al.'s (2021) work, it is not based only on the \"natural inductive bias of neural networks\" but also includes hand-crafted biases, such as the agents' discriminative need, which indirectly affect complexity. In contrast, our approach aims to avoid such ad-hoc design choices by **training agents directly w.r.t. an objective function that human languages appear to be optimizing**. This is a fundamentally different approach to EC, and we believe that it will allow the field to explore a greater range of emergent human-like communication systems.\n\nAs for human-AI communication, we have demonstrated that our approach gives rise to human-like color communication, by comparison with human-generated naming data from 110 languages of the WCS. We agree that also showing improved zero-shot human understanding of self-play messages is an important research direction. However, as the reviewer notes, the paper is already packed with many new ideas and results, and therefore we decided to leave this extension for future work.\n\n### IV. Key Contributions and Appendices\nThe reviewer expressed concern about relegating key contributions to appendices. As also noted by this reviewer, our paper already contains \"many novel insights and experiments,\" so presenting the right subset of results and analysis in the main paper is challenging.\n\nIn our revised paper, we have relegated the VQ-VIB After discussion to Appendix B (allowing us to focus on results from other methods), and included additional details and plots highlighting differences between VQ-VIB and onehot communication.
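*Editorial illustration.* The combined objective discussed in III above, together with the annealing of the $\lambda$ weights described earlier in the thread, can be sketched as follows. This is not the authors' code; the function name, argument interface, weight values, and the linear schedule are all assumptions made purely for illustration:

```python
def combined_ec_loss(utility_loss, informativeness_loss, complexity_loss,
                     step, total_steps, lambda_i=50.0, lambda_c_max=1.0):
    """Hypothetical three-term emergent-communication objective.

    utility_loss:         task loss (e.g., reference-game cross-entropy)
    informativeness_loss: reconstruction error of the listener's decoded input
    complexity_loss:      an estimate (or upper bound) of I(X; C)
    The complexity weight is annealed over training rather than hand-tuned,
    sweeping out a family of communication systems of varying complexity.
    """
    frac = step / max(1, total_steps)
    lambda_c = lambda_c_max * frac  # assumed linear annealing schedule
    return utility_loss + lambda_i * informativeness_loss + lambda_c * complexity_loss
```

All three terms are minimized jointly: the informativeness term is a reconstruction error (so adding it encourages informative messages), while the complexity term penalizes the mutual-information estimate directly.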
", " We are happy that the reviewer is inclined to recommend acceptance given additional experiments and clarifications. We have addressed the reviewer's suggestions and concerns, as detailed below.\n\n### I. Framing\nThe reviewer frames our paper as proposing two, unconnected ideas; however, these two contributions are profoundly linked and jointly, they establish a principled framework for emergent communication (EC) and a new method (VQ-VIB) that improves upon prior art. Our key insight is to **integrate EC with the Information Bottleneck (IB) framework for the evolution of semantic systems**, which has been gaining broad empirical support in cognitive science and computational linguistics ([Zaslavsky et al. 2018](https://www.pnas.org/doi/full/10.1073/pnas.1800521115), [2019](https://arxiv.org/pdf/1905.04562.pdf), [2021](https://escholarship.org/uc/item/2sj4t8m3), [2022](https://academic.oup.com/jole/advance-article-abstract/doi/10.1093/jole/lzac001/6566271?redirectedFrom=fulltext&login=false) , [Mollica et al. 2021](https://www.pnas.org/doi/full/10.1073/pnas.2025993118)). We do so by adopting the (task-specific) utility term from the EC literature, and deriving the complexity and informativeness terms from the IB principle of [Tishby et al. (1999)](https://www.cs.huji.ac.il/labs/learning/Papers/allerton.pdf). Within this framework, we propose a novel architecture (VQ-VIB) that supports the IB objective function. Much like VIB, which combines an objective function (a bound on the original IB objective) and an architecture that supports it, we propose VQ-VIB in order to implement VIB for discrete (rather than continuous) communication. We show that VQ-VIB outperforms previous discrete communication methods, which means that it is better suited for EC in our framework.\n\n### II. Additional Experiments\nWe are happy that the reviewer finds VQ-VIB interesting and appreciate the request for additional evaluation. We have done so in the revision (details under “specific questions,” below). While we feel that the results are stronger now, we would like to clarify that our original evaluation was also extensive: we considered several prior methods, including the popular onehot approach and the Proto approach ([Tucker et al., NeurIPS 2021](https://arxiv.org/pdf/2108.01828.pdf)), which is most relevant to our work. We also considered two variants of VQ-VIB, varied the $\\lambda$ hyperparameters, and varied the vocabulary size in the particle envs. While **our work focuses on discrete communication**, we also considered a continuous comm model, which is not ecologically valid, but offers an estimate of the upper limit of communication regardless of human constraints. We are not sure why the reviewer argues that “no previous benchmarks are improved upon,” as we have shown that VQ-VIB outperforms other discrete methods (it achieves >= utility for the same complexity), both in the particle envs and in the WCS setting (Figs 6, 11, and 14b, and Table 1).\n\nWe would also like to highlight that the goal of this work is to integrate EC with an **empirically-validated first-principles approach** to language evolution. This is primarily a **conceptual scientific-driven contribution**, rather than a purely engineering-driven contribution. 
Therefore, we find it noteworthy that our approach is able to improve on prior art, and these improvements - even if not huge - suggest that our framework has the potential of significantly advancing the field toward a better understanding of how natural languages may evolve in artificial agents.\n", " We are happy that the reviewer found our contributions beneficial for NeurIPS in general and for the emergent communication community in particular.\n\nAs for the reviewer's specific comments:\n\n1. The reviewer suggests a qualitative analysis for the navigation environments as a useful addition to the paper. We believe that this is addressed, at least in part, in Appendix C, Fig 12 (Fig 10 in the original submission), which offers a qualitative view of the results. It can be seen that the agents learn to use the signals in a way that partitions the space in a meaningful way. Due to space limitations and the breadth of this work, we were unable to include this qualitative analysis in the main text, but we point the reader to Appendix C for a full description of the results.\n\n2. The reviewer suggests extending this work by testing VQ-VIB in larger-scale settings. We absolutely agree that this is an important and exciting direction for future research, and we have actually started to explore this as a follow-up project. Results in this paper suggest that VQ-VIB might be better suited to high-complexity domains than onehot. For example, in the color reference game, VQ-VIB learned high-informativeness, high-complexity communication even without an explicit informativeness loss, whereas onehot tended toward lower-informativeness communication. Thus, VQ-VIB might be most useful compared to prior art in even more complex domains; we have highlighted the importance of such future work in Section 6.\n\n3. We have included a short summary of the Proto architecture in Section 4.2, in addition to the architecture details already provided in Appendix A.1.\n\n4. The reviewer's suggestion to consider more challenging RL settings without gradient-passing is important. We are already exploring such methods via a REINFORCE-based implementation of the color reference game experiments, as suggested by reviewer kucm. However, given convergence challenges associated with REINFORCE, we do not yet have results ready for the rebuttal; we will include them in the camera-ready version if this paper is accepted.", " This work proposes to learn to communicate while balancing three different losses: the effectiveness of communication (utility, as usual), the informativeness of communication as measured by an autoencoding loss, and the complexity as measured by the entropy of the communication distribution. Furthermore, the authors propose a new algorithm, VQ-VIB, which integrates the information bottleneck with the VQ-VAE discrete gradient estimator. The authors demonstrate how to learn to communicate with their novel method and how to integrate all three losses into their method.
They find that the autoencoding loss improves performance, and that the complexity loss can be used to reduce complexity. In the particle world, VQ-VIB can outperform a regular discrete method for the same vocabulary size, but not a continuous communication method. In the WCS, agents can learn an optimal language (complexity for effectiveness) as in Chaabouni et al., and the complexity loss can vary across the range of complexities. In some configurations, VQ-VIB learns a language that better corresponds to the space of colours than regular gumbel-softmax.\nThe paper is both interesting and well-written. It contains many novel insights and experiments but overall, it is not well organized and the overall message and novelty are not clear. If the authors could add some experiments listed in the questions section and perhaps formulate their contribution more clearly, I would be likely to recommend acceptance.\n\nTo begin, the paper proposes two separate ideas. The first is the three axes (or two auxiliary losses): utility, informativeness, and complexity. Utility is a given in emergent communication, and the latter two have been investigated separately before (informativeness is autoencoding in AEComm; complexity has been penalized as far back as Mordatch and Abbeel, 2017). The second idea is VQ-VIB, which is a clever combination of VQ-VAE and VIB and does seem quite interesting. The issue is that these two contributions are quite separate as far as I can tell, and neither contribution is very strong.\n\nFor VQ-VIB, although the algorithm is interesting, it is not empirically verified in depth. The authors claim it achieves greater utility and informativeness for the same complexity, but this is only true in the particle envs and not in the WCS (and could depend on hyperparameters). Furthermore, they do not do an exhaustive comparison of baselines (REINFORCE instead of GS, trying a wider range of vocabulary sizes, etc.). Where the results are better, they are not hugely better, and for a novel algorithm it should be investigated on more environments or in more depth.\n\nFor the tradeoff of utility, informativeness, and complexity, the analysis is quite good and the results are interesting. Sadly, the novelty is not quite there. Demonstrating that a complexity loss can reduce complexity is not very novel and does not require so much attention (both Fig 4b and 5). Chaabouni et al.'s work on WCS was novel because it showed that the natural inductive bias of neural networks was similar to that of humans (low complexity, high informativeness). This work explicitly adds those factors as losses and shows it can find the optimal curve, which is again not very novel. Still, the auto-encoding loss is shown to be effective and this is quite exciting, but it is not investigated in depth and no previous benchmarks are improved upon (e.g. MADDPG's particle envs).\n\nSimply showing that these losses work as intended is not particularly interesting and, as a reviewer, I am not fully convinced of the value of VQ-VIB. Because of the quantity of new ideas here, none of them seem to be investigated in enough depth either, and many results are relegated to the appendix with only one- or two-sentence summaries. This is not very convincing and the overall paper feels like it brushes over key contributions. \n\nOverall, a stronger contribution would be to focus on one of the two proposed ideas or demonstrate that both ideas are crucially necessary towards some goal. E.g.
if the authors were working towards human-AI communication and demonstrated that both techniques (VQ-VIB and the losses) improve zero-shot human understanding of self-play messages. In that case, the contributions would feel like two separate but important tools towards a single goal. In the current case, there is an interesting investigation but none of the results are sufficiently convincing or novel. Why use VQ-VIB After if it generates a continuous message and is worse than simple continuous messages?\n\nFor 5.1, you haven't demonstrated that VQ-VIB is better than one-hot; please test vocab size > 100\n- if performance increases as vocab size increases, why not increase vocab size more?\n- you can make the argument that VQ-VIB outperforms one-hot *at the same vocab size* but not at the same complexity. If one-hot can reach the same performance/complexity at a larger vocab size then you can't say it completely outperforms it\n- furthermore, neither agent is reaching optimal performance!\n- if you can show that VQ-VIB outperforms one-hot accuracy/complexity for *any* one-hot hyperparameters, then your method becomes much more interesting and your results are stronger\n\nRelated to the above, Fig 4b shows that continuous communication (cont) clearly outperforms discrete communication (one-hot), but the only difference between the two channels should be the bandwidth\n- for a fairer comparison, you should increase the discrete vocabulary size (or use a channel of length > 1)\n\nWhy does Fig 4b show unconverged protocols? \n- ``complexity and reward measured at each increment of 1,000 training episodes from 3,000 to 10,000''\n- why are you measuring unconverged protocols during training? They are obviously going to be sub-optimal.\n\nFor 5.2, please test REINFORCE for discrete communication\n - Chaabouni et al. found that REINFORCE learns less complex protocols than Gumbel-Softmax, which is what you use\n - REINFORCE is as common as (or more common than) GS for emergent communication, so it would be the better baseline for complexity\n\nFig 5a is suspicious\n- either 5a is too small and therefore we can't see the variance of results\n- or the learned protocols are at the exact optimal accuracy/complexity tradeoff and $\lambda_I = 1$ is sufficient to achieve this (despite Chaabouni et al.'s results, which show a variance when $\lambda_I = 0$)\n- these results could imply the choice of a vocabulary size (or other hyperparameters) that guaranteed optimality (see Natural Language Does Not Emerge Naturally (Kottur et al.)), and this graph should preferably be more zoomed in and likely should stick with the original vocabulary size of 1024 used in Chaabouni et al.\n\n\n### Citation suggestions\n\nYour auxiliary loss for informativeness seems the same as AEComm (Learning to Ground Communication with Autoencoders, Lin et al., 2021) and may be worth a citation\n- notably, their experiments found that adding informativeness (AE) to utility (RIAL) reduces performance, so your results are interesting and novel from that perspective!\n\nPlease cite the Concrete Distribution (Maddison et al., 2016) on top of Jang et al. for the Gumbel-Softmax estimator (they are concurrent work)\n\n\n### Clarification\nIs the entropy regularization done in place of $\lambda_c I(X,C)$ or on top of that loss? (I understood it to be in place of that loss)\n\nIs the entropy regularization in one-hot equivalent to the regular entropy regularization done in previous RL works (e.g. Mnih et al. 2016, Lazaridou et al. 2018), or is it the opposite?
Are you penalizing entropy or encouraging entropy?\n\n\nIn 5.2.1 you mention that VQ-VIB learns more complex communication; did you mean the opposite, since we are trying to learn less complex, more effective communication? n/a", " This paper analyzes the influence of three factors on languages emerging in communication games between simple VQ-VAE-based agents. Specifically, the authors contrast the effect of promoting utility and \"informativeness\" versus reducing the complexity of the language. The authors demonstrate that informativeness generally improves success at the game, but at the detriment of complexity. An adequate tradeoff between these two factors seems to lead to \"human-like\" languages, at least as measured by a comparison to color-naming schemes. While the paper touches on interesting issues (the interplay between informativeness and complexity), I am having a hard time understanding what I should make out of it. The extreme simplicity of the tasks and languages considered, coupled with a debatable definition of complexity, makes it hard to understand the relevance of the results.\n\n\n### Strengths\n\n1. **Clear exposition**: the paper is rather clear throughout. The writing reads well and a number of well-designed figures illustrate the authors' points (in particular figure 1)\n2. **Well laid out experimental section**: I enjoyed the structure of the experiments section, first spelling out hypotheses and experimental settings, and then discussing results.\n\n### Weaknesses\n1. **Notion of complexity**: In the paper, the complexity of a language is defined as the mutual information between \"inputs\" (what is communicated about) and signals (what is communicated). I am not entirely convinced by this notion of complexity, for several reasons:\n - It seems to me that a language can have a high mutual information by virtue of being unambiguous: indeed, if we write $I(X;C) = H(X) - H(X\mid C)$, then for a fixed distribution over inputs, the mutual information is maximal for languages where $H(X\mid C)=0$, in other words when any given message $c$ refers unambiguously to a single input $x$. To me this seems unrelated to complexity. For example, emergent languages might be equally unambiguous, yet more \"complex\" in a different sense (e.g. less compositional, using longer words...).\n - More generally, this is an odd definition in the context of human languages. Much ink has been spilled in linguistics on the topic of language complexity: syntactic complexity vs. morphological complexity, are all human languages equally complex, is there a meaningful measure of language complexity, etc. There is very little mention of this literature, which is a bit of a shame given that one of the objectives of the paper is to \"[demonstrate] how fundamental principles that are believed to characterize human language evolution may inform emergent communication in artificial agents.\" I recommend \"On the Feasibility of Complexity Metrics\" by Miestamo (2004) as a starting point for a discussion. It should also be mentioned that previous work in emergent communication has looked at other metrics which are related to complexity: length of the messages (\"'LazImpa': Lazy and Impatient neural agents learn to communicate efficiently\", Rita et al. 2020), ease of cultural transmission (\"Emergence of Compositional Language with Deep Generational Transmission\", Cogswell et al. 2019; \"Compositional languages emerge in a neural iterated learning model\", Ren et al.
2020).\n\n Overall, it seems to me that the notion of complexity advocated by the authors is more or less the entropy of the distribution of the messages, which I understand to be a proxy for the size of the lexicon (the number of utterances in the language). I am not sure if this is an interesting metric: arguably, a key feature of human languages is that they allow for generating a potentially infinite number of utterances, and what differentiates them is how they are able to do so by the use of finite means (phonemes, words...).\n2. **Unclear takeaway**: What do the results of the paper mean? The two takeaways I read from the paper are that (1) \"principles that characterize language evolution can inform emergent communication\" and (2) \"penalizing complexity is necessary to avoid idiosyncratic languages\". I would contest that (1) is a novel insight, because there is already plenty of work porting ideas from cognitive science and language evolution to emergent communication (see work on iterated learning, Zipf's law in emergent languages, etc.). As for (2), I am not entirely convinced, due to my issues with the definition of complexity used in the paper (outlined above).\n3. **Informativeness vs. utility**: The distinction between informativeness and utility does not seem very useful in the context of the paper: indeed, the two are not really contrasted in experiments. In a Lewis signaling game (such as the color naming experiment) they are essentially equivalent. It would have been interesting to also vary along this dimension in experiments. Alternatively, I think the key claims of the paper could have been made by focusing on the opposition between informativeness and complexity (and leaving utility out of the picture).\n\n- What is semantic communication? Or rather, what would the opposite of semantic communication be? Communication without meaning?\n- What is the purpose of VQ-VIB After? It seems to me that randomness in VQ-VIB After emulates a perturbation in the environment (on which the agent has no control), whereas randomness in VQ-VIB Before models uncertainty from the agent (via a reparametrization trick). It is not clear to me why the former is interesting in the context of this work (in fact, it is used much less than VQ-VIB Before in the experiments).\n- What are $m$ and $\hat m$ in section 3.1? Are they typos for $x$ and $\hat x$? I think there could be, at the very least, a more thorough discussion of the notion of complexity of a language (and the difficulty of defining a metric)", " In this work, the authors introduced a new discretization method for endowing neural agents with a communication channel in a cooperative multi-agent framework. By adding auxiliary losses, each targeting the utility, informativeness, and complexity of the generated language, and testing on two MARL benchmarks and a referential task, the authors study the conditions that lead the communication protocol to be successful and to resemble properties of natural language as described by the IB (information bottleneck) principle. I believe the main strengths of this paper are in the introduction of VQ-VIB and the analysis of the contribution of the additional losses in terms of complexity and informativeness. The experimental setup is solid and the results support the idea that the introduced terms can steer the emergent languages to different complexity classes and make them similar to natural languages.\n\nI could not find any major weaknesses in this work.
A useful addition to the paper would be a qualitative analysis for the navigation environments. The quantitative results are informative and support the main claim of the paper, but a qualitative analysis might help shed light on the language protocol. Interesting questions would be of this sort: \"are there specific messages that are systematically mapped to actions of the environment, e.g. go right, the target is in the top right corner, go there, etc.\". \n\nWhile I believe that the tested environments correctly corroborate the introduced hypothesis, it would be interesting to know whether the discretization method can scale to larger-scale setups involving complex input like natural images. Havrylov and Titov 2017 and Dessi et al. 2021 showed that agents can learn a protocol describing natural images when trained from pixel input and without any supervised pretraining. VQ-VIB is a drop-in replacement in their methods (they both used gumbel-softmax sampling), so it would be interesting to see whether your results hold at that scale. Given that the way \"complexity\" is penalized in previous work is usually by changing vocabulary sizes, it could be useful to test whether an auxiliary loss term would be more effective instead.\n\n\nFinally, I consider the introduced method and the presented results interesting, and the analysis thorough and informative, and I believe this paper should be included in NeurIPS since it could be beneficial for the emergent communication community.\n\n-----\n\nMinor:\n\n- A brief description of the Proto architecture would be useful to understand its differences with the other architectures\n Lin et al. 2021 report how difficult it can be to train fully decentralized agents for tasks that involve learning a communication protocol. Sharing gradients between agents is a form of communication that could not be used in fully decentralized setups. Have you considered or tried training your agents with some form of RL algorithm that does not involve any gradient passing between agents? Note that this is not intended as a missing experiment for this work, but rather as future work. The authors have adequately addressed limitations and negative societal impact of their work" ]
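*Editorial illustration.* Several of the reviews above contrast REINFORCE with Gumbel-Softmax ("onehot") message sampling. As a minimal sketch of the latter (this is not code from the paper; the temperature and the straight-through variant are assumptions):

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_message(logits, tau=1.0, hard=True):
    """Sample a (near-)one-hot message token from speaker logits.

    Uses the Gumbel-Softmax / Concrete relaxation (Jang et al. 2017;
    Maddison et al. 2016) so the discrete choice stays differentiable.
    """
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    soft = F.softmax((logits + gumbel) / tau, dim=-1)
    if hard:  # straight-through: one-hot forward pass, soft gradients backward
        index = soft.argmax(dim=-1, keepdim=True)
        hard_onehot = torch.zeros_like(soft).scatter_(-1, index, 1.0)
        return hard_onehot + (soft - soft.detach())
    return soft
```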
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "cNrEvu77PGx", "_FPaSb-VyTn", "oMZywra4oJQ", "nISzgvlHb7", "9KrVr5sTbbK", "5hzjLuaTtYu", "ukq0svW0Nd_", "bmTe0woNaQW", "j4BFA5u-o0b", "duHS_HFVeIR", "cNrEvu77PGx", "VU4xEVWk_dC", "zsCYJ475JDc", "Z8TxVmz4Ss9", "Vs7PdVXHiP", "Vs7PdVXHiP", "ukq0svW0Nd_", "ZwQWhZPbha1", "nips_2022_O5arhQvBdH", "25-x7i1Yhg", "MNspLE303uT", "bmTe0woNaQW", "j4BFA5u-o0b", "duHS_HFVeIR", "5qsHUVQDqg", "jYqe6kZEV40", "nips_2022_O5arhQvBdH", "nips_2022_O5arhQvBdH", "nips_2022_O5arhQvBdH" ]
nips_2022_4pwCvvel8or
Online PAC-Bayes Learning
Most PAC-Bayesian bounds hold in the batch learning setting where data is collected at once, prior to inference or prediction. This somewhat departs from many contemporary learning problems where data streams are collected and the algorithms must dynamically adjust. We prove new PAC-Bayesian bounds in this online learning framework, leveraging an updated definition of regret, and we revisit classical PAC-Bayesian results with a batch-to-online conversion, extending their remit to the case of dependent data. Our results hold for bounded losses, potentially \emph{non-convex}, paving the way to promising developments in online learning.
Accept
PAC-Bayes theory provides upper bounds on the risk of aggregations of predictors in the batch setting. Many PAC-Bayes bounds are actually minimized by EWA (Exponentially Weighted Aggregation), but these bounds can also be applied to (slightly) sub-optimal aggregation procedures, and allow one to control their level of sub-optimality: classical examples include Gaussian aggregation / variational Bayes. There are also bounds on the regret of EWA in the online setting. However, while these bounds look quite similar to PAC-Bayes bounds, they usually do not allow one to work with alternative aggregation procedures such as Gaussian aggregation. Recently, some results allowed the study of other aggregation rules, as in van der Hoeven et al. [2018] and Chérief-Abdellatif et al. [2019], but these results still impose strong constraints and cannot be applied to arbitrary aggregation strategies. Here, the authors manage to fully extend PAC-Bayes bounds to the online setting. In other words, their Theorem 2.2 can be used to upper bound the regret of very general aggregation strategies, including of course EWA and Gaussian aggregation. There was initially a disagreement between reviewers, based on the following: 1) on the one hand, the reviewers agree that Theorem 2.2 is a nice extension of PAC-Bayes bounds, and provides a generalization of existing results on EWA and Gaussian aggregation [see Reviewers Kjzo, orn2 and also UW2Y]. 2) on the other hand, it is not clear whether a useful application of Theorem 2.2 can lead to new results beyond the "usual cases" of EWA / Gaussian aggregation. Indeed, these are the two examples discussed by the authors [UW2Y]. After discussion, there was ultimately an agreement that even though some of the reviewers and myself are still not totally convinced about 2), the nice construction of 1) justifies publication of the paper. I will therefore recommend accepting it. Each of the reviewers raised many minor issues that the authors should take into account in the camera-ready version (writing [zWwp, UW2Y] / experiments [Kjzo, orn2] / ...). I will add the following points: - van der Hoeven et al. [2018] already contains a nice discussion on the extension of regret bounds beyond EWA, and shares many similarities with this work (even though it does NOT contain a result such as Theorem 2.2). This paper is currently not cited by the authors. The authors should cite it, and discuss it. - "The guarantees Chérief-Abdellatif et al. [2019] provided for SVB hold for Gaussian priors and posteriors and are valid for iid data, which is a particular case of our work." This is an incorrect and misleading statement: that paper is written in the same setting as the classical bounds on EWA. There is no stochastic assumption on the data in that paper (nor in van der Hoeven et al. [2018]).
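*Editorial illustration.* Since the meta-review centers on EWA as the minimizer of these bounds, here is a minimal implementation of exponentially weighted aggregation over a finite expert set (my sketch, not from the paper; the learning rate and the loss interface are assumptions):

```python
import numpy as np

def ewa_weights(cumulative_losses, eta=1.0):
    """Exponentially Weighted Aggregation over a finite expert set.

    Returns the Gibbs posterior q_i proportional to exp(-eta * L_i), where
    L_i is the cumulative loss of expert i so far; this is the aggregation
    whose regret the classical online bounds control.
    """
    scores = -eta * np.asarray(cumulative_losses, dtype=float)
    scores -= scores.max()  # shift for numerical stability
    weights = np.exp(scores)
    return weights / weights.sum()

# Toy usage: three experts with bounded losses observed over two rounds.
losses = np.array([[0.2, 0.9, 0.5], [0.1, 0.8, 0.6]])  # rounds x experts
print(ewa_weights(losses.sum(axis=0)))
```

A Gaussian aggregation step would instead update the mean and covariance of a parametric posterior; the point of Theorem 2.2, as described above, is that a single bound covers such general strategies as well.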
train
[ "DgqgwDw0iF2", "cxgylGE4mhU", "j3DZmgLIcbO", "5yFslNfr2mL", "Z3_eBseWU16", "z43_12kkuDe", "k0GcjmFFWav", "E_BPeHAGGc", "bnJQF3Mg8cA", "V9YQMKY0R374", "mCGA5eD-WoX", "4uRET1Ug_v", "0QQv7-q_TFr", "zAFYOu1pJGi", "M2cH00CV1L", "bFvpv_vsF18x", "zFRX-jWPHWP", "jbHz_C163vK", "jj6rbmNtTMO", "Aljnl6LUvf7" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are happy to hear that the new version of the document has dissipated your concerns.\n\nWe thank you for your time.", " I thank the authors for their detailed response, which along with the responses to the other reviewers has shed additional light on the various points of inquiry I had. I will raise my score to a weak accept.", " Dear reviewer,\n\nWe hope that you have had chance to read and consider our response to your review, and that you would be able to share your thoughts with us.", " I thank the authors for the detailed response, it completely addresses my concern.", " We thank your for this discussion.\n\nConcerning your last concern, we now agree that it is essential to introduce properly stochastic kernels in the main document to dissipate the final confusion around item 2. Indeed, you seem to suggest is that, because the stochastic kernels $Q_i(.,.)$ (taking as first argument a sample and an event as second as in Def D.1) are fixed before the statement of the highly probable event, then our result holds for data-free measures.\nThis is not true for the following reason: *even if the stochastic kernel $Q_i(.,.)$ is fixed, this is not the case for the data-dependent measure $Q_{i}(S,.)$*.\n\nTo emphasize this point note that fixing the stochastic kernels in our theorems is like fixing a learning algorithm $A$ which takes as input a sample $S$ and outputs a sequence of $m$ distributions $Q_1(S,.),...,Q_m(S,.)$ : the outputs of $A$ are data-dependent while $A$ in itself is not.\n\nThus, a more rigorous statement of Theorem 2.2 is:\n\n'For any distribution $\\mu$ over $\\mathcal{Z}^m$, any $\\lambda>0$ and any online predictive sequence (used as priors) $(P_i)$, for any sequence of stochastic kernels $(Q_i)$ we have with probability $1-\\delta$ over the sample $S\\sim\\mu$ the following, which holds for the data-dependent measures $Q_1(S,.),...,Q_m(S,.)$: \n...\n'\n\nthe bound remains unchanged with the shift of notations $Q_i\\rightarrow Q_i(S,.)$.\n\n\nWhat we propose to do in the next version is:\n\n- update the definition of online predictive sequence by also incorporating the notion of stochastic kernel\n- introducing properly stochastic kernels in the main document\n- modifying the statement of our theorems as precised below\n- adding a short paragraph which explain stochastic kernels to non-specialists.\n\nDoes this address your final concern?", " I thank the authors for their clarification and revisions.\n\nRegarding item 2 in the comment above: to me, it seems that formulating the main theorem (2.2) with stochastic kernels is crucial for justifying the algorithm derivation. \nThe formulation of Thm 2.2 (for any Q_i, w.h.p. over S) means that the Q_i needs to b fixed (non-data-dependent)for the guarantee to hold (correct me if I am wrong)\nBut, of course, the algorithm presented in Eq (2) outputs a data-dependent posterior. \nThis discrepancy may be accounted for since Thm 2.2 should be formulated for stochastic kernels instead of posterior distributions.\n", " We warmly thank you for your response and for kindly revising your assessment. We very much appreciate the opportunity for clarification and your reactivity.\n\nRegarding your experimental concerns: we feel we have answered both the points you raise in your initial review, in the rebuttal above:\n\n\"No error bars\": see the discussion in the first rebuttal and see Appendix E of the revised manuscript for the adding of table gathering means and variance of our cumulative loss on 50 runs of our OPBD. 
Furthermore, all our hyperparameters have been introduced in the Parameter Settings paragraph (lines 317-327 of the revised manuscript), with brief justifications when those parameters were set to specific values.\nWe also noted in our rebuttal that our $\\lambda$ has been chosen through a grid-search approach. We stress that this method is not novel in the PAC-Bayes literature, as it is used in Theorem 3 of Mhammedi et al. 2019, which also suggests this method for choosing an efficient $\\lambda$.\n\nDoes this answer your concerns?\nWe are happy to answer any further questions you might have.\n\n\nMhammedi et al. 2019, 'PAC-Bayes Un-Expected Bernstein Inequality' (https://proceedings.neurips.cc/paper/2019/file/3dea6b598a16b334a53145e78701fa87-Paper.pdf)", "I think I am wrong and you are right. Thanks for the clarifications and apologies for the confusion. I recommend that you mention that exact choice of $\\lambda$ in the paper as an example and state the resulting bound as a corollary. I think it would make things much clearer + you seem to have enough space to do that.\n\nI am still a bit confused, which is why I will set my confidence to 2 - I am clearly not on top of things here. I'll adjust my score to a 5 since the bound seems to be consistent and most likely correct. Still only a borderline accept since 1) my criticism about the experiments from above remains and 2) the clarity of the paper is still not where it should be for a NeurIPS publication.\n\n\n", " Dear authors,\n\nAn additional question was raised during the discussion with the reviewers, so I think it is fair to give you the opportunity to reply:\n\nTheorem 3 of http://proceedings.mlr.press/v75/hoeven18a/hoeven18a.pdf proposes a Gaussian approximation of the exponential weights with fixed variance, as in your Section 4, and a linearisation leads to the online gradient algorithm. Could you compare your regret bound to theirs? [By the way, I think it would be fair to cite their paper.] Also, what would be your interpretation of the empirical performance of your method versus OGD? According to Hoeven et al, we would expect them to be quite similar...", " We thank you for your response. We would like to insist once more on the convergence of our results.\n\nIn the example you described below (thank you for giving this specific instance), what you did not do is choose a specific $\\lambda$. We first note that the question of the choice of $\\lambda$ is important in PAC-Bayes and has been treated for instance in Catoni 2007, Sec 1.2.2, Sec 1.3.1, Guedj 2019 p. 12, and Audibert 2010, Sec 2.2.\n\n In the framework of Corollary 3.3, optimising the right-hand side of the bound in $\\lambda$ gives $\\lambda = \\sqrt{\\frac{2\\log(1/\\delta)}{mK^2}}$. Thus we have:\n\n$\\frac{\\lambda m K^2}{2} + \\frac{\\log(1/\\delta)}{\\lambda} = 2\\sqrt{\\frac{\\log(1/\\delta)mK^2}{2}}$\n\nWe are allowed to take such a $\\lambda$ as it only depends on the hyperparameters $m, \\delta$. This is how we recover a sublinear rate here.\n\nWe appreciate that this reasoning is mainly specific to the PAC-Bayes literature and are happy to provide any additional information. We apologise for not stating it earlier and will make it more explicit in the final version.\n\nConcerning our Appendix B.1, we agree with Rev. UW2Y that including this result is relevant because **taking $m$ batches of size $1$ in classical PAC-Bayes allows us to take history-dependent priors and non-iid data**. 
This point is not allowed by classical PAC-Bayes for batches of $m$ data and is exactly what we propose. Thus this approach (which deals with data sequentially) is, to us, the only naive one using classical PAC-Bayes theorems which allows us to recover our assumption.\nFurthermore, the resulting bound shows that this naive approach has a clearly deteriorated rate, as the approximation term $\\log(1/\\delta)$ in Thm 2.2 has been transformed into $m\\log(m/\\delta)$. This bound has a linear term, even if we kill the KL terms and optimise wrt $\\lambda$ (as we did above for Corollary 3.3), given the extra factor $m$. This is why we do not agree with the claim that \"the strong similarity of Corollary 3.1 to this naive bound illustrates the fundamental flaws of the bound\". On the contrary, we argue that our bound clearly improves on this naive bound and **allows us to obtain a sublinear convergence rate** (impossible with the naive approach). This shows it is relevant to leverage the heavy machinery used in Thm 2.2's proof.\n\nWe are happy to answer any other questions you might have and we very much hope that this part of our work is now clarified. This is very helpful as we will revise the manuscript to improve the clarity of the arguments here.\n\nReferences:\n\nCatoni 2007, PAC-Bayesian Supervised Classification: The Thermodynamics of Statistical Learning. (https://arxiv.org/abs/0712.0248)\n\nGuedj 2019, A Primer on PAC-Bayesian Learning. (https://arxiv.org/abs/1901.05353)\n\nAudibert 2010, Agrégation PAC-Bayésienne et bandits à plusieurs bras. (https://fdocuments.net/document/agrgation-pac-baysienne-et-bandits-plusieurs-bras-pac-audibertmes-articleshdrpdf.html?page=13)", " Thanks for your response and for extending / revising the manuscript in response to my questions / concerns. Unfortunately, my concerns about the vacuousness and non-consistency of the bound still remain:\n\nOf course, I am aware that Corollaries 3.1 and 3.3 constitute bounds on the cumulative loss as opposed to the average loss. However, in the same way $\\frac{\\lambda mK^2}{2}$ grows linearly and not sub-linearly with $m$. Thus, to my best assessment, the claim above that it would be sub-linear, i.e. $\\mathcal{O}(\\sqrt{m})$, is plainly wrong. When we divide both sides of Corollary 3.1 or 3.3 by $m$ as suggested above by the authors, we get bounds on the average loss that are not consistent since $\\frac{\\lambda K^2}{2}$ stays constant with $m$. For a proper, consistent bound, we would expect that the terms vanish with $m$. \nLet me give an example: Let's say $l$ is bounded by $K = 2$. Then we have $\\frac{\\lambda K^2}{2} = 2 \\lambda$. Let's also assume for simplicity that the KL is 2. Then the version of Corollary 3.1, when we divide both sides by $m$, bounds the generalization gap, i.e. the difference between expected average loss and empirical average loss, by a value greater than 2, which is worse than the trivial upper bound of 2 which immediately follows from the boundedness assumption of $l \\leq K = 2$.\n\nMoreover, what has been done in Appendix B is not convincing: Applying Theorem 4.1 of Alquier et al. (2016) to each individual sample under a boundedness / Hoeffding assumption and then using a union bound over the samples makes obviously little sense and yields super loose guarantees, much worse than any trivial bound that directly follows from the boundedness assumption. 
The strong similarity of Corollary 3.1 to this naive bound illustrates the fundamental flaws of the bound in Corollary 3.1, rather than supporting its validity. I understand that you added the respective appendix section in response to one of the other reviews and am sorry for the work you put in.\n\nOverall, I appreciate the authors' response and efforts. However, my concerns remain and after going over the results once more I am more convinced than ever that the bounds are 1) extremely loose/vacuous, 2) not consistent, 3) in many cases, uniformly over all $m=1, ..., \\infty$, worse than trivial bounds based on $l < K$. From my side, still a clear reject.\n", " We thank you for your insightful review. First of all, we agree our main result statement lacked context. This is now fixed in a new appendix (Appendix B in the updated version of the document) dedicated to such a discussion. More precisely, in this appendix:\n\n- We compare our theorem to the naive approach consisting of stacking $m$ classical PB bounds in Appendix B.1. What appears there is that, simultaneously, at each time $i$, with a union bound, one pays an approximation term of $\\log(m/\\delta)$. Thus summing up all the terms leads to a huge approximation term of $m\\log(m/\\delta)$, which is significantly worse than our result. Furthermore, the term on the LHS (left-hand side) has also changed: the controlled term here is now $\\sum_{i=1}^m \\mathbb{E}_{h_i\\sim Q_{i}}\\left[\\mathbb{E}_{z_i}[\\ell(h_i,z_i)]\\right]$ instead of $B := \\sum_{i=1}^m \\mathbb{E}_{h_i\\sim Q_{i}}\\left[\\mathbb{E}[\\ell(h_i,z_i) \\mid \\mathcal{F}_{i-1}]\\right]$.\n\n\n- This change is further discussed in Appendix B.2, and also leads to a broader discussion about $B$. We explain why this term appears to be a best-of-both-worlds quantity which allows us to exploit the flexibility of PAC-Bayes theory while remaining meaningful in an online framework.\n\n- We also discuss our proof technique in Appendix B.2, explaining why our assumptions (especially the boundedness one) are needed, why classical regret does not spontaneously appear, and what this involves for the OPB (online PAC-Bayes) framework (e.g. this framework may not be the best for adversarial objectives as we do not compare ourselves to other strategies, but can be adapted for instance to environment exploration objectives as we control the cumulative generalisation error at each time step).\n\n\nConcerning your questions:\n\n- The discussion concerning the boundedness assumption is conducted in the paragraph 'About the boundedness assumption' l.659. We clarify there that the assumption truly needed is a conditional subgaussianity (in particular implied by boundedness) and not actual boundedness. However, for the sake of clarity and simplicity, we maintained the boundedness assumption. But an interesting open direction is to find whether there exist concrete classes of unbounded losses which may satisfy either conditional subgaussianity or other conditions (such as a conditional Bernstein condition, for instance).\n\n- For your second question, note that we actually do not naively bound a sum of squared losses; in fact, we do not consider a squared loss at all. What we do is exploit the fact of having an online predictive sequence as priors to deal with the exponential moment (in Thm 2.2's proof). 
More precisely, we use the conditional Hoeffding lemma $m$ times, after exploiting a 'conditional Fubini' (see Lemma D.4), to obtain this factor of $\\lambda m K^2 /2$. Note that a naive bound would have led to the term $mK$, which is significantly worse as this term is linear and cannot be controlled by a suitable choice of $\\lambda$. Cor 3.3 exploits this fact, as an optimised choice of $\\lambda$ allows us to recover a significant sublinear rate of $O(\\sqrt{m})$.\n\nFinally, we would like to challenge your comment that 'the idea of an online variant of PAC-Bayes itself is not particularly novel'.\nWe partly agree, as we cite in our paper previous works investigating online PAC-Bayes strategies; however, we feel all these works do not fully exploit the online learning framework as they all focus on a single prior/posterior pair. To our knowledge, our work is the first to propose an online vision of PAC-Bayes with a sequence of prior/posterior pairs evolving through time.", " We warmly thank you for your review; we hope you will find our revised version easier to read.\n\n- Regarding your main concern: we maintain that our bound in Thm 2.2 is not as loose as you suggest, because the controlled quantity is not an expected loss over both $h$ and $z$ (e.g. $\\mathbb{E}_{h\\sim Q}\\mathbb{E}_{z\\sim \\mu}[\\ell(h,z)]$) as in the classical PAC-Bayes literature, but a sum of losses (i.e. $\\sum_{i=1}^m \\mathbb{E}_{h_i\\sim Q_i}[\\mathbb{E}[\\ell(h_i,z_i) \\mid \\mathcal{F}_{i-1}]]$).\nFurthermore, note that on the right-hand side we do not have the PAC-Bayesian empirical risk (which would be here $\\frac{1}{m} \\sum_{i=1}^m \\mathbb{E}_{h_i\\sim Q_i}[\\ell(h_i,z_i)]$) but the cumulative empirical error, which is the empirical risk multiplied by $m$. We refer to the paragraph \"Analysis of the different terms in the bound\" on page 3 line 101 and our novel Appendix B, where we discuss all terms of the bound.\nHaving a sublinear bound (such as the $O(\\sqrt{m})$ derived from Cor 3.3) is meaningful in terms of learning. Indeed, to make our result formally closer to classical PAC-Bayes, we need to divide all the terms of Thm 2.2 by $m$. This allows us to recover not only the classical empirical risk but also a convergence rate of $O(1/\\sqrt{m})$ in Cor 3.3.\n\nWe also aim to make clear that taking $P_i=\\hat{Q}_i$ for all $i$ is not naive, as it requires our posterior to depend only on the past, which is natural in online learning (OL). That is why we derived from Thm 2.2 both Cor 3.1, which exploits the KL term and thus suggests a learning algorithm, and Cor 3.3, the test bound, which allows tighter convergence guarantees.\n\nFinally, we note that considering cumulative errors instead of averaged ones strengthens the link with OL, where sublinear rates are classical. For instance, we give in Appendix A.1 Thm A.2, which controls the regret (i.e. the difference between the cumulative loss and that of the best fixed strategy) with an optimal rate of $O(\\sqrt{T})$. This notion of regret is fundamental in OL and is further studied in (Shalev-Shwartz 2012, 'Online Learning and Online Convex Optimization') or (Hazan 2016, 'Introduction to Online Convex Optimization').\n\n- Concerning our experiments, we agree error bars are needed to evaluate the variability of our experiments. That being said, we have computed error bars for our OPBD methods as they have the highest efficiency and the lowest computational time. We reported our new results in Appendix E. 
We found that over 50 repetitions, variability is small and does not change our findings. We noted in l.340 of the main document that this appendix furnishes those error bars.\nFurthermore, we performed a grid-search approach to find the most efficient hyperparameters. We also stress that we see our experiments as a numerical illustration of the consistency of our new algorithms: we do not claim to compete with state-of-the-art online algorithms, but instead provide evidence that our methods converge efficiently.\n\n\n- We are happy to consider revising our writing style if you have specific examples of formulations which impede the reading.\n\n- Concerning our route of proof, we insist on the fact that our results are not trivial corollaries of Rivasplata et al. 2020. Their theorem is a general basis which contains a general exponential moment. This moment is a critical quantity to control in PAC-Bayes theory. We show that under a boundedness assumption, in the OL setting, we succeed in bounding this moment without further assumptions.\nTo us, the control of this exponential moment in this setting is new and requires a careful study involving, among others, a conditional version of Fubini (Lemma D.4).\nFurthermore, the control of this exponential moment is reused in the proof of Cor 4.1, as the couple $(\\Psi_2, \\Phi_2)$ has been obtained from an adapted version of Viallard et al. 2021 (Prop C.2) involving again this exponential moment.\n\n\n**Questions**\n\n- About the choice of $\\lambda$, we explained in the paragraph 'Influence of $\\lambda$' l. 117 that it is seen as a trade-off parameter.\nIn the context of Cor 3.1, we see $\\lambda$ as a scale parameter for Eq. (1). We now state this in blue on l.133.\n\nIn the context of Cor 3.3, we said in l.163 that $\\lambda$ was seen as a trade-off parameter between the approximation term $\\log(1/\\delta)$ and the ersatz $\\frac{mK^2}{2}$.\n\n- In Cor 3.3, we set $\\hat{Q}_i= P_i$ because of the nature of OL predictors. Indeed, the key point is that in the OL framework, we design a predictor at each time $i$ depending only on past data and possibly external information. Concretely, this makes $\\hat{Q}_i$ $\\mathcal{F}_{i-1}$-measurable. This fact, coupled with successive absolute continuities, makes $(\\hat{Q}_i)_i$ an online predictive sequence which can then be used as priors.\n", " We warmly thank you for your enthusiasm about our work and your insightful review. Regarding your concerns about the links between our main theorem and optimisation, we created a new appendix (namely Appendix B.3) which addresses this topic. We also provide below a summary of this discussion.\n\nWeaknesses:\n1. Concerning your experimental concern, the reason why we only plotted the averaged empirical loss is that the bound of Cor 3.3 (after dividing by $m$ for averaging) consists of this averaged empirical loss plus an approximation term in $O(1/\\sqrt{m})$. It appears that given our dataset sizes, the magnitude of the approximation term is at least $1/\\sqrt{500}\\approx 10^{-2}$ while our averaged cumulative loss is always of magnitude between 1 and $10^{-1}$. Thus, plotting those additional terms would have led to similar curves and less clear graphs.\n\nQuestions:\n1. and 3. First of all, we acknowledge we did not state the fact that the posterior sequence can be data-dependent; we now state it in blue on l.90 of the revised version.\n\nSecond, it is true that Thm 2.2 is 'pointwise' in the sense you propose. We discuss this in Appendix B.3. 
A summary of this discussion would be that the bound is suited for optimisation for two reasons:\n- contrary to classical PAC-Bayes, our bound holds for a sequence of posteriors with high probability;\n- the argmin is explicit (Gibbs posterior).\n\nThe second point affirms that the learning algorithm derived in Eq.2 (line 137) explicitly generates a single posterior per time step: we have a well-defined sequence of $m$ posteriors at time $m$. In doing so, the guarantees of Thm 2.2 hold for this sequence. The main difference between our result and the original one from Rivasplata et al. is that the original stochastic kernel was (probably) designed for one data-dependent distribution, while our results invoke a stochastic kernel generating $m$ data-dependent distributions. Doing so, our theorem, while pointwise, is pointwise for a sequence of posteriors and so still ensures guarantees for a single run of an online PAC-Bayes algorithm. Indeed, if the argmin is explicit, as for the one of Eq. (1) l.137, then our learning algorithm derives a sequence of $m$ posteriors in $m$ time steps. To us, this point is crucial to establish a link with online learning, as regret bounds (e.g. Prop A.2) also provide guarantees for a single sequence of predictors (in Prop A.2 it is the one generated by OGD).\nHowever, to overcome the pointwise behavior of our theorem, we need to adapt Rivasplata et al. 2020 (Thm 2.1), as this basis is pointwise itself. Given that we consider a sequence of data-dependent priors, one cannot apply the change of measure inequality to ensure guarantees holding uniformly over posterior sequences. More discussion can be found in Appendix B.3.\n\n\n2. Technically, we can define any data-dependent posterior $Q_i$ (and even any prior $P_i$) as a stochastic kernel. We did not make this choice for two reasons:\n- The first one is clarity. We believe that adding the definition of stochastic kernel to refer to data-dependent measures would have added confusion for readers unfamiliar with the work of Rivasplata et al. 2020. This is why we only mention data-dependent measures in the main document, as we believe it is easier to understand for non-PAC-Bayesian specialists (we also want to reach the online learning community with this work).\n- The second one concerns the clarity of the proof. In Thm 2.2's proof, we generate a stochastic kernel $Q(.,S)$ taking into account the $m$ data-dependent posteriors $Q_i(S)$. We think it is easier to understand the proof if we only use two stochastic kernels $P,Q$ as in the original theorem of Rivasplata et al. 2020.\n\n4. The reason for this discrepancy is explained in the paragraph 'Strength of our results' l. 205. We say that taking $P_i= \\hat{Q}_i$ is only a particular case of what Cor. 3.1 can provide. We illustrate this point with the following example: 'if our online predictive sequence $(\\hat{Q}_i)$ can be defined through a sequence of parameter vectors $\\hat{\\mu}$, then we can define $P_i$ by adding small noise to $\\hat{\\mu}_i$ and thus giving more freedom through stochasticity.'\n\n5. We presented Cor 3.3 as an OPB test bound. We acknowledge that, because we killed the KL divergence term, this result is not specifically PAC-Bayesian in itself and is potentially reachable via other routes of proof, such as the one you propose. However, we stress that Cor 3.3 is only one side of Thm 2.2, as is Cor 3.1. So, our goal through those corollaries is to illustrate the richness of Thm 2.2 by instantiating two meaningful corollaries (training and test bounds).\n\n6. 
We made our analogy explicit in the case where $\\hat{Q}_i=\\mathcal{N}(\\hat{m}_i,I_d)$; the modifications are written in blue on l.183. This allows us to explicitly recover the OGD algorithm for the averaged loss functions if we minimise a Taylor expansion of those losses at each time step. We then precisely show the similar roles of $\\lambda$ and $\\eta$.", " We warmly thank you for your positive review.\n\nFirst of all, we hope you will find our revised version clearer and easier to read. We added in Appendix A a reminder about PAC-Bayes learning with some classical theorems. Note that, in order to provide a broader vision of our work, and following other reviewers' remarks, we also created Appendix B, which provides many discussions about Thm 2.2, and we added the definition of absolute continuity l.77. We hope those two appendices will provide you a broader vision of our work and of what PAC-Bayes does.\n\nConcerning your questions:\n\n- We provided a broader discussion of the term in the LHS in the paragraph 'Reflections about the left hand side of Thm 2.2' (App B.2, l. 668). Furthermore, we clarified in l.78-79 that the filtration is adapted to the sample $S$, which implies the conditional expectation $\\mathbb{E}[\\ell(h_i,z_i) \\mid \\mathcal{F}_{i-1}]$ holds wrt $z_i$ for any $i$.\n\n- If the data is iid, Thm 2.2 does not recover classical guarantees of PAC-Bayes learning. The reason is that we deal with data sequentially, which implies some additional factor compared to the batch approach.\nHowever, there exists a more meaningful way to compare ourselves to classical PAC-Bayes (as suggested by another reviewer): what if we treat our data sequentially in classical PAC-Bayes? How tight would this bound be? It appears this approach leads to a much looser bound than Thm 2.2, which highlights the strength of our results. This approach is presented in Appendix B.1 l.598 along with additional discussions.\n\n- Cor 3.1 suggests the algorithm presented in Eq. (1) l.137 because its RHS provides an empirical surrogate of the LHS of Thm 2.2. Thus, minimising this upper bound allows us to obtain a numerical upper bound on the LHS. In doing so, our learning objectives are derived to ensure measurable upper bounds on the CGE. This approach is the basic way to derive PAC-Bayes algorithms.\nFurthermore, our approach does not provide regret bounds, but once we have performed a single run of an online PAC-Bayes algorithm (or its disintegrated counterpart in Sec.4), we still have access to the guarantees provided by Cor 3.3 and Cor 4.2 for our posterior sequence.\n\n- About the computation of the expectation in Eq. 1, we ran an MCMC approach to estimate this moment. We acknowledge this may have a huge time complexity. This is why we proposed Sec. 4, which provides disintegrated PAC-Bayesian algorithms which do not need to estimate an expectation.\n\n- To us, our algorithms are more related to OMD, as we see our KL term as the analogue of the regularisation function appearing in OMD.\n\n- Technically speaking, both Cor 3.1 and 3.3 are immediate consequences of Thm 2.2. However, we emphasise that those bounds have two completely different goals. Cor 3.1 is necessary to derive online PAC-Bayes algorithms, while Cor 3.3 allows us to obtain tight convergence guarantees. We insist especially on the significant difference in the LHS of both corollaries. We analysed those terms respectively in Remark 3.2 l.146 and l. 161 (added in blue).\n\n- We added in the introduction l. 53 that Sec. 
4 circumvents the problem of expectation estimation that naturally appears with Gibbs posteriors in Sec.3. Disintegrated PAC-Bayes bounds allow us to obtain time-efficient online algorithms.", " We warmly thank all four reviewers for their careful evaluation of our work, and we are especially encouraged by the positive elements in the reviews (in particular its potential impact on the PAC-Bayes literature, its clarity and presentation, and main results). We provide a point-by-point response to each review below and we hope reviewers with less positive evaluations will be convinced of the merits of our work.", " This paper extends the classical PAC-Bayes theory to the online learning setting, where the sequence of priors and posteriors can dynamically evolve. The authors provide the first online versions of training and testing bounds, and derive an OGD-like algorithm for this problem. Strengths:\n\n1.\tThis paper considers PAC-Bayesian learning in an online fashion, which is a natural extension of the classical PAC-Bayes results and to my knowledge is novel. \n2.\tThe main contribution in Theorem 2.2 seems to be very general, and even holds for non-convex bounded functions. \n3.\tExperimental results demonstrate the effectiveness of the proposed methods. \n\nWeaknesses: \n\n1. I find the paper difficult to read, especially for readers who are not very familiar with PAC-Bayes theory. I think a preliminary on classical PAC-Bayes before introducing the online version in Section 2 would be super helpful, and a comparison between Theorem 2.2 and the classical PAC-Bayes theory is also needed. Moreover, could the authors provide the definition of absolute continuity (line 88)?\n\n\n2. I have the following questions: \n\nCan the authors provide more details about the LHS of the inequality in Theorem 2.2? More specifically, about the expectation inside (the expectation is taken wrt which random variables)?\n\nIf the data is iid, does Theorem 2.2 reduce to the classical conclusions? \n\nWhy does Corollary 3.1 suggest the algorithm in (1)? If we perform the algorithm in (1), are there any follow-up results (like regret)? And how does one compute the expectation in (1) in practice? (Line 181) Compared to OGD, is the algorithm more related to OMD or exponential gradient? \n\nIt seems to me that Corollary 3.1 and 3.2 are just Theorem 2.2 with different notations, and I'm not sure if it is necessary to repeat the same conclusion several times. \n\nCould the authors be more clear in the introduction about the contributions of Section 4? \n The questions are listed above. Yes.
considers the case where the posterior at each step equals the current prior and provides guarantees for the posterior applied to the current sample.\nAs another special case, a disintegrated PAC-Bayes bound with isotropic Gaussian measures is proved (Cor. 4.1) and is used to derive Alg. 1. Numerical experiments show preferable performance compared to OGD. \n ### Strengths\nI believe the work is original and well-motivated. \nThe extension of the PAC-Bayes framework to the online setting (and the analyzed performance measure) is logical and elegant. The framework requires very limited assumptions, allowing general data distributions and history-dependent posteriors. \n\nThe paper is well-written and clear.\nThe authors provide an adequate survey of related work.\n\nAll in all, I think the paper has the potential to make important contributions to the PAC-Bayes literature.\n\n### Weaknesses\n1. Regarding the experimental results, I think it would be interesting to add a comparison with the prediction of the bounds.\n2. I have a few issues (see questions below) that I would like the authors to clarify in their rebuttal.\n\n 1. \nIn Thm 2.2., I found it hard to understand the conditions on the posterior sequence and its relation to the sample set $S$. At first reading, the statement of Thm 2.2. seems to me to be a “pointwise” bound per posterior sequence, instead of a simultaneous bound over all posteriors (as in standard PAC-Bayes bounds). This formulation means that the bound does not hold for sample ($S$)-dependent posteriors (to clarify, by “pointwise” I mean the statement is “for any posterior sequence, w.h.p. over the sampling of $S$, the bound holds”, in contrast to “simultaneous” where the statement should be “w.h.p. over the sampling of $S$, for any posterior sequence, the bound holds”). If this is the case, I believe the paper should discuss this and explain the challenges in obtaining a “simultaneous” bound in this setting.\n\n2. \nReading the proof of the theorem, it seems that $Q_i$ is defined as a stochastic kernel (Def. C.1.) as in Rivasplata et al., 2020. This is not mentioned at all in the main body of the paper. To my understanding, the formulation in the main body differs from the one used in the proof. Shouldn't the main body of the paper use this definition?\n\n3.\nDoes the algorithm derived in the paper have ensured generalization guarantees? \nI.e., does the stochastic kernel that minimizes the RHS of the bound of Thm 2.2, given a dataset, have the generalization guarantees predicted by Thm 2.2?\n In usual PAC-Bayes bounds, this is true, since the bound simultaneously holds for all posterior distributions. I wonder if the same conclusion can be drawn here.\nAs stated in “Rivasplata, Omar. PAC-Bayesian Computation. Diss. UCL (University College London), 2022”, Sect 3.2, the stochastic-kernel-based bound is not suitable for optimization. I have not found a discussion of this in the paper in the algorithm and experiments sections. \n\n4. \nIn line 143 and in Remark 3.2, the prior is said to be $\\hat{Q}_i$, while in Eq. 1, the prior is $P_i$. What is the reason for the discrepancy? \n\n5. Can the authors expand on the significance of Cor. 3.3.? How is it different from applying Chernoff's bound per step?\n\n6. \nLines 181-191 (Analogy with Online Gradient Descent) - the claims here can be written more clearly and explicitly (e.g., can you show explicitly how the analogy holds? 
what is the relation between lambda and mu?)\n\n The limitations have been discussed above.", " The paper employs the PAC-Bayesian proof methodology of Rivasplata et al. 2020 to provide bounds for online learning. To adapt the proof of Rivasplata et al. 2020 to the online learning setting, the loss is conditioned on the filtration of previous samples, and the hypothesis space and space of posterior measures are replaced by Cartesian products thereof. The moment generating function is bounded by recursively applying Hoeffding's inequality for each $i$. In Section 4, the paper discusses a disintegrated version of the bound for a hypothesis sampled from the posterior $Q$ instead of the expectation over $Q$. Inspired by the presented PAC-Bayesian online learning bounds, the paper proposes an online learning algorithm which uses a Gibbs posterior that is iteratively updated. The learning method is evaluated on four simple binary classification and linear regression settings. \n **Strengths:**\n* Proofs seem to be correct\n* Code provided\n* Extending the PAC-Bayesian methodology to the online learning setting is a relevant open problem\n**Weaknesses:**\n* Questionable experiment methodology:\n * No error bars.\n * How the experiments were conducted and how the hyper-parameters are chosen is hardly described.\n* The bound is extremely loose and diverges with m instead of converging to the empirical risk; neither of these points is discussed in the paper.\n* The clarity of the paper could be significantly improved\n* A lot of the language in the paper is vague. First example in line 1 of the intro: 'Batch learning is somewhat the dominant learning paradigm'\n* As described in the summary above, the proof methodology is a straightforward extension of Rivasplata et al. 2020 to the online setting.\n* The paper has very few takeaways that would make it worthwhile to read.\n\nHere, I elaborate a bit more on what I think the central problem with the paper is:\nTo my best understanding, the critical point in the paper is the trivial upper bound of the moment generating function (MGF) which results from applying Hoeffding's inequality m times. The resulting term $\\frac{\\lambda m K^2}{2}$ grows linearly with $m$. The fact that $\\lambda$ appears in the numerator here but in the denominator of $\\frac{\\log(1/\\delta)}{\\lambda}$ makes it impossible to make both terms converge. One of the two terms necessarily diverges. To my best understanding, the trivial upper bound of the MGF also leads to the fact that the authors can set $Q=P$. Typically, to obtain a meaningful bound on the MGF, we either have to assume that $P$ is independent of the samples or some weaker criterion such as differential privacy.\n\nAltogether, this paper is clearly below the standard of what I would expect from a NeurIPS paper.\n\n* How would you choose $\\lambda$?\n* All the related work I am familiar with that uses data-dependent priors requires some additional assumption such as differential privacy. In Corollary 3.3 you set $P = Q$ such that the KL is 0. For instance, the differential privacy condition would not allow that. Can you explain why this is possible in your setting?\n The mentioned limitations and the fact that the bound constitutes a trivial upper bound are not discussed.\nEthical considerations or negative societal impact discussions are not necessary since the work is theoretical.
In this work, the authors develop the machinery for analyzing learning algorithms that are characterized by a *sequence* of priors and posteriors, where data need not be iid and can be processed one by one, i.e., an \"online\" PAC-Bayes scenario.\n\nThe substantive content of this paper includes a general-purpose error bound that can be used in the scenario described above, and applications of this bound to derive learning algorithms. Essentially, if we have a sequence of $m$ random data points, there will be $m$ ideal objective functions of interest (i.e., one expected loss for each random data point), and the main technical result of this work bounds the sum of these \"Gibbs risks\" (taking expectation of each expected loss WRT the random draw from the corresponding posterior) in a typical PAC-Bayes style, such that we have a high probability bound (over the draw of the data), which holds uniformly in the choice of the posteriors. The form of the upper bound is essentially the sum of $m$ traditional PAC-Bayes bounds, except that the empirical risk term for each bound is based on a single point, and thus it is possible to derive Gibbs posteriors that are optimal in the sense of minimizing the terms in the upper bound. The authors consider several specialized settings of priors, posteriors, and distribution families, drawing links between the procedures derived from their framework and existing techniques. Overall, the writing is quite clear, the paper is well-structured and it is easy to parse the main results of this work. The claims by the authors are sound, as they do not oversell their main result, and are quite thorough in their comparison with the existing literature.\n\nSince the idea of an online variant of PAC-Bayes itself is not particularly novel, the novelty and value of this paper should be in the execution, i.e., the means by which the authors obtain their general purpose online PAC-Bayes bound. However, the key ideas underlying their approach are completely swept under the carpet in the main paper. All the reader has to go on is a brief comment after Thm 2.2 telling us that the authors made use of a \"batch to online\" technique, leveraging much of the existing PB machinery in a sequential fashion. In my opinion the nature of the online PB bound obtained, the assumptions made (e.g., bounded losses), and how these points relate to the techniques used are the most important and interesting elements of a work of this nature, yet in its current form, the paper just \"gifts\" the reader with a new bound and discusses the most direct consequences of this bound.\n\nThis makes it somewhat difficult for me to evaluate the novelty and significance of this work. The authors lucidly relate the consequences of their main bound to the existing literature, but the more fundamental question of how one should/could make an online PB framework and the merits/demerits of the current approach (set against other possible approaches, if any) feels like a sidenote here. Let me expand just a bit more on this point.\n\nAssuming for now that we are satisfied with bounding the sum of expected losses, I feel like the bound in Thm 2.2 seriously lacks context. For example, if we assume that the priors are all data-independent, one could naively \"stack\" up $m$ traditional PB bounds ($m$ good events) applied to batches of size 1, sum these bounds, and then take a union bound. 
This would result in a $\\log(m/\\delta)$ term which is of course worse than the current $\\log(1/\\delta)$ term, but aside from the data-free priors, it is simple and yields a bound very similar to that of Thm 2.2, plus existing techniques can be easily used to deal with unbounded losses, etc. Of course, I know that data-dependent priors are a critical element of this work, but if this naive technique cannot be easily modified to allow for data-dependent priors, then I think that is a point the authors should communicate to the reader, and show how their approach alleviates this issue.\n Why does the bounded loss assumption arise, and what consequences are there to weakening this assumption? Considering my previous comments about naively stacking up traditional PB bounds, I would like to know if there is something inherent in allowing for data-dependent priors that means we need to naively bound a sum of (squared?) losses by $mK^{2}$.\n\n**Update:** the authors' response and revision have shed light on several key questions I had, and I have revised my score. The authors are quite upfront about the limitations of their current approach (cumulative losses rather than regret, bounded losses, etc.), which is great, but as I mentioned earlier, there is virtually no information about why these limitations arise." ]
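The $\lambda$-tuning step that settled the main disagreement above is easy to verify numerically. A minimal sketch in Python; the values of `m`, `K`, and `delta` are illustrative placeholders, not numbers from the paper:

```python
import math

# Illustrative values (not from the paper): horizon m, loss bound K, confidence delta.
m, K, delta = 500, 2.0, 0.05

def slack(lam: float) -> float:
    # The two lambda-dependent terms of Corollary 3.3 discussed in the rebuttal.
    return lam * m * K**2 / 2 + math.log(1 / delta) / lam

# Closed-form minimizer quoted in the rebuttal: lambda* = sqrt(2 log(1/delta) / (m K^2)).
lam_star = math.sqrt(2 * math.log(1 / delta) / (m * K**2))

# At lambda*, the slack equals 2 * sqrt(log(1/delta) * m * K^2 / 2), i.e. O(sqrt(m)).
closed_form = 2 * math.sqrt(math.log(1 / delta) * m * K**2 / 2)
assert abs(slack(lam_star) - closed_form) < 1e-9

# A coarse grid around lambda* confirms it is the best candidate.
grid = [lam_star * 10**e for e in (-1, -0.5, 0, 0.5, 1)]
assert min(grid, key=slack) == lam_star
print(lam_star, slack(lam_star))
```

Since the optimized slack grows like $\sqrt{m}$, dividing both sides of the corollary by $m$ yields the $O(1/\sqrt{m})$ average-loss rate claimed in the rebuttal.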
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 2, 3 ]
[ "cxgylGE4mhU", "4uRET1Ug_v", "4uRET1Ug_v", "Z3_eBseWU16", "z43_12kkuDe", "zAFYOu1pJGi", "E_BPeHAGGc", "V9YQMKY0R374", "nips_2022_4pwCvvel8or", "mCGA5eD-WoX", "0QQv7-q_TFr", "Aljnl6LUvf7", "jj6rbmNtTMO", "jbHz_C163vK", "zFRX-jWPHWP", "nips_2022_4pwCvvel8or", "nips_2022_4pwCvvel8or", "nips_2022_4pwCvvel8or", "nips_2022_4pwCvvel8or", "nips_2022_4pwCvvel8or" ]
nips_2022_siG_S8mUWxf
Learning Physical Dynamics with Subequivariant Graph Neural Networks
Graph Neural Networks (GNNs) have become a prevailing tool for learning physical dynamics. However, they still encounter several challenges: 1) Physical laws abide by symmetry, which is a vital inductive bias accounting for model generalization and should be incorporated into the model design. Existing simulators either consider insufficient symmetry, or enforce excessive equivariance in practice when symmetry is partially broken by gravity. 2) Objects in the physical world possess diverse shapes, sizes, and properties, which should be appropriately processed by the model. To tackle these difficulties, we propose a novel backbone, called Subequivariant Graph Neural Network, which 1) relaxes equivariance to subequivariance by considering external fields like gravity, where the universal approximation ability holds theoretically; 2) introduces a new subequivariant object-aware message passing for learning physical interactions between multiple objects of various shapes in particle-based representation; 3) operates in a hierarchical fashion, allowing for modeling long-range and complex interactions. Our model achieves on average over 3% enhancement in contact prediction accuracy across 8 scenarios on Physion and 2$\times$ lower rollout MSE on RigidFall compared with state-of-the-art GNN simulators, while exhibiting strong generalization and data efficiency.
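The 'subequivariance' of the abstract can be read as equivariance restricted to the transformations that leave the external field fixed. A plausible formalization, inferred from the $O_{\vec{\mathbf{g}}}(3)$ notation appearing in the author discussion later in this record (the paper's own definition is authoritative):

```latex
% Hedged formalization: the subgroup of O(3) that stabilizes the field g,
% and the relaxed (sub)equivariance constraint on a map phi.
O_{\vec{g}}(3) = \{\, R \in O(3) : R\,\vec{g} = \vec{g} \,\}, \qquad
\varphi(R\,\vec{x}) = R\,\varphi(\vec{x}) \quad \text{for all } R \in O_{\vec{g}}(3).
```

With $\vec{g}$ the gravity direction, this keeps rotations and reflections about the vertical axis while dropping the rotations that gravity physically breaks.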
Accept
Overall this is an interesting paper. It proposes a new formulation of the equivariant graph neural network, the subequivariant GNN. Reviewers agree that the proposed idea could be useful to the community, albeit with a perhaps narrow application scope. So on the novelty side, this paper is okay. The biggest concern among the reviewers is about the experiments, i.e., mostly the fairness of the comparisons. I feel the authors did a reasonable job of explaining why the current baselines were chosen and provided additional experimental evidence. The authors could take into account the comments from the reviewers to improve the overall presentation of the paper.
val
[ "EObPDK0NUfw", "F9Bl-zVSAmb", "uLqjY8GvZJG", "TmVBxAWm8-9", "e96rKqnz0ck", "bu08CGkF2bs", "KhnzGe5ELT", "XMX2Cao43uE", "Qj8XDJijv6", "tXcXYpGCRhw", "nGtfWpuIKw", "Ry3dMCYsPaP", "J4d52QYJGaE", "AILcckXgk_C", "6K2GLNCdjVg", "zGjmHIyLi80", "TUsOPVtb59", "hVOV5r8CiJ-", "qYfXXzsu0ZG8", "xrkuvyEWdZK", "kYsa7ajBQeF", "G9LiDeTwwQg", "t-BTC1vFagx", "XlJLoqYwea-", "mQmWUbjk3i6C", "vrawTSDm-Ku", "tKslL-EMs2-", "1goNl2HHkR", "O5Sh67DRtBI", "Z2JLE1XZwEI", "4y3fYZ6ZXRma", "eBmFO1tEJnd", "CUtRc0f1cZq", "NBB1xZ-9r2G", "lT_sXJXICqP", "7ovg1IKQrEs", "qg9kA51oOC" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer bh6f,\n\nThank you very much! We really enjoy the discussion with you, during which your insightful comments have helped greatly improve the paper. Thanks again!\n\nBest, \\\nAuthors", " Thank you for the discussion. I will revise my score.", " \n> **I personally think this is quite an impactful result, although it is expected that the other models would perform poorly.**\n\nThank you for the supportive comments! We agree that this result strengthens the validity of the model, especially on the design of subequivariance. We have updated this observation in the revised manuscript. Please refer to our response below for a detailed list of revisions we have made to enhance the readability of the paper.\n\n> **I think my only main issue remaining is how much extra conversation was required to understand the approach and how much of a re-write is required so that the paper is easy to understand. In addition, the extra writing to compare to all the different approaches within this conversation as I think this is really important to understand the novelty and contribution of this paper.**\n\nThank you for the comment! We agree that there are several important points mentioned in these discussions with you that are vital for the readers to better understand the contributions of this paper. To reflect these crucial contents in this discussion, we have made the following revisions to the paper:\n\n* We add discussions to those leveraging restricted representations to obtain equivariance on subgroups [1, 2] as well as general approaches like EMLP [3] in a separate paragraph in Related Work. We also discuss the differences between our approach and these works. (Line 44-48)\n* We update the results and discussions of our implemented Steer-SE(2)-GNN in Experiment. (Line 310-313)\n* We clarify the careful considerations on incorporating object information into the message passing. This part better clarifies how our approach differs from a simple feature space choice. (Line 175-179)\n* We add the connection between edge separation and automorphism graph networks [4, 5, 6]. (Line 200-203)\n* We update the experiment of generalization toward rotations along a non-gravity axis in Experiment. (Line 324-333)\n\nThere are also corresponding revisions in Appendix that provide necessary details of these updated contents, e.g., the implementation of our adapted Steer-SE(2)-GNN.\n\nWe sincerely thank the reviewer for recognizing the novelty of this paper and having these discussions with fruitful suggestions! We hope the revisions we made to the manuscript according to these discussions have improved the presentation and made it easier for readers to understand. Please do not hesitate to contact us if there are other clarifications we can offer.\n\nThank you for your time!\n\nBest, \\\nAuthors\n\n[1] Weiler, M. and Cesa, G., 2019. General e (2)-equivariant steerable cnns. Advances in Neural Information Processing Systems, 32. \n\n[2] Cesa, G., Lang, L. and Weiler, M., 2021, September. A Program to Build E (N)-Equivariant Steerable CNNs. In International Conference on Learning Representations. \n\n[3] Finzi et al. A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups. In ICML.\n\n[4] de Haan, P., Cohen, T.S. and Welling, M., 2020. Natural graph networks. Advances in Neural Information Processing Systems, 33, pp.3636-3646. \n\n[5] Thiede, E., Zhou, W. and Kondor, R., 2021. Autobahn: Automorphism-based graph neural nets. 
Advances in Neural Information Processing Systems, 34, pp.29922-29934. \n\n[6] Mitton, J. and Murray-Smith, R., 2021. Local Permutation Equivariance For Graph Neural Networks. arXiv preprint arXiv:2111.11840.\n", " \n> **One further question, if time permits, how does your approach then generalise to a new input domain...If the model trained on some of these examples is then tested on a new manifold, does it generalise well? Also, has this been shown in the paper (please point me to it if it has)?**\n\nIn the paper, experiments are conducted on physical scene data with gravity. We may not be able to evaluate such a scenario, where the force fields are highly irregular and non-uniform, since the discussion period is running out of time. However, we do believe that our approach can generalize to these scenarios, as it is designed to incorporate the force field into the symmetry modeling.\n\nBesides, our experiment presented in the previous discussion (about rotation around a non-gravity axis) can actually be treated as one specific instantiation of this cross-domain generalization. In fact, for our model, the setup in this experiment, which rotates the scene while still keeping gravity vertical, is equivalent to keeping the scene as it is while rotating the direction of gravity by the same angle. In this sense, the previous results imply that our model can generalize to a new domain (a new configuration of the force field) different from the training data. We will keep investigating more complicated scenarios that may go beyond simulations on physical scene tasks.\n\n> **When generalising to a new domain such as this does the method require the input domain to be provided to the model so that it has the force field? How much cost is there in computing this force field?**\n\nYes, we require the force field to be provided. Learning the force field itself is indeed another challenging task. Our approach here is to model the effect of a given force field on the physical dynamics of multiple interacting objects.\n\n> **Finally, if you have this force field how would this approach compare to a more classical approach of solving the task where I just compute the forces on the object at each step and predict its motion (given I assume this would be possible due to having access to the force field)?**\n\nThe biggest challenge here is that even if the force field is given, how it acts on the dynamics of objects still remains unknown and very challenging to learn. In particular, apart from the force field, there are also internal forces in physical systems, like friction, collision, support, etc. The dynamics of an object are influenced not only by the force field but also by these internal forces. Our model is designed to learn the complicated effect of **both the force field and the internal forces** between objects to accurately predict the dynamics.\n\nHere is an example. Imagine there is a cube placed on a table. The cube is affected by both gravity and the support force, and it remains still. However, if the table is removed, the cube will fall since the support force no longer exists. 
This implies that simple approaches like `just compute the forces on the object at each step and predict its motion` cannot generalize, since the object is affected by a combination of the force field and interactions with other objects, and the interactions with other objects are not provided but must instead be learned by the model.\n", " I personally think this is quite an impactful result, although it is expected that the other models would perform poorly. It would have been good to see the extra model you added (Steer-SE(2)-GNN), given that if implemented correctly it should handle gravity well. I can acknowledge this is maybe a pain though, and you have already demonstrated your method works here.\n\nI think my only main issue remaining is how much extra conversation was required to understand the approach and how much of a re-write is required so that the paper is easy to understand. In addition, extra writing is needed to compare to all the different approaches discussed within this conversation, as I think this is really important to understand the novelty and contribution of this paper. ", " Thank you for the clarifications. I think I am slowly getting a better understanding of the approach.\n\nOne further question, if time permits: how does your approach then generalise to a new input domain? Here I am thinking of the example you gave with the ball rolling on a smooth manifold, which has some local symmetries but no global symmetries. If the model trained on some of these examples is then tested on a new manifold, does it generalise well? Also, has this been shown in the paper (please point me to it if it has)? When generalising to a new domain such as this, does the method require the input domain to be provided to the model so that it has the force field? How much cost is there in computing this force field? Finally, if you have this force field, how would this approach compare to a more classical approach of solving the task where I just compute the forces on the object at each step and predict its motion (given I assume this would be possible due to having access to the force field)?\n\nI think providing more intuitive examples such as those discussed here in the paper may help a reader understand the method (at least it has for me).", " Dear Reviewer tCa3,\n\nThank you very much for the detailed suggestion! We have modified Fig. 8 to highlight the key components with red circles. We have also revised the manuscript in Sec. 3.2 to refer the readers to Fig. 8 for the detailed illustration of the proposed structure.\n\nWe enjoyed the discussions and thank you again for these helpful comments that have greatly improved the paper!\n\nBest, \\\nAuthors", " Thank you for answering the questions on generalization modeling. Here is a follow-up comment.\n\nThe reviewer suggests 1) highlighting the key differences and key design of SGNN in Fig. 8, and 2) referring to Fig. 8 in the appendix when illustrating the proposed GNN structure.", " > **I think the theoretical contribution of natural graph networks is very similar to yours....If they are the same, then this could just be recognised, and the novelty of the paper comes in creating a specific network for the tasks considered.**\n\nWe agree that the edge separation has some connections to natural graph networks, in that they both aim to incorporate different kernels for different types of interactions. In this sense, SGNN is a specific and effective instantiation for the physical simulation task, with the help of our designed hierarchical message passing scheme. 
We will add the above explanations to the revised version.

> **Surely you could still have a small rotation around a non-gravity axis, which would amount to having the dominoes placed along a slight incline, while gravity still acts vertically. This was what I was trying to suggest; would this be possible? I would be interested to see how each model generalized to these settings.**

Sorry for the misunderstanding, and thank you very much for further elaborating on this point. We agree that this is a very meaningful experiment for evaluating whether these models actually learn the effect of gravity. In light of this, we conduct extra experiments by applying a rotation around a non-gravity axis, resulting in scenarios where the dominoes are placed on an incline while gravity still points vertically downwards.

To facilitate viewing, the demonstrations are provided in `rotate.mp4` in this [anonymous link](https://drive.google.com/drive/folders/1NRBfwNk9yLMNii88ep0c0U8GquMKQeUb?usp=sharing). It is also worth noting that all models are trained only on the original data (horizontal table with vertical gravity), and none of them has seen any scenario arranged like these. We have the following observations.

* Very interestingly, SGNN generalizes well to these novel scenarios and reasonably simulates the effect of gravity. In particular, the domino at the bottom starts to slide down along the table, driven by gravity. The dominoes at the top reach an equilibrium between friction and gravity and keep still. The small bottle placed on the table also falls down due to gravity.
* EGNN, as an E(3)-equivariant model, does not perceive the changes in the scenario, producing the same trajectory as if the table were horizontal. GNS and DPI, by not incorporating rotation symmetry, do not properly learn the effect of gravity either.

This experiment interestingly reveals that our SGNN is able to learn how gravity acts on physical dynamics effectively from data and can thus generalize to novel scenes, verifying the validity of our motivation and design of subequivariance. We are willing to provide the above demo and explanations in the revised paper.

*Thanks again for your insightful suggestions. Please do not hesitate to contact us if there are other clarifications or experiments we can offer.*

*Thank you for your time!*

Best, \
Authors

---

> **Am I correct in thinking that just concatenating them would be the same as symmetry breaking, because you would just be ignoring the symmetry associated with the vector features?**

Yes. If one simply concatenates them (as done by GNS and DPI), equivariance will no longer be maintained, and this is indeed why scalar and vector features should be treated differently.

> **How does this differ from, say, a rotation equivariant network, where the first irrep is the scalar feature and the other irreps are the vector features?**

The irreps-based rotation equivariant networks resort to computing spherical harmonics, which is very computationally expensive. Moreover, the hidden dimension grows significantly with the order of the irreps features computed in the network.

Our method of ensuring equivariance belongs to the scalarization family, which leverages the inner product of geometric vectors [1, 2] (a minimal sketch is given below). The method is easy to implement, highly efficient, and has better scalability toward large systems than the irreps-based equivariant networks.
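As an illustration of this scalarization idea, the sketch below derives rotation-invariant scalars from inner products and uses them only to rescale an equivariant direction; the names (`phi`, `scalarize_message`) are hypothetical, and this is a simplified EGNN-style message, not the exact SGNN layer.

```python
import torch
import torch.nn as nn

phi = nn.Sequential(nn.Linear(3, 16), nn.SiLU(), nn.Linear(16, 1))

def scalarize_message(x_i, x_j, v_i):
    """O(3)-equivariant message via scalarization.

    x_i, x_j, v_i: position and velocity vectors of shape [3].
    The scalars come from inner products, so they are invariant, and
    rotating all inputs by the same orthogonal matrix rotates the
    output accordingly.
    """
    rel = x_i - x_j
    inv = torch.stack([rel @ rel, rel @ v_i, v_i @ v_i])  # rotation-invariant scalars
    return phi(inv) * rel  # learned rescaling of an equivariant vector
```

No spherical harmonics or irreps bookkeeping are needed, which is what keeps this family of methods cheap on large particle systems.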
Interestingly, our approach also comes with a strong theoretical guarantee of universality, as depicted in Theorem 1 and Appendix A.1.

[1] Satorras et al. E(n)-equivariant graph neural networks. ICML 2021.

[2] Villar et al. Scalars are universal: Equivariant machine learning, structured like classical physics. NeurIPS 2021.

---

Dear Reviewer bh6f,

We really enjoy communicating with you and appreciate your efforts. We provide detailed explanations to your raised questions below.

> **Is the external force field not tractable to calculate? I assume in the case of gravity this is straightforward as it is just breaking the rotational symmetry that is not around the z-axis.**

Yes, the symmetry implied by the external force field might be difficult to infer, especially when the field is not uniformly distributed. Moreover, some fields possess only local symmetry rather than a global symmetry. We give detailed examples in the answer to the next question.

> **Also, if there is a non-uniformly distributed force field in the space that breaks rotational symmetry, I don't follow what symmetry would be left; could you please elaborate and provide an intuitive example of this?**

We are glad to offer more explanations. Indeed, our theory has broader implications and is not restricted to uniformly distributed force fields like gravity. In particular, the force field can possess local symmetry even when there is no clear global symmetry. Here is an example.

Consider a ball rolling on a smooth and non-flat slope (which can be modeled as a manifold) under the effect of gravity. Due to the non-uniform curvature of the manifold, the tangent force field (formed by the resultant of gravity and the support force) is also non-uniform. Clearly, such a field is not globally symmetric.

Now, consider a locally flat surface on the manifold. It is straightforward to see that the movement of the ball within this region follows symmetry (translation and rotation). This example illustrates that certain local symmetries may exist in force fields and should also be captured by the model.

It is hard (or at least exhausting) to derive the symmetry for each point on the manifold, hence the group restriction methods are NOT easy to apply. Instead, our formulation covers this scenario well by making use of the force field $\vec{\mathbf{g}}$ as an extra input, where $\vec{\mathbf{g}}$ does not necessarily need to be globally equivariant.

> **I agree a more general method for finding the symmetries is beneficial. How does your method compare to the EMLP approach?**

Thanks for raising this point. Our approach differs from EMLP in the following aspects (a sketch follows this list):

* EMLP constructs equivariant MLPs for arbitrary matrix groups by solving the group constraint. Our approach tackles the E($n$) group and its subgroups like $O_{\vec{\mathbf{g}}}(3)$ by leveraging the force field vector $\vec{\mathbf{g}}$.
* Our approach is easier to implement and computationally more efficient than EMLP. Specifically, EMLP requires solving the group constraints using SVD, yielding a time complexity of $O(n^3)$, where $n$ is the input dimension. By contrast, our approach mainly requires computing inner products, which scales linearly with the input dimension $n$.
* EMLP requires specifying the detailed group to impose the equivariance constraints. Our approach only needs to take the force field vector $\vec{\mathbf{g}}$ as input. For scenarios where the group is intractable to specify (like the example of local symmetry provided in the previous question), our approach is more flexible.
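To make the last point concrete, here is a minimal sketch of how the force field enters the invariants: stacking $\vec{\mathbf{g}}$ next to the geometric vectors and taking inner products yields features invariant under $O_{\vec{\mathbf{g}}}(3)$ (orthogonal transformations fixing $\vec{\mathbf{g}}$) but deliberately not under all of $O(3)$. The function name is hypothetical, and the sketch simplifies the construction used in the paper.

```python
import torch

def subequivariant_invariants(Z, g):
    """Z: [3, m] stacked geometric vectors; g: [3] external force (e.g. gravity).

    For any orthogonal O with O g = g, the matrix [O Z, g] equals O [Z, g],
    so ([O Z, g])^T ([O Z, g]) = [Z, g]^T [Z, g]. A rotation that tilts g,
    however, changes the Z^T g entries, which is exactly how full O(3)
    invariance is relaxed to O_g(3).
    """
    Zg = torch.cat([Z, g.unsqueeze(-1)], dim=-1)  # [3, m + 1]
    return Zg.T @ Zg  # [m + 1, m + 1] invariant features
```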
> **What is the key difference between your model and this new one which leads to such a drastic difference in performance?**

Our model features additional proposed components that our ablation study verifies to be very important, including the object geometric information as well as the hierarchical message passing scheme. Unfortunately, it is not straightforward to incorporate all of these components into the new model "Steer-SE(2)-GNN", which could explain the drastic difference in performance.

---

Thank you for the detailed response and for sharing additional results. I think these are interesting and worth discussing. However, I still share some of the original concerns and some concerns raised by other reviewers regarding limited applicability and incremental work from EGNN in terms of novelty. As of now, I shall keep the score as it is. However, I'll keep these additional points mentioned in the review response and consider them in the reviewer-meta-reviewer discussion period.

---

> **I believe this is a problem. Because a broken object getting treated with the same interactions as the original object leads to unrealistic healing.**

Sorry for the misunderstanding, and thank you for the detailed explanations. We agree that this is a limitation and have included it in the paper. We will keep investigating potential solutions to this problem as future work.

*Thanks again for your insightful suggestions. Please do not hesitate to contact us if there are other clarifications we can offer. We would really appreciate it if you could **raise your rating**.*

*Thank you for your time!*

Best, \
Authors

---

> **It is not clear to me why RK1 is used as the integrator.**

We basically employ RK1 to control the computational cost. As shown in the experiments in our previous response (please see Q3), even adding the Hamiltonian update with an RK1 integrator during training already induces a ~5x computational overhead. Further incorporating higher-order integrators like RK4, or symplectic integrators, would increase the cost by several more times, making it hard to scale to large systems like those in Physion. As also mentioned by the reviewer, there are existing works that use RK1 during training. Considering the trade-off between numerical precision and computational overhead, we adopted RK1 during training in the experiment added in this discussion.

---

We are truly thankful for your constructive comments. We really enjoy communicating with you and appreciate your efforts. We provide detailed explanations to your raised questions below.

> **I understand that the authors do not aim to incorporate physical laws explicitly and aim to learn more from a data-driven perspective. However, the above statement is only partially correct. While the general form of Hamiltonian equations are applicable for energy conserving systems, the equations can be easily modified to incorporate dissipation...**

Sincere thanks for the correction. We agree with the reviewer that Hamiltonian NNs can be modified to take dissipative systems into consideration and will revise the corresponding statement. We also thank the reviewer for pointing this out, which encourages us to incorporate Hamiltonians into the SGNN framework as promising future work.
At the current stage, we identify several challenges in adapting Hamiltonians to physical scene simulation tasks like Physion:

* The dissipative term $g(p, q)u$ [1] pointed out by the reviewer requires careful adaptation to this task. In particular, beyond a trivial MLP implementation, several important inductive biases need to be considered. For example, if $g(p, q)$ models forces exerted at the object level (like drag), it is necessary for $g(p, q)$ to take into consideration the effect of multiple particles rather than a single one; in this case, $g(p, q)$ would probably be a GNN with multiple steps of message passing as well. Besides, if $g(p, q)$ models friction, it might also need to be equivariant (or, more precisely, subequivariant), since the generation of friction also follows physical symmetry.
* Computational overhead. The Hamiltonian update requires taking the gradient of $H$ w.r.t. $p$ and $q$, as well as leveraging certain integrators to ensure the desired numerical precision. These operations add significantly to the computational cost of both training and inference, making the approach less scalable, especially on large systems like Physion.
* Different motivation. In many real-world scenarios, physical quantities like $p$ and $q$ may not be provided, or may even be unmeasurable by visual perception (e.g., cameras). Our goal here is to design a high-fidelity dynamics simulator with 3D information $\vec{\mathbf{x}}$ as input.

To summarize, we agree with the reviewer that incorporating Hamiltonians is an interesting direction to investigate further, and we appreciate the fruitful suggestions on this topic. Nevertheless, injecting a Hamiltonian poses new challenges for this task, and our preliminary experiments (in Q3) in the previous discussion also verify that simply building a Hamiltonian into EGNN and SGNN does not lead to a desirable gain in performance. We leave this for future work and have added this point in Section 5 of the paper.

[1] Gruver et al. Deconstructing the Inductive Biases of Hamiltonian Neural Networks. ICLR 2022.

> **While video demonstrations show good visualizations, they are only qualitative in nature. Rollout MSE is reasonable for non-chaotic systems. I would rely more on conserved quantities such as energy, force equilibrium, to show the trajectory is realistic.**

Thanks for the comment. Firstly, we would like to mention that the evaluation metrics we adopt in this paper are exactly those endorsed by the original benchmarks [2, 3].

The reviewer raises a great suggestion of plotting the time evolution of certain quantities. For this purpose, we additionally compute the total energy (kinetic energy + gravitational potential energy) for the simulated Dominoes systems displayed in our demo video (a sketch of this computation is given below). To facilitate viewing, the resulting figures are provided in `energy.pdf` in this [anonymous link](https://drive.google.com/drive/folders/1NRBfwNk9yLMNii88ep0c0U8GquMKQeUb?usp=sharing).

We find, interestingly, that SGNN closely tracks the value of the ground-truth energy, achieving the lowest error among all baselines. This verifies that the simulated trajectories are not only visually reasonable but also physically valid. In particular, SGNN simulates well even in those intervals when the system energy changes drastically (e.g., due to inelastic collisions between dominoes).
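For reference, the total energy used in these plots can be computed from the particle states as in the following sketch; it assumes uniform particle masses and a constant gravitational acceleration along the vertical axis, which is a simplification of the benchmark setup, and the function name is hypothetical.

```python
import torch

def total_energy(x, v, m=1.0, g=9.81):
    """x, v: [N, 3] particle positions and velocities (z is the vertical axis).

    Returns kinetic + gravitational potential energy under uniform
    particle mass m and constant gravity g pointing along -z.
    """
    kinetic = 0.5 * m * (v ** 2).sum()
    potential = m * g * x[:, 2].sum()
    return kinetic + potential
```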
It is also worth noting from the figures that the systems in Physion are typically dissipative, owing to friction and collision.

[2] Bear et al. Physion: Evaluating physical prediction from vision in humans and machines. NeurIPS 2021.

[3] Li et al. Visual grounding of learned physical models. ICML 2020.

---

I think the theoretical contribution of natural graph networks is very similar to yours. Having a different message passing kernel for non-isomorphic edges is the same as having a different edge between different objects. The isomorphism group of the edge is driven by the neighbouring objects. I don't see the difference from the description provided; please elaborate. If they are the same, then this could just be recognised, and the novelty of the paper comes in creating a specific network for the tasks considered.

I think I already said I am not familiar with the given tasks. Despite this, I can only assume Dominoes is, say, a set of dominoes set up and then pushed over, or something like this, and the task is to predict the resulting dynamics. Rotations around the gravity axis leave the task identical, just in a different rotation. Surely you could still have a small rotation around a non-gravity axis, which would amount to having the dominoes placed along a slight incline, while gravity still acts vertically. This was what I was trying to suggest; would this be possible? I would be interested to see how each model generalized to these settings.

---

I agree that what you are doing is not just an arbitrary feature space choice, and if I suggested that, I apologize, as that is unfair.

On the other hand, it seems like you have vector feature spaces and scalar feature spaces. Therefore, they clearly should be treated differently and not just concatenated together. Am I correct in thinking that just concatenating them would be the same as symmetry breaking, because you would just be ignoring the symmetry associated with the vector features? How does this differ from, say, a rotation equivariant network, where the first irrep is the scalar feature and the other irreps are the vector features? I am trying to assess how novel the splitting of geometric information and scalar information is, as it seems strongly related to other equivariant approaches.

I do agree there is some novelty in creating a new message passing network for a specific application though.

---

Thanks for your response. I am not trying to state that the task you are solving is not challenging, but am solely focusing on the technical details of the proposed model to identify the novelty.

1. Is the external force field not tractable to calculate? I assume in the case of gravity this is straightforward, as it is just breaking the rotational symmetry that is not around the z-axis. Also, if there is a non-uniformly distributed force field in the space that breaks rotational symmetry, I don't follow what symmetry would be left; could you please elaborate and provide an intuitive example of this?

2. I agree a more general method for finding the symmetries is beneficial. How does your method compare to the EMLP approach, which is also a general approach to building equivariant networks: https://emlp.readthedocs.io/en/latest/#

3. Thank you for considering the additional experiments. It is quite surprising that the new model performs very similarly to the EGNN approach despite correctly modelling the symmetry.
What is the key difference between your model and this new one which leads to such a drastic difference in performance?

---

> If the particle comes within the $\varepsilon$ cutoff distance, SGNN does not model this as a contact between different objects, but instead models it as a particle of the object, since the particle's object label remains unchanged. This interaction is processed in the inner-object message passing stage of our hierarchical modeling.

I am not sure the authors understood the question correctly. Consider a scenario where a bullet impacts a plate. In this case, part of the plate gets broken and moves away from the original plate. My question was: if this part comes back and hits the original plate, will this interaction be modeled as a contact, or will the plate automatically *heal*? As per the authors' response, it seems that the broken part heals and gets attached to the original plate, as this interaction will be processed in the inner-object message passing. **I believe this is a problem.** A broken object getting treated with the same interactions as the original object leads to unrealistic *healing*. I don't think the authors have addressed this point. However, this is a minor problem and can be addressed as part of a future study.

---

> We employ a RK1 integrator to conduct Hamiltonian update.

It is not clear to me why RK1 is used as the integrator. RK1 is not time reversible and is non-symplectic, and hence is not energy conserving. Isn't it somewhat contradictory when a non-energy-conserving integrator is used along with the Hamiltonian equations? I noticed that several people have applied this in the literature as well. However, it is not clear how this choice is justified. I am not expecting any additional experiments here, but I would appreciate it if the authors can comment on this.

---

> For example, the Hamiltonian-based NNs are generally designed to pursue energy conservation, which, however, is usually broken by forces like friction between objects, for the case like Physion. In this sense, the way we add inductive bias into SGNN is well appropriate.

I understand that the authors do not aim to incorporate physical laws explicitly and aim to learn more from a data-driven perspective. However, the above statement is only partially correct. While the general form of the Hamiltonian equations is applicable to energy-conserving systems, the equations can be easily modified to incorporate dissipation by adding an additional term in the second equation as $\dot{p}=-\partial H/\partial q + g(q,p)u$ (see: Gruver et al., ICLR 2022), where $g(q,p)u$ is an additional forcing term with $u$ being the control parameter. For instance, in the case of a linear drag, the forcing term will be $c\dot{q}$. Thus, frictional force can be made learnable in an HNN framework.

> The video demonstrations were also provided in the supplementary material, where SGNN did produce visually realistic trajectories.

While video demonstrations show good visualizations, they are only qualitative in nature. Rollout MSE is reasonable for non-chaotic systems. I would rely more on conserved quantities, such as energy and force equilibrium, to show the trajectory is realistic.

---

Thanks again for your insightful suggestions and comments.
As the deadline for discussion is approaching, we are glad to provide any additional clarifications that you may need.

In our previous response, we carefully studied your comments and added many more experiments and analyses to address your suggestions. We summarize our responses with regard to the following aspects:

* We clearly discuss the fundamental differences between this paper and those using restricted representations, from motivation and design to performance. We have also, to our best effort, adapted and included an additional baseline to compare with this line of work.
* We elaborate on the differences between our way of injecting object geometric information and a pure feature space choice.
* We explain the physics-driven motivation and implementation of our edge separation, which clearly distinguishes it from graph automorphism.
* We give the reasons for rotating along the gravity axis: to better leverage the data for our model and the baselines while ensuring the physical correctness of the symmetry.

We hope that the provided new experiments and additional explanations have convinced you of the merits of our work. Please do not hesitate to contact us if there are other clarifications or experiments we can offer.

Thank you for your time again!

Best,

Authors

---

Dear AC and all reviewers:

Thanks again for all of your constructive comments, which have helped us improve the paper!

Although the discussion phase has been open for over three days, we have not heard any post-rebuttal response yet.

Please don't hesitate to let us know if there are any additional clarifications or experiments that we can offer. We would love to discuss more if any concern still remains. We appreciate your suggestions.

Thanks!

---

> **How does the choice of using different edges to pass between objects/particles differ from an automorphism equivariant network?**

In the raised papers [4,5,6], an automorphism is defined as an isomorphism from a graph to itself (see Eq. (1) in [4]). This concept is unrelated to our edge separation. Natural Graph Networks (NGN) [4] do allow different message passing kernels on non-isomorphic edges, which, however, is clearly different from our motivation. Inspired by physics, we use different edges to simulate different interactions between or within objects, without considering which edges are isomorphic and which are not. The models developed in [4,5,6] do not explicitly discuss different message passing between objects and particles.

[4] de Haan, P., Cohen, T.S. and Welling, M., 2020. Natural graph networks.

[5] Thiede, E., Zhou, W. and Kondor, R., 2021. Autobahn: Automorphism-based graph neural nets.

[6] Mitton, J. and Murray-Smith, R., 2021. Local Permutation Equivariance For Graph Neural Networks.

> **Why are only rotations applied around the gravity axis in the experiments?**

As we focus on predicting the dynamics of physical scenes where gravity exists, it is natural to leverage the gravity direction as an inductive bias. As shown in Physion and RigidFall, a rotated scene can easily be generated from the original samples by restricting the rotation to be around gravity (a sketch is given below). However, if we applied an arbitrary rotation (not just around gravity), the gravity direction would also have to rotate correspondingly; otherwise, gravity could, for example, end up parallel to the ground, leading to novel scenes that are unseen in the dataset. Without a ground-truth scene, it is hard to justify whether the models produce realistic results and behave correctly. Still, it is worth mentioning that our formulation of SGNN is capable of taking an arbitrary direction of the external force field to meet subequivariance.
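To illustrate the kind of augmentation discussed above, here is a minimal sketch that rotates a scene about the vertical (gravity) axis; the gravity vector $(0, 0, -g)$ is left unchanged by such a rotation, so the rotated sample remains physically valid. The function name is hypothetical.

```python
import math
import torch

def rotate_about_gravity(x, theta):
    """x: [N, 3] positions with z as the vertical axis; theta: angle in radians.

    A rotation about the z-axis fixes the gravity vector (0, 0, -g),
    so the augmented scene obeys the same physics as the original.
    """
    c, s = math.cos(theta), math.sin(theta)
    R = torch.tensor([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
    return x @ R.T
```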

---

We compare Steer-SE(2)-GNN with EGNN, EGNN-S (the subequivariant version of EGNN), and SGNN on Physion:

| | Dominoes | Contain | Link | Drape | Support | Drop | Collide | Roll |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EGNN | 61.3 | 66.0 | 52.7 | 54.7 | 60.0 | 63.3 | 76.7 | 79.8 |
| Steer-SE(2)-GNN | 59.1 | 66.7 | 54.0 | 51.1 | 62.5 | 66.7 | 77.0 | 77.3 |
| EGNN-S | 72.0 | 64.6 | 55.3 | 55.3 | 60.5 | 69.3 | 79.3 | 81.6 |
| SGNN | **89.1** | **78.1** | **73.3** | **60.6** | **71.2** | **74.3** | **85.3** | **84.2** |

Steer-SE(2)-GNN outperforms EGNN on 5 out of 8 tasks and obtains comparable results on the other 3 tasks, which indicates the reliability of our implementation. The reason why Steer-SE(2)-GNN is generally better than EGNN lies in the involvement of the gravity constraint. When this constraint is considered as well, EGNN-S surpasses Steer-SE(2)-GNN on 6 tasks. Overall, SGNN achieves by far the best performance.

We have added citations of the mentioned works [1,2] in the paper.

[1] Weiler, M. and Cesa, G., 2019. General E(2)-equivariant steerable CNNs.

[2] Cesa, G., Lang, L. and Weiler, M., 2021. A Program to Build E(N)-Equivariant Steerable CNNs.

> **How does including object information differ from being a specific feature space choice?**

There is probably some misunderstanding here. Our design of involving object information in the message passing has fundamental differences from a simple feature choice (see also the sketch below).
**(1)** We incorporate object information **in two aspects**: the geometric information ($\vec{\mathbf{C}}$) and the scalar information ($\mathbf{c}$). To ensure equivariance, these two pieces of information must be treated **differently**, as depicted in our message passing (Eq. (5-6)). By contrast, in the implementation of GNS/DPI in Physion and the work [3] raised by the reviewer, this information is simply concatenated onto the node feature, which is much closer to a "specific feature space choice".
**(2)** The object information, in our design, is constantly updated during the hierarchical modeling. Specifically, $\vec{\mathbf{C}}$ and $\mathbf{c}$ are updated in inter-object message passing (Eq. (11)). The **updated** $\vec{\mathbf{C}}'$ and $\mathbf{c}'$ are then used in the inner-object message passing.
**(3)** Our way of adding the object information is theoretically guaranteed to enhance the expressivity of our message passing (SOMP) over EGNN and GMN. More specifically, in Appendix A.3 we theoretically show that both EGNN and GMN are special cases of SGNN obtained by choosing specific forms of the MLPs $\phi_{\vec{\mathbf{g}}}$ and $\psi_{\vec{\mathbf{g}}}$ in Eq. (7-8). This also supports that our way of involving object information is not an arbitrary feature space choice.

[3] Pfaff et al., 2020. Learning mesh-based simulation with graph networks. arXiv preprint arXiv:2010.03409.
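To make the distinction tangible, the sketch below contrasts the two treatments: concatenating everything onto the scalar node feature (which silently breaks equivariance) versus keeping the geometric object summary $\vec{\mathbf{C}}$ in the vector channel and scalarizing it only through inner products. All names are hypothetical simplifications, not the actual SOMP code.

```python
import torch

def naive_concat(h_i, C_obj_flat):
    # GNS/DPI-style: object geometry flattened into the scalar feature.
    # The downstream MLP treats these numbers as plain scalars, so
    # rotating the scene silently breaks equivariance.
    return torch.cat([h_i, C_obj_flat], dim=-1)

def equivariant_use(h_i, C_obj, x_rel):
    """C_obj: [3, k] geometric object summary; x_rel: [3] relative position.

    Only rotation-invariant scalars (inner products) enter the scalar
    channel: (O C_obj)^T (O x_rel) = C_obj^T x_rel, so the vector
    channel can stay equivariant.
    """
    inv = C_obj.T @ x_rel  # [k] invariants under a joint rotation
    return torch.cat([h_i, inv], dim=-1)
```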

---

We thank the reviewer for the valuable comments and for pointing out several relevant references. We have cited them and discussed the differences.

We would still like to emphasize that our work focuses on learning physical dynamics using GNNs, which is fundamentally different from the mentioned works. Specifically, [1,2] are steerable CNNs, and [4,5,6] are GNNs but not designed for physical simulations. We kindly refer the reviewer to the General Response for a description of the significant and unique challenges of physical simulation, especially on a complex physical prediction dataset like Physion.

The concerns of the reviewer mainly stem from the insufficient discussion of the raised concepts in the mentioned works. We provide detailed responses to clarify the differences as follows.

> **Q1. How does subequivariance differ from using the restricted representation?**

We appreciate the reviewer raising the two related papers [1,2], which introduced the notion of group restriction. We have cited them and discuss the differences in the following aspects:

**1. Motivation.** While the papers [1,2] aim to develop equivariant models given any E(3)/E(2) group and their subgroups, this paper is mainly interested in how the external force field relaxes the equivariance of an E(3)-equivariant model. These two scenarios can be correlated if the subgroup induced by the external force field is tractable to derive. However, when the force field is complicated (distributed non-uniformly in space), the group restriction methods [1,2] may no longer be applicable, as the underlying subgroup is hard to derive. In contrast, based on the view of the force field, our method still works.

**2. Design.** We follow the scalarization strategy used in EGNN to derive equivariant models and further augment the input with the external force vector to enable subequivariance. As shown in Eq. (9) of the main paper, our method is convenient to implement via standard inner products and nonlinearities. More importantly, we have proved its approximation universality in theory. The methods in [1,2] resort to steerable kernel bases in the form of harmonics under irreducible group representations, and their components (including convolution and nonlinearity) must be specially designed. Another minor point is that the works [1,2] originally target CNNs instead of GNNs.

**3. Performance.** To better illustrate, we also implement a baseline that leverages the idea in [1,2] but extends it from CNNs to GNNs. Indeed, [1,2] are steerable CNNs, and these works have **not** offered available implementations on GNNs. We have tried our best to compare this idea with our model. Specifically, we implement "Steerable-SE(2)-GNN", which iterates the message passing specified below.
Consider the message computation for the edge $e_{ij}\in\mathcal{E}$ connecting nodes $i$ and $j$.

* Compute the translation-invariant radial vector: $\vec{\mathbf{x}}_{ij}=\vec{\mathbf{x}}_i - \vec{\mathbf{x}}_j$.
* Project $\vec{\mathbf{x}}_{ij}$ onto $\vec{\mathbf{g}}$: $v\in\mathbb{R}=\frac{\vec{\mathbf{x}}_{ij}\cdot\vec{\mathbf{g}}}{\|\vec{\mathbf{g}}\|}$, and $\vec{\mathbf{u}}\in\mathbb{R}^2=\big((\vec{\mathbf{x}}_{ij}-v\hat{\vec{\mathbf{g}}})\cdot \vec{\mathbf{m}},\ (\vec{\mathbf{x}}_{ij}-v\hat{\vec{\mathbf{g}}})\cdot \vec{\mathbf{n}}\big)$, where $\hat{\vec{\mathbf{g}}}=\vec{\mathbf{g}}/\|\vec{\mathbf{g}}\|$ and $\vec{\mathbf{m}}, \vec{\mathbf{n}}$ are two orthonormal basis vectors orthogonal to $\vec{\mathbf{g}}$.
* Derive the type-0 message as $\mathbf{m}_{ij} = \text{MLP}_1(\sum_l w^{01}_l k^{01}_l(\vec{\mathbf{u}}) \cdot \vec{\mathbf{u}}, v, \|\vec{\mathbf{u}}\|, \mathbf{h}_i, \mathbf{h}_j)$.
* Derive the type-1 message as $\vec{\mathbf{M}}_{ij}=(\sum_l w_l^{10} k^{10}_l(\vec{\mathbf{u}})\mathbf{m}_{ij}+\sum_l w_l^{11} k_l^{11}(\vec{\mathbf{u}})\cdot\vec{\mathbf{u}})\cdot\text{MLP}_2(\mathbf{m}_{ij})$.
* Aggregate and update the type-0 feature: $\mathbf{h}'_i=\text{MLP}_3(\sum_{j\in \mathcal{N}(i)}\mathbf{m}_{ij}, \mathbf{h}_i)$.
* Aggregate and update the type-1 feature: $\vec{\mathbf{M}}_i=\sum_{j\in\mathcal{N}(i)}\vec{\mathbf{M}}_{ij}$, $\vec{\mathbf{x}}_{i}'=\vec{\mathbf{x}}_{i} + \text{MLP}_4(\|\vec{\mathbf{M}}_{i}\|)\frac{\vec{\mathbf{M}}_{i}}{\|\vec{\mathbf{M}}_{i}\|+\epsilon}$.

Here, $w_l^{10}, w_l^{01}, w_l^{11}\in\mathbb{R}$ are learnable coefficients, and $k^{10}_l, k^{01}_l, k^{11}_l$ are the steerable kernel bases that transform irreps from type 1 to 0, type 0 to 1, and type 1 to 1, respectively (please refer to Table 8 in [1] for more details); $\text{MLP}_2(\mathbf{m}_{ij})\in\mathbb{R}$ and $\text{MLP}_4(\|\vec{\mathbf{M}}_{i}\|)\in\mathbb{R}$. It can be proved that the above implementation is equivariant with respect to the subgroup $O_{\vec{\mathbf{g}}}(3)$. We provide more explanations in the Appendix.
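As a companion to the first two steps above, this small sketch shows the decomposition of the radial vector into its component along gravity and its in-plane part, which is the only place where the restriction to $O_{\vec{\mathbf{g}}}(3)$ enters; the function name is hypothetical.

```python
import torch

def decompose_along_gravity(x_ij, g):
    """Split x_ij (shape [3]) into the scalar component v along the unit
    gravity direction and the residual vector in the plane orthogonal
    to g. v is invariant under rotations about g, while the in-plane
    part transforms like a 2D (SE(2)-style) vector."""
    g_hat = g / g.norm()
    v = x_ij @ g_hat            # component along gravity
    in_plane = x_ij - v * g_hat  # lies in the plane orthogonal to g
    return v, in_plane
```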

---

> **Q6. More baselines.**

> **Also, the experiments chosen seem to be favorable for SGNN in comparison to the other baselines. For instance, implementing EGNN and GMN, exactly as they are, with gravity is expected to yield poor performance.**

There is probably some misunderstanding here. We are **not** choosing the most favorable setting for SGNN; rather, we have tried our best to compare with the baselines fairly. **(1)** Our training and evaluation protocol strictly follows that of Physion [1], including the provided scripts for GNS and DPI. **(2)** For GNS and DPI, we also evaluate their data-augmented variants GNS-Rot and DPI-Rot, which leverage random rotations of the input data. **(3)** EGNN and GMN are equivariant models, and subequivariance has never been investigated in these models. We involve them as baselines here to demonstrate the importance and necessity of relaxing equivariance to subequivariance for physical scene simulation, which is one of our core contributions.

To further address the concern, we additionally design an $O_{\vec{\mathbf{g}}}(3)$-equivariant extension of EGNN and GMN. We achieve this by augmenting their velocity update as

$$\vec{\mathbf{v}}_i^{l+1}=\phi_v(\mathbf{h}^l_i)\vec{\mathbf{v}}_i^{l} + \underline{\phi_g(\mathbf{h}_i^l)\vec{\mathbf{g}}} + \sum_{j\in\mathcal{N}(i)}(\vec{\mathbf{x}}_i - \vec{\mathbf{x}}_j)\phi_x(\mathbf{m}_{ij}),$$

where the underlined term is our modification and simulates the acceleration due to gravity. We dub these two variants EGNN-S ("S" for Subequivariance) and GMN-S, respectively. The results on Physion are shown below:

| | Domino | Contain | Link | Drape | Support | Drop | Collide | Roll |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EGNN | 61.3 | 66.0 | 52.7 | 54.7 | 60.0 | 63.3 | 76.7 | 79.8 |
| EGNN-S | 72.0 | 64.6 | 55.3 | 55.3 | 60.5 | 69.3 | 79.3 | 81.6 |
| GMN | 54.7 | 57.6 | 54.5 | 57.6 | 55.1 | 54.2 | 79.5 | 81.3 |
| GMN-S | 55.6 | 65.3 | 55.3 | 57.0 | 59.3 | 57.3 | 81.2 | 79.3 |
| SGNN | **89.1** | **78.1** | **73.3** | **60.6** | **71.2** | **74.3** | **85.3** | **84.2** |

We have the following observations:
**(1)** Our $O_{\vec{\mathbf{g}}}(3)$-equivariant versions of EGNN and GMN generally perform better than their $E(3)$-equivariant counterparts, which, again, indicates the necessity of leveraging the proper symmetry constraint.
**(2)** Even with these ad-hoc modifications, EGNN-S and GMN-S are still inferior to our SGNN by a large margin. This verifies the efficacy of our overall architecture.
The above explanations and results have been added to the Appendix.

[1] Bear et al. Physion: Evaluating physical prediction from vision in humans and machines. NeurIPS 2021.

> **Q7. The choice of the compared methods.**

> **More interesting examples, i.e., situations where GNS and EGNN have been shown to yield SOTA performance, could give a more realistic representation of how much better SGNN is in comparison to these models.**

We are already comparing with the SOTAs on both datasets: GNS and DPI are indeed the SOTA simulators on Physion (cf. Physion [1]), and DPI is the SOTA on RigidFall (cf. [2]).

[1] Bear et al. Physion: Evaluating physical prediction from vision in humans and machines. NeurIPS 2021.

[2] Li et al. Visual grounding of learned physical models. ICML 2020.

> **Q8. On self-contact.**

> **Since the contact is of main concern in the present work, authors should evaluate how the model performs in situations where there is self contact.**

> **Discussions on self-contact and how this is incorporated is missing.**

As mentioned in Q5, the `Drape` scenario in Physion is a deformable system, and there are internal elastic forces and self-contacts within the cloth itself. Our model still performs promisingly in that scenario.

> **Q9. In the case of a deformable system, consider a scenario where a particle from a given object breaks away, gets reflected by the wall, and then comes back to interact with the same object. In such a case, if the particle comes within the $\varepsilon$ cutoff distance, does SGNN model this as a contact or as a particle of the object?**

If the particle comes within the $\varepsilon$ cutoff distance, SGNN does **not** model this as a contact between different objects; instead, it models it as a particle of the object, since the particle's object label remains unchanged. This interaction is processed in the inner-object message passing stage of our hierarchical modeling (a sketch of the contact-edge construction is given below).
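For illustration, the following sketch shows one way such inter-object contact edges can be constructed: particle pairs within the $\varepsilon$ cutoff are connected only when their object labels differ, so returning fragments of the same object are handled by the inner-object stage instead. The function name is hypothetical.

```python
import torch

def contact_edges(x, obj_label, eps):
    """x: [N, 3] positions; obj_label: [N] object ids; eps: cutoff.

    Returns [2, E] edges between particles of *different* objects that
    lie within the eps cutoff; same-object pairs are left to the
    inner-object message passing.
    """
    dist = torch.cdist(x, x)                              # [N, N] pairwise distances
    close = dist < eps
    cross = obj_label.unsqueeze(0) != obj_label.unsqueeze(1)
    src, dst = torch.nonzero(close & cross, as_tuple=True)
    return torch.stack([src, dst])
```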
> **Q10. Performance of SGNN on systems with drag and other dissipative forces is lacking.**

Actually, the systems in Physion are pervaded by **friction**, a typical dissipative force. Please refer to our demo videos in the supplementary material for a better illustration.

---

> **Q3. Experimental comparison with PINNs.**

> **Additional baselines which employ contact models in the Lagrangian/Hamiltonian neural network framework may be considered. Indeed, they may not scale as the GNNs, but they can potentially give improved conservation of physical laws.**

Thank you for this suggestion. We augment SGNN and EGNN with a Hamiltonian integrator. Details of the implementation include:

**(1)** We use sum-pooling over the output scalar features ($\mathbf{h}_i$) as the Hamiltonian of the system, i.e., $\mathcal{H} \in \mathbb{R}=\sum_{i=1}^N \mathbf{h}_i$.

**(2)** We employ an RK1 integrator to conduct the Hamiltonian update, i.e., $(\dot{\vec{\mathbf{q}}}, \dot{\vec{\mathbf{p}}})=(\frac{\partial\mathcal{H}}{\partial{\vec{\mathbf{p}}}}, -\frac{\partial\mathcal{H}}{\partial{\vec{\mathbf{q}}}})$.

One thing worth noting here is that we assume the particles possess uniform mass, so that $\vec{\mathbf{q}}, \vec{\mathbf{p}}$ can be derived from $\vec{\mathbf{x}}, \vec{\mathbf{v}}$, respectively. We name the variants SGNN-H and EGNN-H ("H" stands for Hamiltonian) and evaluate them on Physion. The results are displayed in the following table.

| | Domino | Contain | Link | Drape | Support | Drop | Collide | Roll |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| EGNN | 61.3 | 66.0 | 52.7 | 54.7 | 60.0 | 63.3 | 76.7 | 79.8 |
| EGNN-H | 52.0 | 58.1 | 54.0 | 54.3 | 51.1 | 54.7 | 75.7 | 75.3 |
| SGNN | **89.1** | **78.1** | **73.3** | **60.6** | **71.2** | **74.3** | **85.3** | **84.2** |
| SGNN-H | 69.9 | 66.0 | 61.1 | 60.3 | 55.3 | 62.0 | 79.3 | 78.7 |

Adding a Hamiltonian to EGNN and SGNN generally degrades performance. We speculate that this is probably due to the dissipative forces as well as the highly complex interactions in Physion. This result suggests that it may not be beneficial to involve such a strong physical inductive bias for the scenarios in Physion, which also supports our discussion in Q2 above. Note, in addition, that SGNN-H is always better than EGNN-H.

As the reviewer said, the Hamiltonian NNs may not scale as GNNs do. The Hamiltonian module brings significant computational overhead during training (a sketch of the update is given below for reference). We list the average training time per step (in seconds) on the Physion Dominoes dataset.

| EGNN | EGNN-H | SGNN | SGNN-H |
| --- | --- | --- | --- |
| 0.08 $\pm$ 0.01 | 0.44 $\pm$ 0.02 | 0.11 $\pm$ 0.02 | 0.48 $\pm$ 0.03 |

EGNN-H and SGNN-H are roughly 4-5x slower than EGNN/SGNN.
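For reference, the RK1 (forward Euler) Hamiltonian update described in (1)-(2) above can be sketched as follows, using autograd for the partial derivatives; `net` and the helper name are hypothetical, and the dissipative extension raised by the reviewer is indicated as an optional commented term.

```python
import torch

def rk1_hamiltonian_step(net, q, p, dt):
    """One forward-Euler step of Hamilton's equations.

    net(q, p) -> scalar Hamiltonian H (e.g., sum-pooled node scalars).
    q, p: [N, 3] positions and momenta (uniform unit mass assumed).
    """
    q = q.detach().requires_grad_(True)
    p = p.detach().requires_grad_(True)
    H = net(q, p)
    dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
    q_next = q + dt * dHdp
    p_next = p - dt * dHdq  # optionally: + dt * g_dissip(q, p) for dissipation
    return q_next, p_next
```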
> **Q4. Presentation in the Related Work.**

> **Authors refer to previous works such as Hamiltonian GNNs, and mention that they do not consider rotational equivariance. This is incorrect. In Hamiltonian GNNs, the edge embeddings can be modified to have the distance (L2 norm of the difference in positions), instead of giving simply the vectorial difference. Since the functional form to be learned in the case of a Hamiltonian is scalar, this approach also works very well and is both translationally and rotationally invariant.**

Sorry for this mistake. We have revised the corresponding statements in the related work section. As mentioned in Q3, the experimental comparisons with Hamiltonian GNNs have also been added.

> **Q5. Discussion of deformable systems.**

> **However, in the present case although the particles are considered, there do not seem to be any deformation simulated and hence, these approaches referred should also stand equally valid as SGNN.**

> **Although a particle-based approach is employed, unlike previous works such as GNS, there are no discussions on deformable systems.**

In fact, our model is generally capable of handling deformable objects. In the Physion dataset, the **Drape** scenario does require accurate simulation of deformable objects, which in this case refers to cloth: the scene depicts the dynamics of cloth falling onto some random objects. The experimental results clearly show that SGNN offers a ~2% improvement in contact accuracy (cf. Table 1) and ~10% lower MSE (cf. Table 6 in the Appendix) in this scenario. We have additionally included a related video demonstration in the supplementary material.

---

We thank the reviewer for the thoughtful comments. To make our response more compact, we have rearranged the questions and addressed similar ones together.

> **Q1. The originality: while the idea is useful, the implementation and proofs are fairly straightforward extension from the equivariant case.**

We thank the reviewer for recognizing the usefulness of the idea. Our theoretical derivation of subequivariance does stem from the equivariant case. However, in Theorem 1 we prove that our formulation of subequivariant message passing carries a theoretical guarantee of universality, which broadly enhances equivariant models when an external force exists. The proof of Theorem 1 is not trivial compared to the traditional equivariance case (Proposition 1), as shown in Appendix A. For example, we additionally need to prove the claim in Eq. (17), which states the one-to-one mapping between the equivalence class $\{\mathbf{O}\vec{\mathbf{Z}}\mid \mathbf{O}\in O_{\vec{\mathbf{g}}}(3)\}$ and the augmented inner product $[\vec{\mathbf{Z}}, \vec{\mathbf{g}}]^\top[\vec{\mathbf{Z}}, \vec{\mathbf{g}}]$.

> **Q2. Discussion of physics-informed NNs.**

> **Also, no additional inductive biases to preserve the physics (as in the case of Hamiltonian or other physics-informed GNNs) are implemented. This raises a question on the validity of the trajectory predicted. Specifically, whether the trajectories represent a physically feasible realization is not discussed. This is important because one of the major advantages the authors claim for SGNN is the ability to "learn" the dynamics.**

> **Now, if the main focus of the work is to simulate contact, then there have been several works which attempted to do this (for instance, Zhong, Y.D., Dey, B. and Chakraborty, A., 2021. Extending Lagrangian and Hamiltonian neural networks with differentiable contact models. Advances in Neural Information Processing Systems, 34, pp. 21910-21922).**

We would like to first clarify the difference between SGNN and two other lines of existing work: GNS/DPI, and the Hamiltonian-based NNs and other physics-informed NNs [1, 2].

GNS and DPI are particle-based GNN simulators belonging to the intuitive physics family.
They usually do not explicitly involve hand-crafted conservation laws or equations (e.g., the symplectic update in Hamiltonian NNs), purely learning the dynamics from data, similar to how humans perceive, learn, and reason inductively about dynamics.

Hamiltonian-based NNs and other PINNs are differentiable physics models that explicitly build physical equations into the model, restricting the output trajectory to possess desirable physical properties, such as energy conservation.

Our model SGNN belongs to the first category, but is designed to address some limitations of these models. In particular, we inject mild physical priors, such as the proper symmetry, into the model, making it strongly generalizable and data-efficient (see Tables 1 and 3 in the manuscript).

SGNN adds the inductive bias of subequivariance to reflect the partial symmetry of the physical law. In contrast to PINNs, we avoid over-restrictive inductive biases to enable better generalization across different types of systems. For example, the Hamiltonian-based NNs are generally designed to pursue energy conservation, which, however, is usually broken by forces like friction between objects in cases like Physion. In this sense, the way we add inductive bias into SGNN is well suited to the task.

We cite the mentioned works in the paper.

[1] Sanchez-Gonzalez et al. Hamiltonian graph networks with ODE integrators.

[2] Zhong et al. Extending Lagrangian and Hamiltonian neural networks with differentiable contact models.

> **Authors have not evaluated how realistic the trajectory is with respect to the physical laws.**

To evaluate the validity of the predicted trajectories, we followed the settings of Physion and RigidFall and applied quantities including contact accuracy and rollout MSE. Video demonstrations were also provided in the supplementary material, where SGNN produces visually realistic trajectories.

---

We sincerely appreciate the reviewer's constructive comments and the recognition of the novelty of our paper!

> **Q1. In Sec. 3.1. the authors define the problem target as to predict the position of the next time step ($x^{t+1}$), where the position of the current time step is input ($x^t$). The reviewer would like to know if the proposed method is generalizable to a broader problem setup. There are many physical systems whose modeling can be seen as a mapping from one input quantity to one output quantity ($x\rightarrow y$). The graph structure is also applicable. How to generalize the proposed method then?**

We thank the reviewer for bringing up this interesting point. Our SGNN model is generalizable to a broader problem setup. To show this, let the input be a geometric graph of the system $\mathcal{G=(V(\vec{\mathbf{Z}}, \mathbf{h}), E, C)}$, with each node $i\in\mathcal{V}$ possessing geometric (vectorial) quantities $\vec{\mathbf{Z}}_i$ (e.g., position $\vec{\mathbf{x}}_i$, velocity $\vec{\mathbf{v}}_i$) and scalar quantities $\mathbf{h}_i$, the edges $\mathcal{E}$ representing interaction or connectivity, and the object index function $\mathcal{C}$ as defined in the paper.

Given the input, the model transforms as
$$\vec{\mathbf{Z}}', \mathbf{h}' = f_{\text{SGNN}}\left( \vec{\mathbf{Z}}, \mathbf{h}, \mathcal{E}, \mathcal{C}\right).$$
This formulation covers what the reviewer has mentioned (i.e., "mapping from one input quantity to one output quantity"), and even generalizes it by allowing for both tensorial and scalar quantities. The dynamics prediction task in this paper is a specific example obtained by setting $\vec{\mathbf{Z}}'=[\vec{\mathbf{x}}^{t+1}]$ with the input $\vec{\mathbf{Z}}=[\vec{\mathbf{x}}^{t}, \vec{\mathbf{v}}^{t}]$ (an interface-level sketch is given below).
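Viewed as an interface, the general formulation above amounts to a typed signature like the following sketch; all names are hypothetical and only illustrate how the dynamics task instantiates the general mapping.

```python
import torch
from typing import Callable, Tuple

# f_sgnn: (Z [N, 3, m], h [N, d], edges [2, E], obj_label [N])
#      -> (Z' [N, 3, m'], h' [N, d'])
SGNN = Callable[[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor],
                Tuple[torch.Tensor, torch.Tensor]]

def predict_next_position(f_sgnn: SGNN, x_t, v_t, edges, obj_label):
    """Dynamics prediction as a special case: stack position and velocity
    into the vector channel Z and read x^{t+1} off the output."""
    Z = torch.stack([x_t, v_t], dim=-1)          # [N, 3, 2]
    h = torch.zeros(len(x_t), 1)                 # placeholder scalar features
    Z_next, _ = f_sgnn(Z, h, edges, obj_label)
    return Z_next[..., 0]                        # x^{t+1}
```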
> **Q2. The structure of the proposed Subequivariant Graph Neural Networks needs to be clarified. The reviewer suggests adding a flow chart around Sec. 3.2-3.3.**

We appreciate the reviewer's suggestion. We did, in fact, include a detailed architectural flowchart of the message passing in Figure 8 of Appendix A.3. Combining the flowchart of the message passing with that of the hierarchical design (Figure 2) provides the necessary details of the entire SGNN model, which we additionally depict in the Appendix in the revision.

---

> **Q3: If the SGNN model is capable of predicting the force field of the atomic system, it would be interesting to compare with the MLFF models.**

We are happy that the reviewer thinks our model is general and has the potential to be applied to atomic systems.

However, the core interest of our paper is learning physical dynamics. As explained in Q2 above, the diversity of the objects and the existence of gravity motivate the specific design of SGNN (involving object-level messages, considering subequivariance, and using the hierarchical framework). In its current version, our SGNN model is particularly designed for the macroscopic physical systems of daily life rather than atomic systems, where special featurizers (e.g., radial basis functions [3,4]) are commonly applied. We will also release our code for the community to develop SGNN further on other interesting tasks.

[3] Schütt et al. SchNet: A continuous-filter convolutional neural network for modeling quantum interactions. 2017.

[4] Klicpera, Johannes et al. Directional message passing for molecular graphs. ICLR 2020.

> **Q4: Besides Physion, is there any other dataset for further verifying the performance of the model? Unlike SGNN, it seems the compared models, such as GMN and EGNN, are general models and not specifically developed for this task. The comparison may not be fair. It would be better to compare with a model which is dedicated to this task. Otherwise, SGNN should be compared with other models on different tasks.**

Indeed, besides Physion, we have conducted evaluations on RigidFall (please see Table 3 and Figure 6). GNS and DPI have been reported to be state of the art (SOTA) on Physion (see [5]), and DPI is the SOTA on RigidFall as well (see [6]). Therefore, we have indeed compared against the most competitive baselines in the literature, in addition to EGNN and GMN. Moreover, we also explore multiple ways to further boost the performance of these baselines. Even so, SGNN still yields significant improvements over the SOTAs. It is true that EGNN and GMN are not particularly designed for large-scale physical scene simulation, but we still include these equivariant models as baselines in order to verify the validity and effectiveness of our proposed components.
As also recognized by the reviewer, one purpose here is to emphasize that proper physical inductive biases should be carefully considered when designing GNN simulators for scene tasks.

We are happy to provide additional results if the reviewer thinks any experiment on learning physical simulation is still missing.

[5] Bear et al. Physion: Evaluating physical prediction from vision in humans and machines. NeurIPS 2021.

[6] Li et al. Visual grounding of learned physical models. ICML 2020.

---

We thank the reviewer for the detailed comments and the recognition of our paper!

> **Q1: As mentioned previously, the author should make a better introduction to make people easy to understand the background and motivation as well as the challenge of the task.**

Thank you for the advice. We agree with the reviewer and apologize for not elaborating on and emphasizing the challenges of the task sufficiently. We have carefully revised the manuscript to properly reflect the challenges, including the motivation and background. Details are provided in the Introduction.

> **Q2: Probably, the task predicting the physical dynamic of the objects has less information from the vision system than that can be obtained in an atomic system. It would be better if the author could clarify the difference between the two tasks.**

Thanks. We recognize that modeling the physical dynamics of objects in vision systems (e.g., Physion) poses **unique challenges** compared with atomic systems, in several respects.

**1.** The objects in these systems are diverse in material (e.g., bowls, bricks, dominoes, cloth) and hardness (rigid or deformable). These physical properties significantly influence the way the objects interact, posing great challenges to the generalization of the simulator. We approach this challenge by taking the objects' geometric information into consideration. In atomic systems, by contrast, these concepts become minor, and the focus shifts mainly to the atoms themselves and the force field between atoms.

**2.** There are multiple types of interaction forces in these systems, and these forces usually differ substantially from each other. Typical interactions include collision (where objects fiercely exchange momentum), friction on contact surfaces, and even internal forces that help maintain the shapes of objects. As supported by the experiments on all 8 scenarios in Physion, our model is able to capture a broad range of interactions.

**3.** Physical scenes also usually involve external forces, which in the Physion dataset corresponds to gravity. This factor motivates us to design subequivariant models by incorporating the external force into the message passing, ensuring the desirable symmetry. For atomic systems, molecules are mostly simulated in vacuum or implicit solvent, where such a consideration of external force has not been elaborated.

**4.** The systems considered in this paper are of much larger size than previously explored atomic systems such as QM9 [1] and MD17 [2]. Specifically, systems in Physion contain **thousands** of particles, while one molecule in QM9 or MD17 consists of no more than 100 atoms. When modeling large systems like Physion, the scalability of the proposed method must be taken care of.

[1] Ramakrishnan et al. Quantum chemistry structures and properties of 134 kilo molecules. Scientific Data, 2014.

[2] Chmiela et al. Machine learning of accurate energy-conserving molecular force fields. Science Advances, 2017.

---

We sincerely thank all reviewers and ACs for their time and effort in reviewing the paper. We are glad that the reviewers recognized the contributions of our paper, which we briefly summarize as follows.

* **Novelty.** "The work has very good originality." "The work can bring people to pay attention to the importance of developing a model with appropriate physical constraints for a similar task." (Y6Jq) "To the best of the reviewer's knowledge, this work is new." (tCa3)
* **Experiment.** "The design and test of SGNN with Physion dataset are thoughtful and convincing." (Y6Jq) "The experiments seem thorough and multiple baselines are used." (bh6f)
* **Presentation.** "The presentation of the manuscript is clear." (Y6Jq) "The authors provide a clear illustration of the relevance to existing works." (tCa3) "Overall, the work is well-written and clearly presented." (cLqG)

We also appreciate the reviewers' thoughtful comments and concerns. Below we summarize two core aspects: **(1)** the main focus and contributions of the paper; **(2)** the revisions made to further improve the paper, including the extra experiments we have added.

**1. Main focus and contributions.**

In this work, we aim to tackle the physical simulation task on challenging physical scenes, including the 8 scenarios of Physion as well as RigidFall. Such tasks are **highly challenging** due to **the scale of the systems** (on average thousands of particles per system), **the diversity of the interactions** (e.g., collision, friction, gravity), and the multiple **shapes, materials, and even degrees of rigidity** of the objects. In particular, **both rigid and deformable objects** exist in Physion. We kindly recommend that the reviewers watch our demo video in the supplementary material for a better understanding of these challenges.

To this end, we propose a particle-based GNN simulator, SGNN, that injects mild inductive biases into a hierarchical model, including the physical symmetry (particularly when an external force like gravity exists) and object geometric information. The model can simulate both rigid and deformable objects well, is capable of modeling diverse interactions, and is guaranteed to meet the proper symmetry, without relying on specific physical PDEs like the Hamiltonian. We also theoretically show that our subequivariant message passing enjoys a universality guarantee.

**2. Summary of the revisions.**
We add the following experiments per the reviewers' suggestions.
* Reviewer cLqG suggests adding strong physical priors like the Hamiltonian to the network. We augment EGNN and SGNN with Hamiltonian updates and add experimental comparisons with these models.
* Reviewer cLqG suggests stronger baselines designed on top of EGNN and GMN. We design subequivariant versions of EGNN and GMN and add the corresponding experiments.
* Reviewer bh6f asks about the difference between our subequivariant message passing and group-representation-based steerable convolutions.
We make non-trivial efforts to adapt the mentioned approach, which is implemented only on CNNs, to GNNs and compare this baseline with our model.\n\nThe results of all these experiments still demonstrate that SGNN offers significant improvements in all scenarios compared with these baselines.\n\nWe have also added more discussions about physics-informed physical models and the steerable E(2) CNNs, along with adding citations to the relevant works mentioned by the reviewers.", " The manuscript improves the equivariant graph neural network (GNN) to tackle the possible inefficiency when the model predicts the dynamics of a physical system whose symmetry is partially broken by an external force such as gravity. The subequivariant GNN (SGNN) proposed in this work introduces a hierarchical architecture in terms of particles and objects, where subequivariant message passing is incorporated into the model to deal with complicated object interactions. With this additional freedom, the approach can be more accurate in the task of predicting the physical dynamics of objects from vision and achieves impressive generalization compared with the GNS model and others. Originality: The work introduces the concept of subequivariance based on the hierarchical network to extend the capability of GNNs in the task of learning physical dynamics from vision information. The work has very good originality. \n\nQuality: The model is developed rigorously based on an understanding of the rotation group. The design and test of SGNN with the Physion dataset are thoughtful and convincing. \n\nClarity: The presentation of the manuscript is clear. However, the background and the motivation are confusing. It is a bit hard to understand the challenge of the problem until I read the online introduction of the Physion dataset. \n\nSignificance: Though I may not consider this work a breakthrough in GNN model development for the focused task, the fresh idea, such as the subequivariance described in the manuscript, indeed helps the neural network achieve better accuracy and generalization. The work can bring people to pay attention to the importance of developing a model with appropriate physical constraints for a similar task. \n Q1: As mentioned previously, the author should make a better introduction to make it easy for people to understand the background and motivation as well as the challenge of the task. \n\nQ2: It seems the work is similar to the development of machine learning force fields (MLFF) in molecular dynamics simulation. Probably, the task of predicting the physical dynamics of objects has less information from the vision system than can be obtained in an atomic system. It would be better if the author could clarify the difference between the two tasks. \n\nQ3: If the SGNN model is capable of predicting the force field of an atomic system, it would be interesting to compare with the MLFF models. There are many recently developed models, such as NequIP and TorchMD, incorporating rotation equivariance and achieving impressively high accuracy while predicting the atomic force field.\n\nQ4: Besides Physion, is there any other dataset for further verifying the performance of the model? Unlike SGNN, it seems the compared models, such as GMN and EGNN, are general models and not specifically developed for this task. The comparison may not be fair. It would be better to compare with a model which is dedicated to this task. Otherwise, SGNN should be compared with other models on different tasks.\n Yes. 
This is a work dedicated to developing machine learning models for science. There would be no potential negative societal impact.", " This paper targets an interesting question of embedding equivariance into graph neural networks for physical dynamics recovery. The main focus is to 1) relax the strict constraint for cases with gravity and 2) consider the differences in self- and mutual interactions during learning. This work targets the learning of physical system dynamics, which is an important topic and needs more attention in the community. The authors provide a clear illustration of the relevance to existing works on 1) graph neural networks to model interactions among particles/objects and 2) using physical constraints as an inductive bias to improve generalizability. The authors mainly propose subequivariance against the case with gravity and design multi-stage modeling to account for differences in object properties such as shape. To the best of the reviewer’s knowledge, this work is new. \n\n The reviewer has some questions about the work.\n1.\tIn Sec. 3.1, the authors define the problem target as predicting the position of the next time step (x^{t+1}), where the position of the current time step is the input (x^t). The reviewer would like to know if the proposed method is generalizable to a broader problem setup. There are many physical systems whose modeling can be seen as a mapping from one input quantity to one output quantity (x → y). The graph structure is also applicable. How to generalize the proposed method then?\n2.\tThe structure of the proposed Subequivariant Graph Neural Networks needs to be clarified. The reviewer suggests adding a flow chart around Sec. 3.2-3.3. \n\n Please see the questions above.", " In this work, the authors present a new formulation of the equivariant graph neural network, namely, the subequivariant GNN (SGNN), which allows the modeling of systems with symmetry breakage such as gravity. In addition, to model systems with different shapes and geometry, an additional feature that represents the object type is added so as to distinguish intra-object (elasticity/rigidity/plasticity) and inter-object interactions (collision/repulsion/weak attraction). A hierarchical modeling approach is implemented to address particle- and object-level interactions separately. By considering different physical scenarios, for instance, collision and contact prediction, the superior performance of SGNN is demonstrated. Overall, the work is well-written and clearly presented. It builds on the equivariant GNNs and modifies them to present the idea of SGNN, which can include directional symmetry breaking such as gravity. Some of the main comments regarding the work are as follows. \n1) While the idea is useful, the implementation and proofs are a fairly straightforward extension from the equivariant case. Also, no additional inductive biases to preserve the physics (as in the case of Hamiltonian or other physics-informed GNNs) are implemented. This raises a question on the validity of the predicted trajectory. Specifically, whether the trajectories represent a physically feasible realization is not discussed. This is important because one of the major advantages the authors claim for SGNN is the ability to \"learn\" the dynamics. \n2) The authors refer to previous works such as Hamiltonian GNNs, and mention that they do not consider rotational equivariance. This is incorrect. 
In Hamiltonian GNNs, the edge embeddings can be modified to use the distance (L2 norm of the difference in positions), instead of simply the vectorial difference. Since the functional form to be learned in the case of a Hamiltonian is scalar, this approach also works very well and is both translationally and rotationally invariant. \n3) The examples demonstrated in the work are those where contact seems to be of main interest. It is not clear how much better the model would perform in other cases where contact is not necessarily the primary interest but dynamics is. Now, if the main focus of the work is to simulate contact, then there have been several works which attempted to do this (for instance, Zhong, Y.D., Dey, B. and Chakraborty, A., 2021. Extending Lagrangian and Hamiltonian neural networks with differentiable contact models. Advances in Neural Information Processing Systems, 34, pp.21910-21922.), which employ a similar idea of cutoff-based contact detection. Indeed, they do not employ a graph-based approach. However, although particles are considered in the present case, no deformation seems to be simulated, and hence the referred approaches should stand equally valid as SGNN.\n4) Also, the experiments chosen seem to be favorable for SGNN in comparison to the other baselines. For instance, implementing EGNN and GMN, exactly as they are, with gravity is expected to yield poor performance as the architecture expects the data to be rotationally invariant. Similarly, GNS and DPI learn purely from data and hence are unaware of symmetry unless trained specifically for it. More interesting examples covering the situations where GNS and EGNN have been shown to yield SOTA performance could give a more realistic representation of how much better SGNN is in comparison to these models. Some of the questions that naturally follow from the previous sections and some additional questions the authors should address are mentioned below.\n1) Since one of the main aims of the present work is to simulate realistic contact models, additional baselines which employ contact models in the Lagrangian/Hamiltonian neural network framework may be considered. Indeed, they may not scale as well as the GNNs, but they can potentially give better conservation of physical laws than purely data-driven approaches. Also, this will provide insights into the deficiencies of SGNN.\n2) The authors have not evaluated how realistic the trajectory is with respect to the physical laws. For instance, are energy and momentum conserved in the collision, is the coefficient of restitution 1, etc.? These are important to analyze, since the claim is that SGNN provides improved dynamics (Q2 in results). \n3) Again, since contact is of main concern in the present work, the authors should evaluate how the model performs in situations where there is self-contact. At present, it is not clear whether the formulation is capable of dealing with such scenarios. There are several limitations of the present work, which the authors should consider.\n1) Although a particle-based approach is employed, unlike previous works such as GNS, there are no discussions on deformable systems. 
It is not clear how much improvement SGNN can give for deformable systems.\n2) As mentioned in the questions, discussions on self-contact and how it is incorporated are missing.\n3) The performance of SGNN on systems with drag and other dissipative forces is lacking.\n4) In the case of a deformable system, consider a scenario where a particle from a given object breaks away, gets reflected by the wall, and then comes back to interact with the same object. In such a case, if the particle comes within the \\varepsilon cutoff distance, does SGNN model this as a contact or as a particle of the object? In other words, does the particle \"heal\" with the object or not? If yes, this is unphysical. It is not clear if the model can address this issue.", " The paper introduces a new model for learning physical dynamics. It introduces a concept that is termed subequivariance, which appears similar to restricted representations from group theory. The paper also introduces a hierarchical message passing to better enable long-range interactions between particles. The paper also expands the input feature space by computing geometric features of the input particles and their objects. The paper experiments on a range of dynamics datasets, namely Physion and RigidFall. Strengths:\n* The paper adds a \"binary operation\", the difference in position between adjacent features, which adds a translation-invariant representation to the input feature space, although the benefits and justification for this are not adequately detailed. \n* The experiments seem thorough and multiple baselines are used, although I am not familiar with these baselines so I cannot comment on whether any relevant baselines have been missed.\n\nWeaknesses:\n\n[major]\n* A key concept of the paper is characterising \"subequivariance\" as a relaxation of equivariance focusing on the group E(3). However, this concept has already been considered previously in [1,2] by using restricted representations of the group rather than the regular representation.\n* A criticism of GNNs made in the paper is that they seldom explicitly involve the object information in the message passing, but this feels like a specific design choice of the input features rather than some fundamental issue of GNNs which a paper can solve. Also, there are papers which consider specific object information, such as [3].\n* The concept of having different edges which pass different information appears strongly related to a relevant sub-field of GNNs exploring automorphism equivariance, which has been ignored in this paper [4,5,6].\n* In the experiments only rotations around the gravity axis are used. Surely, all rotations should be possible, and all the models which do not correctly account for the break in symmetry caused by gravity should produce unrealistic results, while if this model works correctly it should break the symmetry and produce realistic results. I feel like this would be a far stronger result proving the correctness of the model. \n\n[minor]\n* Multiple typos / grammatical mistakes make reading the paper difficult. (L44, 59)\n* The object features appear to be the pooled features of an object's particles, which are used to create a feature space by subtracting these from each of the particles' features. This seems to be a complexly worded way of saying subtracting the mean from the particle features, which is a common approach in ML.\n\n\n[1] Weiler, M. and Cesa, G., 2019. General E(2)-Equivariant Steerable CNNs. 
Advances in Neural Information Processing Systems, 32.\n[2] Cesa, G., Lang, L. and Weiler, M., 2021, September. A Program to Build E(N)-Equivariant Steerable CNNs. In International Conference on Learning Representations.\n[3] Pfaff, T., Fortunato, M., Sanchez-Gonzalez, A. and Battaglia, P.W., 2020. Learning mesh-based simulation with graph networks. arXiv preprint arXiv:2010.03409.\n[4] de Haan, P., Cohen, T.S. and Welling, M., 2020. Natural graph networks. Advances in Neural Information Processing Systems, 33, pp.3636-3646.\n[5] Thiede, E., Zhou, W. and Kondor, R., 2021. Autobahn: Automorphism-based graph neural nets. Advances in Neural Information Processing Systems, 34, pp.29922-29934.\n[6] Mitton, J. and Murray-Smith, R., 2021. Local Permutation Equivariance For Graph Neural Networks. arXiv preprint arXiv:2111.11840. * How does subequivariance differ from using the restricted representation?\n* How does including object information differ from being a specific feature space choice?\n* How does the choice of using different edges to pass between objects/particles differ from an automorphism equivariant network?\n* Why are only rotations applied around the gravity axis in the experiments? N/A" ]
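To make the subequivariance discussion in this record concrete, the following toy check may help: once an external direction such as gravity is appended to the geometric inputs of an otherwise O(3)-equivariant layer, the layer stays equivariant only to rotations that fix that direction. This is an illustrative sketch under our own assumptions (the map `phi` and the single-vector readout are hypothetical stand-ins), not the SGNN implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.array([0.0, 0.0, -9.8])                   # external (gravity) direction

def phi(M):
    # hypothetical stand-in for a learned function of the invariant scalars Z^T Z
    return np.tanh(M)

def layer(Z):
    # Z: (3, k) matrix of geometric vectors; append gravity as an extra column
    Zg = np.concatenate([Z, g[:, None]], axis=1)
    return Zg @ phi(Zg.T @ Zg)[:, :1]            # read out a single 3D vector

def rot_z(t):                                    # rotation about the gravity axis
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(t):                                    # rotation that moves gravity
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

Z = rng.normal(size=(3, 4))
for name, R in [("gravity (z) axis", rot_z(0.7)), ("x axis", rot_x(0.7))]:
    err = np.abs(R @ layer(Z) - layer(R @ Z)).max()
    print(f"equivariance error, rotation about {name}: {err:.2e}")
```

Running this prints a near-zero error for the rotation about the gravity axis and a large error for the rotation about x, which is exactly the relaxation the reviews above refer to as subequivariance.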
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 4 ]
[ "F9Bl-zVSAmb", "uLqjY8GvZJG", "e96rKqnz0ck", "bu08CGkF2bs", "Qj8XDJijv6", "nGtfWpuIKw", "XMX2Cao43uE", "Z2JLE1XZwEI", "zGjmHIyLi80", "TUsOPVtb59", "hVOV5r8CiJ-", "J4d52QYJGaE", "qYfXXzsu0ZG8", "xrkuvyEWdZK", "kYsa7ajBQeF", "XlJLoqYwea-", "mQmWUbjk3i6C", "vrawTSDm-Ku", "tKslL-EMs2-", "1goNl2HHkR", "O5Sh67DRtBI", "qg9kA51oOC", "nips_2022_siG_S8mUWxf", "mQmWUbjk3i6C", "vrawTSDm-Ku", "qg9kA51oOC", "1goNl2HHkR", "O5Sh67DRtBI", "7ovg1IKQrEs", "lT_sXJXICqP", "eBmFO1tEJnd", "NBB1xZ-9r2G", "nips_2022_siG_S8mUWxf", "nips_2022_siG_S8mUWxf", "nips_2022_siG_S8mUWxf", "nips_2022_siG_S8mUWxf", "nips_2022_siG_S8mUWxf" ]
nips_2022_XtyeppctGgc
Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning
Existing fine-tuning methods either tune all parameters of the pre-trained model (full fine-tuning), which is not efficient, or only tune the last linear layer (linear probing), which suffers from a significant accuracy drop compared to full fine-tuning. In this paper, we propose a new parameter-efficient fine-tuning method termed SSF, representing that researchers only need to Scale and Shift the deep Features extracted by a pre-trained model to catch up with the performance of full fine-tuning. In this way, SSF also surprisingly outperforms other parameter-efficient fine-tuning approaches even with a smaller number of tunable parameters. Furthermore, different from some existing parameter-efficient fine-tuning methods (e.g., Adapter or VPT) that introduce extra parameters and computational cost in the training and inference stages, SSF only adds learnable parameters during the training stage, and these additional parameters can be merged into the original pre-trained model weights via re-parameterization in the inference phase. With the proposed SSF, our model obtains 2.46% (90.72% vs. 88.54%) and 11.48% (73.10% vs. 65.57%) performance improvements on FGVC and VTAB-1k in terms of Top-1 accuracy compared to full fine-tuning, while only fine-tuning about 0.3M parameters. We also conduct extensive experiments on various model families (CNNs, Transformers, and MLPs) and datasets. Results on 26 image classification datasets in total and 3 robustness & out-of-distribution datasets show the effectiveness of SSF. Code is available at \url{https://github.com/dongzelian/SSF}.
Accept
This paper provides a simple method to avoid full fine-tuning of vision transformers, namely very simple linear adapters that can be trained and then subsumed into the existing linear layers during inference, which is an interesting characteristic as it prevents added computation during inference (unlike the use of regular adapters as used in NLP). Overall the reviewers appreciated the simplicity and intuition of the method, the improvement in performance over other competing methods such as VPT, the avoidance of computation overhead during inference, and comprehensive experiments. There were some concerns, however, related to the writing of the paper and clarity, robustness on OOD data, complexity analysis/details of runtime, and lack of theoretical justification. Many of these were addressed by the authors, including nice robustness/OOD results which add to the experimental validation. After the rebuttal, the reviewers all agreed on acceptance. While the method is still empirical (the hypothesis related to distribution matching seems unsubstantiated, and there are many other ways to do that, and so should probably not be in the paper), the paper has a strong empirical execution that uses a simple method to deal with limitations that have significant societal/deployment consequences. As a result, I recommend accepting this paper.
train
[ "O2yOidq9Row", "i0pQxf6Kalt", "mDBgcrIPY39c", "fGKLTGg2tEZ", "o7WD1bcajsb", "l1aS2X7gSdQ", "w9mVC_TLHD1m", "DhiBFi2LhX", "ycZMATfCSHR", "vIKRmxes-WV", "USpmRis22fK", "Km9fnfbVGUl", "oPGM2NUJ9hf" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your valuable comments and suggestions! We sincerely appreciate your recognition and constructive comments to improve our work.", " Thank you authors for the detailed responses to my questions. I have reviewed them and they seem to answers my concerns. I have updated my rating accordingly.", " **Q4: The claim (lines 27-27) that you do not have to store the pretrained separately for each fine-tuned model seems flawed.**\n \nA4: Thanks for the valuable suggestions. We have revised this part to make it clearer. Please refer to Line 26-27 of our revised version. Specifically, We divide the storage of model parameters into two different phases: the undeployed phase and the deployed phase. In the undeployed phase, compared to the full fine-tuning where all model parameters need to be stored for each task, our LIFTs only need to store a few parameters. When the model is downloaded from the server and needs to be sped up for a single target task, which is termed the deployment phase, the small number of parameters can be absorbed into the large model via re-parameterization while not changing the model architecture as VPT [7] does.\n\n[1] Baochen Sun, Jiashi Feng, Kate Saenko. Return of Frustratingly Easy Domain Adaptation. AAAI2016.\n\n[2] Dan Hendrycks, Norman Mu, Ekin D. Cubuk, Barret Zoph, Justin Gilmer, Balaji Lakshminarayanan. AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty. ICLR2020.\n\n[3] Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M. Alvarez, Ping Luo. SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers. NeurIPS2021.\n\n[4] Daquan Zhou, Zhiding Yu, Enze Xie, Chaowei Xiao, Anima Anandkumar, Jiashi Feng, Jose M Alvarez. Understanding The Robustness in Vision Transformers. ICML2022.\n\n[5] Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of neural networks for efficient integer-arithmetic-only inference. CVPR2018.\n \n[6] Xiaohan Ding, Xiangyu Zhang, Ningning Ma, Jungong Han, Guiguang Ding, and Jian Sun. Repvgg: Making vgg-style convnets great again. CVPR2021.\n \n[7]  Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, Ser-Nam Lim. Visual Prompt Tuning. ECCV2022.", " We truly appreciate the reviewer for the constructive comments.\n \n**Q1: Lack of theoretical grounding, but their simplicity of the idea and empirical effectiveness seems to be sufficient according to me.**\n \nA1: Thanks for the comment. The intuition behind our idea is feature distribution alignment [1] which is under the theoretical framework of feature distribution matching. Specifically, the parameters tuned per block are to align the first-order (mean) and second-order (variance) statistics of the feature distribution to the target data. We design extensive experiments to empirically show the effectiveness of our method, which, hopefully, will establish a solid baseline of parameter-efficient fine-tuning. It is indeed hard to theoretically prove the behaviour of a deep neural network model. In our future work, as pointed out by you, we will attempt to look into the theoretical groundings.\n\n\n**Q2: Lack of study on robustness and OOD performance.**\n \nA2: Thanks for the suggestions. Here we conduct experiments to evaluate the robustness and OOD performance. The results are shown in Table 5 of the revised version. 
We directly perform the evaluation on the ImageNet-A, ImageNet-R and ImageNet-C datasets with the fine-tuned models on ImageNet-1K. ImageNet-A and ImageNet-R are measured by Acc@1. The performance on ImageNet-A and ImageNet-R shows the OOD prediction of models. ImageNet-C is measured by mCE ($\\downarrow$). The performance on ImageNet-C shows the robustness of the models. For your convenience, we also list this table as follows. \n\n| \t\t | ImageNet-1K | ImageNet-A | ImageNet-R | ImageNet-C | \n| :-----------------: | :-----------------: | :---------------: | :----------------: | :--------------: |\n| Full fine-tuning | 83.58 | 34.49 | 51.29 | 46.47 |\n| Linear probing | 80.31 \t |\t29.43\t |\t50.83\t |\t 49.04\t|\n| VPT-Deep | 82.69 |\t44.03\t |\t53.41 |\t 42.05\t|\n| LIFTs (ours) | 82.82 | 45.88\t|\t56.77 | 41.47\t|\n\n\n\nWe have two findings from this table: i) our LIFTs obtains better performance than VPT on both datasets, which shows our fine-tuning method has stronger robustness and out-of-distribution generalization; ii) although LIFTs has lower accuracy than full fine-tuning on ImageNet-1K, the performance on ImageNet-A, ImageNet-R and ImageNet-C are better. As pointed out in paper [2, 3, 4], the performance between ImageNet-1K and ImageNet-A (ImageNet-C) is not absolutely relevant. We believe such improvements in robustness and OOD datasets might come from the fact that LIFTs freeze most of the pre-trained parameters and thus maximally preserve the knowledge learned on the large dataset for pre-training and thus maintain a better generalization capability. We found this is an extremely interesting point and will explore more on this in a separate work.\n\n \n**Q3: Please review and explain this section better (lines 175-177).**\nHere, we consider that g is a linear function for its simplicity of the linear function but the lack of comprehensive investigation and the properties of the linear function make it possible to be merged with other operations of the network in the inference phase.\n \nA3: Thanks for the suggestions. We have added more explanations regarding this part. Please refer to Line 174-176 in the revised version. The refined description is ‘Here, we consider that g is a linear function for the simplicity of the linear function, and the properties of the linear function make it possible to be merged with other operations of the network in the inference stage.’. Such an idea is inspired by [5, 6]. One of the representative techniques is batch normalization folding used in the model compression algorithms. The parameters introduced by the batch normalization layers are fused into the convolution layers which are usually implemented before them. We deploy a similar strategy such that during the inference phase, the parameters introduced during the training phase are merged into the linear layer defined in the baseline model. In this manner, LIFTs achieves state-of-the-art performance without extra parameters and computation overhead during the inference stage.\n \n", " Thanks for your valuable comments.\n \n**Q1: Some parts of the manuscript is unclear. For example, line 195, A_2^2 should be A_i^2. It is unclear whether the transformation A is different among different attention heads within the same layer.**\n \nA1: Thanks for your suggestion. We have refined this description as Line 194 of the revised version. As you mentioned, A_2^2 should be A_i^2, and the transformation A is different among different attention heads. \n \n \n**Q2: For the Fig. 
3 (b), it is recommended to provide details explanation about how the similarity matrix is computed. Also, it would be interesting to show the dissimilarity matrix of the linear transformation before and after fine-tuning, since they are comparable after re-parameterization.**\n\nA2: Thanks for the good question. For Fig.3 (b), we visualize the similarity matrix of LIFTs-ADA in different layers of LIFTs-Deep. In each layer, there exist a vector of scale and a shift variables of LIFTs-ADA, as shown in Alg. 1 of the submitted version. We compute the similarity of the scale factor across different layers to show their weak correlation. That is to say, they learn independent representation in each layer rather than interdependent and redundant features. \n\nWe found the proposed dissimilarity quite interesting. However, the model weights parameters are extremely sensitive. To verify this, we conduct a simple ablation experiment according to your suggestion as shown in the following parts. First, we choose three set of fine-tuned model weights for comparison: full fine-tuning, linear probing, and the proposed LIFTs. After getting the weights, we compute the dissimilarity of weights between full fine-tuning and linear probing. After that, we compute the dissimilarity of weights between full fine-tuning and LIFTs. Finally, we compare both dissimilarity values and find LIFTs has a higher value than linear probing, but LIFTs has higher accuracy. It means that, even though the accuracies of the two networks are similar, the weights might be very different. We have also conducted another step of experiments for a sanity check where we train the same model with different seeds. Even with similar final accuracy, the learned model weights can be quite different.\n\nAdditionally, inspired by your comment, we find the feature similarity might better depict the similarity of the model [1]. It is possible that even though the weights of the two models are very different, the extracted features are similar. Therefore, we show the curve of CKA w.r.t. the layers in Fig.3 (b). We compute the feature similarity between full fine-tuning and LIFTs, full fine-tuning and linear probing, and full fine-tuning and VPT-Deep separately. Near the classification layer, LIFTs have higher feature similarity to full fine-tuning compared with linear probing and VPT-Deep, which shows the effectiveness of our method.\n\n \n**Q3: For Tab. 2, why the Tuned/Total rates are so different between NABirds dataset and others?**\n \nA3: In Table 2, we compute all tuned parameters, which also contain the head of the network and it’s number of parameters is proportional to the number of classes. NABirds only has 55 classes but other datasets have 200 (CUB), 102 (Flowers), 120 (Stanford Dogs), and 196 (Stanford Cars) classes, respectively. NABirds has fewer head parameters than other datasets but the total parameters are similar because the same ViT-B/16 backbone is employed. Thus, the tuned/total rate of NABirds is much less than other datasets.\n\n[1] Simon Kornblith, Mohammad Norouzi, Honglak Lee, Geoffrey Hinton. Similarity of Neural Network Representations Revisited. ICML2019.\n", " **Q4: Please provide some discussion on the limitations and societal impact in the paper.**\n \nA4: Regarding the limitations of this work, we currently focus on sharing backbone parameters among different tasks while treating each task independent of the rest of the tasks involved. 
However, some recent papers (e.g., [1, 2]) show that by correlating multiple tasks together during fine-tuning, the performance of each single task can be further improved. However, recent works treat this relationship among tasks as a black box whose investigation incurs a huge computational cost. Thus, we believe an efficient method to find positive task relationships could be a meaningful direction for further exploration.\n\nThis work has the following societal impact. LIFTs can effectively save parameters and training time compared to full fine-tuning, so that the approach can quickly transfer large models pre-trained on large datasets to downstream tasks, which saves computational resources and carbon emissions. Thanks to the linear transformation and re-parameterization, there is also no need to change the deployed model architecture when the model is transferred to the downstream task. Only a set of weights needs to be replaced, which is also more convenient compared to methods that introduce additional parameters, such as VPT [1]. However, like other fine-tuning methods, LIFTs is also based on a pre-trained model, which may lead to misuse of fine-tuning methods if this upstream pre-trained model is trained on some illegal data. \n\n[1] Menglin Jia, Luming Tang, Bor-Chun Chen, Claire Cardie, Serge Belongie, Bharath Hariharan, Ser-Nam Lim. Visual Prompt Tuning. ECCV2022.\n\n[2] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, Sylvain Gelly. Parameter-Efficient Transfer Learning for NLP. ICML2019.\n\n[3] Elad Ben Zaken, Yoav Goldberg, Shauli Ravfogel. BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-model. ACL2022.\n\n[4] Shoufa Chen, Chongjian Ge, Zhan Tong, Jiangliu Wang, Yibing Song, Jue Wang, Ping Luo. AdaptFormer: Adapting Vision Transformers for Scalable Visual Recognition, arXiv preprint, 2022.\n\n[5] Han Cai, Ligeng Zhu, Song Han. ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware. ICLR2019.\n\n[6] Andrea Gesmundo, Jeff Dean. muNet: Evolving Pretrained Deep Neural Networks into Scalable Auto-tuning Multitask Systems, arXiv preprint, 2022.\n\n[7] Andrea Gesmundo, Jeff Dean. An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems, arXiv preprint, 2022.\n\n[8] Paul Barham et al. Pathways: Asynchronous Distributed Dataflow for ML. MLSys2022.", " Thanks for your valuable suggestions.\n\n**Q1: Since the paper proposes a new adapter-based architecture for transfer learning, please compare the computational complexity (Table 1) and model performance (Table 3 and Table 4) with original adapter-tuning and recent vision adapter structures. I do not see the results in the paper.**\n\nA1: Thanks for your suggestions. Following VPT [1], we have added the complexity analysis of the Adapter [2] in Table 1 of the revised version. We also add experiments to show the performance of Adapter [2, 4] and Bias [3] with different architectures (ViT-B/16, Swin-B, ConvNeXt-B) in Table 3 of the revised version, where our LIFTs still achieves superior results compared to other baselines. The running time and memory are shown in Table 4. \n\n**Q2: As the paper puts a lot of effort on where to insert the LIFT block, the current version seems to be a simplified Neural Architecture Search with a grid search strategy. 
I am wondering whether the author tried some NAS techniques for such a combinatorial optimization problem.**\n \nA2: This is a good question, and we also agree that incorporating a NAS algorithm is a reasonable extension. Currently, we are using a method similar to grid search to find insertion locations, which is a bit like NAS. However, NAS requires an additional search phase, and the structure searched on one task is not necessarily applicable to another task [5]. Some recent works [6, 7] indeed show that, given a task, a multitask system can automatically learn the best path for the specific task, thereby activating only a specific module for this task in a large system, similar to Pathways [8]. However, this could be extremely computationally expensive and would require a considerable amount of extra work. Therefore, we leave building a larger system and then finding task-specific activation paths via NAS as future, separate work, to keep the current work focused on the problem of efficient fine-tuning. \n\n\nBack to the current approach: we also only explore the structure on CIFAR-100, which might not be the optimal structure on other datasets (even if we use NAS, we cannot guarantee that the structure searched on one dataset will be optimal on other datasets), but we are more interested in whether our approach works well and obtains superior performance. Considering your suggestion, we conduct an experiment where LIFTs-ADA is inserted into all layers and all positions. Such an approach is an upper bound of performance. We conduct experiments on CIFAR-100 and ImageNet-1K with pre-trained ViT-B/16 and show that such an approach achieves higher performance than LIFTs-Deep (CIFAR-100: 93.24 vs. 92.76, ImageNet-1K: 83.10 vs. 82.82). Although LIFTs-ADA is inserted into all layers and all positions, the inference phase still does not require any additional parameters due to the re-parameterization.\n\n**Q3: It is more appropriate and important to report the results for CIFAR and ImageNet with ViT in Table 3.**\n \nA3: Thanks for your suggestions. We have added the results for CIFAR and ImageNet in Table 3 with pre-trained ViT-B/16, following VPT [1]. The specific results are as follows. \n \n| | CIFAR-100 | ImageNet-1K |\n| :---: | :---: | :---: |\n| Full fine-tuning | 93.69 | 83.58 |\n| Linear probing | 87.28 | 80.31 |\n| Adapter [2] | 92.42 | 82.66 |\n| Bias [3] | 92.21 | 82.75 |\n| VPT-Shallow [1] | 91.28 | 81.43 |\n| VPT-Deep [1] | 92.01 | 82.69 |\n| LIFTs (ours) | 92.76 | 82.82 |\n\n\nFrom this table, we can see that, although there is a gap between LIFTs and full fine-tuning, our method still outperforms other parameter-efficient fine-tuning methods. \n\n", " Thanks for your valuable comments and recognition.\n \n**Q1: Please explicitly denote in Tab. 4 on what device the running time is obtained.**\n \nA1: Thanks for your suggestion. For the training stage, we employ a batch size of 16 and mixed-precision training. For the inference stage, the batch size is 1. The running time in Table 4 is measured on a single GeForce RTX 2080Ti GPU with 11GB of memory in total. See Lines 292-296 of the revised version.\n \n \n**Q2: It will be great if the authors could discuss the limitations of the proposed method, which may showcase the future directions on this line of research to some extent.**\n \nA2: Thanks for your suggestions. 
Currently, we focus on sharing backbone parameters among different tasks while treating each task independent of the rest of the tasks involved. However, some recent papers (e.g., [1, 2]) show that by correlating multiple tasks together during fine-tuning, the performance for each single task can be further improved. However, recent works treat this relationship among tasks as a black box in which investable suffers a huge computational cost. Thus, we believe an efficient method to find positive task relationships could be a meaningful direction for further exploration.\n\n[1] Andrea Gesmundo, Jeff Dean. muNet: Evolving Pretrained Deep Neural Networks into Scalable Auto-tuning Multitask Systems, arXiv preprint, 2022.\n\n[2] Andrea Gesmundo, Jeff Dean. An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems, arXiv preprint, 2022.", " Dear AC and all reviewers,\n \nWe are grateful to AC for organizing the review of our paper and appreciate all the reviewers for the valuable comments and recognition of our work. We have revised our manuscript carefully and improved the proofreading based on all reviewers’ comments. The revised version has been updated online. The updated sections are marked in the magenta font. The responses to each reviewer are as follows. We will also continue to revise our manuscript according to the reviewers’ further comments in Author- Reviewer Discussion stage.\n", " The authors propose a simple linear adapter for parameter-efficient fine-tuning. Unlike previous methods that involve complex non-linear operations or prompting, they explore linear feature scalability of vision transformers and propose LIFTs-ADA. Furthermore, their linear adapters can be merged or subsumed into the original modules of the network during inference stage making them not requiring any additional parameters and FLOPs. Their experiments show that their proposed method performs at par or better than previous competing methods such as VPT with substantially less number of parameters during fine-tuning for downstream tasks.\n\n\n Strengths\n\n1. Method is simple, effective and easy to use.\n2. Inference stage does not require any additional parameters and FLOPs, thereby reducing computational costs. They also provide numbers (#params and #FLOPs) for comparison with another parameter-efficient fine-tuning method such as VPT.\n3. Authors have performed a comprehensive set of experiments with various datasets and downstream tasks to support their method and claims.\n4. Authors also perform a series of ablations to understand the effectiveness of locations for placing their linear adapters.\n\nWeaknesses\n1. Paper is poorly written in some places. Some specific lines 103, 175, 176, 177. In general the paper lacks the written maturity of a Neurips paper.\n2. Lack of theoretical grounding. But their simplicity of idea and empirical effectiveness seem to be sufficient according to me.\n3. Lack of study on robustness and OOD performance.\n 1. Please review and explain this section better (lines 175-177)\n\nHere, we consider that g is a linear function for its simplicity of the linear function but the lack of comprehensive investigation and the properties of the linear function make it possible to be merged with other operations of the network in the inference phase.\n\nI do not understand this line. Looks like 2 unrelated sentences have been merged.\n\n2. 
The claim (lines 27-27) that you do not have to store the pretrained model separately for each fine-tuned model seems flawed. If the linear adapters are subsumed once into the pre-trained network during the inference stage, you would have to do this for each downstream task and store a different model for each downstream task. 1. Lack of results on robustness or OOD performance.\n2. Writing quality in some places is not up to the standards.\n3. Lack of theoretical grounding (but this can be excused).\n4. For each downstream task, you still have to store a separate model, since the parameters of the linear adapters are subsumed into the original parameters of the pre-trained network, thus effectively updating the parameters.", " In this submission, instead of optimizing all the parameters to adapt to a new task, the authors propose a LInear Feature Scalability (LIFTs) method that keeps all the model parameters from the pre-trained network but only learns a task-specific linear feature scalability adaptation layer for each block. In this way, the authors alleviate the dilemma of useful information elimination and expensive computation burden as in previous all-parameter-optimized fine-tuning approaches, making the model readily and effectively reusable across various tasks. Experiments on both fine-grained visual classification and general classification datasets convincingly demonstrate the superiority of the proposed LIFTs over prior works. Further results on segmentation and detection are also provided in the supplement. [Strengths]\n- The proposed module is plug-and-play, which is easy to incorporate into various architectures. The motivation is clear. The authors analyze the flaws of the existing full-parameter fine-tuning methods in detail, and then demonstrate their solution of LIFTs accordingly.\n\n- The detailed algorithm workflow with the key code is provided, which makes it easier to understand the method details. Detailed discussions on the complexity are given in the method section, which is critical to understanding the contributions of the proposed LIFTs. The paper is easy to follow. The illustrations are quite clear, firstly giving a definition of the key term of feature scalability in this paper.\n\n- Extensive ablation studies and visualization results are provided, exploring the impacts of various designs. \n\n[Weaknesses]\n- Please explicitly denote in Tab. 4 on what device the running time is obtained.\n\n- The authors are suggested to use the same font across all the figures. \n\n- It will be great if the authors could discuss the limitations of the proposed method, which may showcase the future directions on this line of research to some extent.\n\n- Grammar: Ln 329 “ … which makes the network applied to downstream tasks not require any additional parameters and FLOPs …”\n\n - Could you analyze the limitations of the proposed method? It will be helpful for follow-up research on this line of fine-tuning schemes with feature scalability. Yes, limitations have been discussed. I do not find any potential negative societal impact of this work.", " A new fine-tuning method for vision transformers is proposed in this manuscript. The authors have explored the Linear Feature Scalability of vision transformers, which is defined as the fine-tuning performance with only a linear transformation of the feature map. A simple module named LIFTs-ADA is designed for fine-tuning, where some additional linear transformation operations are injected after the MLP and MSA. 
Re-parameterization is designed to make the fine-tuning parameter-free during inference. It has shown its efficiency on different tasks and models.\n Strengths:\n1. The definition of feature scalability and the observation that applying only a linear transformation of the feature map can lead to great performance on different downstream tasks are interesting. \n2. The proposed method is quite simple yet efficient for vision transformers with respect to naive fine-tuning and vision prompt fine-tuning. The additional FLOPs of the proposed method are indeed lower than those of the prompt-based method.\n3. The re-parameterization trick can naturally integrate the linear transformations learned for different tasks into the original linear layer of the transformer, leading to zero-overhead inference.\n4. The ablation studies about different configurations of ADA are good.\n\nWeaknesses:\n1. Some parts of the manuscript are unclear. For example, in line 195, A_2^2 should be A_i^2. It is unclear whether the transformation A is different among different attention heads within the same layer.\n2. For Fig. 3 (b), it is recommended to provide a detailed explanation of how the similarity matrix is computed. Also, it would be interesting to show the dissimilarity matrix of the linear transformation before and after fine-tuning, since they are comparable after re-parameterization.\n3. For Tab. 2, why are the Tuned/Total rates so different between the NABirds dataset and others? Please refer to the weaknesses part. The limitations are mainly about the unclear statement of some parts and the insufficient analysis of the learning behaviors (the different visualizations of the linear transformation).\n", " The paper introduces a parameter-efficient transfer learning method for vision transformers, named LInear Feature Scalability (LIFTs). It inserts a linear-structured adapter layer into the transformer block, while keeping other model parameters unaltered. The micro and macro insertion positions are carefully searched through experiments. The trained linear adapter layer can be fused into the original network structure by applying reparameterization tricks, without additional computational overhead at inference time. LIFT shows promising results on a variety of evaluations. Strengths:\n1.\tSimple but efficient design. The linear-structured adapter can be easily implemented and plugged into any existing vision transformer structure without hassle. \n2.\tThe proposed LIFT adapter takes the idea of the reparameterization method, thus avoiding the additional inference cost. \n3.\tThe performance improvement is encouraging.\n\nWeakness:\n1.\tSince the paper proposes a new adapter-based architecture for transfer learning, please compare the computational complexity (Table 1) and model performance (Table 3 and Table 4) with original adapter-tuning and recent vision adapter structures. I do not see the results in the paper. 1.\tAs the paper puts a lot of effort on where to insert the LIFT block, the current version seems to be a simplified Neural Architecture Search with a grid search strategy. For example, 3 macro designs, called LIFTs-Shallow, LIFTs-Shared and LIFTs-Deep, are evaluated. But there are still other hybrid micro and macro designs that are untested. I am wondering whether the author tried some NAS techniques for such a combinatorial optimization problem.\n\n2.\tIn Table 3, why not include the ViT-B structure for CIFAR-100 and ImageNet-1k experiments? 
Since the Visual prompt tuning was originally designed for ViT-structured network, I believe it is more appropriate and important to report the results for CIFAR and ImageNet with ViT, since it provides a fair test bed for two methods. Please provide some discussion on the limitations and societal impact in the paper." ]
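Because so much of this record turns on the re-parameterization step, a minimal sketch of the underlying algebra may be useful: a learned scale gamma and shift beta applied after a linear layer fold into new weights via W' = diag(gamma) W and b' = gamma * b + beta, so inference needs no extra parameters. The snippet below is our own illustration of that identity (variable names are hypothetical), not the authors' released code.

```python
import torch

torch.manual_seed(0)
d_in, d_out = 8, 16
linear = torch.nn.Linear(d_in, d_out)
gamma, beta = torch.randn(d_out), torch.randn(d_out)     # learned scale and shift

x = torch.randn(4, d_in)
y_train = gamma * linear(x) + beta                       # training-time computation

folded = torch.nn.Linear(d_in, d_out)                    # inference-time layer
with torch.no_grad():
    folded.weight.copy_(gamma[:, None] * linear.weight)  # W' = diag(gamma) @ W
    folded.bias.copy_(gamma * linear.bias + beta)        # b' = gamma * b + beta

print(torch.allclose(y_train, folded(x), atol=1e-6))     # True: identical outputs
```

The same folding argument is why, as several reviews note, only one set of weights per task has to be swapped at deployment time while the architecture stays unchanged.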
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5, 5 ]
[ "i0pQxf6Kalt", "vIKRmxes-WV", "vIKRmxes-WV", "vIKRmxes-WV", "Km9fnfbVGUl", "oPGM2NUJ9hf", "oPGM2NUJ9hf", "USpmRis22fK", "nips_2022_XtyeppctGgc", "nips_2022_XtyeppctGgc", "nips_2022_XtyeppctGgc", "nips_2022_XtyeppctGgc", "nips_2022_XtyeppctGgc" ]
nips_2022_qm5LpHyyOUO
MCMAE: Masked Convolution Meets Masked Autoencoders
Vision Transformers (ViT) have become widely adopted architectures for various vision tasks. Masked auto-encoding for feature pretraining and multi-scale hybrid convolution-transformer architectures can further unleash the potential of ViT, leading to state-of-the-art performances on image classification, detection and semantic segmentation. In this paper, our MCMAE framework demonstrates that multi-scale hybrid convolution-transformer architectures can learn more discriminative representations via the masked auto-encoding scheme. However, directly using the original masking strategy leads to heavy computational cost and a pretraining-finetuning discrepancy. To tackle the issue, we adopt masked convolution to prevent information leakage in the convolution blocks. A simple block-wise masking strategy is proposed to ensure computational efficiency. We also propose to more directly supervise the multi-scale features of the encoder to boost multi-scale features. Based on our pretrained MCMAE models, MCMAE-Base improves ImageNet-1K finetuning accuracy by 1.4% compared with MAE-Base. On object detection, MCMAE-Base finetuned for only 25 epochs surpasses MAE-Base fine-tuned for 100 epochs by 2.9% box AP and 2.2% mask AP respectively. Code and pretrained models are available at \url{https://github.com/Alpha-VL/ConvMAE}.
Accept
The reviewers were initially positive about this submission. After the authors' rebuttal, one reviewer pointed out that the name `ConvMAE' is not appropriate for describing the current work. The authors responded by committing to an alternative name, which is acknowledged by the reviewer. Overall, all the reviewers remain positive about this work, and the AC stands with the reviewers. The authors shall take the suggestions from the reviewers to further polish the current work in the camera-ready submission.
train
[ "ZZ63RrAwYp", "NDxV1kaiQlw", "V8I7SsvVZ90", "ssuZAulGPCci", "sJqlrXfOh42", "BAtNiU6C_eY", "8otWqFlnXbm", "JhGaaXNu6gL2", "i0kZXF2AXc5", "2nxBGoV8-N8", "_ljYpO-7La0", "tpXLzQK9bca", "Mt_r_atVyQT", "SCw1JSaqqO" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We update the results of VideoConvMAE-multiscale pretrained for 1600 epochs on SSV2 in the table below :\n| ConvMAE-multiscale/Epochs | 800 | 1600 | \n|----------------|------|-----|\n| Kinetics-400 | 82.7 |N/A| \n| SSV2 | 70.7 | 71.2| \n", " Thanks! Given this change I have no other concerns about the paper.", " Thanks for your suggestion. After careful consideration, we will rename our approaches to MC-MAE (Masked Convolution meets Masked AutoEncoder) and backbone ConViT to CViT. The final version will be updated with the new name.", " Thanks for the response and glad to see this approach improving over the original MAE. However, my biggest concern that the over-emphasis on the name is not addressed. Several additional remarks:\n\n* \"ConvMAE\" gives reader the wrong impression that it is a ConvNet architecture with MAE pre-training. It is not the case here (hybrid architecture used).\n* As mentioned by the authors, none of the 3 works [1-3] the paper followed, or has similar architecture with, use the name \"Conv\" in their approach. For [1], it is citing the architecture as ViT-C (and instead of ViT-B), where Transformer is still the main character in play.\n* May be a better name is MCAE, as the paper is Masked Convolutions meets masked Autoencoders?\n\nIf the biggest concern is not addressed, I will lower my rating of the paper for the potential risk of oversell. ", " > Did you experiment with larger models of the order of ViT-L and ViT-H? How do these results scale?\n\nWe pretrain ConvMAE-Large with multi-scale fusion for 1600 epochs and fine-tune the pretrained ConvMAE-Large on ImageNet, COCO, and ADE20K. We compare the results with those of ConvMAE-Base, MAE-Base, and MAE-Large in the following table. It shows that our proposed ConvMAE can further improve when scales up. In the future, we will conduct experiments to train ConvMAE-Huge.\n\n| | COCO FT Epoch | COCO AP{Box} | ImageNet | ADE20K |\n|---------------|:---------------:|:--------------:|:----------:|:--------:|\n| MAE-Base | 100 | 51.2 | 83.6 | 48.1 |\n| ConvMAE-Base | 25 | 53.2 | 85.0 | 51.7 |\n| MAE-Large | 100 | 54.6 | 85.9 | 53.6 |\n| ConvMAE-Large | 25 | 55.6 | 86.7 | 54.1 |\n\n> Limited runtime analysis with current approaches.\n\nWe compare the inference speed under Mask-RCNN framework for object detection. Our ConvMAE significantly improves the accuracy of object detection with slightly increase of inference time. The inference speed is tested on the A100 GPU. \n| | Inference Speed | COCO AP{Box} \n|---------------|-----------------|-----------------|\n| ConvMAE-Base | 0.090 s/img | 53.2 | \n| MAE-Base | 0.083 s/img | 51.2 |\n\n\n> More experiments on VideoConvMAE and VideoConvMAE-multiscale\n\nWe additionally pretrain VideoConvMAE for 400 epochs and VideoConvMAE-multiscale for 1600 epochs. The performance for 400 epochs is updated in the following table. We will update the results of 1600 epochs after the longer training is finished. \n\n| ConvMAE/Epochs | 200 | 400 | 800 |\n|----------------|------|-----|------|\n| Kinetics-400 | 80.1 |81.3| 81.7 |\n| SSV2 | 67.7 |69.2| 69.9 |\n\n\n| ConvMAE-multiscale/Epochs | 800 | 1600 | \n|----------------|------|-----|\n| Kinetics-400 | 82.7 |N/A| \n| SSV2 | 70.7 | N/A| \n\n\n\n\n> Performance comparison with CoAtNet\n\n| | Parameters | Resolution | ImageNet |\n|--------------|------------|------------|----------|\n| CoAtNet-3 | 168 M | 224 * 224 | 84.5 |\n| ConvMAE-Base | 88 M | 224 * 224 | 85.0 |\n\nCoAtNet only performs experiments on image classification. 
The ImageNet-1K accuracy of CoAtNet and ConvMAE is listed above. ConvMAE with 88 million parameters can surpass CoAtNet-3 with 168 million parameters by 0.5 accuracy. \n", " > Would the masked feature map introduce some bias or artifacts to the training along the mask edges? It may be helpful to find a way to alleviate the boundary issue.\n\nYes. Masked feature map would introduce artifacts to the training process. We will explore new approaches for solving the artifacts introduced by masked feature.\n\n> The comparison with the MAE training with standard ViT backbones may be unfair, due to introducing extra computation cost with the convolutional layers. It would be helpful to further break down the improvements.\n\nConvMAE shares similar parameters and FLOPs with MAE on various downstream tasks as shown in the table below (Table 1, 2, 3 of the original paper). When we design ConvMAE, we compensate for the extra computation of convolution layers by reducing the number of transformer blocks from 12 blocks to 11 blocks.\n\n\n| Method | FLOPs/Params of SEG | FLOPs/Params of DET |\n|:--------------:|:---------------------:|:---------------------:|\n| MAE-Base | 0.6 T / 163M | 0.8T / 111M |\n| ConvMAE-Base | 0.6 T / 153M | 0.9T / 104M |\n", " > The contributions of ConvMAE compared with previous hybrid architecture is vague?\n\nThe overall architecture of ConvMAE shares similarity with previous efforts for hybrid visual backbone [1,2,3]. However, as we stated in Line58-72, the contribution of ConvMAE focuses on how to effectively and efficiently pretrain hybrid backbones under masked auto encoding settings with the following strategies: \n1. The adopted masked convolution in the early stages prevents information leakage introduced by local convolutional operators. \n2. The proposed blockwise masking strategy can avoid the requirement of keeping all tokens in stage 3, which significantly accelerates the pretraining. \n3. The proposed multi-scale fusion decoder takes advantage of supervision signals for both fine-grained and coarse-grained features to learn more discriminative representations.\n\nAs shown in Table 5 and Table 6, our proposed training strategies can effectively improve the representations of hybrid architecture under MIM pretraining paradigm while maintaining the training efficiency of original MAE. \n\n[1] Early convolutions help transformers see better. \\\n[2] CoAtnet: Marrying convolution and attention for all data sizes. \\\n[3] Container: Context aggregation network. \n\n\n\n> Why “Block-wise Masking with Masked Convolutions” can keep 25% of tokens inside transformer block?\n\nBlockwise Masking strategy generates a mask for stage 3 then progressively upsamples it to obtain higher-resolution masks for stages 1 and 2. No information would be leaked to unmasked tokens of stage 3 if the corresponding upsampled masks are used for early stages. This strategy can make sure only 25% of all tokens need to be processed during transformer blocks. On the contrary, uniformly masking input tokens requires keeping all tokens in stage 3. \n\n> The pretraining epochs of ConvMAE vary from other approaches. 
Can ConvMAE outperform other approaches under comparable pretraining epochs?\n\n| Method | Encoder | PT-Epochs | ImageNet-1K |\n|:----------:|:-----------:|:-----------:|:-------------:|\n| BEiT | 100% | 300 | 83.0 |\n| MAE | 25% | 1600 | 83.6 |\n| SimMIM | 100% | 800 | 84.0 |\n| MaskFeat | 100% | 300 | 83.6 |\n| data2vec | 100% | 800 | 84.2 |\n| ConvMAE | 25% | 200 | 84.4 |\n\n\nIn the above table (Table 6 in our original submission), we present the ImageNet fine-tuning performance of ConvMAE pretrained for 200 epochs. Compared with previous approaches with longer pretraining epochs, ConvMAE can surpass them with shorter training by only processing 25% of tokens inside the encoder.\n", " > How would the duration of the pre-training epochs impact the backbone performances? How much performance drop if changing to shorter training epochs, such as reducing it from 1600 to 800 epochs?\n\n| Pretrain Epochs | ImageNet FT | COCO AP box | ADE20K |\n|:-----------------:|:-------------:|:-------------:|:---------:|\n| 200 | 84.1 | 50.2 | 48.1 |\n| 400 | 84.4 | 51.4 | 49.5 |\n| 800 | 84.6 | 52.0 | 50.2 |\n| 1600 | 84.6 | 52.5 | 50.7 |\n\nIn the above table (Table 4 in our original submission), we study the influence of different pretraining epochs on various downstream tasks. Longer pretraining epochs lead to improved performance on detection and segmentation. The improvement on classification tasks saturates at 800 epochs. Please refer to the above table (Table 4 in our original submission) for the performance comparison between models pretrained for 800 and 1600 epochs. \n", " > ConvMAE appends several convolutional blocks upon transformer blocks. The difference between MAE and Early Conv is not that salient. \n\nThe overall architecture of ConvMAE shares similarity with previous efforts for hybrid visual backbones [1,2,3]. However, as we stated in Lines 58-72, the contribution of ConvMAE focuses on how to effectively and efficiently pretrain hybrid backbones under masked auto-encoding settings with the following strategies: \n1. The adopted masked convolution in the early stages prevents information leakage introduced by local convolutional operators. \n2. The proposed blockwise masking strategy can avoid the requirement of keeping all tokens in stage 3, which significantly accelerates the pretraining. \n3. The proposed multi-scale fusion decoder takes advantage of supervision signals for both fine-grained and coarse-grained features to learn more discriminative representations.\n\nAs shown in Table 5 and Table 6, our proposed training strategies can effectively improve the representations of the hybrid architecture under the MIM pretraining paradigm while maintaining the training efficiency of the original MAE. \n\n| | COCO FT Epoch | COCO AP{Box} | ImageNet | ADE20K |\n|---------------|:---------------:|:--------------:|:----------:|:--------:|\n| MAE-Base | 100 | 51.2 | 83.6 | 48.1 |\n| ConvMAE-Base | 25 | 53.2 | 85.0 | 51.7 |\n\nAs shown in the above table (Tables 1, 2, 3 in our submission), ConvMAE significantly outperforms MAE with simple but effective strategies. \n\n\n[1] Early convolutions help transformers see better. \\\n[2] CoAtNet: Marrying convolution and attention for all data sizes. \\\n[3] Container: Context aggregation network. \n\n> It is unclear whether the benefit of ConvMAE can still hold when the model size further scales up, as shown in pure ViT-based MAE.\n\nWe pretrain ConvMAE-Large with multi-scale fusion for 1600 epochs and fine-tune on ImageNet, COCO, and ADE20K. 
We compare with ConvMAE-Base, MAE-Base, and MAE-Large in the following table. The further scaling-up experiments verify the scaling capability of ConvMAE. \n\n| | COCO FT Epoch | COCO AP{Box} | ImageNet | ADE20K |\n|---------------|:---------------:|:--------------:|:----------:|:--------:|\n| MAE-Base | 100 | 51.2 | 83.6 | 48.1 |\n| ConvMAE-Base | 25 | 53.2 | 85.0 | 51.7 |\n| MAE-Large | 100 | 54.6 | 85.9 | 53.6 |\n| ConvMAE-Large | 25 | 55.6 | 86.7 | 54.1 |\n\n>Minor typos.\n\nThanks for pointing out the typos. FOV stands for field-of-view. The mask ratio shall be 75% instead of 25%. We will modify those typos in the updated version.\n", " The paper proposes ConvMAE, a hybrid convolution-transformer architecture that is friendly to MAE-like pre-training. MAE was originally proposed with ViT, and due to omitted mask tokens in the backbone encoder, MAE is not trivially extensible to convolutional networks. The work extends MAE by resorting to the hybrid design of first using convolutions, and then using transformers. The masking is done block-wise (at the resolution of the transformer); and masked convolutions are used to avoid potential cheating. Extensive experiments are done on ImageNet classification, object detection, semantic segmentation, video classification. Various ablation analysis is also provided. (+) Self-supervised learning, especially masked auto-encoding for images is an emerging topic in computer vision. A breakthrough in this direction can bear huge significance. The work aims at fixing the limitation of MAE by introducing an hybrid architecture of convolutions and transformers, which is definitely important and relevant to the NeurIPS audience.\n\n(+) The paper is well written, and is clear enough for readers to follow through with good illustrations. \n\n(+) The experiments are extensive and conclusive. The downstream transfers include image classification, object detection, semantic segmentation, and even video understanding is involved (which by itself could be an independent investigation). The ablations and the conclusions are also covering most of the things I can think of -- a solid paper clearly with a lot of hard work behind the scene.\n\n(-) I think the \"Conv\" part of \"ConvMAE\" is an over-emphasis. The architecture only has 4 conv layers in the bottom of the network, while it has 11 transformer blocks for the base model (ViT-B has 12 blocks in total). So my current understanding is that ConvMAE has a similar architecture as in:\n\nXiao, Tete, et al. \"Early convolutions help transformers see better.\" Advances in Neural Information Processing Systems 34 (2021): 30392-30400.\n\nThis means the majority of the architecture is still transformers, and in this regard, the difference/significance over original MAE is not that salient. This is the biggest concern about the paper -- it has a risk of over-sell with the term \"Conv\" in it.\n\n(-) One minor concern is about the scalability of ConvMAE. The paper is highly focused on the model size of the base model. It is unclear the benefit of ConvMAE can still hold when the model size further scales up, as shown in pure ViT-based MAE.\n\n(-) Some minor typos need to be fixed with proof-reading:, e.g., should define what is FOV at page 2, and the mask ratio should be 75% instead of 25% for MAE if I recall correctly.\n - Please address the concerns mentioned above, especially the first point. I do not see potential negative societal impact concern. 
The paper also points this out at the end, which is adequate to me.", "This paper proposes a self-supervised framework using a hybrid convolution-transformer architecture to obtain multi-scale, hierarchical representations. Masked convolution is introduced to prevent information leakage in convolution blocks, and a block-wise masking strategy is applied to improve computational efficiency. The resulting model achieves competitive performance in image classification and dense prediction tasks such as object detection. Strengths: \n1. This work effectively extends the self-supervised MAE framework to the hierarchical, convolution-transformer hybrid architecture. \n2. The resulting model outperforms existing self-supervised models in classification and dense prediction tasks. How would the duration of the pre-training epochs impact the backbone performances? How much performance drop if changing to shorter training epochs, such as reducing it from 1600 to 800 epochs? adequate ", "This paper proposed a new self-supervised learning framework by integrating hybrid convolution-transformer architectures and masked convolution into the masked auto-encoders. The proposed method can achieve computational efficiency and a low pretraining-finetuning gap at the same time. Extensive experiments on several computer vision tasks demonstrate the effectiveness of the proposed method. __Strengths__\n\n- The paper is well written and easy to follow. Sufficient technical details are provided.\n- The proposed method is well motivated and simple. Several key components are proposed to address heavy computational cost and pretraining-finetuning discrepancy.\n- The proposed method is flexible and can be applied in both image classification and object detection.\n\n__Weaknesses__\n\n- It seems hybrid convolution-transformer architectures have been explored in previous works but showed very similar performance to MAE (Lines 45-47). Why can the proposed method make them work for MAE? The differences from previous work and the contribution of the paper remain vague.\n- Some parts of the method are not clearly illustrated. For example, in "Block-wise Masking with Masked Convolutions", the authors state that "Uniformly masking stage-1 input tokens would cause all tokens of stage-3 to have partially visible information and requires keeping all stage-3 tokens". Why can the proposed method address this issue? What is the key idea of the proposed method?\n- The required training epochs vary across different methods. I wonder whether the proposed method can still outperform others under the same training epochs.\n\n__Post Rebuttal__\n\nI thank the authors for their response. Most of my concerns have been addressed. I increased my rating and recommend acceptance for this paper. See "Weaknesses". Yes.", "This paper addresses the difficulty of applying MAE training with convolutional layers. The proposed ConvMAE adopts masked convolutions in the early convolutional stages by applying convolution on the masked feature maps. In this way, information leakage is prevented. With the proposed ConvMAE training, the ViT with early convolutional layers can benefit from MAE training and achieves better transfer learning results compared to the standard ViT. It achieves superior performance on ImageNet & MS-COCO. 1. Novel training strategy to enable MAE training for models with convolutional layers.\n2. Strong performance on various transfer learning tasks. 
\n\n Would the masked feature map introduce some bias or artifacts to the training along the mask edges? It may be helpful to find a way to alleviate the boundary issue. The comparison with the MAE training with standard ViT backbones may be unfair, due to introducing extra computation cost with the convolutional layers. It would be helpful to further break down the improvements.", "The paper starts with the hypothesis that a multiscale hybrid convolution-transformer can learn better representations using masked inputs than vanilla ViTs. The original masking scheme proposed in the MAE paper can be computationally prohibitive when directly applied to hybrid models. This paper presents a multiscale block-wise masking strategy with masked convolutions to efficiently train a hybrid transformer-convolutional model for representation learning. The paper shares a broad range of empirical results on classification, detection, segmentation, and video understanding tasks to show the effectiveness of the proposed technique. Originality: The novelty of the paper lies in its proposed multi-scale hybrid convolution-transformer encoder, which can generate hierarchical representations and possibly exceed the performance of vanilla ViTs. The idea of hybrid models already exists in multiple pieces of literature (CoAtNet, Early Convolutions, etc.). Masked convolutions were introduced in the PixelRNN paper (https://arxiv.org/pdf/1601.06759.pdf). The strength of this paper is in its novel combination of existing ideas to produce a very simple hybrid framework that effectively combines the strengths of convolutions and transformers.\n\nI also like the idea of performing masking at the late stage and then progressively upsampling the mask to larger resolutions to avoid the requirement of keeping all tokens in stage 3.\n\nThe proposed setup naturally generates hierarchical representations and fits nicely with Feature Pyramid Networks. It is a nice way to generate a feature pyramid with local context via convolutions and global context using transformers.\n\nQuality: The paper primarily describes experiments using ViT-B scale networks. It covers a broad set of vision tasks but it does not cover scale. It would be nice to see whether the proposed scheme continues to outperform existing masking techniques for larger models. There is also limited runtime comparison with existing techniques.\n\nThe paper shares very informative results of ablation experiments comparing random masking, regular convolutions, multi-scale decoders, etc.\n\nClarity: The paper is very well written, with a nice flow, and explains the concepts with ease. \nNit: line 56 pretraing -> pretraining \n\nSignificance: The paper proposes a simple and effective hybrid convolution-transformer encoder, which naturally generates hierarchical representations from an image and outperforms a number of existing techniques. \n 1. Did you experiment with larger models of the order of ViT-L and ViT-H? How do these results scale?\n2. Why did you generate limited data points for VideoConvMAE and VideoConvMAE-multiscale in Figure 3?\n3. Why did you not include the CoAtNet results in the experiments? Yes, authors adequately addressed the limitations.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 6, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 3, 4, 3 ]
[ "sJqlrXfOh42", "V8I7SsvVZ90", "ssuZAulGPCci", "i0kZXF2AXc5", "SCw1JSaqqO", "Mt_r_atVyQT", "tpXLzQK9bca", "_ljYpO-7La0", "2nxBGoV8-N8", "nips_2022_qm5LpHyyOUO", "nips_2022_qm5LpHyyOUO", "nips_2022_qm5LpHyyOUO", "nips_2022_qm5LpHyyOUO", "nips_2022_qm5LpHyyOUO" ]
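Editorial aside on the ConvMAE rebuttal above: the block-wise masking answer ("generate a mask for stage 3, then progressively upsample it for stages 1 and 2") is easy to see in code. Below is a minimal sketch, assuming PyTorch and the typical 4x/2x/1x token grids of a three-stage hybrid backbone; the function name, the 25% keep ratio, and the stage resolutions are illustrative assumptions, not the authors' implementation.

```python
import torch

def blockwise_masks(h3: int, w3: int, keep_ratio: float = 0.25):
    # Sample the visibility mask at the coarsest (stage-3) token grid...
    n = h3 * w3
    keep = torch.zeros(n, dtype=torch.bool)
    keep[torch.randperm(n)[: int(keep_ratio * n)]] = True
    mask3 = keep.reshape(h3, w3)
    # ...then nearest-neighbour upsample it for stages 2 and 1 so the
    # masked regions stay aligned across stages; masked convolutions in
    # the early stages then cannot leak information into visible
    # stage-3 tokens, and only the kept 25% ever enter the transformer.
    mask2 = mask3.repeat_interleave(2, dim=0).repeat_interleave(2, dim=1)
    mask1 = mask3.repeat_interleave(4, dim=0).repeat_interleave(4, dim=1)
    return mask1, mask2, mask3
```

In contrast, masking uniformly at the stage-1 resolution would leave every stage-3 token partially visible, which is exactly why all stage-3 tokens would then have to be kept.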
nips_2022_d4JmP1T45WE
Training Spiking Neural Networks with Event-driven Backpropagation
Spiking Neural networks (SNNs) represent and transmit information by spatiotemporal spike patterns, which bring two major advantages: biological plausibility and suitability for ultralow-power neuromorphic implementation. Despite this, the binary firing characteristic makes training SNNs more challenging. To learn the parameters of deep SNNs in an event-driven fashion as in inference of SNNs, backpropagation with respect to spike timing is proposed. Although this event-driven learning has the advantages of lower computational cost and memory occupation, the accuracy is far below the recurrent neural network-like learning approaches. In this paper, we first analyze the commonly used temporal backpropagation training approach and prove that the sum of gradients remains unchanged between fully-connected and convolutional layers. Secondly, we show that the max pooling layer meets the above invariance rule, while the average pooling layer does not, which will suffer the gradient vanishing problem but can be revised to meet the requirement. Thirdly, we point out the reverse gradient problem for time-based gradients and propose a backward kernel that can solve this problem and keep the property of the invariable sum of gradients. The experimental results show that the proposed approach achieves state-of-the-art performance on CIFAR10 among time-based training methods. Also, this is the first time that the time-based backpropagation approach successfully trains SNN on the CIFAR100 dataset. Our code is available at https://github.com/zhuyaoyu/SNN-event-driven-learning.
Accept
The authors propose a novel training algorithm to train spiking neural networks (SNNs) in an event-driven manner with backpropagation. They perform experiments on standard benchmarks such as CIFAR-10 and CIFAR-100 to verify the effectiveness of the method. The algorithm achieves SOTA performance on these data sets. Event-driven methods are interesting from a hardware perspective as gradient have to propagated only at spike times. The manuscript received mixed ratings, a clear agreement could not be found. Pros: - The authors provide an analysis of event-driven backprop in SNNs which helps to adjust the usual learning procedure. - The authors performed experiments on several data sets and achieved SOA performance w.r.t. other event-driven methods. - They also tackled CIFAR100 for which no results were previously shown with event-driven algorithms - The paper is well-written, although language could be improved at places. Cons: - Improvements over competing event-driven techniques are rather small - Performance is still clearly below non-event based surrogate gradient methods (but this is not surprising) - Not clear how the method scales up beyond CIFAR100 Since the ratings were mixed, I read the paper and believe it is publishable in NeurIPS although it is somewhat borderline.
test
[ "iy9DEnjSDi", "28mTmhyht9", "Y3jKxXWmhxs", "x1tVPZDgjzY", "u7w7T6Cwd3_", "FGgyxnU1SMd", "6ZQ4AlSBRqf", "Ssht7CSobZn", "KjvapaILMHo", "LWBHoY3zbwy", "Mmqw5LjB7G-", "AFsSs-sHe-", "tLWbrSLRy5D", "dEKk844QTj", "NwpburIDs8q" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 1S4Q,\n\nAs you suggested, we have checked and added the recommended publications in the new version of our paper. We organize the other concerns as follows:\n1) Regarding your question on the contribution of our paper (which was also asked by Reviewer tMmb), we have clarified the contribution of our paper in the response below.\n2) Regarding whether our method can scale up, we have explained that we are the first to train SNNs in an event-driven fashion on the CIFAR100 dataset.\n3) Regarding your concern on the number of time steps, we have provided it in our response, with other hyper-parameters in the supplementary material.\n4) We have explained the difference between the backward connection in [1] and the backward kernel we used in our response.\n\nWe kindly hope you will reconsider your initial rating based on the response we provided. If you have any further questions, please let us know and we are glad to provide a follow-up response.\n\n> [1] Neural architecture search for spiking neural networks Y Kim, Y Li, H Park, Y Venkatesha, P Panda arXiv preprint arXiv:2201.10355", " Dear Reviewer tMmb,\n\nWe notice that your initial review can be grouped into two overall concerns and two detailed concerns.\n1) The main concern is about the contribution of our paper and how our method leads to better results. We would like to note that our time-based event-driven training approach **is essentially different from activation-based surrogate gradient methods and has not been intensively researched (especially on large datasets like CIFAR100)**. The contributions have been re-summarized in the response below, as well as how the proposed components enhance the performance.\n2) The secondary concern is the comparison between our method and the surrogate gradient method, including its performance, rate of convergence, and efficiency. We have analyzed the time-complexity advantage of our approach (which received positive feedback from Reviewer kF1o) and explained why the time-based event-driven approach has not yet reached comparable performance and rate of convergence in the following response.\n3) The third question is about Figure 1, which we have explained in our response below.\n4) The fourth question is about the input encoding, for which we have described the encoding we use in our response, together with an experiment on the TTFS (time-to-first-spike) encoding (90.33\\% accuracy on SEW-ResNet 14 with TTFS encoding compared with 92.45\\% accuracy on SEW-ResNet 14 with the original encoding, in reply to Reviewer kF1o).\n\nBased on these facts and the positive feedback from other reviewers, we sincerely hope you will reconsider your initial rating. If you still have any further comments or questions, please let us know and we are glad to address your further concerns.", " Thanks for your thorough initial comments. We would like to know whether our response has addressed your questions appropriately. As the discussion period will end soon, we sincerely hope to receive your further feedback, and we are glad to provide a follow-up response if needed.", " Dear Reviewer 1S4Q, we sincerely hope our posted response can help address your concerns about our paper and serve as a reference for your re-assessment of our work. If you have any further comments or questions, please let us know and we are glad to write a follow-up response.", " > The paper does not discuss how the datasets are converted to spike domain.\n\nThanks for pointing it out. 
We directly use the real image pixel values as the input of our network, since the image input is commonly used [1][2][3][4], and encoding methods like Poisson encoding often impede the performance of SNNs.\nMeanwhile, directly using images as the input will not decrease the efficiency of a network for the following reasons [5]. First, in computer vision, the input representation typically has much fewer channels (e.g., Red, Green, and Blue) than internal representations (e.g., 512). As a result, the first layer of a ConvNet is often the smallest convolution layer, both in terms of parameters and computations (Szegedy et al., 2014). Second, it is relatively easy to handle continuous-valued inputs as fixed-point numbers with $m$ bits of precision.\n\nWe have also tried the time-to-first-spike encoding (used by [6]), which turns the pixel intensity to the spike firing time of a neuron (higher pixel intensity corresponds to an earlier firing time). We achieved 90.33\\% testing accuracy for this encoding with a SEW-ResNet 14 network on the CIFAR10 dataset.\nWe have added the discussion in the appendix of the revised paper.\n\n>\n\n [1] Zhang, W., & Li, P. (2020). Temporal spike sequence learning via backpropagation for deep spiking neural networks. Advances in Neural Information Processing Systems, 33, 12022-12033.\n [2] Kim, Y., Park, H., Moitra, A., Bhattacharjee, A., Venkatesha, Y., & Panda, P. (2022). Rate Coding Or Direct Coding: Which One Is Better For Accurate, Robust, And Energy-Efficient Spiking Neural Networks?. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 71-75). IEEE. \n [3] Kim, Y., Li, Y., Park, H., Venkatesha, Y., & Panda, P. (2022). Neural architecture search for spiking neural networks. arXiv preprint arXiv:2201.10355.\n [4] Wu, Y., Deng, L., Li, G., Zhu, J., Xie, Y., & Shi, L. (2019). Direct training for spiking neural networks: Faster, larger, better. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 1311-1318).\n [5] Courbariaux, M., Hubara, I., Soudry, D., El-Yaniv, R., & Bengio, Y. (2016). Binarized neural networks: Training deep neural networks with weights and activations constrained to+ 1 or-1. arXiv preprint arXiv:1602.02830.\n [6] Zhang, M., Wang, J., Wu, J., Belatreche, A., Amornpaisannon, B., Zhang, Z., ... & Li, H. (2021). Rectified linear postsynaptic potential function for backpropagation in deep spiking neural networks. IEEE Transactions on Neural Networks and Learning Systems, 33(5), 1947-1958.", " > The authors talk about advantages over the previous work in terms of efficiency however the paper does not report any metric that shows it is more efficient to train with this proposed method.\n\nTo illustrate the advantage of efficiency, we show that the number of operations of the event-based learning algorithms is less than the RNN-based learning algorithms when spikes are sparse. For simplicity, we only analyze a single fully-connected layer with M input neurons and N output neurons. Other layers and the whole network can be analyzed similarly.\n\nDuring training, RNN-based learning algorithms are forced to unfold through the time axis, as explained in Figure 1 and Section 2. As a result, the corresponding number of operations is at least $O(TMN)$, where T is the total time steps and M, N are the number of input and output neurons. 
On the other hand, event-based learning algorithms only have to deal with cases where a certain neuron fires a spike, and record the relevant information. In the forward stage, a spike fired by an input neuron affects the state itself and all output neurons, which is $O(N)$ in total. In the backward stage, a spike fired by an output neuron needs to propagate gradient information to all spikes between this spike and the last spike fired by this neuron (so all input spikes are processed once in this stage). Therefore, denoting the average firing rates of input and output neurons by $\\alpha$ and $\\beta$, the number of operations of this layer is $O(T(\\alpha MN+\\beta M+\\alpha N))=O(T(\\alpha MN+\\beta M))$. When spikes are sparse, event-based learning algorithms certainly have advantages since $\\alpha+\\beta \\ll 1$ in this case.\nWe have added the analysis in the appendix of the revised paper (a rough numerical sketch of these operation counts is also shown after this response).\n\n> How do the proposed methods compare against surrogate gradient techniques?\n\nAs illustrated in Section 2 "Backgrounds and Related Work", our proposed method differs from surrogate gradient techniques in two aspects.\nFirstly, we calculate gradients of spike timings with respect to the loss, while surrogate gradient methods calculate gradients of the 'spike scale' (illustrated in Figure 1d).\nSecondly, our method is event-driven, which means information propagates only through spikes in both forward and backward propagation. In contrast, surrogate gradient techniques propagate gradient information even when spikes are not emitted (recall that the surrogate gradient approximates $\\frac{\\partial s}{\\partial u}$ whether there is a spike or not).\n\nThis event-driven property makes our method harder to train compared with the surrogate gradient techniques due to the sparse gradient propagation path. Besides, our learning scheme is relatively new compared with surrogate gradient methods. Thus, the performance and convergence speed of this learning scheme have not yet surpassed those of the surrogate gradient methods. However, the event-driven property empowers our learning scheme to be more biologically plausible and to have more potential for efficiency optimization when running on neuromorphic hardware.\n\n> Does the proposed method converge faster compared to previous algorithms?\n\nSince our proposed method addresses a less-researched topic, it cannot converge faster than the surrogate-gradient-based methods yet. As discussed above, the gradient information can only be passed through spikes, which is sparser than in RNN-like surrogate gradient methods. This leads to difficulty in event-driven training. In addition, event-driven learning is a topic with far less existing research than RNN-like training. Therefore, future research might focus on improving the convergence speed.\nHowever, the proposed method fits better into the event-driven nature of SNNs, which makes it more power-friendly when training on neuromorphic hardware and more biologically plausible.\n\n> I wonder if the author's method can even scale up since their results are limited to CIFAR10 and CIFAR100.\n\nWe would like to note that the most complex dataset on which previous works successfully trained SNNs in an **event-driven fashion** is the CIFAR10 dataset. Our work goes one step further and successfully trains SNNs on the CIFAR100 dataset. To the best of our knowledge, this is the first time that the time-based backpropagation approach successfully trains SNNs on the CIFAR100 dataset. 
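As flagged in the response above, here is a rough back-of-the-envelope sketch of the operation counts, directly plugging the $O(TMN)$ and $O(T(\alpha MN+\beta M+\alpha N))$ expressions into Python; the layer sizes and firing rates are made-up illustrative values, not measurements from the paper.

```python
def bptt_ops(T, M, N):
    # RNN-like unrolling touches every synapse at every time step.
    return T * M * N

def event_driven_ops(T, M, N, alpha, beta):
    # Work is triggered only by spikes; alpha/beta are average
    # per-step firing rates of input/output neurons.
    return T * (alpha * M * N + beta * M + alpha * N)

T, M, N = 12, 1024, 1024  # assumed layer size and time steps
for rate in (0.01, 0.05, 0.2):
    ratio = event_driven_ops(T, M, N, rate, rate) / bptt_ops(T, M, N)
    print(f"firing rate {rate:.2f}: event-driven/BPTT ops ~ {ratio:.3f}")
```

The ratio is dominated by the input firing rate alpha, so the sparser the spikes, the larger the advantage — the point the response makes with $\alpha+\beta \ll 1$.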
Returning to the scalability question: we are glad to investigate more complex datasets (such as TinyImageNet and ImageNet) further in future work.\n\n\n> The authors did not comment on how many time steps their method requires to train.\n\nThanks for your comments. We would like to note that training networks in an event-driven fashion does not necessarily need the concept of a 'time step'. As illustrated in Figure 1, we only have to record information at spike times (the precise timing, the slope of the membrane potential, etc.) to train our network, so the approach is not inherently clock-driven. However, to better make use of current deep learning frameworks, we turn the training process from continuous time into discrete time steps in our simulation. The number of time steps is set to 5 for MNIST and Fashion-MNIST, 12 for CIFAR10, 16 for CIFAR100, and 30 for N-MNIST. For further information about hyper-parameters, please refer to the appendix (section 'Implementation Details').\n\n\n> In the recent work [5], the authors show that they can use backward connections to train SNNs better, is there a similarity between the backward kernel and backward connection?\n> [5] Neural architecture search for spiking neural networks Y Kim, Y Li, H Park, Y Venkatesha, P Panda arXiv preprint arXiv:2201.10355\n\nWe would like to clarify that the backward kernel we proposed differs from the backward connection. The backward connection in [5] is applied in both forward and backward propagation (it adds the transformed node feature of the $l$-th layer at time step $t-1$ to the node of the $l'$-th ($l'<l$) layer at time step $t$). In contrast, our backward kernel can be viewed as a correction of the original workflow in the backward propagation, which means it is only applied in the backward propagation.", " We appreciate your constructive comments. We would like to address your concerns below.\n\n> If the authors can highlight their technical novelty and compare it to previous works, it will help me re-assess the paper's contributions.\n\nWe would like to emphasize that our work focuses on event-driven learning (with temporal gradients), which is different from the commonly used RNN-like BPTT learning approaches (with activation-based gradients). In our method, the gradient information is carried by spikes instead of by both spikes and non-spikes at each time step (shown in Figure 1). This feature makes our method harder to train (because of the sparser gradient propagation path) but also more biologically plausible.\n\nThe main contributions of our paper are re-summarized as follows:\n1. We prove that the typical SNN temporal backpropagation training approach assigns the gradient of an output spike of a neuron to the input spikes generating it. Summing this assignment rule over all spikes, we find that the sum of gradients is kept unchanged between layers.\n2. We analyze the case of the pooling layer (which does not have weights) and find that average pooling does not keep the gradient sum unchanged, but we can modify its backward formulas to meet the requirement. Meanwhile, the max-pooling layer already satisfies the rule.\n3. We point out the reverse gradient problem in event-driven learning, namely that the direction of the temporal gradient is reversed during backpropagation when the kernel function of an input spike is decreasing. We then propose a backward kernel function that addresses this problem while keeping the sum of gradients unchanged between layers.\n4. 
The adjusted average pooling layer and the non-decreasing backward kernel enhance the performance of our model as well as its convergence speed. To the best of our knowledge, our proposed approach achieves state-of-the-art performance on CIFAR10 among event-driven training methods (with temporal gradients) for SNNs. Meanwhile, our method is the first event-driven backpropagation approach that successfully trains SNNs on the larger-scale CIFAR100 dataset.\n\n> The authors have failed to acknowledge most recent works. Below is a list of publications (not exhaustive) that the author should check.\n\nThanks for recommending these works. We are encouraged to see that SNNs can be applied in such a variety of tasks. We have checked and added the following references [1-6] in the revised paper (please refer to the sections Introduction and Backgrounds and Related Work). We will incorporate the discussion of these works in our final version, where the one additional page allows us to extend the current sections with more content and illustration. \n\n\n [1] Lee, C., Sarwar, S. S., Panda, P., Srinivasan, G., & Roy, K. (2020). Enabling spike-based backpropagation for training deep neural network architectures. Frontiers in Neuroscience, 119.\n [2] Kim, Y., & Panda, P. (2020). Revisiting batch normalization for training low-latency deep spiking neural networks from scratch. Frontiers in Neuroscience, 1638.\n [3] Kim, Y., & Panda, P. (2021). Optimizing deeper spiking neural networks for dynamic vision sensing. Neural Networks, 144, 686-698.\n [4] Venkatesha, Y., Kim, Y., Tassiulas, L., & Panda, P. (2021). Federated learning with spiking neural networks. IEEE Transactions on Signal Processing, 69, 6183-6194.\n [5] Kim, Y., Chough, J., & Panda, P. (2021). Beyond classification: directly training spiking neural networks for semantic segmentation. arXiv preprint arXiv:2110.07742.\n [6] Kim, Y., Venkatesha, Y., & Panda, P. (2022). PrivateSNN: Privacy-Preserving Spiking Neural Networks. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 1, pp. 1192-1200).\n", " Thank you for your detailed comments and suggestions for improvement. We would like to address your concerns and answer your questions as follows.\n\n> It is hard to understand what the axes are for Figure 1.\n\nThanks for pointing it out. The horizontal axis denotes time in all four sub-figures. The vertical axis is the membrane potential in (a)(b), and the four vertical layers in \\(c\\)(d) are (bottom to top) input spikes, input current, membrane potential, and output spikes of a neuron. Through Figure 1, we want to emphasize the difference between event-driven learning and RNN-like learning. Information at the firing time of a spike is enough to conduct forward and backward propagation in event-driven learning. In contrast, information at every time step is needed in RNN-like learning. The key reason is shown in Figure 1 \\(c\\)(d): RNN-like learning requires gradient propagation (spike $\\rightarrow$ membrane potential) at each time step through a surrogate function, whereas event-driven learning does not need this.\nThis key difference means that event-driven learning has the potential not to rely on the concept of a 'time step', but to infer and learn in continuous time.\nWe have annotated the axes of Figure 1 \\(c\\)(d) with time steps and input/output.\n\n> It is unclear what the major contributions of the paper are. 
Analyzing previous work does not constitute a contribution.\n\nWe would like to emphasize that our work focuses on event-driven learning (with temporal gradients) of SNNs. Unlike in RNN-like BPTT learning approaches (with activation-based gradients), the gradient information is only carried by spikes instead of by both spikes and non-spikes at each time step (shown in Figure 1). This sparse gradient propagation path makes it harder to train SNNs than with RNN-like BPTT approaches.\n\nThe main contributions of our paper are summarized as follows:\n1. We prove that the typical SNN temporal backpropagation training approach assigns the gradient of an output spike of a neuron to the input spikes generating it. Summing this assignment rule over all spikes, we find that the sum of gradients is kept unchanged between layers.\n2. We analyze the case of the pooling layer (which does not have weights) and find that average pooling does not keep the gradient sum unchanged, but we can modify its backward formulas to meet the requirement. Meanwhile, the max-pooling layer already satisfies the rule.\n3. We point out the reverse gradient problem in event-driven learning, namely that the direction of the temporal gradient is reversed during backpropagation when the kernel function of an input spike is decreasing. We then propose a backward kernel function that addresses this problem while keeping the sum of gradients unchanged between layers.\n4. The adjusted average pooling layer and the non-decreasing backward kernel enhance the performance of our model as well as its convergence speed. To the best of our knowledge, our proposed approach achieves state-of-the-art performance on CIFAR10 among event-driven training methods (with temporal gradients) for SNNs. Meanwhile, our method is the first event-driven backpropagation approach that successfully trains SNNs on the larger-scale CIFAR100 dataset.\n\nThe reason we analyze previous work in Section 2 is to introduce the background and related works and make our method easier to understand. We hope this will increase your recognition of our work.\n\n> It is unclear how the proposed method enables better results. For instance, Table 1 reports similar accuracies for this work compared to the previous ones.\n\nTable 1 shows that our method can achieve state-of-the-art performance on the Fashion-MNIST and CIFAR-10 datasets. The main contender is the TSSL-BP method, which gets 0.06% higher accuracy on the MNIST dataset. We would like to clarify that TSSL-BP uses surrogate gradients in the backward propagation to support training (see lines 55-56), so we are the first pure event-driven method to achieve these results. In addition, our approach is the first event-driven one to successfully train SNNs on CIFAR100. \n\n\n", " Thank you for your positive and constructive feedback. We are encouraged that you find our learning paradigm preserves the biological plausibility and event-based advantages of spiking neural networks when compared with surrogate backpropagation. We would like to address your concerns and answer your questions in the following.\n\n> Spiking Neural Networks have not yet achieved SOTA on any benchmark.\n\nYou have raised an important point. One key feature of SNNs is the use of binary activations. Research on quantized neural networks (QNNs) has shown that the binary activation function used by SNNs will degrade the performance to a large extent [1][2][3][4], an effect even larger than the influence of low-bit weights [3]. 
The relatively low performance of SNNs might originate from this.\n\nIn addition, training deep SNNs is a new and developing research topic with fewer researchers compared with training analog neural networks (ANNs). As a result, it is not surprising that SNNs have relatively low performance compared with ANNs. For this reason, developing new learning schemes for SNNs is necessary, which is the aim of our work.\n\n\n\n [1] Kim, H., Kim, K., Kim, J., & Kim, J. J. (2020). BinaryDuo: Reducing gradient mismatch in binary activation network by coupling binary activations. arXiv preprint arXiv:2002.06517.\n [2] Cai, Z., He, X., Sun, J., & Vasconcelos, N. (2017). Deep learning with low precision by half-wave Gaussian quantization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5918-5926).\n [3] Mishra, A., Nurvitadhi, E., Cook, J. J., & Marr, D. (2017). WRPN: Wide reduced-precision networks. arXiv preprint arXiv:1709.01134.\n [4] Zhou, S., Wu, Y., Ni, Z., Zhou, X., Wen, H., & Zou, Y. (2016). DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients. arXiv preprint arXiv:1606.06160.\n\n> In line 233 you stated that more details about the configuration of training were available in the appendix; I was not able to find the appendix section, nor an appendix file in the supplementary material, even though the code contains the configurations of the experiment.\n\nThe appendix file is the 'supplementary material.pdf'. The configuration of training is in the 'Implementation Details' section of the appendix.", " Thank you for your positive and thoughtful comments. We are encouraged that you find our paper well written and potentially valuable for publication at the NeurIPS conference. We would like to address your concerns and answer your questions in the following.\n\n> When implementing these event-based learning algorithms on neuromorphic hardware, how to accelerate the training and why is it significantly faster than the RNN-based learning algorithms?\n\nYou have raised an interesting concern about running our algorithm on neuromorphic hardware. We have discussed this with some hardware experts, and they suggest that acceleration on hardware can be achieved by representing spike inputs and outputs in a compressed, time-stamped, and sorted way, then sequentially walking through the time-stamped spikes and avoiding computation when spikes are not emitted in simulation [1].\n\nNext, we show that the number of operations (i.e., the total number of neuron updates triggered by spikes) of the event-based learning algorithms is lower than that of the RNN-based learning algorithms when spikes are sparse. For simplicity, we only analyze a single fully-connected layer with M input neurons and N output neurons. Other layers and the whole network can be analyzed similarly.\nDuring training and inference, RNN-based learning algorithms are forced to unfold through the time axis, as explained in Figure 1 and Section 2. As a result, the corresponding number of operations is at least $O(TMN)$, where T is the total number of time steps and M, N are the numbers of input and output neurons. 
In the backward stage, a spike fired by an output neuron needs to propagate gradient information to all spikes between this spike and the last spike fired by this neuron (so all input spikes are processed once in this stage). Therefore, denoting the average firing rates of input and output neurons by $\\alpha$ and $\\beta$, the number of operations of this layer is $O(T(\\alpha MN+\\beta M+\\alpha N))=O(T(\\alpha MN+\\beta M))$. When spikes are sparse, event-based learning algorithms certainly have advantages since $\\alpha+\\beta \\ll 1$ in this case.\nWe have added the analysis in the appendix of the revised paper.\n\n\n\n [1] Narayanan, S., Taht, K., Balasubramonian, R., Giacomin, E., & Gaillardon, P. E. (2020, May). SpinalFlow: An architecture and dataflow tailored for spiking neural networks. In 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA) (pp. 349-362). IEEE.\n\n\n> Can this method be trained with residual connections?\n\nYou have raised an interesting concern. We tested a ResNet on CIFAR10 and found that it performs even better than the results presented in our paper (92.45% with a 14-layer Spike-Element-Wise (SEW) ResNet).\nOne thing worth noticing is the case when two simultaneous input spikes are added in the residual connection. According to our analysis in 'Invariant sum of gradients among layers with weights' in Section 3.2, the gradients of input spikes 'inherit' gradients from the next output spike and keep the sum of gradients unchanged. However, in the add operation, the default gradient rule is to copy the gradient from the summed spike to both input spikes (since when $s=a+b$, $\\frac{\\partial s}{\\partial a} = \\frac{\\partial s}{\\partial b} = 1$). This causes gradients to be amplified in earlier layers, which can be corrected by a custom backward function.\nWe have added the results in Table 2 of the revised paper.\n", "This paper focuses on the temporal, event-driven manner of training a spiking neural network from scratch. The authors first revisit the learning dynamics of event-driven learning and discover several invariance properties. Then, a problem called the reverse gradient problem is raised and addressed. Extensive experiments are conducted to verify the effectiveness of this method. 1. This paper is well-presented. The writing and visualization are neat and easy to follow.\n\n2. The experimental results are sufficient for comparison with existing event-driven learning works. TBH, I am not an expert in event-based training of SNNs, therefore I cannot give useful feedback with respect to that. I have a few questions about the difference between event-based training and the "RNN-like" training. \n\n1. When implementing these event-based learning algorithms on the neuromorphic hardware, how to accelerate the training and why is it significantly faster than the RNN-based learning algorithms?\n\n2. Can this method be trained with residual connections?\n Overall, I found this paper interesting and believe it could be valuable for publication at the NeurIPS conference. As I am not familiar with this type of training method for SNNs, I could not give conceptual limitations for this paper. I am giving borderline acceptance of this paper due to its good presentation. Meanwhile, I will set my confidence score to 2 and will look into the comments from other reviewers to finalize my rating. \n\n\n-----\n\nPOST-REBUTTAL REVIEW:\n\nI'd like to thank the authors for their detailed response. 
My questions are addressed, thus I increase my rating to 7.\n", "The authors propose a temporal Backprop approach for training SNNs with an interesting backward kernel function. + The authors showcase that their temporal BP methodology yields high accuracy as compared to similar related works.\n- The paper presents a direct training method using BP for SNNs. This work is very derivative and incremental. There is a lot of work from Priya Panda's group at Yale, Emre Neftci's group, and many others with regard to SNN training. The authors have failed to acknowledge most recent works and the method they are proposing is very incremental in the context of those works. Further, many recent works on SNNs have targeted larger datasets including video segmentation with direct training. I wonder if the author's method can even scale up, since their results are limited to CIFAR10, CIFAR100.\n\n- The authors did not comment on how many time steps their method requires to train. In the recent work [5], the authors show that they can use backward connections to train SNNs better; is there a similarity between the backward kernel and the backward connection?\n\n\nBelow is a list of publications (not exhaustive) that the author should check:\n[1] \n[2] Enabling spike-based backpropagation for training deep neural network architectures C Lee, SS Sarwar, P Panda, G Srinivasan, K Roy Frontiers in neuroscience, 119\n[3] Rate Coding Or Direct Coding: Which One Is Better For Accurate, Robust, And Energy-Efficient Spiking Neural Networks? Y Kim, H Park, A Moitra, A Bhattacharjee, Y Venkatesha, P Panda ICASSP 2022-2022\n[4] Neuromorphic Data Augmentation for Training Spiking Neural Networks Y Li, Y Kim, H Park, T Geller, P Panda arXiv preprint arXiv:2203.06145\n[5] Neural architecture search for spiking neural networks Y Kim, Y Li, H Park, Y Venkatesha, P Panda arXiv preprint arXiv:2201.10355\n[6] Optimizing deeper spiking neural networks for dynamic vision sensing Y Kim, P Panda Neural Networks 144, 686-698\n[7] Federated Learning with Spiking Neural Networks Y Venkatesha, Y Kim, L Tassiulas, P Panda IEEE Transactions on Signal Processing 2021\n[8] Beyond classification: directly training spiking neural networks for semantic segmentation Y Kim, J Chough, P Panda arXiv preprint arXiv:2110.07742\n[9] Visual explanations from spiking neural networks using interspike intervals Y Kim, P Panda Scientific Reports 11, Article number: 19037 (2021)\n[10] Revisiting batch normalization for training low-latency deep spiking neural networks from scratch Y Kim, P Panda Frontiers in neuroscience, 1638 If the authors can highlight their technical novelty as compared to previous works, it will help me re-assess the paper's contributions. See above comments for reference. See weaknesses section.", "The authors propose a modified event-driven backpropagation and investigate its performance on benchmarks. The authors also investigate whether the backpropagation follows a gradient assignment rule, finding that max-pooling obeys this rule. This is one of the most important conclusions of the paper. The algorithm achieved SOTA on CIFAR-10, and was the first to be trained on CIFAR-100. Spiking neural networks represent a new paradigm of neural networks that, among other advantages, incorporates time into the building blocks of its own functioning. 
Thus, in addition to having greater biological plausibility, it is also believed to be more coherent with learning in the real world, which contains the time dimension.\nThe event-based learning paradigm preserves the biological plausibility and event-based advantages of spiking neural networks when compared with surrogate backpropagation.\nHowever, this new paradigm has not yet reached SOTA on any benchmark, not even on those that demand incorporation of the time dimension. In line 233 you stated that more details about the configuration of training were available in the appendix; I was not able to find the appendix section, nor an appendix file in the supplementary material, even though the code contains the configurations of the experiment. It is not clear to me if you want to provide more information in an appendix section or you were referring to the code provided. Spiking Neural Networks have not yet achieved SOTA on any benchmark.\nAlthough spiking neural networks are believed to have greater biological plausibility, it is not clear whether biological neural networks learn through backpropagation, which was the method tested in this study. Despite this, nowadays there is no alternative that works better than backpropagation.\nThe authors also stated that another gap in biological plausibility is the reverse-time processing feature.", "The focus of this paper is the training of spiking neural networks. Specifically, the paper proposes back-propagation with respect to spike timing. They analyze previous methods and propose a small increment over the current state-of-the-art. The results are not clearly discussed. Strengths:\n- The work focuses on an aspect of the learning algorithms that requires optimization and innovation. \n\nWeaknesses\n1.\tIt is hard to understand what the axes are for Figure 1. \n2.\tIt is unclear what the major contributions of the paper are. Analyzing previous work does not constitute a contribution. \n3.\tIt is unclear how the proposed method enables better results. For instance, Table 1 reports similar accuracies for this work compared to the previous ones.\n4.\tThe authors talk about advantages over the previous work in terms of efficiency; however, the paper does not report any metric that shows it is more efficient to train with this proposed method. \n5.\tDoes the proposed method converge faster compared to previous algorithms?\n6.\tHow do the proposed methods compare against surrogate gradient techniques?\n7.\tThe paper does not discuss how the datasets are converted to the spike domain. \n Please refer to Strengths and Weaknesses for the points. There are no potential negative societal impacts. One major limitation of this work is applicability to neuromorphic hardware and how the work shown on GPU will translate to neuromorphic cores." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 8, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 2, 3 ]
[ "tLWbrSLRy5D", "NwpburIDs8q", "NwpburIDs8q", "tLWbrSLRy5D", "FGgyxnU1SMd", "KjvapaILMHo", "Ssht7CSobZn", "tLWbrSLRy5D", "NwpburIDs8q", "dEKk844QTj", "AFsSs-sHe-", "nips_2022_d4JmP1T45WE", "nips_2022_d4JmP1T45WE", "nips_2022_d4JmP1T45WE", "nips_2022_d4JmP1T45WE" ]
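Editorial aside on the residual-connection fix described in the SNN rebuttal above (gradients being copied, hence doubled, at a spike addition s = a + b): a minimal PyTorch sketch of a custom backward function that splits the gradient instead. The class name and the even 50/50 split are assumptions for illustration, not the authors' exact correction.

```python
import torch

class SplitGradAdd(torch.autograd.Function):
    # Adds two spike tensors but halves the gradient to each branch, so
    # the sum of gradients stays invariant across the residual add.
    @staticmethod
    def forward(ctx, a, b):
        return a + b

    @staticmethod
    def backward(ctx, grad_out):
        # Default autograd would return (grad_out, grad_out), since
        # d(a+b)/da = d(a+b)/db = 1, doubling the gradient mass that
        # flows into earlier layers.
        return 0.5 * grad_out, 0.5 * grad_out

# Usage inside a SEW-ResNet-style block: s = SplitGradAdd.apply(a, b)
```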
nips_2022_mjUrg0uKpQ
I2DFormer: Learning Image to Document Attention for Zero-Shot Image Classification
Despite the tremendous progress in zero-shot learning (ZSL), the majority of existing methods still rely on human-annotated attributes, which are difficult to annotate and scale. An unsupervised alternative is to represent each class using the word embedding associated with its semantic class name. However, word embeddings extracted from pre-trained language models do not necessarily capture visual similarities, resulting in poor zero-shot performance. In this work, we argue that online textual documents e.g., Wikipedia, contain rich visual descriptions about object classes, therefore can be used as powerful unsupervised side information for ZSL. To this end, we propose I2DFormer, a novel transformer-based ZSL framework that jointly learns to encode images and documents by aligning both modalities in a shared embedding space. In order to distill discriminative visual words from noisy documents, we introduce a new cross-modal attention module that learns fine-grained interactions between image patches and document words. Consequently, our I2DFormer not only learns highly discriminative document embeddings that capture visual similarities but also gains the ability to localize visually relevant words in image regions. Quantitatively, we demonstrate that our I2DFormer significantly outperforms previous unsupervised semantic embeddings under both zero-shot and generalized zero-shot learning settings on three public datasets. Qualitatively, we show that our method leads to highly interpretable results where document words can be grounded in the image regions.
Accept
The authors propose a method to learn a joint representation of an image and a document describing the object present in the image. Experiments show that the proposed model outperforms state-of-the-art models. Although the final ratings are not aligned across reviewers, I think the authors resolved most of the raised questions.
train
[ "-b8wMaDFY7U", "0qJiMVkldEB", "4vbpCG9XzCC", "L_3sw7C6zO", "n01wp3AkMC", "OCZnqZMG3N_", "x0H85AeucAY", "HMlV18LBBo", "Sno3_HJ_Muq", "Q91Z1OAVj2Z", "iCnlwHEQHq", "takbNy0RvHI", "ziQSA4F-WO", "NsslNJcf7De", "LPAfbpNK5WW", "mJ6cMXtzHl", "qSA8bskeCY1", "jE2uEkq4F7M", "4FGfMgzAyFV" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer uvjr\n\nWe want to thank you once again for your helpful review. We have incorporated your feedback into the manuscript including additional discussion and experiments. We believe your suggestions further improved the clarity of the manuscript and opens it to a wider set of audience. We have also discussed your questions and concerns in detail in our rebuttal. Given we have less than two days left for author-reviewer discussion, we want to invite you to discuss our response.", " I want to thank the authors for their extensive response!\nIt addresses my questions and the main points I've made well.\n\nI find the response to other reviewers also generally convincing.", " We thank the reviewers for their insightful reviews. We have incorporated the very helpful feedback of the reviewers in the updated manuscript and believe it will increase the impact of our work.\n\n \n\n**S1)**\nThe reviewers appreciated the novelty and simplicity of our model. Reviewer oTDz appreciated “model combining visual queries and textual keys in a transform model is original”. Reviewer gQFr “conceptually simple” and “doesn’t require many extra hyperparameters”. Reviewer Sfca “proven effective to capture the relevant image regions based on the collected documents”. Reviewer uvjr “seems natural - using the asymmetric attention(image-query)”.\n\n \n\n**S2)**\nThe reviewers appreciated the performance gains of I2DFormer “exceed the state of the art results - with simple embedding (Glove),” “learned document text features (I2DEmb) quite consistently, both among different datasets and different methods, outperform” “experimental results are considered favorable compared to SOTAs on three benchmark datasets” “The performance over the baselines was improved.”\n\n \n\n**S3)**\nThe reviewers appreciated the interpretability of our learned model “Attention maps provide some extra interpretability “, “The I2D module is proven effective to capture the relevant image regions based on the collected documents.”, “The interpretability study demonstrates that the model learned to align modalities together.”\n\n \n\n**S4)**\nReviewers found the experimental evaluation of our work to be “very complete with comparison to the state of the art” with “Rich ablation studies and a good set of experiments”.\n\nIn this work, we argue that textual information in the form of class documents serves as natural auxiliary information for zero-shot learning. We propose I2DFormer, a novel transformer-based model that jointly learns image and document embeddings for zero-shot image classification. In addition, our model employs our novel I2DAttention module, which learns fine-grained interaction between image regions and the documents. Our model sets a new SOTA wrt unsupervised class embeddings on three benchmark datasets and additionally offers great interpretability. The class embeddings generated by our model also improve all baselines.\n\nWe address individual questions and concerns under each reviewer's comments. Please note that the line numbers referenced in the individual comments refer to the updated rebuttal version of the manuscript and the supplementary.", " [Continued from the last table]\n\n***Results:***\n\nWe see from the table that our I2DFormer (row f) outperforms all baselines across the three datasets. As we compare rows b) and c), we observe that the introduction of the D2IAttention module leads to a drop in performance across two datasets. 
We attribute the relatively low performance of I+D TokenFormer (row a) to the same issue of information asymmetry in addition to the extra learnable parameters introduced by the full Transformer Encoder. We would also like to point out that I+D TokenFormer also involves several memory repeats of image and document tokens to concatenate the two modalities which results in a 7x increase in required GPU memory. Comparing row e) and a), we see that the introduction of our Attention module improved the performance of I+D TokenFormer but the performance is limited by Document to Image Attention in addition to extra learnable parameters introduced by additional attention blocks. We want to highlight that in our data scarce ZSL classification setting, scaling up additional attention modules is not the optimal way to align the two modalities. Our novel I2DAttention module is designed with these problem constraints in mind. While being conceptually simple, it leads to significant performance gains as shown in row f).\n\n \n\n### **W7) how were semantic embeddings learned from different sources**\n\nThe experimental setup of the compared semantic embeddings has been described in details in section 4.1, “Compared Semantic Embeddings” at line 230. Namely, the embeddings are extracted by the recommended procedure from the works cited in Table 1. The details are omitted from the caption for brevity as it is the standard embedding procedure for these models and already described in Section 4.1.\n\n \n\n### **Q2) how datasets are constructed**\n\nThe three compared datasets, AWA2, CUB, and FLO are standard zero-shot learning datasets extensively studied in the literature[2, 58, 60, 68]. We have augmented these datasets with one text document for each class describing the respective class. The process of this document collection is described in Section 4, “Collecting documents” at line 203.\n\n \n\n### **References:**\n\n[A] “Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm” Yangguang Li, Feng Liang, Lichen Zhao, Yufeng Cui, Wanli Ouyang, Jing Shao, Fengwei Yu, Junjie Yan, ICLR2022\n\n[B] “Integrating Language Guidance into Vision-based Deep Metric Learning” Karsten Roth, Oriol Vinyals, Zeynep Akata\n\n[C] “What Makes Training Multi-Modal Classification Networks Hard?” Weiyao Wang, Du Tran, Matt Feiszli, CVPR 2020.\n\n[D] “Generalized Zero-Shot Learning with Deep Calibration Network” Shichen Liu, Mingsheng Long, Jianmin Wang, Michael I. Jordan, NeurIPS 2018.", " [Continued for W5, 6]\n\nWe want to thank the reviewer for raising this important point. We have updated the supplementary to discuss this problem constraint in detail in Supplementary Section 1.4 to empirically confirm our design choices. The attention direction of our model is motivated by the information asymmetry in our problem setting. A document describes the most discriminative information in the image along with non-visual information about the class. Our I2DAttention learns to focus on the visual information to align the two modalities while learning to limit the impact of non-visual information. An image, however, only contains limited information about the document and does not contain features about the non-visual content of the document. As a consequence, learning Document to Image attention can lead to picking up spurious correlations which can limit the model’s performance. 
We would like to point out that this limitation originates from the ZSL nature of our problem, as we only have one textual document to represent each class in contrast to zero-shot transfer works like CLIP[37] where a caption locally describes the image for each training iteration.\n\nWe have included additional ablation about Document to Image attention, Symmetric attention and deeper attention configuration in the table below. We show that our proposed model achieves the best performance.\n\n \n\n***Experimental Setup:***\n\nBaseline a) I+D TokenFormer concatenates the image and article tokens and inputs this sequence into a learnable transformer Encoder block to have full attention between each modality and cross-modality. The transformer encoder block outputs a CLS token which is used for classification.\n\nBaseline b) is our I2DGlobal module introduced in Section 3.1.\n\nBaseline (c-e) introduces the D2IAttention module which is the symmetric counterpart to our I2DAttention module for Document to Image Attention.\n\nBaseline c) combines I2DGlobal with D2IAttention similar to our proposed model to learn asymmetric attention from Documents to Image.\n\nBaseline d) combines I2DFormer(the proposed model) with D2IAttention to learn symmetric attention from Image to Document and from Document to Image.\n\nBaseline e) combines the I2DGlobal with I2DAttention and D2IAttention. However, instead of pooling the token dimension to compute class scores (Equation 4 of the manuscript), we concatenated the attention recomputed image and document token embeddings and pass them through a Transformer encoder block similar to a) for an additional full attention layer between the two modalities to compute the class score. This baseline is included to study if scaling up cross modal attention will improve performance.\n\nf) is our proposed model I2DFormer from the manuscript\n\n| | **Model** | **ZSL** | | | **GZSL** | | | | | | | | |\n|----|----------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|\n| | | **AWA2** | **CUB** | **FLO** | | **AWA2** | | | **CUB** | | | **FLO** | |\n| | | | | | **u** | **s** | **H** | **u** | **s** | **H** | **u** | **s** | **H** |\n| a) | I+D TokenFormer | 66.8 | 34.1 | 29.7 | 59.1 | 72.2 | 65.0 | 26.5 | 47.4 | 33.9 | 26.7 | 87.3 | 40.9 |\n| b) | I2D Global | 69.4 | 37.2 | 37.2 | 59.1 | **79.7** | 67.8 | 28.5 | 59.1 | 38.4 | 28.4 | 88.2 | 43.0 |\n| c) | I2D Global + D2I | 67.1 | 39.5 | 32.0 | 53.9 | 76.5 | 63.2 | 32.0 | **61.4** | 42.1 | 28.3 | 87.0 | 42.7 |\n| d) | I2DFormer + D2I | 68.7 | 42.5 | 37.6 | 58.1 | 76.3 | 66.0 | 32.3 | 52.8 | 40.1 | 34.2 | 86.0 | 48.9 |\n| e) | I2DFormer + D2I + a) | 67.9 | 42.1 | 36.6 | 55.4 | 78.0 | 64.8 | 31.4 | 55.3 | 40.0 | 28.9 | 90.9 | 43.9 |\n| f) | **I2DFormer(Ours)** | **76.4** | **45.4** | **40.0** | **66.8** | 76.8 | **71.5** | **35.3** | 57.6 | **43.8** | **35.8** | **91.9** | **51.5** |\n \n", " ### **W3.2) using subword tokenizer, using static embeddings**\nWe also want to clarify a potential misunderstanding. Our embeddings are not static, only the initial embedding layer of the document transformer is static. These embeddings are refined by the learnable MLP and then further improved by the learnable Document Transformer. 
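For concreteness, the document pathway just described (frozen word vectors, a learnable MLP refining them, and a learnable Document Transformer on top) can be sketched in a few lines of PyTorch. All dimensions, layer counts and the toy vocabulary below are placeholders for illustration, not our exact configuration:

```python
import torch
import torch.nn as nn

class DocumentBranch(nn.Module):
    """Frozen word embeddings -> learnable MLP -> learnable Transformer.

    A sketch of the document pathway described above; vocabulary size,
    embedding width and depth are placeholder values.
    """
    def __init__(self, glove_weights: torch.Tensor, d_model: int = 256, n_layers: int = 2):
        super().__init__()
        # Frozen embedding layer initialized from pre-trained GloVe vectors.
        self.embed = nn.Embedding.from_pretrained(glove_weights, freeze=True)
        # Learnable MLP that refines the static word vectors.
        self.mlp = nn.Sequential(
            nn.Linear(glove_weights.size(1), d_model),
            nn.ReLU(),
            nn.Linear(d_model, d_model),
        )
        # Learnable Transformer encoder trained from scratch.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, token_ids: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # token_ids: (num_docs, seq_len); pad_mask: True where padded.
        x = self.mlp(self.embed(token_ids))          # (num_docs, seq_len, d_model)
        return self.encoder(x, src_key_padding_mask=pad_mask)

# Toy usage with random "GloVe" vectors (real vectors would be loaded from disk).
glove = torch.randn(10_000, 300)
branch = DocumentBranch(glove)
ids = torch.randint(0, 10_000, (4, 50))              # 4 documents, 50 tokens each
mask = torch.zeros(4, 50, dtype=torch.bool)          # no padding in this toy batch
tokens = branch(ids, mask)                           # (4, 50, 256) token features
```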
The subword tokenizers referenced by the reviewer are already part of baseline embedding models in table 3b), and it tackles a different problem of embedding rarer language terms as a combination of their simpler more frequently occurring sub-words. This compositionality from subwords is achieved by pre-training on Billions of Language data points to cover enough vocabulary. This can not be used to learn a semantic space over e.g. animal classes from just 40 documents of average length of 400 words each. The new information, especially nouns and adjectives detailed in the unseen documents e.g. hoofed legs, or the classname horse can not be sufficiently represented in the embedding space by sub tokenizing since nouns in language are not compositional with sub token configurations in this data scarce setting. All language embedding models use Billion scale training data including GloVe/ Word2Vec(Common Crawl of 840Billion tokens), Bert (Corpos of 3.3 Billion Tokens) etc. In addition aligning vision and language modalities with fully learnable embeddings is even more data hungry where works like CLIP[37] use 400 million images with captions and even works which use pretraining in the language domain still require 88 million image captions pairs to be competitive[A]. These modern transformer based pretrained embeddings are still not superior to classic word embeddings as shown in our work (also appreciated by reviewer oTDz). A similar conclusion is reached in [B], where authors showed that simple embedding models like FastText achieve comparable performance with CLIP and other Transformer based embeddings for language guided metric learning. We see this as an important result to motivate the community to further develop better embedding methods for zero-shot learning.\n\n \n\n### **W4) Is there any reason not to try to unfreeze the pretrained visual encoder?**\nWe fix the Image Transformer as pre-trained on ImageNet1K (which respects the GBU split i.e. the pretraining does not include any data of the unseen classes). We learn the Document Transformer from scratch. The choice of this training strategy is motivated by several published works including [66,C]. Namely in [66], the authors have studied the impact of freezing, finetuning and end to end training of image-text models with noisy text for the zero-shot transfer models like CLIP. The conclusion from these works propose that when dealing with noisy text, it is better to pretrain the image encoder and learn the text encoder from scratch to align the text modality to a common semantic space. Since our training setting has even more noise in the text domain in the form of non-visual facts about the class described in the document, we found this training strategy to perform the best for us. Moreover, it is important to know that we are working in a data-scarce setting where we only have one document for each class, and a few thousand training images in total.\n\n### **W5, 6) scaling the model deeper is lacking but may be significant, the choice of asymmetric attention**\nWe want to clarify that since both of our image and document modalities are processed by respective transformers, these already contain several self attention layers. However, we only use one cross-modal attention layer in the final model. We touch on scaling up cross-modal attention in section 4.3 and Table 3a) in comparison with ViLBERT (line 293-296). 
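For concreteness, the single cross-modal layer amounts to one scaled dot-product attention step with image patches as queries and document tokens as keys and values. The sketch below is only illustrative; the feature size and the mean-pooling choice are assumptions, not the exact I2DAttention implementation:

```python
import torch
import torch.nn.functional as F

def image_to_document_attention(img_patches, doc_tokens):
    """One cross-modal attention step: image queries, document keys/values.

    img_patches: (num_patches, d)  features of one image
    doc_tokens:  (num_tokens, d)   features of one document
    Returns a single document-conditioned image feature of size (d,).
    """
    d = img_patches.size(-1)
    # Attention matrix of shape (num_patches, num_tokens).
    attn = F.softmax(img_patches @ doc_tokens.t() / d ** 0.5, dim=-1)
    # Re-compute each patch embedding as a weighted sum of document token values.
    recomputed = attn @ doc_tokens                     # (num_patches, d)
    # Pool over the patch dimension to get an image-level feature.
    return recomputed.mean(dim=0)

# Toy usage: 196 patches (a 14x14 grid) attending over a 40-token document.
feat = image_to_document_attention(torch.randn(196, 256), torch.randn(40, 256))
print(feat.shape)  # torch.Size([256])
```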
Specifically, in our data constraint setup, deeper cross-modal attention configurations lead to suboptimal performance and a parameter efficient model like ours leads to better performance (as also appreciated by Reviewer gQFr).\n\n", " ### **W3.1, Q3) the texts(words) are processed by converting them into GloVe embeddings, then through the MLP layer, the decision to do so remains unexplained, usefulness of this method counters the conclusion the paper reaches.**\nSince we only have limited text in the form of documents of seen classes while training, we can not learn input features for all vocabulary from scratch as our model will observe new concepts at test time. It is critical to initialize the embeddings with a model which can represent all concepts of the language the model will encounter during training for seen classes and during testing for both seen and unseen classes. It is common practice in ZSL literature to improve upon this initial representation of a language model (for class names in their case) by refining it by various learnable models e.g. by GCN[29], Generative model[58], MLP[26] or a Transformer in our case. The intuition here is that the learned ZSL model improves the semantic space of concepts in relation to the same input language space which can represent both seen and unseen classes. To address the reviewer's request and show the usefulness of the MLP, we report an additional ablation in the table below. We compare using fixed GloVe vectors as input to the document transformer vs learning an MLP on top. We see that our strategy of learning an MLP on top of GloVe vectors greatly benefits the learnable document transformer.\n\n| | **Doc Embedding** | **ZSL** | | | **GZSL** | | | | | | | | |\n|----|-------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|\n| | | **AWA2** | **CUB** | **FLO** | | **AWA2** | | | **CUB** | | | **FLO** | |\n| | | | | | **u** | **s** | **H** | **u** | **s** | **H** | **u** | **s** | **H** |\n| a) | GloVe | 66.8 | 39.7 | 32.0 | 60.3 | 74.9 | 69.9 | 31.7 | 54.8 | 40.2 | 30.0 | 86.8 | 44.5 |\n| b) | GloVe + MLP | **76.4** | **45.4** | **40.0** | **66.8** | **76.8** | **71.5** | **35.3** | **57.6** | **43.8** | **35.8** | **91.9** | **51.5** | \n\nOur results do not contradict our claim as the utility of the word embedding model is to represent all concepts in the same semantic space and then learn an improved representation on top as it is common in a plethora of ZSL works including [29, 58, 26]. The point made in the paper for Table 3b) refers to context conditioned embeddings generated by the pretrained transformer based language models used as input to our learnable document transformer. To reiterate this, for example let's take the following two sentences “a horse has hoofed legs” and “a giraffe has long legs”. LongFormer will give different token embeddings for the token “legs”, while GloVe will embed the two instances of “legs” to the same initial embedding. For the input embedding from the LongFormer, the ZSL model will be faced with a distribution shift for the concept of leg as it has not observed enough variation of the token “legs” while training to generalize to this context changing input embedding.\n", " We want to thank the reviewer for their review. The reviewer appreciated the interpretability of the model, the performance gains, and the novelty of our I2DAttention Module. 
The reviewer raised some concerns, which we address individually below. Please note that references to the manuscript and the supplementary e.g. line numbers/ sections are for the updated version for the rebuttal.\n\n### **W1, W8) task description is short and appears relatively late, structure makes it hard to understand, what real-life problem the paper is trying to solve**\n\nOur paper follows the same format as previously published zero-shot learning papers at NeurIPS including [60, D]. These works use Zero-Shot Learning, synonymous with Zero-shot image classification. In this line of work, the core problem we and other works are trying to solve is classifying images of unseen classes that were not observed during training using side information. Our manuscript title mentions that the work is addressing zero-shot image classification. Although remarkable progress has been made towards zero-shot image classification, most prior works rely on human annotated attributes as the side information. Towards unsupervised semantic embeddings, word embeddings can be easily obtained from pre-trained language models. Yet, they often do not reflect fine-grained visual similarities, thus limiting the performance. The goal of this work is to learn visually aligned unsupervised semantic embeddings from online textual documents for zero-shot image classification. Towards this our proposed I2DFormer utilizes free form textual description to learn zero-shot classification and achieves SOTA on three benchmark datasets. Moreover, the learned class embeddings of our method can improve existing ZSL methods as shown in Table 2.\n\nWe thank the reviewer for the suggestions regarding increasing clarity for a wider set of audience. We have modified the introduction section to highlight further the task description, problem setting, and contributions in the updated draft. We want to highlight that all the three other reviewers have rated our manuscript very high for presentation with a score of 3. We hope these modifications address the reviewer's concern.\n\n### **W2, Q1) how the training proceeds?**\n\nWe have provided a detailed method overview in manuscript section 3 with additional training details in supplementary Section 3. In addition to these, we provide details of our training and inference pipelines below for the reviewer. We hope this helps the reviewer get an overview.\n\n \n\n***For Training:***\n\n**Input to the model:** an image and the set of documents for the seen classes.\n\nStep 1: Input the image and documents to the respective transformer to get the feature representation for the global CLS tokens, image patches and document tokens.\n\nStep 2.1: The dot product of the image CLS token and each Document CLS token is used in Equation 1 to define class scores “s(x, d)” using the document information per class.\n\nStep 2.2: The image patch and text token features are processed by the learnable attention module defined in section 3.2. Equation 3 computes an attention matrix between image patch queries and document token keys for a given image and each seen document. This attention is used to recompute the image patch embeddings f_{pa} from document token values which are pooled to give an image level feature embedding \\hat{f}_{pa} used to generate the “s_local(x, d)” in equation 4 for each seen class.\n\nStep 3: The global class scores “s(x, d)” are optimized using the L_{CLS} which is a cross entropy over the set of documents for seen classes in Equation 2. 
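For concreteness, the global scoring path of Steps 1 to 3 can be sketched in a few lines of PyTorch; shapes are illustrative, and the local head is trained with the same cross-entropy form, as described next:

```python
import torch
import torch.nn.functional as F

def global_scores_and_loss(img_cls, doc_cls, labels):
    """Steps 1-3 of the training walkthrough, in tensor form.

    img_cls: (batch, d)      CLS feature per image
    doc_cls: (num_seen, d)   CLS feature per seen-class document
    labels:  (batch,)        ground-truth seen-class index per image
    """
    # Step 2.1: class score s(x, d) = dot product of the CLS features.
    scores = img_cls @ doc_cls.t()                 # (batch, num_seen)
    # Step 3: cross-entropy over the seen-class documents (L_CLS).
    loss = F.cross_entropy(scores, labels)
    return scores, loss

# Toy usage: 8 images and 40 seen classes in a 256-d joint space.
s, l_cls = global_scores_and_loss(torch.randn(8, 256),
                                  torch.randn(40, 256),
                                  torch.randint(0, 40, (8,)))
```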
Similarly the local class scores “s_local(x, d)” is optimized in L_{local} as a cross-entropy over the set of documents for seen classes in Equation 5.\n\nStep 4: The gradients from both the loss functions are used to update the model using Adam Optimizer\n\nThis is repeated until training converges.\n\n \n\n***At inference:***\n\n**Input to the model:** an image and the set of documents for both seen and unseen classes.\n\nStep 1: Input the image and documents to the respective transformer block to get the feature representation for the global CLS tokens of image and documents.\n\nStep 2: The dot product of the image CLS token and each Document CLS token is used in Equation 1 to define class scores “s(x, d)” using the document information per class.\n\nStep 3: An argmax over the scores for the given image and the documents for all classes is used to get the class output in Equation 6\n\nMoreover, we want to add that we will release the code and pre-trained models to promote reproducibility once the paper is accepted.\n\n \n### **W2) Where does the class information comes from?**\nWe formulate the problem setting in the beginning of Section 3 (line 99). The class labels of images are provided by the benchmark datasets. Each class is associated with exactly one textual document.", " ### **Q3) What if all tokens (both visual ones and semantic ones) are jointly fed to the transformer?**\n\nWe have included this baseline as I+D TokenFormer in the table below. We see from the table that I2DFormer outperforms this baseline across the three datasets. We attribute this to the information asymmetry between the image and document domain in our setting (discussed in more detail in supplementary section 1.4) and the increased learnable parameters of a full transformer encoder block for I+D TokenFormer. A document describes the most discriminative information in the image along with non-visual information about the class. Our I2DAttention learns to focus on the visual information to align the two modalities while learning to limit the impact of non-visual information. An image, however, only contains limited information about the document and does not contain features about the non-visual content of the document. As a consequence, learning additional Document to Image attention in this model can lead to picking up spurious correlations which can limit the model’s performance. We would also like to point out that I+D TokenFormer also involves several memory repeats of image and document tokens to concatenate the two modalities which results in a 7x increase in required GPU memory.\n\n| | **Model** | **ZSL** | | | **GZSL** | | | | | | | | |\n|----|----------------------|----------|----------|----------|----------|----------|----------|----------|---------|----------|----------|----------|----------|\n| | | **AWA2** | **CUB** | **FLO** | | **AWA2** | | | **CUB** | | | **FLO** | |\n| | | | | | **u** | **s** | **H** | **u** | **s** | **H** | **u** | **s** | **H** |\n| a) | I+D TokenFormer | 66.8 | 34.1 | 29.7 | 59.1 | 72.2 | 65.0 | 26.5 | 47.4 | 33.9 | 26.7 | 87.3 | 40.9 |\n| b) | **I2DFormer(Ours)** | **76.4** | **45.4** | **40.0** | **66.8** | **76.8** | **71.5** | **35.3** | **57.6** | **43.8** | **35.8** | **91.9** | **51.5** |\n\n \n", " \n\nWe thank the reviewer for the encouraging review and the helpful suggestions. 
The reviewer appreciated our transformer based model for ZSL learning, the interpretability of our model, the experimental evaluation and the improvement in baselines with our learned document embeddings. The reviewer also rated the paper high for soundness, presentation and contribution.\n\n\nWe now address the individual comments. Please note that references to the manuscript and the supplementary e.g. line numbers/ sections are for the updated version for the rebuttal.\n\n\n### **W1, Q2) The discussions regarding the model complexity needs further clarification, model complexity.**\n\nSince different ZSL and class embedding methods employ different training strategies including multi-step training e.g. in the case of VGSE[61], they are not directly comparable in FLOPs.\n\n \n\nOur method requires roughly the same compute and training time compared to our closest unsupervised class embedding competitor VGSE[61]. For comparison, our model is trained on a Single A100 GPU and requires only 24 hours to converge to the reported number while utilizing 20GB of VRAM(as also appreciated by Reviewer oTDz). In comparison VGSE takes a similar training time of 25 hours while utilizing 18GB of VRAM.\n\n \n\nCompared to baseline ZSL models like APN and f-VAEGAN-D2, our model requires relatively more training time and GPU memory. However, our model is learning both a ZSL model and a generic unsupervised class embedding that can be utilized by other methods. Once trained, our learned I2DEmb can benefit any of these baseline models as shown in Table 2.\n \n\nFor model inference, our model’s computational requirements are very similar to the most basic baselines like SJE as we only require a dot product between each image CLS feature and the document CLS features which can be precomputed once for each evaluation run.\n\n \n\n### **W2) Novelty of cross modal attention, other novelty**\n\nWe want to highlight that our cross-modal attention module is fundamentally different from the existing published works compared in the manuscript Table 3a). Instead of learning deep or interleaved cross-modal attention layers, our I2DAttention module exploits the information asymmetry between the image and text domain to develop a parameter-efficient attention module for zero-shot image classification with text documents (as appreciated by Reviewer oTDz and gQFr). We have included additional discussion regarding this in supplementary section 1.4. In addition to our cross modal ablations in Table 3a) where we showed that our attention module outperforms existing published work, we address your suggestion of inputting all tokens to a transformer in the later comment. Our analysis shows that existing versions of attention will achieve suboptimal results due to not addressing the zero-shot nature of the problem. In addition to the novelty of our attention module, our model distills the knowledge of fine-grained attention to the global head to learn a computationally efficient inference model for test time. 
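For concreteness, "computationally efficient inference" here means the document CLS features for all seen and unseen classes can be embedded once, after which classifying an image is a single matrix product and an argmax. The names and shapes below are illustrative:

```python
import torch

@torch.no_grad()
def classify(img_cls, cached_doc_cls):
    """Zero-shot prediction from cached document features.

    img_cls:        (batch, d) image CLS features
    cached_doc_cls: (num_classes, d) precomputed once per evaluation run
    Returns the predicted class index per image (an Equation 6-style argmax).
    """
    scores = img_cls @ cached_doc_cls.t()      # dot-product compatibility
    return scores.argmax(dim=-1)

# Toy usage: 50 classes (seen + unseen), cached once up front.
cached = torch.randn(50, 256)                  # one pass over all documents
preds = classify(torch.randn(8, 256), cached)  # cheap per-image inference
```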
Our other contributions in this work include creating a document-based dataset for existing ZSL benchmarks, achieving SOTA performance on three public benchmarks, and analysis of augmenting existing word embedding and document embedding models with our transformer-based model.\n\n \n\n### **W3) fairness of utilizing the online document compared to other SOTA ZSL frameworks**\n\nWe follow the evaluation protocol of [61] where different unsupervised class embeddings are compared by replacing the original class embeddings of each ZSL method. We believe that the comparison is fair as our experiments use the same input image features from a pretrained ImageNet1K model and the same documents. For a fair comparison with different unsupervised embedding methods, we have ablated over them under the same model and training protocol in Table 1. Additionally, we have ablated over several ZSL methods under different unsupervised class embeddings in Table 2 using the training protocol recommended by [61]. We conclude in the paper that our learned embeddings are superior to other unsupervised class embeddings in Table 1. Additionally we conclude that our model outperforms other baseline ZSL models across different class embeddings in Table 2. Finally our learned document embedding also leads to significant improvements in the performance of baseline ZSL methods as also shown in Table 2.\n\n \n\n### **Q1) implementation details regarding the compared methods should be given, since different noisy textual source is utilized under the current experimental settings**\n\nWe want to clarify that we use the same set of documents in both Table 1 and Table 2 as we compare different embedding methods and ZSL models. These embeddings are extracted using the respective author’s implementation. We discuss how each embedding is extracted in Section 4.1 at Line 230 “Compared semantic embeddings”.", " ### **Q2) Are any of either the Image or Document Transformer fine-tuned?**\n\nWe fix the Image Transformer as pre-trained on ImageNet1K (which respects the GBU split i.e. the pretraining does not include any data of the unseen classes). We learn the Document Transformer from scratch with a fixed word embedding layer as discussed in the earlier point. The choice of this training strategy is motivated by several published works, including [66,B]. Namely, in [66], the authors have studied the impact of freezing, fine tuning, and end to end training of image-text models with noisy text for the zero-shot transfer models like CLIP. The conclusion from these works propose that when dealing with noisy text, it is better to pretrain the image encoder and learn the text encoder from scratch to align the text modality to a common semantic space. Since our training setting has even more noise in the text domain in the form of non-visual facts about the class described in the document, we found this training strategy to perform the best for us.\n\n \n\n### **Q3) What are the differences between text lengths of different documents?**\n\nSince our text is extracted from web sources to represent descriptions of the class, the text length is as a natural consequence defined by the source of the information. We indeed have to introduce padding and masks to cater for varying sizes of the document for a run time efficient implementation(One could implement it with a for loop for memory efficiency at the cost of run time). 
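For concreteness, a minimal sketch of the padding and masking mentioned above: documents of different lengths are padded to a common length, and a boolean mask zeroes out the padded positions inside the attention softmax. The mask convention and toy lengths are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def pad_documents(docs, pad_id=0):
    """Pad a list of token-id tensors to a common length; True marks padding."""
    max_len = max(d.size(0) for d in docs)
    ids = torch.full((len(docs), max_len), pad_id, dtype=torch.long)
    mask = torch.ones(len(docs), max_len, dtype=torch.bool)
    for i, d in enumerate(docs):
        ids[i, : d.size(0)] = d
        mask[i, : d.size(0)] = False
    return ids, mask

def masked_attention(queries, keys, pad_mask):
    """Attention that ignores padded document tokens.

    queries: (num_patches, d); keys: (seq_len, d); pad_mask: (seq_len,) bool.
    """
    logits = queries @ keys.t() / keys.size(-1) ** 0.5
    logits = logits.masked_fill(pad_mask, float('-inf'))  # never attend to padding
    return F.softmax(logits, dim=-1)

# Toy usage: two documents of different lengths, batched together.
ids, mask = pad_documents([torch.arange(1, 6), torch.arange(1, 3)])
attn = masked_attention(torch.randn(4, 8), torch.randn(ids.size(1), 8), mask[1])
```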
However, we found our visual section filtering strategy to reduce the relative differences between the lengths of the different documents. For example, for AWA2, the filtered document is at max 487 tokens long. The mean padding size for this dataset is 34 tokens with a standard deviation of 47. The difference in lengths becomes much more significant if one does not filter the document for visual sections. In such a case ~60% of the document token tensor will be empty/ padded. This further highlights the importance of relatively cheap section filtering for using unstructured text for ZSL.\n\n \n\n### **Q4) How does the proposed model compare with other ZSL models with respect to computational requirements?**\n\nOur method requires roughly the same computation and training time compared to our closest unsupervised class embedding competitor VGSE[61]. For comparison, our model is trained on a Single A100 GPU and requires only 24 hours to converge to the reported number while utilizing 20GB of VRAM(as also appreciated by Reviewer oTDz). In comparison VGSE takes a similar training time of 25 hours while utilizing 18GB of VRAM.\n\n \n\nCompared to baseline ZSL models like APN and f-VAEGAN-D2, our model requires relatively more training time and GPU memory. However, our model is learning both a ZSL model and a generic unsupervised class embedding that can be utilized by other methods. Once trained, our learned I2DEmb can benefit any of these baseline models as shown in Table 2.\n\n \n\nFor model inference, our model’s computational requirements are very similar to the most basic baselines like SJE as we only require a dot product between each image CLS feature and the document CLS features which can be precomputed once for each evaluation run.\n\n \n\n### **Q5) How are soft attention maps produced?**\n\nThe attention matrix learned by the I2DAttention module has the dimension of [image patches x document tokens] (described in manuscript section 3.2). At a patch size of 16x16 for the input image size of 224x224, each document token produces an attention map of 14x14. This attention map is upsampled and overlaid on the image, similar to how attention is visualized in the original ViT paper.\n\n \n\n### **Q6) differences in classification between s and s_{local} predictions?**\n\nOnce the training has been completed, we observe that the two heads have distilled their knowledge to each other with the global head performing slightly better as shown in Supplementary Table 1. We ablated the ensemble of the two heads and found the accuracy of the model to be between the two heads. The errors of the two models tend to overlap with the local head making errors for some cases the global head classified correctly. While being more accurate, the global head is also computationally efficient for inference as it only requires a dot product between the respective CLS features of images and documents.\n\n \n\n### **Q7) Eq. 4 shouldn’t it be \\hat(f)^(pa)?**\n\nThank you for pointing this out. We have corrected it in the manuscript.\n\n \n\n### **Minor Issues:**\n\nThank you for the suggestions, we have corrected these in the updated manuscript.", " ### **W5) The “direction” of the attention seems somewhat arbitrary**\n\nWe want to thank the reviewer for raising this important point. We have updated the supplementary to discuss this problem constraint in detail in Supplementary Section 1.4 to empirically confirm our design choices. 
The attention direction of our model is motivated by the information asymmetry in our problem setting. A document describes the most discriminative information in the image along with non-visual information about the class. Our I2DAttention learns to focus on the visual information to align the two modalities while learning to limit the impact of non-visual information. An image, however, only contains limited information about the document and does not contain features about the non-visual content of the document. As a consequence, learning Document to Image attention can lead to picking up spurious correlations which can limit the model’s performance.\n\n \n\n***Experimental Setup:***\n\nBaseline a) is our I2DGlobal module introduced in Section 3.1.\n\nBaseline (b,c) introduces the Document to Image(D2I) Attention module which is the counterpart to our I2DAttention module.\n\nBaseline b) combines I2DGlobal with D2I Attention similar to our proposed model to learn asymmetric attention from Documents to Image.\n\nBaseline c) combines I2DFormer(the proposed model) with D2I Attention to learn symmetric attention from Image to Document and from Document to Image.\n\nd) is our proposed model in the manuscript\n\n| | **Model** | **ZSL** | | | **GZSL** | | | | | | | | |\n|----|--------------------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|----------|\n| | | **AWA2** | **CUB** | **FLO** | | **AWA2** | | | **CUB** | | | **FLO** | |\n| | | | | | **u** | **s** | **H** | **u** | **s** | **H** | **u** | **s** | **H** |\n| a) | I2D Global | 69.4 | 37.2 | 37.2 | 59.1 | **79.7** | 67.8 | 28.5 | 59.1 | 38.4 | 28.4 | 88.2 | 43.0 |\n| b) | I2D Global + D2I | 67.1 | 39.5 | 32.0 | 53.9 | 76.5 | 63.2 | 32.0 | **61.4** | 42.1 | 28.3 | 87.0 | 42.7 |\n| c) | I2DFormer + D2I | 68.7 | 42.5 | 37.6 | 58.1 | 76.3 | 66.0 | 32.3 | 52.8 | 40.1 | 34.2 | 86.0 | 48.9 |\n| d) | **I2DFormer(Ours)**| **76.4** | **45.4** | **40.0** | **66.8** | 76.8 | **71.5** | **35.3** | 57.6 | **43.8** | **35.8** | **91.9** | **51.5** |\n\n***Results:***\n\nWe see from the table that I2DFormer outperforms all baselines across the three datasets. As we compare rows a) and b), the introduction of the D2IAttention module leads to a drop in performance across two datasets due to the information asymmetry in the problem setting as discussed earlier. Row c) improves upon row b) as the model now additionally utilizes our I2DAttention module but its performance is limited by the D2IAttention. Our final model I2DFormer, which only utilizes the I2DAttention module, outperforms all these baselines in row d). Our model is designed with the problem constraints of our ZSL setting and the resulting information asymmetry in mind. While being conceptually simple, it leads to significant performance gains as shown.\n\n\n### **Q1) How are different input features (e.g. GloVe) used in the Document Transformer**\n\nWe replace the learnable token embedding layer with fixed word embeddings extracted from the models mentioned in Table 3b) followed by a shallow MLP to improve upon this initial representation before input to the learnable Document Transformer. Since we only have limited text while training in the form of documents of seen classes, learning token embeddings from scratch leads to suboptimal performance as unseen documents introduce additional vocabulary. 
In this data constraint environment, we found the mix of fixed word embeddings plus a shallow MLP to improve the initial representation as a good compromise.\n\n", " ### **W4) justifying collect their own text documents, performing some filtering of document sections**\n\nWe want to thank the reviewer for citing the impactful related works. These works have also inspired our work and we have already cited them in our manuscript. The documents released by [14] are missing section information which prevented us from studying the impact of performing a relatively cheap section filtering step to reduce some noise in the collected document. Section filtering reduces the relative memory cost for attention as the document length is reduced from ~1500 words to ~400 words. Moreover, the mentioned works have collected their set of documents in 2016 and are therefore 6 years old. Since sources like Wikipedia are ever evolving with richer information, we felt that the effort spent in recollecting the documents can potentially benefit future works as we see a boom of good vision-text models thanks to Transformer architectures. For comparison, the authors mention in [14] that in 2016 by querying Wikipedia for classnames, they were able to get matches for 178/200 classes while in 2022, we were able to retrieve all 200/ 200 classes without manual intervention. We want to highlight that for CUB, we performed analysis over impact of document sources in supplementary 1.2 and show that our model is SOTA compared to baseline unsupervised embeddings on both Wikipedia and AllAboutBird documents.\n\nWe also want to mention that we use the same set of newly collected documents for all baselines reported in the manuscript for a fair comparison. To address the reviewer’s request, we include additional ablation below on the impact of section filtering on Wikipedia articles for CUB. We chose Wikipedia documents for CUB for this study as they tend to contain more noise than the relatively cleaner AllAboutBirds reported in the main manuscript. The document collection protocol is the same as [14] i.e. we only query Wikipedia from the Python api and perform section filtering where required and train I2DFormer.\n\n| | **ZSL** | | **GZSL** | |\n|------------------------|----------|----------|----------|----------|\n| **Input Document** | | **u** | **s** | **H** |\n| Only Abstract | 40.3 | 31.7 | 52.6 | 39.5 |\n| Full Article | 42.5 | 33.1 | 56.2 | 41.7 |\n| Visual Sections (Ours) | **43.1** | **34.1** | **57.1** | **42.1** |\n \n\nWe see from the table above that our strategy of filtering for Visual Sections achieves the best performance for both ZSL and GZSL. Row1 achieves decent results by performing a very simple filtering of only extracting the abstract of a document. However, as abstracts only contain partial information about the visual attributes of a class, it limits the model’s performance. Row 2, which uses the full article, improves upon this as the model now has access to additional class information. At the same time, this also increases the noise in the document due to its long length. However, our model is still able to learn a very competitive class embedding showing the robustness of our attention module. While competitive, using the full document has a disadvantage of requiring significantly more GPU compute to process the full document in the Document Transformer and subsequently the cross modal I2DAttention module. 
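As a toy illustration of the section-filtering step, one can keep only the sections of a scraped article whose titles suggest visual content; the keyword list below is hypothetical, not the exact criterion used in the paper:

```python
# Hypothetical keyword filter for "visual" article sections.
VISUAL_KEYWORDS = {"description", "appearance", "morphology", "characteristics"}

def filter_visual_sections(article):
    """article: list of (section_title, section_text) pairs."""
    return " ".join(
        text for title, text in article
        if any(k in title.lower() for k in VISUAL_KEYWORDS)
    )

doc = filter_visual_sections([
    ("Description", "Medium-sized bird with a red crown ..."),
    ("Taxonomy", "First described by Linnaeus in 1758 ..."),  # dropped as non-visual
])
```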
Finally, in row 3, we show that filtering for visual sections achieves the best performance as this simple step reduces the noise in the document in addition to reducing the required GPU compute.\n", " We want to thank the reviewer for the detailed and very helpful review. The reviewer appreciated the simplicity of our attention module and its parameter efficiency. The reviewer also appreciated the consistent SOTA performance of our model, rich ablation studies, and the extra interpretability offered by the attention module. The reviewer also rated the paper high for soundness, presentation, and contribution.\n\nWe now address the mentioned comments. Please note that references to the manuscript and the supplementary e.g. line numbers/ sections are for the updated version for the rebuttal.\n\n\n### **W1) Possibly suboptimal settings for ZSL models**\n\nWe follow the evaluation protocol of VGSE [61] where different unsupervised class embeddings are compared by replacing the original class embeddings of each ZSL method. In APN’s journal extension[A], the authors also compare the performance of unsupervised class embedding methods under the same setting. In the table below, we compare the performance of GAZSL with TF-IDF feature vs I2DEmb and show that our observations from the manuscript are consistent. Namely, GAZSL performs better with our learned document embeddings I2DEmb, and our I2DFormer still achieves the SOTA.\n\n| | **Model** | **ZSL** | | | **GZSL** | | | | | | | | |\n|----|------------------------|----------|----------|----------|----------|----------|----------|----------|---------|----------|----------|----------|----------|\n| | | **AWA2** | **CUB** | **FLO** | | **AWA2** | | | **CUB** | | | **FLO** | |\n| | | | | | **u** | **s** | **H** | **u** | **s** | **H** | **u** | **s** | **H** |\n| a) | GAZSL w. TF-IDF | 48.0 | 39.2 | 33.1 | 28.0 | **95.2** | 43.3 | 9.58 | 54.2 | 16.3 | 27.1 | 91.5 | 41.8 |\n| b) | GAZSL w. I2DEmb (Ours) | **83.1** | 42.9 | 34.2 | 56.8 | 94.7 | 71.0 | 15.9 | 50.4 | 24.1 | 28.8 | 90.1 | 43.7 |\n| c) | **I2DFormer (Ours)** | 76.4 | **45.4** | **40.0** | **66.8** | 76.8 | **71.5** | **35.3** | **57.6** | **43.8** | **35.8** | **91.9** | **51.5** |\n\n### **W2) claims of outperforming SOTA should be formulated more precisely - e.g. with respect to a specific type of data/source of information**\n\nWe agree with the reviewer that our claims are with respect to unsupervised class embeddings and not human labeled attributes. We thank the reviewer for requesting additional clarity. We want to clarify that the focus of our work is to bridge the gap between ZSL performance using expensive human-annotated attributes vs cheap unsupervised class embeddings as mentioned in Intro line 30-33. We also mention this in our manuscript abstract(line 16-19), contribution point 3 where we state “Our model I2DFormer consistently improves the SOTA in unsupervised semantic embeddings”(line 50-52). Moreover, we start the section 4 of our manuscript as “Since the main focus of this work is to learn unsupervised semantic embeddings, we do not use any human-annotated attributes.”(line 198-200). As mentioned in the previous point, this protocol has been introduced by previously published work including [61]. 
To address the reviewer’s concern, we have added additional explanation in the intro (line 34-36) and have updated the caption of Table 1 and 2 to specifically mention that our claim of SOTA is wrt unsupervised class embeddings.\n\n### **W3) qualitative evaluation samples are not indicated as random samples, failure cases.**\n\nThe qualitatives included in the main manuscript were chosen for clarity. The samples in the supplementary were chosen randomly on correctly classified images. We agree with the reviewer that showing failure cases for attention will offer more insights. Therefore we have now included a discussion around failure cases in the updated supplementary section 1.3 and Figure 3 as requested by the reviewer. We show that since our model directly learns the attention from the data instead of paired supervision for image regions and document words, it is not immune to dataset biases. The learned attention can fail in cases where unseen classes have a large number of instances in a single image, significant orientation changes or the attribute of a flower varies significantly from the seen classes.\n\n\n### **References:**\n\n[A] “Attribute Prototype Network for Any-Shot Learning” Wenjia Xu, Yongqin Xian, Jiuniu Wang, Bernt Schiele, Zeynep Akata, IJCV, 2022\n\n[B] “What Makes Training Multi-Modal Classification Networks Hard?” Weiyao Wang, Du Tran, Matt Feiszli, CVPR 2020.", " We want to thank the reviewer for the very positive feedback. We are glad that the reviewer shares our excitement around the novelty of our idea, the completeness of our experiments, the compute efficiency of our attention module, and our analysis around the performance of modern and classic document embedding models. The reviewer also rated the paper high for soundness, presentation, and contribution.", " The authors propose a method to learn a joint representation of an image with a very generic description of the object present in the image. This representation makes it possible to associate the discriminating elements of the textual description with visual elements of the image. The objective is to improve the zero-shot learning approaches and to classify images containing objects not seen during the learning phase only from the textual description. The proposed model is composed of two parts: a first part combines an image-based transformer and a text-based transformer through a scoring function that allows to compute the similarity between text embedding and image embedding. A second part allows alignment using visual queries and textual keys in a combined text and image transformer.\nExperiments are conducted on standard datasets for zero-shot learning. Wikipedia articles were collected to serve as class descriptions. The collected dataset will be made public after the review process.\nExperiments show that the proposed model outperforms state-of-the-art models and allows the classification of images containing objects never seen during the learning process based on their textual description only (unseen classes). Different types of semantic embedding are tested (glove, longformer, mpnet, tfidf). An ablation study and examples of qualitative results are presented.\n I am not familiar with the field of zero-shot learning but it seems to me that the model combining visual queries and textual keys in a transform model is original. 
The performances of the model seem to exceed the state of the art results and the model obtains interesting performances with simple embedding (Glove), which puts in perspective the contribution of more complex models like longformer for zero-shot learning problems. The bibliography seems to be very complete, again with citations of simple but efficient models (tfidf). The proposed model seems to be able to be trained on a single A100 GPU in one day, which is accessible. The experimental part is very complete with comparison to the state of the art, testing of different embeddings, ablation study and qualitative analysis.\n\n I am not familiar enough with the field of zero-shot learning to ask further questions. NA", " The authors propose an attention-based model for zero-shot learning from text documents - knowledge sources such as Wikipedia and bird/flower information websites. The proposed approach relies on two Transformer models: one for processing image patches, and another one for text documents. Each of the Transformers outputs a sequence of features which is fed into two different components of the proposed model. One of them fuses two sequences with an attention mechanism (I2D Attention) and produces an image-text matching score, described as local. The other component (I2D Global) only uses special classification tokens ([CLS]) from both text and image and similarly produces an image-text matching score, which the authors describe as global. The approach optimizes for both local and global matching but only the global one is used later on for classification/generating predictions.\n\nThe authors evaluate their model on standard ZSL datasets: AWA2, CUB, and FLO. They separately evaluate the performance of both their learned text features and their entire model that uses the global matching scores. When using their learned text features as input to existing four different ZSL models they observe quite a consistent improvement over alternative GloVe or VGSE features, often by a big margin. Also, using their entire proposed model, with global text-image matching scores used for classification, generally compares well against the other ZSL models.\n\nAdditionally, the authors argue that their model is more interpretable, showing examples of attention maps over words in a document or matching image-text element pairs. **Strengths:**\n\n- (S1) The proposed approach is conceptually simple - it consists mostly of dot-product attention. It doesn’t require many extra hyperparameters.\n\n- (S2) The experimental results. The learned document text features (I2DEmb) quite consistently, both among different datasets and different methods, outperform GloVe and VGSE features: often by a large margin. Additionally, the entire proposed model that uses a very simple similarity score between text and image [CLS] token often outperforms the remaining ZSL methods - even models that use the text features proposed in this work (I2DEmb)\n\n- (S3) Rich ablation studies and a good set of experiments: the experiments show the importance of different components of the proposed model. The authors evaluate their model with different types of textual features but also use their textual features with different models. 
The set of experiments covers the most important aspects of the model.\n\n- (S4) Attention maps provide some extra interpretability which can be important for understanding predictions, especially when using text documents as a source of information about classes (although see W3)\n\n**Weaknesses:**\n\n- (W1) Possibly suboptimal settings for ZSL models used to compare against. GAZSL was introduced using TF-IDF features, e.g. APN was introduced for attribute features. This paper however compares against them when using either GloVe or VGSE features - which might be suboptimal for those models. A more fair comparison should at least additionally include TF-IDF features as well (especially important for GAZSL).\n\n- (W2) The authors seem to miss a little bit of context regarding the attribute data. The usage of text documents instead of attributes, in general, might have many advantages. However, e.g. f-VAEGAN-D2, APN papers using attribute data report much higher performance on e.g. CUB dataset. The authors however do not mention any attribute-based results which might be misleading for the readers, as if attribute features were not competitive even on these datasets. Additionally, the claims of outperforming SOTA should be formulated more precisely - e.g. with respect to a specific type of data/source of information\n\n- (W3) The qualitative evaluation samples are not indicated as random samples, as opposed to selected. Additionally, for more convincing/insightful results an analysis of failure cases would be needed\n\n- (W4) The authors, without justifying, collect their own text documents instead of already standard existing ones already extracted. There is no motivation for it present in the paper and no discussion or analysis of the differences and the impact of using these vs. standard documents. Additionally, the authors mention performing some filtering of document sections, which could potentially be very important but is not discussed/analyzed further\n - Existing extracted documents (Wikipedia, AllAboutBirds, etc.):\n - FLO, CUB: Mohamed Elhoseiny, Ahmed Elgammal, and Babak Saleh. Write a classifier: Predicting visual classifiers from unstructured text. TPAMI 2016.\n - CUB: Mohamed Elhoseiny, Yizhe Zhu, Han Zhang, and Ahmed Elgammal. Link the head to the” beak”: Zero shot learning from noisy text description at part precision. CVPR 2017\n\n(W5) The “direction” of the attention seems somewhat arbitrary and not analyzed. The authors basically weight the text features (attention values) by the attention compatibility between image patches & text sequence elements. One could do the opposite - use image patch features and weight them by attention maps instead. Or do both directions - like some cross-attention works. No motivation behind this choice is discussed and no comparison of alternative choices is present. **Questions:**\n\n- (Q1) Table 3 (b): How are different input features (e.g. GloVe) used in the Document Transformer? Is it just an input to a Transformer instead of learning token embeddings (as in the first layer of Transformer)? Are those text features fixed or fine-tuned?\n\n- (Q2) Are any of either the Image or Document Transformer fine-tuned? The Image Transformer seems to be mentioned to be pre-trained but what about Document Transformer? Is it also pre-trained in any way or trained from scratch?\n\n- (Q3) What are the differences between text lengths of different documents? 
Since the softmax is computed over all classes, and texts lengths can be different, doesn’t it cause significant computational issues? Different lengths would require masking and padding\n\n- (Q4) How does the proposed model compare with other ZSL models with respect to computational requirements - both in training and generating predictions?\n\n- (Q5) How are soft attention maps produced (e.g. Figure 3) if the method only operates on image patches?\n\n- (Q6) Analysis of differences in classification between $s$ and $s_{local}$ predictions (Table 1 in the Appendix): are there any important differences other than just a different level of accuracy? Do they make different types of mistakes? If the two modules are complementary to each other some error analysis from the corresponding scoring function could be insightful\n\n- (Q7) Eq. 4: What is $\\hat{x}\\_{pa}$? Shouldn’t it be $\\hat{f}_{pa}$ instead?\n - Why H in eq. 4 is a function while in L167 it says it’s a vector?\n\n---\n\n**Suggestions:**\n\n- Ablations in the Appendix, Table 1 seem important - I would recommend at least mentioning the main observations in the main paper\n\n**Minor issues:**\n\n- L50: “consistent” —> “consistently”?\n\n- Table 3: “Modal” —> “Model”?\n\n- Figure 1 doesn’t seem to be referred to anywhere in the text\n\n- Hyperlinks should be highlighted somehow - they do not seem to be displayed in any special way - just as normal text - very easy to miss them\n\n- Terminology: Instead of document “embedding” - I’d rather say encoding/representations/features. Typically, “embedding” refers to basically a look-up for discrete objects (e.g. tokens) - the terminology used in this paper with “embeddings” used as a name for any features or representations makes it very easy to confuse with e.g. Transformer’s embeddings (in the very first layer). The authors do not discuss the computational cost of training & evaluating their model as opposed to alternative approaches. The qualitative analysis seems to include only selected positive example, with no fail-case analysis (see W3). Additionally, the authors use some very strong, over-exaggerated language - in L45 they claim that their model is “able to develop understanding of different parts of an animal”.", " In this paper, authors propose to address the zero-shot image classification problem under a more realistic setting, namely each class is descried with one document collected online. To address the issues with online textual documents contain certain noise, and different parts of the document may correspond to the different regions of the images, authors propose a transformer-based ZSL framework that jointly learns to encode images and documents by aligning both modalities in a shared embedding space. The cross-modality attention mechanism is introduced to suppress the noise. And extensive experiments are conducted to validate the proposed modules. Strengths:\n- The transform-based framework is proposed for the ZSL problem.\n\n- The I2D module is proven effective to capture the relevant image regions based on the collected documents.\n\n- The experimental results are considered favorable compared to SOTAs on three benchmark datasets. 
And the results are highly-interpretable.\n\n- The learned text embedding can be utilized with existing ZSL for further improvement.\n\nWeakness:\n- The discussions regrading the model complexity needs further clarification.\n\n- The novelty of cross modality attention among different tokens is somewhat limited being the main contribution of the whole paper.\n\n- The fairness of utilizing the online document compared to other SOTA ZSL frameworks needs further clarification.\n\nThe more detailed comments I'd like authors to address are summarized in the Questions part. - The implementation details regarding the compared methods should be given, since different noisy textual source is utilized under the current experimental settings;\n\n- What about the model complexity, FLOPs of the proposed models compared to other SOTA models?\n\n- What if all tokens (both visual ones and semantic ones) are jointly fed to the transformer instead of utilizing the separate transformer then employing the cross-modal attention, since the transformer contains self and cross attention mechanism itself. The main concern regarding the limitation is the model complexity.", " The paper proposes to use multimodal cross-attention to align image representation with word representation by unsupervised training on image-document pairs. The model is later validated on a Zero-Shot Learning task, and an interpretability study is conducted.\n Pros:\n1. The interpretability study demonstrates that the model learned to align modalities together.\n2. The performance over the baselines was improved.\n3. The idea of Image2Document attention seems natural - using the asymmetric attention(image-query) on text-image pairs and aligning them into the same embedding space. \n\nCons:\n1. Motivation: task description is short and appears relatively late in the paper(Zero-Shot Learning is a very vague statement). It is hard to assume what real-life problem the paper is trying to solve, which requires considerable mental effort to judge the model correctly. For example, the article's structure makes it impossible to understand the contribution while reading for the first time (not enough context on them is provided before). Both those issues make it hard to understand the significance of the contribution. \n2. Training: It remains unclear how the training proceeds, e.g., how is the output of the similarity function classified? How are the images aligned to the text? Where does the class information come from? \n3. Model: As I understand, the texts(words) are processed by converting them into GloVe embeddings, then through the MLP layer to be consumed by a Transformer network. This approach added a couple of processing steps, but the decision to do so remained unexplained and is not grounded in the experimental results anywhere in the paper. How much did using GloVe&MLP help? Additionally, in some aspects, the usefulness of this method counters the conclusion the paper reaches, especially the following: \"Documents of unseen classes use the same and additional vocabulary in new sentences causing a distribution shift in their input representation.\" This issue has been previously tackled in other areas of NLP with the subword tokenization (e.g., BPE). Why the authors use static word embeddings in such a situation remains highly mysterious.\n4. Model: Is there any reason not to try to unfreeze the pretrained visual encoder?\t\n5. 
Model: The proposed model trains a single attention layer while the ablation on scaling the model deeper is lacking but may be significant.\n6. Model: The choice of asymmetric attention(image-query) is not explained nor studied - one can imagine using the coattention or merging the inputs before passing to the self-attention (see, e.g. [1] for comparison). Is there any specific reason to design it that way?\n7. Clarity: The caption in Table 1 is not clear: how were semantic embeddings learned from different sources, and were they represented?\n8. Clarity: The paper presentation is aesthetically pleasing, but the structure makes it hard to understand and stack all the concepts together. Sadly, after reading the paper and having it in front of me, I would not be able to reproduce and reimplement the model's training. \n See Cons in the previous section, and please clarify:\n1. how the training proceeds,\n2. how datasets are constructed (especially where the class information comes from),\n3. what is the rationale for processing text with GloVe before passing it to Transformer? The authors partially analyze the results and provide some details on when the model fails." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 4, 4, 3 ]
[ "4FGfMgzAyFV", "4vbpCG9XzCC", "nips_2022_mjUrg0uKpQ", "n01wp3AkMC", "OCZnqZMG3N_", "x0H85AeucAY", "HMlV18LBBo", "4FGfMgzAyFV", "Q91Z1OAVj2Z", "jE2uEkq4F7M", "takbNy0RvHI", "ziQSA4F-WO", "NsslNJcf7De", "qSA8bskeCY1", "mJ6cMXtzHl", "nips_2022_mjUrg0uKpQ", "nips_2022_mjUrg0uKpQ", "nips_2022_mjUrg0uKpQ", "nips_2022_mjUrg0uKpQ" ]
nips_2022_fLIgyyQiJqz
Temporal Effective Batch Normalization in Spiking Neural Networks
Spiking Neural Networks (SNNs) are promising for neuromorphic hardware owing to their use of spatio-temporal information and sparse, event-driven signal processing. However, it is challenging to train SNNs due to the non-differentiable nature of the binary firing function. Surrogate gradients alleviate this training problem and let SNNs reach performance comparable to Artificial Neural Networks (ANNs) with the same structure. Unfortunately, batch normalization, which contributes to the success of ANNs, does not play a prominent role in SNNs because of the additional temporal dimension. To this end, we propose an effective normalization method called temporal effective batch normalization (TEBN). By rescaling the presynaptic inputs with different weights at every time-step, the temporal distributions become smoother and more uniform. Theoretical analysis shows that TEBN can be viewed as a smoother of the SNN's optimization landscape and can help stabilize the gradient norm. Experimental results on both static and neuromorphic datasets show that SNNs with TEBN surpass state-of-the-art accuracy with fewer time-steps and achieve better robustness to hyper-parameters than other normalization methods.
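For illustration, here is a minimal PyTorch sketch of the mechanism the abstract describes: shared batch statistics followed by a separate learnable weight for every time-step. This is one assumed reading of the idea, not the authors' released implementation, and all sizes are placeholders:

```python
import torch
import torch.nn as nn

class TEBNSketch(nn.Module):
    """Per-time-step rescaling of batch-normalized presynaptic input.

    Illustrative reading of the abstract: batch statistics shared over
    (batch, time), then a learnable weight for every time-step.
    """
    def __init__(self, num_channels: int, time_steps: int):
        super().__init__()
        self.bn = nn.BatchNorm1d(num_channels)
        # One learnable scale per time-step (initialized to 1).
        self.step_scale = nn.Parameter(torch.ones(time_steps))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (time, batch, channels) presynaptic input sequence.
        t, b, c = x.shape
        y = self.bn(x.reshape(t * b, c)).reshape(t, b, c)
        return y * self.step_scale.view(t, 1, 1)

# Toy usage: 4 time-steps, a batch of 8, 16 channels.
out = TEBNSketch(16, 4)(torch.randn(4, 8, 16))
```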
Accept
The paper proposes a method of batch normalization that takes into account the temporal dimension (TEBN) and empirically shows that TEBN can significantly improve the accuracy of spiking neural networks (SNNs). Theoretical analysis also provides new insights into how SNNs should be trained to improve accuracy (particularly in the face of temporal variation of internal covariate shift). This paper received conflicting evaluations from Reject to Strong Accept, and the reviewers did not reach a consensus even after fairly intense discussion. The disagreement appears to come from what one expects from SNNs: accuracy, robustness, latency, sparsity, biological plausibility, etc. While it is well empirically supported (particularly with the additional experiments during rebuttal) that TEBN increases accuracy, TEBN certainly loses biological plausibility, and operations such as the variance computation needed in TEBN might not be desirable for some applications of SNNs. Also, since many of the experiments were added during rebuttal, there is criticism of the lack of consistency in experimental design, which also leads to mixed evaluations regarding the benefit of the proposed approach. Overall, despite several weaknesses and uncertainties, the high accuracy certainly matters to some of the users and researchers of SNNs, and the paper clearly excels in this regard. Hence, I recommend acceptance.
train
[ "lC9dz2wssv6", "O_hUzdnlUI", "wn-28unYfP6", "IMPoNzsurw", "kupxNe3ClkA", "bgj7GHTOYt", "5AFiNvZ7hxM", "-q_oZimGm6E", "hBAdEHwvrOQ", "T3nG9oSo79m", "O-bB0tWR5ie", "48gMB-5xjg", "mFkc0giAPaD", "XquezqhLxcN", "lJ-DkxqRg8k", "3XtiiCLVLy99", "vuKdywLxM1Ex", "TVqzNYI16k", "PJ8TWtMqVUN5", "7B9c4CIozUl", "CDKRft3zqux", "ky4EkCirrE", "wv9-2WbDfa8", "k_PfUt6I8gk", "RNY5iYFOaL5" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " [1]https://github.com/fangwei123456/spikingjelly/blob/master/spikingjelly/activation_based/examples/speechcommands.py\n\n[2]Youngeun Kim and Priyadarshini Panda. Revisiting batch normalization for training low-latency deep spiking neural networks from scratch. Frontiers in Neuroscience, 15:773954–773954, 2021.\n\n[3]Hanle Zheng, Yujie Wu, Lei Deng, Yifan Hu, and Guoqi Li. Going deeper with directly trained larger spiking neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence(AAAI), 2021", " Dear Reviewer VwQw,\n\nThank you for your constructive further comments and suggestions. We would like to address your concerns and answer your questions here.\n\n***Comment 1. DVS (or Event-based camera) for classification lacks justification as DVS is sensitive to motion. Still, no motion is required for classification tasks.***\n\nWe agree that no motion is required for classification tasks. We have added the experiment on the Speech Commands dataset to compare our method with other normalization methods. We apply the 4-layer CNN proposed in [1] and train the network for 50 epochs. As reported in Table R4, our method achieves better performance than other normalization methods, which implies that our method works for the time-dependent task. As you suggested, we test our method on the tracking task, and our experiments are still ongoing. Due to the limited rebuttal time, we would like to provide the results in the final version.\n\n\n**Table R4: Comparison with other normalization methods on Speech Commands dataset.**\n|Model | Architecture | Accuracy(%) | \n|:-----:|:---:|:---------:|\n|LN | 4-layer CNN| 92.33 |\n|BN | 4-layer CNN| 95.12 |\n|BNTT[2]| 4-layer CNN| 94.23 |\n|tdBN[3]| 4-layer CNN| 95.39 |\n| **TEBN** |4-layer CNN| **95.47** |\n\n\n***Comment 2. I agree with the authors that LIF is a special case of SRM. But, it does not mean working with LIF would work with SRM. I still suggest conducting SRM-based experiments to show the generalization of the proposed method.***\n\nThanks for your suggestion. As the rebuttal time is due soon, we only added the experiment of SRM on the CIFAR-10 dataset to validate the generalization of TEBN. For the SRM model, we used standard double-exponential PSP kernels with a brief finite rise and exponential decay, of the form $\\epsilon(t) = \\frac{\\tau_m}{\\tau_m - \\tau_s} (e^{-\\frac{t}{\\tau_m}} - e^{-\\frac{t}{\\tau_s}})$. Due to limited time, we only train the SRM model with 60 epochs. As shown in Table R5, the SRM-based network gets slighter better accuracy than the LIF-based network with the same structure, which implies that our method can be generalized to different neuron models.\n\n\n\n**Table R5: Performance on CIFAR-10 dataset with SRM model.**\n|Architecture | Neuron |Epoch| Accuracy(%) | \n|:-----:|:---:|:-----:|:-----:|\n| 7-layer CNN |LIF($\\tau$=0.25)| 60/100 |90.30/92.57|\n|7-layer CNN | SRM(${\\tau_s}$=2,${\\tau_m}$=4)| 60 |90.32\n\n***Comment 3. Threshold experiments do not make sense without showing the average potential. Based on my experiences, setting a large threshold would surely break the network as no signal would pass through.***\n\nThanks for your suggestion, we have calculated the average potential at each layer and reported the results in Table R6. We can find that a larger threshold makes a larger average potential. As the accuracies change very little for the thresholds 0.5, 1, and 1.5, our method obtains good scalability when threshold changing. 
Furthermore, we have added new experiments on the CIFAR-10 dataset with much larger thresholds and reported the results in Table R7. Due to limited time for rebuttal, we only train the networks for 36 epochs. We find that a larger threshold makes training converge more slowly. Besides, training fails when the threshold is larger than 5.5. These results are consistent with your suggestion that \"a large threshold would surely break the network as no signal would pass through.\"\n\n**Table R6: Average potential at each layer with different thresholds.**\n|Threshold |Layer 1 | Layer 2 | Layer 3| Layer 4 |Layer 5 | Epoch|Accuracy(%)| \n|:-----:|:----:|:---:|:----:|:--:|:----:|:----:|:---:|\n| 0.5 |0.06| -0.29 | -0.34 | -0.36 | -0.07|100|92.22|\n| 1.0 | 0.30| -0.20 | -0.14 | -0.13 | 0.20|100|92.57|\n| 1.5 | 0.48| 0.01 | 0.05 | 0.03 | 0.38|100|92.99|\n\n**Table R7: Comparison with different thresholds on CIFAR-10 dataset.**\n|Architecture|Threshold |Epoch|Accuracy(%)| \n|:-----:|:------:|:---:|:---:|\n|7-layer CNN |2.0 | 36 |90.21|\n| 7-layer CNN| 2.5 | 36 |88.82|\n| 7-layer CNN| 3.0 | 36 |87.85|\n| 7-layer CNN |4.0 | 36 | 86.33|\n| 7-layer CNN |5.0 | 36 | 83.08|\n\n***Comment 4. \"all timesteps\" means all the steps in a window. Sure! But, what's the window size? How does the window size or the number of windows impact the proposed approach?***\n\nWe would like to note that the setting of timesteps means that we have chosen a fixed window size. The window size has an influence on the final classification accuracy as it can increase the precision of sampling. Data from all timesteps in our classification task is first squeezed into a window. So far, we have not studied the effect of multiple windows. As for the window size, we present the results of 2, 4, and 6 timesteps for our proposed approach. Our results in Tables 2 and 3 of the manuscript imply that larger timesteps may increase the accuracy of SNNs. \n\n", " Dear Authors, \n\nThanks so much for the efforts to try to address all my concerns!\n\nHowever, I am still not convinced. \n\nDVS (or Event-based camera) for classification lacks justification as DVS is sensitive to motion. Still, no motion is required for classification tasks. \n\nI agree with the authors that LIF is a special case of SRM. But, it does not mean working with LIF would work with SRM. I still suggest conducting SRM-based experiments to show the generalization of the proposed method. \n\nThreshold experiments do not make sense without showing the average potential. Based on my experiences, setting a large threshold would surely break the network as no signal would pass through. \n\n\"all timesteps\" means all the steps in a window. Sure! But, what's the window size? How does the window size or the number of windows impact the proposed approach? \n\nIn summary, from my perspective, I do not think the work is in shape to be accepted by NeurIPS. Therefore, I keep my original rating. ", " Dear Reviewer VwQw,\n\nThank you very much for your review. The authors have provided detailed responses to your review. Do they resolve your concerns, or do you still have anything you would like to clarify? We will be finishing the rebuttal period soon, so please let us know your opinion of the authors' responses. Thank you!\n\n\n", " Dear Reviewer gvBu,\n\nThank you very much for your review. The authors have provided detailed responses to your review. Do they resolve your concerns, or do you still have anything you would like to clarify? 
We will be finishing the rebuttal period soon, so please let us know your opinion of the authors' responses. Thank you!", " Dear Reviewer VwQw,\n\nThank you for the thorough feedback and constructive suggestions. Since the author-reviewer discussion period is approaching its end, we would like to kindly ask if our previous response clarifies your concerns and if there are any further comments; we are glad to answer them to facilitate the review process. Thanks a lot for your time!", " Thank you very much for the thorough review and for increasing the score.", " Thank you very much for increasing the score and for the detailed comments.", " Dear Reviewer gvBu,\n\nWe thank you for your detailed initial comments. We really hope to know whether our previous response has addressed your questions and concerns properly. As the discussion period will end soon, please let us know if you have any further questions so that we can write a follow-up response. Thank you very much!", " Thank you for the detailed comments. The responses have addressed all my concerns and comments. After reading the other reviews, I still believe this paper presents a contribution worthy of acceptance. Therefore, I would like to raise my score to 8.", " Taking into account the authors' responses to other reviewers and me, I am updating the rating. I found the rebuttal convincing.", " Thank you for the detailed feedback and constructive suggestions. It has been several days since we submitted our response, but we have not received your reply. We spent a lot of effort on the rebuttal, and we really hope that you can check whether our responses have addressed your concerns. Please let us know if you have any further comments, and we are glad to write a follow-up response. Thank you very much! ", " ***Comment 5. I would recommend the author validate the efficiency of mitigating gradient explosion and vanishing (Line 215) with the proposed TEBN using a deeper model (e.g., ResNet-50).***\n\nDue to limited time for rebuttal, here we add an experiment with ResNet-34 on the ImageNet dataset to demonstrate the efficiency of our method. We first compare the performance of the proposed TEBN and tdBN [3]. As reported in Table R2, our method achieves better performance (64.29\\% v.s. 63.72\\%) with fewer time-steps (4 v.s. 6) than tdBN [3]. Then we compare our method with other state-of-the-art learning methods [4,5] for SNNs. One can find that our method outperforms SEW [4] and TET [5] when the architecture and time-steps are the same. All these results demonstrate that the proposed method scales to this complex dataset. We have added these results in Tables 2 and 3 of the revised paper.\n\n**Table R2: Comparison with other normalization methods and the SOTA training methods on ImageNet dataset.**\n|Model |Methods | Architecture | Time-steps| Accuracy(%) | \n|:-----:|:------:|:------:|:----:|:-------:|\n| tdBN[3] |Surrogate Gradient| ResNet-34 | 6 | 63.72 | \n| **TEBN** |Surrogate Gradient| ResNet-34 | 4 | **64.29** | \n| SEW[4] |Surrogate Gradient| SEW ResNet-34 | 4 | 67.04 |\n| TET[5] |Surrogate Gradient| SEW ResNet-34 | 4 | 68.00 |\n| **TEBN** |Surrogate Gradient| SEW ResNet-34 | 4 | **68.28** | \n\n\n***Comment 6. What is the biological meaning of the hyper-parameters in the LIF model? 
Or if the authors explain how different hyper-parameters influence TEBN with theoretical analysis, not only experimental evaluations, the results would be more sound.***\n\nIn terms of mimicking the brain network, there is a spectrum of models varying in their biological realism and computational efficiency. The Integrate-and-Fire model (IF) is the simplest in biology and the most efficient for computation. There exist more complex models like the Leaky integrate-and-fire model (LIF), the spiking response model (SRM), the Hodgkin–Huxley model (HH), etc. [6]. \nThere are two hyper-parameters in the LIF neuron we use, the membrane time constant $\\tau$ and the firing threshold $\\theta$. LIF neurons are able to remember current input information and forget some information from the past, which is regulated by the relative scale of the membrane time constant $\\tau$. A suitable threshold $\\theta$ can maintain suitable firing rates and reduce information loss [3].\n\nExperimentally, we have presented the analysis of different time constants and firing thresholds to test the generalization. We showed the impacts of changing $\\tau$ in Sec 6.4, where $\\tau$=0.1, 0.25, 0.5, 0.75, 1.0. The accuracies of these $\\tau$ settings are 93.39, 93.54, 93.5, 93.53, and 93.61. Besides, we test the effect of different thresholds. The firing threshold is a hyper-parameter of the spiking neuron, corresponding to a biological neuron characteristic. For Threshold=[0.5, 1.0, 1.5], we obtain Accuracy=[92.22%, 92.57%, 92.99%] in experiments, as shown in Table R3. Overall, our experimental results show that our TEBN generalizes well when changing the hyper-parameters. \n\n**Table R3: Comparison with different thresholds on CIFAR-10 dataset.**\n|Threshold |Accuracy(%)| \n|:-----:|:------------:|\n| 0.5 | 92.22 |\n| 1.0 | 92.57 |\n| 1.5 | 92.99 |\n\n\n\n[1] Tong Bu, Jianhao Ding, Zhaofei Yu, and Tiejun Huang. Optimized potential initialization for low-latency spiking neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2022\n\n[2]Youngeun Kim and Priyadarshini Panda. Revisiting batch normalization for training low-latency deep spiking neural networks from scratch. Frontiers in Neuroscience, 15:773954–773954, 2021.\n\n[3]Hanle Zheng, Yujie Wu, Lei Deng, Yifan Hu, and Guoqi Li. Going deeper with directly trained larger spiking neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2021\n\n[4]Wei Fang, Zhaofei Yu, Yanqi Chen, Tiejun Huang, Timothée Masquelier, and Yonghong Tian. Deep residual learning in spiking neural networks. Advances in Neural Information Processing Systems (NeurIPS), 2021\n\n[5]Shikuang Deng, Yuhang Li, Shanghang Zhang, and Shi Gu. Temporal efficient training of spiking neural network via gradient re-weighting. In International Conference on Learning Representations (ICLR), 2021\n\n[6]Wulfram Gerstner, Werner M Kistler, Richard Naud, and Liam Paninski. Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge University Press, 2014.\n", " Thank you for your positive and constructive feedback. We would like to address your concerns and answer your questions in the following.\n\n***Comment 1. How does internal covariate shift appear in SNN? In my opinion, because of the different scales of each layer's output, ICS in ANN is obvious; however, in this paper, when you used the time-series LIF neuron model, the ICS should change within the time window. 
When using rate-based coding, the spikes are more intense compared to temporal coding; how does TCS behave in addition to sec. 4.1?***\n\nWe would like to point out that ICS in SNNs behaves as in ANNs, that is, the overall distribution of current in different layers is different. This overall distribution is a combination of currents from different time steps. Since SNNs add a time dimension compared to ANNs, the relationship between distributions in SNNs is more complicated. Even in the same layer, the current distribution at different time steps is different, or in other words, shifted. Considering a layer of neurons, the spikes from the previous layer will cause different synaptic connections (i.e., weights) to be driven, so the current distributions will not be exactly the same. Therefore, TCS should appear less pronounced when using a denser and more uniform coding scheme, such as rate encoding.\n\n***Comment 2. Line 40, the sentence, ‘Theoretical analysis shows that our approach can be viewed as a smoother of SNN’s optimization landscape and could help stabilize the gradient norm.’ needs a revision as it is repetitive with the statement in the abstract.***\n\nThanks for pointing it out. We have revised it to 'We prove that our approach could smooth the optimization landscape of SNN and help stabilize the gradient norm.' in the revised paper.\n\n***Comment 3. Line 65, please add a citation to support the statement (less than 15 layers).***\n\nThanks for your suggestion! We have added the reference [1] in the revised paper.\n\n\n***Comment 4. Page 9, Figures 2 and 3. The X and Y limits should be the same for both figures so that the distributions can be compared. Besides, I expect the authors to report some quantitative measures to compare the distributions, rather than directly state that TEBN appears more homogeneous.***\n\nThe reviewer noticed certain grammar and clarity issues. We have updated the manuscript accordingly. Furthermore, we have provided distribution figures of different methods with aligned axes in Sec. C of the Supplement.\n\nIn order to compare the distributions quantitatively, we plot the distributions shown in Fig. 2 as histograms, ranging from -1.5 to 1.5 with a bin width of 0.05. Then we calculate the Kullback-Leibler (KL) divergence of the distribution histograms between every pair of timesteps. The results show that TEBN has smaller KL divergence, which means more homogeneous distributions.\n\n**Table R1: Comparison of KL divergence of distributions.**\n|Model | T=0,1 (×1e-3) | T=0,2 (×1e-3) | T=1,2 (×1e-3)| \n|:-----:|:------:|:------:|:----:|\n| default BN |21.8| 0.9 | 16.9 | \n| BNTT[2] |17.1| 78.9 | 23.4 | \n| tdBN[3] |61.7| 6.6 | 40.3 | \n| TEBN |2.0| 6.0 | 1.1 | \n\n\n\n", " [1]Iyer, Laxmi R., Yansong Chua, and Haizhou Li. Is neuromorphic mnist neuromorphic? analyzing the discriminative power of neuromorphic datasets in the time domain. Frontiers in Neuroscience, 15, 2021\n\n[2]Hanle Zheng, Yujie Wu, Lei Deng, Yifan Hu, and Guoqi Li. Going deeper with directly trained larger spiking neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2021\n\n[3]Youngeun Kim and Priyadarshini Panda. Revisiting batch normalization for training low-latency deep spiking neural networks from scratch. Frontiers in Neuroscience, 15, 2021.\n\n[4]Wei Fang, Zhaofei Yu, Yanqi Chen, Tiejun Huang, Timothée Masquelier, and Yonghong Tian. Deep residual learning in spiking neural networks. 
Advances in Neural Information Processing Systems (NeurIPS), 2021\n\n[5]Shikuang Deng, Yuhang Li, Shanghang Zhang, and Shi Gu. Temporal efficient training of spiking neural network via gradient re-weighting. In International Conference on Learning Representations (ICLR), 2021\n\n[6]Yufei Guo, Xinyi Tong, Yuanpei Chen, Liwen Zhang, Xiaode Liu, Zhe Ma, and Xuhui Huang. RecDis-SNN: Rectifying Membrane Potential Distribution for Directly Training Spiking Neural Networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022\n\n[7]https://github.com/fangwei123456/spikingjelly/blob/master/spikingjelly/activation_based/examples/speechcommands.py", " As for the timestep of SNNs, we present the results of 2, 4, and 6 time-steps. This setting is the same as that in the SNN SOTA works [5]. Our results in Tables 2 and 3 imply that larger time-steps may increase the accuracy of SNNs. Nevertheless, we would like to note that the field of SNNs favors research on generalization with fewer time-steps. A larger time-step means greater computation cost both in training and inference. The field of SNNs in AI has come a long way in reducing the time-step required for inference and training: as suggested in our Related Work, ANN-to-SNN conversion ''demands many time-steps to approach the accuracy of pre-trained ANNs''. Compared to conversion methods, backpropagation with surrogate gradients requires far fewer time-steps. Many works have focused on performance at extremely few time-steps (fewer than 10) [4][5][6]. Therefore, it has always been the goal of SNN training to achieve higher performance with fewer time-steps.\n\n***Comment 3. The experimental results are only based on LIF. What about other SNN neuron models, e.g., SRM, IF?***\n\nIn terms of mimicking the brain network, we are aware of the fact that there is a spectrum of models varying in their biological realism and computational efficiency. The Integrate-and-Fire model (IF) is the simplest in biology and the most efficient for computation. There exist more complex models like the Leaky integrate-and-fire model (LIF), the spiking response model (SRM), the Hodgkin–Huxley model (HH), etc. \n\nLIF can be seen as a special case of SRM, and IF can be seen as a special case of LIF. LIF degenerates to IF when the membrane time constant $\\tau$ used in our method is set to 1.0. \nWe have actually experimented on IF models. We showed the impacts of changing $\\tau$ in Sec 6.4. We have chosen different $\\tau$s to test the generalization, $\\tau$=0.1, 0.25, 0.5, 0.75, 1.0. The accuracies of these $\\tau$ settings are 93.39, 93.54, 93.5, 93.53, and 93.61. These results suggest that our model has the potential to be applied to other neuron models.\n\nWe won't overclaim that our model can work for more complex neuron models without performing the related experiments. Indeed, how to build a computationally efficient model with complex neuron dynamics is quite a challenging problem for the whole machine learning and computational neuroscience communities. We are glad to investigate it further in future work.\n\n***Comment 4. The authors need to describe the detailed architecture of the models used in their experiments.***\n\nWe would like to clarify that the detailed network architectures used in our experiments are provided in Sec. E2 of the Supplementary Material.\n\n***Comment 5. Only LIF and classification tasks cannot show the generalization of the proposed method. 
Without revealing the performance of the tasks that rely on temporal information, I do not think the proposed scheme is meaningful to the community.***\n\nThanks for your comment. We would like to note that some event-based DVS datasets are recognized as naturally encoding temporal information [1]. Besides, we added an experiment on the Speech Commands dataset to validate the effectiveness of TEBN and report the results in Table R3. We apply the 4-layer CNN proposed in [7] and train the network for 50 epochs. We obtain an accuracy of 95.47% with TEBN, 95.12% with BN, 92.33% with Layer Normalization (LN) and 94.60% without BN. The results illustrate that our TEBN can address issues related to sequential data better than some popular normalization methods.\n\n**Table R3: Comparison with other normalization on Speech Commands dataset.**\n \n|Model | Architecture | Accuracy(%) | \n|:-----:|:---:|:---------:|\n| Without BN |4-layer CNN| 94.60 |\n|LN | 4-layer CNN| 92.33 |\n|BN | 4-layer CNN| 95.12 |\n| **TEBN** |4-layer CNN| **95.47** |\n ", " Thank you for your constructive comments and suggestions. We would like to address your concerns and answer your questions here.\n\n***Comment 1. Requiring all time steps information is not practical at all, especially for time-dependent tasks, e.g., tracking. For classification tasks, it may make sense. However, using SNN for classification tasks lacks justification as the tasks do not really need temporal information.***\n\n\nThanks for your comments. We would like to note that our proposed method currently aims to solve the problem of SNN surrogate training on deep architectures. The benchmark task of this field is the classification task. Our classification tasks are conducted not only on static image datasets, but also on event-based datasets, some of which are recognized as naturally encoding temporal information [1]. \n\nYou have raised an interesting concern about how to obtain the information of all time steps for time-dependent tasks. There may be a misunderstanding about ''all timesteps''. In our paper, ''all timesteps'' means all the steps in a window instead of the entire data sequence. For the event-based classification task, we adjust the full length of the event data into a fixed window. The window has a pre-determined size called ''timestep''. For video-based tasks like tracking, the window size means that one can feed multiple frames of a video into SNNs at a time.\n\n\n***Comment 2. The experimental setup is unclear, and validations are weak. For VGG, ResNet, and other architectures, which parts are based on SNNs? For the ResNet-19, did the authors only replace the ReLU with SNNs?\nThe experimental results did not show the impacts of the following factors on the classification tasks: mini-batch size, spiking threshold, higher time steps. The authors claimed that fewer time steps are enough for the tested classification tasks. However, how do larger time steps impact the results?***\n\n\nWe would like to clarify our experimental setting: yes, for VGG and ResNet architectures, we replace the ReLU activation with Leaky Integrate-and-Fire neurons. The leaky factor of neurons can be manually set. Our TEBN directly trains SNNs using surrogate gradients. Besides using the spiking neuron, the computational graph and gradient calculation are different from ANNs due to the computational mechanism of the spiking neuron and the additional temporal dimension. 
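A minimal sketch of the "replace ReLU with a LIF neuron" setup described above, in PyTorch. The rectangular surrogate window and the hard-reset scheme are our assumptions; the rebuttal does not specify the exact surrogate function used.

```python
import torch

class Spike(torch.autograd.Function):
    # Heaviside firing in the forward pass; rectangular surrogate gradient
    # in the backward pass (a common, but here assumed, choice).
    @staticmethod
    def forward(ctx, v, theta):
        ctx.save_for_backward(v)
        ctx.theta = theta
        return (v >= theta).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        window = (torch.abs(v - ctx.theta) < 0.5).float()
        return grad_out * window, None

def lif_step(v, x, tau=0.25, theta=1.0):
    # One time-step: leaky integration, firing, then hard reset.
    # With tau = 1.0 this degenerates to the IF neuron, as noted above.
    v = tau * v + x
    s = Spike.apply(v, theta)
    return v * (1.0 - s), s
```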
Overall, our setup of the model is identical to the related works [2][3][4] in the field of SNN training. Experiment details of the configurations and models are provided in the Supplementary Material (please refer to Sec. E of the Appendix). \n\nThanks for the suggestions on the impacts of some experimental factors. We have added new experiments on the CIFAR-10 dataset to demonstrate the generalization performance of our model. Here we use the 7-layer spiking CNN with the structure 28C3-256C3-AP2-512C3-AP2-1024C3-512C3-1024FC-10FC. We set the batch size to 64, timestep to 4, and threshold to 1 as the default setting. \n\nWe first experiment on the mini-batch size. The setting of batch size will potentially influence the statistics of our model. We set BatchSize=[4, 8, 16, 32, 64] and obtain Accuracy=[92.4%, 93.04%, 93.11%, 93.12%, 92.57%], as shown in Table R1. Our results demonstrate that even though the batch size changes significantly, our TEBN model still maintains good generalization capabilities. \n\n**Table R1: Comparison with different batch sizes on CIFAR-10 dataset.**\n|Batch |Accuracy(%)| \n|:-----:|:------------:|\n| 4 | 92.40 |\n|8 | 93.04 |\n|16 | 93.11 |\n| 32 | 93.12 |\n|64 | 92.57 |\n\n\nBesides, we test the effect of different thresholds. The firing threshold is a hyper-parameter of the spiking neuron, corresponding to a biological neuron characteristic. For Threshold=[0.5, 1.0, 1.5], we obtain Accuracy=[92.22%, 92.57%, 92.99%] in experiments, as shown in Table R2. Our experimental results show that our TEBN generalizes well when changing the threshold value. \n\n**Table R2: Comparison with different thresholds on CIFAR-10 dataset.**\n|Threshold |Accuracy(%)| \n|:-----:|:------------:|\n| 0.5 | 92.22 |\n| 1.0 | 92.57 |\n| 1.5 | 92.99 |\n\n", " ***Comment 4. does not mention the method's strong similarity to short-term plasticity (STP), which is a long-known property of biological SNNs for temporal filtering of inputs (see e.g. Fortune & Rose, 2001; Rosenbaum et al., 2012). Temporal filtering is what the authors' TEBN also applies. Of course, STP applies a fixed, e.g. exponential kernel, whereas TEBN learns the shape of the temporal kernel. On the other hand, short-term plasticity is also learnable (Garcia-Rodriguez et al., ICML 2022), which then makes STP similar to what the authors here achieve, but in several aspects with more flexibility than what the authors here propose. Moreover, STP has been shown to be a powerful property of SNNs, such that SNNs with STP can even outperform LSTMs and CNNs in accuracy (Moraitis et al., 2021). Even though the authors do not have to perform experimental comparisons of their method against STP, STP's relevance through these works must be mentioned, and discussed as a potential alternative for future work, that is also more biologically-realistic.***\n\nThanks for your suggestion. STP is indeed relevant and we have now cited and discussed it in the revised paper. In the conclusion section, we added \"Besides, recent works have shown that short-term plasticity (STP) \\cite{fortune2001short,rosenbaum2012short,tsodyks1997neural} can be incorporated into ANNs to enhance efficiency and computational power \\cite{moraitis2020optimality,rodriguez2022short}. As STP performs a function of temporal filtering similar to TEBN, how to use the biologically-realistic filter STP in SNNs is a future direction.\"\n\n\n***Comment 5. 
Are the accuracies of other methods reported in the tables results from the authors own experiments, or from the literature? Please clarify in the paper.***\n\nAll the accuracies of other methods reported in the tables are results from the mentioned literature. We have now clarified the source in the revised manuscript.\n\n\n\n[1]https://github.com/fangwei123456/spikingjelly/blob/master/spikingjelly/activation_based/examples/speechcommands.py\n\n[2]Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.\n\n[3]Moreau, Thomas, et al. Benchopt: Reproducible, efficient and collaborative optimization benchmarks. arXiv preprint arXiv:2206.13424, 2022.\n\n[4]Zheng, Yaowei, Richong Zhang, and Yongyi Mao. \"Regularizing neural networks via adversarial model perturbation.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR), 2021.\n\n[5]Parker, Luke, Frances Chance, and Suma Cardwell. \"Benchmarking a Bio-inspired SNN on a Neuromorphic System.\" In Neuro-Inspired Computational Elements Conference, 2022.", " Thank you for your constructive comments, suggestions and appreciation of our work. We would like to address your concerns and answer your questions here.\n\n \n\n***Comment 1. does not address issues related to truly sequential data. SNNs are considered as most suitable for temporal tasks, therefore a complete evaluation of the method should involve such tasks. An example is keyword spotting on the Speech Commands dataset. Currently, all tested datasets in the paper only include static images, even if in some cases the images are recorded through a DVS.***\n\nThanks for your suggestion. We have added the experiment on Speech Commands dataset to validate the effectiveness of TEBN and report the results in Table R1. We apply the 4-layer CNN proposed in [1] and train the network for 50 epochs. We obtain accuracy of 95.47% with TEBN, 95.12% with BN, 92.33% with Layer Normalization (LN) and 94.60% without BN. The results illustrate that our TEBN can address issues related to sequential data better than some popular normalization methods.\n\n\n**Table R1: Comparison with other normalization on Speech Commands dataset.**\n \n|Model | Architecture | Accuracy(%) | \n|:-----:|:---:|:---------:|\n| Without BN |4-layer CNN| 94.60 |\n|LN | 4-layer CNN| 92.33 |\n|BN | 4-layer CNN| 95.12 |\n| **TEBN** |4-layer CNN| **95.47** |\n\n\n\n\n***Comment 2. does not compare to layer normalization, a technique known as suitable for recurrent networks and temporal datasets.***\n\nThanks for your valuable suggestions. BN takes the same feature of different samples, while LN takes the different features of the same sample. To compare with LN, we have conducted multiple sets of experiments. In addition to the experiment on Speech Commands Dataset (Table R1), we also compare the performance of LN and the proposed TEBN on CIFAR-10 and DVS-CIFAR10 datasets (shown in Tables R2 and R3). Here we train the 7-layer CNN on CIFAR10 and the 6-layer CNN on DVS-CIFAR10 using LN, with the same networks as Table 3 in the manuscript. From the observation of our experiment, we find that our TEBN outperforms LN on CIFAR-10 and DVS-CIFAR10. The performance of TEBN is 92.65% (v.s. 83.79% of LN) on CIFAR10 and 80.00% (v.s. 62.90% of LN) on DVS-CIFAR10. 
Our results are consistent with the conclusion of [2] that BN generally performs better than LN in CNN.\n\n\n**Table R2: Comparison with LN on CIFAR-10 dataset.**\n|Model | Architecture | Time-steps| Accuracy(%) | \n|:-----:|:------------:|:----:|:-------:|\nLN | 7-layer CNN | 4| 83.79 | \n**TEBN** | 7-layer CNN | 4| **92.65** |\n\n**Table R3: Comparison with LN on DVS-CIFAR10 dataset.**\n|Model | Architecture | Time-steps| Accuracy(%) | \n|:-----:|:------------:|:----:|:-------:|\nLN | 6-layer CNN | 10| 62.90 | \n**TEBN** | 6-layer CNN | 10| **80.00** |\n\n\n***Comment 3. does not explain how its results compare to the true state of the art, i.e. beyond SNNs. For example, non-spiking networks can also reduce latency and computation while maintaining their higher accuracy than SNNs (Jeffares et al., ICLR 2022). But more generally, it would be good to put the results back in the context of the broader ML field, and discuss the differences e.g. in accuracy, latency, or computational efficiency.***\n \nThanks for your constructive suggestion. Back in the context of the broader ML field, we would like to include the result of non-spiking ANN SOTA in the comparison. Using the similar network architectures, the ANN accuracies are 95.55%[3] on CIFAR-10 and 78.49%[4] on CIFAR-100 (Tables R4 and R5). \n\nWe would like to note that computation efficiency is the benefit of SNN. Our TEBN does not directly cope with the problem of computational efficiency. In the context of SNN, binary activation eliminates the multiplication through adding operation, which theoretically leads to computational efficiency if the hardware supports the deployment, e.g. Loihi chips from Intel, where 0 activations (no spike) will not be involved in the computation. Detailed discussions of computational efficiency between non-spiking network and SNN can be found in [5].\n\n**Table R4: Comparison with ANN on CIFAR-10 dataset.**\n\n|Model | Architecture | Time-steps| Accuracy(%) | \n|:-----:|:------------:|:----:|:-------:|\nANN[3] | ResNet-18 | 1 | 95.55 | \n**TEBN** | ResNet-19 | 2 | 95.45 |\n\n**Table R5: Comparison with ANN on CIFAR-100 dataset.**\n|Model | Architecture | Time-steps| Accuracy(%) | \n|:-----:|:------------:|:----:|:-------:|\nANN[4] | PreAct-ResNet-18 | 1 | 78.49 | \n**TEBN** | ResNet-19 | 2 | 78.07 |\n\n\n", " ***Comment 3. BNTT and TDBN like methods were adapted for learning in diverse scenarios, liek Federated learning, Segmentation, large scale DVS datasets, NAS optimization among others. Maybe the authors can include a commentary or discussion on how their approach can be applied to more diverse learning scenarios***\n\nYou have raised an interesting concern. We have added the following references [4-6] in the revised paper to illustrate BNTT like methods can be used for learning in diverse scenarios. \n\n\nBesides, we believe that our method also can be applied to more diverse learning scenarios. Due to limited time for rebuttal, here we add the experiment of federated learning [7] to validate the generalization of the proposed TEBN. We compare our method with BNTT. For a fair comparison, we use the same VGG-9 structure and only replace BNTT by TEBN. The results are shown in Table R2, we obtained final testing accuracy of 85.81%(v.s. 76.44% of BNTT) with 10 clients in total and 2 participating clients. 
We have added the comparison in the revised paper (please refer to the supplementary).\n\n\n\n**Table R2: Comparison with BNTT on the task of federated learning.**\n|Model |Methods | Architecture | Time-steps| Accuracy(%) | \n|:-----:|:-------:|:-----:|:----:|:-------:|\n| BNTT[8] |Surrogate Gradient| VGG-9 | 20 | 76.44 | \n| **TEBN** |Surrogate Gradient| VGG-9 | 4 | **85.81** | \n\n \n\n\n\n[1]Hanle Zheng, Yujie Wu, Lei Deng, Yifan Hu, and Guoqi Li. Going deeper with directly trained larger spiking neural networks. In Proceedings of the AAAI Conference on Artificial Intelligence(AAAI), 2021\n\n[2]Wei Fang, Zhaofei Yu, Yanqi Chen, Tiejun Huang, Timothée Masquelier, and Yonghong Tian. Deep residual learning in spiking neural networks. Advances in Neural Information Processing Systems (NeurIPS), 2021\n\n[3]Shikuang Deng, Yuhang Li, Shanghang Zhang, and Shi Gu. Temporal efficient training of spiking neural network via gradient re-weighting. In International Conference on Learning Representations(ICLR), 2021\n\n[4] Venkatesha, Yeshwanth, et al. \"Federated learning with spiking neural networks.\" IEEE Transactions on Signal Processing, 69: 6183-6194, 2021. \n\n[5] Kim, Youngeun, et al. \"Beyond classification: directly training spiking neural networks for semantic segmentation.\" arXiv preprint arXiv:2110.07742, 2021. \n\n[6]Kim, Youngeun, et al. \"Neural architecture search for spiking neural networks.\" European Conference on Computer Vision (ECCV), 2022.\n\n[7]https://github.com/Intelligent-Computing-Lab-Yale/FedSNN\n\n[8]Youngeun Kim and Priyadarshini Panda. Revisiting batch normalization for training low-latency deep spiking neural networks from scratch. Frontiers in Neuroscience, 15:773954–773954, 2021.", " \n\n\nThank you for your detailed and insightful comments. We are encouraged that you find our method effective. We would like to address your concerns and answer your questions here.\n\n***Comment 1. The main comment I have is on the novelty of this work. The authors have cited many relevant works such as BNTT, TDBN etc. I think the paper's results are very similar to that of BNTT.***\n\nWe would like to clarify that our work is different from BNTT, tdBN, etc. As illustrated in Sec. 2.2, our work is inspiration from these related works and takes advantage of them. To be specific, BNTT can utilize separate sets of BN parameters on different time-steps to mitigate the temporal shift of distributions. While in tdBN, the utilization of shared parameters may neglect the negative impact brought by the unusual temporal distributions. The most significant difference of this work is: our TEBN model can model the temporal shift of distributions without including T times volumes of BN parameters (Eq.10-13) and take advantage of the overall distribution (Theorem 1&2). We believe our work will be appealing for the implementation of energy-efficient SNNs (fewer parameters) and the theoretical generalization capability of SNNs (better experimental results).\n\n***Comment 2. The authors of BNTT and TDBN experimented on larger datasets like Tiny Imagenet. Can the authors comment on the scalability of their approach?***\n\nThanks for your suggestion. We have performed new experiments on the ImageNet dataset to demonstrate the scalability of our method. We first compare the performance of the proposed TEBN and tdBN [1]. As reported in Table R1, our method achieves better performance (64.29\\% v.s. 63.72\\%) and fewer time-steps (4 v.s. 6) than tdBN [1]. 
Then we compare our method with other state-of-the-art learning methods [2,3] for SNNs. One can find that our method outperforms vanilla SEW [2] and TET [3] when the architecture and time-steps are the same. All these results demonstrate that the proposed method can scale to larger datasets. We have added these results in Tables 2 and 3 of the revised paper.\n\n\n**Table R1: Comparison with other normalization methods and the SOTA training methods on ImageNet dataset.**\n|Model |Methods | Architecture | Time-steps| Accuracy(%) | \n|:-----:|:------:|:------:|:----:|:-------:|\n| tdBN[1] |Surrogate Gradient| ResNet-34 | 6 | 63.72 | \n| **TEBN** |Surrogate Gradient| ResNet-34 | 4 | **64.29** | \n| SEW[2] |Surrogate Gradient| SEW ResNet-34 | 4 | 67.04 |\n| TET[3] |Surrogate Gradient| SEW ResNet-34 | 4 | 68.00 |\n| **TEBN** |Surrogate Gradient| SEW ResNet-34 | 4 | **68.28** | \n \n \n\n\n", " The paper proposes a temporal BN method to train SNNs with high accuracy. + The method is simple and effective as suggested by the authors' results.\n-The main comment I have is on the novelty of this work. The authors have cited many relevant works such as BNTT, TDBN etc. I think the paper's results are very similar to those of BNTT. Except for the fact that the accuracy is better on the datasets that the authors have experimented with, I don't think there is a lot of technical contribution or novelty. \n-Further, the authors of BNTT and TDBN experimented on larger datasets like Tiny Imagenet. Can the authors comment on the scalability of their approach?\n-BNTT- and TDBN-like methods were adapted for learning in diverse scenarios, like Federated learning [1], Segmentation [2], large scale DVS datasets, NAS optimization [3] among others. Maybe the authors can include a commentary or discussion on how their approach can be applied to more diverse learning scenarios.\n\n[1] Venkatesha, Yeshwanth, et al. \"Federated learning with spiking neural networks.\" IEEE Transactions on Signal Processing 69 (2021): 6183-6194.\n[2] Kim, Youngeun, et al. \"Beyond classification: directly training spiking neural networks for semantic segmentation.\" arXiv preprint arXiv:2110.07742 (2021).\n[3]Kim, Youngeun, et al. \"Neural architecture search for spiking neural networks.\" arXiv preprint arXiv:2201.10355 (2022). My main concern is the technical novelty of this paper with respect to previous works. The authors have performed a holistic comparison. But, can they comment on how their work is technically novel? At this time, in my opinion, the paper reads as a more incremental effort. Please see the above comments on weaknesses.", " The manuscript presents a technique (TEBN) for batch normalization in SNNs that takes into account the temporal dimension of the spiking input to rescale and recenter the input distribution differently at each input timestep, even though it still estimates the mean and variance of the whole batch, throughout all timesteps. The paper's theory shows that this smoothens the loss landscape. Experiments show that TEBN improves accuracy, robustness, and latency compared to other batchnorm methods and compares to the state of the art in the SNN literature, in image recognition of common benchmark datasets, including recordings of static images through neuromorphic vision sensors. The paper targets a true problem in the field of SNNs, and appears to provide a truly improved solution for certain important cases. 
In addition, it provides theoretical analysis, and experimental insights deeper than merely end accuracies, showing the impact of the method on input distributions and on robustness to hyperparameters.\n\nOn the other hand, the paper:\n- does not address issues related to truly sequential data. SNNs are considered as most suitable for temporal tasks, therefore a complete evaluation of the method should involve such tasks. An example is keyword spotting on the Speech Commands dataset. Currently, all tested datasets in the paper only include static images, even if in some cases the images are recorded through a DVS.\n- does not compare to layer normalization, a technique known as suitable for recurrent networks and temporal datasets, even though the authors' input includes a temporal dimension, and even though LIF SNNs are recurrent networks.\n- does not explain how its results compare to the true state of the art, i.e. beyond SNNs. For example, non-spiking networks can also reduce latency and computation while maintaining their higher accuracy than SNNs (Jeffares et al., ICLR 2022 https://openreview.net/forum?id=iMH1e5k7n3L). But more generally, it would be good to put the results back in the context of the broader ML field, and discuss the differences e.g. in accuracy, latency, or computational efficiency.\n- does not mention the method's strong similarity to short-term plasticity (STP), which is a long-known property of biological SNNs for temporal filtering of inputs (see e.g. Fortune & Rose, 2001 https://roselab.biology.utah.edu/publications/trends2001.pdf; Rosenbaum et al., 2012 https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002557). Temporal filtering is what the authors' TEBN also applies. Of course, STP applies a fixed, e.g. exponential kernel, whereas TEBN learns the shape of the temporal kernel. On the other hand, short-term plasticity is also learnable (Garcia-Rodriguez et al., ICML 2022 https://arxiv.org/abs/2206.14048), which then makes STP similar to what the authors here achieve, but in several aspects with more flexibility than what the authors here propose. Moreover, STP has been shown to be a powerful property of SNNs, such that SNNs with STP can even outperform LSTMs and CNNs in accuracy (Moraitis et al., 2021 https://arxiv.org/abs/2009.06808). Even though the authors do not have to perform experimental comparisons of their method against STP, STP's relevance through these works must be mentioned, and discussed as a potential alternative for future work, that is also more biologically-realistic. Could the authors please respond to the above points, and ideally address them with experiments?\nAre the accuracies of other methods reported in the tables results from the authors own experiments, or from the literature? Please clarify in the paper. The authors have not discussed any limitations of the work.", " The authors proposed a batch normalization scheme considering the temporal domain, TEBN. The proposed TEBN requires all time-step information to estimate the desired expectation and variance. The authors provided theoretical analysis and validated the effectiveness of the proposed TEBN with classification tasks. Strengths:\n+1. The authors showed good efforts in theoretical analysis, which is desired for the NeurIPS audience. \n\nWeaknesses:\n\n-1. Requiring all time steps information is not practical at all, especially for time-dependent tasks, e.g., tracking. For classification tasks, it may make sense. 
However, using SNN for classification tasks lacks justification as the tasks do not really need temporal information.\n\n-2. The experimental setup is unclear, and validations are weak. For VGG, ResNet, and other architectures, which parts are based on SNNs? For the ResNet-19, did the authors only replace the ReLU with SNNs?\n* The experimental results did not show the impacts of the following factors on the classification tasks.\n * mini-batch size\n * spiking threshold\n * higher time steps: The authors claimed that fewer time steps are enough for the tested classification tasks. However, how do larger time steps impact the results?\n\n-3. The experimental results are only based on LIF. What about other SNN neuron models, e.g., SRM, IF?\n\n\n\n Besides the questions I listed above, the authors need to describe the detailed architecture of the models used in their experiments. They cannot assume the audience knows them or make readers play a guessing game. The validation is not convincing at all from my perspective. Only LIF and classification tasks cannot show the generalization of the proposed method. Without revealing the performance on tasks that rely on temporal information, I do not think the proposed scheme is meaningful to the community. \n\nAs such, I suggest the authors show that the proposed scheme works with time-sensitive tasks. ", " This paper presents an efficient batch normalization (BN) method to smooth and homogenize the temporal distributions of the presynaptic input in Spiking Neural Networks (SNNs). By combining BN with temporal features, the proposed TEBN could alleviate the gradient vanishing problem to some extent. The proposed TEBN shows better classification accuracy and robustness to hyper-parameters on CIFAR-10, CIFAR-100 and DVS-CIFAR10 with fewer time-steps. Strengths: \n1) The writing is clear and the motivation is clarified clearly. Besides, the theoretical grounding and experimental evaluation are also sufficient to show its originality and significance. \n2) Experimental results also demonstrate the superiority of the proposed TEBN method. Through Fig. 1, we can see the advantages of the proposed TEBN intuitively compared to tdBN.\n3) This paper gives a detailed illustration of how TEBN operates in a surrogate-gradient SNN training method. If the authors show the input and output simultaneously with or without TEBN, the effectiveness may be demonstrated better visually. \n4) The non-differentiability of spiking leads to a non-smooth loss landscape; the authors analyze the smoothing of the optimization in sec. 5, which I think is very important for using surrogate-gradient or ANN2SNN methods to train a deep SNN model.\n\nWeaknesses: \nHow does internal covariate shift appear in SNN? In my opinion, because of the different scales of each layer's output, ICS in ANN is obvious; however, in this paper, when you used the time-series LIF neuron model, the ICS should change within the time window. When using rate-based coding, the spikes are more intense compared to temporal coding; how does TCS behave in addition to sec. 4.1? \n \n1) Line 40, the sentence, ‘Theoretical analysis shows that our approach can be viewed as a smoother of SNN’s optimization landscape and could help stabilize the gradient norm.’ needs a revision as it is repetitive with the statement in the abstract. \n2) Line 65, please add a citation to support the statement (less than 15 layers). \n3) Page 9, Figures 2 and 3. 
The X and Y limits should be the same for both figures so that the distributions can be compared. Besides, I expect the authors to report some quantitative measures to compare the distributions, rather than directly state that TEBN appears more homogeneous.\n4) I would recommend the authors validate the efficiency of mitigating gradient explosion and vanishing (Line 215) with the proposed TEBN using a deeper model (e.g., ResNet-50).\n The efficiency of mitigating gradient explosion and vanishing (Line 215) with the proposed TEBN is not validated sufficiently in this work. And what is the biological meaning of the hyper-parameters in the LIF model? Or if the authors explain how different hyper-parameters influence TEBN with theoretical analysis, not only experimental evaluations, the results would be more sound." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 3, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 5, 5 ]
[ "O_hUzdnlUI", "wn-28unYfP6", "IMPoNzsurw", "lJ-DkxqRg8k", "7B9c4CIozUl", "k_PfUt6I8gk", "O-bB0tWR5ie", "T3nG9oSo79m", "ky4EkCirrE", "mFkc0giAPaD", "TVqzNYI16k", "nips_2022_fLIgyyQiJqz", "XquezqhLxcN", "RNY5iYFOaL5", "3XtiiCLVLy99", "vuKdywLxM1Ex", "k_PfUt6I8gk", "PJ8TWtMqVUN5", "wv9-2WbDfa8", "CDKRft3zqux", "ky4EkCirrE", "nips_2022_fLIgyyQiJqz", "nips_2022_fLIgyyQiJqz", "nips_2022_fLIgyyQiJqz", "nips_2022_fLIgyyQiJqz" ]
nips_2022_js2ssA77fX
Masked Generative Adversarial Networks are Data-Efficient Generation Learners
This paper shows that the masked generative adversarial network (MaskedGAN) is a robust image generation learner with limited training data. The idea of MaskedGAN is simple: it randomly masks out certain image information for effective GAN training with limited data. We develop two masking strategies that work along orthogonal dimensions of training images, including a shifted spatial masking that masks the images in spatial dimensions with random shifts, and a balanced spectral masking that masks certain image spectral bands with self-adaptive probabilities. The two masking strategies complement each other and together encourage more challenging holistic learning from limited training data, ultimately suppressing trivial solutions and failures in GAN training. Albeit simple, extensive experiments show that MaskedGAN achieves superior performance consistently across different network architectures (e.g., CNNs including BigGAN and StyleGAN-v2 and Transformers including TransGAN and GANformer) and datasets (e.g., CIFAR-10, CIFAR-100, ImageNet, 100-shot, AFHQ, FFHQ and Cityscapes).
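As a rough illustration of the two masking strategies described above, here is a hedged PyTorch-style sketch. The patch size, mask ratios, and radial band partition are placeholder choices, and the paper's self-adaptive per-band probabilities are not reproduced here; treat this as an interpretation of the abstract, not the authors' implementation.

```python
import torch

def shifted_spatial_mask(x, patch=8, ratio=0.3):
    # Zero a random subset of patches after shifting the patch grid randomly.
    # Assumes H and W are divisible by `patch`.
    n, _, h, w = x.shape
    dy, dx = torch.randint(0, patch, (2,)).tolist()
    x = torch.roll(x, shifts=(dy, dx), dims=(2, 3))
    keep = (torch.rand(n, 1, h // patch, w // patch, device=x.device) > ratio)
    keep = keep.float().repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    return x * keep

def spectral_mask(x, drop_prob=0.3, bands=4):
    # FFT -> zero whole radial frequency bands -> inverse FFT.
    n, _, h, w = x.shape
    f = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    yy, xx = torch.meshgrid(torch.arange(h) - h // 2,
                            torch.arange(w) - w // 2, indexing="ij")
    r = torch.sqrt(yy.float() ** 2 + xx.float() ** 2)
    band = (r / (r.max() + 1e-8) * bands).long().clamp(max=bands - 1)
    keep = (torch.rand(n, 1, bands) > drop_prob)       # uniform, not self-adaptive
    mask = keep[:, :, band].float().to(x.device)       # -> (n, 1, h, w)
    return torch.fft.ifft2(torch.fft.ifftshift(f * mask, dim=(-2, -1))).real
```

Both functions only remove information (zeroing in pixel or frequency space), which is the property the paper leans on when distinguishing masking from additive-noise augmentations.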
Accept
This paper proposes two masking strategies to improve GANs with limited data. The idea is novel and the two strategies nicely complement each other. The experimental results are promising. The reviewers unanimously raised questions about missing comparisons, which seem to have been well addressed after author-reviewer discussion. Two reviewers ended up raising their scores. There are still some claims in the paper that may need stronger experimental support, and more visual illustrations would also help the presentation. In addition, all reviewers pointed out that the discussion of limitations and broader impact seems inadequate. I would recommend weak acceptance; however, I strongly encourage the authors to address the above-mentioned concerns in their next version.
train
[ "0LtU-pnz1Mn", "ASirOq1Euc3", "pElzxhw8-6M", "kHnjGM8tPB-", "SVUkdcNyjA4", "fSd6tQUImAJ", "jm4tGxyjTqb", "as8kLazzsGn", "UG3U5GGuf1S", "J4Lh6pa7X7p", "_S2jnEcPKT6", "ePIjE72TG92", "H0aiXd7RtV4", "xeV6DWa8hwx", "yE9hR5bWMB", "EeUwAi4XSQ", "enUYom-mzlM" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the additional experiments and analysis provided by the authors. My concerns are well addressed. I will raise my score. ", " Thanks for the timely and detailed response from the authors. My concerns have been addressed, and I will raise the score.", " Dear Reviewer mM3P:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest regards, \nAuthors\n", " Dear Reviewer aPDi:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest regards,\nAuthors\n", " Q2: Masking, including spatial masking and spectral masking, can also be viewed as a kind of data augmentation strategy. What makes masking, or the specially designed masking in this paper, superior to other data augmentation strategies?\n- Thank you for your questions. We would clarify that we discussed the difference between our MaskedGAN and data augmentation methods extensively in our manuscript and appendix. For example, the 4th paragraph of Section 4.4 describe the better convergence of the proposed MaskedGAN and why it converges better than existing data augmentation methods; Section 3.3 provides detailed theoretical insights and illustrations (on two time-scale update rule and local Nash Equilibrium); Section H of appendix shows the comparison with \"cutout\" that is used in data augmentation methods DA and ADA.\n- In summary, MaskedGAN designs two image masking strategies that work by removing certain image information only while previous data augmentation methods involve various data augmentations such as color jitters, saturation adjustment, etc. Besides, different to the conventional \"cutout’ in DA and ADA, MaskedGAN uses \"random patch-based masking\" and designs \"random mask shift\" and \"balanced spectral masking\".\nSuch differences in designs lead to very different results in convergences (empirically and theoretically) and generation performances.\nMore detailed descriptions can found in the following texts. Thank you for your suggestion and we will include the above discussions in the revised manuscript.\n\n\n- Followed please find the detailed texts copied (or summarized) from our manuscript and appendix for your reference, which extensively discuss the difference between our MaskedGAN and data augmentation methods:\n\n- **1)** As mentioned in the 4th paragraph (\"Convergence comparison across different network architectures and datasets\") of Subsection 4.4, the experimental results show that MaskedGAN converges well consistently across various conditions (the amounts of training data, network architectures and datasets) while the data augmentation method such DA still suffers from generation failures and training collapses. 
The strong convergence of MaskedGAN is largely attributed to two factors: (a) its image masking designs directly suppress trivial solutions and training failures; (b) it keeps similar learning paces for the discriminator and generator, which ensures that the networks converge to a Local Nash Equilibrium under certain conditions [21].\nMaskedGAN can achieve factor (b) because its image masking strategies (which can also be viewed as data augmentation operations) operate by masking (or more specifically, removing) some image information only.\nIn contrast, data augmentation methods such as DA and ADA cannot guarantee factor (b), as they generally include operations like color jitter and saturation adjustment that cannot satisfy the theoretical proofs introduced in Section 3.3.\n\n- **2)** Point 1) described the design differences between MaskedGAN and previous data augmentation methods (e.g., DA and ADA) and illustrated that different designs lead to different empirical convergence and, in turn, different generation performance.\nIn Section 3.3, we provided detailed theoretical insights and illustrations, which show that MaskedGAN can be modeled as an instance of the Two Time-Scale Update Rule and thus converges to a Local Nash Equilibrium under certain conditions. \nOn the other hand, previous data augmentation methods (e.g., DA and ADA) cannot satisfy our Propositions 1 and 2 provided in Section 3.3, because methods such as DA and ADA include operations like color jitter and saturation adjustment, where such additive noises violate the proposition condition of removing certain image information only. \n\n- **3)** In Section H of the appendix (\"Comparisons with the ‘cutout’ used in ADA and DA\"), we compared our MaskedGAN with the ‘cutout’ used in ADA and DA by providing experiments and detailed explanations.\n\nQ3: How much does the computation overhead increase after adding the masking strategy during GAN training?\n- The masking strategy can be considered a type of data augmentation; it involves the Fourier transformation, the inverse Fourier transformation and zeroing operations, which introduce little extra computation overhead (similar to the previous works ADA and DA).", " Q1: My major concern is why the comparison results with other techniques for training GAN with limited data on AFHQ, FFHQ, and ImageNet are missing. The experimental results are not convincing enough without them?\n- Thank you for the suggestion! We conducted the suggested experimental comparisons over ImageNet.\n- We would clarify that we did not benchmark against ADA-related methods, including InsGen and APA, as our work mainly follows DA, which adopts a different experimental setup. Benchmarking against ADA-related methods requires rerunning their code under DA's experimental setup for valid comparisons. Such experiments are extremely computationally intensive, e.g., ADA takes approximately 1,259,337.6 GPU hours (i.e., 143.76 GPU years) to complete its experiments. We did not have sufficient GPU resources to conduct such experiments.\n- Therefore, due to computational resource constraints, we only benchmarked DA and ADA over the very large ImageNet dataset, the very small 100-shot dataset and the medium-size CIFAR-10/100 dataset, where we believe extensive experiments over these datasets are sufficient for benchmarking the proposed method.
We plan to conduct related experiments and benchmark DA and ADA over other datasets and backbones later.\n- In addition, the experiments and comparisons over the 100-shot dataset (i.e., the revised Table 4) are very relevant and meaningful as this paper focuses on training GAN with limited data, where the task would be more challenging while working with a small dataset.\n\nRevised Table 3: Conditional image generation with BigGAN on ImageNet (FID).\n| Method | 10% Data | 5% Data | 2.5% Data | \n| --------------------------------|:-------:|:-----:|:-----:|\n|BigGAN (baseline) |38.30 ± 0.25 |91.16 ± 0.43 |133.80 ± 0.76 \n|ADA (**newly included**) | 31.89 ±0.17 |43.21 ± 0.37 | 56.83 ± 0.48 \n|DA (**newly included**) | 32.82 ± 0.18 |56.75 ± 0.35 |63.49 ± 0.51 \n|**MaskedGAN** | 26.51 ± 0.12 | 35.70 ± 0.31 | 38.62 ± 0.37 \n\nRevised Table 3: Conditional image generation with BigGAN on ImageNet (IS).\n| Method | 10% Data | 5% Data | 2.5% Data | \n| --------------------------------|:-------:|:-----:|:-----:|\n|BigGAN (baseline) |10.94 ± 0.35 |6.13 ± 0.09 |3.92 ± 0.07 |\n|ADA (**newly included**) |12.67 ±0.31 |9.44 ±0.25 |8.54 ± 0.26 |\n|DA (**newly included**) |12.76 ± 0.34 |9.63 ± 0.21 |8.17 ± 0.28 | \n|**MaskedGAN** |13.34 ± 0.24 | 12.85 ± 0.40 | 12.68 ± 0.27 | \n\nRevised Table 1: Conditional image generation with BigGAN on CIFAR-10 (FID).\n| Method | 100% Data | 20% Data | 10% Data | \n| --------------------------------|:-------:|:-----:|:-----:|\n| Non-saturated GAN |9.83 ± 0.06 |18.59 ± 0.15 |41.99 ± 0.18 |\n| LS-GAN |9.07 ± 0.01 | 21.60 ± 0.11 | 41.68 ± 0.18 \n| RAHinge GAN | 11.31 ± 0.04 | 23.90 ± 0.22| 48.13 ± 0.33|\n| StyleGAN-v2 | 11.07 ± 0.03 | 23.08 ± 0.11 | 36.02 ± 0.15 | \n| BigGAN (baseline) | 9.74 ± 0.06| 21.86 ± 0.29| 48.08 ± 0.10 | \n| LeCam-GAN| 8.31 ± 0.05 |15.27 ± 0.10 |35.23 ± 0.14 |\n| GenCo | 8.83 ± 0.04 | 16.57 ± 0.08 | 28.08 ± 0.11|\n| ADA (**newly included**) |8.99 ± 0.03 |19.87 ± 0.09 |30.58 ± 0.11 |\n| DA | 8.75 ± 0.03 | 14.53 ± 0.10| 23.34 ± 0.09 |\n| **MaskedGAN** | 8.41 ± 0.03 | 12.51 ± 0.09| 15.89 ± 0.12 |\n\nRevised Table 1: Conditional image generation with BigGAN on CIFAR-100 (FID).\n| Method | 100% Data | 20% Data | 10% Data | \n| --------------------------------|:-------:|:-----:|:-----:|\n| Non-saturated GAN | 13.87 ± 0.08| 32.64 ± 0.19 | 70.5 ± 0.38| \n| LS-GAN| 12.43 ± 0.11| 27.09 ± 0.09| 54.69 ± 0.12 | \n| RAHinge GAN| 14.61 ± 0.21| 28.79 ± 0.17| 52.72 ± 0.18| \n| StyleGAN-v2| 16.54 ± 0.04 | 32.30 ± 0.11| 45.87 ± 0.15 | \n| BigGAN (baseline) | 13.60 ± 0.07| 32.99 ± 0.24| 66.71 ± 0.01| \n| LeCam-GAN | 11.88 ± 0.12| 25.51 ± 0.19| 49.63 ± 0.16| \n| GenCo| 11.90 ± 0.02| 26.15} ± 0.08| 40.98 ± 0.09| \n| ADA (**newly included**) | 12.22 ± 0.02| 22.65 ± 0.10| 27.08 ± 0.15| \n| DA| 11.99 ± 0.02|22.55 ± 0.06|35.39 ± 0.08 | \n| **MaskedGAN** | 11.65 ± 0.03| 18.33 ± 0.09| 24.02 ± 0.12| \n\nRevised Table 4: Unconditional image generation with StyleGAN-v2 on 100-shot dataset (FID).\n| Method | Obama | Grumpy Cat | Panda | \n| --------------------------------|:-------:|:-----:|:-----:|\nStyleGAN-v2 (baseline) | 80.20 | 48.90 | 34.27 \nADA | 45.69 | 26.62 | 12.90 \nLeCam-GAN | 38.58 | 41.38 | 19.88 \nGenCo | 36.35 | 33.57 | 15.50 \nAdvAug | 52.86 | 31.02 | 14.75 \nDA |46.87 | 27.08 | 12.06 \nAPA (**newly included**) | 43.75 | 28.49 | 12.34 \nInsGen (**newly included**) | 45.85 | 27.48 | 12.13 \n**MaskedGAN** | 33.78 ± 0.27 | 20.06 ± 0.13 | 8.93 ± 0.06 ", " Q5: What do you mean by Masking? Are you zeroing some pixels? 
In that case, are you using Gated or Partial Convolutions? If yes, then the models are different, and if no, how do you handle the normalization of the output to adjust for the fraction of missing data?\n- As mentioned in Section 3.2 (Lines 138-150 and 159-175), image masking is defined as multiplying the image by a binary mask (e.g., $M_{spatial}(x) = x \times m_{spatial}$, where $m_{spatial} \in \{0,1\}^{H \times W}$). This definition is the same as in the Masked Autoencoders papers (e.g., Context Encoder [39], MAE [20], etc.).\n\n- We did not use Gated or Partial Convolutions. \n- We conducted the image masking operations after image normalization during training. The Masked Autoencoders papers (e.g., Context Encoder [39], MAE [20], etc.) also conducted image masking in a similar way, and all of them show that such masking operations work well for both CNN-based and Transformer-based networks. We believe this should not be a problem, as none of these works (e.g., Context Encoder [39], MAE [20], etc.) reported any normalization issues from zeroing some pixels in CNN-based or Transformer-based network training.\n\nQ6: In Fig2, I don't understand how the masked image is used in the model? Specifically, can you make it clear if the model receives the right images directly? If it receives the lower image (Masked Image in the spectral domain), it seems a completely different image! Do some of the pixels retain the original pixel values?\n- In Fig. 2, both \"Masked Images\" are directly fed into the networks for training. We confirm that the model receives the images on the right (i.e., the two Masked Images) directly.\n\n- The second \"Masked Image\" at the bottom of Fig. 2 (the Masked Image in the spectral domain) looks quite different from the original image because the low-frequency bands have been masked (i.e., removed). If we zoom in on it, we can observe that the low-frequency information (e.g., brightness and color) has been removed while the mid-frequency and high-frequency information (e.g., shapes and outlines) is kept intact.\n\n- In spatial masking, the un-masked pixels retain their original values. In spectral masking, removing any image spectra will generally change the value of all image pixels, as image spectra (also called image frequency bands) capture information about each and every pixel globally.\n\n- As mentioned in Section 3.2 (Lines 159-168), spectral masking encourages the networks to learn from all image spectral bands instead of focusing only on easy bands (e.g., low-frequency bands capturing color and brightness).\n\n- Besides, both the Fourier transformation and the pixel zeroing operation are differentiable, so the augmentation does not leak information to the generator, as proved in DA. (A minimal code sketch of these two masking operations is given right after this block.)\n\n", " Q1: The literature review section does not provide coherent statements for each cited work, and it bulk references some papers that might not be directly related. e.g. for Masked Autoencoders, it references basic papers of autoencoders (45, 39, 8, 17, 2, 20, 48), which need to be discussed in the Autoencoder section?\n- We would clarify that the references [45, 39, 8, 17, 2, 20, 48] are all appropriate: each of them proposes a new Masked Autoencoder (MAE) with a different image masking strategy. For example, [45] is the pioneering work of MAE, which presents masking as a noise type in Denoising Autoencoders (DAE); Context Encoder [39] proposes to mask random image regions of different shapes.
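To make the two masking operations described in the Q5/Q6 answers above concrete, here is a minimal PyTorch-style sketch. It is illustrative only: the function names and the specific patch size, mask ratio, shift range, and band radii are assumptions for this example, not the exact implementation described in the paper.

```python
import torch

def spatial_mask(x, patch=8, ratio=0.3, max_shift=4):
    # x: (B, C, H, W); zero out randomly chosen patches.
    # H and W are assumed divisible by `patch`; the random roll stands in
    # for the "random mask shift" idea.
    B, _, H, W = x.shape
    keep = (torch.rand(B, 1, H // patch, W // patch, device=x.device) > ratio).float()
    mask = keep.repeat_interleave(patch, 2).repeat_interleave(patch, 3)
    dy = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    dx = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    return x * torch.roll(mask, shifts=(dy, dx), dims=(2, 3))  # masked pixels are zeroed

def spectral_mask(x, r_lo=0.0, r_hi=0.25):
    # FFT -> zero one annular frequency band (low frequencies by default)
    # -> inverse FFT. Every step is differentiable, so gradients still flow.
    _, _, H, W = x.shape
    f = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    yy, xx = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    r = (xx ** 2 + yy ** 2).sqrt().to(x.device)
    band = ((r >= r_lo) & (r < r_hi)).to(x.dtype)
    f = f * (1.0 - band)  # remove the selected spectral band
    return torch.fft.ifft2(torch.fft.ifftshift(f, dim=(-2, -1))).real
```

In a DA-style pipeline, such transforms would presumably be applied symmetrically to real and generated images before the discriminator, so the augmentation does not leak into the generator's objective.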
iGPT [8], ViT [17], BEiT [2], MAE [20] and SimMIM [48] are inspired by transformers [17]: [8] proposes masked pixel prediction; [17] uses masked patch prediction; based on [17], [2] proposes tokenization with a block-wise masking strategy, [20] proposes to use a high masking ratio, and [48] proposes several simple designs (e.g., a large masked patch size and a light prediction head). Thus, we believe we provided coherent statements for each cited work in the Related Work section.\n- Note that the recent well-known Masked Autoencoders papers [20, 48] also cited [45, 39, 8, 17, 2] as related works on Masked Autoencoders (also called \"masked image encoding/modeling\"); see the third paragraph of Section 2 in [20] and the second paragraph of Section 2 in [48].\n\nQ2: The mentioned shifting and Fourier domain manipulation (masking) are not clearly discussed?\n- We would clarify that we clearly defined the \"random shift\" (Lines 138-150) and the \"Fourier domain manipulation (masking)\" (Lines 159-175) in Section 3.2. We also extensively discussed these two designs in our manuscript and appendix. Please find below the detailed texts copied (or summarized) from our manuscript and appendix for your reference:\n\n- In Table 2, we presented ablation studies to discuss and examine how each design (i.e., \"Random Shift\", \"Spectral Masking\" and \"Self-adaptive Probability\") contributes to the overall performance (Lines 207-221).\n\n- In Section E of the appendix (\"Parameter ablations\"), we presented parameter studies that extensively discuss the parameters used in MaskedGAN (including the parameters involved in these two designs).\n\nQ3: About implementation details. It is not clear how the sampling happens in the limited data scenario and if it is heterogeneous or homogenous? Is the sampled data heterogeneous or homogeneous? How have you handled it? The number of training rounds is not specified? What are the learning rate, batch-size, and epochs of training for each of the experiments?\n- We would clarify that we adopted training details similar to those of earlier data augmentation methods such as DA and ADA.\nWe provided the experimental details in Section D of the appendix (\"Experiment details\"), which includes detailed information on the datasets, backbones and network training, as well as training details such as the learning rate, batch size and other specifics used for each backbone and dataset.\n- We adopted the same data sampling random seed (i.e., which data are selected for the limited-data scenario) and the same number of training rounds as used in the previous work DA.\nWe did not spell this information out because most data-efficient GAN studies follow DA (or ADA) to conduct the experiments, and all the training details (e.g., the random seed of data sampling, the number of training rounds, the learning rate, etc.) are the same as used in DA (or ADA).\n- As mentioned, we followed DA to conduct the experiments, where the data sampling is heterogeneous (e.g., even if the full training data are evenly distributed across categories, after sampling the amount of training data may differ across categories).\n\nQ4: It seems it works as an augmentation, and with multiple random masking, we actually increase the number of data samples.
I think after mentioning the number of iterations, it would be appropriate to compare with SOTA augmentation techniques in GANs?\n- As mentioned in the response to the previous question (i.e., Q4 from Reviewer mM3P), we followed DA to conduct the experiments, where the number of training epochs is the same as in DA.", " Q1: Missing augmentation comparison? In table 3, a proper comparison would be 1. biggan, 2. biggan + diffaug[1], 3. biggan + [2], 4. biggan + masked-aug (MaskedGAN)?\n- Thank you for the suggestion! We conducted the suggested experiments over ImageNet in the revised Table 3. It can be seen that the proposed MaskedGAN clearly outperforms the other data augmentation methods (i.e., ADA and DA), which is largely attributed to our two masking designs. These experiments are consistent with the other experiments in the revised Tables 1 and 4 (including several newly conducted experiments), which together show that MaskedGAN outperforms other data augmentation methods consistently over various datasets (e.g., CIFAR-10, CIFAR-100, ImageNet and the 100-shot datasets) and backbones (e.g., BigGAN and StyleGAN-v2). Thank you for your suggestion, and we will include these comparisons in the revised appendix/manuscript.\n- Thank you for the suggestion again! Due to time and computational resource constraints, we only benchmarked DA and ADA over the very large ImageNet dataset, the very small 100-shot dataset and the medium-size CIFAR-10/100 dataset, where we believe extensive experiments over these datasets are sufficient for benchmarking the proposed method.
We plan to conduct related experiments and benchmark DA and ADA over other datasets and backbones later.\n\nRevised Table 3: Conditional image generation with BigGAN on ImageNet (FID).\n| Method | 10% Data | 5% Data | 2.5% Data | \n| --------------------------------|:-------:|:-----:|:-----:|\n|BigGAN (baseline) |38.30 ± 0.25 |91.16 ± 0.43 |133.80 ± 0.76 \n|ADA (**newly included**) | 31.89 ±0.17 |43.21 ± 0.37 | 56.83 ± 0.48 \n|DA (**newly included**) | 32.82 ± 0.18 |56.75 ± 0.35 |63.49 ± 0.51 \n|**MaskedGAN** | 26.51 ± 0.12 | 35.70 ± 0.31 | 38.62 ± 0.37 \n\nRevised Table 3: Conditional image generation with BigGAN on ImageNet (IS).\n| Method | 10% Data | 5% Data | 2.5% Data | \n| --------------------------------|:-------:|:-----:|:-----:|\n|BigGAN (baseline) |10.94 ± 0.35 |6.13 ± 0.09 |3.92 ± 0.07 |\n|ADA (**newly included**) |12.67 ±0.31 |9.44 ±0.25 |8.54 ± 0.26 |\n|DA (**newly included**) |12.76 ± 0.34 |9.63 ± 0.21 |8.17 ± 0.28 | \n|**MaskedGAN** |13.34 ± 0.24 | 12.85 ± 0.40 | 12.68 ± 0.27 | \n\nRevised Table 1: Conditional image generation with BigGAN on CIFAR-10 (FID).\n| Method | 100% Data | 20% Data | 10% Data | \n| --------------------------------|:-------:|:-----:|:-----:|\n| Non-saturated GAN |9.83 ± 0.06 |18.59 ± 0.15 |41.99 ± 0.18 |\n| LS-GAN |9.07 ± 0.01 | 21.60 ± 0.11 | 41.68 ± 0.18 \n| RAHinge GAN | 11.31 ± 0.04 | 23.90 ± 0.22| 48.13 ± 0.33|\n| StyleGAN-v2 | 11.07 ± 0.03 | 23.08 ± 0.11 | 36.02 ± 0.15 | \n| BigGAN (baseline) | 9.74 ± 0.06| 21.86 ± 0.29| 48.08 ± 0.10 | \n| LeCam-GAN| 8.31 ± 0.05 |15.27 ± 0.10 |35.23 ± 0.14 |\n| GenCo | 8.83 ± 0.04 | 16.57 ± 0.08 | 28.08 ± 0.11|\n| ADA (**newly included**) |8.99 ± 0.03 |19.87 ± 0.09 |30.58 ± 0.11 |\n| DA | 8.75 ± 0.03 | 14.53 ± 0.10| 23.34 ± 0.09 |\n| **MaskedGAN** | 8.41 ± 0.03 | 12.51 ± 0.09| 15.89 ± 0.12 |\n\nRevised Table 1: Conditional image generation with BigGAN on CIFAR-100 (FID).\n| Method | 100% Data | 20% Data | 10% Data | \n| --------------------------------|:-------:|:-----:|:-----:|\n| Non-saturated GAN | 13.87 ± 0.08| 32.64 ± 0.19 | 70.5 ± 0.38| \n| LS-GAN| 12.43 ± 0.11| 27.09 ± 0.09| 54.69 ± 0.12 | \n| RAHinge GAN| 14.61 ± 0.21| 28.79 ± 0.17| 52.72 ± 0.18| \n| StyleGAN-v2| 16.54 ± 0.04 | 32.30 ± 0.11| 45.87 ± 0.15 | \n| BigGAN (baseline) | 13.60 ± 0.07| 32.99 ± 0.24| 66.71 ± 0.01| \n| LeCam-GAN | 11.88 ± 0.12| 25.51 ± 0.19| 49.63 ± 0.16| \n| GenCo| 11.90 ± 0.02| 26.15} ± 0.08| 40.98 ± 0.09| \n| ADA (**newly included**) | 12.22 ± 0.02| 22.65 ± 0.10| 27.08 ± 0.15| \n| DA| 11.99 ± 0.02|22.55 ± 0.06|35.39 ± 0.08 | \n| **MaskedGAN** | 11.65 ± 0.03| 18.33 ± 0.09| 24.02 ± 0.12| \n\nRevised Table 4: Unconditional image generation with StyleGAN-v2 on 100-shot dataset (FID).\n| Method | Obama | Grumpy Cat | Panda | \n| --------------------------------|:-------:|:-----:|:-----:|\nScale/shift | 50.72 | 34.20 | 21.38 \nMineGAN | 50.63 |34.54 |14.84 \nTransferGAN |48.73 | 34.06 |23.20 \nTransferGAN + DA |39.85 |29.77 | 17.12 \nFreezeD | 41.87 | 31.22 | 17.95 \nStyleGAN-v2 (baseline) | 80.20 | 48.90 | 34.27 \nADA | 45.69 | 26.62 | 12.90 \nLeCam-GAN | 38.58 | 41.38 | 19.88 \nGenCo | 36.35 | 33.57 | 15.50 \nAdvAug | 52.86 | 31.02 | 14.75 \nDA |46.87 | 27.08 | 12.06 \nAPA (**newly included**) | 43.75 | 28.49 | 12.34 \nInsGen (**newly included**) | 45.85 | 27.48 | 12.13 \n**MaskedGAN** | 33.78 ± 0.27 | 20.06 ± 0.13 | 8.93 ± 0.06 ", " Q4: Differences from data augmentation?\n- We would clarify that we discussed the difference between our MaskedGAN and data augmentation methods extensively in our manuscript and appendix. 
For example, the 4th paragraph of Section 4.4 describes the better convergence of the proposed MaskedGAN and why it converges better than existing data augmentation methods; Section 3.3 provides detailed theoretical insights and illustrations (on the two time-scale update rule and the local Nash Equilibrium); Section H of the appendix shows the comparison with the \"cutout\" used in the data augmentation methods DA and ADA.\n- In summary, MaskedGAN designs two image masking strategies that work by removing certain image information only, while previous data augmentation methods involve various data augmentations such as color jitter, saturation adjustment, etc. Besides, different from the conventional \"cutout\" in DA and ADA, MaskedGAN uses \"random patch-based masking\" and designs \"random mask shift\" and \"balanced spectral masking\".\nSuch differences in design lead to very different results in convergence (empirically and theoretically) and generation performance.\nMore detailed descriptions can be found in the following texts. Thank you for your suggestion, and we will include the above discussions in the revised manuscript.\n\n\n- Please find below the detailed texts copied (or summarized) from our manuscript and appendix for your reference, which extensively discuss the difference between our MaskedGAN and data augmentation methods:\n\n- **1)** As mentioned in the 4th paragraph (\"Convergence comparison across different network architectures and datasets\") of Subsection 4.4, the experimental results show that MaskedGAN converges well consistently across various conditions (the amounts of training data, network architectures and datasets), while data augmentation methods such as DA still suffer from generation failures and training collapses. The strong convergence of MaskedGAN is largely attributed to two factors: (a) its image masking designs directly suppress trivial solutions and training failures; (b) it keeps similar learning paces for the discriminator and generator, which ensures that the networks converge to a Local Nash Equilibrium under certain conditions [21].\nMaskedGAN can achieve factor (b) because its image masking strategies (which can also be viewed as data augmentation operations) operate by masking (or more specifically, removing) some image information only.\nIn contrast, data augmentation methods such as DA and ADA cannot guarantee factor (b), as they generally include operations like color jitter and saturation adjustment that cannot satisfy the theoretical proofs introduced in Section 3.3.\n\n- **2)** Point 1) described the design differences between MaskedGAN and previous data augmentation methods (e.g., DA and ADA) and illustrated that different designs lead to different empirical convergence and, in turn, different generation performance.\nIn Section 3.3, we provided detailed theoretical insights and illustrations, which show that MaskedGAN can be modeled as an instance of the Two Time-Scale Update Rule and thus converges to a Local Nash Equilibrium under certain conditions. \nOn the other hand, previous data augmentation methods (e.g., DA and ADA) cannot satisfy our Propositions 1 and 2 provided in Section 3.3, because methods such as DA and ADA include operations like color jitter and saturation adjustment, where such additive noises violate the proposition condition of removing certain image information only.
\n\n- **3)** In Section H of the appendix (\"Comparisons with the ‘cutout’ used in ADA and DA\"), we compared our MaskedGAN with the ‘cutout’ used in ADA and DA by providing experiments and detailed explanations.\n\nQ5: As claimed in the contribution summary, the holistic understanding of images is encouraged. Is there any experimental support?\n- This is really a critical issue, and we attempted to address it with the Gini coefficient of spatial attention in Section A of the appendix. As Fig. 1 (in Section A of the appendix) shows, the baseline model tends to focus on only a few image locations without a holistic understanding of images, ultimately leading to an over-confident discriminator and training collapse. In contrast, MaskedGAN pays evenly distributed attention to every spatial location (i.e., it learns and understands images more holistically), resulting in a more stable training process and better performance. More details can be found in Section A of the appendix.", " Q3: Some important baselines are missing?\n- Thank you for the suggestion! We compared with the suggested InsGen and APA. The experiments show that the proposed MaskedGAN clearly outperforms the two methods (as shown in the revised Table 4), which is largely attributed to our two masking designs (i.e., shifted spatial masking and balanced spectral masking) that randomly remove certain image information during network training and thus encourage holistic learning of images, as illustrated in Section A of the appendix. Thank you for your suggestion, and we will include the new experiments in our manuscript.\n- We would clarify that we did not benchmark against ADA-related methods, including InsGen and APA, as our work mainly follows DA, which adopts a different experimental setup. Benchmarking against ADA-related methods requires rerunning their code under DA's experimental setup for valid comparisons. Such experiments are extremely computationally intensive, e.g., ADA takes approximately 1,259,337.6 GPU hours (i.e., 143.76 GPU years) to complete its experiments. We did not have sufficient GPU resources to conduct such experiments.\n- Therefore, due to computational resource constraints, we only benchmarked InsGen and APA over the 100-shot dataset. Nevertheless, the experiments and comparisons over the 100-shot dataset (i.e., the revised Table 4) are very relevant and meaningful, as this paper focuses on training GANs with limited data, and the task is more challenging when working with a small dataset.\n\nRevised Table 4: Unconditional image generation with StyleGAN-v2 on 100-shot dataset (FID).\n| Method | Obama | Grumpy Cat | Panda | \n| --------------------------------|:-------:|:-----:|:-----:|\nStyleGAN-v2 (baseline) | 80.20 | 48.90 | 34.27 \nADA | 45.69 | 26.62 | 12.90 \nLeCam-GAN | 38.58 | 41.38 | 19.88 \nGenCo | 36.35 | 33.57 | 15.50 \nAdvAug | 52.86 | 31.02 | 14.75 \nDA | 46.87 | 27.08 | 12.06 \nAPA (**newly included**) | 43.75 | 28.49 | 12.34 \nInsGen (**newly included**) | 45.85 | 27.48 | 12.13 \n**MaskedGAN** | 33.78 ± 0.27 | 20.06 ± 0.13 | 8.93 ± 0.06 ", " Q1: More data augmentation methods are supposed to be involved in the comparison of CIFAR and ImageNet?\n- Thank you for the suggestion! We compared with the two suggested state-of-the-art methods, DA and ADA, which achieve data-limited generation through data augmentation.
The proposed MaskedGAN achieves outstanding performance consistently across multiple datasets (e.g., CIFAR-10, CIFAR-100, ImageNet and 100-shot datasets) and backbones (e.g., BigGAN and StyleGAN-v2) as shown in revised Tables 1, 3, and 4 (newly conducted experiments are highlighted). The outstanding performance is largely attributed to our two masking designs (i.e., shifted spatial masking and balanced spectral masking) which randomly remove certain image information during network training and thus encourage a holistic learning of images as illustrated in Section A in appendix. Thank you for your suggestion and we will include the new experiments in our manuscript.\n\nRevised Table 1: Conditional image generation with BigGAN on CIFAR-10 (FID).\n| Method | 100% Data | 20% Data | 10% Data | \n| --------------------------------|:-------:|:-----:|:-----:|\n| Non-saturated GAN |9.83 ± 0.06 |18.59 ± 0.15 |41.99 ± 0.18 |\n| LS-GAN |9.07 ± 0.01 | 21.60 ± 0.11 | 41.68 ± 0.18 \n| RAHinge GAN | 11.31 ± 0.04 | 23.90 ± 0.22| 48.13 ± 0.33|\n| StyleGAN-v2 | 11.07 ± 0.03 | 23.08 ± 0.11 | 36.02 ± 0.15 | \n| BigGAN (baseline) | 9.74 ± 0.06| 21.86 ± 0.29| 48.08 ± 0.10 | \n| LeCam-GAN| 8.31 ± 0.05 |15.27 ± 0.10 |35.23 ± 0.14 |\n| GenCo | 8.83 ± 0.04 | 16.57 ± 0.08 | 28.08 ± 0.11|\n| ADA (**newly included**) |8.99 ± 0.03 |19.87 ± 0.09 |30.58 ± 0.11 |\n| DA | 8.75 ± 0.03 | 14.53 ± 0.10| 23.34 ± 0.09 |\n| **MaskedGAN** | 8.41 ± 0.03 | 12.51 ± 0.09| 15.89 ± 0.12 |\n\nRevised Table 1: Conditional image generation with BigGAN on CIFAR-100 (FID).\n| Method | 100% Data | 20% Data | 10% Data | \n| --------------------------------|:-------:|:-----:|:-----:|\n| Non-saturated GAN | 13.87 ± 0.08| 32.64 ± 0.19 | 70.5 ± 0.38| \n| LS-GAN| 12.43 ± 0.11| 27.09 ± 0.09| 54.69 ± 0.12 | \n| RAHinge GAN| 14.61 ± 0.21| 28.79 ± 0.17| 52.72 ± 0.18| \n| StyleGAN-v2| 16.54 ± 0.04 | 32.30 ± 0.11| 45.87 ± 0.15 | \n| BigGAN (baseline) | 13.60 ± 0.07| 32.99 ± 0.24| 66.71 ± 0.01| \n| LeCam-GAN | 11.88 ± 0.12| 25.51 ± 0.19| 49.63 ± 0.16| \n| GenCo| 11.90 ± 0.02| 26.15} ± 0.08| 40.98 ± 0.09| \n| ADA (**newly included**) | 12.22 ± 0.02| 22.65 ± 0.10| 27.08 ± 0.15| \n| DA| 11.99 ± 0.02|22.55 ± 0.06|35.39 ± 0.08 | \n| **MaskedGAN** | 11.65 ± 0.03| 18.33 ± 0.09| 24.02 ± 0.12| \n\n\nRevised Table 3: Conditional image generation with BigGAN on ImageNet (FID).\n| Method | 10% Data | 5% Data | 2.5% Data | \n| --------------------------------|:-------:|:-----:|:-----:|\n|BigGAN (baseline) |38.30 ± 0.25 |91.16 ± 0.43 |133.80 ± 0.76 \n|ADA (**newly included**) | 31.89 ±0.17 |43.21 ± 0.37 | 56.83 ± 0.48 \n|DA (**newly included**) | 32.82 ± 0.18 |56.75 ± 0.35 |63.49 ± 0.51 \n|**MaskedGAN** | 26.51 ± 0.12 | 35.70 ± 0.31 | 38.62 ± 0.37 \n\nRevised Table 3: Conditional image generation with BigGAN on ImageNet (IS).\n| Method | 10% Data | 5% Data | 2.5% Data | \n| --------------------------------|:-------:|:-----:|:-----:|\n|BigGAN (baseline) |10.94 ± 0.35 |6.13 ± 0.09 |3.92 ± 0.07 |\n|ADA (**newly included**) |12.67 ±0.31 |9.44 ±0.25 |8.54 ± 0.26 |\n|DA (**newly included**) |12.76 ± 0.34 |9.63 ± 0.21 |8.17 ± 0.28 | \n|**MaskedGAN** |13.34 ± 0.24 | 12.85 ± 0.40 | 12.68 ± 0.27 | \n\nQ2: Overall, some tables seem unfair since many baselines are without data augmentations, while the proposed method could be regarded as one novel data augmentation?\n- Data-limited image generation has been tackled in two typical approaches, namely, data augmentation approach like DA and ADA and model regularization approach like LeCAM-GAN and GenCo. 
The two approaches address the data constraint from very different perspectives and are largely independent, i.e., the data augmentation approach usually does not involve model regularization and vice versa. That is why, unlike the benchmarking against data augmentation baselines like DA and ADA, several model regularization baselines involve little data augmentation in benchmarking. Note that such evaluation practice has been widely adopted in data-limited image generation studies such as LeCAM-GAN and GenCo.", " This paper proposes MaskedGAN to help GANs learn from limited data by introducing two image masking strategies. One is the shifted spatial masking and the other is the balanced spectral masking. These two masking strategies complement each other and together encourage GANs to learn effectively and robustly from limited data. Strengths:\n\nThe motivation of this paper is clear, and the authors did lots of experiments to demonstrate the effectiveness of their method. But the experiment setting part needs to be improved as listed in the weaknesses or question part.\n\n\nWeaknesses:\n\n1. More data augmentation methods are supposed to be involved in the comparison of CIFAR and ImageNet. For example, StyleGAN2-ADA is required rather than the original StyleGAN2, especially with limited training data. Overall, some tables seem unfair since many baselines are without data augmentations, while the proposed method could be regarded as one novel data augmentation. \n\n2. Some important baselines are missing. For instance, InsGen and APA (both NeurIPS 2021 work) achieved strong performances with limited training data on FFHQ and AFHQ.\n\nInsGen: Data-Efficient Instance Generation from Instance Discrimination, Yang et al., NeurIPS 2021.\nAPA: Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data, Jiang et al., NeurIPS 2021. 1. Differences from data augmentation? As it lacks the reconstruction part of MAE, the masking strategy can be regarded as another type of data augmentation. From this perspective, what if we have sufficient data, is the improvement consistent? Besides, the relation to data augmentation is required to be discussed in related work.\n\n2. As claimed in the contribution summary, the holistic understanding of images is encouraged. Is there any experimental support? The authors do not adequately address the limitations and potential negative societal impact of their work.", " This paper proposes two novel mask-based data augmentation methods for the image generation task with GANs, namely shifted spatial masking and balanced spectral masking. Extensive experiments are conducted to demonstrate the effectiveness and generalizability of the proposed methods. Pro:\n1. The paper is well-written and easy to follow.\n2. The proposed method is technically sound and intuitive. The paper is well-motivated. \n3. Extensive experiments are conducted to support the author's points.\n4. Competitive performances are achieved by using the proposed method. The proposed method is generalizable to many GAN architectures.\n\nCons:\n1. Missing augmentation comparison: in table 3, I think a proper comparison would be 1. biggan, 2. biggan + diffaug[1], 3. biggan + [2], 4. biggan + masked-aug (MaskedGAN). Same issue also exists in Tables 5~7.\n\n[1] Zhao, S., Liu, Z., Lin, J., Zhu, J. Y., & Han, S. (2020). Differentiable augmentation for data-efficient gan training.
Advances in Neural Information Processing Systems, 33, 7559-7570.\n\n[2] Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., & Aila, T. (2020). Training generative adversarial networks with limited data. Advances in Neural Information Processing Systems, 33, 12104-12114. 1. Is the proposed method orthogonal to existing augmentation methods like [1][2]? The authors might consider adding an experiment that uses the proposed mask-based augmentation together with either [1] or [2] to see whether this gives an extra performance boost.\n\n[1] Zhao, S., Liu, Z., Lin, J., Zhu, J. Y., & Han, S. (2020). Differentiable augmentation for data-efficient gan training. Advances in Neural Information Processing Systems, 33, 7559-7570.\n\n[2] Karras, T., Aittala, M., Hellsten, J., Laine, S., Lehtinen, J., & Aila, T. (2020). Training generative adversarial networks with limited data. Advances in Neural Information Processing Systems, 33, 12104-12114. The authors may consider putting more visualization results in the paper.", " This work suggests two strategies for masking in GANs and claims that the resulting Masked Generative Adversarial Networks (MaskedGAN) are robust with limited training data.\nThe two strategies are:\n\n1) Shifted spatial masking (random shifts in the spatial domain)\n2) Balanced spectral masking (multiple bands with self-adaptive probabilities)\n\nHypothesis: masking helps the model to learn hard-to-discriminate bands and creates a challenging scenario\nSupport for the hypothesis: training on multiple architectures (not covering all datasets)\nLimited dataset setup:\n10%(5K), 20%(10K), and 100%(50K) of data on CIFAR10, 100 \n2.5%(25K), 5%(50K), and 10%(100K) of data on ImageNet\n\n Strengths:\n- The paper introduces an interesting subject for the community.\n- There are multiple experiments.\n- Multiple models are used for the experiments.\n\nWeaknesses:\n- The writing is somewhat incoherent and unclear: in the abstract and introduction, problem statements lack motivation and clarity.\n- The literature review section does not provide coherent statements for each cited work, and it bulk references some papers that might not be directly related, e.g. for Masked Autoencoders, it references basic papers of autoencoders (45, 39, 8, 17, 2, 20, 48), which need to be discussed in the Autoencoder section.\n- The mentioned shifting and Fourier domain manipulation (masking) are not clearly discussed.\n- It is not clear how the sampling happens in the limited data scenario and if it is heterogeneous or homogeneous.\n- The number of training rounds is not specified.\n- It seems it works as an augmentation, and with multiple random masking, we actually increase the number of data samples. I think after mentioning the number of iterations, it would be appropriate to compare with SOTA augmentation techniques in GANs.\n - What do you mean by Masking? Are you zeroing some pixels?
If it receives the lower image (Masked Image in the spectral domain), it seems a completely different image! Do some of the pixels retain the original pixel values?\n No. They mentioned the limited data training, which is the main setting of the paper, but as far as I understand, listing the limitation means to discuss where the methodology does not work.", " This paper proposes MaskedGAN to deal with the problem of training GAN with limited data. MaskedGAN uses two masking strategies, shifted spatial masking (masking spatial patches) and balanced spectral masking (masking spectral bands), on both real images and generated images during the training of GAN. Experimental results shows the effectiveness of the proposed method over the original GAN, as well as the superiority over other techniques for training GAN with limited data, across different architectures and datasets. Pros:\n1. The idea of developing masking strategies to enhance the training of GAN with limited data is great. It shows the power of masking in the generative field. \n\n2. The results are strong. This paper tests the proposed method with different architectures and datasets, and shows consistent and remarkable improvements over baselines, especially when the ratio of available data is small. \n\nCons:\n\nIn the experiments, the proposed method compares with other techniques for training GAN with limited data (e.g., DA, GenCo, etc) only on very small datasets (e.g., CIFAR, 100-shot) but not on large datasets (e.g., ImageNet, FFHQ). On large datasets, it only compares with the baseline. 1. My major concern is why the comparison results with other techniques for training GAN with limited data on AFHQ, FFHQ, and ImageNet are missing. The experimental results are not convincing enough without them.\n\n2. Masking, including spatial masking and spectral masking, can also be viewed as a kind of data augmentation strategy. What makes masking, or the specially designed masking in this paper, superior to other data augmentation strategies?\n\n3. How much does the computation overhead increase after adding the masking strategy during GAN training? The authors did not state the limitation of the proposed method clearly in the paper. They only claimed possible future applications, e.g., applying MaskedGAN to multi-modality generation, but those are not the limitation of the method or result itself." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "SVUkdcNyjA4", "_S2jnEcPKT6", "EeUwAi4XSQ", "xeV6DWa8hwx", "fSd6tQUImAJ", "enUYom-mzlM", "as8kLazzsGn", "EeUwAi4XSQ", "J4Lh6pa7X7p", "yE9hR5bWMB", "ePIjE72TG92", "H0aiXd7RtV4", "xeV6DWa8hwx", "nips_2022_js2ssA77fX", "nips_2022_js2ssA77fX", "nips_2022_js2ssA77fX", "nips_2022_js2ssA77fX" ]
nips_2022_w5DacXWzQ-Q
SAViT: Structure-Aware Vision Transformer Pruning via Collaborative Optimization
Vision Transformers (ViTs) yield impressive performance across various vision tasks. However, heavy computation and memory footprint make them inaccessible for edge devices. Previous works apply importance criteria determined independently by each individual component to prune ViTs. Considering that heterogeneous components in ViTs play distinct roles, these approaches lead to suboptimal performance. In this paper, we introduce joint importance, which integrates essential structural-aware interactions between components for the first time, to perform collaborative pruning. Based on the theoretical analysis, we construct a Taylor-based approximation to evaluate the joint importance. This guides pruning toward a more balanced reduction across all components. To further reduce the algorithm complexity, we incorporate the interactions into the optimization function under some mild assumptions. Moreover, the proposed method can be seamlessly applied to various tasks including object detection. Extensive experiments demonstrate the effectiveness of our method. Notably, the proposed approach outperforms the existing state-of-the-art approaches on ImageNet, increasing accuracy by 0.7% over the DeiT-Base baseline while saving 50% FLOPs. On COCO, we are the first to show that 70% FLOPs of FasterRCNN with ViT backbone can be removed with only 0.3% mAP drop. The code is available at https://github.com/hikvision-research/SAViT.
Accept
The paper received three positive reviews and one negative review. The raised issues concern technical correctness, ImageNet-22K pretraining, insufficient experiments and speedup on GPUs, computational cost, and clarity of the ablation studies. During the rebuttal and discussion phases, most of the issues were addressed and reviewers were willing to upgrade their ratings. After checking all the reviews, rebuttals, and discussions, the AC agrees with the reviewers that the raised issues are well addressed. The authors shall revise according to the suggestions to further improve the current manuscript in the camera-ready submission. Also, the comparison to token selection-based ViT acceleration methods [a] shall be included in the experiments. [a]. Not All Patches Are What You Need: Expediting Vision Transformers via Token Reorganizations. Liang et al. ICLR 2022.
train
[ "l4y8MaAtMWX", "kYFAzEU2fjY", "OJgiqOMt8_A", "niVtS7BGCEh", "woqfnRadvY", "zCBRkhEp9b3", "o-Ug_TbM3Qy", "qKG7XZxwGwJ", "muDO9b_k6-n", "BtIUuKhbtf", "3qu4VOu8qCC", "UOwyAnChqsY", "tYKA4xEWww" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for the detailed feedback and additional results. The response addressed my concerns about the insufficient experiments and actual speedup on GPUs. I am glad to see the search process of the method is much faster than the existing method. I raised the score to 5. \n\n", " Dear Reviewers,\n\nWe sincerely thank your time for the review, and we really hope to have a further discussion with you to see if our response solves your concerns before the end of discussion period. Thank you!\n\nBest regards", " The authors' response addresses my concerns. I would like to keep my initial rating (weak accept) for this paper. Hope to see updates for the two mentioned issues in the final version.", " We are appreciated that you had a positive initial impression, and hope the responses below can solve your concerns.\n## Q1\nHere we list the required computation cost and data of several state-of-the-art network pruning methods as well as ours. As for the data in our approach, we empirically find that pruning using 10\\% training data works well as that using all training data.\n\n***\nMethod. &emsp;&emsp;&emsp;&emsp;&emsp;&emsp; Computation Cost(Search cost) &emsp;&emsp; Fine-tune cost &emsp;&emsp;&emsp; Data\n***\nS$^{2}$ ViTE[1] &emsp;&emsp;&emsp;&emsp;&emsp;&emsp; &emsp;&emsp;&ensp;600 epochs &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 0 epochs &emsp;&emsp; training dataset\n\nNVP[2] &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 10 epochs &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 300 epochs &emsp;&ensp; training dataset\n\nViT-Slim[3] &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 50 epochs &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 300 epochs &emsp;&ensp; training dataset\n\nPS-ViT[4] &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&ensp; 300 epochs &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&ensp;&nbsp; 300 epochs &emsp;&ensp; training dataset\n\nOurs &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; <2 epochs &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&ensp; 300 epochs &emsp; 10\\% training dataset\n***\nNote that we train the 10\\% training dataset for 12 epochs, other methods train on the whole training dataset, so the search time of our method is less than 2 epochs on the whole training dataset.\n## Q2\nThe ablation study in Table 7 means dropping the cross-components terms, as we aim to show the important impact of the interactions. We will address the ambiguity in the final version. Actually, we had conducted experiments dropping all Hessian-based terms and observed that the cross-component terms play a crucial role. The whole ablation experiments are summed up as follows.\n\n***\nModel &emsp;&emsp;&emsp;&emsp;&emsp; Interactions &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; Top-1 Acc.\n***\nDeiT &emsp;&emsp; dropping all Hessian-based terms &emsp;&emsp;&emsp;79.56\n\n&emsp;&emsp;&emsp;&emsp;&ensp; dropping cross-component terms &emsp;&emsp;&emsp;79.68\n\n&emsp;&emsp;&emsp;&emsp;&emsp;keeping all Hessian-based terms &emsp;&emsp;&emsp; **80.78**\n***\n\n[1] Chen, T., et al. Chasing sparsity in vision transformers: An end-to-end exploration. In NeurIPS 2021.\n\n[2] Yang, H., et al. Nvit: Vision transformer compression and parameter redistribution. arXiv preprint 2021.\n\n[3] Arnav Chavan, et al. Vision transformer slimming: Multi-dimension searching in continuous optimization space. arXiv preprint 2022.\n\n[4] Yehui Tang, et al. Patch slimming for efficient vision transformers. 
arXiv preprint 2021.", " We are very glad you had a positive initial impression, and we provide pointwise responses to your concerns below.\n## Q1\nTo compare fairly against the state-of-the-art pruning methods, we fine-tune for 300 epochs after pruning, following the existing studies on ViT compression [1,2]. Furthermore, to resolve your concerns, we train DeiT-Base/Small for 300+300 epochs as longer baselines, following the original DeiT training recipe in the paper [3]. In fact, the performance of DeiT training saturates after 300~400 epochs. We report the results in the table below. Compared to the longer baselines with 600 training epochs, our pruned models still achieve 0.53% accuracy gains on DeiT-Base and 0.09% on DeiT-Small. These results suggest our pruning algorithm indeed brings accuracy gains. \n***\nModels &emsp;&emsp; 300 epochs &emsp; 600 epochs &emsp;&emsp; Our pruned model\n***\nDeiT-Base &emsp;&emsp; 81.84 &emsp;&emsp;&emsp; 82.01 &emsp;&emsp;&emsp; 82.54 (50.0% FLOPs reduction)\n\nDeiT-Small &emsp;&ensp; 79.85 &emsp;&emsp;&emsp;&ensp; 80.02 &emsp;&emsp;&emsp; 80.11 (31.7% FLOPs reduction)\n***\n## Q2\nThanks for your advice; we will revisit the pseudo-code and move it to a more appropriate position in the final version. \n\n[1] Chavan, A., et al. Vision transformer slimming: Multi-dimension searching in continuous optimization space. arXiv preprint 2022.\n\n[2] Tang, Y., et al. Patch slimming for efficient vision transformers. arXiv preprint 2021.\n\n[3] Touvron, H., et al. Training data-efficient image transformers & distillation through attention. In ICML 2021.\n", " We appreciate your consideration and thoughtful feedback. We provide pointwise responses to your concerns below.\n## Q1\nWe dig into the importance of the interactions between components from a theoretical perspective. Based on this analysis, we derive a way to exploit the Hessian matrix to explicitly represent the interactions and prune the ViT automatically. \nCompared to previous methods for CNNs [1,2,3], our method differs in: \n**1)** The effect of interactions in CNNs and ViTs is different. As CNNs consist of homogeneous components, most works [1,2] drop the interactions and just apply individual importance, which has achieved fairly good performance. ViTs are significantly different from CNNs, and we show that cross-component interactions play a crucial role. \n**2)** The approximations of the Hessian matrix are different. Due to the huge memory required by the whole Hessian matrix, other works [3] compute the layer-wise Hessian matrix without considering cross-layer interactions. In contrast, we propose an efficient algorithm to approximate the global Hessian matrix. In short, directly applying these approaches to prune ViTs is infeasible.\n\nOn the other hand, joint optimization of ViTs can be categorized into Neural Architecture Search (NAS) and pruning. NAS considers interactions in a black-box-like, implicit form. Taking AutoFormer [4] as an example, our method differs from it in: **1)** AutoFormer uses the classification accuracy of subnets from the supernet to implicitly reflect the interactions, while ours derives a theoretical term for the interactions explicitly. **2)** Since AutoFormer considers interactions in an implicit way, it consumes much more search time. Specifically, AutoFormer needs to train the supernet for hundreds of epochs and evaluate thousands of subnets on the validation set.
Ours starts from the pre-trained model and prunes it quickly using the approximated loss perturbation. **3)** AutoFormer needs to design a discrete search space by hand. It searches the QKV dim in the range of (528, 624) with a step of 48. Instead, we can automatically search for a fine-grained and more suitable embedding dim.\n\nAs for related ViT pruning works [5], they ignore the necessary interactions between components and require some hand-crafted parameters to balance the pruning ratios for every component.\n\nWe list a thorough comparison of the related methods and ours below.\n\n***\nCategory &emsp;&emsp; Method &emsp; Application &emsp; Interactions &emsp;&emsp; Search Time &emsp;&ensp; Search Granularity &emsp;&emsp; Pattern\n***\nNAS &emsp;&emsp;&emsp;&emsp; OFA [6] &emsp;&emsp;&emsp; CNN &emsp;&emsp;&emsp;&ensp; Implicit &emsp;&emsp;&emsp; 300 epochs &emsp;&emsp;&emsp;&emsp; Coarse &emsp;&emsp;&emsp;&emsp; Hand-crafted \n\n&emsp;&emsp;&emsp;&emsp;&ensp; AutoFormer [4] &emsp;&ensp; ViT &emsp;&emsp;&emsp;&emsp; Implicit &emsp;&emsp;&emsp; 500 epochs &emsp;&emsp;&emsp;&emsp; Coarse &emsp;&emsp;&emsp;&emsp; Hand-crafted\n***\nPruning &emsp;&ensp; Taylor-FO [1] &emsp;&ensp; CNN &emsp;&emsp;&emsp;&emsp;&ensp; No &emsp;&emsp;&emsp;&emsp;&ensp; 30 epochs &emsp;&emsp;&emsp;&ensp; Fine-grained &emsp;&emsp; Hand-crafted\n\n&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; NVP [5] &emsp;&emsp;&emsp;&ensp; ViT &emsp;&emsp;&emsp;&emsp;&ensp;&ensp; No &emsp;&emsp;&emsp;&emsp;&ensp; 10 epochs &emsp;&emsp;&emsp;&ensp; Fine-grained &emsp;&emsp; Hand-crafted\n\n&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; Ours &emsp;&emsp;&emsp;&emsp;&ensp; ViT &emsp;&emsp;&emsp;&emsp; Explicit &emsp;&emsp;&emsp;&ensp; <2 epochs &emsp;&emsp;&emsp;&ensp; Fine-grained &emsp;&emsp;&ensp; Automatic\n***\nNote that we conduct pruning on 10% of the training dataset for 12 epochs, while the other methods do it on the whole training dataset, so the search time of our method is less than 2 epochs on the whole training dataset.\n## Q2\nTo show the effectiveness of our method, we conduct experiments on several model sizes and list the results below. DeiT-B-Distilled is adopted as the distillation teacher.\n***\nModel &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; Param. &emsp;&emsp; FLOPs &emsp;&emsp; Top-1 Acc.
&emsp;&emsp; FLOPs &emsp;&emsp; Top-1 Acc.\n\n***\n\nDeiT-B-Distilled(teacher) &emsp;&emsp;&ensp; 87M &emsp;&emsp;&ensp; 17.6G &emsp;&emsp;&emsp; 83.36 \n\nNVP-B &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 34M &emsp;&emsp;&ensp; 6.8G &emsp;&emsp;&emsp; 83.29 \n\nSAViT-B(ours) &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&ensp; 33M &emsp;&emsp;&emsp; 6.7G &emsp;&emsp;&emsp; **83.31**\n\n***\nDeiT-S-Distilled &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&nbsp; 22M &emsp;&emsp;&emsp; 4.6G &emsp;&emsp;&emsp; 81.20 \n\nManifold[7] &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 22M &emsp;&emsp;&emsp; 4.6G &emsp;&emsp;&emsp;&nbsp;81.48\n\nUP-DeiT[8] &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&nbsp; 22M &emsp;&emsp;&emsp;&emsp; - &emsp;&emsp;&emsp;&ensp;&nbsp; 81.56 \n\nNVP-S[5] &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 21M &emsp;&emsp;&emsp; 4.2G &emsp;&emsp;&emsp;&nbsp; 82.19\n\nSAViT-S(ours) &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 21M &emsp;&emsp;&emsp; 4.2G &emsp;&emsp;&emsp; **82.38** \n***\nDeiT-T-Distilled &emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 5.6M &emsp;&emsp;&emsp; 1.3G &emsp;&emsp;&emsp;&ensp; 74.5 \n\nManifold[7] &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&ensp;&nbsp; 5.6M &emsp;&emsp;&emsp; 1.3G &emsp;&emsp;&emsp;&ensp;&nbsp;75.1\n\nUP-DeiT[8] &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 5.7M &emsp;&emsp;&emsp;&emsp; - &emsp;&emsp;&emsp;&ensp;&emsp; 75.8\n\nNVP-T[5] &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 6.9M &emsp;&emsp;&emsp; 1.3G &emsp;&emsp;&emsp;&ensp; 76.2\n\nSAViT-T(ours) &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 6.6M &emsp;&emsp;&emsp; 1.3G &emsp;&emsp;&emsp;&emsp;**77.0** \n***\nAs we can see, the more FLOPs we prune, the larger accuracy gap our method obtains over other state-of-the-art approaches. We notice that SAViT-B performs very close to the distillation teacher DeiT-B-Distilled and reaches almost the performance ceiling. In addition, we observe that a smaller model should use a smaller drop path rate, in accordance with Swin[9]. So we adjust the drop path rate for fine-tuning the pruned models as detailed in Appendix. ", " ## Q3\nThe reason for the gap between the theoretical FLOPs compression rate and the actual speedup lies in that ViT contains many operators that affect running latency. Except for matrix multiplication/convolution operators, of which pruning aims to accelerate the computation, operations like LN and Softmax also require extra sophisticated computation and have a huge footprint on memory bandwidth. The extra computation and memory operation could not be reflected by FLOPs. To understand this, we break down the latency of baseline DeiT-Base and 70\\% FLOPs pruned model into operator levels as follows. \n\n***\nOperators &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; Matmul. 
&emsp;&emsp;&emsp; LN &emsp;&emsp;&emsp; GELU &emsp;&emsp;&emsp; Softmax &emsp; Other memory-related ops &emsp; Total\n***\nDeiT Base latency(ms) &emsp; 125(38.9\\%) &emsp;18.0(5.6\\%) &emsp; 17.8(5.5\\%) &emsp; 10.3(3.2\\%) &emsp;&emsp;&emsp; 151.5(46.9\\%) &emsp;&emsp;&emsp;&emsp;323(100\\%)\n\n70\\% FLOPs Pruned(ms) &ensp; 41.3(26.2\\%) &emsp;11.7(7.4\\%) &emsp; 10.8(6.8\\%) &emsp;&ensp;6.0(3.8\\%) &emsp;&emsp;&emsp;&emsp;87.8(55.7\\%) &emsp;&emsp;&emsp;&ensp; 157(100\\%)\n\nSpeedup &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 3.04x &emsp;&emsp;&emsp; 1.54x &emsp;&emsp;&emsp; 1.65x &emsp;&emsp;&emsp;&ensp; 1.72x &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&ensp; 1.73x &emsp;&emsp;&emsp;&emsp;&emsp;&emsp; 2.05x \n***\n\nAfter pruning, the FLOPs remain 30\\% and the ideal speedup is 3.3x. As for the actual GPU latency speedup, it can be observed that **the matmul achieves an almost ideal 3.04x speedup**. However, LN, Softmax, and other memory-related operations can only reach 1.77x due to that these operations could not be reflected by FLOPs and are not linear w.r.t. FLOPs reduction. The above analysis shows that pruning can achieve ideal matmul speedup on GPUs (matmul computation is reflected by FLOPs). These are also observed in other ViT pruning works [10,11]. In fact, we are working on accelerating these operators like LN, Softmax, GELU, and other memory-related operators to achieve better speedup.\n## Q4\nWe promise that we will release the code in paper and checklist. Due to the double-blind reviewing rule of NeurIPS, we are not allowed to release the code now. We will release the code as soon. In addition, we have provided pseudo-code for implementation.\n\n[1] Molchanov, P., et al. Importance estimation for neural network pruning. In CVPR 2019.\n\n[2] Liu, L., et al. . Group fisher pruning for practical network compression. In ICML 2021.\n\n[3] Dong, X., et al. Learning to prune deep neural networks via layer-wise optimal brain surgeon. In NeurIPS 2017.\n\n[4] Chen, M. et al. Autoformer: Searching transformers for visual recognition. In ICCV 2021.\n\n[5] Yang, H., et al. Nvit: Vision transformer compression and parameter redistribution. arXiv preprint 2021.\n\n[6] Cai, H., et al. Once-for-All: Train One Network and Specialize it for Efficient Deployment. In ICLR 2019.\n\n[7] Ding Jia, et al. Efficient vision transformers via fine-grained manifold distillation. arXiv preprint 2021.\n\n[8] Hao Yu, et al. A unified pruning framework for vision transformers. arXiv preprint 2021.\n\n[9] Liu, Z., et al. Swin transformer: Hierarchical vision transformer using shifted windows. In CVPR 2021.\n\n[10] Chen, T., et al. Chasing sparsity in vision transformers: An end-to-end exploration. In NeurIPS 2021.\n\n[11] Yin, H., et al. A-ViT: Adaptive Tokens for Efficient Vision Transformer. In CVPR 2022.", " Thanks for your sincere comments, we hope to have a further discussion to see if our response solves the concerns.\n\n## Q1 \\& Q2\nThe target of eq.1 is correct and also used in related literature [1,2]. Here we provide a more detailed explanation. Assume the pre-trained model weight vector as $ w \\in R^{N}$ , $N$ is the number of parameters, and the binary mask vector $b \\in R^{N}$ is defined as follows:\n\n$b_i$= 1 &ensp; if the $i$-th weight is not pruned, 0 else the $i$-th weight is pruned.\n\nSo we have the weight vector for the pruned model $b\\odot w$. 
\n## Q3\nTo fairly compare against other pruning methods, we conduct pruning on the Swin model trained directly on ImageNet-1k. The pruning results show that our algorithm can compress Swin with only a slight performance drop. Under the ImageNet-22k pre-training recipe, Swin simply trains with more data, without any modification to the network structure; thus our method can naturally compress the ImageNet-22k pre-trained model as well. We conduct pruning on the ImageNet-22k pre-trained model according to the configuration of the Swin paper [3] and report the results below, which again demonstrates the effectiveness of our approach.\n\n| Model | Method | FLOPs | Top-1 Acc. |\n| --- | --- | --- | --- |\n| Swin-B, trained directly on ImageNet-1k | Baseline [3] | 15.4G | 83.5 |\n| Swin-B, trained directly on ImageNet-1k | SAViT | 7.7G | 82.6 |\n| Swin-B, ImageNet-22k pre-trained | Baseline [3] | 15.4G | 85.2 |\n| Swin-B, ImageNet-22k pre-trained | SAViT | 7.7G | 84.1 |\n\n[1] LeCun, Y., et al. Optimal brain damage. In NeurIPS 1989.\n\n[2] Peng, H., et al. Collaborative channel pruning for deep networks. In ICML 2019.\n\n[3] Liu, Z., et al. Swin transformer: Hierarchical vision transformer using shifted windows. In ICCV 2021.\n", " This work proposes to prune ViTs from all components comprehensively, considering the interactions between different components when pruning. Different from the homogeneous components of CNNs, the components of ViTs are heterogeneous; thus this work constructs a Taylor-based optimization objective to take full advantage of the interactions between heterogeneous components. To avoid the huge computation cost of the Hessian matrix, this work derives an approximation that transforms the Hessian matrix into pruning ratios. Finally, it solves the optimization target towards the optimal trade-off between accuracy and computational cost. This work is validated with DeiT and Swin on ImageNet and also on detection experiments. Strengths:\n1. This work is well-written and easy to follow.\n2. The motivation of this work is clear.\n\n\nWeaknesses:\n\n1. Please verify the definition of $\Delta w$: given $\Delta w = b \odot w - w$, the optimization target (eq. 1) is $\min \Delta L = L(w + \Delta w) - L(w)$; thus $\Delta L = L(w + \Delta w) - L(w) = L(b \odot w) - L(w)$, and $C(w + \Delta w) = C(b \odot w)$. I think this target is not correct, which may be caused by the definition of $\Delta w$.\n2. Based on the definition of $\Delta w$, I think we cannot get eq. 2 from eq. 1.\n3. The pruning results on Swin are for the model trained directly on ImageNet-1k, while Swin achieves much better performance after pre-training on ImageNet-22k. So the authors should also provide experimental results on it: no need to train Swin on ImageNet-22k, but directly load the weights pre-trained on ImageNet-22k and then do pruning along with fine-tuning on ImageNet-1k.\n Overall, I think this work has a reasonable motivation and a good idea, but the definition of $\Delta w$ confused me. Please refer to Weaknesses. ", " This paper presents a model pruning method for vision Transformers by jointly considering multiple possible pruning dimensions. To address the problem of joint optimization, a new collaborative pruning method is designed. Experiments on multiple backbones (i.e., DeiT and Swin) and multiple tasks (i.e., ImageNet classification and COCO detection) show the effectiveness of the method. Strengths:\n\n- The idea of jointly considering multiple pruning dimensions is natural and well-motivated. \n\n- The method is tested and works well on multiple tasks and backbones.\n\nWeaknesses:\n\n- The idea of joint optimization of the vision Transformer architecture is not very new. Many previous methods have explored the joint optimization problem for network acceleration. Recent work like AutoFormer also considers multiple dimensions for vision Transformers.\n\n- Table 3 presents an important experiment comparing with previous state-of-the-art pruning methods. Since ViT/DeiT-B/S are considered the standard models in many previous papers, it would be better to provide results on multiple model sizes (e.g., ViT-B/S/T) to clearly show the effectiveness of the proposed method.\n\n- According to Table 6, pruning multiple dimensions may not lead to ideal actual speedup on GPUs.\n\n- The method introduces a new pruning algorithm, which may not be easy to implement. Since the code is not available, I am a bit worried about the reproducibility of the method.\n\n\n-----------------\nPost rebuttal:\n\nI would like to thank the authors for the detailed feedback and additional results. The response addressed my concerns about the insufficient experiments and the actual speedup on GPUs. I would like to upgrade my rating to 5.\n The paper presents a thorough study in the emerging area of efficient vision Transformers. The method is tested on multiple backbones and datasets. However, the joint optimization framework is similar to previous methods for CNNs, which makes the novelty of the pruning algorithm relatively low. Besides, I still have some concerns about the insufficient experiments, actual speedup on GPUs, and reproducibility. This paper could be stronger if the issues mentioned in the weaknesses subsection were addressed. Limitations of the proposed method are not discussed. ", " This paper presents a new neural network pruning method for vision transformers. The proposed technique can effectively accelerate most vision transformers, such as ViT and Swin Transformer, by collaboratively pruning components such as multi-head self-attention, hidden neurons, and embedding neurons. Extensive experiments show that the proposed technique is efficient yet competitive in accuracy compared with the state of the art. The manuscript has the following pros: 1) The paper has a clear motivation, and the proposed technique is based on theoretical analysis. 
2) The paper is clearly written and easy to follow in general; 3) A large number of experiments have been performed to validate the performance in various aspects.\n \nI have some small concerns about this paper, as listed. First, in Sec. 4.4, the authors say that the proposed method can bring accuracy gains when compressing DeiT-Base. I think this conclusion is a bit inappropriate. As described in Secs. 4.1 and 4.2, the pruned models are further fine-tuned for 300 epochs, which means that the pruned models have a longer training schedule than the baseline models; the accuracy gains may come from this longer training schedule.\n \nIn addition, the pseudo-code for the whole pruning algorithm in the Appendix is the core contribution of this work and should be moved to the main paper.\n No questions. Yes", " This paper proposes SAViT, a structure-aware pruning method for transformer-based architectures, which jointly prunes parameters in different components by considering the interactions between these components. Experiments on different ViT architectures and vision tasks demonstrate the effectiveness of SAViT. Strengths\n- The idea of collaboratively pruning all components of a model is interesting.\n- The performance gain is impressive when compared to numerous state-of-the-art methods.\n- This paper is well-written and easy to follow.\n\nWeaknesses\n- It's not clear how much computation cost and data are needed by the proposed method during pruning, compared to state-of-the-art network pruning methods.\n- The main novelty of this work is the interaction of different components during pruning, so the ablation study on this design is important. In my understanding, Lines 297-305 and Table 7 aim to give such ablation studies, but it's not clear whether the setting of \"without second-order interactions\" (Line 300) in this ablation study means dropping all Hessian-based terms in Eq. (4) or only dropping the cross-component terms (green blocks in Figure 2(b)). I think the latter can better reflect the main contribution of the proposed method. See \"Weaknesses\" The authors haven't discussed any limitations." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 3 ]
[ "3qu4VOu8qCC", "nips_2022_w5DacXWzQ-Q", "woqfnRadvY", "tYKA4xEWww", "UOwyAnChqsY", "3qu4VOu8qCC", "3qu4VOu8qCC", "3qu4VOu8qCC", "BtIUuKhbtf", "nips_2022_w5DacXWzQ-Q", "nips_2022_w5DacXWzQ-Q", "nips_2022_w5DacXWzQ-Q", "nips_2022_w5DacXWzQ-Q" ]
nips_2022_ZG5Bi1N4V0U
SeqPATE: Differentially Private Text Generation via Knowledge Distillation
Protecting the privacy of user data is crucial for text generation models, which can leak sensitive information during generation. Differentially private (DP) learning methods provide guarantees against identifying the existence of a training sample from model outputs. PATE is a recent DP learning algorithm that achieves high utility with strong privacy protection on training samples. However, text generation models output tokens sequentially in a large output space; the classic PATE algorithm is not customized for this setting. Furthermore, PATE works well to protect sample-level privacy, but is not designed to protect phrases in samples. In this paper, we propose SeqPATE, an extension of PATE to text generation that protects the privacy of individual training samples and sensitive phrases in training data. To adapt PATE to text generation, we generate pseudo-contexts and reduce the sequence generation problem to a next-word prediction problem. To handle the large output space, we propose a candidate filtering strategy to dynamically reduce the output space, and refine the teacher aggregation of PATE to avoid low agreement due to voting for a large number of candidates. To further reduce privacy losses, we use knowledge distillation to reduce the number of teacher queries. The experiments verify the effectiveness of SeqPATE in protecting both training samples and sensitive phrases.
Accept
The paper studies the PATE framework for text generation models and proposes an algorithm based on KD to handle the large output space. Reviewers think that the proposed methods should generate interest among the NeurIPS audience. We encourage the authors to incorporate the comments of the reviewers to improve the paper.
train
[ "otVdz2d0okz", "_o0WxvOZ7Bj", "jhv6OF4ivS", "EI1DDh1s80w", "mRx4bkr35Cj", "1Gxr7odK9mz", "DkyUVK1z9e", "bRjvEI0vliL", "Qd_Cy69Bffzc", "7wwv6_8TOz_", "3tM7AWkC4JU", "5hCgf2eXllT", "_9QMAaHMEH", "c2KLppkGk7u", "QA9pCj9wV3t", "FEC0Z0yZdja", "AWn_3KJsDh", "PButlHJjzs7", "TUvidruCY_2" ]
[ "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your time and effort in reviewing our paper. We appreciate your encouragement and potential support in the following discussion phase. \n\nThank you for reading our response and the revised paper carefully. We will polish this paper according to your suggestions. Hope you all are doing well.\n", " Thank you so much for your time and effort in reviewing our paper. Thank you for reading our response and the revised paper.\n\nWe are happy to receive your constructive comments, which improve this paper a lot in the revised version. We are pleased to hear your responses in the discussion phase.\n\nWe hope we have addressed all your concerns and expect that our clarifications and revision could be reflected in your final decision. Hope you all are doing well.\n", " Thank you so much for your time and effort in reviewing our paper. We are very grateful for your positive comments and potential support in the following discussion phase.\n\nWe will follow your kind suggestions in our future work and continue to refine this paper.\n\nWe hope you all are doing well.\n", " Thank you very much for your time and effort in reviewing our paper.\n\nWe are pleased with the improvement of the paper from your insightful suggestions. We hope we have clarified all your concerns and are happy to discuss any further comments you may have until the response deadline. (The discussion with the authors is still available now. We will keep solving any new concerns and questions until the author's response is closed.)\n\nWe hope you all are doing well and also hope our improvement, clarifications, and revision could be reflected in your final decision.\n", " We have uploaded a new version with the following modifications:\n\n1. In the introduction part (Sec.1), we make some new statements in lines 34$\\sim$36, 39$\\sim$40, and 51$\\sim$53 to explain,\n\n (a) what can SeqPATE do with the help of DP?\n\n (b) how can SeqPATE satisfy DP (the calibrated noise required by DP)? \n\n (c) that the SeqPATE's utility loss caused by the DP required noise.\n\n2. We add some sentences in Sec.3 (line 80$\\sim$83) to show the connection between DP definition and SeqPATE (also the DP's notations in SeqPATE).\n\n3. We add some sentences to the Approach section (Sec. 4.2, line 120$\\sim$121) to emphasize how to make SeqPATE satisfy DP.\n\n4. We refine the second paragraph of Sec. 5.2, the second paragraph of Sec. 5.3, and Sec. 5.4 to,\n\n (a) explain the DP's ''coordinates'' in SeqPATE.\n\n (b) say top-$k$ coordinates in DP means top-$k$ candidate in SeqPATE.\n\n (c) show how to understand ''privacy loss'' in real use.\n\nWe will keep polishing it before the camera-ready version.", " Thank you very much for your response and your time on the revised paper.\n\nDP theoretically provides quantifiable guarantees on privacy protection. Particularly, we can use the $\\varepsilon$ in the DP definition to measure the strength of protection.\n\nSome practical settings are more complex than the pure DP definition. For example, phrase A occurs $K$ times ($K > 1$) and the algorithm is required to prevent **all the $K$ occurrences of A** from being detected. Phrase B occurs only one time. Then, **protecting all the occurrences of A is much harder than protecting only one occurrence of B**. If we apply the same scale of noise to the model, the actual strength of protection on A (with all the occurrences) and B are indeed different. According to the group privacy (Theorem 2.2. 
in [Dwork et al., TCS’14]), the actual factor on A (with all its occurrences) is $K * \varepsilon$ instead of $\varepsilon$, which indicates a reduction of the protection strength. However, the reduced strength of protection **is still bounded by the DP theory**. The algorithm and setting still meet the DP requirements theoretically, and **DP still provides a theoretical guarantee on A**. The first paragraph of Sec. 5.3 was discussing the above case.\n\nHence, if we need to **protect all the occurrences of a data point** and the data point occurs more than once, the number of occurrences indeed affects the strength of protection. But **DP still provides a theoretical guarantee on this kind of data.**\n\n[Dwork et al., TCS’14] The algorithmic foundations of differential privacy.", " Thank you very much for your response and your time on the revised paper.\n\nWe are also continuing to revise the paper. We are trying to make the connection between DP and SeqPATE flow more naturally and be easier to follow. We need to reorganize some paragraphs, and the two lines alone are indeed not enough to make a big difference to the readability of the paper. \n\nWe will upload a new version by the revision deadline (Aug 9).", " Thanks to the authors for adding two lines of text in the revised paper (lines 185 and 186 in section 5.2) to explain the connections between DP and SeqPATE.  As the authors may have been aware, this is NOT enough to make the connection clear and make it easier to follow.  So the authors have promised to make it happen after the rebuttal.  In our humble opinions, revising the paper is much easier than adding experiments and reporting them correctly.  So we expect the authors to take some time to start revising the paper while waiting for the results of other reviewers.", " Thank you for your answer. I'll have to look at the revised version. As the privacy criterion depends on the data and the individual, it seems to me that the theoretical guarantees given by DP learning methods also depend on that. Can you provide an answer in this regard? ", " **Q5. Explain the superiority of this framework compared to the case where teacher and student are studied with $\mathcal{D}^{\text{pub}}$ and $\mathcal{D}^{\text{pri}}$.**\n\nIn our model, the teachers are trained with $\tilde{\mathcal{D}}^{\text{pub}}$ and $\mathcal{D}^{\text{pri}}$, and the student is trained with $\tilde{\mathcal{D}}^{\text{pub}}$.\n\n(1) If the teachers or the student were trained with $\mathcal{D}^{\text{pub}}$ instead of $\tilde{\mathcal{D}}^{\text{pub}}$, the training samples would be too short and contain too little information. As mentioned in Sec. 2 and Sec. 6, the samples in $\mathcal{D}^{\text{pub}}$ are only the prefixes of sentences (containing only 4 words in our setting). Those samples are too short to support learning to generate a full sentence. Besides, **each sample in $\mathcal{D}^{\text{pub}}$ is a subsequence of one sample in $\tilde{\mathcal{D}}^{\text{pub}}$**, since $\tilde{\mathcal{D}}^{\text{pub}}$ consists of the full sentences generated by GPT given the prefixes in $\mathcal{D}^{\text{pub}}$. So **$\tilde{\mathcal{D}}^{\text{pub}}$ covers all the information in $\mathcal{D}^{\text{pub}}$ and carries more information beyond $\mathcal{D}^{\text{pub}}$**, and $\tilde{\mathcal{D}}^{\text{pub}}$ also makes it possible to learn to generate a sentence. 
These advantages help the model trained on $\tilde{\mathcal{D}}^{\text{pub}}$ obtain better performance than one trained on $\mathcal{D}^{\text{pub}}$.\n\n(2) If the student’s training set contained $\mathcal{D}^{\text{pri}}$, the framework would not satisfy the DP definition for privacy protection.\n\n**Q6. Show some examples or qualitative evaluations of how SeqPATE achieves utility with strong privacy protections on training samples.**\n\nIn the revised paper, we add a case study section to Appendix R, which provides some examples to show the utility of SeqPATE.\n\nWe also add qualitative analyses (evaluations) to Appendix O and Appendix P, as mentioned in the response to Q2.\n \n**Q7. Lack of qualitative analysis.**\n\nIn the revised paper, we add qualitative analyses (evaluations) to Appendix O and Appendix P, as mentioned in the response to Q2.\n\n\n[Dwork et al., TCS’14] The algorithmic foundations of differential privacy.\n\n[Carlini et al., USENIX Security'19] The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks.\n\n[Li et al., ICLR’22] Large Language Models Can Be Strong Differentially Private Learners.\n\n[McMahan et al., ICLR’18] Learning Differentially Private Recurrent Language Models.\n\n[Zhu et al., CVPR'20] Private-kNN: Practical Differential Privacy for Computer Vision.\n\n[Kamath et al., 21] Algorithms for Private Data Analysis (Intro to Differential Privacy).", " **Q2. No qualitative and error analysis. (Part 2)**\n\n| | Methods | $R_{name}$ | \n| ------------------- | ----------------- | ----- |\n| Non-DP | Pri-GPT | 4.25% | \n| DP (phrase) $\varepsilon =3$ | NoisySGD+GC+$\tilde{\mathcal{D}}^{pub}$ (batching users) | 0.43% | \n| DP (phrase) $\varepsilon =3$ | SeqPATE (ours) | 0.20% |\n\nThe second table shows the results of Pri-GPT and the DP-based methods with phrase-level $\varepsilon = 3$. Under the same $\varepsilon$ for protecting users' phrases, the gap between SeqPATE and NoisySGD is not as large, and SeqPATE is still better than NoisySGD. This shows the superiority of SeqPATE in protecting users' phrases.\n\n(3) For error analysis, we add a case study section in Appendix R, which analyzes two good cases and **a bad case (with error analyses)**.\n \n\n**Q3. Runtime analysis is lacking.**\n\nIn the original submission, we **had reported the training time in Appendix I and Sec. 6.1**: our training time (including the teachers’ training) is roughly equal to the time of training a single GPT-2 model on our datasets (within 3 days).\n \nIn the revised paper, we **have added more details about the runtime and its analysis to Appendix I, including the runtime (training and inference) of our baselines.**\n \nFor SeqPATE, the teachers’ training takes 1 $\sim$ 3 days; the student’s training takes at most half an hour. For NoisySGD, the whole training takes 1 $\sim$ 2 days. The inference time of all methods is similar, around 10 minutes (see details in Appendix I).\n \n**Q4. Applying a simple method (e.g. word blacklist) to a benchmark.**\n \nSome simple methods (e.g. blacklisting, anonymization, random permutation) are intuitive and effective, but **those simple methods (including blacklist-based methods) do not satisfy the DP definition**, so there are some concerns when using them as baselines:\n\n(1) DP-based methods provide a theoretical guarantee for **all kinds of information against being detected**, while non-DP methods do not have such a guarantee. 
For example, **a blacklist is a finite set** and may fail to hide some important information [Carlini et al., USENIX Security'19]. A theoretical guarantee is desired in many practical applications (e.g., satisfying some privacy policy or providing a guarantee to users who contribute the data).\n\n(2) DP-based methods have a quantifiable guarantee of privacy protection. Hence, in many DP papers [Li et al., ICLR’22][McMahan et al., ICLR’18][Zhu et al., CVPR'20], DP-based methods are compared with other DP-based methods at the same level of protection (i.e. the same $\varepsilon$ and $\delta$ in Table 1 and Table 2). However, **non-DP (e.g. blacklist-based) methods cannot theoretically measure the strength of privacy protection**, so it is **hard to compare DP-based methods with non-DP-based methods at the same (fair) level of protection.**\n \nThough they are not directly comparable, we did add a new experiment in Appendix Q, where we create a blacklist with user names, destinations, and some other sensitive words/phrases (e.g. dates).\n\n| | | PPL | BLEU-3 | BLEU-4 |\n| ------------------- | ----------------- | ----- | ------ | ------ | \n| | Pri-GPT-blacklist | 6.84 | 11.40 | 8.13 |\n| $\varepsilon =5$ | NoisySGD+GC+$\tilde{\mathcal{D}}^{pub}$ (batching users) | 10.56 | 4.60 | 2.87 |\n| $\varepsilon =5$ | SeqPATE (ours) | 8.06 | 6.10 | 3.90 |\n\nThe first row indicates applying the blacklist to the results of the GPT model trained on private data. For each generated sentence, we replace the words in the blacklist with a special token. Notice that, although Pri-GPT-blacklist outperforms SeqPATE, **blacklist-based methods have the following issues**:\n\n(1) They only protect the privacy of the given types (i.e. user names, destinations, and dates);\n\n(2) Even for the given types, they can only protect part of the sensitive information since the blacklist is finite;\n\n(3) They cannot measure the strength of protection.\n\n", " Thank you so much for your constructive comments and kind suggestions. We’ve revised the paper and the appendix according to your suggestions.\n\n**Q1. Several claims have not been verified: \"effectiveness of SeqPATE in protecting both samples and sensitive phrases\" and \"training corpora with a moderate privacy cost\".**\n \t\n**1. To verify \"the effectiveness of SeqPATE in protecting both samples and sensitive phrases\"**, note that the effectiveness comprises: (1) the strength of privacy protection, and (2) the model performance (utility).\n\n**For (1) strength of privacy protection,**\n\n(a) All DP-based methods employ the factor $\varepsilon$ to quantify the strength of protection. We use $\varepsilon=3$ and $\varepsilon=5$ in our experiments, where researchers [Kamath et al., 21] usually set $\varepsilon$ in a range from 0.1 to 10; $\varepsilon=3$ and $\varepsilon=5$ are moderate values for $\varepsilon$.\n \n(b) We add a new experiment to evaluate the intuitive effects of privacy protection. We measure the quantity of sensitive information (i.e. users’ names) in the training corpora generated by the model. The experiments are attached to Appendix P. They show that SeqPATE provides satisfactory protection for sensitive information compared to the other baselines.\n\n**For (2) model performance (utility),**\n\n(a) In the main experiment (Sec. 6.1), we compare SeqPATE with the other baselines on the sample level (Table 1) and on sensitive phrases (Table 2).\n\n(b) In the ablation study (Sec. 6.2), we verify the effectiveness of our proposed strategies in SeqPATE.\n\n\n**2. 
To verify \"training corpora with a moderate privacy cost\"**, we know that the algorithms, which protect privacy, inevitably reduce the model performance, which causes the privacy cost of that algorithms.\n\n(a) Empirically, **the good model performance indicates that the privacy cost is not so high.** To achieve the same strength of protection as the baselines (in terms of $\\varepsilon$), SeqPATE's performance is better than those baselines on PPL and Bleu4 (in Tables 1 and 2).\n\n(b) Theoretically, as mentioned in Sec. 5.3, **traditional DP-based algorithms (i.e. NoisySGD) suffer from a very high privacy cost on sensitive phrases compared to the sample level privacy** [Dwork et al., TCS’14]. According to the theoretical analyses in Sec. 5.3, SeqPATE’s sensitivity is only $\\sqrt{2}\\tilde{n}_{s}$, where $\\tilde{n}\\_{s}$ is usually 1 or 2. **SeqPATE’s privacy cost on sensitive phrases is roughly the same as the sample level privacy**. \n \n**Q2. No qualitative and error analysis. (Part 1)**\n\n(1) To show the superiority of **SeqPATE compared to the original PATE**, we provide a **qualitative analysis based on some experimental results and estimations**. In the revised paper, we add the qualitative analysis to Appendix O.\n \n(2) In the revised paper, we add a **qualitative evaluation** section to Appendix. P. It demonstrates the **intuitive effects of the protections of SeqPATE and NoisySGD**. It shows that (a) the privacy leakage of no protection algorithm is very serious; (b) **SeqPATE avoids leaking information significantly**; (c) SeqPATE provides a stronger protection rather than NoisySGD. The experimental details and results are as follows.\n\nDP-based methods usually show the strength of privacy protection via the factor $\\varepsilon$ in the DP definition. As for the text generation application, we employ a more practical evaluation to show what and how the DP-based methods protect. We define a metric $R_{\\text{name}}$ to measure the average percentage of generating users' names in the output text. This metric indicates the degree of leaking users' secret phrases (i.e. users' names). A smaller $R_{\\text{name}}$ indicates better protection.\n\n| | Methods | $R_{name}$ | \n| ------------------- | ----------------- | ----- |\n| Non-DP |Pri-GPT | 4.25% | \n| DP (sample) $\\varepsilon =3$ | NoisySGD+GC+$\\tilde{\\mathcal{D}}^{pub}$ (batching users) | 1.89% | \n| DP (sample) $\\varepsilon =3$ | SeqPATE (ours) |0.21%|\n\nThe first Table shows the results of Pri-GPT and DP-based methods with the sample level $\\varepsilon$ is 3. The results show that our SeqPATE significantly avoids generating trained users' names trained (which avoids 95% of them). The Pri-GPT has no privacy protection and the percentage of generating users' names is high (4.25%), which demonstrates that information leakage is serious in the current pre-trained models (Pri-GPT). Under the same level of protection ($\\varepsilon = 3$), SeqPATE provides stronger protection than NoisySGD. It verifies our claim (in Sec. 5.3) that SeqPATE is skilled at protecting users' secret phrases.", " Thank you so much for your positive comments and kind suggestions. We’ve revised the paper according to your suggestions.\n\n**Q1. Connections between DP and SeqPATE can be explained more explicitly… make it easier to follow.**\n\nThank you very much! In the revised paper, we explained the connection in Sec. 5.2. 
", " Thank you so much for your positive comments and kind suggestions. We’ve revised the paper according to your suggestions.\n\n**Q1. Connections between DP and SeqPATE can be explained more explicitly… make it easier to follow.**\n\nThank you very much! In the revised paper, we explain the connection in Sec. 5.2. The connection between the DP theory and our proposed SeqPATE is that,\n\n (1) By satisfying the DP theory, SeqPATE provides a quantifiable guarantee on the strength of privacy protection;\n\n (2) DP requires SeqPATE to add noise to its knowledge distillation (the teachers’ output distribution), as mentioned in Sec. 4.2.\n\nAfter the rebuttal period, we will continue to polish the whole paper to make the connection clear and easier to follow. \t\n\n**Q2. Absence of social impact.**\n\nDue to the page limit, we put the social impact in Appendix A and the limitations in Appendix M. In the revised paper, we refer to them in the paper body.", " Thank you so much for your positive comments and kind suggestions.\n \n**Q1. Try more text generation tasks.**\n\nThat is a good idea and we will investigate it in the future. Our current work is based on a representative text generation setting, as it conducts text-to-text generation with GPT-2, a widely-used framework. In this work, we aim to explore some practical strategies that help us apply PATE to text generation. \n\nWe expect the results to transfer to similar tasks, like dialog generation. For example, Li et al. [AAAI’20] treated the conversational query and response as one long sentence and applied GPT-2 to dialog generation, where the usage is similar to our work. The duration of the rebuttal period is too short to conduct new experiments on new tasks, so we leave the experiments to future work.\n\n**Q2. Have a specific section on limitations and societal impact.**\n \nDue to the page limit, we put the social impact in Appendix A and the limitations in Appendix M in the previous submission. In this revised paper, we refer to them in the paper body.\n\n[Li et al., AAAI’20] Relevance-Promoting Language Model for Short-Text Conversation.", " Thank you very much for your constructive comments and kind suggestions. Your comments are very helpful for improving this paper. We have refined the paper according to your comments.\n\n**Q1. Novelty (especially the active learning strategy).**\n \nOur proposed SeqPATE is a novel framework that makes PATE handle the paradigm of sequential classification in a large output space. PATE is a widely-used DP algorithm that works well in computer vision but is not so mature in NLP (especially text generation); SeqPATE is the first work to adapt PATE to text generation.\n\nWe note that **adapting standard DP techniques to large models in NLP is not trivial and quite challenging [Li et al., ICLR’22, Yu et al., ICLR’22]**. \n\nAs for the technical novelty, our empirical results suggest useful strategies for using PATE in NLP:\n\n(1) Conducting the knowledge distillation (teacher inference and student training) on pseudo sentences to **avoid rolling out a large number of teachers**;\n\n(2) Teacher aggregation over the probability distributions as opposed to the argmax prediction;\n\n(3) **Dynamic candidate filtering** to avoid adding large noise;\n\n(4) **Active learning** to reduce the number of teacher queries.\n\nThe first three have not been reported previously.\n\nBesides, we **extend the protection to users’ secret phrases, where we provide theoretical analyses** showing that the protection of those phrases is much stronger than that of other baselines (i.e. NoisySGD). We further discuss the protection of users’ phrases in the answer to the next question.\n \n \n**Q2. DP-SGD achieves user-level privacy by batching users (with its experiments).**\n\nIn the revised paper, we conduct experiments considering **batching examples by users in NoisySGD (DP-SGD) as a baseline** (a user's data are placed in one or very few batches).
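
Schematically, this user-level batching baseline can be sketched as follows; the data layout and helper below are hypothetical illustrations, not our exact implementation:

```python
# Hypothetical sketch of batching by users: each batch contains all examples of a
# few sampled users, so each user's data touches only one (or very few) batches.
import random

def user_level_batches(examples_by_user, users_per_batch, seed=0):
    users = list(examples_by_user)
    random.Random(seed).shuffle(users)
    for start in range(0, len(users), users_per_batch):
        batch = []
        for user in users[start:start + users_per_batch]:
            batch.extend(examples_by_user[user])
        yield batch
```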
The experimental results show that, for protecting users' secret phrases, our method **still outperforms the NoisySGD baselines with batching users (rows 2 and 4)**. We have added the new results to the revised paper (Sec. 6.2 and Appendix N).\n\n| | | PPL | BLEU-3 | BLEU-4 |\n| ------------------- | ----------------- | ----- | ------ | ------ | \n| $\varepsilon =3$ | NoisySGD+GC+$\tilde{\mathcal{D}}^{pub}$ | 16.75 | 1.71 | 0.57 |\n| $\varepsilon =3$ | NoisySGD+GC+$\tilde{\mathcal{D}}^{pub}$ (batching users) | 13.42 | 3.25 | 1.45 |\n| $\varepsilon =3$ | SeqPATE (ours) | **10.10** | **4.20** | **2.46** | \n| $\varepsilon =5$ | NoisySGD+GC+$\tilde{\mathcal{D}}^{pub}$ | 16.49 | 1.89 | 0.69 | \n| $\varepsilon =5$ | NoisySGD+GC+$\tilde{\mathcal{D}}^{pub}$ (batching users) | 10.56 | 4.60 | 2.87 |\n| $\varepsilon =5$ | SeqPATE (ours) | **8.06** | **6.10** | **3.90** |\n\nWe note that SeqPATE still has some advantages over NoisySGD (with batching users) in protecting users’ secret phrases:\n \n(1). The privacy loss of SeqPATE scales linearly with $\tilde{n}_s$ (the number of teachers holding one user's data), as mentioned in Sec. 5.3. The average $\tilde{n}_s$ is 1.038, as mentioned in Appendix F. **The privacy loss of NoisySGD (with batching users) scales with the square root of the number of training steps (the number of batches the model is trained on)**, according to advanced composition [Abadi et al., CSS’16]. Therefore, if the training phase consists of $K$ epochs, **a user’s phrase contributes to the privacy loss $K$ times**. Deep learning models usually require many epochs of training; in this paper, the number of epochs is usually 10 $\sim$ 20. In short, **NoisySGD’s privacy loss on phrases is at least 3 $\sim$ 4 times larger than its sample-level privacy loss, while SeqPATE’s privacy loss on phrases is roughly equal to its sample-level privacy loss.**\n \n(2). It would be difficult to adjust the batch size in NoisySGD to satisfy the requirements of batching users, because (a) the performance of many deep learning models is sensitive to the batch size; and (b) the batch size cannot be too large due to the limitations of GPU memory. \n\nIn the revised paper, we add NoisySGD with batching users **as a new baseline** in Table 2 and **revise the related claims in the introduction (Sec. 1), theoretical analyses (Sec. 5.2), and experiment analyses (Sec. 6.2)**. We also **create a new section (Appendix N)** to elaborate on and analyze this baseline.\n \n[Li et al., ICLR’22] Large Language Models Can Be Strong Differentially Private Learners.\n\n[Yu et al., ICLR’22] Differentially Private Fine-tuning of Language Models.\n\n[Abadi et al., CSS’16] Deep learning with differential privacy.\n\n", " This paper extends the PATE approach to the text generation problem. The authors introduce additional steps to help boost the performance of PATE in the text generation setting, as the original approach does not bode well with the large output space of the vocabulary. The paper further studies phrase-level privacy beyond the regularly studied sample-level privacy. The paper is well-written and easy to follow. Unfortunately my main concern is the novelty of the paper. The approach is heavily based on the PATE algorithm with a few tricks to make it work better for the text generation task. 
It utilizes a pre-trained LM to generate pseudo completions and reduces the output space by filtering the tail of the distribution without a privacy requirement; finally, the privacy loss is reduced by acquiring the teacher supervision only when the student is not good at a certain prediction. The latter idea has also appeared in the more recent PATE paper. While I believe these extensions are valuable in improving the performance of PATE in this scenario, I do not think they provide sufficient novelty for this venue. I have one critical comment about the users' secret phrases section. The authors took the route of group privacy for this scenario, which I do not think might be the most effective way with DP. The DP-SGD algorithm can easily be adapted to have \"user-level privacy\" by batching users instead of samples. I find it an unfair comparison in the sense that the authors have not employed this approach but took the naive way of applying group privacy at the user level. See strengths and weaknesses. NA.", " The paper proposes an extension of PATE, a private learning algorithm, to text generation tasks. The extensions are simple yet effective: they generate pseudo inputs and reduce the sequence generation problem to next-word predictions. They also propose a strategy to dynamically filter out candidates to reduce the large output space in the text decoder. Experiments on the sentence completion task show that the proposed model is effective in protecting samples and sensitive phrases. Strengths\n* The proposed extension is very simple yet intuitive and effective for differentially private text generation.\n\nWeaknesses\n* The paper could have been more convincing if the model were tested on multiple text generation tasks such as dialog response generation (generating a response given previous utterances), where privacy is more crucial. None. As the work focuses on privacy, I think it would be nice to have a specific section on limitations and societal impact. This is currently absent in the main paper.", " In this paper, the authors propose a novel framework, SeqPATE, an extension of PATE to text generation, as a differentially private (DP) learning algorithm for text generation.\nSeqPATE aims to protect the privacy of both training samples and sensitive phrases in samples, and employs a teacher-student framework.\nAdditionally, the authors propose several strategies for SeqPATE to handle text generation as a sequence of classifications over large spaces. Strengths\n\n+Privacy protection is important for text generation models and other tasks\n\n+The motivation and problem setting are clear\n\n+The survey of previous work is sufficient\n\n\nWeaknesses\n\n-Several claims have not been adequately verified, for example, \"the effectiveness of SeqPATE in protecting both samples and sensitive phrases\" and \"training corpora with a moderate privacy cost\".\n\n-No qualitative and error analysis\n\n-Runtime analysis is lacking. *How about applying a simple approach such as a word blacklist to a benchmark?\n\n*Can you explain the rationale for the superiority of this framework compared to the case where the teacher models and the student model are trained with $D^{pub}$ and $D^{pri}$, respectively?\n\n*Show some examples or qualitative evaluations of how SeqPATE achieves utility with strong privacy protections on training samples. 
Without qualitative analysis, it is a quantitative comparison of similar models and does not support the authors' claim. Only the usual text generation metrics (i.e., PPL and BLEU) are used.", " This paper extends PATE into the field of text generation. To do so, the following technical challenges must be properly addressed: 1. in addition to protecting individual words, we need to protect phrases too; 2. compared to other tasks, the output space is huge for text generation; 3. we need to control the privacy loss. This paper has done solid work to address these challenges. This paper is well written. This work is original. It is based on the theory of differential privacy (DP), so its potential and quality are pretty high. The important difficult points are well explained.\nBut as a reader, I think one area can be improved: the connection between the theory of DP and the proposed SeqPATE method can be explained more explicitly. Doing so can greatly lower the barriers for new researchers who are interested in this area. This is a solid work, but making it easier to follow may greatly increase its influence. Privacy is an area with great social impact. This paper focuses solely on the technical aspect of privacy, and it’s too early to give an assessment of its social impact. So in my humble opinion, it is acceptable that a discussion of its social impact is absent from the paper." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 3, 4 ]
[ "mRx4bkr35Cj", "1Gxr7odK9mz", "c2KLppkGk7u", "QA9pCj9wV3t", "DkyUVK1z9e", "Qd_Cy69Bffzc", "bRjvEI0vliL", "_9QMAaHMEH", "7wwv6_8TOz_", "3tM7AWkC4JU", "5hCgf2eXllT", "PButlHJjzs7", "TUvidruCY_2", "AWn_3KJsDh", "FEC0Z0yZdja", "nips_2022_ZG5Bi1N4V0U", "nips_2022_ZG5Bi1N4V0U", "nips_2022_ZG5Bi1N4V0U", "nips_2022_ZG5Bi1N4V0U" ]
nips_2022_wlEOsQ917F
A framework for bilevel optimization that enables stochastic and global variance reduction algorithms
Bilevel optimization, the problem of minimizing a value function which involves the arg-minimum of another function, appears in many areas of machine learning. In a large scale empirical risk minimization setting where the number of samples is huge, it is crucial to develop stochastic methods, which only use a few samples at a time to progress. However, computing the gradient of the value function involves solving a linear system, which makes it difficult to derive unbiased stochastic estimates. To overcome this problem we introduce a novel framework, in which the solution of the inner problem, the solution of the linear system, and the main variable evolve at the same time. These directions are written as a sum, making it straightforward to derive unbiased estimates. The simplicity of our approach allows us to develop global variance reduction algorithms, where the dynamics of all variables is subject to variance reduction. We demonstrate that SABA, an adaptation of the celebrated SAGA algorithm in our framework, has $O(\frac1T)$ convergence rate, and that it achieves linear convergence under the Polyak-Lojasiewicz assumption. This is the first stochastic algorithm for bilevel optimization that verifies either of these properties. Numerical experiments validate the usefulness of our method.
Accept
The main topic of this work is stochastic bilevel optimization. It provides an efficient algorithm for this task, and provides theoretical results in this setting. The reviewers are unanimous that this is well-presented work of high quality and should be accepted, as do I.
train
[ "EHufmexU4nj", "U-x-lGcNOb", "-39NDGecalW", "cKGXp0381Yt", "sYrbQhiY0V1", "FIu4aWzcClM", "vO8WSgdo0ep", "7pnZq1yuPCk", "6h1s93-EPg", "YCOZnHGI8sG", "7Tky25mBlIn", "jew388VycbJ", "PITZtkbRMtY", "1lz_0UFhUw4", "yhN9m0WX-4t" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the detailed response and improvements on the revision. My concerns are resolved and I increase my rating from 6 to 7.", " Thank you for updating your review and for your suggestion. We agree that it is worth mentioning what rate we can expect if we stick with the usual regularity assumptions on $F$ and $G$. We will add it in the Appendix of the final version.", " I thank the authors for making the improvements. Since my only concern is resolved. I will raise my rating to 6.", " Dear reviewer,\n\nCan you read the author's rebuttal, check if it addresses your concerns, and react to it?\n\nIt is important to acknowledge this work by the authors and to respect it.\n\nBest,\nAC", " Dear reviewer,\n\nCan you read the author's rebuttal, check if it addresses your concerns, and react to it?\n\nIt is important to acknowledge this work by the authors and to respect it.\n\nBest,\nAC", " I thank the authors for the great effort put in the response and paper revision. I updated the review and increased the score accordingly. \n\nI have the following last wish, which will not negatively affect the score if not fulfilled. The new results are obtained by imposing additional regularity conditions which are somewhat unique to this work. Although I agree these are reasonable conditions, I would really appreciate if the authors would include, also in the appendix, a more detailed discussion on the results obtained without those conditions, i.e. the ones obtained in the original version of the paper. This would make it clearer to readers what exactly is the gain obtained with those conditions and further strengthen the paper.", " Dear Reviewers,\n\nThanks for your work reviewing this paper. There are only a few days left for the discussion period. \n\nAs the AC has already mentioned, you **must read the rebuttal**, and have any form of interaction with the authors, simply out of respect for the work they put addressing your comments. \n\nHence we kindly ask you to read and comment **ASAP** on this new content.\n\nSAC.", " Thanks to all reviewers and authors for their work on this submission.\n\nAs the discussion period starts, I want to make sure that reviewers have read the author's response, and if needed react to it.\n\nThis can be done either by communicating with authors or in private conversation within the reviewing team.", " Thank you for the feedback. We appreciated that you pointed out the clarity, the flexibility and the simplicity of our framework. \n\n**Sample complexity of SABA:** Indeed, the dependance in $N = n + m$ in the sample complexity of SABA was in $N$ in the submitted version of the paper, which did not achieve any theoretical improvement from the full batch method. Nevertheless, as indicated in the general comment, we managed to improve the analysis of SABA **leading to $O(N^{\\frac23}\\epsilon^{-1})$ sample complexity for SABA**. This provides a theoretical improvement from the full batch method and explains the gap between SABA and SOBA FULL BATCH in the experiments.\n\n**Global variance reduction:** The term “global” refers to the fact that we perform the variance reduction globally for all the variables and not separately. As explained in the text, if we do variance reduction in only one variable and use SOBA-like updates for the others, we get slower convergence rate. 
So, the **global variance reduction allows us to get a fast convergence rate**.", " We thank the reviewer for their comments, which, among others, commend the clarity and the simplicity of our method. The second weakness, dealing with the choice of the comparisons made in the experiments, is discussed in the general comment.\n\n**Introduction of $v$:** it is indeed not novel since it can be found in [1] and [2]. Nevertheless, the novelty of our paper lies more in **the generality of the framework** than in the introduction of $v$ itself. Note that [2] is a specific case of this framework using a STORM variance reduction technique only on the outer problem, and [1] is outside our framework because it performs several steps in $z$ and $v$. Moreover, **we propose an adaptation of SAGA** which, to the best of our knowledge, has not been done in the literature on bilevel optimization and achieves fast convergence rates. \n\n**Experiments on only one dataset per application:** The data cleaning task with MNIST is classical in the stochastic bilevel optimization literature (see e.g. [2] or [3]). The hyperparameter selection for $\ell^2$-regularized logistic regression is also classical in the literature, but it is usually done with the 20newsgroups dataset. We think that this dataset is not suited to stochastic algorithms because the number of features is much higher than the number of samples (130,107 features and 18,846 samples). For that reason, we chose to perform the task on the IJCNN1 dataset, for which stochastic algorithms are better suited (141,691 samples in total and 22 features). **We added in Appendix B.5 an additional hyperparameter selection experiment on the Covtype dataset**, which has 581,012 samples, 7 classes and 54 features.\n\n[1] Michael Arbel and Julien Mairal. Amortized Implicit Differentiation for Stochastic Bilevel Optimization. In *International Conference on Learning Representations (ICLR)*, 2022.\n\n[2] Junyi Li, Bin Gu, and Heng Huang. A Fully Single Loop Algorithm for Bilevel Optimization without Hessian Inverse. In *Proceedings of the Thirty-sixth AAAI Conference on Artificial Intelligence*, AAAI’22, 2022.\n\n[3] Kaiyi Ji, Junjie Yang, and Yingbin Liang. Bilevel optimization: Convergence analysis and enhanced design. In *International Conference on Machine Learning (ICML)*, 2021", " We thank the reviewer for the constructive feedback and for highlighting the clarity of the framework. The questions about the analysis of SOBA and the plots of the experiments are treated in the general comment. We address here the remaining points raised in the review:\n\n**Rolling average:** This indeed deserves clarification. With the notation of the paper, $S[\phi, w]^t_i = \phi_i(w_i^{t+1}) - \phi_i(w^t_i) + \frac1n \sum_{i’=1}^n \phi_{i’}(w_{i’}^t)$, and we denote $A_t = \frac1n \sum_{i’=1}^n \phi_{i’}(w_{i’}^t)$. By “rolling average”, we mean that in practice we do not compute $A_{t+1}$ from scratch using only the stored gradients, i.e. implementing the formula $A_{t+1}=\frac1n \sum_{i=1}^n \phi_i(w^{t+1}_i)$. Instead we do $A_{t+1} \leftarrow A_t + \frac1n (\phi_i(w^{t+1}_i) - \phi_i(w_i^t))$. This is not an approximation: both methods **compute the exact same quantity**, but the rolling average is more efficient because it has $O(1)$ computational complexity while the naive method has $O(n)$ computational complexity. Note that this is what is done in the classical SAGA for single-level problems in practice. We have clarified this in the supplementary material (l.559).
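
For concreteness, a minimal NumPy sketch of this $O(1)$ rolling update; the array shapes and values are placeholders:

```python
# Minimal sketch of the SAGA-style rolling average: memory[i] stores the last
# evaluated phi_i, and A is their running mean, updated in O(1) per step.
import numpy as np

n, d = 100, 5
rng = np.random.default_rng(0)
memory = rng.normal(size=(n, d))   # stored phi_i(w_i^t), hypothetical values
A = memory.mean(axis=0)            # A_t, computed once at initialization

def update(i, phi_new):
    """O(1) rolling update of A when sample i's stored value is refreshed."""
    global A
    A = A + (phi_new - memory[i]) / n   # A_{t+1} = A_t + (phi_i(w^{t+1}_i) - phi_i(w^t_i)) / n
    memory[i] = phi_new

# Sanity check: the rolling average matches the naive O(n) recomputation exactly.
update(3, rng.normal(size=d))
assert np.allclose(A, memory.mean(axis=0))
```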
\n\n**Mentioning the finite sum in the abstract:** Indeed, it is worth making the abstract more precise on this point and we have added it.\n\n**Analysis of SOBA:** The analysis of SOBA in the submitted version is inspired by [1] and leads to a $O(T^{-\frac25})$ convergence rate, which is worse than the $O(T^{-\frac12})$ in [2]. As explained in the general comment, we managed to improve our descent lemmas following the proof technique in [2], **leading to a $O(T^{-\frac12})$ convergence rate for SOBA**. The challenge was not in the introduction of $v$ but in getting a $\gamma^2$ factor with the variance of $D_x^t$ instead of $\frac{\gamma^2}\rho$, because the latter requires that the ratio $\frac\gamma\rho$ go to zero to get convergence. Also, we mentioned that we needed some more regularity on $F$ and $G$ to get the result. So, **in comparison with [2], we need more regularity to get the smoothness of $v^{*}$, but we avoid $O(\log(T))$ Neumann iterations per outer iteration**.\n\n**PL assumption:** This is a good point. However, this result is still interesting since it shows that **SABA has the same behavior as SAGA under the PL assumption**. This is another instance of such behavior. Also, some authors have shown convergence results under the strong convexity assumption, which is stronger than the PL assumption. See for instance Theorem 3.1 in [3], Theorem 1 in [1] and Corollary 2 in [4].\n\n**Figure 1:** The experimental details about this figure were indeed scarce. It is the selection of the regularization parameter on a ridge regression problem with 10 features, 750 training samples and 250 validation samples. We have added these details in the Appendix.\n\n**Scale of the experiments:** About the scale of the experiments, we would like to stress that **we have performed a grid search** for the hyperparameters of the different methods, and **for each element of the grid, each method has been run 10 times**. For the experiment on hyperparameter selection on IJCNN1, we tried 63 combinations of hyperparameters for each method, and it took 1400 CPU hours. For the data cleaning task, we tried 121 combinations of hyperparameters and it took 2420 CPU hours. We also added in the appendix an experiment on the problem of hyperparameter selection for an $\ell^2$-regularized multiclass logistic regression with the Covtype dataset. We also tried 63 combinations of hyperparameters for each optimizer, and the experiment took 525 CPU hours.\n\n[1] Mingyi Hong, Hoi-To Wai, Zhaoran Wang, and Zhuoran Yang. A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis and Application to Actor-Critic. *preprint ArXiv 2007.05170*, 2021.\n\n[2] Tianyi Chen, Yuejiao Sun, and Wotao Yin. Closing the Gap: Tighter Analysis of Alternating Stochastic Gradient Methods for Bilevel Problems. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2021.\n\n[3] Saeed Ghadimi and Mengdi Wang. Approximation Methods for Bilevel Programming. *preprint ArXiv 1802.02246*, 2018.\n\n[4] Michael Arbel and Julien Mairal. Amortized Implicit Differentiation for Stochastic Bilevel Optimization. In *International Conference on Learning Representations (ICLR)*, 2022.", " We first would like to thank all the reviewers for their remarks and for finding our framework *”clean”* (Rev. AcuN), *”clear”* (Rev. wFQ9, HH9R) and *”conceptually simple and is very flexible”* (Rev. HH9R).
\nWe start by addressing two points that have been raised by several reviewers:\n\n**Improvement of the analysis of SOBA and SABA:** Two reviewers asked if the analysis of SOBA and SABA can be improved. We are happy to share that we managed to do so, adapting our descent lemmas following the proof technique of [1]. **We get a $O(T^{-\frac12})$ convergence rate for SOBA and a $O(N^{\frac23}T^{-1})$ convergence rate for SABA ($T$ being the number of iterations and $N = n+m$ the total number of samples). This leads to a sample complexity that is the same as in the single-level case for SGD (respectively for SAGA).** Note that this improvement requires more regularity of $F$ and $G$: we need to assume $F$ twice differentiable with Lipschitz Hessian (instead of only once differentiable with Lipschitz gradient) and $G$ three times differentiable with Lipschitz continuous third-order derivative (instead of twice differentiable with Lipschitz Hessian). This additional regularity enables us to get the smoothness of $v^*$ and then adapt the inequality we get for $\delta_z^t$ to $\delta_v^t$. Nevertheless, these new regularity assumptions still hold when $F$ is the ordinary least squares loss or the logistic loss and when $G$ is a regularized least squares or logistic loss. We included the modified convergence rates, assumptions and proofs in the revised version.\n\n**Choice of the plots for the experiments:** Indeed, our experiments show the performance over time because we wanted to compare the total complexity of the methods in practice. But we agree that sample complexity is important, and that is why we added plots of performance with respect to the number of calls to individual gradients or Hessian-vector products in the Appendix.\n\n\n[1] Tianyi Chen, Yuejiao Sun, and Wotao Yin. Closing the Gap: Tighter Analysis of Alternating Stochastic Gradient Methods for Bilevel Problems. In *Advances in Neural Information Processing Systems (NeurIPS)*, 2021.", " This work studies fully single-loop stochastic algorithms for bilevel optimization problems where the inner problem is strongly convex and the inner and outer objectives are smooth and given by a finite sum. In particular, it provides a unified framework where the outer ($x$) and inner ($z$) variables, together with the variable used to solve a linear system in the bilevel gradient expression ($v$), evolve jointly. The authors propose 2 methods: SOBA and SABA. SOBA updates $x, z, v$ similarly to single-level SGD and uses two-timescale decreasing step-sizes for the inner and outer variables, while SABA uses variance reduction with constant step-sizes, similarly to SAGA, thus taking advantage of the finite-sum structure of the problem. The authors prove $O(T^{-2/5})$ and $O(1/T)$ stationary-point rates, where $T$ is the number of iterations, for SOBA and SABA respectively. Furthermore, SABA converges linearly to the solution on bilevel problems satisfying the PL-condition. Experiments on optimizing one regularization parameter per feature on IJCNN1 and data hyper-cleaning on MNIST show that SABA outperforms several other bilevel methods introduced recently.\n Strengths:\n1. The presented framework is clean and allows to analyze fully single-loop algorithms. Previous analyses, except for [1], usually did not consider updating the linear system variable online and with one descent step. Furthermore [1] directly considers variance reduction, while here an analysis of a simpler algorithm (SOBA) is also provided.\n2. 
This is the first work exploiting the finite-sum assumptions for this kind of bilevel problem and reaching the same rates as in single-level optimization.\n3. Experimental comparison is well done. SABA is promising also in terms of practical performance. \n4. Very well written.\n\nWeaknesses:\n1. Some possible discrepancy between the proposed algorithm and how it is actually implemented: the average for the variance reduction is a rolling average in practice, and this is not covered by the theory. \n2. Rate for SOBA is not optimal and it is not clear why.\n3. Experiments are quite small in scale and can be slightly improved. No experiment under the PL-condition. No plots showing how the performance varies w.r.t. the number of single-sample gradients and Hessian-vector products. \n\nSome of the points are expanded in the Questions section.\n\n[1] Li, Junyi, Bin Gu, and Heng Huang. "A fully single loop algorithm for bilevel optimization without hessian inverse." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 7. 2022.\n\n**Post authors’ response.**\n\nThe authors have thoroughly addressed my concerns in their revision and response. In particular, concerning weakness 2, they now achieve optimal rates for SOBA under additional smoothness assumptions which are still realistic. Therefore I accordingly increased the score from 7 to 8.\n I have the following questions and comments.\n\n- Authors say that the sum to compute the variance-reduced gradients in SABA is done in practice using a rolling average (Lines 177-178). Is this the case also in the presented experiments? If so, it should be stated more explicitly. How is the rolling average computed exactly? I could not find details on this in the Appendix and I think they should be included. The authors should comment on the fact that this approximation is not covered by the theoretical analysis and maybe do an experiment comparing the exact and approximate average.\n\n- The authors should add the finite-sum assumptions in the abstract to make it clearer, since it is not standard in the related literature on bilevel rates.\n\n- Can the analysis of SOBA be improved following [1]? The main difference between the two methods is that [1] uses a Neumann series approximation for the hypergradient, while SOBA updates the linear system variable online. What are the challenges in obtaining the rates in [1]? Explaining this could strengthen the paper.\n\n- Lines 294-299: on the “real” sample complexity of SABA and SOBA in full batch mode. The authors say that the step-sizes in SABA are proportional to the inverse of the number of examples, which makes the sample complexity of SABA and SOBA using all the examples (full batch mode) equal, while empirical results show that SABA performs better. Isn’t this because the analysis does not take into account the correlations between the single sample gradients? I believe the stronger this correlation, the stronger the gap between SABA and SOBA full batch.\n\n- Plots for the experiments show how the performance varies over time. It could be interesting to see how the performance varies also in terms of the number of single sample gradients and Jacobian-vector products, which is not dependent on the hardware used.\n\n- The authors present rates also under the PL-condition for the outer objective. Are there some interesting bilevel applications where this condition holds? 
If so, an experiment under this setting could improve the paper.\n\n- Figure 1 shows SOBA and SABA performance in a toy problem. I could not find any detail on the specific problem. I suggest either removing Figure 1 or adding some, even brief, details in the main body and/or in the Appendix. \n\n\n\nMinor:\n- Lines 59 and 61: I think there should be an expectation in the inequalities for the rates. \n\n- Lines 93-96: reference to Hong et al. Do the authors mean that only one element of the Neumann series is used? This is incorrect since they use more than one in that paper. I also found that paragraph a bit too fast.\n\n\n[1] Chen, Tianyi, Yuejiao Sun, and Wotao Yin. "Closing the gap: Tighter analysis of alternating stochastic gradient methods for bilevel problems." Advances in Neural Information Processing Systems 34 (2021): 25294-25307.\n Limitations are only partially addressed. The non-standard finite-sum assumption is properly discussed in the theoretical analysis section but could be anticipated in the abstract. The discrepancy between the theoretical and practical updates of SABA is mentioned briefly and not properly addressed (see the Questions section).\n", " This paper proposes a simple framework for solving bilevel optimization. The framework involves only three unbiased estimates in each iteration. The authors provide theoretical convergence guarantees for both the SGD version and the variance-reduction version based on SAGA. Experimental results are provided showing their superior performance. Strengths:\n1. The proposed framework is very simple and clear. The idea of reducing bilevel optimization to three unbiased estimates makes the theory more intuitive and straightforward.\n2. Many theoretical guarantees are provided, which makes the statements well-supported. To the best of my knowledge, the convergence analysis shows optimal convergence rates under the corresponding settings.\n\nWeaknesses:\n1. From what I can see, the key idea of this framework is the way of estimating the Hessian-vector product, i.e. 'v' (equation (5) in the text). However, as the authors mention in line 92, this idea is not novel.\n2. In the discussion of convergence, the comparison is made in terms of iteration complexity. But the experiments part only shows comparison in terms of running time.\n3. The experiments are only performed on one dataset for each application.\n ", " In this paper, the authors consider a finite-sum stochastic bilevel optimization problem:\n$$\\min_x F(x,z)\\qquad \\text{s.t.} \\qquad z = \\arg\\min_{z'} G(x,z')$$\nwhere $F(x,z) = \\frac{1}{m} \\sum_{i=1}^m F_i(x,z)$, $G(x,z) = \\frac{1}{n} \\sum_{j=1}^n G_j(x,z)$, and $G$ is strongly convex. \nFor $h(x) = F(x,z^*(x)), z^*(x):=\\arg\\min_{z'} G(x,z')$, it is known that the gradient of $h$ can be written as \n$$\\nabla h(x) = \\nabla_x F(x,z^*(x)) - \\nabla_{xz}G(z^*(x),x)\\nabla_{zz}G(z^*(x),x)^{-1}\\nabla_zF(x,z^*(x)).$$\nDue to the finite-sum feature of the objective function, it may not be easy to estimate the matrix inverse in the above formula. However, one should notice that $\\nabla_{zz}G(z^*(x),x)^{-1}\\nabla_zF(x,z^*(x)) = \\arg\\min_v \\frac{1}{2}v^T\\nabla_{zz}G(z^*(x),x)v - v^T\\nabla_zF(x,z^*(x))$. Based on this observation, the authors propose to update $v$ and $z$ by one step of gradient descent while updating $x$ by one step of ``gradient descent'' where the inverse-vector product is replaced with $v$. 
\n\nCompared to many previous algorithms, the proposed framework is matrix-inversion free and makes it very convenient to incorporate the SGD or SAGA variance reduction technique. \n\nThe reviewer thinks the proposed framework for stochastic bi-level optimization is convenient and flexible. Most importantly, it is conceptually simple and implementation friendly. However, the current work also has some room for improvement. For example, for a typical finite-sum/stochastic optimization problem, the efficiency measure should be sample complexity or oracle complexity, instead of the iteration complexity that is presented in this work. Finally, the sample complexity dependence of SABA on $m+n$ is linear, which is a bit confusing to me since SAGA usually improves the dependence to $(m+n)^{2/3}$.\n\nOverall, the paper does provide some novel and interesting ideas. However, the theoretical result does not justify the reason for using variance reduction. If the authors can answer this question, the reviewer is very willing to adjust to a higher rating. Strength \\# 1: The framework is new, is conceptually simple, and is very flexible in incorporating different variance reduction techniques. \nStrength \\# 2: The whole algorithm is Hessian-inverse free, which is easy to implement. (Though this is not the only work that is matrix-inverse free.)\nStrength \\# 3: The paper is very clear and well organized. Very well-written and easy to follow. \n\n\nWeakness \\# 1: The main theorems in the main paper only discuss iteration complexity instead of sample complexity. \nWeakness \\# 2: The sample complexity in the appendix for SABA is $O((m+n)\\epsilon^{-1})$, which seems exactly the same as for the full-batch deterministic version of the algorithm. Theoretically, this does not show any advantage of using the SAGA variance reduction scheme. \nI'm confused why SAGA doesn't improve the factor $(m+n)$ to $(m+n)^{2/3}$. Major issue. \n1. Please discuss the sample complexity in the paper, in particular the dependence on m and n. This is because when we use sample average approximation to construct the empirical objective function, $m,n$ are often very large (in fact they are $\\Omega(\\epsilon^{-1})$, where $\\epsilon$ is the accuracy for measuring $\\|\\nabla h(x)\\|^2$).\n2. The authors should carefully justify why the SAGA scheme does not provide any theoretical improvement over the full-batch method. \n\nMinor issue. \n1. The term ``global'' in the title is confusing; I'm not sure what this global means. \n2. In the appendix, in the table that lists the complexity of the algorithms, a ``)'' is missing for SABA.\n NA. This is a theoretical paper. " ]
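A note for readers following the bilevel discussion above: the sketch below illustrates the single-loop update pattern that the reviews and rebuttals refer to, where the inner variable $z$, the linear-system variable $v$, and the outer variable $x$ each take one gradient step per iteration. The toy quadratic bilevel problem, dimensions, and step sizes are illustrative assumptions made here, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, lam = 5, 4, 0.1
A, b = rng.standard_normal((d, p)), rng.standard_normal(d)

# Toy bilevel problem (an assumption for illustration):
#   inner: G(x, z) = 0.5*||z - A x||^2 + 0.5*lam*||z||^2   (strongly convex in z)
#   outer: F(x, z) = 0.5*||z - b||^2
x, z, v = np.zeros(p), np.zeros(d), np.zeros(d)
rho, gamma = 0.1, 0.05  # inner / outer step sizes

for t in range(3000):
    grad_z_G = (1.0 + lam) * z - A @ x          # nabla_z G(x, z)
    grad_z_F = z - b                            # nabla_z F(x, z)
    # One step on z (inner problem) and one step on v, which tracks the
    # linear-system solution v* = [nabla_zz G]^{-1} nabla_z F discussed above.
    z = z - rho * grad_z_G
    v = v - rho * ((1.0 + lam) * v - grad_z_F)  # nabla_zz G = (1 + lam) * I here
    # One step on x along D_x = nabla_x F - nabla_xz G @ v; for this G the
    # cross term nabla_xz G applied to v is -A^T v, and nabla_x F = 0.
    x = x - gamma * (A.T @ v)

# Exact hypergradient of h(x) = F(x, z*(x)) for this toy problem, as a check;
# its norm should be small after training.
z_star = A @ x / (1.0 + lam)
print(np.linalg.norm(A.T @ (z_star - b) / (1.0 + lam)))
```

A SAGA-style variance-reduced variant of the same loop replaces each stochastic direction with a memory-based estimate; a sketch of that estimator is given after the metadata arrays below.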
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "YCOZnHGI8sG", "FIu4aWzcClM", "6h1s93-EPg", "yhN9m0WX-4t", "1lz_0UFhUw4", "7Tky25mBlIn", "7pnZq1yuPCk", "nips_2022_wlEOsQ917F", "yhN9m0WX-4t", "1lz_0UFhUw4", "PITZtkbRMtY", "nips_2022_wlEOsQ917F", "nips_2022_wlEOsQ917F", "nips_2022_wlEOsQ917F", "nips_2022_wlEOsQ917F" ]
nips_2022_bMYU8_qD8PW
A Unified Model for Multi-class Anomaly Detection
Despite the rapid advance of unsupervised anomaly detection, existing methods require training separate models for different objects. In this work, we present UniAD that accomplishes anomaly detection for multiple classes with a unified framework. Under such a challenging setting, popular reconstruction networks may fall into an "identical shortcut", where both normal and anomalous samples can be well recovered, and hence fail to spot outliers. To tackle this obstacle, we make three improvements. First, we revisit the formulations of fully-connected layer, convolutional layer, as well as attention layer, and confirm the important role of query embedding (i.e., within attention layer) in preventing the network from learning the shortcut. We therefore come up with a layer-wise query decoder to help model the multi-class distribution. Second, we employ a neighbor masked attention module to further avoid the information leak from the input feature to the reconstructed output feature. Third, we propose a feature jittering strategy that urges the model to recover the correct message even with noisy inputs. We evaluate our algorithm on MVTec-AD and CIFAR-10 datasets, where we surpass the state-of-the-art alternatives by a sufficiently large margin. For example, when learning a unified model for 15 categories in MVTec-AD, we surpass the second competitor on the tasks of both anomaly detection (from 88.1% to 96.5%) and anomaly localization (from 89.5% to 96.8%). Code is available at https://github.com/zhiyuanyou/UniAD.
Accept
This paper is on a highly important topic, and makes solid contributions. Anomaly detection for multi-class datasets without class information is an underexplored area. Reviewers have appreciated the strong experimental results (especially on the important MVTec benchmark), the high-quality writing, and the explainability results besides accuracy, via a novel attention mechanism. On the flip side, there were concerns about the lack of deep analysis of the constituents of the method and about novelty (given that there are some recent papers with similar ideas). The scores were borderline and the authors have put significant effort into addressing the concerns of the reviewers. In particular, the extra ablation studies and comparisons with other relevant papers are quite helpful with regard to the convincingness of the ideas. Given all this, I support the acceptance of the paper. Please update your paper with the additional content you have provided in the responses below.
train
[ "lsvGaO9YteS", "HfGzyo4M-uo", "IikAOq6YFK", "vHhfv8g1gSb", "JevmHaSsyg", "cq6crNIlv7O", "yns44QgO4gy", "kRezj1MzM36", "HHKTn2Bsphm", "X4Ul9a5HURX", "lWrJbMMFGGB", "YN4ujK8aDgm", "Cp5IcunPYL2", "KhhcvZxEFE", "UiTaKu_cyes" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your valuable suggestions that help us improve the manuscript. We are glad that you appreciate the \"identical short\" problem studied in this work, which is our major focus. In the meantime, we also agree that our current presentation (*i.e.*, abstract and introduction) may give too much space to the solutions in solving the \"identical short\" issue, which slightly upstage the problem itself. As suggested, we will rephrase some sentences to highlight the *novel problem setting* as well as the *reasons on why it is challenging* in the next version, such that the readers can have a better understanding of the scope of this work.\n\nThank you again for your effort in the review and the discussion!", " Thanks for liking our work. We will release the code.", " Thanks to the author for the reply. Most of my concerns were addressed. Anomaly detection in images is a very interesting topic. I hope the author releases the code and bring progress to the community. ", " Many thanks to the authors for their detailed response. In my original review I raised a couple of criticisms, asking for (1) clarifications w.r.t. existing AD settings, (2) novelty of the approach, and (3) additional ablations. The authors have partially addressed these. I will first reply point-by-point, and then summarize regarding (1), (2), (3) below.\n\n*“The task setting studied in this work clearly differs from \"semantic AD”.”*\n\nI fully agree, which is why I wanted to encourage a detailed discussion in the manuscript.\n\n*“We have added the discussion of such differences in the revised version (Line 213).”*\n\nThank you for adding this discussion. Given the many recent submissions that proposed new AD settings, I believe a careful explanation of their differences (in particular with regards to the AD problem introduced here) enhances the paper.\n\n*“Using transformer for anomaly detection is not our focus. We choose transformer as the reconstruction model considering its great potential in preventing the model from learning the \"identical shortcut\" (please refer to Sec. 3.1 in the submission)”*\n\nFrom the abstract and instruction readers will in all likelihood get a different impression, as considerable focus is put on the different elements used in the transformer-based architecture; for example “we propose a feature jittering strategy” [L13] is highlighted in the abstract. To readers, this raises the curiosity what methods will be introduced throughout — unfortunately I maintain that the actual methodologies are a little underwhelming in terms of their technical novelty, given the space awarded to them in the manuscript.\n\nAfter reading other reviewer’s concerns and respective author’s comments, my concerns w.r.t. ablations have been accounted for, thank you for adding these.\n\n## Summary\n\nThe authors have addressed (1) and (3), some concerns regarding (2) novelty of the methods in the manuscript remain — please note the score has been increased.", " Dear reviewers and AC\n\nThanks a lot for your effort in reviewing this submission! We have tried our best to address the mentioned concerns/problems in the rebuttal. Feel free to let us know if there is anything unclear or so. We are happy to clarify them.\n\nBest,\nAuthors", " Dear reviewers and AC\n\nThanks a lot for your effort in reviewing this submission! We have tried our best to address the mentioned concerns/problems in the rebuttal. Feel free to let us know if there is anything unclear or so. 
We are happy to clarify them.\n\nBest,\nAuthors", " Dear reviewers and AC\n\nThanks a lot for your effort in reviewing this submission! We have tried our best to address the mentioned concerns/problems in the rebuttal. Feel free to let us know if there is anything unclear or so. We are happy to clarify them.\n\nBest,\nAuthors", " Thanks for your suggestion. We have already planned to merge some necessary materials from the supplementary material into the main paper. Currently, we leave the revised version and the supplementary material in their current form to help the reviewers and ACs *track the revision* (*i.e.*, newly added materials).", " Thank you for the feedback.\n\nMy concerns are addressed in the revised version of the paper.\nThe authors show that 'The unified case is more challenging and hence magnifies the identical shortcut problem' with the additional experiment in S1. I think this is an important message of the paper and should be in the main text with detailed discussions.\nWithout a clear demonstration of the difference in the identical shortcut problem between unified and separate anomaly detection settings, the contributions claimed in this paper are less plausible.", " **Q1: Differences from existing works involving multiple classes.**\n\nThe task setting studied in this work clearly differs from \"semantic AD\".\n\nFirst, we focus more on the industrial anomaly detection dataset, MVTec-AD, which is of more practical use. Unlike CIFAR-10, _each category has normal and anomalous samples_ in MVTec-AD. We would like to model the _joint distribution of normal samples across all categories_. It requires the model to learn \"what normal samples from each category look like\" instead of \"what categories are normal\". The latter is the main focus of \"semantic AD\".\n\nSecond, we also differ from prior works [1][2][3] regarding the CIFAR-10 task setting.\n* [1] studies the _one-versus-many_ setting, which treats 1 class as normal and the remaining 9 classes as anomalous.\n* [2] studies the _many-versus-one_ setting, which treats 9 classes as normal and the remaining class as anomalous.\n* [3] studies both _one-versus-many_ and _many-versus-one_ settings.\n* We study the **_many-versus-many_** setting, which treats 5 classes as normal and the other 5 classes as anomalous. We use such a setting to simulate the real scenario, where _both normal and anomalous samples contain multiple classes_.\n\nWe have added the discussion of such differences in the revised version (Line 213).\n\n[1] Deep Semi-Supervised Anomaly Detection. Ruff _et al._. ICLR'20.\n\n[2] Detecting Semantic Anomalies. Ahmed and Courville. AAAI'20. \n\n[3] Transfer-Based Semantic Anomaly Detection. Deecke _et al._. ICML'21.\n\n**Q2: Comparison with transformer-based competitors.**\n\nUsing transformers for anomaly detection is not our focus. We choose a transformer as the reconstruction model considering its great potential in preventing the model from learning the \"identical shortcut\" (please refer to Sec. 3.1 in the submission). Concretely, we find that the _learnable query embedding_ is essential for avoiding such a shortcut but is seldom explored in existing transformer-based approaches [4][5][6]. As shown in the table below, after introducing only one query embedding, our baseline already outperforms existing alternatives by a sufficiently large margin in the unified setting. Our proposed three components further improve our _strong baseline_. 
Recall that all three components are proposed to prevent the model from directly outputting the inputs.\n\nIn short, the baseline is not a previous approach, but instead the most straightforward modification based on our revisiting of the \"identical shortcut\" issue. We have added the clarification and the comparison in the revised supplementary material (Sec. E2).\n\n| Method | Loc. AUROC (unified / separate) | Det. AUROC (unified / separate) | 1 query | layer-wise query |\n| ---- | ---- | ---- | ---- | ---- |\n| InTra [4] | 70.6 / 96.6 | 65.3 / 95.0 | × | × |\n| VT-ADL [5] | 64.4 / 82.0 | 55.4 / 78.7 | × | × |\n| AnoVit [6] | 68.4 / 83 | 69.6 / 78 | × | × |\n| Ours (baseline) | 92.8 / 95.8 | 87.6 / 94.7 | ✓ | × |\n| Ours | **96.8** / 96.6 | **96.5** / 96.6 | × | ✓ |\n\n[4] Inpainting Transformer for Anomaly Detection. Pirnay and Chai. International Conference on Image Analysis and Processing, 2022.\n\n[5] VT-ADL: A Vision Transformer Network for Image Anomaly Detection and Localization. Mishra _et al._. International Symposium on Industrial Electronics, 2021.\n\n[6] AnoViT: Unsupervised Anomaly Detection and Localization With Vision Transformer-Based Encoder-Decoder. Lee and Kang. IEEE Access, 2022.\n\n**Q3: More ablation experiments.**\n\nPlease refer to **Q2 to Reviewer JpFX**. Each of the three components, _i.e._, layer-wise query embedding, neighbor masked attention (NMA), and feature jittering (FJ), could help the model not to learn the \"identical shortcut\" and hence bring considerable improvement on its own.\n\n**Q4: Societal impacts.**\n\nThanks. Anomaly detection may be used for video surveillance, which may infringe on personal privacy. We have updated the potential societal impacts in the revised paper (Line 313). \n", " **Q1: Introduction to evaluation metric (AUROC).**\n\nArea Under the Receiver Operating Curve (AUROC) follows the standard evaluation protocol for the MVTec-AD dataset. It is independent of the threshold used to detect anomalies. To obtain the receiver operating curve, the true positive rate is defined as the percentage of pixels (for anomaly localization) or images (for anomaly detection) that are accurately identified as anomalies. The false positive rate is defined as the percentage of pixels or images that were wrongly classified as anomalies. We have included the introduction of AUROC in the revised version (Line 216).\n\n**Q2: More ablation experiments.**\n\nThanks. We provide the full ablation experiments regarding layer-wise query embedding, neighbor masked attention (NMA), and feature jittering (FJ) below. We can tell that all three components boost the performance: layer-wise query with a 3.7\\% gain (92.8\\% to 96.5\\%), NMA with a 3.5\\% gain (from 92.8\\% to 96.3\\%), and FJ with a 3.0\\% gain (from 92.8\\% to 95.8\\%). This demonstrates that all these designs can help the model _not_ to learn the \"identical shortcut\". Furthermore, combining all three components brings the best performance. This table is also included in Tab. S2 of the revised supplementary material (Page 4). \n\n| w/o query | 1 query | layer-wise query | NMA | FJ | Loc. AUROC | Det. 
AUROC |\n| ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n| ✓ | - | - | - | - | 79.4 | 69.5 |\n| - | ✓ | - | - | - | 92.8 | 87.6 |\n| - | ✓ | - | ✓ | - | 96.3 | 96.1 |\n| - | ✓ | - | - | ✓ | 95.8 | 95.0 |\n| - | ✓ | - | ✓ | ✓ | 96.6 | 96.2 |\n| - | - | ✓ | - | - | 96.5 | 95.0 |\n| - | - | ✓ | ✓ | - | 96.5 | 95.8 |\n| - | - | ✓ | - | ✓ | 96.2 | 94.9 |\n| - | - | ✓ | ✓ | ✓ | **96.8** | **96.5** |\n\n**Q3: Feature jittering is a common practice and lacks novelty.**\n\nAlthough feature jittering (FJ) is simple, it _rightly fits our motivation_, which is to prevent the model from learning the \"identical shortcut\". Concretely, it urges the model to recover the correct information from noisy inputs, so that the \"identical shortcut\" is no longer optimal. In other words, the model is forced to learn semantic knowledge instead of directly outputting the inputs. The ablation studies in **Q2** also verify the effectiveness of FJ.\n\nOur major contributions lie in (1) defining a more challenging task setting for anomaly detection, (2) analyzing how the \"identical shortcut\" harms the performance of anomaly detection, and (3) proposing three feasible solutions. We believe this study could inspire more explorations along this direction.\n\n**Q4: Activations in MLP.**\n\nThanks. The revisiting in Sec. 3.1 is a _rough_ analysis of the \"identical shortcut\" problem. Providing a rigorous analysis could be far more challenging and is not within the main scope of this work. But in our experiments shown in Fig. 2, all MLPs, CNNs, and Transformers have non-linear activations. We can still observe the phenomenon that \"loss gets smaller yet the performance drops\". This empirically verifies our claim. We have revised the paper (Line 121) to tell the readers that the analysis is not rigorous.\n\n**Q5: Analysis of DRAEM.**\n\nDRAEM relies on some simulated anomalies generated with Perlin noise. These pseudo-anomalies are very similar to the actual anomalies of some categories, like Zipper and the 5 Texture categories (which, _e.g._, usually present color perturbations or texture discontinuities). However, for some categories, like Capsule, Metal Nut, and Transistor (which, _e.g._, usually present structural anomalies), the pseudo-anomalies are clearly different from the real anomalies. Therefore, in the unified case, all these categories are trained together with the same simulation strategy, making the model prone to the easy-to-learn ones.\n\n**Q6: Complexity comparison.**\n\nWith the image size fixed at $224 \\times 224$, we compare our UniAD with all competitors regarding the inference FLOPs and learnable parameters in the table below. We can tell that the advantage of our approach does not come from a larger model capacity. This table is also included in Tab. S7 of the revised supplementary material (Page 6).\n\n| | US | PSVDD | PaDiM | CutPaste | FCDD | MKD | DRAEM | Ours |\n| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | \n| FLOPs (G) | 60.32 | 149.74 | 23.25 | 3.65 | 13.16 | 32.11 | 245.15 | 6.46 |\n| Learnable Params (M) | 9.55 | 0.41 | 950.36 | 13.61 | 4.51 | 0.34 | 69.05 | 7.48 |\n", " **Q1: Relation between the \"identical shortcut\" problem and the unified case.**\n\nThanks. The \"identical shortcut\" issue is a general problem in auto-encoder networks. As a result, all reconstruction-based anomaly detection methods would face this risk. However, under the unified case, where the distribution of normal data is more complex, the \"identical shortcut\" problem is magnified. 
Intuitively, learning a unified model that can reconstruct all kinds of objects requires the model to work extremely hard to learn the joint distribution. From this perspective, learning an \"identical shortcut\" appears to be a far easier solution.\n\nIn Fig. 2a of the submission, we aim to visualize the \"identical shortcut\" issue, where the loss becomes smaller yet the performance drops. We conduct the same experiment under the separate case. As shown in Fig. S1 of the revised supplementary material (Page 2), the accuracy keeps growing as the loss gets smaller. This helps reveal the relation between the \"identical shortcut\" problem and the unified case, which is that _the unified case is more challenging and hence magnifies the \"identical shortcut\" problem_. Therefore, although our approach is not specially designed for the unified case, such a challenging task clearly highlights our strengths over existing alternatives.\n\n**Q2: Performance gain from the unified case to the separate case.**\n\nWe do not use \"label\" information for the separate case. For the separate case, we train 15 separate models, each for a single category, following prior works. Consequently, the separate models are learned _in exactly the same way_ as the unified model, but _on some easier data distributions_. From this perspective, our approach does _not_ suffer from a performance drop when the distribution to learn gets more complex. By contrast, existing alternatives only perform well on simple distributions yet fail to handle such a challenging task. Thus, instead of saying there is no performance gain from the unified case to the separate case, what we want to express is that _there is no performance drop from the separate case to the unified case_. \n\n**Q3: Training setup.**\n\nFor both the unified case and the separate case, the model learns on a collection of unlabeled images from scratch. The only difference is whether the image collection comes from one object category or multiple categories. We have clarified this in the revised version (Line 227).", " The paper tackles anomaly detection of multiple classes without class labels. In other words, the proposed method learns the normality of multiple classes at once without the need for class label information. The paper shows that reconstruction-based anomaly detectors learn an ‘identical shortcut’ and introduces techniques to prevent this phenomenon. To this end, the paper proposes three techniques: the use of query embedding in multiple layers of a transformer, neighbor masked attention and feature jittering. The experiments are conducted on the MVTec and CIFAR10 datasets. ## Strength\n\n### Problem setup\nAnomaly detection on a multi-mode dataset (multi-class dataset without class information) is a relatively underexplored area. Most of the out-of-distribution detection papers assume class information is given. In addition, this paper targets anomaly localization tasks.\n\n### Analysis and extensive ablation study support the idea\n- Section 3.1 shows the performance degradation over the training epochs, showing that reconstruction-based models’ performance is unstable during training (the phenomenon of identical shortcuts).\n- Section 4.5 includes extensive ablation studies supporting the design of the method and the sensitivity to each component and hyperparameter. 
Performance is insensitive to most hyperparameters, showing less than a ~1% gap.\n\n### Strong performance\nThe proposed method shows a notable performance gap over competing methods in a unified scenario on MVTec and CIFAR-10.\n\n\n## Weakness\n\n### Lack of analysis\n\n- The design of the method is not targeted at the unified case but is effective. Why?\nThe idea of the proposed method uses general ML techniques not tailored for unified (multi-class data without label information) cases, yet effective. What would be the main reason for this? How is the problem of learning the identical shortcut relevant to the performance in unified anomaly detection scenarios?\n\n- Why does the proposed method not gain performance when label information is added? (Separate case) In Tables 1 and 2, the proposed method does not improve much by adding label information. How are the proposed method and competing methods trained for the unified and separate cases? In the separate case, are they trained on the whole dataset and finetuned for each class-wise dataset? It is unclear how the models are trained in each scenario. Limitations are addressed in Section 5.", " This paper aims to learn a unified framework for detecting multi-class anomalies. The anomaly detection model is only trained on normal data, so the so-called “identical shortcut” phenomenon may occur. To solve this problem, this paper proposes the following strategies: 1) layer-wise query decoder, 2) neighbor masked attention, 3) feature jittering. The experimental results on the MVTec-AD and CIFAR-10 datasets show that the proposed method can alleviate the “identical shortcut” phenomenon.\n Strengths\n1. This paper proposes a novel neighbor masked attention.\n2. This paper is well organized, easy to understand, and clearly written.\n3. Good results are achieved on the MVTec-AD and CIFAR-10 datasets.\n\nWeaknesses\n1. The introduction to the evaluation metric (AUROC) is missing.\n2. The ablation experiments in Table 4 are incomplete. For example, under 1 q., the results in the case of only NMA and FJ are missing; under Layer-wise q., the results in the case of only NMA are missing; under Layer-wise q., the results in the case of only FJ are missing.\n3. Feature jittering is a common practice and lacks novelty.\n 1. On lines 125-126, the author mentions that the model may learn a trivial solution, causing anomaly detection to fail. But in MLPs, this should not be the case due to the presence of nonlinear activations. Can the authors explain this further through experiments or visualizations?\n2. In Table 2, for the categories Capsule and Transistor, the results of the method DRAEM (50.5 and 64.5) are significantly smaller than those of the method in this paper (98.5 and 97.9), but for the category Zipper, the results of the method DRAEM (98.3) are higher than those of the method in this paper (96.8). What are the reasons for the above phenomenon? Please analyze this.\n3. Please analyze the complexity of the method in this paper compared with the one-class-one-model method.\n The authors have adequately addressed the limitations and potential negative societal impact of their work. ", " The authors propose the learning of multi-class decision boundaries for the task of anomaly detection (AD) over multiple object classes. For this, they employ reconstruction-based scores obtained from a transformer network, modified with a couple of simple tricks, such as masking neighboring points in the attention map, and increasing the capacity of the decoder. 
Results on MVTec show this is a promising direction for AD over multiple object classes. # Strengths\n\nTo enable their transformer-based model to work with the task of adopting a complex normal distribution, the authors come up with some modifications, in particular neighbor-masked attention, which they insert directly into the architecture to replace the default attention layer. While the authors employ other approaches such as \"feature jittering\", these correspond to a simple addition of Gaussian noise during the input stage, and given its simplicity, I would suggest the authors remove this from the abstract etc., as it doesn't present any significant novelty.\n\nThe experimental results are the strong point of this work, with outstanding performance on MVTec-AD and CIFAR10 (which is a much less relevant benchmark though, in particular as more challenging ones have recently been used, e.g. CIFAR100/STL10, cf. works listed below).\n\nMoreover, the paper is well-written and easy to follow.\n\n# Weaknesses\n\nThe proposed modifications are relatively straightforward; in particular, the ablations in Table 4 indicate that a vanilla transformer would end up outperforming the existing state of the art on MVTec-AD. Given there are recent works on using transformers for AD, e.g. AnoVIT (as pointed out by the authors on page 3, lines 99-101), this somewhat limits the novelty of the proposed method.\n\nSurprisingly, from the ablation in Table 4 it appears as if feature jittering (FJ) boosts performance nearly as much as neighborhood-masked attention (NMA). I would suggest including pairwise couplings (e.g. NMA+FJ) to shed light on which of these are required in unison, or whether they obtain similar outcomes.\n\nMoreover, there have been various works in the recent past that attempt to perform anomaly detection over classes that contain more than a single object class (with no labels assumed present to distinguish the objects). A discussion of these recent works, and how they relate to/differ from the way in which multiple classes are treated here, is missing, e.g.:\n- \"Deep Semi-Supervised Anomaly Detection\", Ruff et al., ICLR 2020\n- \"Detecting Semantic Anomalies\", Ahmed & Courville, AAAI 2020\n- \"Transfer-Based Semantic Anomaly Detection\", Deecke et al., ICML 2021\nThese manuscripts investigate the presence of multiple classes in the normal distribution. While the focus is different (latent classes), they should be compared against in this work.\n\n# Update\n\nThe authors have addressed points raised in my original review by adding ablations and contrasting their proposed setting to existing ones. Some concerns remain around the novelty of the proposed approach.\n\nThe score has been increased to reflect the newly incorporated changes. This paper presents strong experimental results; however, it appears to target a somewhat loose definition of multi-class AD. I would suggest the authors work to improve the clarity of their manuscript.\n\nIn particular:\n- clarify how the proposed multi-object AD task of \"unified AD\" fits in or differs from existing works that assume multiple objects in the normal distribution (e.g. \"semantic AD\").\n- clarify the novelty of the proposed transformer components, which appear highly incremental. Improvements could consist of more direct comparisons (and ablations) against transformer-based competitor models. Limitations are discussed in Section 5; potential societal impacts (say, anomaly detection for tasks such as video surveillance) have not been discussed." ]
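To make the two UniAD components debated above concrete, here is a minimal PyTorch-style sketch of a neighbor mask and of feature jittering. Both are schematic reconstructions from the descriptions in this thread; the neighborhood size, the norm-proportional noise scale, and all function names are illustrative assumptions, not the authors' released implementation.

```python
import torch

def neighbor_mask(h, w, k=1):
    # Boolean (h*w, h*w) mask; True marks key positions a query position must
    # NOT attend to: everything within Chebyshev distance k of itself,
    # including the position itself, so a token cannot simply copy itself.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pos = torch.stack([ys.flatten(), xs.flatten()], dim=1).float()
    cheb = (pos[:, None, :] - pos[None, :, :]).abs().max(dim=-1).values
    return cheb <= k

def masked_attention(q, k_, v, mask):
    # Standard scaled dot-product attention with the neighbor mask applied.
    logits = q @ k_.transpose(-2, -1) / q.shape[-1] ** 0.5
    logits = logits.masked_fill(mask, float("-inf"))
    return torch.softmax(logits, dim=-1) @ v

def feature_jitter(feat, alpha=20.0):
    # Additive Gaussian noise whose scale follows the per-token feature norm;
    # the reconstruction target stays the clean feature, so the identity
    # mapping is no longer a zero-loss solution.
    scale = alpha * feat.norm(dim=-1, keepdim=True) / feat.shape[-1]
    return feat + torch.randn_like(feat) * scale
```

Both pieces close the "identical shortcut" in the same way: the network can no longer reproduce its input verbatim, neither through attention nor through the reconstruction loss.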
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "vHhfv8g1gSb", "IikAOq6YFK", "lWrJbMMFGGB", "X4Ul9a5HURX", "Cp5IcunPYL2", "KhhcvZxEFE", "UiTaKu_cyes", "HHKTn2Bsphm", "YN4ujK8aDgm", "UiTaKu_cyes", "KhhcvZxEFE", "Cp5IcunPYL2", "nips_2022_bMYU8_qD8PW", "nips_2022_bMYU8_qD8PW", "nips_2022_bMYU8_qD8PW" ]
nips_2022_0tG59j2efs
Learning from Future: A Novel Self-Training Framework for Semantic Segmentation
Self-training has shown great potential in semi-supervised learning. Its core idea is to use the model learned on labeled data to generate pseudo-labels for unlabeled samples, and in turn teach itself. To obtain valid supervision, active attempts typically employ a momentum teacher for pseudo-label prediction yet observe the confirmation bias issue, where the incorrect predictions may provide wrong supervision signals and get accumulated in the training process. The primary cause of such a drawback is that the prevailing self-training framework amounts to guiding the current state with previous knowledge, because the teacher is updated with the past student only. To alleviate this problem, we propose a novel self-training strategy, which allows the model to learn from the future. Concretely, at each training step, we first virtually optimize the student (i.e., caching the gradients without applying them to the model weights), then update the teacher with the virtual future student, and finally ask the teacher to produce pseudo-labels for the current student as the guidance. In this way, we manage to improve the quality of pseudo-labels and thus boost the performance. We also develop two variants of our future-self-training (FST) framework through peeping at the future both deeply (FST-D) and widely (FST-W). Taking the tasks of unsupervised domain adaptive semantic segmentation and semi-supervised semantic segmentation as instances, we experimentally demonstrate the effectiveness and superiority of our approach under a wide range of settings. Code is available at https://github.com/usr922/FST.
Accept
This paper introduces an approach for reducing confirmation bias during self-training for semantic segmentation, by “learning from the future”, i.e. updating the teacher at a given timestep in self-training with a virtually updated version of the student, without actually using the gradients to update the student yet. Overall, reviewers were enthusiastic about the paper, finding the proposed method to be simple but interesting and of broad utility, and the paper well-written. The rebuttal responses seemed to address most questions and concerns, though there are some remaining weaknesses, such as the fact that the approach adds additional time/computation cost while the performance advantage versus standard self-training decreases with additional training iterations. However, on balance I agree with reviewers that the strengths of the paper outweigh the weaknesses and recommend acceptance.
train
[ "GFDOgDaY-OL", "FWkEujpdUbt", "7JnxcfAN9W", "p8Z_5mRVZnS", "1s3OqRWvXnm", "I2MzVapZqfm" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **Q1: A more convincing clarification on the motivation.**\n\nFirst, we observe that, although the pseudo-labels are noisy during training, the performance roughly gets better, which means *more accurate predictions*.\nMotivated by this, we wonder if it is possible to use the future state to provide more reliable pseudo-labels for the current state, and hence boost the performance.\nSuch an idea clearly distinguishes our approach from the existing ST framework.\n\nAs for the \"cached\" model weights, they are primarily used for the teacher exploration.\nAfter getting a more reliable teacher, we can use it to better supervise the student, which causes an actual update instead of caching.\n\n\n\n**Q2: The necessity of the newly introduced hyper-parameter.**\n\nThe newly introduced hyper-parameter $\\mu'$ aims to balance the contribution of the *current states* and the *virtual future states* to the teacher updates.\nIt increases the flexibility of the method.\nWe provide ablations on $\\mu'$ in Tab. 4 and choose $\\mu' = 0.999$ in practice considering the performance mean and variance. Following the suggestion, we provide more experiments on $\\mu'$, including linearly increasing from 0.9 to 0.99999, linearly decreasing from 0.99999 to 0.9, and setting it as a learnable hyper-parameter.\nThe results are shown below and we find that fixing it as 0.999 performs best among all settings.\n\n| Setting | mIoU |\n| --------------- | :---: |\n| Linear Increase | 56.79 |\n| Linear Decrease | 58.08 |\n| Learnable | 58.87 |\n| Fixed (0.999) | 59.81 |\n\n\n\n\n\n**Q3: Observations or comments on the combination of FST-D and FST-W.**\n\nThanks.\nTraining a combination of FST-D and FST-W can be time-consuming because it takes even more time (*i.e.*, depth times width) for teacher exploration.\nAs suggested, we provide some preliminary results along this direction.\nAt each training iteration, we explore $K=3$ steps deeply (*i.e.*, FST-D), and the ensemble of $N=3$ explorations using different data batches (*i.e.*, FST-W).\nThe results are listed below, where our FST-D+W achieves the best performance with even fewer student updates.\nThis table is also included in Tab. S7 of the *revised supplementary material* (Page 9).\n\n| Method | mIoU (4k) | mIoU (8k) | mIoU (12k) | mIoU (28k) | mIoU (40k) |\n| :------ | :-------: | :-------: | :--------: | :--------: | :--------: |\n| ST | 45.55 | 44.97 | 50.54 | 53.47 | 55.99 |\n| FST-D | 50.08 | 54.35 | 57.12 | 58.77 | 59.82 |\n| FST-W | 51.00 | 54.32 | 56.78 | 57.96 | 59.23 |\n| FST-D+W | 54.48 | 57.27 | 58.45 | **61.49** | - |\n\n\n\n**Q4: Possibility of including more methods for the experimental comparison (more comparisons with semi-supervised methods).**\n\n\n\nWe have already provided more comparisons with state-of-the-art semi-supervised methods in the *supplementary material*.\nPlease refer to Tab. S6 on Page 5 for details.\n\n\n**Q5: Notations in the Fig. 3.**\n\nThe notations in Fig. 3 are correct, and follow the paradigm of [1]. Concretely, the student model is trained on both labeled data $\\\\{(x_l, y_l)\\\\}$ and unlabeled data $\\\\{(x_u)\\\\}$. The teacher model is momentum updated with the student, and provides pseudo-labels $\\hat{y}_u$ for the student as the supervision.\n\n[1] *Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Tarvainen and Valpola. NeurIPS'17*\n\n\n\n\n**Q6: Potential negative societal impact.**\n\nThanks. In Sec. 
H of the *supplementary material*, we have discussed some potential negative societal impacts.\nWe have also included more discussion as suggested in the revised version.\n\n", " **Q1: Some of the figures and comparisons might be somewhat misleading.**\n\nThese figures show performance comparisons under *the same number of updates* of the student.\nThis is a fair comparison because the student is the final model used for evaluation.\nWe have revised the captions of these figures so that readers can better understand the difference between our FST and the baseline model.\n\nYou are correct that we spend more time in the teacher exploration; however, the main difference between our FST and the conventional ST is that our teacher could provide more accurate supervision signals (by acquiring knowledge from the future), instead of simply extending the training time.\nAs suggested, we include an iteration-to-iteration comparison in Fig. S5 of the *revised supplementary material* (Page 9).\nWe can tell that the performance of ST even suffers from a longer training time (note that we report its best performance instead of the final performance).\nBy contrast, our FST consistently benefits from a longer training time.\n\n\n\n**Q2: Comparison with longer training baselines.**\n\nThanks.\nA longer training strategy indeed can help boost performance, but longer is not necessarily better.\nThe major problem in the existing ST framework is that *wrong supervision signals may get accumulated* in the training process, which is also known as the \"confirmation bias\" issue.\nAs a result, the performance would even *drop* after sufficiently long training (please refer to **Q1**).\nBy involving the \"future\" knowledge, our FST manages to alleviate such a problem to some extent, because the teacher could provide *more accurate pseudo-labels*.\nIn such a case, our FST can benefit from even longer training and obtain better performance.\nThis property is beyond the capability of the conventional ST.\n\n", " **Q1: Does the original EMA-based self-training involve the current student information and can it simplify the computation?**\n\nThanks. In Eq. (3), \"$\\theta\\_{t} -\\gamma\\nabla_{\\theta} \\left[\\mathcal{L}(g\\_{\\theta_t}(x_l),y_l)+\\lambda\\mathcal{L}(g\\_{\\theta\\_{t}}(x_u),\\hat{y}\\_u|\\phi\\_{t})\\right]$\" represents a virtual future student, $\\theta_{t+1}$, which means the teacher is only updated with the virtual $(t+1)$-step student via Eq. (1).\nThis is the most straightforward version of FST.\nIn the improved FST in Eq. 
(4), we update the teacher with *both* the $t$-step student and the $(t+1)$-step student, to help the teacher gain more knowledge.\nYour understanding is correct and we have exactly the same implementation as you have suggested.\nWe have clarified this in the revised version.\n\n\n\n**Q2: About the implementation of FST-D.**\n\nAt each iteration $t$, we first make a \"copy\" of the current student, $\\theta_t$, and then conduct virtual exploration to obtain future states, *i.e.*, update the \"copy\" for $K$ steps under the supervision of the teacher.\nDuring such a virtual exploration, the teacher co-evolves with the \"copy\", while the original student, $\\theta_t$, remains untouched.\nFinally, the advanced teacher is used to provide pseudo-labels for the original student, $\\theta_t$, and the update is performed *only once*.\n\n\n\n**Q3: Why can maintaining an ahead model save training time?**\n\nFor instance, suppose we maintain an ahead student model $\\theta'$, which is trained in parallel with the original student $\\theta$ but stays $K$ steps ahead of it.\nWe can directly obtain the virtual future model states from the ahead model $\\theta\\_{t+K}'$ to guide the training of the current student $\\theta_t$.\nSince the ahead model is always $K$ steps ahead of the student, we can *skip the virtual exploration*, and instead store the model weights $\\theta'\\_{t+1},...,\\theta'_{t+K-1}$. That is how we trade space for time. We leave this as a future study as mentioned in the paper.", " This paper introduces a novel extension of the mean-teacher method for semi-supervised learning. Rather than only using previous knowledge, this framework looks ahead to incorporate future information to update the teacher model, which improves the quality of pseudo labels to mitigate the confirmation bias issue. The authors also propose different variants, like improved-FST, FST-D and FST-W. Comprehensive experiments demonstrate the effectiveness of the proposed pipeline. \n - This paper presents a novel framework that extends the prevailing mean-teacher method by learning from the future. The paper is technically sound. Different variants are also designed. \n- The authors conducted extensive experiments for various tasks, like semi-supervised semantic segmentation and domain adaptation. The results on different datasets show the effectiveness of the introduced pipeline.\n- The paper is well-written and well-organized. It provides enough information for reproduction. \n- For the implementation, some parts are not clear. There are some questions posed in the questions part below. \n - The improved-FST is proposed since Eq. (3) discards the contribution of the student weight at time t (line 153). If we let the teacher model equal the student model at the very beginning (time t=0) and evolve the training following the EMA in Eq. (1), I wonder if, in this case, it could already involve the current student weights information? It may simplify the computation a little. Correct me if I am wrong. \n- I am not very clear about the implementation of FST-D. Do we update the student weights when looking ahead K steps? For instance, let’s say K=3, so we are looking 3 steps ahead (t+1, t+2, t+3). When we acquire the gradients at time t+1, do we update the student model with these gradients, and then update the teacher model? If we update it, as shown in Eq. (5), it actually increases the number of updates to the student model (maybe another student model, but the cost of the update increases). 
However, in line 334, it says the number of updates remains the same. Also, in line 336, it says that it trades space for time. Even if we maintain only one ahead student model, for each time t, the original student model is updated, and we need to run this ahead student model over the next 3 steps to provide future information. In this case, I am curious why it saves time. I am not sure whether I understand it accurately; it would be better if the authors clarified these questions. \n\nTypo: \n- Line 296: ‘of’ appears twice\n Limitations and potential social impacts have been discussed in this work.\n", " In this paper, the authors suggest a simple and interesting modification to the student-teacher self-training technique, where instead of having an exponential moving average teacher that is only reflecting the past states of the student model, they base it on a future hypothetical state of the model to prevent the confirmation bias on lower-quality pseudo labels. This is achieved by one of the two variations that the authors suggest, referred to as deep future and wide future, by either keeping a copy of the student model that is updated for multiple iterations over which the teacher model is updated as a moving average, or by ensembling multiple updated student versions (with different data batches). The model is evaluated on unsupervised domain adaptation and semi-supervised learning, and the empirical results show that the model improves as compared to the conventional teacher-student method. Strengths:\n* The paper aims to improve the student-teacher self-training scheme, which is among the most frequently used semi-supervised techniques, and therefore contributions here could be of interest to the community. \n* The empirical evaluations are fairly extensive. The method is compared against the baseline student-teacher model, state-of-the-art models in unsupervised domain adaptation, and multiple different variations in the ablation study. Besides, experimentations are also expanded to more modern baseline network architectures, making the conclusions more directly usable.\n* The paper is reasonably well-written and easy to follow.\n\nWeaknesses:\n* Some of the figures and comparisons might be somewhat misleading. More specifically, I am referring to Figures 2 and 4(a), where an iteration-to-iteration comparison is done among the proposed futuristic and baseline student-teacher methods. Even though the authors argue (at the end of 4.4) that the comparisons are fair as the two models are getting the same number of updates for the student model, actually each iteration of updating the student model implies multiple updates to a copy of the student model ($g_\\tilde{\\theta}$), which is e.g. K times more compute-intensive in the deep-future version. This is specifically questionable when iteration-to-iteration comparisons are made. \n* As another note, but on a relevant topic, I like that the authors bring some comparisons between the proposed multi-iteration student-teacher model and models that are trained for longer. But I find it insufficient for dispelling a major question/hesitation about whether a significant part of the advantage is coming from training for longer; specifically, we observe that for both those two experiments the model's advantage is less visible (less than a percent) while the two models are computationally roughly equivalent and therefore fairer to compare. I would have liked to see more evidence against this question. 
As stated above, a major question is whether the suggested model consistently delivers better performance when compared to baseline models with an equal total number of gradient descent updates. The major limitation, i.e., the higher computational cost of training, is discussed.", " This paper presents a new self-training framework for semantic segmentation, by proposing the idea of \"learning from future\". Specifically, instead of using the current step to generate pseudo-labels and train the student, the teacher was updated with a virtual future student and supervised the following training accordingly. The proposed future-self-training (FST) framework was validated on the semantic segmentation task with extensive experimental analysis. The main contributions are the proposed FST framework and the experimental analysis.\n\n\n**Post-rebuttal**\n\nThanks to the authors for their response and the additional experiments, which addressed most of my concerns. There are some remaining concerns I would suggest the authors address in their final version:\n\n1) the motivation still sounds not that convincing to me; the \"roughly better performance\" itself as an observation does not sound like a convincing motivation; \n2) in the additional experiment on the newly introduced hyper-parameter, the result showed that the \"learnable\" setting performs much worse (58.87 vs. 59.81) than the \"fixed\" setting, which is not that convincing to me. Theoretically, if 0.999 is the \"optimal\" value, a learnable setting should be able to learn this value, instead of a way worse one. It would be better if the authors could provide an explanation for this in their revised version.\n\nOtherwise, the paper is interesting and could be a good contribution to the community. As a result, I would keep my original positive rating. **Strengths**\n\n\\+ The idea of using a \"future student\" to update the current teacher and then using this future teacher to supervise the current student for the self-training framework is interesting and could potentially inspire follow-up research.\n\n\\+ The authors perform thorough ablation studies, in which each of the proposed components was well analyzed. This could be useful for follow-up research to leverage the design of the proposed method.\n\n\\+ The proposed method performs better than existing alternative methods for the unsupervised domain adaptation task.\n\n\\+ The paper is generally well-written and easy to follow.\n\n\n**Weaknesses**\n\n\\- Although the idea of \"learning from future\" looks new, the motivation behind it is a bit unclear. Why is learning from the future beneficial, especially considering this \"future\" is also derived from the current state and the \"history\"? On the other hand, it also increases the ambiguity if the model (weights) is just cached without updating.\n\n\\- Please double-check if the variables were correctly labeled in Fig. 3. From the description, the teacher model g_\\phi was trained on labeled data x_l and the student model on unlabeled data x_u. But what was shown in the figure is the other way around.\nPlease also add more details to the caption to make it self-contained.\n\n\\- In the proposed future self-training (FST), a new hyper-parameter \\mu' was introduced, as in Eq. 4. But it was a bit unclear how to define this parameter and the necessity of it. 
The authors did an ablation study to test different \\mu', but the result did not suggest a way to set it, and the significance of adding this additional parameter is also a bit unclear.\nWhat if it were removed, or is it possible to change it to a learnable parameter?\n\n\\- Although the authors stated that the combination of FST-D and FST-W is beyond the scope of their study, it would be better to have some preliminary investigation to see if these two types of learning schemes actually boost the learning.\n\n\\- It would be better to include more state-of-the-art methods for the experimental comparison of semi-supervised learning (Table 5).\n\n\\- L296, \"... of of ...\" It would be better if the authors could address the concerns raised above in the Weaknesses section. For example,\n\n* A more convincing clarification on the motivation\n\n* The necessity of the newly introduced hyper-parameter\n\n* The authors' observation or comments on the combination of FST-D and FST-W\n\n* Possibility of including more methods for the experimental comparison.\n\n* Notations in the figure. The authors discussed the limitations of their work and an acceptable solution to address them, but did not mention the potential negative societal impact.\n\nA possible societal impact could be the bias within the learned model if there was also bias in the training data. The environmental impact could be another potential societal impact due to the large-scale and long-duration training and the corresponding carbon emission." ]
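For readers of the FST exchange above, the snippet below is a schematic sketch of one plain-FST training step: virtually advance a throwaway copy of the student, fold that virtual future student into the EMA teacher, and only then apply the real student update. The single-step lookahead, the SGD virtual optimizer, and the `loss_fn` signature (assumed to detach teacher outputs) are simplifying assumptions made here; FST-D explores K virtual steps and FST-W ensembles several virtual students, per the responses above.

```python
import copy
import torch

@torch.no_grad()
def ema_update(teacher, student, mu=0.999):
    # Teacher parameters drift toward the (here: virtual future) student.
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.mul_(mu).add_(p_s, alpha=1.0 - mu)

def fst_step(student, teacher, batch, loss_fn, optimizer, lr):
    # 1) Virtual future student: one gradient step on a deep copy; the real
    #    student's weights are untouched at this point.
    virtual = copy.deepcopy(student)
    v_opt = torch.optim.SGD(virtual.parameters(), lr=lr)
    loss_fn(virtual, teacher, batch).backward()  # loss_fn must detach teacher
    v_opt.step()
    # 2) The teacher is updated toward the virtual (t+1)-step student, so its
    #    pseudo-labels come "from the future" of the current student.
    ema_update(teacher, virtual)
    # 3) Actual student update, supervised by the advanced teacher.
    optimizer.zero_grad()
    loss_fn(student, teacher, batch).backward()
    optimizer.step()
```

The improved variant discussed around Eq. (4) additionally mixes the current student into the EMA with a second momentum, which is a one-line extension of `ema_update`.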
[ -1, -1, -1, 6, 5, 7 ]
[ -1, -1, -1, 4, 4, 4 ]
[ "I2MzVapZqfm", "1s3OqRWvXnm", "p8Z_5mRVZnS", "nips_2022_0tG59j2efs", "nips_2022_0tG59j2efs", "nips_2022_0tG59j2efs" ]
nips_2022_gRK9SLQHTDV
Don't Roll the Dice, Ask Twice: The Two-Query Distortion of Matching Problems and Beyond
In most social choice settings, the participating agents express their preferences over the different alternatives in the form of linear orderings. While this clearly simplifies preference elicitation, it inevitably leads to poor performance with respect to optimizing a cardinal objective, such as the social welfare, since the values of the agents remain virtually unknown. This loss in performance because of lack of information is measured by distortion. A recent array of works put forward the agenda of designing mechanisms that learn the values of the agents for a small number of alternatives via queries, and use this limited extra information to make better-informed decisions, thus improving distortion. Following this agenda, in this work we focus on a class of combinatorial problems that includes most well-known matching problems and several of their generalizations, such as One-Sided Matching, Two-Sided Matching, General Graph Matching, and k-Constrained Resource Allocation. We design two-query mechanisms that achieve the best-possible worst-case distortion in terms of social welfare, and outperform the best-possible expected distortion achieved by randomized ordinal mechanisms.
Accept
This work studies a narrow but important problem: how much cardinal information is needed to achieve near-optimal matchings. The authors show that with just two queries (one is required for any non-trivial guarantee) they can achieve non-trivial results in a very general setting. Moreover, they show that their results are tight.
test
[ "YVKTVXVOjF5", "RRi3dDVd9S8", "Us5hxr5r3zNH", "JQCDJbSImDq", "88XW3SL5woQ", "cL6l7m78dLO", "NzCsW17ZHJm", "p019MdJakZV", "ABdX79-eCXk", "eT6KoIbpT0Q", "e2y7RfIamp6" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for all the responses, they have been quite insightful.", " The particular mechanisms we have designed in this paper are not strategyproof. Strategyproofness, as well as equilibrium efficiency (price of anarchy), has been considered in the context of distortion in previous works for matching (see reference [20] in the paper) with strategic agents. There, it was shown that the best possible distortion achievable by strategyproof mechanisms and the best possible price of anarchy of any mechanism is achieved by ordinal mechanisms. In other words, even if we had access to the full cardinal (numerical) information about the preferences, strategic behavior imposes constraints that only allow us to use the ordinal information. Essentially, improved distortion bounds via queries and incentive robustness (strategyproofness) are incompatible. \n", " Thank you for the response. With regards to NRMP [1], how do the proposed methods fare in terms of susceptibility to \"strategic voting\" (misrepresenting preferences)?", " -- You list handling a constant number > 2 queries as an open problem. It would be nice to discuss this a bit more, and in particular discuss why your approach can't already achieve this.\n\nRe: To obtain the bound of $O(\\sqrt{n})$ with two queries, our approach balances out two quantities: the number of agents that are assigned (in the sufficient representative assignment A; see Def. 2) to something less preferred than what they receive in X, and the number of agents assigned to each item in A. When both of these quantities are $O(\\sqrt{n})$, then this can be done. \n\nTo extend this technique to 3 queries to achieve a distortion of $O(n^{1/3})$, we would have to use the second query to “split” the set of agents (which is of course known) into two uneven sets of size $n^{2/3}$ and $n^{1/3}$ respectively, as before. Then, in the next step, using the third query, we could try to once again apply the same approach as above to further “split” the set of size $n^{2/3}$ into two even subsets of size $n^{2/3}$ agents. However, at this point we do not know who these $n^{2/3}$ agents would be, and therefore this is not possible. \n\n\n-- You also mention that for the general social choice problem, you suspect that adaptivity is necessary. It would be nice to expand on this. Why do you suspect that adaptivity is necessary? Could adaptivity be used even in the graph case to get better results with 3 or more queries?\n\nRe: The social choice setting is more challenging than the matching setting, because it is less structured. In fact, our techniques are enough to show a bound of $O(\\sqrt{n})$ for social choice, which translates to a bound of $O(\\sqrt{m})$ in some cases only. The relation between $m$ and $n$ seems to play an intricate role here, which is why we think that our technique has reached its limits. \n\nBy “adaptivity”, we mean that the algorithm will decide which queries to ask based on the ordinal information as well as the history of answers to previous queries (i.e., decide how to use the third query based on the ranking and the values learnt from the first two queries). 
Our algorithms only use the ordinal information to decide where to make queries, which seems, to us, insufficient for achieving the best possible bound in the social choice setting.\n\nSimilarly, the reviewer indeed has the same intuition as we do: for 3 or more queries in the matching setting, again we would need to use some adaptive queries, as our techniques seem to fall short as we explain in the answer to the first question above. \n\n", " -- Relevance to NeurIPS\n\nRe: We would like to point out that NeurIPS regularly accepts works that are related to the theoretical foundations of ML and more broadly of AI. There is a significant number of matching / computational social choice papers which are not directly related to learning. From NeurIPS 2021 alone, a representative sample is:\n\n- Lirong Xia: The Semi-Random Satisfaction of Voting Axioms\n\n- Nathan Noiry, Vianney Perchet, Flore Sentenac: Online Matching in Sparse Random Graphs: Non-Asymptotic Performances of Greedy Algorithm\n\n- Joshua Kavner, Lirong Xia: Strategic Behavior is Bliss: Iterative Voting Improves Social Welfare\n\n- Brian Brubach, Nathaniel Grammel, Will Ma, Aravind Srinivasan: Improved Guarantees for Offline Stochastic Matching via new Ordered Contention Resolution Schemes\n\n- Grant Schoenebeck, Biaoshuai Tao: Wisdom of the Crowd Voting: Truthful Aggregation of Voter Information and Preferences\n\nOn the topic of distortion in particular, reference [28] in our submission is a NeurIPS paper as well. \n\n-- Experiments\n\nRe: While we conceptually disagree with the statement that the logarithm is essentially a constant in practice in a setting where asking cardinal queries can be cognitively demanding, we do agree that having some experimental results could be a nice complement to our main theoretical results. We would like to point out, however, that such experimental results are scarce in the distortion literature in general. We do not think that simply running experiments on randomly generated instances (e.g., drawn from simple distributions) and adding them to our paper would necessarily be of much practical relevance for the real world. On the other hand, data about cardinal preferences are very limited and often not publicly available. An experimental approach would certainly be very useful, but it seems like the topic of a separate paper, which will build upon the theoretical results of the literature (including the ones we provide here). \n\nThat being said, we are not entirely sure what kind of experiments the reviewer has in mind for comparing the two vs logarithmically-many queries algorithms. The advantage of our algorithm by default lies in the number of queries (2 vs log-many), not in the distortion (as both algorithms achieve asymptotically the same bound). Perhaps it could give us some intuition about the constants hidden in the $O(\\sqrt{n})$ bounds, for some families of instances used in the experiments. \n\n\n-- On applications:\n\nRe: Over the years, matching problems have found numerous important applications in practice, such as residents matching [1], college admissions and kidney exchange (e.g., see [2]). Importantly, the algorithms employed for these applications are purely ordinal and have been proposed in the associated social choice literature. 
See also the discussion paper [3] on the application of (ordinal) matching algorithms in school choice in Amsterdam since 2005.\n\n[1] https://en.wikipedia.org/wiki/National_Resident_Matching_Program\n\n[2] https://qz.com/421547/nobel-prize-winner-alvin-roth-explains-the-hidden-economics-behind-tinder-marriage-and-college-admissions/\n\n[3] https://docs.iza.org/dp9118.pdf\n", " – “The available information is so limited, that the algorithm cannot do much here”. \n\nRe: In the distortion literature, the algorithms typically only have access to ordinal information, and such algorithms are also being used in practice for several applications. Our algorithms use *more* information than those (ordinal information + 2 queries), which is why they can do more in terms of the distortion. We do not agree that our algorithm does not “do much”. It utilizes an elegant combinatorial idea and manages to achieve distortion $O(\\sqrt{n})$, without using any randomization or normalization. In contrast, in the standard ordinal setting (without queries), the best possible algorithms, using both randomization and normalization, still cannot do better than $O(\\sqrt{n})$. In that sense, our $O(\\sqrt{n})$ bound is very meaningful in the context of this literature.\n \n– “Essentially, the algorithm will give the preferred option or a random choice.”\n\nRe: Please note that in the final matching computed by our algorithm, most agents will *not* be assigned their favorite choice in most preference profiles (for example, consider instances where most agents agree on which item is the best – only one of them could get it). Also, there is no randomness in our setting, we focus only on deterministic algorithms, which is clearly stated in the Introduction (for instance, see line 75 on page 2). Thus, it is not clear what the Reviewer means by “a random choice”. Anything along the lines of what the Reviewer is suggesting (try to assign to each agent her best item or give something arbitrarily) will only achieve a $\\Theta(n)$ distortion with two queries. \n\n– How this compares to the existing work? How this extends to lambda > 2 queries.\n\nRe: As we mention in lines 157-158 “without any normalization assumptions [...] a mechanism cannot have any guarantee unless it queries every agent about her favorite item”. Thus, *any* mechanism asking 2 or more queries and having bounded distortion must spend one query on the first position of each agent’s preference ranking. This is true for existing mechanisms as well. As we discuss in our Introduction, previous work showed that it is possible to achieve distortion $O(\\sqrt{n})$ with $O(\\log{n})$ queries per agent, while for 2 queries specifically, the best known algorithm before our work achieves a distortion of $O(n^{2/3} \\cdot \\sqrt{\\log{n}})$ for unit-sum valuations. Without normalization (like in our setting), the best known bound for 2 (or any constant number of) queries was $\\Theta(n)$. In all cases, even in the result assuming normalization, one query per agent is used on the top choice of each agent. The technically challenging part is choosing how to ask the remaining queries. In that respect, our approach provides not only a very significant improvement over the previous work, but also a novel perspective on how to make such choices.\n\nExtending our algorithm for lambda > 2 is a challenging open question, and seems to require new techniques. 
Please see our response to a relevant question of Reviewer NGwA for some intuition about the difficulty of extending our techniques for more queries. \n\n– Limited applications.\n\nRe: Over the years, matching problems have found numerous important applications in practice, such as residents matching [1], college admissions and kidney exchange (e.g., see [2]). Importantly, the algorithms employed for these applications are purely ordinal and have been proposed in the associated social choice literature. See also the discussion paper [3] on the application of (ordinal) matching algorithms in school choice in Amsterdam since 2005.\n\n[1] https://en.wikipedia.org/wiki/National_Resident_Matching_Program\n\n[2] https://qz.com/421547/nobel-prize-winner-alvin-roth-explains-the-hidden-economics-behind-tinder-marriage-and-college-admissions/\n\n[3] https://docs.iza.org/dp9118.pdf\n\n", " Re Q1 and W2: Over the years, matching problems have found numerous important applications in practice, such as residents matching [1], college admissions and kidney exchange (e.g., see [2]). Importantly, the algorithms employed for these applications are purely ordinal and have been proposed in the associated social choice literature. See also the discussion paper [3] on the application of (ordinal) matching algorithms in school choice in Amsterdam since 2005.\n\n[1] https://en.wikipedia.org/wiki/National_Resident_Matching_Program\n\n[2] https://qz.com/421547/nobel-prize-winner-alvin-roth-explains-the-hidden-economics-behind-tinder-marriage-and-college-admissions/\n\n[3] https://docs.iza.org/dp9118.pdf\n\nRe W1: We agree that having some experimental results could be a nice complement to our main theoretical results. We would like to point out, however, that such experimental results are scarce in the distortion literature in general. We do not think that simply running experiments on randomly generated instances (e.g., drawn from simple distributions) and adding them to our paper would necessarily be of much practical relevance for the real world. What we need is a systematic approach that perhaps uses some real-world data and performs extensive experiments which start with the original distortion settings (without queries) and then consider the settings with queries as well. This is challenging, because data about cardinal preferences are very limited and often not publicly available. So while an experimental approach could certainly be very useful, it seems like the topic of a separate paper, which will build upon the theoretical results of the literature (including the ones we provide here). \n", " The manuscript tackles the problem of maximising social welfare amongst agents for assignments of items to agents when agents only disclose their preference ordering of items (based on a hidden score of each item). The manuscript focuses strongly on the distortion of a matching mechanism, i.e., the worst-case ratio between achieved and optimal social welfare amongst all possible valuation profiles. One major result is that asking agents first for their favourite alternative, then computing a representative subset and asking for their favourite alternative amongst the subset yields a $O(\\sqrt{n})$ distortion ($n$ is both the number of items and agents). Previously, such a result required randomization and normalization of agent values, whereas the novel result is deterministic and obviates the need for normalization and thereby extends the applicability of the result. 
This comes only at the cost of having to elicit preferences twice from agents, which seems acceptable in most settings. The challenging part of developing the approach lies in the computation of the representative subset to allow for the resulting analysis.\n STRENGTHS\n\nS1. Motivation/Relevance: Interesting, very general problem of how to match agents with items based only on ordinal preference indications (which is generally easier to elicit).\n\nS2. Novelty/Significance: The basic idea of eliciting preferences twice, first on all items and then on a representative subset, seems novel and a simple, elegant solution.\n\nS3. Presentation/Soundness/Related Work: The work is well-written, seems to substantiate all claims (very extensive supplementary material) and cover the relevant related work.\n\nS4. The work discusses a wider range of variants and both possibility results as well as achieved results.\n\nWEAKNESSES\n\nW1. An empirical study could make the paper more relevant to more practically oriented readers.\n\nW2. Although it would seem to be general enough to cover a wider range of applications, some examples for practical applications could be beneficial.\n Q1. What are some examples of practical applications mentioned in Section C of the supplementary material? Specifically, as Section C in the supplementary highlights performance in practice, it would be useful to add a small (simple) empirical study with simulated agents with some prior randomised approach as a baseline and mention a few concrete examples of potential applications.", " The authors propose a framework that approximates a matching problem using only two queries per agent.\nThe authors show that they can achieve an O(sqrt(n)) approximation guarantee, and show that this is the best\nthey can do given the queries. + Theoretically interesting result. Especially the tightness result completes the picture.\n- The amount of information is so limited that the algorithm cannot do too much.\n- Limited applications The two queries the algorithm is allowed to ask are very different. The first asks the most preferred item,\nwhile the second asks the utility of a certain item. How this compares to the existing work? How this extends\nto lambda > 2 queries. The available information is so limited, that the algorithm cannot do much here. Essentially, the algorithm\nwill give the preferred option or a random choice. This hints that in practice two queries are too little\nto produce any interesting result. Perhaps developing an algorithm for higher lambdas would be more fruitful.\n", " The paper deals with optimizing social welfare given preference orderings of agents. The paper improves social welfare by asking each agent two additional questions and it shows that social welfare can be optimized compared to knowing all agents' exact valuations up to a divisor of SQRT(n).\nBased on the solution to this problem further statements are derived. Strengths:\n+ Novel algorithm and theorems about social welfare optimization given additional queries\n+ Useful extensions beyond the core approach\nWeakness:\n+ The main comparisons are constituted by logarithmically many queries. For most practical purposes, logarithm is like a constant. 
A practically relevant comparison should therefore show the advantage also empirically by experiments.\n+ I don't see this as a relevant paper for a machine learning conference none While the paper offers clear progress wrt underlying theory, \nthe paper could have discussed practical applications and applicability at least briefly.\n\n", " This paper studies social welfare maximization when given only ordinal information for each agent, but where we are allowed to make *two* cardinal queries per agent. They focus on \"one-sided matching\", but their results also extend more generally. In slightly more detail, suppose that every agent has a value for every item, and our goal is to find the matching maximizing the social welfare. However, we are not given the value (\"cardinal\") information, but are instead only given the *ordering* for every agent. We can then design mechanisms that use only this ordinal information, and use as our measure of quality the worst case over all valuations of the optimal social welfare (matching) vs the welfare achieved by the mechanism. This is called the \"distortion\". This has been studied extensively over the past decade, and much is known.\n\nThis paper continues a new line of work where we augment our mechanisms with the ability to make a small number of valuation (\"cardinal\") queries. A previous paper showed that with $O(\\log n)$ queries per agent we can get $O(n^{1/k})$ distortion for any constant k that we want, and it was also shown that with k queries the best distortion we can hope for is $\\Omega(n^{1/k})$. So there was an obvious open question: can we get sublinear distortion with less than $\\log n$ queries? In the extreme: what distortion is achievable with only *two* queries?\n\nThis paper resolves the two-query case, showing that $O(n^{1/2})$ distortion is achievable. They do this with a deterministic algorithm and with no assumptions on the valuations. Previous sublinear distortion bounds (e.g., without any queries) required either randomness, assumptions on the valuations, or both. Their algorithm first queries the valuation of every agent's highest-rank item. They then use the ordinal information to compute a \"sufficiently representative assignment\" A -- this is basically a many-one assignment where every item is matched to at most $n^{1/2}$ agents, and also there are at most $n^{1/2}$ agents that prefer what they are matched with in OPT to what they are matched to in the assignment. The authors show that a simple algorithm can be used to generate such an assignment using only the ordinal information. Then for the second query, they query the valuation for every agent of the item they're matched to in A. Finally, they compute an optimal solution that uses only the valuations from these two queries (all non-queried valuations are set to 0).\n\nIn the analysis, they show that the dual $n^{1/2}$ guarantees of the sufficiently representative assignment can be used to balance the cost both of the agents that query a \"too high\" item (an item that they prefer to what they get in OPT) and of the agents that query a \"too low\" item.\n\nThey then study more general settings than just one-sided matching. First, they study a wider variety of graph problems which they call \"matching extended k-families\", and which include pretty natural problems such as two-sided matching and general graph k-matchings. 
Technically, this type of problem does not include the one-sided matching case, since they assume in their graphs that every vertex is an agent. So they cannot just re-use the exact same ideas and results. However, they show that an $\Omega(n^{1/k})$ lower bound still holds, and that a similar algorithm with a more complicated argument also gives $O(n^{1/2})$ distortion with only two queries per agent.\n\nFinally, they study the most general social choice setting, where there are $n$ agents, $m$ alternatives, every agent has a valuation for every alternative, and we want to choose the alternative that maximizes social welfare. Here they show an $\Omega(m^{1/k})$ lower bound on deterministic mechanisms that make at most $k$ queries, and give an algorithm which is essentially a generalization of their previous algorithms which gives distortion $O(\sqrt{m})$ as long as $m = \Omega(n)$.\n Strengths:\n- This is a pretty natural set of questions. Distortion of having only ordinal information has been well-studied and seems to be pretty well-motivated. But since we have strong lower bounds in that setting, it's natural to try to go \"beyond worst-case\" by allowing extra information. A small number of value queries seems to me to be a pretty natural way to do this (of course it's not the only possible way of going beyond worst-case, but I think it's an interesting one). And this is not the first paper which studies this setting.\n\n- The main upper bound seems quite strong to me. The problem is particularly natural (one-sided matching), it matches a known lower bound, the algorithm is quite simple but requires some non-obvious analysis, and they show that two queries are already enough to give nontrivial distortion guarantees. It feels to me like the \"right\" result, for a natural and interesting problem.\n\n- The generalizations are also quite interesting. The problems are very natural (particularly two-sided matching and general social choice), and the results are pretty strong (particularly for two-sided matching and the other graph-based generalizations, where no extra assumptions are needed). And the algorithms are basically the same as the one-sided matching case, but with slight tweaks and more complicated analyses. So they're generalizations in a sense that I like a lot -- they reinforce the main ideas of the paper, rather than giving an entirely new set of ideas, but do add significant technical difficulty and complexity.\n\nWeaknesses:\n- The main result is quite strong (as I discussed above), but is actually a little weaker than I was hoping for. This is because the result only holds for two queries. In many algorithmic settings, there is a tradeoff between some parameter and some notion of \"quality\", and when there's not such a tradeoff there is some kind of \"phase transition\". Based on the known lower bound, I was hoping that this paper would have an $O(n^{1/k})$-distortion upper bound for $k$ queries (giving a tradeoff), or alternatively showed that it was not possible to get distortion below $n^{1/2}$ without using superconstant queries (showing a phase transition). But this paper didn't give any results on constant queries beyond 2, which I found a little disappointing. However, it's worth pointing out that two is a particularly interesting case, since (as the authors point out) it is the smallest number of queries for which sublinear bounds are possible. 
I just wish the authors had either given more general results, or discussed why their approach doesn't / can't give such results.\n\nOverall, I really liked this paper, and think it should be accepted. - You list handling a constant number $>2$ queries as an open problem. It would be nice to discuss this a bit more, and in particular discuss why your approach can't already achieve this.\n\n- You also mention that for the general social choice problem, you suspect that adaptivity is necessary. It would be nice to expand on this. Why do you suspect that adaptivity is necessary? Could adaptivity be used even in the graph case to get better results with 3 or more queries?\n This is a theory paper, so I think the discussion of limitations and negative social impact is sufficient. \n" ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 4, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 4, 1, 3 ]
[ "RRi3dDVd9S8", "Us5hxr5r3zNH", "NzCsW17ZHJm", "e2y7RfIamp6", "eT6KoIbpT0Q", "ABdX79-eCXk", "p019MdJakZV", "nips_2022_gRK9SLQHTDV", "nips_2022_gRK9SLQHTDV", "nips_2022_gRK9SLQHTDV", "nips_2022_gRK9SLQHTDV" ]
nips_2022_1bE24ZURBqm
Biologically Inspired Dynamic Thresholds for Spiking Neural Networks
The dynamic membrane potential threshold, as one of the essential properties of a biological neuron, is a spontaneous regulation mechanism that maintains neuronal homeostasis, i.e., the constant overall spiking firing rate of a neuron. As such, the neuron firing rate is regulated by a dynamic spiking threshold, which has been extensively studied in biology. Existing work in the machine learning community does not employ bioinspired spiking threshold schemes. This work aims at bridging this gap by introducing a novel bioinspired dynamic energy-temporal threshold (BDETT) scheme for spiking neural networks (SNNs). The proposed BDETT scheme mirrors two bioplausible observations: a dynamic threshold has 1) a positive correlation with the average membrane potential and 2) a negative correlation with the preceding rate of depolarization. We validate the effectiveness of the proposed BDETT on robot obstacle avoidance and continuous control tasks under both normal conditions and various degraded conditions, including noisy observations, weights, and dynamic environments. We find that the BDETT outperforms existing static and heuristic threshold approaches by significant margins in all tested conditions, and we confirm that the proposed bioinspired dynamic threshold scheme offers homeostasis to SNNs in complex real-world tasks.
Accept
The paper proposes a biologically plausible dynamic thresholding mechanism. Spiking neural nets with dynamic thresholding appears to be novel. The paper does a good job of motivating the choice of the model and illustrating its benefits across a series of control tasks. All reviewers support the acceptance of the paper conditional on the following points to be included in the revised manuscript: - The new experiments on image processing performed during the discussion phase have to be included in the revised version. - I agree with Reviewer r3Sy that the paper overly emphasizes the biological plausibility of the method as a point of strength. Please try to focus the paper more on the technical benefits and analysis of the proposed method, following the instructions of the reviewer. - Include the complexity analysis of the model in the revised version. - Please import some of the tables provided during the rebuttal to the revised manuscript. - Explicitly denote the details of the statistics of your experimental results. It might be helpful to import some of the tables from the supplementary materials to the main text. I recommend the acceptance of this paper.
train
[ "1ZSNU9c3Q0l", "_rf0fzJkVu1", "RosBJT-XYq", "e3IJfSs3-nG", "ypk0rm6WC9D", "EyQA9P9blylI", "MsEiUHuUWV", "hjcIaVgMFs-", "W8ZFFMh_HrVT", "55209be0ALD", "a4OpK0VY8kfo", "aaI3Bpe6Bun", "Erux35IddkV", "rUU-BLQBRJn", "rYoxqz3OcOZ", "9yAmUPBtf72", "7zfws4EhJ27" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hello,\n\nI thank the authors for their responses. The results of their experiments outlined in the table are compelling. I would still suggest the authors have better statistical results, but I will increase my score to a 5.", " I appreciate the authors' effort on the response and additional experiments. I raise my rate accordingly.", " Dear reviewer r3Sy,\n\nAddressing your questions and concerns, we conducted additional experiments on image classification as an established computer vision task. The results are well aligned with our findings in the three robotic tasks; the proposed threshold method improves the generalization of this vision task. Besides, we provided more details and insights into our baselines; and we provide the requested analysis of the time and space complexity.\n\nIn light of this, we would like to know whether the experimental results and updated exposition have addressed your concerns. If so, we hope you would be willing to increase your score. We appreciate the effort that went into reviewing our work!\n\nThe Authors", " Dear reviewer qTH8,\n\nAddressing your review, we summarized all 76 conducted experiments, which all empirically confirm that homeostasis improves SNN generalization. As such, we believe our work has the potential to ignite excitement in this research direction. In addition, we discuss the choices of the three statistics we reported for homeostasis from a bio-plausible perspective. \n\nIn light of this, we would like to know whether the provided analysis and insights have addressed your concerns. If so, we hope you would be willing to increase your score. Thank you for your consideration! \n\nThe Authors", " >**Q5: Complexity analysis. Can the authors evaluate the computation and space complexities of the proposed method and compare with previous works?**\n\n**Runtime complexity:**\n\nThe computational complexity of the proposed BDETT is bounded by the computational complexity of calculating the mean, maximum, and minimum, i.e., Eqs. 3, 4, and 6. Therefore, the upper bound of estimating BDETT complexity, $\\Theta_i^l(t+1)$, is $O(n)$, where $n$ is the number of neurons on the $l$-th layer. \n\nOther methods, DT1 and DT2, are bounded by the summation operations, and their upper bound are also $O(n)$, where $n$ is also the number of neurons on a layer; see Eqs. 8 and 9 in Supplementary Note 2.\n\nWe report the layer-wise running time with PyTorch 1.2 on an i7-7700 CPU and NVIDIA GTX 1080Ti GPU. As we can see the running time of the proposed BDETT for the testing network is 1.36 ms.\n\n| | Layer 1 (256 neurons) | Layer 2 (256 neurons) | Layer 3 (256 neurons) | Layer 4(2 neurons) | Total |\n|------------|-----------------------|-----------------------|-----------------------|--------------------|-------|\n| DET (ms) | 0.18 | 0.19 | 0.19 | 0.18 | 0.74 |\n| DTT (ms) | 0.11 | 0.11 | 0.11 | 0.10 | 0.43 |\n| BDETT (ms) | 0.34 | 0.35 | 0.35 | 0.32 | 1.36 |\n\n**Memory complexity:**\n\nTo evaluate BDETT, $\\Theta_i^l(t+1)$, we need to evaluate $V_m^l(t)$, $V_{\\theta}^l(t)$, and $\\mu(\\Theta_i^l(t))$. Therefore, the upper bound of the memory complexity is $O(n)$, where $n$ is the number of neurons on the $l$-th layer. The lower bound is $O(1)$.\n\nDT1 and DT2 offer the same memory complexity.\n\n>**Limitations: The authors did not address the limitations of the present work. 
But, I believe that the current result limits the impact of the present work to merely two simple tasks**\n\nWe discussed the limitations of our work in lines 80-83 of the main manuscript. The three tasks we use to assess the proposed BDETT have not been tackled successfully with SNNs under different degraded conditions. In contrast, the majority of SNN works still leverage classification tasks on small datasets for their evaluation and only under normal conditions.\n\n\n#### References\n[R4-1] Sutton, R. S., & Barto, A. G. (2018). Reinforcement learning: An introduction. MIT press.", " >**Q4: Applicability of BDETT to other domains. It is required to apply the proposed method to different application domains, e.g., computer vision. The goal is two-fold: (i) the impact of BDETT on SNNs can be highlighted, and (ii) true baseline performances are available; particularly, for the vision domain, there exist tons of previous publications reporting their official performances. Can the authors evaluate the impact of BDETT on other application domains?**\n\nIn principle, the proposed BDETT may be used for any SNN-based method. We apply the method to three robotic tasks under 19 different experimental setups, including normal and degraded conditions. Note also that the proposed BDETT has been tested with two different SNN models, i.e., LIF and SRM. \n\nWe agree that applying the proposed method to different application domains is interesting. As the reviewer suggested, we applied the proposed BDETT to image classification as an established computer vision task.\n\nTo this end, we adopted the SCNN model [R3-1] and trained on the MNIST dataset. Following the experimental setup of [R3-1], each pixel of an MNIST image is encoded into 30 Poisson spikes as inputs to SCNN for both training and testing. \n\nSimilar to our experimental setup for robotic control tasks, we also designed two different degraded conditions: degraded inputs and weight uncertainty. 
\n\n* **Adversarial samples as degraded inputs**: to test the generalization on degraded inputs, we use existing adversarial attack methods to generate relevant degraded inputs for the image classification tasks.\n * ‘FGSM $\epsilon=x$’: Fast gradient sign method (FGSM) [R3-2] with $\epsilon=x$; \n * ‘PGD $iter_\epsilon=x$ $iter_{num}=y$’: projected gradient descent (PGD) [R3-3], with iteration epsilon of x and iteration number of y for each attack step.\n* **Weight uncertainty**: We also evaluate the robustness to internal weight uncertainty as follows.\n * ‘GN(0, x)’: adding Gaussian noise (GN) with zero mean and standard deviation of x to all synaptic weights;\n * ‘x% zero weight’: for the synaptic weights between every two adjacent layers, we randomly set x% of them to 0; \n\n\n*Directly applying the proposed approach without any changes to image classification in degraded conditions compares favorably across all experimental settings and in terms of generalization.* With stronger degradations, the top-1 classification accuracy of both the baseline and our approach decreases, but the proposed method is less affected, validating BDETT for this vision task.\n\n| Experimental setup | Threshold Type | LIF-based SCNN | SRM-based SCNN |\n|--------------------------------------|-------|--------|--------|\n| Original | Static Threshold | 99.42% | 99.13% |\n| | BDETT | **99.45%** | **99.15%** |\n| FGSM $\epsilon$=0.20 | Static Threshold | 66.33% | 56.85% |\n| | BDETT | **69.14%** | **57.01%** |\n| PGD $iter_\epsilon$=0.01 $iter_{num}$=20 | Static Threshold | 84.31% | 67.53% |\n| | BDETT | **85.74%** | **68.06%** |\n| GN(0, 0.3) | Static Threshold | 81.98% | 78.24% |\n| | BDETT | **85.09%** | **78.68%** |\n| GN(0, 0.5) | Static Threshold | 39.84% | 45.32% |\n| | BDETT | **47.74%** | **46.34%** |\n| 20% zero weight | Static Threshold | 90.52% | 95.10% |\n| | BDETT | **96.37%** | **96.59%** |\n| 30% zero weight | Static Threshold | 84.37% | 89.75% |\n| | BDETT | **90.68%** | **91.02%** |\n\n\n", " Thank you for your insightful feedback and suggestions!\n\n>**Q1: Weakness of baseline. the baseline performance is questionable given that the performance of BDETT is not compared with the performance of any previously published works. I am convinced that BDETT outperforms the previous methods that the authors addressed for a given set of parameters. However, I am not convinced if the proposed baseline is the ground-truth or close to the ground-truth.**\n\nWe compared BDETT to four state-of-the-art methods published in the last two years. Specifically, we compared static threshold methods, SAN[9] and PopSAN[35], and two recent dynamic schemes, DT1[24] and DT2[26]. \n\nSpecifically, SAN and PopSAN are two reinforcement learning methods that do not require preexisting ground truth mappings. These methods learn behavior by experiencing rewards for actions, which is distinctively different from supervised learning from a training set of labeled examples provided by a knowledgeable external supervisor. [R4-1] \n\n>**Q2: Wrong citations. I found that Refs. 24 and 26 are irrelevant to the baseline methods DT1 and DT2.**\n\nDT1 is defined by Eqs. 4 and 5 of Hao et al.[24], on page 8. The $\alpha$ in Eq. 5 of Hao et al.[24] was set to 1.0 as the maximum value of the increment is 1.0 in our experimental settings. DT2 is defined by Eq. 4 of Kim et al.[26], on page 5. \n\n>**Q3: Bio-plausibility of BDETT. I agree on the point that bio-plausibility of a newly proposed method is good. 
But, such bio-plausibility does not justify the proposed method. The authors took a biological notion and recreated the notion largely, so that I do not think the fidelity of BDETT to biological notions is very high. Instead, I would like to suggest that the authors systematically address the advantages of BDETT over the previous methods from an engineering viewpoint rather than bio-plausibility.**\n\nBDETT is a *bio-inspired* algorithm (also reflected in the title) with the *aim* of a bio-plausible behavior, i.e., homeostasis. BDETT aims to achieve homeostasis, keeping the firing rates constant regardless of the external conditions. To this end, we deliberately deviate from bio-plausible dynamic threshold schemes and adapt them to the SNN models; see also manuscript lines 149-151, \"the proposed dynamic energy threshold is inspired by this biological predictive model but includes several changes that are critical for the model to be effective in SNNs.\" We will further highlight this in the abstract.\n\nWe provide an analysis of the relationship between the two main components of the method, DET and DTT, in section _Interaction of DET and DTT_ and Q1 of reviewer wR8D. We provide a systematic evaluation for all tested tasks from an engineering viewpoint in Section 4. Specifically, for each tested task we reported the following conventional metrics established in the respective engineering sub-domain:\n\n| Task | Performance metric | \n|------------ |-----------------------|\n| Obstacle avoidance | Success Rate (SR): percentage of successful passes out of 200 trials |\n| | Overtime percentage (OTP): overtime is defined as a trial in which the robot cannot reach the goal within 1000 steps but does not touch any obstacle. |\n| HalfCheetah-v3 | Reward | \n| Ant-v3 | Reward |\n\n\n", " Thank you for the insightful feedback and suggestions!\n\n>**Q3: The experiments are restricted to the robot and control tasks. It would be better to include more tasks such as tasks in computer vision.**\n\nIn principle, the proposed BDETT may be applied to tasks solved with an SNN. We assess the method on *three tasks under 19 different experimental setups, including normal and degraded conditions.* The proposed BDETT has been tested with two different SNN models, i.e., LIF and SRM. \n\nNevertheless, we conducted additional experiments on image classification. To this end, we adopted the SCNN model [R3-1] and trained on the MNIST dataset. Following the experimental setup of [R3-1], each pixel of an MNIST image is encoded into 30 Poisson spikes as inputs to SCNN for both training and testing. \n\nSimilar to our experimental setup for robotic control tasks, we also designed two different degraded conditions: degraded inputs and weight uncertainty. 
\n\n* **Adversarial samples as degraded inputs**: to test the generalization on degraded inputs, we use existing adversarial attack methods to generate relevant degraded inputs for the image classification tasks.\n * ‘FGSM $\epsilon=x$’: Fast gradient sign method (FGSM) [R3-2] with $\epsilon=x$; \n * ‘PGD $iter_\epsilon=x$ $iter_{num}=y$’: projected gradient descent (PGD) [R3-3], with iteration epsilon of x and iteration number of y for each attack step.\n* **Weight uncertainty**: We also evaluate the robustness to internal weight uncertainty as follows.\n * ‘GN(0, x)’: adding Gaussian noise (GN) with zero mean and standard deviation of x to all synaptic weights;\n * ‘x% zero weight’: for the synaptic weights between every two adjacent layers, we randomly set x% of them to 0; \n\n\n*Directly applying the proposed approach without any changes to image classification in degraded conditions compares favorably across all experimental settings and in terms of generalization.* With stronger degradations, the top-1 classification accuracy of both the baseline and our approach decreases, but the proposed method is less affected, validating BDETT for this vision task.\n\n| Experimental setup | Threshold Type | LIF-based SCNN | SRM-based SCNN |\n|--------------------------------------|-------|--------|--------|\n| Original | Static Threshold | 99.42% | 99.13% |\n| | BDETT | **99.45%** | **99.15%** |\n| FGSM $\epsilon$=0.20 | Static Threshold | 66.33% | 56.85% |\n| | BDETT | **69.14%** | **57.01%** |\n| PGD $iter_\epsilon$=0.01 $iter_{num}$=20 | Static Threshold | 84.31% | 67.53% |\n| | BDETT | **85.74%** | **68.06%** |\n| GN(0, 0.3) | Static Threshold | 81.98% | 78.24% |\n| | BDETT | **85.09%** | **78.68%** |\n| GN(0, 0.5) | Static Threshold | 39.84% | 45.32% |\n| | BDETT | **47.74%** | **46.34%** |\n| 20% zero weight | Static Threshold | 90.52% | 95.10% |\n| | BDETT | **96.37%** | **96.59%** |\n| 30% zero weight | Static Threshold | 84.37% | 89.75% |\n| | BDETT | **90.68%** | **91.02%** |\n\n\n#### References\n[R3-1] Wu, Y., Deng, L., Li, G., Zhu, J., & Shi, L. (2018). Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience, 12, 331.\n\n[R3-2] Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.\n\n[R3-3] Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2018). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.", " Thank you for the insightful feedback and suggestions!\n\n>**Q1: The paper combines two dynamic thresholds that exhibit positive and negative correlations with the average membrane potential. It would be better to explain the reason and the related mathematical formulation in detail.** \n\nThe positive and negative correlations in the proposed method are motivated by Fontaine et al.[16], who found that the spike threshold was positively correlated with the average membrane potential preceding spikes and negatively correlated with the rate of depolarization. We emphasize that DET leverages the _magnitude of the membrane potential_ to estimate a threshold, while the DTT is based on the _preceding rate of depolarization_. Eqs. 2-4 provide mathematical formulations for DET, also illustrated in Figure 1b. Eqs. 
5-6 formalize DTT along with illustrations in Figure 1c.\n\n>**Q2: The author compares BDETT with four variants of the spiking actor-network (SAN), are there any other recent SNNs to compare with?**\n\nFor the obstacle avoidance tasks, we compared SRM- and LIF-based SAN and SAN-NR, four variants of SAN[9]. For the continuous robot control tasks, we compared SRM- and LIF-based PopSAN, two variants of PopSAN[35]. Note that both SAN and PopSAN are pure SNNs, meaning they have no ANN/CNN-based components. To the best of our knowledge, SAN and PopSAN are the only relevant pure SNN-based models in the reinforcement learning domain.\n\n", " Thank you for the insightful feedback and suggestions!\n\n>**Q1: The authors claim in the methods section that the two dynamic threshold mechanisms can help each other achieve optimal settings. Thus, more evidence needs to be provided to support this viewpoint.**\n\nIn the section _Interaction of DET and DTT_ in our manuscript, we illustrate the interaction between DET and DTT with two examples. As suggested by the reviewer, we provide additional experimental evidence in the following.\n\n#### Interaction DET/DTT with low potential fluctuations.\n\n* **Experimental setup**: We randomly chose a timestamp and recorded all postsynaptic membrane potentials and spiking thresholds. Then, for each layer, we randomly selected $X$ neurons based on the binomial distribution with a probability of 0.5. Random positive noise, generated from a normal distribution $\mathcal{N}(0.2, 0.05)$, was added to the chosen neurons. The mean of 0.2 is around 20% of the average of the recorded membrane potentials. To reduce the impact of the randomness, we performed 5-round tests and reported the average and standard deviation (STD) of the obtained DETs and DTTs.\n* **Experimental thesis**: In this case, we expect DET to increase as the noise increases the membrane potential. DTT should remain at a relatively constant threshold (i.e., a + 1) as the preceding rate of depolarization caused by the noise is close to 0.\n* **Experimental result**: The layerwise mean $(\mu)$ and STD $(\sigma)$ of the 5-round DETs and DTTs with and without added noise are reported below, aligning well with the experimental thesis. \n\n| | $\mu(X)$ | original ($\mu$ / $\sigma$) | with added noise ($\mu$ / $\sigma$) | \n|------------|-----|--------------|---------------------------|\n| layer 1 DET|130.4|1.4950 / 0.0051 |1.5143 / 0.0066 | \n| layer 1 DTT| |0.0570 / 0.0064 |0.0571 / 0.0063 |\n| layer 2 DET|128.4|2.1548 / 0.0120 |2.1745 / 0.0111 | \n| layer 2 DTT| |0.2725 / 0.0154 |0.2724 / 0.0152 |\n| layer 3 DET|126.6|3.3528 / 0.0788 |3.3720 / 0.0777 | \n| layer 3 DTT| |0.4549 / 0.0033 |0.4549 / 0.0033 |\n\n#### Interaction DET/DTT with fast membrane potential drop\n\n* **Experimental setup**: We adopted the same binomial distribution as in the first experiment and randomly selected $X$ neurons. To mimic fast membrane potential drops from $t$ to $t+1$, we added random negative membrane potentials with a larger magnitude than in the first experiment, generated by sampling a normal distribution $\mathcal{N}(-2.0, 0.5)$.\n* **Experimental thesis**: In this scenario, even though DET decreases with the reduced membrane potential, we expect DTT to increase faster, and BDETT to increase the overall threshold.\n* **Experimental result**: The layerwise mean and STD of the 5-round $X$ DETs, DTTs, and BDETTs with and without fast membrane potential drop are shown in the table below. 
Again, the findings align with the experimental thesis.\n\n\n| | $\\mu(X)$ |original ($\\mu$ / $\\sigma$) | fast potential drop ($\\mu$ / $\\sigma$) | \n|----------------|-----|-------------------|---------------------------|\n| layer 1 DET |128.2|1.4919 / 0.0089 |1.4032 / 0.0106 | \n| layer 1 DTT | |0.0579 / 0.0044 |1.0115 / 0.0251 |\n| layer 1 BDETT | |0.7749 / 0.0039 |1.2074 / 0.0124 |\n| layer 2 DET |124.8|2.1865 / 0.0292 |2.0348 / 0.0514 | \n| layer 2 DTT | |0.2863 / 0.0053 |1.1938 / 0.0332 |\n| layer 2 BDETT | |1.2364 / 0.0148 |1.6143 / 0.0293 |\n| layer 3 DET |130.4|3.6456 / 0.1114 |3.3802 / 0.1298 | \n| layer 3 DTT | |0.4479 / 0.0141 |1.3337 / 0.0307 |\n| layer 3 BDETT | |2.0468 / 0.0610 |2.3570 / 0.0574 |", " >**Q2: Do the shaded regions in Figs 2e and 3d represent SDs, SEMs, or something different?**\n\nThe shaded regions in Figs 2e and 3d represent SDs. We will clarify this in the revised version.\n\n>**Q3: L234: \"soft-reset\" is unclear**\n\n\"Soft-reset\" means that when the membrane potential passes a threshold, it is reset by subtracting the threshold value. We adopt this concept from Tang et al. [R1-2]. In contrast, \"Hard-reset\" implies that once the membrane potential is over a threshold, the potential is reset to zero. We will clarify this in our revised version.\n\n#### References\n\n[R1-1] Turrigiano, G. G., & Nelson, S. B. (2004). Homeostatic plasticity in the developing nervous system. Nature reviews neuroscience, 5(2), 97-107.\n\n[R1-2] Tang, G., Kumar, N., Yoo, R., & Michmizos, K. P. (2020). Deep reinforcement learning with population-coded spiking neural network for continuous control. arXiv preprint arXiv:2010.09635.", " Thank you for the insightful feedback and suggestions!\n\n**Scope and correlation between homeostasis and generalization.**\n\nWe report empirical evidence that the bio-plausible state of homeostasis can allow for strong generalization across diverse tasks. However, we do not provide guarantees for this generalization capability across task domains but rather make a first step in demonstrating that bio-plausible homeostasis *can* improve generalization. Although such theoretical results are out of the scope of our work, we hope that this work provides an impulse in this direction. \n\nSpecifically, we establish a direct connection between homeostasis and generalization for three different robotic tasks, i.e., obstacle avoidance, HalfCheetah-v3, and Ant-v3, with two widely used SNN models, i.e., LIF and SRM, for two different types of degradations:\n\n&emsp;1. Measurement Uncertainty (i.e., degraded inputs)\n\n&emsp;2. Internal Changes (i.e., weight uncertainty)\n\nFor each type of degradation, three different experimental settings were conducted. Our experimental results validate that our approach offers homeostasis and generalization in *all 19 different experimental settings with two different SNN models (i.e., LIF and SRM) and two different timestamp settings (i.e., T=5 and T=25); total 19x2x2=76 experiments. For each SNN model and timestamp setting, we conducted seven experiments for the obstacle avoidance task; six and six for HalfCheetah-v3 and Ant-v3, respectively.* All experimental settings are listed in the table below. 
\n\n\n| | | Obstacle Avoidance | HalfCheetah-v3 | Ant-v3 |\n|-------------------|-----------------------------------------------|--------------------|----------------|--------|\n| Dynamic obstacle | | $\surd$ | | |\n| Degraded inputs |0.2 (set the range of the 3rd, 9th, and 15th lasers to 0.2 m) | $\surd$ | | |\n| |6.0 (set the range of the 3rd, 9th, and 15th lasers to 6.0 m) | $\surd$ | | | \n| |GN<br>$(clip(s_{input} + \mathcal{N}(0, 1.0), 0.2, 6.0))$| $\surd$ | | |\n| |Random joint position | | $\surd$ |$\surd$ | \n| |Random joint velocity | | $\surd$ |$\surd$ |\n| |GN<br>$(s_{input} + \mathcal{N}(0, 1.0))$ | | $\surd$ |$\surd$ | \n| Weight uncertainty|8-bit Loihi weights | $\surd$ | $\surd$ |$\surd$ |\n| |GN weights | $\surd$ | $\surd$ |$\surd$ |\n| |30% zero weights | $\surd$ | $\surd$ |$\surd$ | \n\n\n>**The reader's understanding would be facilitated if this information (simulation-based testing) were included in the main text and in Fig 2's caption.**\n\nWe agree with the reviewer, and we will add this information along with Figure 2. \n\n\n\n>**Q1: What is the motivation for choosing the 3 statistics outlined for homeostasis? While the mean and standard deviation of the firing rates are 2 natural statistics to measure, why they (and the 3rd) should be the ones to choose for measuring homeostasis is unclear. Are there drawbacks in deciding on these particular ones (e.g., perhaps in choosing these, the networks perform worse wrt other candidate metrics?) ?**\n\nThe three statistics are motivated by existing work investigating homeostasis in biological neural networks. Specifically, Zenke et al.[45] pointed out that “homeostasis comprises any compensatory mechanism that stabilizes neural firing rates in the face of plasticity induced changes.” Lazar et al.[44] define it as an “...effective way of modeling the effect of a network of inhibitory interneurons that maintains a constant level of firing in the network.” Turrigiano et al.[R1-1] find “the ability of neurons to adjust synaptic or intrinsic excitability in a homeostatic manner to keep firing rates relatively constant.” As such, together, the three statistics reflect the constancy of the firing rates of an SNN-based network. \n\n", " We thank all reviewers for their thoughtful feedback and we are happy to see the positive reception. All reviewers agree that the proposed BDETT is a novel threshold scheme. In particular, reviewer qTH8 finds “dynamic thresholds have not been studied in the context of SNNs”, and reviewer r3Sy mentioned “BDETT has bio-plausible grounds and is novel.” The reviewers agreed that “the proposed threshold scheme achieves bioplausible homeostasis”[T4kg] and can “enhance the generalization of SNNs”[wR8D]. We address remaining concerns as separate responses to the reviewers.", " The authors study spiking neural networks (SNNs) with a biologically-motivated dynamic spiking threshold. The underlying hypothesis is that having a dynamic threshold allows the network to fire similarly with a wide array of stimuli / external conditions, allowing the network to potentially generalize across such conditions better. They show that the networks increase robots' performance under degraded conditions, and increase homeostasis per metrics they provide, such as the mean and standard deviation of the firing rates across trials. 
**Originality:**\n\nTo my knowledge, dynamic thresholds have not been studied in the context of ANNs.\n\n**Quality:**\n\nThe authors have provided biologically motivated models and intuitions for why these models might be beneficial in ANNs. They have then provided highly applied use cases to test the performance of their models. However, while the performance of the networks is illustrated, no statistical tests are undertaken. The reader is thus left to wonder which, if any, of the performance improvements are statistically significant. Moreover, while the authors have shown both an increase in homeostasis with their metrics and an increase in performance in 2 robotic tests, the reader is left with only this correspondence. Presumably there are ways in which generalization can occur without improved homeostasis, and ways to increase homeostasis too far, resulting in decreased learning performance? Yet these possibilities are not raised in the present work.\n\n**Clarity:**\n\nOverall, the motivation, intuitions, experimental setups, and analyses are all clear. However, several aspects remain unclear to me. Please see questions below. Also, it is only made clear in the appendix that the robotic experimental setup is indeed a virtual one, from what I have been able to see. The reader's understanding would be facilitated if this information were included in the main text and in Fig 2's caption. \n\n\n**Significance:**\n\nBiological neurons are known, as pointed out by the authors, to have dynamic thresholds, though the function of such thresholds is, as yet, uncertain. This study represents a welcome introduction of these known dynamics into ANNs, where such functionality can be queried in the context of feedforward networks, and the authors provide an existence proof that in certain contexts, such dynamics might contribute to increased generalization performance. However, without stronger statistical analysis and, especially, a substantively deeper exploration of how homeostasis and generalization are linked beyond example cases, it is unclear how robust the authors' findings are, and whether the hypothesized link from increased homeostasis --> increased generalization performance is causal, and under what circumstances if so. 1- What is the motivation for choosing the 3 statistics outlined for homeostasis? While the mean and standard deviation of the firing rates are 2 natural statistics to measure, why they (and the 3rd) should be the ones to choose for measuring homeostasis is unclear. Are there drawbacks in deciding on these particular ones (e.g., perhaps in choosing these, the networks perform worse wrt other candidate metrics?) ?\n\n2 - Do the shaded regions in Figs 2e and 3d represent SDs, SEMs, or something different?\n\n3- L234: \"soft-reset\" is unclear Yes", " In this work, the authors aim at bridging this gap by introducing a novel bioinspired dynamic energy-temporal threshold (BDETT) scheme for spiking neural networks (SNNs). Meanwhile, the authors propose a BDETT scheme that mirrors two bioplausible observations: a dynamic threshold has 1) a positive correlation with the average membrane potential and 2) a negative correlation with the preceding rate of depolarization. \n--------------------------------------------------------------------------------------------------\nThe author's revisions address all my concerns and I would further recommend this manuscript. Strengths:\n1. 
The authors designed a bio-inspired dynamic threshold mechanism to enhance the generalization of SNNs.\n2. The new dynamic energy-temporal threshold mechanism reflects the two biological observations.\n3. The parameter setting of the dynamic threshold mechanism can use layerwise control of statistical information.\n\nWeaknesses:\n1. The interaction of DET and DTT does not seem to be supported by ablation experiments. 1. The authors claim in the methods section that the two dynamic threshold mechanisms can help each other achieve optimal settings; thus, more evidence needs to be provided to support this viewpoint. The authors emphasize in the introduction section that it still requires hardware engineering efforts.", " This paper introduces a new bioinspired dynamic energy-temporal threshold (BDETT) scheme for spiking neural networks (SNNs), and validates the strong homeostasis along with its generalization to diverse degraded conditions. Experiments are conducted on the robot obstacle avoidance and continuous control tasks. The results show that BDETT outperforms existing static and heuristic threshold approaches. The supplemental video is impressive. \nStrengths:\n1. Introduces a bioinspired dynamic threshold scheme for SNNs that increases their generalizability.\n2. Designs a new method that uses layerwise statistical cues of SNNs to set the parameters of the bioinspired threshold method.\n3. The proposed threshold scheme achieves bioplausible homeostasis, dramatically enhancing the generalizability across tasks, including obstacle avoidance and robotic control, and in normal and degraded conditions. \n\n\nWeaknesses:\n1. The paper combines two dynamic thresholds that exhibit positive and negative correlations with the average membrane potential. It would be better to explain the reason and the related mathematical formulation in detail. Plus, the author compares BDETT with four variants of the spiking actor-network (SAN); are there any other recent SNNs to compare with?\n2. The experiments are restricted to the robot and control tasks. It would be better to include more tasks such as tasks in computer vision. 3. The supplementary material has a heavy overlap with the main body part. See the weakness part. None. ", " The authors propose a dynamic spiking threshold function that consists of DET and DTT. DET is determined by the distributions of the thresholds and membrane potentials over the neurons in a given layer at a given timestep. DET is reconfigured s.t. the larger the average potential over the in-layer neurons, the larger the threshold, so that it avoids excessively high firing rates over the neurons and induces competition among them. DTT effectively realizes a refractory period s.t. it raises the spiking threshold when the potential decreases. The effect of the proposed spiking threshold (BDETT) is evaluated on SNN-based policy functions for reinforcement learning on two tasks, i.e., robot obstacle avoidance and robotic continuous control. The results highlight better performance for BDETT in terms of success rate and homeostasis than for the baseline methods. $\textbf{Strengths}$:\n\nS1. BDETT highlights a better performance than the baseline methods at least for a given set of parameters. \nS2. As the authors highlight, BDETT has bio-plausible grounds and is novel.\n\n$\textbf{Weaknesses}$:\n\nW1. Weakness of baseline. 
The baseline performance is questionable given that the performance of BDETT is not compared with the performance of any previously published works. I am convinced that BDETT outperforms the previous methods that the authors addressed for a given set of parameters. However, I am not convinced that the proposed baseline is the ground-truth or close to the ground-truth.\n\nW2. Wrong citations. I found that Refs. 24 and 26 are irrelevant to the baseline methods DT1 and DT2. \n\nW3. Lack of in-depth analysis of BDETT. I am slightly tired of the authors' emphasis on the bio-plausibility of BDETT. I agree that the bio-plausibility of a newly proposed method is good. But such bio-plausibility does not justify the proposed method. The authors took a biological notion and recreated the notion largely, so I do not think the fidelity of BDETT to biological notions is very high. Instead, I would like to suggest that the authors systematically address the advantages of BDETT over the previous methods from an engineering viewpoint rather than bio-plausibility.\n\nW4. Additional computation and space complexity. The evaluation and storage of BDETT for all neurons cause additional computation and space complexity, which should have been addressed in detail.\n Q1. Applicability of BDETT to other domains. It is required to apply the proposed method to different application domains, e.g., computer vision. The goal is two-fold: (i) the impact of BDETT on SNNs can be highlighted, and (ii) true baseline performances are available; particularly for the vision domain, there exist tons of previous publications reporting their official performances. Can the authors evaluate the impact of BDETT on other application domains?\n\nQ2. Complexity analysis. Can the authors evaluate the computation and space complexities of the proposed method and compare them with previous works? \n The authors did not address the limitations of the present work. But I believe that the current result limits the impact of the present work to merely two simple tasks with unreliable baseline methods." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "e3IJfSs3-nG", "RosBJT-XYq", "ypk0rm6WC9D", "a4OpK0VY8kfo", "EyQA9P9blylI", "MsEiUHuUWV", "7zfws4EhJ27", "W8ZFFMh_HrVT", "9yAmUPBtf72", "rYoxqz3OcOZ", "aaI3Bpe6Bun", "rUU-BLQBRJn", "nips_2022_1bE24ZURBqm", "nips_2022_1bE24ZURBqm", "nips_2022_1bE24ZURBqm", "nips_2022_1bE24ZURBqm", "nips_2022_1bE24ZURBqm" ]
nips_2022_-bLLVk-WRPy
Structural Kernel Search via Bayesian Optimization and Symbolical Optimal Transport
Despite recent advances in automated machine learning, model selection is still a complex and computationally intensive process. For Gaussian processes (GPs), selecting the kernel is a crucial task, often done manually by the expert. Additionally, evaluating the model selection criteria for Gaussian processes typically scales cubically in the sample size, rendering kernel search particularly computationally expensive. We propose a novel, efficient search method through a general, structured kernel space. Previous methods solved this task via Bayesian optimization and relied on measuring the distance between GPs directly in function space to construct a kernel-kernel. We present an alternative approach by defining a kernel-kernel over the symbolic representation of the statistical hypothesis that is associated with a kernel. We empirically show that this leads to a computationally more efficient way of searching through a discrete kernel space.
Accept
This is a strong submission that benefitted greatly from productive and clarifying discussion between the authors and reviewers, after which the reviewers reached a unanimous stance in favor of acceptance. I recommend that the authors revise the manuscript accordingly in light of these discussions.
train
[ "jDOEqp2Hed1", "og4RA_8m7gc", "2P_Gvk6OANj", "UTnrUPHgutH", "9vhHfAjXuq2", "Ttx26-maaR9", "iheDwxwgKn", "AYjfwETDETP", "tFipHSkQC-Q", "_B8mX1iUmD-", "UmmfCtnMBv9", "IqxXk-908uO" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your careful response to each question.\nI think it is great that positive results are obtained, especially for the validity of the choice of base distance and the hyperparameter optimization of the proposed method, which I wanted to know.\nI continue to recommend the acceptance.", " Thank you for the rebuttal. This addresses my concerns so I have raised my recommendation to accept.\n\n\nWith regard to BO convergence, it is reasonably common practice to use a global optimiser like DIRect or similar to maximise the acquisition function, which has convergence guarantees regardless of non-convexity. However I accept that this case requires an alternative approach, and it is certainly not the alone in that regard - for example in high-dimensional problems, even if global optimisers are used, early stopping may be required for practical reasons. While it is outside the scope of this paper it would certainly be interesting to see an analysis of the influence (or possibly lack thereof) that failure to optimise the acquisition function has on convergence guarantees in BO!", " Thank you for these convincing clarifications! I will accordingly raise my score by one. ", " We thank Reviewer iFD5 for the positive and constructive feedback and will discuss the questions and comments below:\n\n*I think the focus on \"Optimal Transport\" is misleading because the special case of the OT distance that is used is quite trivial, and looks more like an l1 norm of the weights. Nowhere else is optimal transport used.*\n\nWe agree with the reviewer that the focus of our work is not explicitly on the optimal transport techniques, but rather on showing that the kernel-kernel should be defined on the grammar expressions rather than directly on the function space distribution of the GP. We note that we also don't emphasize a focus on OT techniques in the abstract. We will also make this clearer in the introduction of the camera-ready version. \n\nThe OT formulation offers one possible way of defining such a kernel over the grammar expressions. One advantage of that formulation, though, is that it allows in principle to incorporate other ground metrics, such as tree metrics used for neural architecture search (NAS) in Nguyen et al. (2021). This increases the flexibility of our proposed method/ our formulation.\n\n*How does this work together with hyper parameter tuning of GP? Can I first run your method and then tune the hyper parameters as usual?*\n\nsee comments to all reviewers\n\n*Can this be used to pick the kernel for RKHS regression or SVMs?*\n\nWe did not test this, but in principle yes, as our kernel-kernel is defined over the structural form of the kernel and doesn't use other parts tied to GPs. \n\nSimilar to the presented GP selection, one might need to do an optimization of the kernel hyperparameters before calculating the model selection criteria. Applying our method to an SVM classifier might for example use our algorithm in an outer kernel selection loop, while the kernel parameters are optimized in an inner loop, e.g. also via BO. The accuracy on a validation set could be used as selection criteria in both loops. \n\n*Are Rule 3 and Rule 2 ('exchanging a base kernel with another kernel') really necessary? It seems to that they assuming the other rules would be enough...*\n\nWe thank the reviewer for that question/remark. In fact, the kernel space would stay the same without rule 3 (or rule 2 for the general grammar). 
However, these rules are not only used to define the kernel space but also specify *directions* in the kernel space. These operations/search directions are used in both BO methods (ours and the Hellinger kernel approach) in the optimization of the acquisition function. Furthermore, the operations are used in Greedy Search to define the next stage of kernels and in TreeGEP to generate the next population of kernels.", " We thank Reviewer PF7c for the positive and constructive feedback and will discuss the questions and comments below:\n\n*I would be more comfortable in overlooking the second point if there were more experimental results (i.e., more than 4 datasets evaluated), but it is difficult to draw strong conclusions from limited results, even though the improvements shown appear strong. However I think it is important to include comparisons with non-parametric approaches for completeness.*\n\nIn order to address this point, we added more experimental results to the revised version of the supplementary material in a new file called additional_experiments_rebuttal.pdf. We made a comparison to the most recent nonparametric kernel learning method we could find, called "Functional Kernel Learning" of Benton et al. (2019), which places a GP prior over the spectral density of kernels. We observe that almost all search methods over the kernel grammar (including our proposed one) lead to better performing models in the end. Furthermore, we added experiments for the UCI dataset "Concrete", which can also be found in Appendix E (experiments of FKL on Concrete were not yet finished and will be added later).\n\n*When constructing base kernels, do you "fix" hyperparameters like length-scales (for the SE kernels) for each base, or do you tune such parameters using e.g. max-likelihood?*\n\nsee comments to all reviewers\n\n*1b. if the latter, how do you measure the similarity between two examples of the "same" base kernels with different lengthscales?*\n\nWe thank the reviewer for mentioning this important point.\n\nWe treat each expression and also the base kernels as *kernel-families over their parameters*. Comparing two identical families thus results in a distance of 0. We mention this in more detail in Appendix C and in footnote 1 in the main paper. \n\nFor example, the symbol SE stands for the whole family of SE kernels over their lengthscale and variance parameters. The distance between two SE kernels thus is 0 as we deal with the same element, the same kernel-family. All other considered search methods also act on the kernel-family level. The whole procedure might be seen as two model selection loops. Our method searches over kernel families and acts as the outer selection loop. The inner selection loop selects the best parameters for a given kernel family - this is done automatically when calculating the model selection criteria (see comment at the top). We will make this point clearer in the camera-ready version of the main paper.\n\n*Similarly, when doing operations like ADD, did you consider allowing for weighted sums (a1.K1(x,y) + a2.K2(x,y), where a1,a2 are scaling factors) rather than simple sums (K1(x,y)+K2(x,y))?*\n\nAll base kernels come with a variance term, as defined in the kernel grammar in Duvenaud et al. (2013). This automatically leads to a scaling of the base kernels (a toy sketch of this equivalence follows below). 
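A toy sketch of the scaling argument above (our own illustration, not the paper's implementation): because every base kernel carries its own variance parameter, the plain sum of two SE kernels already spans the weighted sums a1*K1 + a2*K2 the reviewer asks about.

```python
import numpy as np

def se_kernel(x, y, lengthscale, variance):
    # Squared-exponential kernel with an explicit variance (scaling) term.
    d2 = (x[:, None] - y[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

x = np.linspace(0.0, 1.0, 6)
a1, a2 = 0.7, 2.3  # arbitrary weights of a weighted sum
weighted = a1 * se_kernel(x, x, 0.2, 1.0) + a2 * se_kernel(x, x, 0.5, 1.0)
# The identical matrix arises from an unweighted sum whose terms simply use
# variances a1 and a2, so explicit weights add no extra expressiveness here.
plain_sum = se_kernel(x, x, 0.2, a1) + se_kernel(x, x, 0.5, a2)
assert np.allclose(weighted, plain_sum)
```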
The similarity is measured on the kernel-family level (as stated above), which includes the scaling parameters.\n\nThus, two expressions that both have the form $k_1=SE+SE$ and $k_2=SE+SE$ will both consider the same kernel family spanned by $K(x,y|\theta_1)+K(x,y|\theta_2)$ over the combined parameter space $\theta_1\times \theta_2$, which includes both lengthscales and variances. As $k_1$ and $k_2$ define the same kernel-family, they have distance 0. In case a prior is given on the kernel parameters (and thus also on the scaling parameters) this can also be seen as a distance between two Bayesian models M1 and M2, which have the same prior in function space. \n\n*As a rough order-of-magnitude, how complicated do SOT kernels become? I would imagine this would depend on how long you let the BO run and what heuristic limits you place on your evolutionary algorithm, but I'm curious how complex the tree becomes.*\n\nWe thank the reviewer for mentioning this.\n\nThe complexity of the best kernel found indeed depends mainly on three things: i) the limits of the evolutionary algorithm in the acquisition optimization, ii) the model selection criterion, and iii) the dataset. After inspecting multiple runs, we found expressions consisting of 6-16 base kernels.\n\n*Does using an evolutionary algorithm rather than global optimisation when maximising your BO acquisition function adversely affect convergence guarantees that usually apply to BO?*\n\nIn fact, results on convergence rates in BO require that a global optimum of the acquisition function has been found. It might be an interesting research question what happens if the acquisition function is not optimized perfectly. However, this might also be a general problem in BO as the acquisition function in standard BO can get highly non-convex, making it hard to find the global optimum.\n", " We thank Reviewer Cka7 for the positive and constructive feedback and will discuss the questions and comments below:\n\n*Given that your method eschews functional distance like in [2], are we sure that it would do well in cases where the true kernel is known? I suppose this can be evaluated on synthetic data.*\n\nWe thank the reviewer for the interesting question. In case synthetic data from a ground truth kernel $k_{gt}$ is used, one might consider two separate points:\n\n1. Do we select the ground truth kernel in the end?\n\n2. Do we reach the same log-evidence value as the ground-truth kernel, i.e., $g(k_{gt}|D)$?\n\nThe first point might depend on the model-selection criterion $g$ that is optimized with our method - e.g. which approximation is used for the log-evidence and if this approximation obtains its maximal value at the ground truth kernel. Considering the second point, we think that our method will select competitive kernels compared to $k_{gt}$ in terms of $g(k|D)$. We plan to add a small experimental section in the camera-ready version considering this topic. \n\nOur expectation here is that if we could calculate the log-evidence perfectly and our dataset $D$ were large enough, then our search method might have a chance to select the ground truth kernel, as the log-evidence would probably also attain its highest value at the ground truth kernel. 
However, as we use a Laplace approximation for the log-evidence, we observe that the model-selection criterion slightly prefers larger hypotheses/kernels (in terms of number of hyperparameters) over smaller ones - thus it might happen that a different kernel is selected that has the same or an even better model selection criterion value. However, our method is not tied to one specific selection criterion. One could, for example, use BIC, which tends to prefer smaller models, or one might use a computationally intense sampling approach to get a better estimate of the log-evidence. \n\n*Or is there some weakness here, hidden in the simplicity (both theoretical and computational) of the pairwise evaluation?*\n\nWe did not observe that the simplicity of the kernel-kernel harmed the performance on the real-world datasets - we thus also don't expect that to happen for synthetic data.\n\nWe think that the grammar expressions act like low-dimensional, compressed representations of the distributions in function space, which in turn allows the usage of simple distance measures - this might be the reason for the good performance despite the simplicity. One might compare this to SVM classification with a simple RBF kernel on image data, like MNIST, which can work fairly well when well-chosen low-dimensional features of the high-dimensional image are used as input to the kernel - in our case these well-chosen features are already naturally given through the grammar expressions.\n\n*How can the OT distance be interpreted? Is there something of interest happening with the multiplicative factors in the total distance Eq. 3 once they get learnt?*\n\nWe thank the reviewer for mentioning this interesting topic.\n\nThe distance value $d(k_{1},k_{2})$ itself might be hard to interpret. However, the multiplicative factors indeed reveal which parts of the distance are more important, depending on the base dataset for which the selection criterion $g(k|D)$ is calculated.\n\nThis can be seen for example in Appendix E, Figure 4, where the values of the factors over the BO iterations for the Airfoil and Airline datasets are shown. Here, we observe that for the Airfoil dataset a comparison of the base kernels (a high weight on the distance over base kernels) is more important, while on the Airline dataset the distance over subtrees is more important - this might be an indication that for the Airline dataset very specific kernel-operator combinations are important, while on Airfoil selecting the correct set of base kernels is more important.", " 
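To illustrate the role of these multiplicative factors, here is a rough sketch of a weighted symbolic distance in the spirit of Eq. 3. The flat feature encoding of the expressions, the indicator ground metric (under which the OT distance reduces to total variation), and the placeholder values for the learned weights are all simplifying assumptions, not the paper's exact implementation.

```python
from collections import Counter

def tv_distance(features_a, features_b):
    """Total variation distance between two normalized feature histograms."""
    pa, pb = Counter(features_a), Counter(features_b)
    na, nb = sum(pa.values()), sum(pb.values())
    return 0.5 * sum(abs(pa[k] / na - pb[k] / nb) for k in set(pa) | set(pb))

def kernel_distance(k1, k2, alphas):
    # k1, k2: per-granularity feature lists; alphas: learned weights.
    levels = ("base_kernels", "paths", "subtrees")
    return sum(a * tv_distance(k1[l], k2[l]) for a, l in zip(alphas, levels))

k1 = {"base_kernels": ["SE", "SE", "PER"],
      "paths": ["ADD-SE", "ADD-SE", "ADD-PER"],
      "subtrees": ["(SE+SE)", "((SE+SE)+PER)"]}
k2 = {"base_kernels": ["SE", "LIN"],
      "paths": ["MUL-SE", "MUL-LIN"],
      "subtrees": ["(SE*LIN)"]}
print(kernel_distance(k1, k2, alphas=(0.5, 0.3, 0.2)))
```

A large learned weight on one granularity level then directly marks that level of the expression as the decisive one for the dataset at hand.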
\n\nFor example, the contribution of two subtrees $S_{1}$ and $S_{2}$ to the total distance between two kernels is determined not only by whether the subtrees completely match - but also whether their own subtrees match, or whether their base kernels match - this makes the overall distance expressive even though it has a simple base distance.\n\n*Is it possible to approximate the Wasserstein distance and use the proposed method when a general distance that is not an indicator function is used as the base distance?*\n\nYes, in case one wants to use a different ground distance, there are two options:\n\n1. One could compute the Wasserstein distance by solving the optimal transport problem using linear programming for each kernel evaluation - this was done for neural architecture search in Kandasamy et al. (2019) and for BO over molecules in Korovina et al. (2020). However, kernels based on general Wasserstein distances are not necessarily p.s.d.\n\n2. One could consider tree metrics as a basic metric. These were used for neural architecture search in Nguyen et al. (2021) and allow for more flexibility. For these kinds of metrics, the OT problem also has closed form solutions and the resulting kernel is also p.s.d. We conducted some tests with these metrics and did not find any significant change in performance. Therefore, we decided to use the conceptually simpler ground metric.\n\n*The proposed method seems to cover only model selection for the kernel function itself, but if we want to further perform hyperparameter selection for the kernel, can we combine it with conventional methods (e.g., marginal likelihood maximization)?*\n\nsee comments to all reviewers\n\n*Are the hyperparameters alpha1, 2, and 3 in distance (3) optimized at the same time as the other kernel-kernel hyperparameters? How does this affect performance of the proposed method?*\n\nYes, all hyperparameters of the kernel-kernel, including the distance weights, lengthscale and variance are learned in a combined way via marginal-likelihood maximization using the LBFGS optimizer.\n\nWe also think that this is the correct way, as we want to find the combined set of hyperparameters with highest marginal-likelihood value - thus solve for the combined optimization problem. Furthermore, we observed that the optimization of all hyperparameters combined is fairly stable and usually ends up at the same values for different starting points, indicating a well-behaved loss landscape for the combined optimization problem.\n\n*The distance for kernel-kernel (3) itself contains the hyperparameters alpha1, 2, and 3, so it seems that the hyperparameter tuning for kernel-kernel are going to be harder than for ordinary BO when performing BO on the kernel space.*\n\nOur hyperparameter optimization problem is comparable to standard BO with an RBF kernel on $\\mathbb{R}^{d}$ containing the same number of hyperparameters. Since we learn five hyperparameters it is comparable to BO over four dimensions with an RBF kernel with four lengthscales and one variance.\n\nHowever, the hyperparameter optimization step is not necessarily the most computationally intensive part of GP inference in kernel space. For the Hellinger kernel, the distance calculations, for example, consume much more computational time than the hyperparameter optimization itself (which can be performed without recalculating the distances in each iteration). 
Our kernel-kernel on the other hand performs distance calculations much faster (see Appendix D, Figure 3) - rendering BO over kernel space much more similar (in terms of computation time) to standard BO over euclidean spaces.", " **Changes in the revised version:**\n\nA new file named additional_experiments_rebuttal.pdf that addresses the experiment requests of reviewer PF7c is included in the supplementary zip. All other proposed changes and clarifications will be included in the camera-ready version of the paper/supplementary.\n\n**Optimization of the kernel hyperparameters:**\n\nWe thank the reviewers for bringing to our attention that we were not clear enough on how we proceed with the GP/kernel hyperparameters of the selected kernels $k_{t}$ during the BO iterations. We actually *learn the hyperparameters* of the kernel as part of calculating the model selection criteria.\n\nAs we approximate the log-evidence $g(k_{t}|D)$ via Laplace approximation, the calculation of $g(k_{t}|D)$ already includes an MAP optimization of the kernel hyperparameters (see Appendix A.3) - we thus automatically receive learned hyperparameters when calculating the log-evidence. In fact, most of the GP model selection criteria, such as the utilized log-evidence, but also the Bayesian information criteria (BIC) implicitly perform a hyperparameter optimization. We will make this point clearer in the camera-ready version of the main paper.", " This paper considers the problem of selecting the kernel of a Gaussian process (i.e., the model selection problem), which is one of the hyperparameter optimization problems for Bayesian optimization. \nThe authors considered that a kernel function is consisted of a set of \"basis\" kernels in the space of kernel functions and a number of operations, following the previous (kernel grammer) work of Duvenaud et al.\nA kernel grammar can be mapped to a tree that represents it.\nFor the discrete probability distribution that is a summary of this representation tree, we can define the Wasserstein distance with the indicator function as the cost function. In this case, the Wasserstein distance coincides with the total variation distance. This distance can be used to define a meta kernel function (kernel-kernel) that measures the similarity of two kernel functions on the space of kernel functions. The proposed method uses a Gaussian process with this kernel-kernel as the covariance function, and Bayesian optimization is used to select the kernel model. \nIn the experiment, the authors compared and evaluated the model selection performance of the proposed method with existing methods on four types of benchmark data including time series. strength\n\n- Traditionally, model selection for the kernel function itself has been done manually depending on the problem or by multiple kernel modeling for several candidate kernels. On the other hand, this study performs Bayesian optimization of combinations of kernel components based on the similarity defined between the kernel functions. In this approach, only the base kernel and operation rules need to be prepared, and there is no need to prepare multiple candidate kernel functions as in multiple kernel learning. Bayesian optimization for model selection allows for the \"construction\" of good kernel functions without human intervention.\n\n- The mapping of kernels to representation trees allows the problem of finding the optimal kernel to be viewed as a problem of finding the optimal tree structure. 
\n\n- The proposed method has higher scalability than existing methods (Malkomes et al.) that use Bayesian optimization based on Hellinger distance for the same problem. This is because the Hellinger distance-based method requires an integral calculation on the kernel parameters for kernel-kernel evaluation.\n\nweakness\n\n- In this paper, an indicator function is used for the base distance to describe the Wasserstein distance as a closed-form solution (total variation distance). However, this seems to be a discrete and extreme similarity evaluation that only looks at whether or not the elements that make up the two representation trees match.\n\n- The distance for kernel-kernel (3) itself contains the hyperparameters alpha_1, 2, and 3, so it seems that the hyperparameter tuning for kernel-kernel are going to be harder than for ordinary BO when performing BO on the kernel space. - The proposed method seems to cover only model selection for the kernel function itself, but if we want to further perform hyperparameter selection for the kernel, can we combine it with conventional methods (e.g., marginal likelihood maximization)?\n\n- Is it possible to approximate the Wasserstein distance and use the proposed method when a general distance that is not an indicator function is used as the base distance?\n\n- Are the hyperparameters alpha_1, 2, and 3 in distance (3) optimized at the same time as the other kernel-kernel hyperparameters? How does this affect performance of the proposed method? The authors adequately addressed the limitations and potential negative societal impact of their work.", " To find the best kernel for a GP, rather than conducting BO with a value function given by a GP comparison in function space, this paper develops a novel way to compare two kernels, based on optimal transport between feature representations of the tree representation of the kernel. The paper comes up with a good idea for kernel search, inspired I believe from references [10,4] which conduct NAS by BO using OT over the computational graph. Thus the central idea is at hand (structure search by BO using OT over structure distance), though not immediate.\nThe paper is clearly written. Many details are left to the appendix, but this still seems adequate.\nThe experimental validation is adequate.\nThe formalization of the OT metric in sec 3 is justified thoroughly.\nFig 1 is not legible when printed on A4 format. Given that your method eschews functional distance like in [2], are we sure that it would do well in cases where the true kernel is known? I suppose this can be evaluated on synthetic data. Or is there some weakness here, hidden in the simplicity (both theoretical and computational) of the pairwise evaluation?\nHow can the OT distance be interpreted? Is there something of interest happening with the multiplicative factors in the total distance eq3 once they get learnt? Yes, though section 5 could be made more explicit. ", " The paper proposes a method for kernel selection (search) wherein kernels are built from base kernels using fundamental operations (eg addition, multiplication etc). Kernels so constructed are represented as trees, allowing their similarity to be evaluated based on their grammar (tree structure and assumptions regarding basic operations, eg their commutativity or otherwise) rather than e.g. distance between GPs. 
The construction of the kernel-kernel is thorough and I am persuaded that, while heuristic, this is a good alternative to measuring the difference between GPs or kernels using e.g. L2-norm. Further, the overall approach of building kernels from base kernels reflects the human (parametric) approach to the problem well.\n\nMy main problems with this paper are:\n\n1. Comparison with non-parametric approaches to kernel selection (such as hyperkernels) is missing.\n2. It could be argued that this approach is incremental, simply substituting one kernel-kernel [2] with another that has the advantage of being more readily evaluated.\n3. Experimental results are limited.\n\nI would be more comfortable in overlooking the second point if there was more experimental results (ie. more than 4 datasets evaluated), but it is difficult to draw strong conclusions from limited results, even though the improvements shown appear strong. However I think it is important to include comparisons with non-parametric approaches for completeness. 1. When constructing base kernels, do you \"fix\" hyperparameters like length-scales (for the SE kernels) for each base, or do you tune such parameters using e.g. max-likelihood? Further:\n1a. if the former, do you attempt to give some flexibility by e.g. having multiple SE kernels with different lengthscales as distinct base kernels?\n1b. if the latter, how do you measure the similarity between two examples of the \"same\" base kernels with different lengthscales?\n\n2. Similarly, when doing operations like ADD, do you considered allowing for weighted sums (a1.K1(x,y) + a2.K2(x,y), where a1,a2 are scaling factors) rather than simple sums (K1(x,y)+K2(x,y))? This would allow more flexibility, but I'm not sure how well it would fit in your kernel similarity calculations.\n\n3. As a rough order-of-magnitude, how complicated do SOT kernels become? I would imagine this would depend on how long you let the BO run and what heuristic limits you place on the your evolutionary algorithm, but I'm curious how complex the tree becomes.\n\n4. Does using an evolutionary algorithm rather than global optimisation when maximising your BO acquisition function adversely affect convergence guarantees that usually apply to BO? As noted previously, more experimental comparisons (more datasets) and at least one non-parametric result (e.g. hyperkernels) would improve this paper.", " This paper proposes a new way to select a kernel (covariance function) for Gaussian processes (GP). The authors borrow ideas from NN architecture search to propose a so called symbolical-optimal-transport (SOT) kernel over the architecture of the kernel. In contrast to prior work, they do not compare to different kernels in function space. Instead they compare the symbolic architecture of the kernel directly. They do this by capturing the structure of the kernel in a tree and then compare these trees by comparing them by some kind of optimal-transport metric. \nThis SOT kernel is then used for Bayesian optimization for model selection. This is fairly standard. A comparison to a hellinger kernel-kernel is given. The SOT kernel is computational advantageous because no integrals have to be computed.\nThe authors then demonstrate in a series of four experiments that the SOT kernel outperforms Greedy, Hellinger, Tree-GEP and SOT. This is true for both the training and the test set. Strengths: While this idea has been executed for Neural Networks before, the presented idea is novel for Gaussian process model selection. 
The experiments are very encouraging and the idea seems natural. The computational advantages of this approach (over the known function space view) are well-explained. While the idea is simple, I do not consider this a weakness. The topic is very significant because model selection is extremely important for GPs. The quality of writing is good, it is quite easy to understand.\n\nWeakness: There is almost no theory in the paper, except for the trivial Proposition 1. I think the focus on \"Optimal Transport\" is misleading because the special case of the OT distance that is used is quite trivial, and looks more like an l1 norm of the weights. Nowhere else is optimal transport used. --- How does this work together with hyper parameter tuning of GP? Can I first run your method and then tune the hyper parameters as usual?\n--- Can this be used to pick the kernel for RKHS regression or SVMs?\n--- On page 4: Are Rule 3 and Rule 2 ('exchanging a base kernel with another kernel') really necessary? It seems to that they assuming the other rules would be enough...\n\nMinor points:\n--- The plural of GP is GPs, not GP's.\n--- Please define g before it's used on page 3. The limitations are transparent and well-addressed." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "iheDwxwgKn", "9vhHfAjXuq2", "UTnrUPHgutH", "IqxXk-908uO", "UmmfCtnMBv9", "_B8mX1iUmD-", "tFipHSkQC-Q", "nips_2022_-bLLVk-WRPy", "nips_2022_-bLLVk-WRPy", "nips_2022_-bLLVk-WRPy", "nips_2022_-bLLVk-WRPy", "nips_2022_-bLLVk-WRPy" ]
nips_2022_19MmorTQhho
One Inlier is First: Towards Efficient Position Encoding for Point Cloud Registration
Transformer architecture has shown great potential for many visual tasks, including point cloud registration. As an order-aware module, position encoding plays an important role in Transformer architectures applied to the point cloud registration task. In this paper, we propose a one-inlier based position encoding method for a point cloud registration network. Specifically, we first find one correspondence via a differentiable optimal transport layer, and use it to normalize each point for position encoding. This eliminates the challenges brought by the different reference frames of the two point clouds, and mitigates feature ambiguity by learning spatial consistency. Then, we propose a joint approach for establishing correspondences and position encoding, presenting an iterative optimization process. Finally, we design a progressive way of aligning point clouds and learning features to gradually optimize the rigid transformation. The proposed position encoding is very efficient, adding only a small memory and compute overhead. Extensive experiments demonstrate that the proposed method can achieve competitive performance with state-of-the-art methods in both indoor and outdoor scenes.
Accept
Thanks in large part to the rebuttal conversation, the reviewers converged to accept this paper. The reviewers recognize the interest and value of the approach and the careful empirical results, bolstered by additional results introduced during the discussion. In preparing the camera-ready, the authors are encouraged to revisit the comments from reviewer 2PZE suggesting that they verify whether the two anchor points truly form an 'inlier' correspondence; this can be easily done by calculating the distance between the two anchor points under the ground-truth transformation. Also, please make the title change and any other edits promised in the rebuttal, especially the discussion of drawbacks and avenues for future research.
train
[ "41WK1sVYUyT", "Rm3HrTQAIF", "xCNWQpWVkD2", "XlIlkqws79s", "3aS5o0RXAHA", "mvDDiJuqD4", "PzdUg0pMMZ2", "WcsVdfl72N", "pUkd9rX0ihB", "31SJOF-G23t", "8oEPpglFeng", "szX9PZcpCO", "s2pcVnRoimF" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your constructive suggestions and helps about improving this paper! We will add the suggested experiments and explanations in the revised version.", " Thanks for providing such a detailed answer!\n\nEspecially the additional experimental evidence on the positional encodings helps to overcome the doubts verbalized in the review. The additional ablation (absolute, 1) and the performance analysis (3) would be great to be included in the paper / supplementary material as this question may arise and it backs up the claims made in the text.\nThe explanation in (5) equally prevents others from misunderstanding - would be good to add this comment.\n\nThe new title suggestion is also more specific and fitting.", " 1. **Q**: Do you also used RANSAC with \"10k validation\" in your experiments for Table 1? \n**A**: Yes. In Table 1, we also used RANSAC with \"10k validation\" in our experiments.\n2. **Q**: To be effective for position encoding of other points, I think the two points in the one-inlier correspondence should be true corresponding points. By quality, what I want to know is how accurate this correspondence is. Do you calculate the distance between the two points under ground-truth transformation? \n**A**: Our method employs a coarse-to-fine manner to find correspondences and aims at finding coarse correspondences (patch correspondences) in the coarse stage. Our position encoding is designed for establishing coarse correspondences. The higher the precision of the “one inlier”, the more accurate our position encoding will be. The “one inlier” is calculated by an averaging operation of $\\bar{\\textbf{C}}_{topk}$ (because of formatting issues it is represented as C-topk in the following part), thus we represent the precision of the “one-inlier” by the inlier ratio of C-topk in Fig. 3. Following GeoTransformer [29], we also consider a coarse correspondence is correct if the overlap of the corresponding two patches is greater than a predefined threshold (we set the threshold to be 0 as GeoTransformer [29]) under ground-truth transformation. Otherwise, it will be regarded as a wrong coarse correspondence. According to this way, we determine if each coarse correspondence is an inlier and count the inlier ratio of C-topk in Fig. 3. For the overlap of two patches, we calculate the alignment distance of each point in the patches, i.e., calculating the distance between the two points under ground-truth transformation. If the alignment distance of a point in one patch is lower than a predefined threshold (we set the threshold to be 0.05 as GeoTransformer [29]), it will be considered as being in the overlapping area. In Fig. 3, we count the inlier ratio of C-topk, the experimental results show that the inlier ratio increases after optimization. Thus the precision of the “one-inlier” is improved.", " I appreciate the detailed response from the authors. Some further clarification is desired about question 7 and 9. \n\nIn reply to question 7, you mentioned \"GeoTransformer utilizes the RANSAC with 50k iterations and 50k validation, while our method uses the RANSAC with 50k iterations and 10k validation\". Do you also used RANSAC with \"10k validation\" in your experiments for Table 1?\n\nAbout the quality of the “one-inlier” in question 9. To be effective for position encoding of other points, I think the two points in the one-inlier correspondence should be true corresponding points. By quality, what I want to know is how accurate this correspondence is. 
Do you calculate the distance between the two points under ground-truth transformation? ", " Thank all the reviewers for their careful review. For the grammar mistakes, we will carefully proofread them later.\n1. **About the title may be a little misleading (To Reviewer dDv9 Q2 and Reviewer 2PZE Q3).** \n“One inlier” means that we find a virtual correspondence, then the two points corresponding to the correspondence are considered as the reference points. Next, the reference points are utilized to further encode point-wise geometric position features, and generate more discriminative features for the point cloud. We expect the found virtual correspondence to be an inlier, and design iterative optimization strategies to realize this. Once we find an inlier, it is utilized to efficiently perform the proposed position encoding. We also realize that the title may not accurately express our core ideas and is a little misleading, because after finding an inlier, we leverage the inlier to perform the subsequent operations, including: position encoding for feature reconstruction and several times joint optimization. Thus, we consider revising the title to: “One Inlier is First: Towards Efficient Position Encoding for Point Cloud Registration”, and emphasize that finding an inlier is an important prerequisite for our approach.\n2. **The analysis about the relationship between the registration results and the established correspondences (To Reviewer fXaq Q2 and Reviewer 2PZE Q1).** \nIn order to analyze the reason why our method can get better registration recall while the inlier ratio is not as good as Geotransformer, we conduct an experiment: we count the scene frequency of different inlier ratios for Geotransformer and our method, and present the curve charts of scene distributions in Fig.2 of supplementary material. Specifically, we count the number of scenes with different inlier ratios on 3DMatch and 3DLoMatch datasets, and present them in frequency form. Besides, we also count the scene registration recall of different inlier ratios and provide the curve charts in Fig.3 of supplementary material.\nHere we take the results on 3DLoMatch as an example, from the Fig. 2 of the supplementary material, we can observe that the inlier ratio of our method is about 0.05 to 0.6, while Geotransformer has more scenes with inlier ratio between 0.5 and 0.9 than ours. Geotransformer has higher inlier ratio in some scenarios, which causes the mean inlier ratio to be higher than ours. However, higher inlier ratio is not a necessary condition for high registration recall. For example, for the same scenario, suppose there are two different correspondence results with the same inlier ratio but different distributions of inliers, the correspondences with more uniform distribution of inliers may achieve better registration performance than the correspondences with more locally clustered inliers. This can be explained by the fact that too locally clustered distribution of inliers may cause degradation issues in model estimation. From Fig. 3 of the supplementary material we can observe that our method can achieve better registration recall than Geotransformer with the same inlier ratio (e.g, the interval [0.15, 0. 5]), which shows that the inliers of our method may be better distributed for model estimation. Meanwhile, it also explains why our method has lower inlier ratios, but achieves better registration results. Besides, when the inlier ratio is higher than 0.45, our registration recall is almost 1. 
This also proves that when the inlier ratio reaches a certain extent, higher inlier ratio has no more contributions for the final registration results. It is worth pointing out that our method has higher inlier ratio than other methods except for GeoTransformer. We speculate that our method constructs global positional features by first finding an inlier, which can introduce spatial consistency. So our method is possible to find matching pairs that cannot be found by local features alone, which may benefit the final registration task. Similarly, we can get this conclusion by analyzing the experimental results on 3DMatch dataset.", " 1. **Q**: A good position encoding should be helpful to find more and better correspondences and a better registration is only a by-product of better correspondences, but the experimental results show that the IR and FMR of the proposed method are far lower than that of GeoTrans. Analysis are needed about this issue. \n**A**: Question 1 and Weakness 1 are the similar issues, so we answer them here together. As pointed in [17, 2, 7, 47], registration recall is the more important metric than IR and FMR in point cloud registration task, because the final goal of point cloud registration is to estimate the rigid transformation. Although the inlier rate of our method is lower than Geotransformer, our method can consistently achieve the best registration recall in all settings. We analyze the reason why our method can get better registration recall while the inlier ratio is not as good as Geotransformer in the Part 2 of the General Response section.\n2. **Q**: The last two contributions are not significant. Especially, iterative optimization is widely used in recent learning-based registration methods, such as PRNet, RPM-Net. \n**A**: Indeed, iterative optimization has been widely used in recent learning-based registration methods. Some learning-based registration methods do correspondence establishment and the final transformation estimation in an iterative manner, similar to the classical ICP algorithm, such as PRNet, RPM-Net. However, our iterative optimization aims to find accurate reference points and progressively accurate position encoding, not for the final correspondence establishment and transformation estimation. Our method jointly optimizes the reference points and position encoding with point-wise feature encoding as the agent several times. In the process, the features of the point clouds are continuously optimized. Finally, the optimized point cloud features are utilized for the final correspondence establishment.\n3. **Q**: The part “one-inlier is enough” in title is a little misleading. \n**A**: The explanation and the revised proposal of the title are presented in the Part 1 of the General Response section.\n4. **Q**: In Fig.2, it should be the Matched Correspondences instead of {R, t}, that are transferred from the Inlier Learning module to the Progressive Alignment module. \n**A**: We will revise this part and make sure Fig.2 is clear in the revised version.\n5. **Q**: For the “Progressive alignment”, how many iterations are performed? \n**A**: We repeat the “progressive alignment” 2 times. We have clarified it in the supplementary material.\n6. **Q**: For Fig. 3, the inlier ratio of every iteration should be given, instead of only the initialization and the final optimized one. \n**A**: We perform the iterative joint optimization only two times. 
The initialization one and the final optimized one means the inlier ratio of $\\bar{\\textbf{C}}_{topk}$ in the first and second iterative joint optimization, respectively.\n7. **Q**: In Table 2, what constitutes the “Pose” Time? Is it the runtime of the RANSAC 50k? If so, why is the proposed method much faster than GeoTransformer with lower IR? \n**A**: We test the runtime with RANSAC 50k. GeoTransformer utilizes the RANSAC with 50k iterations and 50k validation, while our method uses the RANSAC with 50k iterations and 10k validation. Too much validation times cause that GeoTransformer inferences too slow and it is difficult to apply it to real-world scenarios.\n8. **Q**: What’s the number of sample correspondences in the runtime experiment in Table 2? \n**A**: The number of sample correspondences is 5000 in the runtime experiment. We will add a description in the revised version.\n9. **Q**: The quality of the “one-inlier” for position encoding should be given in each iteration. \n**A**: We represent the quality of the “one-inlier” by the inlier ratio of $\\bar{\\textbf{C}}_{topk}$ in Fig. 3. We perform the iterative joint optimization two times, so we show the statistical results in the two iterations (represented as “initialization” and “optimized” respectively).\n10. **Q**: RRE and RTE should be reported for the experiments on 3DMatch and 3DLoMatch since the objective of this paper is registration. \n**A**: We provide the RRE and RTE of our method and the closest competitor Gentransformer in Tab.3 of supplementary material.\n11. **Q**: I suggest address the issue about if the proposed method is applicable to registration of objects, such as models in ModelNet40. \n**A**: Limited by the time of rebuttal, we do not have enough time to complete the experiment at this time. We will conduct experiments on object benchmark (e.g., ModelNet40) to verify the applicability of the proposed method to the registration of objects in the revised version.", " 1. **Q**: line 32: \"... the straightforward position encoding is not a good idea [23].\" In [23] it shows up to 3.8% improvement on 4DLoMatch and 1.5% higher RR on 3DMatch and 2.3% on 3DLoMatch. This does not imply straightforward position encoding is not a good idea. To make this statement true, one should show the comparison with/without direct position encoding, which is not provided in the paper. \n**A**: For the “straightforward position encoding”, it means the absolute position encoding. [23] is cited here because it also indicates that the straightforward position encoding is not a good idea. Thus, it [23] proposes a relative position encoding. However, it is still not enough to just satisfy the relative position encoding. An arbitrary reference point would introduce high uncertainty, because the two reference points of point clouds are hardly guaranteed to be correlated. Therefore, we propose the one-inlier based position encoding to compensate the positional differences caused by reference frames. In fact, we have presented the experiments about replacing the one-inlier based position encoding with the centroid based position encoding in our ablation studies, i.e., “w/o associated reference points”. Besides, we also provide an experiment about replacing our relative position encoding module with the absolute position encoding. Results can be found in the following table, where we report the results of ours, absolute position encoding and centroid based position encoding. 
The results show that our position encoding achieves the best performance. \n**3DMatch:**\n|Methods|RR(%)|FMR(%)|IR(%)| \n|:--------------------|:-----:|:-----:|:-----:|\n|absolute|89.5|97.3|56.6|\n|centroid based|89.8|97.2|57.1|\n|ours|92.4|98.1|62.3|\n**3DLoMatch:**\n|absolute|68.4|82.2|23.7|\n|centroid based|69.8|82.7|24.2|\n|ours|76.1|84.6|27.5|\n2. **Q**: line 51: \"This shows that only one inlier is enough to preserve the spatial consistency [2, 7] ...\" I am not sure why those two papers are cited here. Do they mention anything related to this statement? \n**A**: Those two papers both introduce the spatial consistency for finding good correspondences in point cloud registration. They verify that preserving the spatial consistency is beneficial to point cloud registration. We cite them here because the proposed one-inlier based position encoding method also tries to utilize the spatial consistency constraint.\n3. **Q**: line 286: \"This is because ... both consider the position information ...\" To justify this statement, one should provide an ablation study on a model with only the difference with and without positional encoding. However, this is not provided in the evaluation section. \n**A**: In the following table, we present an ablation study on our model. We remove the progressive alignment module in all the following experiments. “iterative position encoding” means that we combine the proposed position encoding with the joint optimization. We can see that the performance of our method benefits from the position encoding. \n**3DMatch:**\n|Methods|RR(%)|FMR(%)|IR(%)| \n|:--------------------|:-----:|:-----:|:-----:|\n|w/o position encoding|88.9|96.7|45.0|\n|with position encoding|91.1|97.5|58.9|\n|iterative position encoding|91.4|98.2|61.8|\n**3DLoMatch:**\n|w/o position encoding|69.1|81.3|22.4|\n|with position encoding|72.6|83.6|25.7|\n|iterative position encoding|74.2|84.1|27.1|\n4. **Q**: line 342: \"our method is scene-agnostic and maintains good registration accuracy in strongly differing scenarios.\" Without a description of how the network was trained in those experiments, it is impossible to know whether the written sentence is true. \n**A**: “our method is scene-agnostic” means that our method performs well on differing scenarios, including indoor and outdoor scenes. We provide the extensive description about how to train the network on different datasets in the supplementary material, including the network architecture details, implementation details and datasets.\n5. **Q**: In Figure 3, the inlier ratio for 3DLoMatch. Why does the 0.0 percentage increase after optimization? \n**A**: In fact, the “0.0 percentage” you said corresponds to the interval [0, 0.025]. For extreme low-overlap scenarios, the initial inlier ratio of $\\bar{\\textbf{C}}_{topk}$ may be very low, which would cause the produced virtual correspondence not to be a truth correspondence and introduce false position encoding for the subsequent feature reconstruction. Finally, due to the introduction of extremely wrong position encoding, the optimized inlier ratio decreases and the number of scenes with inlier ratios close to 0 increases after optimization.\n6. **Q**: The authors do not mention limitations in the main paper. No text in the main paper indicates a limitation section in the supplementary material. \n**A**: We present the limitations in line 128-135 of supplementary material and we will either mention them in or move them to the main body in the revised version.", " 1. 
**Q**: I am a bit worried about the robustness of the proposed algorithm since the inlier ratio is significantly dropped. \n**A**: It is true that our method performs worse than GeoTransformer in terms of inlier ratio, but our method achieves the highest registration recall (i.e., the most important metric in point cloud registration task) and performs consistently well under different numbers of sampled correspondences. In fact, inlier ratio does not totally reflect the robustness of the registration. Because usually the inlier ratio is counted by averaging all the point cloud pairs. For some easy-to-handle scenarios, if the inlier ratio reaches a certain extent, the higher inlier ratio will have no more contributions to the final registration results, but the mean inlier ratio will rise. We think that the inlier ratio in the challenging scenes better reflects the robustness of the algorithm rather than the mean inlier ratio. To illustrate the issue more intuitively, we count the scene frequency of different inlier ratios for Geotransformer and our method, and present the curve charts of scene distributions in Fig.2 of supplementary material. Specifically, we count the number of scenes with different inlier ratios and present them in frequency form. Besides, we also count the scene registration recall of different inlier ratios and provide the curve charts in Fig.3 of supplementary material. Here we take the results on 3DLoMatch dataset as an example, and the results show that: 1) the inlier ratio of our method is between 0.05 and 0.6 in most scenarios. For Geotransformer, it has more scenarios where the inlier ratio is between 0.5 and 0.9. However, RANSAC has certain anti-noise capability. When combined with the Fig. 3 of the supplementary material we can see that our relatively lower inlier rate is adequate to achieve the high registration recall. High inlier ratio is not a requisite for the high registration recall. 2) Besides, it is worth noting that our method has fewer scenarios than Geotransformer in extremely low inlier ratio, which proves that our method is more robust for extremely challenging scenes. For most scenarios, our method is stable. Although it can’t provide inlier ratio as high as Geotransformer, it generally has the ability to provide enough inliers and still has more consistent performance than Geotransformer. In fact, we can get a similar conclusion by analyzing the experimental results on 3DMatch dataset.\nIn addition, our method can achieve consistently competitive performance in both indoor and outdoor datasets, even the low overlap scenes. It also proves that our method has strong robustness and adaptability to different scenarios.\n2. **Q**: Why the inlier ratio is relatively low? \n**A**: Please refer to the Part 2 in General Response section, we present a detailed analysis about why the inlier ratio of our method is relatively lower than Geotransformer and the relationship between the registration results and the established correspondences.\n3. **Q**: Limitations. \n**A**: We present the limitations in line 128-135 of supplementary material and we will either mention them in or move them to the main body in the revised version.", " 1. **Q**: I feel the paper is a little bit out of fashion/date, given that the community has designed many supervised learned point cloud feature descriptors and moved to studying unsupervised or \"self-supervised\" feature descriptors. 
1. **Q**: I feel the paper is a little bit out of fashion/date, given that the community has designed many supervised learned point cloud feature descriptors and moved on to studying unsupervised or \"self-supervised\" feature descriptors. The proposed descriptor, though outperforming others, is still supervised and requires mining ground-truth correspondences for supervision. [Ref1] and others, for example, have already shown the possibility of learning features in a self-supervised way. Therefore, I think the paper is making an incremental contribution rather than a major one. \n**A**: Several unsupervised and self-supervised works have recently been proposed for point cloud registration and have achieved good performance. Supervised point cloud registration nevertheless still receives great attention, and many works have emerged, such as Predator, CoFiNet, REGTR, and GeoTransformer. Unsupervised and self-supervised methods do not require ground-truth annotations and generalize well, but their registration accuracy is limited and currently struggles to match that of supervised methods; supervised methods achieve state-of-the-art performance but rely on a sufficient number of ground-truth labels. Both directions are worth further exploration. The proposed method belongs to the supervised category, and our main contributions are: 1) we propose an efficient one-inlier-based position encoding for point cloud registration and achieve performance competitive with the latest state-of-the-art approaches; 2) our method is lightweight and efficient: its GPU memory usage is only about 40% of GeoTransformer's, and its registration speed is 3.5 times faster. It therefore has the potential to be applied in real-world scenarios.\n\n2. **Q**: The title of this paper, to me, is misleading and jargon-laden. One inlier correspondence is NOT enough to register a pair of point clouds, and at least THREE inlier correspondences (with non-collinear points) are required to define a unique rigid transformation (see [Ref2]). What the paper really means is to predict an \"anchor\" point (sort of like the centroid of the point cloud) and subtract it from all other points. I think the title needs to be better worded. \n**A**: The explanation and the proposed revised title are presented in Part 1 of the General Response section.\n3. **Q**: Since you know the ground-truth pose, could you add a pose-error term to the loss function and see how it affects performance? \n**A**: We have tried adding a loss term on the error of the pose estimated at each progressive alignment step. The results below show that this pose loss term has limited effect on performance; we believe it plays a role in training similar to that of the coarse and fine correspondence losses described in the main body. \n**3DMatch:**\n|Methods|RR(%)|FMR(%)|IR(%)|\n|:--------------------|:-----:|:-----:|:-----:|\n|with pose loss term|91.8|97.0|62.5|\n|w/o pose loss term|92.4|98.1|62.3|\n**3DLoMatch:**\n|Methods|RR(%)|FMR(%)|IR(%)|\n|:--------------------|:-----:|:-----:|:-----:|\n|with pose loss term|75.6|84.4|26.0|\n|w/o pose loss term|76.1|84.6|27.5|\n\n4. **Q**: Limitations not mentioned. \n**A**: Due to the page limit, we present the limitations in lines 128-135 of the supplementary material and will either reference them in, or move them to, the main body in the revised version.
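For context on the [Ref2] point raised here: a single correspondence fixes a translation but leaves the rotation underdetermined, which is why at least three non-collinear correspondences are needed for a unique rigid pose. A standard SVD-based (Kabsch) solver, which agrees with Horn's quaternion solution, might look as follows; this is an illustrative sketch, not the paper's solver:

```python
import numpy as np

def rigid_from_correspondences(src, tgt):
    """Least-squares R, t from >= 3 non-collinear point correspondences."""
    src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - src_c).T @ (tgt - tgt_c)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, tgt_c - R @ src_c

# Sanity check: recover a known rotation about z plus a translation.
theta = 0.3
R_gt = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0, 0.0, 1.0]])
t_gt = np.array([0.5, -0.2, 1.0])
src = np.random.default_rng(0).random((10, 3))
R, t = rigid_from_correspondences(src, src @ R_gt.T + t_gt)
assert np.allclose(R, R_gt) and np.allclose(t, t_gt)
```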
", "This paper presents a learning-based algorithm that establishes reliable correspondences for registering a pair of point clouds.
Point cloud registration has been a long-studied problem in robotics and computer vision, and many papers propose learning-based methods for feature learning and correspondence estimation.
In my view, there are two ideas that set this paper apart from previous papers:
- The proposed pipeline predicts an \"anchor\" point for each point cloud (in the paper this is called \"one inlier\") and uses the \"anchor\" point to \"normalize\" the rest of the point cloud (in the paper, the normalization is basically subtracting the anchor point from every other point in the cloud). After normalization, an additional feature learning step is performed to augment the coarse features (learned by KPConv).
- The proposed pipeline performs correspondence learning and transformation estimation in an iterative manner, similar to the classical ICP algorithm. In classical ICP, correspondences are established via nearest-neighbor search; in this paper, correspondences are established by learning, using the features obtained from the \"anchoring\" idea mentioned above.

The paper then tests the proposed pipeline on the 3DMatch and 3DLoMatch datasets and demonstrates that it delivers similar or better performance compared to several previous learned features for point cloud registration.

Strengths:
+ The paper is overall well written and easy to understand.
+ The idea of predicting or voting for an \"anchor\" point and using it to normalize the point cloud seems relatively new (though I feel this idea is similar to Hough voting).
+ Experiments are well conducted and convincing.

Weaknesses:
- I feel the paper is a little bit out of fashion/date, given that the community has designed many **supervised** learned point cloud feature descriptors and moved on to studying **unsupervised** or \"self-supervised\" feature descriptors. The proposed descriptor, though outperforming others, is still supervised and requires mining ground-truth correspondences for supervision. [Ref1] and others, for example, have already shown the possibility of learning features in a self-supervised way. Therefore, I think the paper is making an incremental contribution rather than a major one.
- The title of this paper, to me, is misleading and jargon-laden. One inlier correspondence is NOT enough to register a pair of point clouds, and at least THREE inlier correspondences (with non-collinear points) are required to define a unique rigid transformation (see [Ref2]). What the paper really means is to predict an \"anchor\" point (sort of like the centroid of the point cloud) and subtract it from all other points. I think the title needs to be better worded.

[Ref1] Yang H, Dong W, Carlone L, Koltun V. Self-supervised geometric perception. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021 (pp. 14350-14361).
[Ref2] Horn BK. Closed-form solution of absolute orientation using unit quaternions. JOSA A. 1987;4(4):629-642.

Since you know the ground-truth pose, could you add a pose-error term to the loss function and see how it affects performance?

Limitations not mentioned.
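To illustrate the ICP-like structure this review describes, here is a skeleton of such a progressive loop, with nearest-neighbor matching standing in for the learned correspondence step; the function names and the use of SciPy are assumptions for illustration:

```python
import numpy as np
from scipy.spatial import cKDTree

def solve_rigid(src, tgt):
    """Kabsch/SVD least-squares pose from paired points (see earlier sketch)."""
    src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - src_c).T @ (tgt - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, tgt_c - R @ src_c

def progressive_align(src, tgt, iters=3):
    """Match -> solve -> re-align, accumulating the pose across iterations."""
    R_acc, t_acc = np.eye(3), np.zeros(3)
    cur = src.copy()
    tree = cKDTree(tgt)
    for _ in range(iters):
        _, idx = tree.query(cur)        # stand-in for the learned matcher
        R, t = solve_rigid(cur, tgt[idx])
        cur = cur @ R.T + t             # apply the incremental pose
        R_acc, t_acc = R @ R_acc, R @ t_acc + t
    return R_acc, t_acc
```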
", "This paper proposes a position encoding method that uses optimal transport and applies it to normalize each point. The benefit is that the optimal-transport-based position encoding can mitigate feature ambiguity. An iterative optimization process is then introduced to establish the correspondences and the transformation matrix.

Strengths:
1. The idea is reasonable and the overall performance is good.
2. The presentation is easy to understand.

Weaknesses:
1. I am a bit worried about the robustness of the proposed algorithm since the inlier ratio drops significantly.

Why is the inlier ratio relatively low?

NO", "This paper proposes a progressive point cloud registration method that estimates an initial transformation between two point clouds before registration estimation. The authors claim that one inlier is enough to estimate a good-enough initial transformation between two point clouds, which boosts the downstream point cloud registration process, including positional encoding and progressive alignment. The main contributions of this paper are a) the lightweight one-inlier finding process, b) the joint optimization of this process with the other steps, and c) the progressive alignment approach, which reduces the dependency on initialization.

### Strengths:
#### S1. Core Idea.
The idea is novel and outperforms other methods on two public benchmarks.
#### S2. Impact on Applications.
The method is lightweight and can potentially be applied in many other pipelines / with other methods, which benefits the community.
#### S3. The paper is well written and easy to follow.

### Weaknesses:
#### W1. Limited Theoretical / Technical Novelty
Although the idea is novel, the theoretical and technical novelty is limited.
#### W2. Inaccurate Claims
The paper makes some claims, with cited papers, that are inaccurate or unrelated (details below).
#### W3. Missing Justification
Some claims are made without proper proof (see details below).
#### W4. Minor Typos / Writing
* line 134: \"C is ...\" -> \"where C is ...\"
* Eq 5: \"..., M,\" -> \"..., M.\"

#### Q1 (ad W2): Inaccurate or Unrelated Citations
line 32: \"... the straightforward position encoding is not a good idea [23].\"
[23] reports up to a 3.8% improvement on 4DLoMatch, 1.5% higher RR on 3DMatch, and 2.3% on 3DLoMatch.
This does not imply that straightforward position encoding is not a good idea.
To support this statement, one should show a comparison with/without direct position encoding, which is not provided in the paper.

line 51: \"This shows that only one inlier is enough to preserve the spatial consistency [2, 7] ...\"
I am not sure why those two papers are cited here. Do they mention anything related to this statement?

#### Q2 (ad W3): Claims Without Proof
line 286: \"This is because ... both consider the position information ...\"
To justify this statement, one should provide an ablation study where the only difference is the presence or absence of positional encoding. However, this is not provided in the evaluation section.

line 342: \"our method is scene-agnostic and maintains good registration accuracy in strongly differing scenarios.\"
Without a description of how the network was trained in those experiments, it is impossible to know whether this sentence is true.

#### Q3: Additional questions
- Regarding Figure 3, the inlier ratio for 3DLoMatch: why does the 0.0 bin increase after optimization?

The authors do not mention limitations in the main paper. No text in the main paper indicates a limitation section in the supplementary material.
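The second review above highlights the optimal-transport component. In registration pipelines this typically amounts to Sinkhorn normalization of a feature-similarity matrix into a soft assignment; the following is a generic sketch with assumed uniform marginals, and the paper's exact formulation (e.g., with slack rows) may differ:

```python
import numpy as np

def sinkhorn(scores, n_iters=30, eps=0.05):
    """Normalize a similarity matrix into a soft assignment (transport plan)."""
    K = np.exp((scores - scores.max()) / eps)      # stabilized kernel, (N, M)
    a = np.full(K.shape[0], 1.0 / K.shape[0])      # uniform row marginal
    b = np.full(K.shape[1], 1.0 / K.shape[1])      # uniform column marginal
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]             # rows/cols match marginals

rng = np.random.default_rng(0)
P = sinkhorn(rng.normal(size=(100, 120)))
soft_corr = P.argmax(axis=1)                       # hard matches if needed
```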
", "This paper studies the correspondence-based rigid point cloud registration problem and proposes a new position encoding method for point clouds to establish better correspondences. A virtual corresponding point pair, regarded as the one inlier, is first constructed from a set of real correspondences; the two point clouds are then normalized using the two points of this correspondence as references, and the position embedding is extracted from the normalized point clouds. Point features and position embeddings are added together to establish correspondences. Based on this way of establishing correspondences, an iterative strategy is adopted to align the two point clouds gradually. Experiments are conducted on 3DMatch, 3DLoMatch, and KITTI, and state-of-the-art registration recall is achieved on these datasets.

Strengths:
1. The proposed one-inlier-based position encoding method is novel. The method is straightforward and effective.
2. Experiments are conducted on three typical datasets with comparisons to the latest methods. High performance is achieved.

Weaknesses:
1. A good position encoding should help find more and better correspondences, and a better registration is only a by-product of better correspondences; however, the experimental results show that the IR and FMR of the proposed method are far lower than those of GeoTransformer. An analysis of this issue is needed.
2. The last two contributions are not significant. In particular, iterative optimization is widely used in recent learning-based registration methods, such as PRNet and RPM-Net.
3. The proposed position-encoding idea is not fully analyzed. See question No. 1.

Minor issues:
1. The part "one-inlier is enough" in the title is a little misleading.
2. In Fig. 2, it should be the matched correspondences, rather than {R, t}, that are transferred from the Inlier Learning module to the Progressive Alignment module.

Questions:
1. In Section 4.1, especially for the experiments on 3DLoMatch, the proposed method consistently has higher RR but lower FMR and IR. I agree that FMR and IR are not totally positively correlated with RR, but more analysis is needed to show why the proposed method attains higher RR with a far lower IR, and how this relates to the proposed position encoding strategy.
2. For the "Progressive alignment", how many iterations are performed?
3. For Fig. 3, the inlier ratio of every iteration should be given, instead of only the initial and the final optimized ones.
4. In Table 2, what constitutes the "Pose" time? Is it the runtime of RANSAC-50k? If so, why is the proposed method much faster than GeoTransformer despite a lower IR?
5. What is the number of sampled correspondences in the runtime experiment in Table 2?
6. The quality of the "one inlier" used for position encoding should be reported at each iteration.
7. RRE and RTE should be reported for the experiments on 3DMatch and 3DLoMatch, since the objective of this paper is registration.

The authors discuss the limitations of the proposed method in the supplementary material. I suggest addressing whether the proposed method is applicable to the registration of objects, such as the models in ModelNet40. " ]
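Regarding question 7 above, RRE and RTE are standard metrics with the following definitions; thresholds for counting a registration as successful vary by benchmark, and this sketch is not tied to the paper's code:

```python
import numpy as np

def rre_deg(R_est, R_gt):
    """Relative Rotation Error: geodesic angle between the two rotations."""
    cos = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def rte(t_est, t_gt):
    """Relative Translation Error: Euclidean distance between translations."""
    return float(np.linalg.norm(t_est - t_gt))

# Example: a pose off by 1 degree about z and 2 cm in x.
th = np.radians(1.0)
R_est = np.array([[np.cos(th), -np.sin(th), 0.0],
                  [np.sin(th),  np.cos(th), 0.0],
                  [0.0, 0.0, 1.0]])
print(rre_deg(R_est, np.eye(3)), rte(np.array([0.02, 0.0, 0.0]), np.zeros(3)))
```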
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4, 5 ]
[ "Rm3HrTQAIF", "PzdUg0pMMZ2", "XlIlkqws79s", "mvDDiJuqD4", "nips_2022_19MmorTQhho", "s2pcVnRoimF", "szX9PZcpCO", "8oEPpglFeng", "31SJOF-G23t", "nips_2022_19MmorTQhho", "nips_2022_19MmorTQhho", "nips_2022_19MmorTQhho", "nips_2022_19MmorTQhho" ]